2,551 Commits over 639 Days - 0.17cph!
Merge: from useplayerupdatejobs_purge
- Clean: Removal of UsePlayerUpdateJobs 0 and 1 code
- Optim: RelationshipManager now uses cached server occlusion results instead of running new ones
Tests: unit tests + ran around on craggy, used heli, zipline, swam
Clean: remove dead using statements
Tests: none, trivial change
Update(tests): adding TickInterpolatorCache tests
- added overloads to accept index directly instead of entire baseplayer
Tests: ran unit tests
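The index-based overloads mentioned above could look roughly like this. A hypothetical sketch: the real TickInterpolatorCache internals aren't shown in the log, so the field names, Register, and CacheIndex are all illustrative stand-ins.

```csharp
using System.Collections.Generic;

// Hypothetical sketch: an index-based overload pair so callers that already
// know the cache slot can skip passing the whole BasePlayer.
public class TickInterpolatorCache
{
    private readonly List<float> _lastTickTimes = new List<float>();

    public int Register() { _lastTickTimes.Add(0f); return _lastTickTimes.Count - 1; }

    // Player-based overload: resolves the index from the player...
    public float GetLastTickTime(BasePlayer player) => GetLastTickTime(player.CacheIndex);

    // ...index-based overload: takes the slot directly, skipping the lookup.
    public float GetLastTickTime(int cacheIndex) => _lastTickTimes[cacheIndex];

    public void SetLastTickTime(int cacheIndex, float time) => _lastTickTimes[cacheIndex] = time;
}

// Minimal stand-in for the real BasePlayer.
public class BasePlayer
{
    public int CacheIndex;
}
```

Hot paths that iterate the player cache by index can then avoid the per-call player-to-index resolution entirely.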
Clean: nuke TickInterpolator
We lose the consistency unit test, so I'll add a couple of basic ones in the next change
Tests: ran AH unit tests
Clean: minor variable replacements in TickInterpolatorCache
Tests: none
Clean: replace all usages of TickInterpolator with TickInterpolatorCache in AntiHack
Tests: ran AH unit tests
Clean: remove all uses of TickInterpolator in BasePlayer logic
Tests: compiles
Clean: update UsePlayerUpdateJobs servervar description with a new min level
- ran codegen
Tests: compiles
Clean: remove all ConVar.Server.UsePlayerUpdateJobs > 0 checks
Tests: compiles
Clean: remove TriggerParent.UsePlayerV2Shortcuts servervar
Tests: compiles
Clean: remove UsePlayerTasks alias, since it's now always true
Tests: compiles
Optim: RelationshipManager - replace active server occlusion query with a cached result fetch
Tests: compiles
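The cached-fetch optimization described in this entry can be sketched as below. This is a guess at the shape, not the real RelationshipManager code: the pair-keyed dictionary, StoreResult, and the visible-by-default fallback are all assumptions for illustration.

```csharp
using System.Collections.Generic;

// Hypothetical sketch: consumers read the line-of-sight result the batched
// occlusion pass already computed this frame, instead of issuing a fresh query.
public static class ServerOcclusion
{
    // (playerA, playerB) -> visible; filled once per frame by the batched pass.
    private static readonly Dictionary<(int, int), bool> _frameCache =
        new Dictionary<(int, int), bool>();

    public static void StoreResult(int a, int b, bool visible)
        => _frameCache[a < b ? (a, b) : (b, a)] = visible;

    // Cached fetch: a dictionary lookup instead of a raycast.
    // Pairs missing from this frame's batch are assumed visible here.
    public static bool GetCachedLineOfSight(int a, int b)
        => _frameCache.TryGetValue(a < b ? (a, b) : (b, a), out var v) ? v : true;

    public static void ClearFrame() => _frameCache.Clear();
}
```

The ordering of the key tuple makes (a, b) and (b, a) hit the same entry, so each pair is stored once per frame.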
Clean: simplify serial OcclusionLineOfSight to match batched version
- removed all extra replication code that we no longer need
Tests: ran server occlusion consistency tests
Update: expose all BasePlayer state caches
Needed for Occlusion cleanup rewrite
Tests: compiles
Update: replace OcclusionCanUseFrameCache with true and simplify
This is a bit more than just a clean, since there's an extra section of code that will run with it active. Since we didn't have any big bugs with Jobs 2 player replication, we're treating this cache as always valid.
Tests: compiles
Clean: remove BasePlayer.ServerUpdate and all of its sub-callgraph
This also removed most of occlusion v1, but there's a bit more to clean up there
Tests: compiles
Clean: rip out the serial player update (Jobs 0) flow
- moved idle kick logic into ServerUpdateParallel
Need to purge non-called methods next
Tests: compiles
Clean: remove server.EmergencyDisablePlayerJobs and relevant code
No more safety, where we're going we need bravery
Tests: compiles
Update: remove not-in-playercache checks around TickCache where appropriate
- also changed resizing to depend on PlayerCache capacity, not length; that was a bug that somehow never tripped
Tests: compiles
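The capacity-vs-length resizing bug above can be illustrated with a minimal sketch. All names here (PlayerCache, TickCache, EnsureSize) are hypothetical stand-ins for whatever the real code uses; the point is only the sizing rule.

```csharp
using System;

// Hypothetical stand-in: a cache that hands out indices up to Capacity,
// growing Capacity ahead of Length.
public class PlayerCache
{
    public int Capacity { get; private set; } = 8;
    public int Length { get; private set; }

    public int Add()
    {
        if (Length == Capacity) Capacity *= 2;
        return Length++;
    }
}

public class TickCache
{
    private float[] _ticks = new float[0];

    // The fix: size the side buffer against Capacity. Sizing against Length
    // would leave indices in the Length..Capacity range able to go out of
    // bounds once the player cache grows past the last resize.
    public void EnsureSize(PlayerCache cache)
    {
        if (_ticks.Length < cache.Capacity)
            Array.Resize(ref _ticks, cache.Capacity);
    }

    public void Set(int index, float value) => _ticks[index] = value;
    public float Get(int index) => _ticks[index];
}
```

Keying the resize on capacity means any index the cache can legally hand out is always backed by storage.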
Update: connected players always register with PlayerCache regardless of Jobs mode
- Cleaned up a couple checks that now become irrelevant
Tests: compiles
Clean: remove occlusion v1 path from jobs 2
Occlusion v1 path still exists for jobs 0 - I'll rip that out a bit later
Tests: compiles
Clean: remove Jobs 1 paths in BasePlayer.ServerUpdateParallel
Tests: compiles
Merge: from triggerparent_jobs_isinside
- Buildfix
Tests: built server locally
Clean: add a reference to unity's issue tracker
Tests: none, trivial change
Buildfix: use JobHandle.IsValid extension instead of comparing to default directly
- add the above extension
Tests: built server locally
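The IsValid extension from this buildfix might look like the sketch below. JobHandle here is a minimal stand-in struct, not Unity's real Unity.Jobs.JobHandle, and the Id field is an assumption; the pattern shown is just "wrap the default-comparison in an extension method".

```csharp
// Hypothetical stand-in for Unity's JobHandle struct.
public struct JobHandle
{
    public ulong Id; // illustrative placeholder for the handle's internal state
}

public static class JobHandleExtensions
{
    // Reads more clearly at call sites than comparing against default directly,
    // and keeps the "what counts as unscheduled" rule in one place.
    public static bool IsValid(this JobHandle handle)
        => !handle.Equals(default(JobHandle));
}
```

Call sites then become `if (handle.IsValid()) handle.Complete();` rather than repeating the default comparison everywhere.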
Update: turn GamePhysics.DefaultMaxResultsPerQuery into a server var
- breaking API changes for GamePhysics.CheckSpheres, CheckCapsules, EnvironmentManager.Get
- Ran codegen
Tests: none, trivial change
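A likely reason the change above is API-breaking, sketched below as an assumption: C# default parameter values must be compile-time constants, so a signature like `CheckSpheres(int maxResults = DefaultMaxResultsPerQuery)` stops compiling once the const becomes a mutable server var. The method bodies here are trivial placeholders, not the real GamePhysics code.

```csharp
// Hypothetical sketch of promoting a const to a runtime-tunable server var.
public static class GamePhysics
{
    // Was: public const int DefaultMaxResultsPerQuery = 256;
    // Now a mutable field so it can be driven by a server var without a rebuild.
    public static int DefaultMaxResultsPerQuery = 256;

    // The old signature can no longer default to the (now non-const) field...
    public static int CheckSpheres(int maxResults) => maxResults;

    // ...so callers that relied on the default go through an overload instead.
    public static int CheckSpheres() => CheckSpheres(DefaultMaxResultsPerQuery);
}
```

The overload pair keeps old call sites compiling while letting the default track the server var at runtime.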
Merge: from hascloseconnections_fix
- Bugfix for BaseNetworkable.HasCloseConnections and GetCloseConnections not seeing outside of the small layer
Tests: ran unit tests
Bugfix: ensure players are detected on the border of the network cell
- expanded unit tests to cover these cases
Tests: ran unit tests
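The border-of-cell bugfix above can be sketched as a neighbour scan. The cell size, CellOf, and CellsNear are all hypothetical; the real NetworkVisibilityGrid math isn't in the log. The idea is just that a position sitting exactly on a cell boundary must still be found by scanning adjacent cells.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: gather candidate cells as the 3x3 block around the
// query position's cell, so entities exactly on a cell border aren't missed.
public static class NetworkGrid
{
    public const float CellSize = 32f; // illustrative size, not the real one

    public static (int x, int y) CellOf(float x, float y)
        => ((int)MathF.Floor(x / CellSize), (int)MathF.Floor(y / CellSize));

    public static IEnumerable<(int x, int y)> CellsNear(float x, float y)
    {
        var (cx, cy) = CellOf(x, y);
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                yield return (cx + dx, cy + dy);
    }
}
```

A player at x = 32.0 lands in cell 1 but is zero distance from cell 0; without the neighbour scan, queries originating in cell 0 would miss them.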
Bugfix: fix HasConnectionsClose & GetCloseConnections missing players in medium and large ranges
- also add a unit test to validate Connections overload
Tests: ran unit tests
Bugfix(tests): adjust player spawn distance for TestNetworkRange.OutsideRange cases
They can land on a boundary, making it harder to reason about behavior across the small/medium/large layers
Tests: ran the tests, same results
Bugfix(tests): expand TestServer's network grid from 1k to 2k
Previously it was too easy to accidentally setup an out-of-bounds scenario, leading to confusing test failures
Tests: ran related tests, discovered a couple of tests are failing (unrelated to this change)
New(tests): tests for BaseNetworkable.HasCloseConnections and GetCloseConnections
- named cell sizes for various layers in NetworkVisibilityGrid
Tests: ran tests, failing where expected
Merge: from serverprofiler_recordscope_pause
- Update: ServerProfiler recorder scopes can be paused/resumed (but needs more work on export side, looks bad)
- Bugfixes for recorder scopes corrupting memory and breaking perfsnapshot
Tests: unit tests + recorded a multiframe coro with recorder scope + perfsnapshots
Update: add Pause/Resume to ScopeRecorder and SlowScopeRecorder
Tests: tried them in throw-away code, works but exported result looks wonky and needs more work
Update: ServerProfiler - add ability to pause & resume active recorders
- added a couple of unit tests
- release binaries built from 61cba2fc
Tests: ran unit tests
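The Pause/Resume behavior added in these entries, plus the double-dispose guard mentioned later in the log, could look roughly like this. Everything here is a sketch: the real ScopeRecorder writes to native buffers via ServerProfiler.Core, while this stand-in just collects strings to show the control flow.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of a pausable scope recorder: while paused, incoming
// samples are dropped rather than written; Resume re-arms recording.
public sealed class ScopeRecorder : IDisposable
{
    private readonly List<string> _samples = new List<string>();
    private bool _paused;
    private bool _disposed;

    public void Record(string marker)
    {
        if (_disposed || _paused) return;
        _samples.Add(marker);
    }

    public void Pause() => _paused = true;
    public void Resume() => _paused = false;

    public IReadOnlyList<string> Samples => _samples;

    // Guarded so a double-dispose is harmless (mirrors the handle-cleanup
    // note later in the log).
    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        _samples.Clear();
    }
}
```

Pausing lets a long-lived recorder skip uninteresting frames (e.g. while a coroutine is yielded) without tearing the scope down and back up.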
Bugfix: Simplify RecordContext memory management in ServerProfiler.Core
- release binaries built from 39f0cfa4
Worst bug was 2 threads sharing one recording buffer
Tests: recorded a bunch of test coroutines (2 frames worth each) via RecordScope, then ran a perfsnapshot for 10 frames - no issues.
Update: ServerProfiler.Init now resets managed internal state
- also added handle cleanup to ScopeRecorder and SlowScopeRecorder dispose, in case double-dispose gets called
This reduces test boilerplate slightly (as scope recorders have a timeout)
Tests: ran unit tests - they pass. Still chasing native storage corruption
Bugfix: when exporting a snapshot from recorder scopes, skip the fast-forward-to-callstack-depth-0 logic
Tests: recorded coroutine, was able to see all 3 log calls (before yield, after 1st yield and before yield break). But something's corrupting memory for subsequent perfsnapshots
Merge: from serverprofiler_recordscope_pause
Need it for experiments with profiling coroutines
Bugfix: prevent taking perfsnapshots while a recorder scope is active, and vice versa
- also fix recorder triggering NREs because it tries to run perfsnapshot code
Tests: ran profiler scope spanning multiple frames. Checked output, it's not making sense, investigating deeper
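The mutual exclusion from this bugfix can be sketched as a single atomic mode flag. This is an assumed shape, not the real ServerProfiler API: the mode constants and TryBegin names are illustrative.

```csharp
using System.Threading;

// Hypothetical sketch: one atomic mode word, so a perfsnapshot can't start
// while a recorder scope is active and vice versa.
public static class ServerProfiler
{
    private const int Idle = 0, Snapshot = 1, RecorderScope = 2;
    private static int _mode = Idle;

    public static bool TryBeginSnapshot()
        => Interlocked.CompareExchange(ref _mode, Snapshot, Idle) == Idle;

    public static bool TryBeginRecorderScope()
        => Interlocked.CompareExchange(ref _mode, RecorderScope, Idle) == Idle;

    public static void End() => Interlocked.Exchange(ref _mode, Idle);
}
```

Compare-exchange from Idle makes the two entry points mutually exclusive without a lock, which matters since both can be triggered from console commands at any time.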
Bugfix: don't overwrite callstack depths for frame 0 with values for other frames when multiple frames are found
Tests: ran Export2FramesTorn - it passes and looks correct-ish (same wrong offset as in Export2Frames)
Bugfix(tests): fix invalid test logic in ExportExtraEnd2Frames
- renamed ExportExtraEnd2Frames to Export2FramesTorn
- added Export2Frames (shows invalid frame start - this is new)
Tests: ran unit tests, failures where expected