1,688 Commits over 427 Days - 0.16cph!
Tests: ServerOcclusionTests.GeneratePairs - generate correct number of pairs
This further shrinks runtimes, as previously we generated waaaay too many
Tests: ran ServerOcclusionTests set
Tests: optim TestOcclusionLineOfSight_PerfSerial/-Parallel
By properly constructing and caching base players once, the tests now take less than a second (fixture sketch below).
Tests: ran test set
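A minimal sketch of the fixture-level caching described above, assuming NUnit-style tests; the types and helper here are stand-ins, not the real BasePlayer API:

```csharp
using NUnit.Framework;
using UnityEngine;

[TestFixture]
public class OcclusionPerfFixtureSketch
{
    // Stand-in for the expensive-to-construct player object used by the perf tests.
    private sealed class TestPlayer { public Vector3 EyePos; }

    private TestPlayer[] players;

    [OneTimeSetUp]
    public void BuildPlayersOnce()
    {
        // Constructing players is the slow part, so do it once per fixture
        // instead of once per test case.
        players = new TestPlayer[1024];
        for (int i = 0; i < players.Length; i++)
            players[i] = new TestPlayer { EyePos = new Vector3(i, 1.5f, 0f) };
    }
}
```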
Tests: add TestOcclusionLineOfSight_PerfSerial/-Parallel
Both run for too long - likely due to how I create players for the tests. Will fix next
Tests: ran the new set
Tests: TestOcclusionLineOfSight_Consistency - reset occlusion cache between serial and parallel runs
Tests: ran the test set
Tests: add ServerOcclusionTests.TestOcclusionLineOfSight_Consistency
Compares results between the serial and batched versions (test sketch below).
Tests: ran the new test set
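Roughly the shape of that consistency check, as a self-contained sketch; the LOS function and pair generation here are placeholders, not the real occlusion query:

```csharp
using System.Threading.Tasks;
using NUnit.Framework;

public class OcclusionConsistencySketch
{
    // Placeholder for the real line-of-sight check; deterministic so both paths agree.
    private static bool LineOfSight(int a, int b) => (a + b) % 3 != 0;

    [Test]
    public void SerialAndBatchedAgree()
    {
        var pairs = new (int a, int b)[1000];
        for (int i = 0; i < pairs.Length; i++)
            pairs[i] = (i, i + 1);

        var serial = new bool[pairs.Length];
        for (int i = 0; i < pairs.Length; i++)
            serial[i] = LineOfSight(pairs[i].a, pairs[i].b);

        // In the real test the occlusion cache is reset here, so the batched run
        // cannot piggyback on results produced by the serial run.

        var batched = new bool[pairs.Length];
        Parallel.For(0, pairs.Length, i => batched[i] = LineOfSight(pairs[i].a, pairs[i].b));

        CollectionAssert.AreEqual(serial, batched);
    }
}
```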
Update: prep BasePlayer.OcclusionLineOfSight (both serial and batched) for use in tests
Tests: none, trivial change
Tests: rewrite ServerOcclusionTests to make ServerOcclusion usage clear
Is it ugly on some lines? Yes. But is it explicit and beautiful? Also yes.
Tests: ran the unit tests
Clean: propagate network time from FinalizeTickParallel
Tests: none, trivial change
Optim: NetworkPositionTick - remove extra InvalidateNetworkCache
Cache has been previously invalidated in FinalizeTickParallel, so no need to discard it again
Tests: none, trivial change
Optim: OcclusionSendUpdates can now reuse occlusion results
- Got rid of old OcclusionLineOfSight that used to send updates internally, as there's no need for it now
Tests: 2p session on Craggy with UsePlayerUpdateJobs 2
Clean: remove couple TODOs
- one was just completed
- another was overzealous
Tests: none, trivial changes
Optim: SendNetworkPositions - reuse occlusion query results
Tests: 2p on Craggy with UsePlayerUpdateJobs 2
Update: move ServerUpdateOcclusionParallel inside FinalizeTickParallel
- FinalizeTickParallel invalidates the players' network cache - with UsePlayerUpdateJobs 2 we can skip it later, but with 1 it currently double-invalidates
This increases the coverage of OcclusionFrameCache, allowing a bunch of code to be simplified (ordering sketch below).
Tests: 2p on Craggy with UsePlayerUpdateJobs 0, 1, 2 and disconnects. 0p server with UsePlayerUpdateJobs 1, 2
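A rough sketch of the ordering this sets up (all names below are stand-ins for the real methods): invalidate the per-player network caches exactly once, then refresh occlusion so later callers in the same frame can hit the pair cache.

```csharp
public static class FinalizeTickOrderingSketch
{
    public static void FinalizeTickParallel()
    {
        InvalidateNetworkCaches();   // single invalidation per frame, no double-invalidate at UsePlayerUpdateJobs 1
        UpdateServerOcclusion();     // fills the per-frame occlusion pair cache
        // ...position ticks and entity updates later in the frame reuse that cache
    }

    private static void InvalidateNetworkCaches() { /* stand-in */ }
    private static void UpdateServerOcclusion()   { /* stand-in */ }
}
```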
Merge: from connectedplayer_rewrite
Got far enough along in this direction and things seem to work
Bugfix: ServerUpdatePlayerTick - restore Player.serverTickInterval functionality
Got lost during the rewrite
Tests: 2p on Craggy with UsePlayerUpdateJobs 2
Update: ServerOcclusion - add a global cache of all player pair results that lives for a frame
- Cache is valid after it's been updated, controlled via OcclusionCanUseFrameCache
Optimizes SendEntityUpdates and anything invoked at the end of the frame by skipping repeat LOS checks (cache sketch below). This doesn't benefit tick confirmation yet due to code ordering - will have to reorganize that
Tests: 2p on Craggy with UsePlayerUpdateJobs 2
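A minimal sketch of what such a frame cache can look like, with hypothetical names; the order-independent (min, max) keying is an assumption, not necessarily how the real OcclusionFrameCache keys its pairs:

```csharp
using System.Collections.Generic;

public static class OcclusionFrameCacheSketch
{
    // Keyed as (min id, max id) so A->B and B->A share one entry (assumption).
    private static readonly Dictionary<(int, int), bool> pairVisibility = new Dictionary<(int, int), bool>();

    public static bool CanUseFrameCache { get; private set; }

    public static void BeginFrame()
    {
        pairVisibility.Clear();
        CanUseFrameCache = false;   // stale until the occlusion update has run this frame
    }

    public static void Store(int a, int b, bool visible) => pairVisibility[Key(a, b)] = visible;

    public static void MarkUpdated() => CanUseFrameCache = true;

    public static bool TryGet(int a, int b, out bool visible) => pairVisibility.TryGetValue(Key(a, b), out visible);

    private static (int, int) Key(int a, int b) => a < b ? (a, b) : (b, a);
}
```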
Clean: use ReadOnlySpan in SendEntitySnapshots/-WithChildren
Tests: compiles in editor
Update: SendEntityUpdates - don't try to skip occlusion explicitly
Bit of a 180 turn. Current code is problematic to optimize at high level - but it will be easier if we introduce a global occlusion pair cache. At least I hope.
Tests: compiles in editor
Buildfix: TryRemove -> Remove
Tests: editor compiles
Update: bring back non-concurrent dict for BasePlayer.lastPlayerVisibility
Tests: none, trivial change
Update: OcclusionSendUpdates - rewrite lost pair handling to gather->send form
Should allow reverting to a non-concurrent dictionary for BasePlayer.lastPlayerVisibility (gather/send sketch below)
Tests: 2p session on craggy with UsePlayerUpdateJobs 0 and 2
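Sketch of the gather->send shape, with hypothetical names: worker threads only record which pairs lost visibility, and the main thread does the sending and is the sole writer of the visibility dictionary, which is why it no longer needs to be concurrent.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public static class LostPairGatherSketch
{
    public static void Process((int viewer, int target)[] pairs, bool[] stillVisible,
                               Dictionary<(int, int), bool> lastPlayerVisibility)
    {
        // Gather phase: one bucket per worker, no shared mutation of game state.
        var lost = new List<(int viewer, int target)>[System.Environment.ProcessorCount];
        Parallel.For(0, lost.Length, w =>
        {
            var bucket = new List<(int viewer, int target)>();
            for (int i = w; i < pairs.Length; i += lost.Length)
                if (!stillVisible[i])
                    bucket.Add(pairs[i]);
            lost[w] = bucket;
        });

        // Send phase: main thread only.
        foreach (var bucket in lost)
            foreach (var pair in bucket)
            {
                lastPlayerVisibility[pair] = false;   // safe: single-threaded here
                // the "lost visibility" network update would be sent here in the real code
            }
    }
}
```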
Clean: OcclusionSendUpdates - factor out SendEntitySnapshotsWithChildren
Decided against merging SendEntitySnapshotsWithChildren and SendEntitySnapshots, as -WithChildren relies on grouped sending - it would complicate the code unnecessarily.
Tests: 2p session on Craggy with UsePlayerUpdateJobs 2
Clean: SendEntityUpdates - refactor out SendEntitySnapshots
Prepping for server occlusion unification
Tests: none, simple change
Update: SendEntityUpdates - add parallel sending of snapshots
It's incomplete, but now the similarities with OcclusionSendUpdates are clearer and can be unified. Also revealed a bunch of concerns with ShouldNetworkTo and deferring to the parent's choice.
Tests: 2p on Craggy with UsePlayerUpdateJobs 2. Observed animal desync, but weirdly it was still there after setting UsePlayerUpdateJobs 0 and reconnecting - will investigate later.
Bugfix: OcclusionSendUpdates - fix up wrong last batch size calculation
- also removed dead variables
- renamed FoundMain/Worker to SendAsSnapshotMain/Worker
Tests: none, trivial changes
Update: SendEntityUpdates - plug in occlusion fast path
- use-after-free bugfix
Tests: 2p on Craggy with UsePlayerUpdateJobs 2
Update: shaping up SendEntityUpdates, not complete
I'll need to refactor out sending logic from parallel server occlusion, and reuse some previous results, but at this point the direction is clear
Tests: compiles in editor
Update: slightly more landscaping
Tests: none, trivial changes
Update: being brave and replacing an if-continue with an assert
Tests: none, read through code to confirm it should hold
Update: start on BasePlayer.ConnectedPlayersUpdate
- Inlined BasePlayer.ConnectedPlayerUpdate and cleaned up the styling
- Annotated potential loops to optimize/offload
- Removed dead IsReceivingSnapshot check
Tests: none, trivial changes
Bugfix: Parallel ServerOcclusion - get rid of extra ShouldNetworkTo checks
- Added a note explaining why we're not offloading network-cached children to worker threads
Technically the previous version added an extra check due to the APIs being used - this brings it back in line with the serial count.
Tests: 2p session on craggy with UsePlayerUpdateJobs 2
Clean: simplify SendAsSnapshotWithChildren
- remove includeChildrensChildren - it was always set to true
- remove the children null check - it's always created/set
Tests: compiles in editor
Bugfix: Parallel ServerOcclusion - serialization now happens only on main thread
- the main thread no longer picks up the lost-pair and already-serialized found-pair tasks - the serialization work it keeps is the longest anyway
Previously serialization was offloaded to worker threads, but after investigating we discovered it would kaboom due to accessing the scripting API. So instead, we offload to worker threads only the players + entities that we can guarantee won't trigger serialization (partition sketch below).
Tests: 2p session on craggy with UsePlayerUpdateJobs 0, 1, 2 - ran around, switched weapons, disconnected
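A sketch of that partitioning, with hypothetical names: only pre-serialized snapshots go to worker threads, and anything that may still need serialization (and hence the scripting API) stays on the main thread.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public static class OcclusionSendPartitionSketch
{
    public sealed class PendingSend { public bool AlreadySerialized; }

    public static void Send(List<PendingSend> found)
    {
        var workerSafe = new List<PendingSend>();
        var mainOnly   = new List<PendingSend>();

        foreach (var p in found)
            (p.AlreadySerialized ? workerSafe : mainOnly).Add(p);

        // Workers: push pre-serialized snapshot bytes only; no scripting-API access.
        Parallel.ForEach(workerSafe, p => { /* write cached bytes to the connection */ });

        // Main thread: serializing here may call into the scripting API, so it must
        // not run on a worker thread.
        foreach (var p in mainOnly) { /* serialize, then send */ }
    }
}
```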
Bugfix: Parallel Occlusion marks entity snapshots as out of order
Since we can send them from multiple threads, they are unordered - previously this would cause a client disconnect. Not anymore
Tests: none yet, need to try validating with server demos
Update: Escape hatch for Server sending entity messages out of order
- Client skips ordering validation based on message contents
Needed for parallel server occlusion; useful for other places too (escape-hatch sketch below)
Tests: local SERVER+CLIENT session, though no code uses this yet
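Roughly the shape of the escape hatch, with hypothetical field and method names: the server tags snapshots produced by the parallel send path as unordered, and the client only enforces its sequence check for untagged messages.

```csharp
public struct EntitySnapshotHeader
{
    public uint Sequence;
    public bool Unordered;   // set by the parallel occlusion send path (hypothetical flag)
}

public static class SnapshotOrderingSketch
{
    private static uint lastSequence;

    public static bool Accept(in EntitySnapshotHeader header)
    {
        if (!header.Unordered)
        {
            // Ordered messages keep the old behaviour: a stale sequence is rejected.
            if (header.Sequence < lastSequence)
                return false;
            lastSequence = header.Sequence;
        }
        return true;   // unordered snapshots skip the sequence validation entirely
    }
}
```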
Clean: removing dead param from SendAsSnapshot
I need the param for different logic, and it's been lost in the sauce since 2015.
Tests: editor compiles
Update: Test.ServerOcclusion - rewrite perf tests
- Also added a ParallelJob perf test covering the recent changes (job sketch below)
Shows the 1k-pairs case going from 2ms to 0.2ms with ParallelJob - hoping it carries over to the live environment.
Tests: ran the perf tests
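A sketch of a ParallelJob-style perf case using standard Unity Jobs/Collections APIs; the LOS math here is a placeholder, while the real test drives the actual occlusion query:

```csharp
using System.Diagnostics;
using NUnit.Framework;
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class OcclusionParallelJobPerfSketch
{
    private struct LosJob : IJobParallelFor
    {
        [ReadOnly] public NativeArray<Vector3> From;
        [ReadOnly] public NativeArray<Vector3> To;
        public NativeArray<byte> Visible;   // 1 = visible, 0 = occluded

        public void Execute(int index)
        {
            // Placeholder check; the real job performs the occlusion query.
            Visible[index] = (From[index] - To[index]).sqrMagnitude < 100f * 100f ? (byte)1 : (byte)0;
        }
    }

    [Test]
    public void ParallelJob_1kPairs()
    {
        const int count = 1000;
        var from = new NativeArray<Vector3>(count, Allocator.TempJob);
        var to = new NativeArray<Vector3>(count, Allocator.TempJob);
        var visible = new NativeArray<byte>(count, Allocator.TempJob);

        var sw = Stopwatch.StartNew();
        new LosJob { From = from, To = to, Visible = visible }
            .Schedule(count, 64)
            .Complete();
        sw.Stop();

        UnityEngine.Debug.Log($"1k pairs: {sw.Elapsed.TotalMilliseconds:0.00} ms");

        from.Dispose();
        to.Dispose();
        visible.Dispose();
    }
}
```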
Update: Test.ServerOcclusion - update stale cases with new ones
This was much more painful than expected. Need to update the perf test next
Tests: ran the unit tests
Bugfix: replacing old occlusion cache to unblock unit tests
All expect-true tests are currently failing - need to update the query locations
Tests: ran the Server Occlusion tests
Update: batched OcclusionLineOfSight now handles sending updates internally (like serial)
- driven by OcclusionLineOfSightNoBroadcast - this thing we can test
Don't like this encapsulation, but it should prevent issues like missing foundPairs.
Tests: none, trivial change
Bugfix: player replication no longer stutters with UsePlayerUpdateJobs 2
Was iterating over the wrong occlusion results - needs a refactor to avoid future confusion
Tests: 2p session on Craggy with UsePlayerUpdateJobs 2
Bugfix: rewrite batched OcclusionLineOfSight to support sleepers
Tests: 2p session on Craggy with UsePlayerUpdateJobs 2, disconnected multiple times - no more out-of-bounds exceptions
Update: BasePlayer.NetworkPositionTick now uses batched server occlusion
Tests: 2p session on Craggy with UsePlayerUpdateJobs 0 and 2. Saw a bug on disconnect, will fix next.
Update: move occlusion notification logic to its own utility func
Realized I'll need it for the task-enabled SendNetworkPositions
Tests: none, trivial change
Update: change OcclusionLineOfSight interface to use (int, int) pairs instead of deconstructed pairs
- Also fixed invalid allocator use for a native list - TempJob was wasteful
In the middle of writing SendNetworkPositions, I realized the previous interface could be error-prone (interface sketch below)
Tests: none, trivial change
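Sketch of the interface shape only (hypothetical names): passing each pair as a single value keeps viewer and target associated, instead of two deconstructed id lists that can drift out of sync at call sites.

```csharp
using System.Collections.Generic;

public interface IOcclusionQuerySketch
{
    // results[i] corresponds to pairs[i]; there are no separate viewer/target arrays to keep in sync.
    void LineOfSightBatch(IReadOnlyList<(int viewer, int target)> pairs, bool[] results);
}
```

On the allocator note above: the pair list only lives within a single frame's update, so a shorter-lived allocation or a reused buffer avoids the TempJob overhead; which allocator the actual fix switched to isn't stated here.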
Update: starting work on batched BasePlayer.NetworkPositionTick
Doesn't do any batching/tasks yet, just a preliminary cleanup of the non-player logic. The goal is to batch the underlying server occlusion
Tests: 2p session on craggy with UsePlayerUpdateJobs 2
Update: Refactor FinalizeTickParallel to isolate ApplyChanges logic
Should help with profiling, and sets up for batched BasePlayer.NetworkPositionTick
Tests: local 2-player session on Craggy with UsePlayerUpdateJobs 1 and 2
Bugfix: ensure we update player eyes before we kick off various jobs
Previously this would cause cached state to have stale eye positions.
Tests: none, trivial change
Update: refactor ServerUpdateParallel to only contain high-level calls
This should improve the profiling view by clearly delineating the logic
Tests: 2p session on craggy with UsePlayerUpdateJobs 2
Clean: mark with comments when each player cache is last updated
Tests: none, trivial change