2,433 Commits over 608 Days - 0.17cph!
Merge: from dynamic_object_work_queue_shrinking
- Optim: ObjectWorkQueues now shrink to larger capacities over time
Tests: procgen map, ensured outpost turrets still tracked and fired at me
Update: debug.printqueues now also prints queue capacity
Tests: printed queues while being blasted by turrets
Optim: ObjectWorkQueue now shrinks to dynamically adjusted minimal capacity instead of 0
- explicitly calling Clear resets the state back to the initial 256 capacity
Grows capacity by 10% every minute if 60 overcapacity events have been recorded. This should eliminate allocations for bursty workloads in the long term (see the sketch below).
Tests: visited outpost, got blasted by turrets
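Roughly the shape of the policy described above, as a hedged sketch; the class and member names here (ShrinkingWorkQueue, minCapacity, MaybeShrink) are illustrative, not the actual ObjectWorkQueue API:

    using System.Collections.Generic;

    // Illustrative sketch of a dynamically adjusted minimum capacity:
    // the queue never shrinks below a floor, and the floor grows by 10%
    // once a minute if 60 overcapacity events were recorded.
    public class ShrinkingWorkQueue<T>
    {
        const int InitialCapacity = 256;

        Queue<T> queue = new Queue<T>(InitialCapacity);
        int minCapacity = InitialCapacity;
        int overcapacityEvents;
        float lastAdjustTime;

        public void Enqueue(T item)
        {
            if (queue.Count >= minCapacity)
                overcapacityEvents++; // a burst exceeded the current floor
            queue.Enqueue(item);
        }

        // Called periodically (e.g. once a minute) once the queue has drained.
        public void MaybeShrink(float now)
        {
            if (now - lastAdjustTime < 60f)
                return;
            lastAdjustTime = now;

            if (overcapacityEvents >= 60)
                minCapacity += minCapacity / 10; // grow the floor by 10%
            overcapacityEvents = 0;

            // Shrink down to the adjusted floor instead of all the way to 0.
            if (queue.Count == 0)
                queue = new Queue<T>(minCapacity);
        }

        // Explicitly calling Clear resets back to the initial 256 capacity.
        public void Clear()
        {
            queue = new Queue<T>(InitialCapacity);
            minCapacity = InitialCapacity;
            overcapacityEvents = 0;
        }
    }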
Merge: from fix_invoke_flood
- Optim: prevent repeating invokes flooding work after stalls
Tests: observed animals moving normally
Optim: change scheduled time for repeating invokes
Previously, we would schedule repeating invokes from the current time, meaning over time they would drift towards one frame, causing work spikes. Now we maintain the interval with respect to the original time, which should preserve the original scattering (see the sketch below).
Tests: ran around with a lit torch on procgen, some wildlife moving
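A minimal sketch of the rescheduling change, assuming Unity's Time.time as the clock; nextInvokeTime and interval are placeholders for whatever the real invoke entry stores, and skipping missed ticks after a long stall is an assumption rather than something stated in the commit:

    using UnityEngine;

    static class InvokeRescheduleSketch
    {
        public static float Reschedule(float nextInvokeTime, float interval)
        {
            // Old: nextInvokeTime = Time.time + interval;
            // After a stall every repeating invoke got rescheduled from the
            // same frame, so over time they drifted into one frame and spiked
            // the work.

            // New: keep the interval relative to the originally scheduled time
            // so the original scattering across frames is preserved. If the
            // stall spanned several intervals, skip the missed ticks instead
            // of flooding catch-up work.
            do
            {
                nextInvokeTime += interval;
            }
            while (nextInvokeTime <= Time.time);

            return nextInvokeTime;
        }
    }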
Merge: from remove_clear_from_unsub
- Optim: speed up unsubbing from network groups by skipping clearing their entities from the queue
Tests: teleported around 2.5k procgen map
Optim: skip clearing entity queue when unsubbing from a network group
Was originally done to help with old spectating logic, but it's now handled separately.
Tests: teleported around on a 2.5k procgen map
Merge: from imrpoved_network_groups/serverocclusion_player_fastpath
- Optim: skip unnecessary global occlusion group lookups
Tests: unit tests
Optim: skip checking global occlusion groups for small and large network grid layers
Players can only be on the medium layer at the moment. Controlled by the UsePlayerOnlyOnMediumLayerShortcut compile switch (see the sketch below).
Tests: ran unit tests
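A sketch of what that shortcut could look like; the layer enum and lookup shape are illustrative, and only the UsePlayerOnlyOnMediumLayerShortcut name comes from the commit (treated here as a #define symbol):

    static class OcclusionLookupSketch
    {
        public enum NetworkGridLayer { Small, Medium, Large }

        public static bool NeedsGlobalOcclusionGroupLookup(NetworkGridLayer layer)
        {
    #if UsePlayerOnlyOnMediumLayerShortcut
            // Players can currently only live on the medium layer, so the
            // small and large layers never need the global occlusion group lookup.
            return layer == NetworkGridLayer.Medium;
    #else
            return true;
    #endif
        }
    }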
Update: Jobs 3 - enable parallel sub updates switch
Tests: none, related unit tests pass
Update: pregenerate radius tile offsets to avoid potential races
Tests: ran unit tests
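One plausible reading of that change, sketched with made-up names (RadiusTileOffsets, Pregenerate): compute the offset tables once on the main thread before the parallel sub updates run, instead of lazily from worker threads where two jobs could race to fill the same cache:

    using System.Collections.Generic;

    static class RadiusTileOffsets
    {
        static readonly Dictionary<int, (int dx, int dy)[]> cache =
            new Dictionary<int, (int dx, int dy)[]>();

        // Called once during init for every radius the grid will query,
        // so worker threads only ever read the cache afterwards.
        public static void Pregenerate(params int[] radii)
        {
            foreach (var radius in radii)
                cache[radius] = Build(radius);
        }

        public static (int dx, int dy)[] Get(int radius) => cache[radius];

        static (int dx, int dy)[] Build(int radius)
        {
            var offsets = new List<(int dx, int dy)>();
            for (int dy = -radius; dy <= radius; dy++)
                for (int dx = -radius; dx <= radius; dx++)
                    offsets.Add((dx, dy));
            return offsets.ToArray();
        }
    }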
Bugfix(tests): TestNew_LaggyTeleport - fix sleepers' incorrect position after recent refactor
Tests: ran unit tests, they pass
Bugfix(tests): fix incorrectly converted TestNew_MoveOther spawn pos
Tests: ran unit tests, now only 1 failure
Bugfix(tests): increase DummyServer's network grid from 512 to 1024
Some tests ended up spawning outside of the network grid, leading to test failures.
Tests: ran unit tests, from 30 failures down to 9 (2 tests)
Update(tests): bring back spawning on boundaries of network ranges
Current logic considers those cells to be in range, so rely on that fact. Doesn't fix tests, but validates edge cases
Tests: ran unit tests, same tests fail
Clean(tests): consolidate network-range-distance calculating logic
Got tired of fixing it up manually in every test
Tests: ran unit tests, some still fail (likely due to out of network grid bounds cases)
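The consolidated helper itself isn't shown in the log; below is a minimal sketch of what such a helper might look like, assuming a square grid with an inclusive boundary check (the 32-unit cell size is a placeholder, not the real grid size):

    using System;

    static class NetworkRangeTestUtil
    {
        public const float CellSize = 32f; // placeholder, not the real grid size

        static int CellDistance(float a, float b)
        {
            return Math.Abs((int)Math.Floor(a / CellSize) - (int)Math.Floor(b / CellSize));
        }

        // Cells exactly on the boundary count as in range (<=, not <),
        // matching the behaviour the boundary-spawning tests rely on.
        public static bool InNetworkRange(float ax, float az, float bx, float bz, int rangeInCells)
        {
            return Math.Max(CellDistance(ax, bx), CellDistance(az, bz)) <= rangeInCells;
        }
    }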
Merge: from spectate_stay_after_dc
- Update: spectating players now always stay on a disconnected player's sleeper instead of searching for a new target
Tests: spectated 2nd player, 2nd disconnected, spectator stayed
Update: when a player disconnects, don't try to find a new target for spectators, just let them hover in 3p
Tests: spectated a player that disconnected, stayed on its sleeper
Merge: from useplayerupdatejobs 3
- Optim: new UsePlayerUpdateJobs 3 mode that parallelizes more work and reduces task-related allocs
- New: our fork of UniTask
Tests: unit tests and simple testing on Craggy (booted with Jobs 3, teleported around)
Clean(tests): replace more handrolled index set generation with AntiHackTests.GenerateIndexPermutations
Tests: ran all affected tests
Bugfix(tests): use a 1e-5 epsilon for water level consistency in TestWaterLevelsConsistency
Tests: ran unit test
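Reading the original "1^-5" as 1e-5, the check presumably ends up shaped roughly like this NUnit-style assertion; GetWaterLevelA/B are hypothetical stand-ins for the two query paths the real test compares:

    using NUnit.Framework;
    using UnityEngine;

    public class WaterLevelEpsilonSketch
    {
        [Test]
        public void WaterLevelsAgreeWithinEpsilon()
        {
            const float epsilon = 1e-5f;
            float a = GetWaterLevelA(); // stand-in for one water level query path
            float b = GetWaterLevelB(); // stand-in for the other
            Assert.That(Mathf.Abs(a - b), Is.LessThan(epsilon));
        }

        // Hypothetical stubs for the two water level sources being compared.
        static float GetWaterLevelA() => 0f;
        static float GetWaterLevelB() => 0f;
    }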
Merge: from demo_3p_camera_fix
- Bugfix for 3p camera spazzing out in client demos
Tests: played back a demo and switched to 3p and back
Bugfix: ensure 3p camera in client demos uses selected player's eyes
Tests: play back demo, was able to go 3rd person on active player
Merge: from jobs2_demos_fix
- Bugfix for players not moving in client demos recorded on Jobs 2 servers
Tests: recorded on craggy with jobs 2, played back - all's gud
Bugfix: fix player not moving in demos recorded on Jobs 2 servers
The cached list of unoccluded players was missing the owner player (see the sketch below).
Tests: recorded a demo on craggy with jobs 2, played back - player and camera was moving
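A hedged sketch of the fix as described: when the per-player cache of unoccluded players is rebuilt, the owner has to be in it, otherwise their own movement never reaches the demo. BuildUnoccludedList is a made-up name; BasePlayer is the real type referenced elsewhere in this log.

    using System.Collections.Generic;

    static class DemoSnapshotSketch
    {
        public static List<BasePlayer> BuildUnoccludedList(BasePlayer owner, IEnumerable<BasePlayer> visible)
        {
            var list = new List<BasePlayer>(visible);

            // The bug: the cached list was missing the owner, so demos recorded
            // on Jobs 2 servers never contained the recording player's movement.
            if (!list.Contains(owner))
                list.Add(owner);

            return list;
        }
    }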
Merge: from useplayerupdatejobs3/free_tasks
- New: fork of Cysharp/UniTask with less allocs
- Optim: reduce allocs around tasks in Jobs 3 mode
Tests: jobs3 on craggy with 2 players, built client and booted
Optim: Jobs 3 - save allocs by using UniTask for sending entity snapshots asynchronously
Think that's all existing tasks converted
Tests: loaded on craggy and teleported to/from the island
Clean: get rid of handwritten UpdateSubs_AsyncState state machine
Tests: unit tests
Optim: Jobs 3 - rewrite EAC and analytics tasks into UniTasks to remove allocs
Discovered server profiler is megaborked, no idea what caused it. Will investigate after rewrite is done
Tests: craggy in editor with jobs 3
Optim: Jobs 3 - OcclusionSendUpdates now uses UniTasks
- added UseUniTasks feature flag controlled by UsePlayerUpdateJobs 3
Positive experiment, can get rid of the hand-rolled state machine and use async-await.
Tests: profiled 2 players in editor being destroyed by server occlusion - no allocs for the task
Update: server enables SetPoolRunnersActive to reduce allocs of SwitchToThreadPoolAwaitable during tier0 init
Tests: loaded on craggy (with and without jobs 3), teleported around
Optim: UpdateSubscriptions - replace Tasks with UniTasks
They're slightly slower on a stress test, but they allocate an order of magnitude less (and sometimes don't allocate at all) - 0.5MB vs 8KB over 10 runs (see the sketch below)
Tests: unit tests
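An illustrative before/after of the conversion, using the public UniTask API from Cysharp.Threading.Tasks; Player and DoSubscriptionWork are stand-ins for the real UpdateSubscriptions internals, not the actual code:

    using Cysharp.Threading.Tasks;

    public class SubscriptionUpdateSketch
    {
        public class Player { /* stand-in for the real player type */ }

        // Before (Task-based): one Task object plus a heap-allocated async
        // state machine per player.
        //
        //   var tasks = new Task[players.Length];
        //   for (int i = 0; i < players.Length; i++)
        //   {
        //       var player = players[i];
        //       tasks[i] = Task.Run(() => DoSubscriptionWork(player));
        //   }
        //   await Task.WhenAll(tasks);

        // After (UniTask-based): the async state machines are struct-based and
        // pooled; what's left is the closure captured by the lambda, which is
        // consistent with "sometimes don't allocate" above.
        public static async UniTask UpdateSubscriptionsAsync(Player[] players)
        {
            var tasks = new UniTask[players.Length];
            for (int i = 0; i < players.Length; i++)
            {
                var player = players[i];
                tasks[i] = UniTask.RunOnThreadPool(() => DoSubscriptionWork(player));
            }
            await UniTask.WhenAll(tasks);
        }

        static void DoSubscriptionWork(Player player) { /* per-player subscription update */ }
    }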
Update: plug in our fork of UniTask
- ecb0489
It has reduced allocations, but there are still some APIs that allocate
Tests: unit tests pass
Optim: FPTask.Run is now 0 allocs
Using reflection, we manually invoke the internals while sneaking in our cached callback
Tests: ran unit test
Update: add UniTask for evaluation
In the simple case it only has 1 alloc that's not pooled, same as FPTask - will try to solve it locally first
Tests: ran unit test
Update: basic FPTask + dumb single task scheduler
This gives us a baseline of 1 alloc/40 bytes per task, but with a bit of hacking I think I can bring it to 0
Tests: ran unit test
Update: more research - looks like we do need our own task type
Tests: unit tests
Update: initial investigation trying to reduce async-await gc overhead
The goal is to find a better alternative to the custom async states I've been handwriting, as they still require an alloc per task
Tests: ran unit test
Bugfix(tests): fix invalid position logic in ServerOcclusionGroup tests
Tests: unit tests pass
Merge: from defer_tick_analytics
- Optim: start analytics tasks earlier to avoid blocking main thread
Tests: Jobs3 on craggy in editor
Update: bring back pooling for ApplyChangesParallel_AsyncState
Theorising it'll be safer for hot-reload/manual domain reload flow
Tests: craggy with jobs 3
Update: skip creating analytics task if analytics is disabled
Tests: none, trivial change
Optim: Jobs 3 - kick off analytics tasks earlier in the frame to avoid potentially blocking main thread
It will overlap with server occlusion, which should give it extra time to finish.
Tests: none, trivial change
Update: replace couple managed accesses with BasePlayer.CachedState
Tests: none, trivial change
Optim: save on action alloc when setting up tasks for UpdateSubs_AsyncState
Tests: none, trivial change
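Presumably this is the usual delegate-caching trick; a sketch under that assumption, with placeholder names rather than the real UpdateSubs_AsyncState members:

    using System;

    class UpdateSubsSketch
    {
        // Allocated once and reused, instead of writing "() => DoWork()" at
        // every setup, which allocates a fresh Action (and possibly a closure)
        // each time a task is created.
        static readonly Action CachedDoWork = DoWork;

        public void SetUpTask()
        {
            // Before: StartTask(() => DoWork());
            // After: reuse the cached delegate - no per-task delegate alloc.
            StartTask(CachedDoWork);
        }

        static void DoWork() { /* per-player subscription work */ }
        static void StartTask(Action action) => action();
    }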