Branch: rust_reboot/main/parallel_validatemovecancel
17 Commits over 0 Days - ∞cph!
Bugfix: ServerDemoPlayer - handle player reconnecting multiple times
Not 100% sure it's the correct way, but I think it works for now.
Tests: played back new staging demo 3 times
Tests: replace Assert.AreEqual with Assert.IsTrue
- Brings TickInterpolatorCache test from 30s+ down to 6s
Turns out AreEqual is slow and inflates the test times by quite a bit.
Tests: ran unit test
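The AreEqual cost is plausible: a general-purpose equality assert typically routes both operands through a comparison/formatting layer on every call, while IsTrue receives an already-computed bool. A hedged Python analogue of that difference (assert_are_equal and assert_is_true are hypothetical stand-ins, not NUnit internals):

```python
import timeit

def assert_are_equal(expected, actual):
    # Stand-in for a general-purpose equality assert: it pays for message
    # formatting and an equality-dispatch layer on every call, pass or fail.
    message = "Expected: %r But was: %r" % (expected, actual)
    if not (expected == actual):
        raise AssertionError(message)

def assert_is_true(condition):
    # The caller computes the comparison inline; only a bool is checked here.
    if not condition:
        raise AssertionError("Expected: True")

# Timings vary by machine; the boolean form skips the per-call formatting work.
slow = timeit.timeit(lambda: assert_are_equal((1, 2), (1, 2)), number=100_000)
fast = timeit.timeit(lambda: assert_is_true((1, 2) == (1, 2)), number=100_000)
```

In a hot test loop the per-call overhead multiplies, which matches the 30s-to-6s drop described above.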
Optim: TickInterpolatorCache reduce number of segments being copied when growing
Unit test is still slow, need to dig a bit more
Tests: ran unit tests
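One way to cut copying on growth (a sketch of the general technique, not the actual TickInterpolatorCache layout) is a segmented backing store: growing appends a fixed-size segment, so only the small segment table changes and existing element data is never moved. SEGMENT_SIZE and the class name are illustrative assumptions:

```python
SEGMENT_SIZE = 256  # assumed; the real cache's segment size isn't given

class SegmentedArray:
    """Grows by appending fixed-size segments: existing element data is
    never copied, only the (small) segment table is extended."""
    def __init__(self):
        self._segments = []
        self._count = 0

    def append(self, value):
        seg, off = divmod(self._count, SEGMENT_SIZE)
        if seg == len(self._segments):
            # Grow: allocate one new segment, copy no existing elements.
            self._segments.append([None] * SEGMENT_SIZE)
        self._segments[seg][off] = value
        self._count += 1

    def __getitem__(self, i):
        if not 0 <= i < self._count:
            raise IndexError(i)
        seg, off = divmod(i, SEGMENT_SIZE)
        return self._segments[seg][off]

    def __len__(self):
        return self._count
```

Compared with a flat array that reallocates and copies everything on resize, the cost of growth here is independent of how many elements are already stored.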
Clean: fixing whitespace issues after auto-merge
Tests: none, trivial changes
Merge: from main
Tests: none, no conflicts
Debug: sprinkling additional validation checks in ServerUpdateParallel
Trying to narrow down the point at which this null sneaks in
Tests: none, trivial change
New: TickInterpolatorCache - a sparse, bulk TickInterpolator array
- Comes with its own stress tests (they pass, but need to investigate perf)
- Depends on PlayerCache, but I need to modify it to provide more stability
Building block towards jobifying tick history processing.
Tests: ran unit tests
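A common shape for a "sparse, bulk" cache like this is a sparse set: a dense array that bulk jobs can iterate linearly, plus an id-to-index map for O(1) lookup and swap-removal. This Python sketch is an assumption about the general technique, not the actual TickInterpolatorCache or PlayerCache code:

```python
class SparseBulkCache:
    """Sparse-set style cache: a contiguous dense array for bulk
    (job-friendly) iteration, plus an id -> dense-index map."""
    def __init__(self):
        self.dense = []      # contiguous payloads, iterated by bulk jobs
        self.ids = []        # ids parallel to dense
        self.index_of = {}   # id -> position in dense

    def set(self, id_, payload):
        if id_ in self.index_of:
            self.dense[self.index_of[id_]] = payload
        else:
            self.index_of[id_] = len(self.dense)
            self.ids.append(id_)
            self.dense.append(payload)

    def remove(self, id_):
        i = self.index_of.pop(id_)
        last = len(self.dense) - 1
        if i != last:
            # Swap-remove keeps the dense array contiguous for bulk jobs.
            self.dense[i] = self.dense[last]
            self.ids[i] = self.ids[last]
            self.index_of[self.ids[i]] = i
        self.dense.pop()
        self.ids.pop()
```

The dense array never has holes, which is what makes it suitable as a building block for jobified bulk processing.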
Optim: replace couple managed loops with a burst job
Tests: none, trivial changes
Buildfix: move ValidateTransformCache to SERVER region
- also fixed missing Profiler.EndSample()
Tests: built client and server in editor
Merge: from main
Tests: none, trivial merge
Clean: promote server var to a const
- no codegen since I didn't do one when I added this
It's temp code, but it makes things safer while I investigate, so no reason to disable it at runtime
Tests: none, trivial change
Bugfix: purge player cache when player update jobs hit an emergency shutdown
Allows restarting player update jobs cleanly in the same session
Tests: in editor with debugger forced an emergency scenario, confirmed the cache was empty and rebuilt
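The purge-on-shutdown flow can be sketched as follows; PlayerJobRunner, its cache dict, and the tick loop are hypothetical stand-ins for the real player update jobs:

```python
class PlayerJobRunner:
    """On a job failure, purge the player cache and flag a shutdown so the
    parallel path can be restarted cleanly in the same session."""
    def __init__(self, run_jobs):
        self.cache = {}          # stand-in for PlayerCache
        self.run_jobs = run_jobs
        self.shutdown = False

    def tick(self, players):
        if self.shutdown:
            # Restarting after an emergency: cache was purged, rebuild below.
            self.shutdown = False
        for p in players:
            self.cache.setdefault(p, {"transform": None})
        try:
            self.run_jobs(self.cache)
        except Exception:
            self.cache.clear()   # purge stale state so a restart is clean
            self.shutdown = True
```

Without the purge, a restart would resume against a cache that may no longer match the live player set.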
Update: move player transform cache updates to be first step of parallel player processing
- Bugfix - the Steam networking backend can have temporary outages, which can cause gaps in processing and player cache desync
- also enables us to compose parallel flows better in the future
Tests: local editor session on craggy
Update: add temp emergency disable of player job processing
- Only active on staging servers while I investigate the crashes
Tests: forced an error via debugger - confirmed fallback is working
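The temporary disable plus fallback amounts to a kill switch around the parallel path. A minimal sketch, with all names (update_players, parallel_update, serial_update, state) being illustrative assumptions rather than the real API:

```python
def update_players(players, parallel_update, serial_update, state):
    """Try the parallel path; on any failure flip a temporary kill switch
    and serve this tick (and later ones) from the serial fallback."""
    if state.get("parallel_enabled", True):
        try:
            return parallel_update(players)
        except Exception:
            # Emergency disable: stays off until explicitly re-enabled.
            state["parallel_enabled"] = False
    return serial_update(players)
```

The important property is that the failing tick itself still produces a result via the fallback, so players see degraded performance rather than an error.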
Tests: move the PlayerCache stress test to its relevant tests file and clean up
- commented out the expected-to-fail case
Tests: ran the tests
Update: adding throwaway tests to investigate how I caused a native crash yesterday
So far everything points to BasePlayer being removed after we cache all transforms for the burst jobs and before the first RecacheTransform invoke, but I haven't tracked down where it's coming from.
Tests: ran the hacky unit test