branchrust_reboot/main/requesttrees_spikecancel

26 Commits over 30 Days - 0.04cph!

56 Days Ago
Merge: from main Tests: none (trivial merge)
2 Months Ago
Merge: from main Tests: none
2 Months Ago
Optim: Avoid expensive reserializations when trees de-/spawn
- Controlled by TreeManager.UseLazySerialization, enabled by default
- Tracked by LazyUpdate scopes
- Simplifies some profiling scope names
- Added OnTreeDestroyed profiling scope
This should save us 0.25ms per cell reserialization on larger worlds.
Tests: local multiplayer session, connected, destroyed a tree, reconnected. Confirmed the tree impostor wasn't there and profiling scopes showed lazy serialization. Disabled lazy serialization and reconnected - still good and no spikes in the profiler when chopping down trees.
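A minimal sketch of the lazy approach described above, assuming a dirty flag per grid cell; TreeCell, Reserialize and the payload format are illustrative stand-ins, not the actual TreeManager internals:

    // Hypothetical sketch: dirty-flag lazy serialization for one tree grid cell.
    using System.Collections.Generic;
    using System.IO;

    public class TreeCell
    {
        public static bool UseLazySerialization = true; // stand-in for the TreeManager.UseLazySerialization convar

        private readonly List<uint> treeIds = new List<uint>();
        private byte[] cachedPayload = new byte[0];
        private bool dirty = true;

        public void OnTreeSpawned(uint id)
        {
            treeIds.Add(id);
            if (UseLazySerialization) dirty = true; // defer the expensive work
            else Reserialize();                     // old behaviour: pay the cost immediately
        }

        public void OnTreeDestroyed(uint id)
        {
            treeIds.Remove(id);
            if (UseLazySerialization) dirty = true;
            else Reserialize();
        }

        // Called when a player actually needs this cell's data.
        public byte[] GetPayload()
        {
            if (dirty) Reserialize();
            return cachedPayload;
        }

        private void Reserialize()
        {
            using (var ms = new MemoryStream())
            using (var writer = new BinaryWriter(ms))
            {
                writer.Write(treeIds.Count);
                foreach (var id in treeIds) writer.Write(id);
                cachedPayload = ms.ToArray();
            }
            dirty = false;
        }
    }

With the convar off, the cost is paid at spawn/destroy time as before; with it on, a cell is only reserialized when somebody actually reads its payload.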
2 Months Ago
Optim: Preserialize the tree grid before sending to the player
- Added a ClientRPC overload that accepts a MemoryStream to support the above
- Early out of the tree streaming logic if no players are in the streaming queue
A local test on a 4.5k Proc world showed that it took ~1 minute to stream the entire world to 1 player instead of the previous ~3.5 minutes.
Tests: minimal, booted a procgen map in a CLIENT+SERVER local session, waited until everything streamed in
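A hedged sketch of the idea: build each cell's payload once into a MemoryStream and reuse it for every queued player, with an early out when nobody is waiting. TreeGridStreamer, SendCellRPC and Connection are illustrative names; the real code sends through the ClientRPC(MemoryStream) overload mentioned above.

    // Illustrative sketch, not the actual TreeManager code.
    using System.Collections.Generic;
    using System.IO;

    public class TreeGridStreamer
    {
        private readonly Queue<Connection> streamingQueue = new Queue<Connection>();
        private readonly Dictionary<int, MemoryStream> preserializedCells = new Dictionary<int, MemoryStream>();

        public void ServerUpdate()
        {
            if (streamingQueue.Count == 0)
                return; // early out: nobody is waiting, skip all streaming work this frame

            var connection = streamingQueue.Peek();
            foreach (var kvp in preserializedCells)
            {
                // The same preserialized buffer is reused for every player instead of
                // rebuilding the packet per player per cell.
                SendCellRPC(connection, kvp.Value);
            }
            streamingQueue.Dequeue();
        }

        private void SendCellRPC(Connection connection, MemoryStream payload)
        {
            // Stand-in for the ClientRPC overload that accepts a MemoryStream.
        }
    }

    public class Connection { }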
2 Months Ago
Merge: from main Tests: none
2 Months Ago
Clean: removing leftover log Tests: none, trivial change
2 Months Ago
Merge: from main Tests: none (no conflicts)
2 Months Ago
Bugfix: Don't try to send tree batches to disconnected players
- Also replace a broken link in a comment
Tests: In Server+Client mode disconnected the client - no NRE
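A small illustrative guard along these lines; the Connection type with an active flag and SendNextBatch are stand-ins, not the actual networking types.

    // Hypothetical sketch: drop players whose connection went away before sending a batch.
    using System.Collections.Generic;

    public static class TreeBatchSender
    {
        public static void SendPending(Queue<Connection> streamingQueue)
        {
            while (streamingQueue.Count > 0)
            {
                var connection = streamingQueue.Peek();
                if (connection == null || !connection.active)
                {
                    // Player disconnected mid-stream: remove them instead of
                    // dereferencing a dead connection (the old NRE).
                    streamingQueue.Dequeue();
                    continue;
                }
                SendNextBatch(connection);
                return;
            }
        }

        private static void SendNextBatch(Connection connection) { /* gather + send one tree batch */ }
    }

    public class Connection { public bool active; }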
3 Months Ago
Update: Log TreeManager's streaming grid dimension on init
- Available as part of Network level 1 logging
Tests: booted in editor with server-only mode
3 Months Ago
Merge: from main Tests: editor boot
3 Months Ago
Update: Use RustLog instead of debug log in TreeManager
- Also fixed a minor bug that would not display full timing accuracy for the old method
- Moved logs to level 1 of Network (was 2)
Tests: enabled network logs and tried with tree streaming enabled/disabled
3 Months Ago
Update: adding doc string to Pool.FreeUnmanaged(ref Stopwatch) Tests: none, trivial change
3 Months Ago
Optim: avoid Stopwatch allocations via pooling Tests: on Craggy flew out until only impostors visible
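Sketch of the pooling pattern, with a stand-in Pool so the example compiles on its own; the real code uses Facepunch.Pool and the FreeUnmanaged(ref Stopwatch) overload documented in the entry above.

    using System;
    using System.Diagnostics;

    public static class Pool
    {
        private static readonly System.Collections.Generic.Stack<Stopwatch> watches =
            new System.Collections.Generic.Stack<Stopwatch>();

        public static Stopwatch Get() => watches.Count > 0 ? watches.Pop() : new Stopwatch();

        public static void FreeUnmanaged(ref Stopwatch watch)
        {
            watch.Reset();
            watches.Push(watch);
            watch = null; // caller's reference is cleared once the instance is back in the pool
        }
    }

    public static class TreeStreamingTiming
    {
        public static long MeasureTicks(Action sendAction)
        {
            var watch = Pool.Get();        // no per-call "new Stopwatch()" allocation
            watch.Start();
            sendAction();
            watch.Stop();
            long ticks = watch.ElapsedTicks;
            Pool.FreeUnmanaged(ref watch); // hand the instance back for the next send
            return ticks;
        }
    }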
3 Months Ago
Clean: updated a comment implying a potential bug - it was wrong Tests: none, trivial change
3 Months Ago
Update: TreeManager's grid is defined by cell size
- Exposed via TreeManager.CellSize convar (takes effect at boot only)
- Reorganized code a smidge to reduce how scattered a bunch of info was
Tests: On craggy connected with 2nd player and flew out until only impostors were visible.
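A hypothetical sketch of a cell-size-defined grid, assuming a square world centred on the origin; the names and index math are illustrative, not TreeManager's actual layout.

    using UnityEngine;

    public class TreeGrid
    {
        public static float CellSize = 100f; // stand-in for the TreeManager.CellSize convar (applied at boot)

        private readonly int cellsPerAxis;
        private readonly float worldSize;

        public TreeGrid(float worldSize)
        {
            this.worldSize = worldSize;
            cellsPerAxis = Mathf.CeilToInt(worldSize / CellSize); // e.g. 4500 / 100 => a 45x45 grid
        }

        // Map a world-space position to a flat cell index.
        public int CellIndex(Vector3 position)
        {
            int x = Mathf.Clamp((int)((position.x + worldSize * 0.5f) / CellSize), 0, cellsPerAxis - 1);
            int z = Mathf.Clamp((int)((position.z + worldSize * 0.5f) / CellSize), 0, cellsPerAxis - 1);
            return z * cellsPerAxis + x;
        }
    }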
3 Months Ago
Update: Whole tree streaming logic is time budgeted
- Right now it has a budget of 1ms, controlled by the UpdateBudgetMS convar
- Streaming preference is given to players who have more cells left to stream
This change protects us from having too many players in the streaming queue eating up all the frame budget.
Tests: On Craggy confirmed that streaming still completes
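A hedged sketch of the frame budget: stream cells each server frame until UpdateBudgetMS is spent, serving players with the most cells remaining first. Types and method names are illustrative, not the shipped implementation.

    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;

    public class TreeStreamer
    {
        public static float UpdateBudgetMS = 1.0f; // stand-in for the UpdateBudgetMS convar

        private readonly List<PlayerStreamState> queue = new List<PlayerStreamState>();

        public void ServerUpdate()
        {
            if (queue.Count == 0) return;

            var watch = Stopwatch.StartNew();

            // Preference goes to players with more cells left, so nobody sits at the back forever.
            foreach (var player in queue.OrderByDescending(p => p.CellsRemaining.Count).ToList())
            {
                while (player.CellsRemaining.Count > 0)
                {
                    if (watch.Elapsed.TotalMilliseconds >= UpdateBudgetMS)
                        return; // budget spent: resume next frame where we left off

                    SendCell(player, player.CellsRemaining.Dequeue());
                }
                queue.Remove(player); // fully streamed
            }
        }

        private void SendCell(PlayerStreamState player, int cellIndex) { /* gather + RPC the cell payload */ }
    }

    public class PlayerStreamState
    {
        public Queue<int> CellsRemaining = new Queue<int>();
    }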
3 Months Ago
Update: consolidate tree manager cell streaming
- Hidden behind the Network level 2 logging
Right now we don't fit the 10 micros budget (~60 micros), so we end up sending 1 cell per player - going to add a frame budget with player sorting to avoid hogging the server.
Tests: On a 4.5k procgen map tracked the first and second player connecting, was able to observe the cost of JIT
3 Months Ago
Clean: Removing a couple TODOs
- Not doing vehicle-specific streaming logic as the 4.5k world grid gets streamed in less than a second
- Removing an old TODO since my change implements it
Tests: none, trivial change
3 Months Ago
Update: Budget every tree-cell send
On a Procgen 4.5k map from a save, sending one cell can take 0.25ms - this change should smooth out the cost further.
Tests: Booted on Craggy, confirmed that the player received all cells.
3 Months Ago
Bugfix: Don't send newly spawned trees to a player if they land in a batch yet to be sent to them
- Added a profiling scope so we can track if it's taking too much time (it's a T x P complexity algorithm, but P tends to 0 very quickly, so we should be able to afford this)
Tests: Tested on a 4.5k procgen world, connected from a separate client and chopped a bunch of trees - saw no duplicates. That said, it's very difficult to proc this (<1s window).
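An illustrative version of the guard: when a tree spawns, skip players who still have that tree's cell queued for batch streaming, since the batch will already carry it. pendingCellsPerPlayer, SendSingleTree and Connection are assumed names.

    using System.Collections.Generic;

    public class TreeSpawnBroadcaster
    {
        // Per player: the set of grid cells that are queued but not yet sent to them.
        private readonly Dictionary<Connection, HashSet<int>> pendingCellsPerPlayer =
            new Dictionary<Connection, HashSet<int>>();

        public void OnTreeSpawned(uint treeId, int cellIndex, List<Connection> subscribers)
        {
            // T spawns x P mid-stream players, but P shrinks to 0 very quickly after connect.
            foreach (var connection in subscribers)
            {
                if (pendingCellsPerPlayer.TryGetValue(connection, out var pending) && pending.Contains(cellIndex))
                    continue; // the unsent batch will carry this tree; sending now would duplicate it

                SendSingleTree(connection, treeId);
            }
        }

        private void SendSingleTree(Connection connection, uint treeId) { /* per-tree spawn RPC */ }
    }

    public class Connection { }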
3 Months Ago
Clean: forgot to save an extra comment Tests: none, trivial change
3 Months Ago
Update: replacing cell budget with a time-per-player budget
- Setting this budget to 10 microseconds initially
This budget doesn't evaluate every cell, but rather groupings of cells. I'll re-evaluate this once I get to testing big procedural worlds.
Tests: Confirmed the entire Craggy grid gets streamed to the player
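A hedged sketch of the per-player budget: 10 microseconds converted to Stopwatch ticks and checked once per grouping of cells rather than per cell. Names are illustrative.

    using System.Collections.Generic;
    using System.Diagnostics;

    public class PerPlayerBudget
    {
        public static double BudgetMicroseconds = 10.0;

        // Stopwatch.Frequency is ticks per second, so ticks per microsecond = Frequency / 1,000,000.
        private static long BudgetTicks => (long)(Stopwatch.Frequency * BudgetMicroseconds / 1_000_000.0);

        public void StreamForPlayer(Queue<int[]> cellGroups, System.Action<int> sendCell)
        {
            var watch = Stopwatch.StartNew();
            while (cellGroups.Count > 0)
            {
                foreach (var cell in cellGroups.Dequeue())
                    sendCell(cell);                    // send the whole grouping...

                if (watch.ElapsedTicks >= BudgetTicks) // ...then check the budget once per group
                    return;                            // over budget: this player continues next frame
            }
        }
    }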
3 Months Ago
Update: added a runtime switch to disable tree batch streaming
- Enabled by default
Tests: booted with it turned both on and off
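A minimal sketch of what the switch amounts to (the convar and method names are assumptions): when batch streaming is disabled, fall back to the old send-everything-on-connect path.

    public static class TreeStreamingConfig
    {
        public static bool BatchStreamingEnabled = true; // stand-in convar, on by default

        public static void OnPlayerConnected(System.Action sendEverythingNow, System.Action enqueueForStreaming)
        {
            if (BatchStreamingEnabled) enqueueForStreaming(); // spread the send over frames
            else sendEverythingNow();                         // legacy single-shot send
        }
    }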
3 Months Ago
Merge: from main Tests: none
3 Months Ago
Merge: from main Tests: none
3 Months Ago
Optim: Send trees in batches over multiple frames to avoid a spike on player connection
This is experimental, but it seems stable initially. On a 4.5k procgen world from a save it took 90ms (very rough numbers, including client-side spawning overhead) in total to send all trees, but each frame 1 player took on average 0.2ms to gather & send data. Still need to fix a bunch of bugs.
Tests: Ran Craggy in a local editor session - no exceptions.
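A hedged sketch of the batching idea, assuming a count-based batch size (the later commits above replace this with time budgets); the names and the 512 figure are illustrative.

    using System.Collections.Generic;

    public class TreeBatchQueue
    {
        private const int TreesPerBatch = 512; // illustrative batch size

        private readonly Queue<uint> remainingTrees = new Queue<uint>();

        public void EnqueueAllTrees(IEnumerable<uint> treeIds)
        {
            foreach (var id in treeIds) remainingTrees.Enqueue(id);
        }

        // Called once per server frame for this player until the queue is empty.
        public bool SendNextBatch(System.Action<List<uint>> sendBatch)
        {
            if (remainingTrees.Count == 0) return false; // done streaming

            var batch = new List<uint>(TreesPerBatch);
            while (batch.Count < TreesPerBatch && remainingTrees.Count > 0)
                batch.Add(remainingTrees.Dequeue());

            sendBatch(batch); // one chunk per frame instead of the whole forest in the connect snapshot
            return true;
        }
    }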