Daniel P

435 Commits over 123 Days - 0.15cph!

14 Days Ago
Merge: from main Tests: none
14 Days Ago
Merge: from requesttrees_spike This reduces overhead of streaming grid cells of tree impostors from 3.5m to 1m on a 4.5k server. Tests: 2 editors (1 server, 1 client) in same session - streamed no changes, removed 1 tree then re-streamed, and streamed with disabled lazy serialization
14 Days Ago
Merge: from main Tests: none
14 Days Ago
Optim: Avoid expensive reserializations when trees de-/spawn - Controlled by TreeManager.UseLazySerialization, enabled by default - Tracked by LazyUpdate scopes - Simplifies some profiling scope names - Added OnTreeDestroyed profiling scope This should save us 0.25ms per cell reserialization on larger worlds. Tests: local multiplayer session, connected, destroyed a tree, reconnected. Confirmed tree impostor wasn't there and profiling scopes showed lazy serialization. Disabled lazy serialization and reconnected - still good and no spikes in profiler on chopping down trees.
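A minimal sketch of the dirty-flag pattern this commit describes, assuming a per-cell cached snapshot that is only rebuilt on demand; TreeCell, GetSnapshot and SerializeTrees are illustrative names, not the actual codebase API:

```csharp
// Deferred (lazy) reserialization: instead of reserializing a cell every
// time a tree spawns/despawns, mark it dirty and serialize once, on demand.
public class TreeCell
{
    private byte[] cachedSnapshot;
    private bool dirty = true;

    public void OnTreeSpawned() => dirty = true;   // cheap: just set a flag
    public void OnTreeDestroyed() => dirty = true;

    public byte[] GetSnapshot()
    {
        if (dirty)
        {
            cachedSnapshot = SerializeTrees(); // pay the ~0.25ms cost once
            dirty = false;
        }
        return cachedSnapshot;
    }

    private byte[] SerializeTrees() { /* write tree data to a buffer */ return new byte[0]; }
}
```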
14 Days Ago
Optim: Preserialize the tree grid before sending to the player - Added ClientRPC overload that accepts a MemoryStream to support above - Early out of the tree streaming logic if no players are in the streaming queue Local test on 4.5k Proc world showed that it took ~1m to stream entire world for 1 player instead of previous 3.5m Tests: minimal, booted procgen map in CLIENT+SERVER local session, waited until everything streamed in
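A hypothetical sketch of the preserialization above: each cell is serialized into a MemoryStream once and the same stream is handed to every queued player, rather than reserializing per recipient. Connection, TreeCellData and SendCellRpc are made-up names standing in for the real types and the ClientRPC(MemoryStream) overload:

```csharp
using System.Collections.Generic;
using System.IO;

public class TreeStreamer
{
    public void StreamCells(List<Connection> queue, IEnumerable<TreeCellData> cells)
    {
        if (queue.Count == 0)
            return; // early out: nobody is waiting, skip all streaming work

        foreach (var cell in cells)
        {
            using (var stream = new MemoryStream())
            {
                cell.WriteTo(stream); // serialize exactly once per cell
                foreach (var player in queue)
                    SendCellRpc(player, stream); // every recipient reuses the same bytes
            }
        }
    }

    private void SendCellRpc(Connection player, MemoryStream data) { /* ClientRPC(MemoryStream)-style send */ }
}

public class Connection { }

public class TreeCellData
{
    public void WriteTo(Stream stream) { /* write tree data */ }
}
```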
14 Days Ago
Merge: from main Tests: none
19 Days Ago
Merge: from requesttrees_spike - Fixes NRE when players disconnect during tree streaming (fixed by discarding those players early) Tests: in Editor CLIENT+SERVER mode disconnected before the 4k proc-map streamed in - server was good
19 Days Ago
Clean: removing leftover log Tests: none, trivial change
19 Days Ago
Merge: from main Tests: none (no conflicts)
19 Days Ago
Bugfix: Don't try to send tree batches to disconnected players - Also replace a broken link in a comment Tests: In Server+Client mode disconnected the client - no NRE
24 Days Ago
Add: Perf Test dud to boot ProcGen map. Builds for player and starts switching the world, but it's currently set up incorrectly and I still need to build/copy asset bundles. Tests: Confirmed that the player asserts when running the new test
24 Days Ago
Clean: removing unnecessary checks and files Tests: none, trivial changes
24 Days Ago
Update: when enabling PerfFwk ensure we have 64bit arch selected Somehow I had 32bit arch targeted locally, and it looks like this is a local-only setting, so it's possible others will also run into this - this should avoid issues (like the previous problem with Rust.Harmony). Tests: while on the Windows32 target enabled the framework - confirmed it switched to 64bit arch
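A sketch of what such a guard could look like, using Unity's editor build-target API; PerfFwkArchGuard and EnsureWin64 are illustrative names, not the actual implementation:

```csharp
#if UNITY_EDITOR
using UnityEditor;

// Force the 64-bit standalone target when the framework is enabled.
// The check avoids a redundant (and slow) target switch.
public static class PerfFwkArchGuard
{
    public static void EnsureWin64()
    {
        if (EditorUserBuildSettings.activeBuildTarget != BuildTarget.StandaloneWindows64)
        {
            EditorUserBuildSettings.SwitchActiveBuildTarget(
                BuildTargetGroup.Standalone, BuildTarget.StandaloneWindows64);
        }
    }
}
#endif
```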
24 Days Ago
Update: don't trample on existing defines when changing mode switches Saves a bit of time when working with Performance Framework Tests: confirmed RUST_PERF_FWK stays when switching to none, client, client+server
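A sketch of a define toggle that preserves unrelated defines, using Unity's PlayerSettings define API; DefineUtil is an illustrative name:

```csharp
#if UNITY_EDITOR
using System.Linq;
using UnityEditor;

// Toggle one define without rewriting the whole define list, so a define
// like RUST_PERF_FWK survives switching between none/client/client+server.
public static class DefineUtil
{
    public static void SetDefine(BuildTargetGroup group, string define, bool enabled)
    {
        var current = PlayerSettings.GetScriptingDefineSymbolsForGroup(group)
            .Split(';')
            .Where(d => d.Length > 0 && d != define);   // keep everything else
        if (enabled)
            current = current.Concat(new[] { define }); // append just ours
        PlayerSettings.SetScriptingDefineSymbolsForGroup(group, string.Join(";", current));
    }
}
#endif
```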
24 Days Ago
Buildfix: Harmony loader is conditionally built via asmdef settings - Originally there was a mix of code macro checks and asmdef constraints - now it's just asmdef constraints and no code defines - Enabled it for all platforms except editor, instead of just 3 explicit ones (Unity's TestFramework currently builds Win32 players instead of Win64). Tests: with PerfFwk enabled and editor in different (Client, Server, both) modes, ran Pool perf tests in Player/standalone mode. Everything built and succeeded.
24 Days Ago
Update: Exclude unnecessary scenes when PerfFwk is enabled - The build list contains disabled scenes whose disabled flag UnityTestFramework ignores, so it tries to build them, leading to issues - It also saves iteration time, since we only build scenes we'll use for perf testing Tests: same as before
24 Days Ago
Update: Moving perf tests to PerfFwk plugin - Added additional references to PerfFwk The original idea of mixing perf test code into the main assembly didn't work out, so for now going the path of containing them in an isolated assembly Tests: tested with other changes to run pooling tests in CLIENT+SERVER standalone mode
25 Days Ago
Undo: auto-reference of PerformanceTesting lib Trying out a different approach Tests: none
25 Days Ago
Merge: from main Tests: none
25 Days Ago
Merge: from requesttrees_spike Removes the "server_requesttrees" lag spike on player connect by spreading out the processing over the next frames. Server owners can disable this via `TreeManager.EnableTreeStreaming 0` and adjust its performance via the `PlayerBudgetMS`, `UpdateBudgetMS` and `CellSize` admin servervars. Tests: Booted Procgen 6k world - took ~3.5min to stream entire world to a player at 10 server fps, with no visual deterioration.
25 Days Ago
Update: Log TreeManager's streaming grid dimensions on init - Available as part of Network level 1 logging Tests: booted in editor with server-only mode
25 Days Ago
Merge: from main Tests: editor boot
25 Days Ago
Update: Use RustLog instead of debug log in TreeManager - Also fixed a minor bug that would not display full timing accuracy for old method - Moved logs to level 1 of Network (was 2) Tests: enabled network logs and tried with tree streaming enabled/disabled
25 Days Ago
Update: adding doc string to Pool.FreeUnmanaged(ref Stopwatch) Tests: none, trivial change
25 Days Ago
Optim: avoid Stopwatch allocations via Pooling Tests: on Craggy flew out until only impostors visible
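A minimal sketch of the pooled-Stopwatch pattern, with a stand-in pool in place of the real Pool API referenced in the surrounding commits:

```csharp
using System.Collections.Generic;
using System.Diagnostics;

// Minimal stand-in for a Facepunch.Pool-style API; for illustration only.
public static class Pool
{
    private static readonly Stack<Stopwatch> watches = new Stack<Stopwatch>();

    public static Stopwatch Get() =>
        watches.Count > 0 ? watches.Pop() : new Stopwatch();

    public static void FreeUnmanaged(ref Stopwatch sw)
    {
        sw.Reset();        // clear timing state for the next user
        watches.Push(sw);
        sw = null;         // caller's reference is cleared
    }
}

public class Example
{
    public long TimeWork(System.Action work)
    {
        var sw = Pool.Get(); // rent instead of allocating a new Stopwatch
        sw.Start();
        work();
        long ms = sw.ElapsedMilliseconds;
        Pool.FreeUnmanaged(ref sw);
        return ms;
    }
}
```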
25 Days Ago
Clean: updated a comment implying a potential bug - it was wrong Tests: none, trivial change
25 Days Ago
Update: TreeManager's grid is defined by cell size - Exposed via TreeManager.CellSize convar (takes effect at boot only) - Reorganized code a smidge to reduce how scattered a bunch of info was Tests: On craggy connected with 2nd player and flew out until only impostors were visible.
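A sketch of deriving grid dimensions from a configurable cell size, assuming a square world centered on the origin; all names are illustrative, not the actual TreeManager internals:

```csharp
using UnityEngine;

public static class TreeGrid
{
    // Number of cells covering a square world for a given cell size.
    public static int CellCount(float worldSize, float cellSize)
    {
        int cellsPerAxis = Mathf.CeilToInt(worldSize / cellSize);
        return cellsPerAxis * cellsPerAxis;
    }

    // Map a world position to a flat cell index (world assumed origin-centered).
    public static int CellIndex(Vector3 pos, float worldSize, float cellSize)
    {
        int cellsPerAxis = Mathf.CeilToInt(worldSize / cellSize);
        int x = Mathf.Clamp((int)((pos.x + worldSize * 0.5f) / cellSize), 0, cellsPerAxis - 1);
        int z = Mathf.Clamp((int)((pos.z + worldSize * 0.5f) / cellSize), 0, cellsPerAxis - 1);
        return z * cellsPerAxis + x;
    }
}
```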
28 Days Ago
Update: Whole tree streaming logic is time budgeted - Right now has a budget of 1ms, controlled by the UpdateBudgetMS convar - Streaming preference is given to players who have more cells left to stream This change protects us from having too many players in the streaming queue eating up all the frame budget. Tests: On craggy confirmed that streaming still completes
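A minimal sketch of a budgeted update of this shape: sort players by remaining cells (neediest first), then stop once the per-frame budget is spent. Only UpdateBudgetMS mirrors a convar named here; the other names are made up:

```csharp
using System.Collections.Generic;
using System.Diagnostics;

public class StreamingUpdate
{
    public static float UpdateBudgetMS = 1.0f; // mirrors the convar above

    public void Run(List<StreamingPlayer> queue)
    {
        // Players with the most cells left to stream go first.
        queue.Sort((a, b) => b.CellsLeft.CompareTo(a.CellsLeft));

        var sw = Stopwatch.StartNew();
        foreach (var player in queue)
        {
            if (sw.Elapsed.TotalMilliseconds >= UpdateBudgetMS)
                break; // out of budget: resume next frame
            player.SendNextBatch();
        }
    }
}

public class StreamingPlayer
{
    public int CellsLeft;
    public void SendNextBatch() { /* serialize + send one cell batch */ }
}
```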
28 Days Ago
Update: consolidate tree manager cell streaming - Hidden behind Network level 2 logging Right now we don't fit the 10micros budget (~60micros), so we end up sending 1 cell per player - going to add a frame budget with player sorting to avoid hogging the server Tests: On 4.5k procgen map tracked first and second player connecting, was able to observe the cost of JIT
28 Days Ago
Clean: Removing a couple TODOs - Not doing vehicle specific streaming logic as 4.5k world grid gets streamed in less than a second - removing old TODO since my change implements it Tests: none, trivial change
28 Days Ago
Update: Budget every tree-cell send On the Procgen 4.5k map from a save, sending one cell can take 0.25ms - this change should smooth out the cost further. Tests: Booted on craggy, confirmed that the player received all cells.
28 Days Ago
Bugfix: Don't send newly spawned trees to a player if they land in a batch yet to be sent to them - Added a profiling scope so we can track if it's taking too much time (it's a TxP complexity algo, but P tends to 0 very quickly, so we should be able to afford this) Tests: Tested on 4.5k procgen world, connected from a separate client and chopped a bunch of trees; saw no duplicates. That said, it's very difficult to proc this (<1s window).
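A sketch of the duplicate guard this describes: on spawn, skip players whose streaming queue still contains the tree's cell, since the tree will arrive inside that batch anyway. The trees-times-players scan matches the TxP complexity noted above; all names are illustrative:

```csharp
using System.Collections.Generic;

public class TreeSpawnGuard
{
    public void OnTreeSpawned(int cellIndex, List<QueuedPlayer> players)
    {
        foreach (var player in players)
        {
            if (player.PendingCells.Contains(cellIndex))
                continue; // cell not streamed yet; tree rides along in the batch
            player.SendSingleTree(cellIndex);
        }
    }
}

public class QueuedPlayer
{
    public HashSet<int> PendingCells = new HashSet<int>();
    public void SendSingleTree(int cellIndex) { /* send one tree update */ }
}
```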
28 Days Ago
Clean: forgot to save an extra comment Tests: none, trivial change
28 Days Ago
Update: replacing cell budget with a time-per-player budget - Setting this budget to 10micros initially This budget doesn't evaluate every cell, but rather groupings of cells. I'll re-evaluate this once I get to testing big procedural worlds. Tests: Confirmed entire Craggy grid gets streamed to the player
28 Days Ago
Update: added a runtime switch to disable tree batch streaming - Enabled by default Tests: booted with it being both turned on and off
28 Days Ago
Merge: from main Tests: none
31 Days Ago
Backout: bring back all the scenes Without this, editor bootstrap workflow dies. Tests: none, trivial change
32 Days Ago
Update: Reducing which scenes we have set for build in Build Settings Our internal build macros only build 2 scenes, while Test Runner tries to build all (incl disabled) scenes set in the Build Settings. This allows us to run Player-mode tests without extra steps. Tests: With PerfFwk enabled, ran pool perf tests in Player mode - it ran and gathered results.
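A sketch of trimming disabled scenes via Unity's EditorBuildSettings API; PerfFwkScenes is an illustrative name, not the actual implementation:

```csharp
#if UNITY_EDITOR
using System.Linq;
using UnityEditor;

// Drop disabled entries from the Build Settings scene list, since
// Test Runner tries to build every listed scene, even disabled ones.
public static class PerfFwkScenes
{
    public static void Apply()
    {
        EditorBuildSettings.scenes = EditorBuildSettings.scenes
            .Where(s => s.enabled)
            .ToArray();
    }
}
#endif
```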
32 Days Ago
Update: Reorganization of the perf test framework - Unity.PerformanceTesting is marked as auto-referenced (this is another modification of the original package) - Added a "Rust Editor/Performance Framework/Active" menu toggle to enable/disable perf test scripts - controls the RUST_PERF_FWK define - Renamed and moved the PerfFwk assembly to Plugins/ (as that's where it's designed to live) - Moved existing perf tests out to the root assembly (Scripts/PerfTests) so that we can access gameplay code The goal is to both isolate the perf framework code from the codebase as much as possible (don't ship it to players or make Unity devs load stuff that's not useful), while also being able to work with our main game scripts directly. Tests: - Switched the toggle on and off - no editor errors. - Built client & server with the framework enabled - it passed. -- Found no PerfFwk references in the main game assembly for both client and server, and no references to test classes
32 Days Ago
Buildfix: isolating perf test scripts into its own assembly - For the time being stores existing perf tests - will reorganize when the structure is clearer in the future. Tests: Unity booted without errors, confirmed perf tests' presence in Test Runner
32 Days Ago
Merge: from main Tests: none
32 Days Ago
Update: enable sound pooling by default If something goes derp, can be disabled via audio.enablesoundpooling 0 Tests: was explicitly enabled over last 3 days while running all recent changes
32 Days Ago
Merge: from main Tests: ran around on craggy with logs
32 Days Ago
Bugfix: don't leak looping sounds when quick-switching It's possible that equipped items with sound effects (set up via sound player) can enable-then-disable across 2 frames, before SoundManager picks up the pending play request. If it's a looping sound (like the torch burn loop) it would stay alive forever. Now we clean up the pending requests on disable of the sound player. Tests: mousewheel-quickswitched between a rock and a torch and observed numbers in audio.printsounds - the burn loop is no longer accumulating.
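A minimal sketch of the fix's shape, assuming the sound player queues itself into a pending list that SoundManager drains later; all names are illustrative:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class SoundPlayer : MonoBehaviour
{
    // Pending play requests, drained by the sound manager on its update.
    public static readonly List<SoundPlayer> PendingRequests = new List<SoundPlayer>();

    public void Play() => PendingRequests.Add(this);

    private void OnDisable()
    {
        // Enable-then-disable within two frames used to leave the request
        // queued; removing it here prevents the leaked looping sound.
        PendingRequests.Remove(this);
    }
}
```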
32 Days Ago
Update: SoundManager pools internal lists The pooling effect is minimal, but it achieves 2 small improvements: - we don't hold lists in memory for sounds that don't reappear for a while - those can be reused in other parts of the code - audio.printsounds no longer reports 0 active sounds per definition Tests: On craggy quickswitched between equipped items. Saw the various sounds appear and disappear in the logs.
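A speculative sketch of pooling the per-definition lists; the dictionary layout is a guess for illustration, not the actual SoundManager structure:

```csharp
using System.Collections.Generic;

public class SoundManagerLists
{
    private readonly Dictionary<string, List<object>> active = new Dictionary<string, List<object>>();
    private readonly Stack<List<object>> listPool = new Stack<List<object>>();

    public void Track(string definition, object sound)
    {
        if (!active.TryGetValue(definition, out var list))
        {
            list = listPool.Count > 0 ? listPool.Pop() : new List<object>(); // rent a list
            active[definition] = list;
        }
        list.Add(sound);
    }

    public void Untrack(string definition, object sound)
    {
        if (!active.TryGetValue(definition, out var list)) return;
        list.Remove(sound);
        if (list.Count == 0)
        {
            active.Remove(definition); // no empty "0 active" entries linger
            listPool.Push(list);       // recycle for other definitions
        }
    }
}
```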
32 Days Ago
Update: microphone drops any sounds it has accumulated on destroy Tests: tested by playing the flute when exploding the microphone stand connected to a speaker
32 Days Ago
Update: MiningQuarry drops sound assets when destroyed (though it's invulnerable) Tests: none, trivial change (same type of change as previous)
32 Days Ago
Update: modular car's engine drops all sound resources on disable Tests: blew up a modular car
32 Days Ago
Update: AmbianceWaveSounds recycles sounds on disable Tests: none targeted (same type of change as before), but it has been live on my branch for 2 days while testing and working on other changes.
33 Days Ago
Update: Engine blend loop drops its sound resources on disable Tests: none, trivial change (same type of change as previous)