1,623 Commits over 396 Days - 0.17cph!
Merge: from main
Tests: none, no conflicts
Update: BiomeBenchmark - first baby steps
- Generates empty row-islands
- Generates a biome map for these row-islands
Not sure yet why I get the beach texture everywhere. Need to sort that out next.
Tests: ran the BiomeBenchmark scene in editor
Update: BenchScene - StartBenchmark is now a coroutine
Helps make the BiomeBenchmark setup straightforward
Tests: autobench in editor
Clean: remove deleted TerrainBenchmark from the build settings
Tests: none, trivial change
Bugfix: FoliageGridBenchmark - FoliageGrid was destroyed without waiting for job completion
This caused issues with native container cleanup
Tests: autobench in editor
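The FoliageGrid fix above follows a general pattern: a resource backed by an in-flight job must wait for that job before freeing its buffers. The game code is Unity/C#; this is a minimal Python sketch of the same idea, with all names (FoliageGrid, schedule_job, destroy) illustrative rather than the real API.

```python
import threading

class FoliageGrid:
    def __init__(self):
        self.buffer = [0] * 1024          # stand-in for a native container
        self.job = None

    def schedule_job(self):
        # Background job that writes into the buffer.
        self.job = threading.Thread(target=self._fill)
        self.job.start()

    def _fill(self):
        for i in range(len(self.buffer)):
            self.buffer[i] = i

    def destroy(self):
        # The bug: freeing the buffer while the job is still running.
        # The fix: block until the job completes before cleanup.
        if self.job is not None:
            self.job.join()
        self.buffer = None

grid = FoliageGrid()
grid.schedule_job()
grid.destroy()                            # safe: join() ran before cleanup
print(grid.buffer is None)                # True
```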
Update: Merge Terrain and FoliageGrid benchmarks
- Re-enabled terrain rendering in FoliageGridBenchmark
- Delete TerrainBenchmark
Tests: autobench in editor
Bugfix: MonumentBenchmark - don't clean up reflection probes between monument spawns
This was causing sporadic exceptions during the benchmark run.
Tests: autobench in editor
Update: Hook in PacketProfiling for outgoing traffic
Only tracks Rust packets.
Tests: on Craggy in editor with runtime_profiling 2 and breakpoints
Bugfix: MonumentBenchmark - clean-up CullingManager between monuments
- Also reduced cool-down frames from 10 to 5
That should be all monument NREs fixed; hopefully none show up in standalone either
Tests: played MonumentBenchmark in editor
Bugfix: MonumentBenchmark - clear dungeon entrance cache since we destroy them now
Only 1 exception left
Tests: MonumentBenchmark scene in editor
Bugfix: MonumentScene - create missing World.Config that some monuments rely upon
Tests: ran the MonumentBenchmark in editor - fewer errors
Update: MonumentBenchmark - clean up any left-over prefabs
Some monuments spawn dungeon entrances or the like, and these stay in the scene after we start benching a different prefab.
Tests: played MonumentBenchmark in editor
Update: speed up MonumentBenchmark
- 10 frames of cool-down instead of 5s
- bench 60 frames instead of 120 per vantage stage
Cuts down from 35m to 4m50s.
Tests: played the MonumentBenchmark scene in editor
Update: MonumentBenchmark - bench all monuments instead of 3
Took 35 minutes; need to reduce the timers
Tests: autobench in editor
Debug: Bring back player cache validation on release
Saw it trip twice on the release environment today and prevent issues, so there's still value in it.
Tests: none, trivial change
Bugfix: MonumentBenchmark - move WaterSystem init to Start
In the standalone client it errored due to a different default Awake order (Monument's Awake ran before WaterSystem's)
Tests: autobench in editor
Update: add MonumentBenchmark to the autobench set
Tests: ran autobench in the editor
Bugfix: MonumentBenchmark - fix terrain and water systems
- Added missing WaterSystems setup and hooked in initialization
- Fixed terrain being set up with mismatched Craggy and CraggyHD assets (using Craggy now) and hooked in initialization
No more exceptions during monument benchmark
Tests: ran autobench in editor
Update: re-enable Effects and Firework benchmark scenes
They didn't have issues, so adding them to the pile
Tests: autobench in editor
Bugfix: Disable demo benchmarking part of autobench
- removing old demos
Our demos are too stale and no longer binary-compatible with the protocol changes
Tests: ran autobench in editor
Update: move some of the validate logic back to DEV_BUILD only
These routines do sanity checks, and over the last couple of days of testing they haven't picked anything up.
Tests: on Craggy in editor with UsePlayerUpdateJobs 1
Clean: remove recently-added debugging logic
Tests: none, trivial change
Bugfix: safeguard against NREs caused by kicked players in UsePlayerUpdateJobs
- Added a couple of warning comments
If a kick happens in the middle of the player update loop, it invalidates one of the cached indexes and accesses a null entry.
Tests: forced a disconnect mid-processing, no NRE caught
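The bug class described above can be reproduced in a few lines: indexes cached before an update loop go stale when a removal shifts the underlying list mid-loop. This Python sketch is purely illustrative (the real code is Unity/C#, and the names here are hypothetical); the bounds check stands in for the added safeguard.

```python
players = ["alice", "bob", "carol"]
cached_indexes = [0, 1, 2]                # captured before the update loop

def kick(name):
    players.remove(name)                  # happens mid-update on disconnect

safe = []
for idx in cached_indexes:
    if idx == 1:
        kick("bob")                       # shifts the list, staling later indexes
    if idx < len(players):                # the guard: validate before dereferencing
        safe.append(players[idx])

# "carol" shifted into bob's slot; index 2 is now out of range and skipped
print(safe)                               # ['alice', 'carol']
```

Without the bounds check, `players[2]` on the last iteration would raise, which is the list-based analogue of the null access the fix guards against.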
Merge: from parallel_validatemove
- Additional debugging logic to help track down mystery NRE
Tests: ran around in editor in SERVER+CLIENT mode with useplayerupdatejobs 1
Merge: from main
Tests: none, no conflicts
Debug: add a latch to track player disconnects & removal during ServerUpdateParallel
Tests: unit tests + craggy in editor
Debug: adding progress tracking to FinalizeTickParallel to help track down origin of NRE
- also a couple of formatting fixes
Tests: used the mode on Craggy in editor
Merge: from texttable_allocs
- TextTable now pads the last column as before
Tests: unit tests + editor test on 1-player team
Update: rewrite teaminfo to reduce allocs
Tests: tested in editor
Update: bring back last column padding to TextTable.ToString
This time, on the right branch
Tests: unit tests
Backout of 122696 - it was meant to go into the texttable_allocs branch
Update: bring back last column padding to TextTable.ToString
Tests: unit tests
Merge: from profiling_improvements
- New server profiler allocation tracking mode, start with "watchallocs", stop with "stopwatchingallocs", control export via various NotifyOn... server vars
- Json Snapshot compression is now streamed, saving 95% of memory in the process and reducing GC events
Tests: unit tests in editor, all forms of profiling in editor on Craggy in Server+Client mode, all forms of profiling in standalone server on Linux WSL
Merge: from main
Tests: none, no conflicts
Bugfix: update unit allocation tracking tests to work with new notification params
Tests: ran the "TestContinuousRecording" test
Update: all server profiler commands now report whether the action started
Tests: ran all commands on Craggy in editor
Merge: from main
Tests: none, no conflicts
Update: additional memory metrics for memory profiling
- Using release binaries based on 77ac1774
- Renamed NotifyOnAllocCount to NotifyOnTotalAllocCount
- Added NotifyOnMainAllocCount, NotifyOnMainMemKB, NotifyOnWorkerAllocCount, NotifyOnWorkerMemKB (default 0 - disabled)
- Set NotifyOnTotalAllocCount to 16k and NotifyOnTotalMemKB to 12MB
Makes it easier to focus investigation in particular areas.
Tests: continuous profiling on Craggy with enabling individual metrics and verifying that it generated snapshots with expected "violations"
Update: ContinuousProfiler now has TotalAlloc and AllocCount metrics for allocation snapshotting
- Release binary build with 27d643a3
- exposed via NotifyOnTotalMemKB and NotifyOnAllocCount (set to 0 to disable)
Tests: tested both on Craggy in editor. Helped spot a potential small leak in PerformanceLogging
Bugfix: snapshot json export no longer emits an extra comma that made the json invalid
- also sneaking in AllocWithStack to be on execution thread only, not on alloc thread
Super rare, but it would have been hard to track down in the code when it eventually happened
Tests: persnapshot and watchallocs in editor
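The extra-comma bug above is a classic failure mode of hand-rolled JSON writers: emitting `item,` after every element leaves a dangling comma before the closing bracket, which is invalid JSON. Emitting the separator *before* every element except the first avoids it. A minimal Python sketch of the pattern, not the profiler's actual code:

```python
import io
import json

def export_items(items):
    out = io.StringIO()
    out.write("[")
    for i, item in enumerate(items):
        if i > 0:
            out.write(",")                # separator goes before, not after
        out.write(json.dumps(item))
    out.write("]")
    return out.getvalue()

text = export_items([1, 2, 3])
print(text)                               # [1,2,3]
print(json.loads(text))                   # parses cleanly: [1, 2, 3]
```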
Bugfix: Prevent data races leading to torn continuous profiler snapshots
- Binary built using Release conf based on 237b5df3 commit
- Both Resume and Stop happen on profiler frame end (previously Resume was instant)
- Stop gets deferred to after snapshot is exported if requested during processing
- The profiler, if initialized, always gets called on a new frame (the internal state machine requires more steps than user code can know about)
- Updated the continuous profiling unit test to account for the extra OnFrameEnd required
It was possible for a stop to be requested during the export process, leading to use-after-free exceptions on a managed thread and a torn snapshot.
Tests: unit tests + 20 manual watchallocs->stopwatchingallocs calls
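The deferred-stop behaviour described above can be sketched as a small state machine: a stop request that arrives while an export is in flight is recorded and honoured at frame end, after the export finishes, rather than tearing down buffers the exporter is still reading. This is a hypothetical Python illustration of the pattern, not the profiler's real implementation:

```python
class ContinuousProfiler:
    def __init__(self):
        self.running = True
        self.exporting = False
        self.stop_requested = False

    def request_stop(self):
        if self.exporting:
            self.stop_requested = True    # defer: export still reads our buffers
        else:
            self.running = False          # safe to stop immediately

    def on_frame_end(self):
        if self.exporting:
            self.exporting = False        # export completes at the frame boundary
            if self.stop_requested:
                self.running = False      # now safe to tear down

p = ContinuousProfiler()
p.exporting = True                        # snapshot export in flight
p.request_stop()
print(p.running)                          # True - stop was deferred
p.on_frame_end()
print(p.running)                          # False - applied after the export
```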
Update: Allow user to control how big of a callstack to record when tracking allocations
- Defaults to 16, which should be enough to track where in the code an allocation originates
- Updated description
- windows binary built with 1a176138 commit
Tests: used it on Craggy. Discovered an issue with the preceding commit, but this change works as expected
Optim: ProfilerExporter.Json - export now uses streaming compression
Avoids allocating a massive StringBuilder. Running watchallocs for 2 minutes caused 3-4 GC collection events in total, instead of 1 during each export.
Tests: ran a perfsnapshot and watchallocs for a couple of minutes
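The streaming idea above: instead of building the entire JSON document in one huge string and compressing it at the end, each chunk is written straight into a compression stream, so peak memory stays near the chunk size rather than the document size. A Python sketch under assumed names, with gzip standing in for whatever codec the exporter actually uses:

```python
import gzip
import io
import json

def export_streamed(rows):
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        gz.write(b"[")
        for i, row in enumerate(rows):
            if i > 0:
                gz.write(b",")
            # Each chunk is compressed as it is written; no full-document
            # string is ever held in memory.
            gz.write(json.dumps(row).encode())
        gz.write(b"]")
    return buf.getvalue()

data = export_streamed([{"frame": i} for i in range(1000)])
print(json.loads(gzip.decompress(data))[0])   # round-trips: {'frame': 0}
```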
Bugfix: ContinuousProfiler - don't record Sync marks when paused for export
- Based off 23b9590b commit
This was the last known bug - the lib was still writing Sync marks for the new frame, eventually leading to main-thread buffer growth, which invalidated pointers during export.
Tests: soaked for almost 1 hour with watchallocs - no more unrecognized reads on the main thread