User: Daniel P
Repo: rust_reboot

1,623 Commits over 396 Days - 0.17cph!

3 Months Ago
Merge: from main
Tests: none, no conflicts
3 Months Ago
Update: BiomeBenchmark - first baby steps
- Generates empty row-islands + biome map for these islands
- Generates biome map for row-islands
Confused why I get the beach texture everywhere. Need to sort that out next.
Tests: ran the BiomeBenchmark scene in editor
3 Months Ago
Update: BenchScene - StartBenchmark is now a coroutine
Helpful to make the BiomeBenchmark setup straightforward
Tests: autobench in editor
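Running the benchmark entry point as a Unity coroutine lets each setup stage yield across frames instead of blocking. A minimal sketch of the pattern, with hypothetical names (not the actual BenchScene code):

```csharp
using System.Collections;
using UnityEngine;

public class BenchSceneSketch : MonoBehaviour
{
    // Hypothetical sketch: a coroutine lets each setup stage
    // finish and settle before the next one begins.
    IEnumerator StartBenchmark()
    {
        yield return GenerateIslands();      // e.g. spawn row-islands over several frames
        yield return new WaitForSeconds(1f); // let dependent systems warm up
        BeginMeasuring();
    }

    IEnumerator GenerateIslands() { yield return null; }
    void BeginMeasuring() { }
}
```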
3 Months Ago
Clean: remove the deleted TerrainBenchmark from the build settings
Tests: none, trivial change
3 Months Ago
Bugfix: FoliageGridBenchmark - FoliageGrid was destroyed without waiting for job completion
This caused issues with native container cleanup
Tests: autobench in editor
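Disposing a native container while a job still references it is a classic Unity Jobs pitfall. A hedged sketch of the fix described above, with hypothetical names (not the actual FoliageGrid code):

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class FoliageGridSketch : MonoBehaviour
{
    NativeArray<float> _data; // hypothetical container used by a scheduled job
    JobHandle _handle;

    void OnDestroy()
    {
        // Wait for any in-flight job before freeing its containers;
        // disposing while the job is still running breaks cleanup
        // and trips the native container safety checks.
        _handle.Complete();
        if (_data.IsCreated)
            _data.Dispose();
    }
}
```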
3 Months Ago
Update: Merge Terrain and FoliageGrid benchmarks
- Re-enabled terrain rendering in FoliageGridBenchmark
- Deleted TerrainBenchmark
Tests: autobench in editor
3 Months Ago
Bugfix: MonumentBenchmark - don't clean up reflection probes between monument spawns
This would cause sporadic exceptions during the benchmark run.
Tests: autobench in editor
3 Months Ago
Update: Hook in PacketProfiling for outgoing traffic
Only tracks Rust packets.
Tests: on Craggy in editor with runtime_profiling 2 and breakpoints
3 Months Ago
Merge: from main
3 Months Ago
Bugfix: MonumentBenchmark - clean up CullingManager between monuments
- Also reduced cooldown frames to 5 from 10
That should be all monument NREs fixed; hopefully none in standalone either
Tests: played MonumentBenchmark in editor
3 Months Ago
Bugfix: MonumentBenchmark - clear the dungeon entrance cache since we destroy them now
Only 1 exception left
Tests: MonumentBenchmark scene in editor
3 Months Ago
Bugfix: MonumentScene - create the missing World.Config that some monuments rely upon
Tests: ran the MonumentBenchmark in editor - fewer errors
3 Months Ago
Update: MonumentBenchmark - clean up any left-over prefabs
Some monuments spawn dungeon entrances or the like, and they stay in the scene after we start benching a different prefab.
Tests: played MonumentBenchmark in editor
3 Months Ago
Update: speed up MonumentBenchmark
- 10 frames to cool down instead of 5s
- bench 60 frames instead of 120 per vantage stage
Cuts the run down from 35m to 4m50s.
Tests: played the MonumentBenchmark scene in editor
3 Months Ago
Update: MonumentBenchmark - bench all monuments instead of 3
Took 35 minutes, need to reduce timers
Tests: autobench in editor
3 Months Ago
Debug: Bring back player cache validation on release
Saw on the release env today that it tripped twice and prevented issues, so there's still value in it.
Tests: none, trivial change
3 Months Ago
Bugfix: MonumentBenchmark - move WaterSystem init to Start
In the standalone client it errored due to a different default Awake order (Monument's Awake ran before WaterSystem's)
Tests: autobench in editor
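Unity does not guarantee Awake order between unrelated components unless it is configured explicitly, but it does guarantee that every Awake runs before any Start. Initialization that depends on another system therefore belongs in Start. A minimal illustration, with hypothetical names:

```csharp
using UnityEngine;

public class MonumentBenchmarkSketch : MonoBehaviour
{
    void Awake()
    {
        // Awake order between components is effectively arbitrary
        // in a standalone build: do only self-contained setup here.
    }

    void Start()
    {
        // All Awake calls have completed by the time Start runs,
        // so a dependency like WaterSystem (hypothetical reference)
        // is safe to initialize against here.
        // WaterSystem.Init();
    }
}
```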
3 Months Ago
Update: add MonumentBenchmark to the autobench set
Tests: ran autobench in the editor
3 Months Ago
Bugfix: MonumentBenchmark - fix the terrain and water systems
- Added the missing WaterSystems setup and hooked in initialization
- Fixed terrain being set up with mismatching Craggy and CraggyHD assets (using Craggy now) and hooked in initialization
No more exceptions during the monument benchmark
Tests: ran autobench in editor
3 Months Ago
Update: re-enable the Effects and Firework benchmark scenes
They didn't have issues, so adding them to the pile
Tests: autobench in editor
3 Months Ago
Bugfix: Disable the demo benchmarking part of autobench - removing old demos
Our demos are too stale and no longer binary compatible with the protocol changes
Tests: ran autobench in editor
3 Months Ago
Merge: from main
3 Months Ago
Update: move some of the validation logic back to DEV_BUILD only
These routines do sanity checks, and over the last couple of days of testing they haven't picked up anything.
Tests: on Craggy in editor with UsePlayerUpdateJobs 1
3 Months Ago
Clean: remove recently-added debugging logic
Tests: none, trivial change
3 Months Ago
Bugfix: safeguard against NREs induced by kicked players in UsePlayerUpdateJobs
- Added a couple of warning comments
If a kick happens in the middle of the player update loop, it would invalidate one of the cached indexes and access a null.
Tests: forced a disconnect mid-processing, no NRE caught
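When a player can be kicked mid-iteration, cached references into the player list go stale under the loop. A hedged sketch of this kind of safeguard, using stand-in names rather than the actual Rust server code:

```csharp
// Hypothetical stand-in for the real player type, with just the
// members the sketch needs.
class BasePlayerStub
{
    public bool IsConnected;
    public void FinalizeTick() { }
}

static class PlayerUpdateSketch
{
    // Guard a player-update loop against a kick invalidating
    // an entry that was cached before the loop started.
    static void FinalizeTicks(BasePlayerStub[] cachedPlayers)
    {
        for (int i = 0; i < cachedPlayers.Length; i++)
        {
            var player = cachedPlayers[i];
            // A kick mid-loop can null out or disconnect the entry,
            // so re-validate before dereferencing it.
            if (player == null || !player.IsConnected)
                continue;
            player.FinalizeTick();
        }
    }
}
```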
3 Months Ago
Merge: from parallel_validatemove
- Additional debugging logic to help track down the mystery NRE
Tests: ran around in editor in SERVER+CLIENT mode with useplayerupdatejobs 1
3 Months Ago
Merge: from main
Tests: none, no conflicts
3 Months Ago
Debug: add a latch to track player disconnects & removal during ServerUpdateParallel
Tests: unit tests + Craggy in editor
3 Months Ago
Debug: adding progress tracking to FinalizeTickParallel to help track down the origin of the NRE
- also a couple of formatting fixes
Tests: used the mode on Craggy in editor
3 Months Ago
Merge: from main
3 Months Ago
Merge: from texttable_allocs
- TextTable now pads the last column as before
Tests: unit tests + editor test on a 1-player team
3 Months Ago
Update: rewrite teaminfo to reduce allocs
Tests: tested in editor
3 Months Ago
Update: bring back last-column padding to TextTable.ToString
This time, on the right branch
Tests: unit tests
3 Months Ago
Merge: from main
3 Months Ago
Backout of 122696 - it was meant to go into the texttable_allocs branch
3 Months Ago
Update: bring back last-column padding to TextTable.ToString
Tests: unit tests
3 Months Ago
Merge: from main
3 Months Ago
Merge: from profiling_improvements
- New server profiler allocation tracking mode: start with "watchallocs", stop with "stopwatchingallocs", control export via the various NotifyOn... server vars
- JSON snapshot compression is now streamed, saving 95% of memory in the process and reducing GC events
Tests: unit tests in editor, all forms of profiling in editor on Craggy in Server+Client mode, all forms of profiling in a standalone server on Linux WSL
3 Months Ago
Merge: from main
Tests: none, no conflicts
3 Months Ago
Bugfix: update the unit allocation tracking tests to work with the new notification params
Tests: ran the "TestContinuousRecording" test
3 Months Ago
Update: all server profiler commands now respond whether the action started
Tests: ran all commands on Craggy in editor
3 Months Ago
Merge: from main
Tests: none, no conflicts
3 Months Ago
Update: additional memory metrics for memory profiling
- Using release binaries based on 77ac1774
- Renamed NotifyOnAllocCount to NotifyOnTotalAllocCount
- Added NotifyOnMainAllocCount, NotifyOnMainMemKB, NotifyOnWorkerAllocCount, NotifyOnWorkerMemKB (default 0 - disabled)
- Set NotifyOnTotalAllocCount to 16k and NotifyOnTotalMemKB to 12MB
Makes it easier to focus the investigation on particular areas.
Tests: continuous profiling on Craggy, enabling individual metrics and verifying that it generated snapshots with the expected "violations"
3 Months Ago
Update: ContinuousProfiler now has TotalAlloc and AllocCount metrics for allocation snapshotting
- Release binary built with 27d643a3
- Exposed via NotifyOnTotalMemKB and NotifyOnAllocCount (set to 0 to disable)
Tests: tested both on Craggy in editor. Helped spot a potential small leak in PerformanceLogging
3 Months Ago
Bugfix: snapshot JSON export no longer emits an extra comma that made the JSON invalid
- Also sneaking in AllocWithStack being on the execution thread only, not on the alloc thread
Super rare, but it would have been hard to find in the code whenever it happened in the future
Tests: persnapshot and watchallocs in editor
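A standard way to avoid stray commas when hand-writing JSON is to emit the separator before every element except the first, so no trailing comma can ever appear. A sketch of the pattern (not the actual exporter code):

```csharp
using System.Collections.Generic;
using System.Text;

static class JsonArraySketch
{
    // Writes [1,2,3]-style output; the separator is written before
    // each element after the first, never after the last.
    public static string ToJsonArray(IReadOnlyList<int> values)
    {
        var sb = new StringBuilder("[");
        for (int i = 0; i < values.Count; i++)
        {
            if (i > 0) sb.Append(',');
            sb.Append(values[i]);
        }
        return sb.Append(']').ToString();
    }
}
```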
3 Months Ago
Bugfix: Prevent data races leading to torn continuous profiler snapshots
- Binary built using the Release conf based on the 237b5df3 commit
- Both Resume and Stop now happen on profiler frame end (previously Resume was instant)
- Stop gets deferred to after the snapshot is exported if requested during processing
- The profiler, if initialized, always gets called on a new frame (since the internal state machine demands more steps than the user code can know about)
- Updated the continuous profiling unit test to account for the extra OnFrameEnd required
It was possible for a stop to be requested during the export process, leading to use-after-free exceptions on a managed thread and a torn snapshot.
Tests: unit tests + 20 manual watchallocs->stopwatchingallocs calls
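The deferred-stop idea described here can be sketched as a small state machine: a stop requested while an export is in flight only sets a flag, and the real teardown happens at frame end once the export is done. This is a hypothetical illustration of the approach, not the actual profiler code:

```csharp
// Hypothetical sketch: defer Stop requested during snapshot export
// so the exporter never reads buffers freed by a concurrent Stop.
class ContinuousProfilerSketch
{
    bool _exporting;
    bool _stopRequested;

    public void RequestStop()
    {
        if (_exporting)
        {
            // Export still reading buffers: remember the request.
            _stopRequested = true;
            return;
        }
        StopNow();
    }

    // Called once per profiler frame, even when "stopped", because
    // the internal state machine needs the extra steps.
    public void OnFrameEnd()
    {
        if (_exporting) return;
        if (_stopRequested)
        {
            _stopRequested = false;
            StopNow(); // safe: export has finished by now
        }
    }

    void StopNow() { /* free buffers, tear down */ }
}
```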
4 Months Ago
Update: Allow the user to control how big a callstack to record when tracking allocations
- Defaults to 16, which should be enough to track where in code an allocation originates
- Updated the description
- Windows binary built with the 1a176138 commit
Tests: used it on Craggy. Discovered an issue with the preceding commit, but this change works as expected
4 Months Ago
Optim: ProfilerExporter.Json - export now uses streaming compression
Avoids the need to allocate a massive StringBuilder. Running watchallocs for 2 mins caused 3-4 GC collection events in total, instead of 1 during each export.
Tests: did a perfsnapshot and ran watchallocs for a couple of minutes
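Streaming the JSON straight into a compressor keeps peak memory at one chunk instead of the whole document. A minimal sketch using .NET's built-in GZipStream; the actual exporter's API and compressor may differ:

```csharp
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Text;

static class StreamingExportSketch
{
    // Each chunk is compressed as it is written, so there is no
    // giant StringBuilder holding the full snapshot in memory.
    public static void ExportCompressed(string path, IEnumerable<string> jsonChunks)
    {
        using var file = File.Create(path);
        using var gzip = new GZipStream(file, CompressionLevel.Fastest);
        using var writer = new StreamWriter(gzip, Encoding.UTF8);
        foreach (var chunk in jsonChunks)
            writer.Write(chunk);
    }
}
```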
4 Months Ago
Bugfix: ContinuousProfiler - don't record Sync marks when paused for export
- Based off the 23b9590b commit
This was the last known bug - the lib was still writing Sync marks for the new frame, eventually leading to main-thread buffer growth, which invalidated pointers during export.
Tests: soaked for almost 1 hour with watchallocs - no more unrecognized reads on the main thread
4 Months Ago
Merge: from main