branch: rust_reboot/main/profiling_improvements

148 Commits over 243 Days - 0.03cph!

13 Days Ago
Update: Hook in PacketProfiling for outgoing traffic. Only tracks Rust packets. Tests: on Craggy in editor with runtime_profiling 2 and breakpoints
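The commit doesn't show the hook itself; below is a minimal sketch of what per-packet-type accounting for outgoing traffic might look like. PacketProfiling's internals and the PacketType enum are hypothetical, not the actual Rust netcode API.

    using System.Collections.Generic;

    public enum PacketType : byte { EntityUpdate, RPC, Tick }

    public static class PacketProfiling
    {
        private static readonly Dictionary<PacketType, long> outgoingBytes = new();

        // Called from the outgoing send path when runtime_profiling is enabled.
        public static void RecordOutgoing(PacketType type, int sizeBytes)
        {
            outgoingBytes.TryGetValue(type, out var total);
            outgoingBytes[type] = total + sizeBytes;
        }
    }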
13 Days Ago
Merge: from main
16 Days Ago
Merge: from main Tests: none, no conflicts
16 Days Ago
Bugfix: update allocation tracking unit tests to work with new notification params. Tests: ran the "TestContinuousRecording" test
16 Days Ago
Update: all server profiler commands now respond to indicate whether the action started. Tests: ran all commands on Craggy in editor
16 Days Ago
Merge: from main Tests: none, no conflicts
16 Days Ago
Update: additional memory metrics for memory profiling - Using release binaries based on 77ac1774 - Renamed NotifyOnAllocCount to NotifyOnTotalAllocCount - Added NotifyOnMainAllocCount, NotifyOnMainMemKB, NotifyOnWorkerAllocCount, NotifyOnWorkerMemKB (default 0 - disabled) - Set NotifyOnTotalAllocCount to 16k and NotifyOnTotalMemKB to 12MB. Makes it easier to focus the investigation on particular areas. Tests: continuous profiling on Craggy, enabling individual metrics and verifying that it generated snapshots with the expected "violations"
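A sketch of how such thresholds might gate a snapshot, using the setting names and defaults from the commit; the surrounding profiler plumbing is assumed, not the real code.

    public sealed class AllocationThresholds
    {
        public int NotifyOnTotalAllocCount = 16_000;  // 16k allocs, per the commit
        public int NotifyOnTotalMemKB      = 12_288;  // 12 MB
        public int NotifyOnMainAllocCount  = 0;       // 0 = disabled
        public int NotifyOnMainMemKB       = 0;
        public int NotifyOnWorkerAllocCount = 0;
        public int NotifyOnWorkerMemKB      = 0;

        // Any enabled metric crossing its limit triggers a snapshot ("violation").
        public bool ShouldSnapshot(int totalAllocs, int totalKB, int mainAllocs,
                                   int mainKB, int workerAllocs, int workerKB)
        {
            return Exceeds(totalAllocs, NotifyOnTotalAllocCount)
                || Exceeds(totalKB, NotifyOnTotalMemKB)
                || Exceeds(mainAllocs, NotifyOnMainAllocCount)
                || Exceeds(mainKB, NotifyOnMainMemKB)
                || Exceeds(workerAllocs, NotifyOnWorkerAllocCount)
                || Exceeds(workerKB, NotifyOnWorkerMemKB);
        }

        private static bool Exceeds(int value, int limit) => limit > 0 && value >= limit;
    }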
16 Days Ago
Update: ContinuousProfiler now has TotalAlloc and AllocCount metrics for allocation snapshotting - Release binary built with 27d643a3 - exposed via NotifyOnTotalMemKB and NotifyOnAllocCount (set to 0 to disable). Tests: tested both on Craggy in editor. Helped spot a potential small leak in PerformanceLogging
17 Days Ago
Bugfix: snapshot json export no longer emits an extra comma that made the json invalid - also sneaking in AllocWithStack to be on the execution thread only, not on the alloc thread. Super rare, but would have been hard to find in the code whenever it happened in the future. Tests: perfsnapshot and watchallocs in editor
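The usual shape of this fix in a hand-rolled JSON writer is to emit the separator before every element except the first, rather than after every element; an illustrative sketch, not the actual exporter code.

    using System.Text;

    public sealed class JsonArrayWriter
    {
        private readonly StringBuilder sb = new("[");
        private bool first = true;

        public void WriteItem(string rawJson)
        {
            if (!first) sb.Append(',');  // separator before each item after the first - no trailing comma possible
            first = false;
            sb.Append(rawJson);
        }

        public string Close() => sb.Append(']').ToString();
    }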
17 Days Ago
Bugfix: Prevent data races leading to torn continuous profiler snapshots - Binary built using Release conf based on 237b5df3 commit - Both Resume and Stop happen on profiler frame end (previously Resume was instant) - Stop gets deferred to after the snapshot is exported if requested during processing - Profiler, if initialized, always gets called on new frame (since the internal state machine demands more steps than the user code can know about) - Updated continuous profiling unit test to account for the extra OnFrameEnd required. It was possible for a stop to be requested during the export process, leading to use-after-free exceptions on a managed thread and a torn snapshot. Tests: unit tests + 20 manual watchallocs->stopwatchingallocs calls
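A sketch of the deferred-stop rule described above; the state names are illustrative, and the real state machine lives inside ServerProfiler.Core.

    public sealed class ContinuousProfilerLoop
    {
        private enum State { Idle, Recording, Exporting }
        private State state = State.Recording;
        private bool stopRequested;

        // User-facing stop: only records the request; nothing happens mid-frame.
        public void RequestStop() => stopRequested = true;

        // Called every profiler frame end, even while exporting, because the
        // internal state machine needs more steps than callers can know about.
        public void OnFrameEnd(bool exportFinished)
        {
            if (state == State.Exporting && !exportFinished)
                return; // snapshot still in flight; a pending stop stays pending

            if (stopRequested)
            {
                stopRequested = false;
                state = State.Idle;      // stop only after the export has been flushed
                return;
            }

            if (state == State.Exporting)
                state = State.Recording; // Resume also happens at frame end now
        }
    }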
36 Days Ago
Update: Allow the user to control how big a callstack to record when tracking allocations - Defaults to 16, which should be enough to track where in code it originates - Updated description - windows binary built with 1a176138 commit. Tests: used it on Craggy. Discovered an issue with the preceding commit, but this change works as expected
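The real capture happens natively in ServerProfiler.Core; an illustrative managed sketch of a depth-bounded capture, with MaxStackDepth mirroring the new user-controlled setting.

    using System.Diagnostics;

    public static class AllocStacks
    {
        public static int MaxStackDepth = 16; // default per the commit

        public static string[] Capture()
        {
            // skipFrames: 1 drops Capture() itself from the recorded stack.
            var trace = new StackTrace(skipFrames: 1, fNeedFileInfo: false);
            int depth = System.Math.Min(trace.FrameCount, MaxStackDepth);
            var frames = new string[depth];
            for (int i = 0; i < depth; i++)
                frames[i] = trace.GetFrame(i).GetMethod()?.Name ?? "<unknown>";
            return frames;
        }
    }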
36 Days Ago
Optim: ProfilerExporter.Json - export now uses streaming compression. Avoids the need to allocate a massive StringBuilder. Running watchallocs for 2 mins caused only 3-4 GC collection events in total, instead of 1 during each export. Tests: did a perfsnapshot and ran watchallocs for a couple of minutes
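A minimal sketch of the streaming approach, assuming the JSON is produced in incremental chunks; names are illustrative, not the actual exporter API.

    using System.IO;
    using System.IO.Compression;
    using System.Text;

    public static class SnapshotExport
    {
        public static void WriteCompressed(string path, System.Collections.Generic.IEnumerable<string> jsonChunks)
        {
            using var file = File.Create(path);
            using var gzip = new GZipStream(file, CompressionLevel.Fastest);
            using var writer = new StreamWriter(gzip, Encoding.UTF8);
            foreach (var chunk in jsonChunks)
                writer.Write(chunk); // each chunk is compressed as it streams; no giant in-memory buffer
        }
    }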
37 Days Ago
Bugfix: ContinuousProfiler - don't record Sync marks when paused for export - Based off 23b9590b commit. This was the last known bug - the lib was still writing Sync marks for the new frame, eventually leading to main thread buffer growth, which invalidated pointers during export. Tests: soaked for almost 1 hour with watchallocs - no more unrecognized reads on the main thread
37 Days Ago
Merge: from main
38 Days Ago
Merge: from main
48 Days Ago
Bugfix: ContinuousProfiler now atomically updates its write indices - internal fix in ServerProfiler.Core, based on e39afb43 - Removed the now-unhelpful check for the right mark type at the start of the main thread perf stream (all cases confirmed legal now). After soaking it for 15 minutes total, only the main thread export gets lost in the binary sauce. Hopefully the last bug. Tests: soaked 3 times on Craggy, only hit an unexpected mark type on the main thread
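The fix is internal to the native library, but an atomic write-index reservation has roughly this shape (illustrative C#, not the actual implementation).

    using System.Threading;

    public sealed class MarkBuffer
    {
        private readonly byte[] buffer = new byte[64 * 1024];
        private int writeIndex; // advanced atomically so a concurrent reader never sees a torn value

        public bool TryReserve(int size, out int offset)
        {
            // Interlocked.Add returns the post-increment value; the caller's
            // reserved region starts at (end - size).
            int end = Interlocked.Add(ref writeIndex, size);
            offset = end - size;
            return end <= buffer.Length; // false once the fixed buffer is exhausted
        }
    }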
48 Days Ago
Update: ContinuousProfiling will emergency-stop if it fails to export. Yet to pin down the worker thread telemetry stream seeing stale/garbage data. Tests: soaked on Craggy in editor
49 Days Ago
Bugfix: fix the main source of invalid profiling streams from ServerProfiler - Drop dead threads on every successful frame - Reset all writing indices on new frame and on resuming continuous profiling post-export - release binary built using 019295b4. There's still another issue hiding somewhere, but it's much more stable now. Tests: on Craggy, exporting every 3rd frame for 5 minutes straight. Previously it would trip after 20 seconds.
49 Days Ago
Update: prototype of continuous allocation tracking is working - started with profile.watchallocs [Name, default="Allocs"] - can be stopped with profile.stopwatchingallocs. Dumps [Name].json.gz with data about allocs and associated callstacks. Still hardcoded to trigger an export every 3 frames. The exporter frequently gets lost in the bin stream and needs to be optimized (Craggy has noticeable stutter). Tests: in CLIENT+SERVER editor on Craggy with allocation tracking.
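Typical usage, as described above; "MyCapture" is a hypothetical capture name.

    profile.watchallocs            -- start tracking; dumps to Allocs.json.gz by default
    profile.watchallocs MyCapture  -- optional name: dumps to MyCapture.json.gz
    profile.stopwatchingallocs     -- stop tracking and finalize the dump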
50 Days Ago
Bugfix: fix missing callstacks - Updated binary that contains the fix internally to 6440ec7 (still hardcoded to always capture callstacks) - was due to an ABI mismatch - Updated continuous profiler unit test to check for AllocsWithStack and a bit of the data set - Partially updated ProfileExporter, and definitely borked ProfilerBinViewer. Got updated overhead numbers - always capturing a callstack leads to a 9 micros per-allocation cost (even though inflated due to tests adding 38 calls per alloc). Tests: unit tests + perf test
50 Days Ago
Update: initial stack gathering support for allocs in Continuous mode - using release libs based on d48bcf49, with hardcoded stack gathering for now. Somehow it's 15% faster than mono_get_last_method, which doesn't make sense - need to update the exporter to figure out what's being generated. Tests: none
50 Days Ago
Test: adding a profiler-allocation overhead estimate test - Switched to release binaries of d340789f. Without profiler recording, allocs cost us ~0.3 micros; with recording, ~1 micro. Next will see if we can afford gathering full callstacks for each alloc. Tests: unit tests
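The rough shape of such an overhead estimate, as a sketch rather than the actual test: time a tight loop of small allocations with the profiler off, then on, and compare the per-alloc cost.

    using System;
    using System.Diagnostics;

    public static class AllocOverheadBench
    {
        // Returns the average cost of one small managed allocation, in microseconds.
        public static double MicrosPerAlloc(int count)
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < count; i++)
            {
                var x = new byte[32];  // small managed allocation
                GC.KeepAlive(x);       // keep the allocation from being elided
            }
            sw.Stop();
            return sw.Elapsed.TotalMilliseconds * 1000.0 / count;
        }
    }

Run once with the profiler recording disabled and once enabled; the difference is the recording overhead per allocation.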
50 Days Ago
Merge from: main Tests: none (no conflicts)
50 Days Ago
Update: Continuous profiling that only captures allocations (for now) - using debug binaries built from d340789f, it triggers a snapshot every 3rd frame for testing - added a test to validate the capture-and-resume loop - Native.StartRecording -> Native.TakeSnapshot. Pretty barebones for now; need to profile callstack gathering to see how expensive it is for continuous profiling. Tests: unit test
54 Days Ago
Update: updated binaries after merge of task branch Tests: snapshot in editor on craggy
54 Days Ago
Bugfix: ProfileExporter.JSON - correctly identify frame start when there's 0 or 1 callstack depth at the start of recording. This is a standalone-specific issue. Tests: built standalone, did a perfsnapshot there
55 Days Ago
Merge: from main Tests: none, no conflicts
55 Days Ago
Bugfix: ServerProfiler - ensure worker threads that get created initialize with the right fixed buffer size - built from 76319c30 commit. Previously they would initialize with the default 16KB. Tests: perfsnapshot_stream in editor on Craggy with varying main thread buffer sizes
55 Days Ago
Update: profile.quiet persistent server command to control whether perfsnapshot commands should post chat messages Tests: ran perfsnapshot_stream with quiet set to true - no chat messages
55 Days Ago
Update: perfsnapshot_stream [Name, MainThreadBuffer, WorkerThreadBuffer, Debug] server command - Fixed string buffer over-allocating (need to replace it with a memory stream) - Fixed frame index wrapping caused by it now being able to grow larger than byte.MaxValue - Fixed invalid final mark reconstruction that would lead to 180d+ slices. Allows streaming of up to 128MB of performance data before generating a snapshot. Seems to be stable up to 64MB, but beyond that it's a bit of a dice-roll; haven't caught where it's failing yet. Tests: perf snapshot in editor on Craggy.
55 Days Ago
Update: ServerProfiler - initial FixedStorage support - Added test for FixedStorage snapshot recording - updated ProfileExporter to handle FixedStorage binary stream quirks - updated binaries based on 2ce19cfe. The new mode will allow us to stream profiling info until the buffer fills up; RCon commands will come next. Tests: unit tests
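A sketch of the FixedStorage idea: writes append until the fixed buffer is full, at which point recording stops and a snapshot can be generated. Illustrative only; the real storage lives in the native library.

    public sealed class FixedStorage
    {
        private readonly byte[] data;
        private int used;

        public FixedStorage(int capacityBytes) => data = new byte[capacityBytes];

        public bool TryWrite(byte[] src)
        {
            if (used + src.Length > data.Length)
                return false; // full: caller should stop recording and export
            System.Array.Copy(src, 0, data, used, src.Length);
            used += src.Length;
            return true;
        }
    }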
55 Days Ago
Buildfix: exclude ServerProfiler tests from non-server builds Tests: built CLIENT config in editor
55 Days Ago
Bugfix: ProfileExporter.JSON - don't spam allocs-0 counter for worker threads Tests: snapshot in editor on craggy
55 Days Ago
Merge: from main
55 Days Ago
Update: ServerProfiler.Core binaries - Release bins built from 66537fcc. Didn't remember if I snuck in debug bins at one point, so updating them to be safe. Tests: unit tests
55 Days Ago
Bugfix: ProfileExporter.JSON - reset allocation graphs - Reset when a new frame starts - Reset on worker threads if it allocates Tests: snapshot in editor on craggy
55 Days Ago
Clean: ProfileExporter.JSON - don't cache per-frame callstack depths Was never used, so don't waste allocs. Tests: none, trivial change
55 Days Ago
Clean: ProfileExporter.JSON - remove debug logs Tests: none, trivial change
55 Days Ago
Bugfix: ProfileExporter.JSON - gracefully handle managed allocations coming from the native runtime - Emit "<mono-native-runtime>" if we don't have a managed callstack. Finally caught it - this can happen when mono tries to invoke a managed callback which requires a managed allocation (the callback accepts string[], for example) as the first method in managed code. Was able to repro in editor due to its script compilation callbacks. Tests: triggered perfsnapshot 40 times without issues
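The fallback itself is straightforward; an illustrative sketch with hypothetical names (only the "<mono-native-runtime>" placeholder comes from the commit).

    public static class CallstackLabel
    {
        public static string Resolve(string[] managedFrames)
        {
            // No managed frames: mono allocated on behalf of a native->managed
            // callback before any managed method was entered.
            if (managedFrames == null || managedFrames.Length == 0)
                return "<mono-native-runtime>";
            return string.Join(";", managedFrames);
        }
    }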
55 Days Ago
Bugfix: ProfileExporter - avoid reading allocs at the start of the frame as method-entries - Added a bunch of temporary logging to help track down the last issue. Rare, but legal due to our filtering of code. Tests: snapshotted a bunch of times in editor (there's still one issue with main thread export)
56 Days Ago
Update: ProfileBinViewer displays binary offsets for marks Tests: opened a couple bin snapshots
56 Days Ago
Bugfix: avoid out-of-bounds access when scanning for alloc-only threads. Fixes perf snapshot export failing to generate while processing worker threads. There's still a rare case of the main thread crashing - investigating. Tests: exported a snapshot in editor
2 Months Ago
Update: ProfileBinViewer can now grok the new profile .bin files Tests: opened bin profile from editor
2 Months Ago
Update: rewrite ServerProfiler exporters to work with the new data layout - expanded unit test - Core binary should be the same, but copied from the freshest release build - purposefully borked ProfileBinViewer - will fix next. Tests: unit tests & took a snapshot on Craggy
2 Months Ago
Update: Rewrote how ServerProfiler.Core stores profiling data - got a 2% improvement on 1mil empty call benchmark It will be easier to add new features to the profiler (user scopes, user counters, continuous capture). Need to rewrite the exporters now. Tests: unit tests
2 Months Ago
Tests: perf test to measure ServerProfiler overhead Going to try a different internal storage approach to make future changes easier (and potentially faster) Tests: ran the new tests
2 Months Ago
Update: further filtering of methods - Dropping BaseEntity.Is* methods that are just HasFlag wrappers - Dropping new Rust.Data BufferStream and RangeHandle. Should reduce overhead on serialization. Tests: snapshot on Craggy in editor
2 Months Ago
Merge: from main Tests: none, no conflicts
2 Months Ago
Merge: from main Tests: none, no conflicts
2 Months Ago
Update: Further removal of about 10% of methods from the perf snapshot - Built using e70c083b. Tests: took a snapshot on Craggy