rust_reboot/main/profiling_improvements
133 Commits over 212 Days - 0.03cph!
Bugfix: ContinuousProfiler now atomically updates its write indices
- internal fix in ServerProfiler.Core, based on e39afb43
- Removed the now-unhelpful check for the right mark type at the start of the main thread perf stream (all cases confirmed legal now)
After soaking it for 15 minutes total, only the main thread export gets lost in the binary sauce. Hopefully the last bug.
Tests: soaked 3 times on craggy, only hit unexpected mark type on main thread
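As an illustration of the fix above - a minimal managed sketch, not the actual ServerProfiler.Core code (which is native); all names here are hypothetical. The idea is that the writer fills the record first and only publishes the new write index afterwards, so a concurrent exporter can never observe an index that points past fully written data.

    using System;
    using System.Threading;

    // Hypothetical managed analogue of a per-thread profiling buffer.
    class ProfileBuffer
    {
        readonly byte[] _data = new byte[16 * 1024];
        int _writeIndex; // published length; the exporter only trusts bytes below this

        public void WriteMark(byte markType, ReadOnlySpan<byte> payload)
        {
            int start = _writeIndex;                  // writer-private until published
            _data[start] = markType;
            payload.CopyTo(_data.AsSpan(start + 1));
            // Publish only after the record is fully written, so the exporter
            // never sees a half-advanced index.
            Volatile.Write(ref _writeIndex, start + 1 + payload.Length);
        }

        public int PublishedLength() => Volatile.Read(ref _writeIndex);
    }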
Update: ContinuousProfiling will emergency stop if fails to export
Yet to pin down the worker thread telemetry stream seeing stale/garbage data.
Tests: soaked on Craggy in editor
Bugfix: fix main source of invalid profiling stream from ServerProfiler
- Drop dead threads on every successful frame
- Reset all writing indices on new frame and on resuming continuous profiling post-export
- release binary built using 019295b4
There's still another issue hiding somewhere, but it's much more stable now.
Tests: on Craggy, exporting every 3rd frame for 5 minutes straight. Previously it would trip after 20 seconds.
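A minimal sketch of the frame-boundary housekeeping described in this entry, assuming a managed registry of per-thread buffers (the real implementation is native; ThreadBuffer and ProfilerRegistry are hypothetical names): dead threads are dropped and every surviving buffer's write index is rewound when a frame is successfully exported or continuous profiling resumes.

    using System.Collections.Generic;
    using System.Threading;

    class ThreadBuffer
    {
        public int WriteIndex;
        public void Reset() => WriteIndex = 0;   // rewind for the next frame
    }

    class ProfilerRegistry
    {
        readonly Dictionary<Thread, ThreadBuffer> _buffers = new Dictionary<Thread, ThreadBuffer>();

        // Called after a successful frame export and when continuous profiling resumes.
        public void OnFrameExported()
        {
            var dead = new List<Thread>();
            foreach (var kv in _buffers)
                if (!kv.Key.IsAlive)
                    dead.Add(kv.Key);            // drop threads that exited mid-recording

            foreach (var t in dead)
                _buffers.Remove(t);

            foreach (var buffer in _buffers.Values)
                buffer.Reset();                  // reset all writing indices
        }
    }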
Update: prototype of continuous allocation tracking is working
- started with profile.watchallocs [Name, default="Allocs"]
- can be stopped with profile.stopwatchingallocs
Dumps [Name].json.gz with data about allocs and associated callstacks. Still hardcoded to trigger export every 3 frames. Exporter frequently gets lost in the bin stream, and needs to be optimized (craggy has noticeable stutter).
Tests: in CLIENT+SERVER editor on Craggy with allocation tracking.
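A rough sketch of the command pair's shape, assuming a thin managed wrapper around the native core; the plumbing and all names below are hypothetical, not the actual console-command API.

    using System;

    static class AllocationTracking
    {
        static string _name;
        static bool _running;

        // profile.watchallocs [Name, default="Allocs"]
        public static void WatchAllocs(string name = "Allocs")
        {
            if (_running) return;
            _name = name;
            _running = true;
            // (assumed) kick the native core into continuous allocation capture here
        }

        // profile.stopwatchingallocs
        public static void StopWatchingAllocs()
        {
            if (!_running) return;
            _running = false;
            // (assumed) flush the remaining data and write "<Name>.json.gz"
            Console.WriteLine($"exported {_name}.json.gz");
        }
    }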
Bugfix: fix missing callstacks
- Updated the binary to 6440ec7, which contains the fix (still hardcoded to always capture callstacks) - the failure was due to an ABI mismatch
- Updated continuous profiler unit test to check for AllocsWithStack and a bit of the data set
- Partially updated ProfileExporter, and definitely borked ProfilerBinViewer
Got updated overhead numbers - always capturing a callstack costs ~9 microseconds per allocation (inflated by the tests adding 38 calls per alloc).
Tests: unit tests + perf test
Update: initial stack gathering support for allocs in Continuous mode
- using release libs based on d48bcf49, with hardcoded stack gathering for now
Somehow it's 15% faster than mono_get_last_method, which doesn't make sense - need to update the exporter to figure out what's being generated.
Tests: none
Profiling shows
Test: added a profiler-allocation overhead estimate test
- Switched to release binaries of d340789f
Without profiler recording, allocs cost us ~0.3 microseconds; with recording, ~1 microsecond. Next I'll see if we can afford gathering full callstacks for each alloc.
Tests: unit tests
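The numbers above come from the real test; as a sketch of how such an estimate can be measured, a micro-benchmark along these lines (hypothetical, not the actual test code) times a large batch of tiny allocations once with the profiler idle and once while recording, and compares the per-allocation averages.

    using System;
    using System.Diagnostics;

    static class AllocOverheadEstimate
    {
        // Average cost of a tiny managed allocation, in microseconds.
        // Run once with the profiler disabled and once while recording.
        public static double MeasureMicrosPerAlloc(int count = 1_000_000)
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < count; i++)
            {
                var o = new object();        // tiny managed allocation
                GC.KeepAlive(o);
            }
            sw.Stop();
            return sw.Elapsed.TotalMilliseconds * 1000.0 / count;
        }
    }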
Merge from: main
Tests: none (no conflicts)
Update: Continuous profiling that only captures allocations (for now)
- using debug binaries built from d340789f, it triggers a snapshot every 3rd frame for testing
- added a test to validate the loop of capture-and-resume
- Native.StartRecording -> Native.TakeSnapshot
Pretty barebones for now, need to profile callstack gathering to see how expensive it is for continuous profiling.
Tests: unit test
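A minimal sketch of the capture-and-resume loop being exercised here, assuming Native.TakeSnapshot is the renamed entry point mentioned above (everything else is a hypothetical stand-in):

    class ContinuousProfilingLoop
    {
        const int ExportEveryNthFrame = 3;   // hardcoded for testing, per the entry above
        int _frame;

        public void OnFrameEnd()
        {
            _frame++;
            if (_frame % ExportEveryNthFrame != 0)
                return;

            // Snapshot everything recorded since the last export; recording
            // then carries on without an explicit restart.
            TakeSnapshotStub();
        }

        static bool TakeSnapshotStub() => true;   // placeholder for Native.TakeSnapshot
    }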
Update: updated binaries after merge of task branch
Tests: snapshot in editor on craggy
Bugfix: ProfileExporter.JSON - correctly identify the frame start when the callstack depth is 0 or 1 at the start of recording
This is a standalone-specific issue
Tests: built standalone, did a perfsnapshot there
Merge: from main
Tests: none, no conflicts
Bugfix: ServerProfiler - ensure worker threads that get created initialize with the right fixed buffer size
- built from 76319c30 commit
Previously they would initialize with the default 16KB.
Tests: perfsnapshot_stream in editor on craggy with varying main thread buffer sizes
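A sketch of the intent of the fix (hypothetical names, not the real registration path): a worker thread created after profiling starts should allocate its buffer with the configured size rather than the hardcoded 16KB default.

    static class ProfilerThreadSetup
    {
        const int DefaultBufferBytes = 16 * 1024;         // old behaviour: always 16KB
        public static int ConfiguredWorkerBufferBytes = DefaultBufferBytes;

        public static byte[] CreateWorkerThreadBuffer()
        {
            // Fix: honour the size requested via perfsnapshot_stream instead of
            // silently falling back to the default.
            return new byte[ConfiguredWorkerBufferBytes];
        }
    }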
Update: profile.quiet persistent server command to control whether perfsnapshot commands should post chat messages
Tests: ran perfsnapshot_stream with quiet set to true - no chat messages
Update: perfsnapshot_stream [Name, MainThreadBuffer, WorkerThreadBuffer, Debug] server command
- Fixed string buffer over-allocating (need to replace it with a memory stream)
- Fixed frame index wrapping due to it now being able to grow larger than byte.MaxValue (see the sketch after this entry)
- Fixed invalid final mark reconstruction that would lead to 180d+ slices
Allows streaming of up to 128MB of performance data before generating a snapshot. Seems to be stable up to 64MB, but afterwards it's a bit of a dice-roll. Haven't caught where it's failing yet.
Tests: perf snapshot in editor on craggy.
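A sketch of the frame-index fix from the list above (the record layout here is made up; only the index width is the point): once a stream holds more than 255 frames, the frame index has to be read and stored wider than a byte or the reconstruction wraps around.

    struct FrameMark
    {
        public uint FrameIndex;   // previously effectively a byte, wrapping after 255 frames
        public long Timestamp;
    }

    static class FrameMarkReader
    {
        public static FrameMark Read(System.IO.BinaryReader reader)
        {
            return new FrameMark
            {
                FrameIndex = reader.ReadUInt32(),   // wide enough for long streams
                Timestamp = reader.ReadInt64(),
            };
        }
    }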
Update: ServerProfiler - initial FixedStorage support
- Added test for FixedStorage snapshot recording
- update ProfileExporter to handle FixedStorage binary stream quirks
- updated binaries based on 2ce19cfe
New mode will allow us to stream profiling info until the buffer fills up. RCon commands will come next
Tests: unit tests
Buildfix: exclude ServerProfiler tests from non-server builds
Tests: built CLIENT config in editor
Bugfix: ProfileExporter.JSON - don't spam allocs-0 counter for worker threads
Tests: snapshot in editor on craggy
Update: ServerProfiler.Core binaries
- Release bins built from 66537fcc
Couldn't remember whether I snuck in debug bins at some point, so updating them to be safe.
Tests: unit tests
Bugfix: ProfileExporter.JSON - reset allocation graphs
- Reset when a new frame starts
- Reset on worker threads if it allocates
Tests: snapshot in editor on craggy
Clean: ProfileExporter.JSON - don't cache per-frame callstack depths
Was never used, so don't waste allocs.
Tests: none, trivial change
Clean: ProfileExporter.JSON - remove debug logs
Tests: none, trivial change
Bugfix: ProfileExporter.JSON - gracefully handle managed allocations coming from native runtime
- Emit "<mono-native-runtime>" if we don't have managed callstack
Finally caught it - this can happen when mono tries to invoke a managed callback that requires a managed allocation (the callback accepts string[], for example) as the first method in managed code. Was able to repro in the editor thanks to its script compilation callbacks.
Tests: triggered perfsnapshot 40 times without issues
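A sketch of the exporter-side handling described above (hypothetical helper, not the actual ProfileExporter code): if an allocation record arrives with no managed frames, attribute it to a placeholder instead of failing the export.

    using System.Collections.Generic;

    static class AllocCallstackExport
    {
        const string NativePlaceholder = "<mono-native-runtime>";

        // Decide which stack to attribute the allocation to in the JSON export.
        public static IReadOnlyList<string> ResolveStack(IReadOnlyList<string> managedFrames)
        {
            if (managedFrames == null || managedFrames.Count == 0)
                return new[] { NativePlaceholder };   // alloc came straight from the runtime
            return managedFrames;
        }
    }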
Bugfix: ProfileExporter - avoid reading allocs at the start of the frame as method-entries
- Added a bunch of temporary logging to help track down last issue
Rare, but legal due to our filtering of code.
Tests: snapshotted a bunch of times in editor (there's still one issue with main thread export)
Update: ProfileBinViewer displays binary offsets for marks
Tests: opened a couple bin snapshots
Bugfix: avoid out-of-bounds access when scanning for alloc-only threads
Fixes perf snapshot export failing to generate while processing worker threads. There's still a rare case of main thread crashing - investigating
Tests: Exported a snapshot in editor
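A sketch of the kind of guard the fix adds, with made-up record sizes and mark values (the real binary layout differs): when scanning a worker thread that only produced allocation marks, the loop has to check that a whole record fits before reading it, not just the next header byte.

    static class AllocOnlyThreadScan
    {
        const int MarkHeaderBytes = 1;    // hypothetical sizes, not the real format
        const int AllocRecordBytes = 9;
        const byte AllocMark = 0x01;      // assumed mark value

        public static int CountAllocMarks(byte[] buffer, int publishedLength)
        {
            int count = 0;
            int offset = 0;
            // Guard the whole record, so the scan can't run off the end of a
            // buffer that stops part-way through a record.
            while (offset + MarkHeaderBytes + AllocRecordBytes <= publishedLength)
            {
                if (buffer[offset] == AllocMark)
                    count++;
                offset += MarkHeaderBytes + AllocRecordBytes;
            }
            return count;
        }
    }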
Update: ProfileBinViewer can now grok the new profile .bin files
Tests: opened bin profile from editor
Update: rewrote ServerProfiler exporters to work with the new data layout
- expanded unit test
- Core binary should be the same, but copied from the freshest release build
- purposefully borked ProfileBinViewer - will fix next
Tests: unit tests & took a snapshot on craggy
Update: Rewrote how ServerProfiler.Core stores profiling data
- got a 2% improvement on 1mil empty call benchmark
It will be easier to add new features to the profiler (user scopes, user counters, continuous capture). Need to rewrite the exporters now.
Tests: unit tests
Tests: perf test to measure ServerProfiler overhead
Going to try a different internal storage approach to make future changes easier (and potentially faster)
Tests: ran the new tests
Update: further filtering of methods
- Dropping BaseEntity.Is* methods that are just HasFlag wrappers
- Dropping new Rust.Data BufferStream and RangeHandle
Should reduce overhead on serialization
Tests: snapshot on Craggy in editor
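For illustration, a name-based filter predicate of the kind described above (the match rules are assumptions; the real filtering happens inside the profiler core):

    static class MethodFilter
    {
        // Returns true if the method should NOT be instrumented.
        public static bool ShouldSkip(string declaringType, string methodName)
        {
            // BaseEntity.Is* helpers are thin HasFlag wrappers - not worth a mark each.
            if (declaringType == "BaseEntity" && methodName.StartsWith("Is"))
                return true;

            // New Rust.Data types sitting on the serialization hot path.
            if (declaringType == "BufferStream" || declaringType == "RangeHandle")
                return true;

            return false;
        }
    }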
Merge: from main
Tests: none, no conflicts
Merge: from main
Tests: none, no conflicts
Update: Further removal of about 10% of methods from the perf snapshot
- Built using e70c083b
Tests: took a snapshot on craggy
Update: further reduce capture scope by ~19%
- Built from c4679e41
Tests: snapshot in editor on Craggy
Update: updated profiler binaries
- Built from 7b1b923e
Contains additional filters for BaseNetwork::ShouldNetworkTo, BaseEntity::ShouldNetworkTo and the entire Facepunch.Network.Subscriber class
Tests: took a snapshot in editor
Update: ServerProfiler now usable on older Linux distros
- Binaries built using revision 45d79338
Should work on Debian 12.
Tests: Ran on WSL Ubuntu 20.04 and Ubuntu 24.04 and took snapshots - all worked.
Merge: from main
Tests: none, no conflicts
Bugfix: ServerProfiler - properly filter out constructors
- Due to a typo (a missing '.') it would never match constructors and never filter them out (see the sketch after this entry)
Using release libs built from ec8c5522
Tests: snapshot in editor on Craggy - confirmed no constructors were recorded.
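To illustrate the class of bug (the helper is illustrative, not the real native filter): constructors show up in metadata as ".ctor" / ".cctor", so a filter string missing the leading dot never matches anything.

    static class ConstructorFilter
    {
        // Instance and static constructors are named ".ctor" and ".cctor" in metadata;
        // the typo'd filter compared against a name without the leading '.', which
        // never matched, so constructors were always recorded.
        public static bool IsConstructor(string methodName)
        {
            return methodName == ".ctor" || methodName == ".cctor";
        }
    }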
Bugfix: ServerProfiler - sync up non-main-thread timelines to main thread
Previously it was possible to have a small gap at the very start of the snapshot on non-main thread views.
Tests: exported a snapshot in the editor.
Bugfix: ServerProfiler - properly handle sheared callstack at the start of worker threads during export
This revealed that I have a bug with non-main thread timing (things are slightly offset). Will fix next.
Tests: injected data that was tripping up the export originally. Validated that it does trip up before fix, and now exports correctly after fix. Exported 10 snapshots in the editor - no failures, and all look correct.
Bugfix: ProfileBinViewer - fix callstack depth calculation
- Also replaced the RadioBoxGroup for threads with a Dropdown - turns out 50 thread radio buttons don't fit vertically on my monitor. Who knew.
Hoping it's the same issue in the json exporter.
Tests: Opened a debug snapshot from official server.
Merge: from main
Tests: none, no conflicts
Update: ServerProfiler - Filter out ~25% of profiling scope by further removing tiny/cheap methods
- Using binaries built from f27f0281
There are some controversial changes:
* Filter out Newtonsoft.Json - we can't modify its internals anyway
* Filter out setters (set_*) - the overwhelming majority are cheap, but this hides the expensive ones. We'll still see the nested calls if there are any.
* Filter out IPooled callbacks - half of them are not implemented (usually LeavePool), and most of them are cheap
There are a bunch more, but not worth going into the details.
Tests: on Craggy in editor
Merge: from main
Tests: none
Update: more profiling exclusions
- Don't track NetRead and NetWrite
- Don't track Facepunch.System's containers (including pooling), StringPool and ArrayPool
- Don't track EntityRef
- Don't track all Enumerators (previously only Facepunch's was excluded)
- Don't track all GetHashCode
- Don't track TimeWarning (debug-only calls, but can be frequent)
Tests: Took a snapshot of the default procgen map in Editor (Client+Server), confirmed about a 13% reduction in uncompressed JSON size.
Merge: from main
Tests: none