User: Daniel P
Repo: rust_reboot

2,251 Commits over 549 Days - 0.17cph!

Today
Update: ServerProfiler.Core - more method annotation exclusions
- release bins built from c969bbab
Mostly focused on reducing the overhead of Scientists2's FSM evaluation and getting rid of injected Burst codegen gunk.
Tests: craggy in C+S editor, entered deep sea, went to ghostship to wake up scientists, took a snapshot
Yesterday
Bugfix: handle similar-to-inf budget timespans Tests: spawned on Craggy - no exceptions
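A plausible sketch of what handling "similar-to-inf" budget timespans could look like; the clamp threshold and helper name are assumptions, not the actual fix.

    using System;

    static class BudgetGuard
    {
        // Budgets above this are treated as "effectively unlimited" and clamped,
        // so later tick arithmetic can't overflow or produce nonsense percentages.
        static readonly TimeSpan MaxReasonableBudget = TimeSpan.FromHours(1);

        public static TimeSpan Sanitize(TimeSpan budget)
        {
            if (budget < TimeSpan.Zero) return TimeSpan.Zero;
            return budget > MaxReasonableBudget ? MaxReasonableBudget : budget;
        }
    }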
Yesterday
Merge: from main
Yesterday
Buildfix: remove non-existent call Tests: none, trivial change
Yesterday
Clean: remove extra level of indentation Tests: compiles
Yesterday
Update: InvokeProfiler now pushes executed_time and invokes_executed Tests: compiles
Yesterday
Update(breaking-change): WorkQueueProfiler now also reports BudgetTime - breaking as this doesn't match CSV template Tests: none, compiles
Yesterday
Update: WorkQueueProfiler now sends an extra aggregate record for queues
We could aggregate it on the backend, but that would mean sending through a potentially hefty amount of empty records.
Tests: compiles
Yesterday
Clean: add a couple TODOs for when I'll be going through old analytics code cleanup Tests: none, trivial changes
Yesterday
Update: extracted PersistentObjectWorkQueue.TelemStats into WorkQueueTelemStats
- made ObjectWorkQueue populate it
- every queue now always logs its budget time, even if it has no work to run (so that we can estimate budgeted/total time %)
Opens it up for use in custom queues as well, but I'll cross that bridge later.
Tests: compiles
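For illustration, a minimal sketch of the telemetry shape described above; WorkQueueTelemStats and ObjectWorkQueue exist in the codebase, but the fields and queue internals here are assumptions.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    public struct WorkQueueTelemStats
    {
        public long BudgetTimeTicks;   // time the queue was allowed to run this frame
        public long ExecutedTimeTicks; // time it actually spent running work
        public int  ItemsProcessed;
    }

    public class ObjectWorkQueueSketch<T>
    {
        public WorkQueueTelemStats TelemStats;

        readonly Queue<T> queue = new Queue<T>();

        public void Add(T item) => queue.Enqueue(item);

        public void RunQueue(TimeSpan budget, Action<T> process)
        {
            // Budget is recorded unconditionally, even when the queue is empty,
            // so budgeted/total time percentages can be estimated on the backend.
            TelemStats.BudgetTimeTicks += budget.Ticks;

            var sw = Stopwatch.StartNew();
            while (queue.Count > 0 && sw.Elapsed < budget)
            {
                process(queue.Dequeue());
                TelemStats.ItemsProcessed++;
            }
            TelemStats.ExecutedTimeTicks += sw.Elapsed.Ticks;
        }
    }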
Yesterday
Bugfix: ensure Runtime profiler reports WorkQueue and Invokes from the same frame
- moved its logic to be invoked via PostUpdateHook.OnLateUpdate, rather than slamming it directly into PostUpdateHook internals
Previously invokes would be from the last frame, while work queues would be from the current one. It's still a little wrong, as we're reporting it as data from the last frame - but at least it's consistently wrong.
Tests: none, will deal with any fallout later
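As a sketch of the hook flow (the real PostUpdateHook is engine-side code only guessed at here): the profiler subscribes to a single late-update event and flushes invoke and work-queue stats together, so both always describe the same frame.

    using UnityEngine;

    public class PostUpdateHookSketch : MonoBehaviour
    {
        public static event System.Action OnLateUpdate;

        void LateUpdate()
        {
            // The runtime profiler subscribes here and reports both invoke and
            // work-queue stats in one go, instead of reading them at different
            // points in the frame.
            OnLateUpdate?.Invoke();
        }
    }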
Yesterday
Update: rejig a couple parts of TelemStats to simplify code Tests: compiles
2 Days Ago
Merge: from main
2 Days Ago
Merge: from leavedeepsea_teleport_fix - Bugfix: using leavedeepsea should no longer cause random bugs/random wake up positions Tests: went on to a ghostship, then used leavedeepsea
2 Days Ago
Bugfix: unparent the player when running leavedeepsea
This fixes the player waking up in a random location and potentially being killed for going out of bounds.
Tests: on Craggy, went up to the ghostship top and used leavedeepsea a couple of times
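A minimal sketch of the unparenting fix, using stand-in names (the real teleport path is server code not shown here): clear the parent before moving the player so the destination is applied in world space rather than relative to the ghost ship.

    using UnityEngine;

    static class DeepSeaTeleportSketch
    {
        public static void LeaveDeepSea(Transform player, Vector3 surfacePosition)
        {
            // While the player is still parented to the ghost ship, a world-space
            // destination gets reinterpreted relative to the parent, which is
            // where the "random wake-up position" came from.
            if (player.parent != null)
                player.SetParent(null, worldPositionStays: true);

            player.position = surfacePosition;
        }
    }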
5 Days Ago
Merge: from serverprofiler_codeapi
- New: immediate mode profiling API for capturing specific regions of code, with servervars to control it in the "profile" group
- Unit tests covering all new logic
Tests: compile test + ran unit tests
5 Days Ago
Merge: from main
5 Days Ago
Update: update ServerProfiler.Core bins to Release - built on 2a311df Tests: ran all server profiler unit tests
5 Days Ago
Update: add profile.ImmediateModeEnabled feature flag - codegen + unit test
Turns off all managed-side logic for the new API.
Tests: ran unit tests
5 Days Ago
Update: introduce export interval (profile.ExportIntervalS, defaults to 30m) + ability to reset the interval (profile.ResetExportInterval) - codegen and extra unit tests Tests: unit tests
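A rough sketch of the export-interval behaviour described by profile.ExportIntervalS and profile.ResetExportInterval; the convar names come from the commit, the timer shape is an assumption.

    using System;
    using System.Diagnostics;

    public class ExportIntervalSketch
    {
        // Mirrors profile.ExportIntervalS (defaults to 30 minutes, expressed in seconds).
        public static double ExportIntervalS = 30 * 60;

        readonly Stopwatch sinceLastExport = Stopwatch.StartNew();

        public bool ShouldExport() => sinceLastExport.Elapsed.TotalSeconds >= ExportIntervalS;

        // Equivalent of profile.ResetExportInterval: restart the countdown without exporting.
        public void ResetExportInterval() => sinceLastExport.Restart();
    }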
5 Days Ago
Bugfix: ProfileExporter.JSON can now export 0-frame main thread profiles Test: ran previously failing unit tests, checked their exported files - all's gud
5 Days Ago
Update: immediate mode API improvements - debug windows binary built from 2a311dfb
- ScopeRecorder automatically exports to JSON and cleans up recorder state
- added RecordScopeIfSlow(..., TimeSpan, ...) API, same as above except it exports only if there was a delay
- updated unit tests since some scenarios are now impossible
Need to fix export next, wrap it with a couple of server vars and update to release bins - then it's done.
Tests: ran most of the unit tests (stress tests skipped as they would overflow with export tasks)
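To make the API shape concrete, here is a self-contained sketch of an immediate-mode scope recorder; RecordScope and RecordScopeIfSlow are the names from the commits above, but the signatures and the console "export" stand-in are assumptions.

    using System;
    using System.Diagnostics;

    static class ServerProfilerSketch
    {
        // Always records; exports when the scope is disposed.
        public static IDisposable RecordScope(string name) =>
            new Scope(name, exportThreshold: TimeSpan.Zero);

        // Records, but only exports if the scope took longer than the threshold.
        public static IDisposable RecordScopeIfSlow(string name, TimeSpan threshold) =>
            new Scope(name, exportThreshold: threshold);

        sealed class Scope : IDisposable
        {
            readonly string name;
            readonly TimeSpan threshold;
            readonly Stopwatch watch = Stopwatch.StartNew();

            public Scope(string name, TimeSpan exportThreshold)
            {
                this.name = name;
                this.threshold = exportThreshold;
            }

            public void Dispose()
            {
                watch.Stop();
                // Stand-in for "export to JSON and clean up recorder state".
                if (watch.Elapsed >= threshold)
                    Console.WriteLine($"[profile] {name}: {watch.Elapsed.TotalMilliseconds:0.00} ms");
            }
        }
    }

Usage would then look like: using (ServerProfilerSketch.RecordScopeIfSlow("SaveLoad", TimeSpan.FromMilliseconds(50))) { /* region of interest */ }.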
5 Days Ago
Update: ServerProfiler.Core - various improvements and fixes - debug windows binary from f50b4fc9
- change internal constants to be more sensible (assumed worker thread count 4 -> 32, max recorders 64 -> 16, max alloc 1GB -> 512MB)
- bugfix for not cleaning up dead thread state when running immediate mode recording
- MemoryPool no longer allocates from the heap as a fallback when it's over capacity
Think the core lib is done enough for now, gonna move to finishing the Rust side.
Tests: ran unit tests
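A sketch of the "no heap fallback" pool behaviour mentioned above (the real MemoryPool lives in native ServerProfiler.Core, so this C# shape is purely illustrative): when the pool is exhausted, renting fails instead of allocating.

    using System.Collections.Concurrent;

    public sealed class FixedMemoryPoolSketch
    {
        readonly ConcurrentBag<byte[]> free = new ConcurrentBag<byte[]>();

        public FixedMemoryPoolSketch(int blockCount, int blockSize)
        {
            for (int i = 0; i < blockCount; i++)
                free.Add(new byte[blockSize]);
        }

        // Returns false instead of falling back to a heap allocation when over capacity.
        public bool TryRent(out byte[] block) => free.TryTake(out block);

        public void Return(byte[] block) => free.Add(block);
    }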
5 Days Ago
Update: add TextContextExhaustionTest - reduce TestDeferCleanup internal loop count to 8 from 16 (as it was still possible to starve the pool)
Tests: ran unit tests, pass (got local unsubmitted fixes)
5 Days Ago
Update: add TestDeferCleanup test
Works, but discovered that I forgot to clean up threads in ServerProfiler.Core, so I'm starving out the pool.
Tests: ran new test
5 Days Ago
Update: minor changes - MakeScopeRecording -> RecordScope - fail starting to record if profiler isn't initialized Tests: unit tests
5 Days Ago
Update: ServerProfiler.Core - MemoryReadings are now implemented via MemoryPool - debug windows bins from 47635f61 - ABI break for MemoryData Tests: unit tests + 10x of StressTestImmediateCaptureMT
5 Days Ago
Update: ServerProfiler.Core - use memory pooling - debug windows binary built from af80ca2c
- this fixes/reduces the occurrence of the MT race
- also reduces capture overhead (at least in debug, 2.2s -> 0.75ms)
- added MPMCQueue license file
Need to revive support for MemoryReadings, will do that next.
Tests: unit tests + StressTestImmediateCaptureMT 10 times
5 Days Ago
Update: ServerProfiler.Core - replaced my own MPSC queue with a third-party MPMC queue - debug windows binary from 268ce0c3
Needed to add memory pooling; my own version couldn't handle non-integral types.
Tests: unit tests
8 Days Ago
Update: add StressTestImmediateCaptureMT test
It smashes the profiler from all 20 threads doing allocations and calling methods, while the main thread tries to record just 1 method in a loop. This triggers heap corruption - think allocation pooling would solve this.
Tests: ran the extra unit test - it failed drastically
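The shape of that stress test, sketched with standard .NET threading and reusing the ServerProfilerSketch.RecordScope stand-in from the earlier sketch (the real test and profiler internals are not shown here): worker threads allocate and call methods while the main thread records a single scope in a loop.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    static class StressTestSketch
    {
        static void Churn(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                var junk = new byte[256];      // constant allocation pressure
                junk[junk.Length - 1] = 1;     // plus ordinary method/array work
            }
        }

        public static void Run()
        {
            using var cts = new CancellationTokenSource();
            var workers = new Task[20];
            for (int i = 0; i < workers.Length; i++)
                workers[i] = Task.Run(() => Churn(cts.Token));

            // Main thread hammers the immediate-mode recorder on one method.
            for (int i = 0; i < 10_000; i++)
            {
                using (ServerProfilerSketch.RecordScope("MainThreadScope"))
                    Thread.SpinWait(100);
            }

            cts.Cancel();
            Task.WaitAll(workers);
        }
    }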
8 Days Ago
Bugfix: kind-of-fix the thread race with the Immediate Mode API (a late profiler callback might be in progress as we're releasing resources, leading to an invalid write) - built debug binaries from a3312fa9
- Added a mini stress test for the main thread only, needs multithreading to fully validate
I need to optimize internals a bit to avoid allocation overhead.
Tests: ran unit tests on repeat 10 times - no issues
8 Days Ago
Update: first working version of the immediate capture API - binaries built from commit b3a39bd2
Has a bug with a race, will fix next.
Tests: passes unit tests
9 Days Ago
Update: blockout Immediate-Record API - Added unit tests to validate usage Tests: ran unit tests, has expected failures
9 Days Ago
Merge: from playerinventory_oncycle_optim - Bugfix for leaking onCycle items when calling Item::Remove Tests: unit tests + cooked meat, consumed, cooked again
9 Days Ago
Bugfix: fix leaking onCycle items when calling Item::Remove
- Consolidated onCycle callback cleanup to DoRemove
- ItemManager::DoRemoves(bool) can now force remove all items
- Added a unit test to validate the logic
Tests: ran unit test, cooked meat on a campfire, ate it, cooked again - no exception
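A simplified sketch of that cleanup consolidation, with stand-in types (the real Item/ItemManager are far richer): removal bookkeeping for onCycle items happens in exactly one place, and DoRemoves can force-flush everything.

    using System.Collections.Generic;

    public class ItemManagerSketch
    {
        public class Item { public bool HasOnCycle; }

        readonly HashSet<Item> itemsWithOnCycle = new HashSet<Item>();
        readonly List<Item> pendingRemoves = new List<Item>();
        readonly List<Item> allItems = new List<Item>();

        public void Register(Item item)
        {
            allItems.Add(item);
            if (item.HasOnCycle) itemsWithOnCycle.Add(item);
        }

        public void Remove(Item item) => pendingRemoves.Add(item);

        public void DoRemoves(bool forceRemoveAll = false)
        {
            if (forceRemoveAll)
                pendingRemoves.AddRange(allItems);

            foreach (var item in pendingRemoves)
                DoRemove(item);
            pendingRemoves.Clear();
        }

        // Single consolidated cleanup point: the onCycle set can no longer
        // leak entries for items removed through other paths.
        void DoRemove(Item item)
        {
            itemsWithOnCycle.Remove(item);
            allItems.Remove(item);
        }
    }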
10 Days Ago
Merge: from useplayertasks_removegroupocludee_nre - Bugfix for an edge case of moving players during load of a save Tests: ran unit tests
10 Days Ago
Bugfix: OcclusionGroup - account for the server loading a save potentially recalculating the network group
- added a unit test to stress this scenario
Seems like a weird edge case, but it means we gotta work around it.
Tests: ran unit tests
10 Days Ago
Merge: from playerinventory_oncycle_optim - Buildfix for client Tests: none, trivial change
10 Days Ago
Buildfix: add SERVER guards Tests: none, trivial change
10 Days Ago
Merge: from playerinventory_oncycle_optim - Bugfix for a duplicate-key exception when loading a container with cookables Tests: unit tests
10 Days Ago
Bugfix: ItemContainer loading items no longer throws due to stale itemsWithOnCycle
Fixed by resetting itemsWithOnCycle before population.
Tests: unit tests
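A short sketch of that load-order fix, with hypothetical field names: clear the cached onCycle lookup before repopulating it from loaded items, so entries surviving from the previous state can't collide with freshly loaded ones.

    using System.Collections.Generic;

    public class ItemContainerSketch
    {
        public class Item { public ulong Uid; public bool HasOnCycle; }

        readonly Dictionary<ulong, Item> itemsWithOnCycle = new Dictionary<ulong, Item>();

        public void Load(List<Item> loaded)
        {
            // Without this reset, stale entries throw "duplicate key" when re-added below.
            itemsWithOnCycle.Clear();

            foreach (var item in loaded)
                if (item.HasOnCycle)
                    itemsWithOnCycle.Add(item.Uid, item);
        }
    }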
10 Days Ago
Update: add TestLoad that exposes a bug in onCycle caching
Still not the one I'm looking for, but it would bite eventually.
Tests: ran unit test - fails as expected
11 Days Ago
Update: add TestOnCycleStackables test to validate onCycle caching
Weirdly it passes with no duplicate-key exceptions. Maybe the exception is just a symptom, gonna check elsewhere.
Tests: ran unit test
11 Days Ago
Merge: from main
11 Days Ago
Merge: from useplayertasks_removegroupocludee_nre - Bugfix for player connecting to a sleeper from a save emitting an error Tests: unit tests + 2p on Craggy
11 Days Ago
Merge: from main
11 Days Ago
Update: OcclusionValidateGroups now also checks all active players and all sleepers Tests: none, trivial change
11 Days Ago
Bugfix: OcclusionGroup - handle connecting to a sleeper loaded from a save
Done by initializing the sleeper in PostServerLoad if it supports server occlusion.
Tests: unit tests + 2p on Craggy with a sleeper in a save
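A sketch of that PostServerLoad path, using placeholder types (the real occlusion setup is server code not shown here): sleepers loaded from a save only get an occlusion group once the save has finished loading, so a reconnecting player never hits an uninitialized one.

    public class SleeperOcclusionSketch
    {
        public bool SupportsServerOcclusion;
        public object OcclusionGroup;

        // Called once per entity after the whole save has been loaded.
        public void PostServerLoad()
        {
            if (SupportsServerOcclusion && OcclusionGroup == null)
                OcclusionGroup = CreateOcclusionGroup();
        }

        object CreateOcclusionGroup() => new object(); // placeholder for the real group creation
    }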
11 Days Ago
Update: another flow change to ReconnectToASleeperFromSave - added missing PostServerLoad call - adjusted expectations Tests: ran unit test - still fails as expected
11 Days Ago
Update: adjusted the expectations for the ReconnectToASleeperFromSave test
Realized that the original scenario was slightly misimplemented; the flow doesn't exist in our code.
Tests: ran unit test, it fails as expected