User: Daniel P
Repo: rust_reboot

2,236 Commits over 549 Days - 0.17cph!

Yesterday
Merge: from serverprofiler_codeapi
- New: immediate mode profiling API for capturing specific regions of code. Servervars to control it in the "profile" group
- Unit tests covering all new logic
Tests: compile test + ran unit tests
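As a rough sketch of what capturing a specific region with an immediate mode API of this kind could look like (RecordScope is the name used in the commits further down this log; the Profiler wrapper and everything else here are stand-ins, not the actual Rust API):

```csharp
using System;

// Minimal sketch only: "Profiler" is a stand-in for the real ServerProfiler
// entry point; RecordScope is the method name mentioned later in this log.
public static class ImmediateModeExample
{
    public static void BuildSnapshot()
    {
        // Capture just this region of code; the recorder is expected to
        // export its data when the scope is disposed.
        using (Profiler.RecordScope("BuildSnapshot"))
        {
            // ... work to be profiled ...
        }
    }
}

// Stand-in recorder so the sketch compiles on its own.
public static class Profiler
{
    public static IDisposable RecordScope(string name) => new Scope(name);

    private sealed class Scope : IDisposable
    {
        private readonly string name;
        public Scope(string name) => this.name = name;
        public void Dispose() { /* the real API would stop recording and export here */ }
    }
}
```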
Yesterday
Merge: from main
Yesterday
Update: update ServerProfiler.Core bins to Release - built on 2a311df Tests: ran all server profiler unit tests
Yesterday
Update: add profile.ImmediateModeEnabled feature flag
- codegen + unit test
Turns off all managed-side logic for the new API
Tests: ran unit tests
Yesterday
Update: introduce export interval (profile.ExportIntervalS, defaults to 30m) + ability to reset the interval (profile.ResetExportInterval)
- codegen and extra unit tests
Tests: unit tests
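A minimal sketch of interval-gated exporting under these servervars (the 30-minute default and the two servervar names come from the commit; the ExportThrottle type and its wiring are assumptions):

```csharp
using System;

// Sketch of interval-gated exporting. The default mirrors the commit
// (30 minutes); everything else is a stand-in.
public sealed class ExportThrottle
{
    public static double ExportIntervalS = 30 * 60; // maps to profile.ExportIntervalS

    private DateTime lastExport = DateTime.MinValue;

    // Runs the export only when enough time has passed since the last one.
    public bool TryExport(Action export)
    {
        var now = DateTime.UtcNow;
        if ((now - lastExport).TotalSeconds < ExportIntervalS)
            return false;

        lastExport = now;
        export();
        return true;
    }

    // Rough equivalent of profile.ResetExportInterval: allow the next
    // export to go through immediately.
    public void Reset() => lastExport = DateTime.MinValue;
}
```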
Yesterday
Bugfix: ProfileExporter.JSON can now export 0-frame main thread profiles
Tests: ran previously failing unit tests, checked their exported files - all's good
Yesterday
Update: immediate mode API improvements
- debug windows binary built from 2a311dfb
- ScopeRecorder automatically exports to JSON and cleans up recorder state
- added RecordScopeIfSlow(..., TimeSpan, ...) API, same as above except it exports only if there was a delay
- updated unit tests since some scenarios are now impossible
Need to fix export next, wrap it with a couple of server vars and update to release bins - then it's done
Tests: ran most of the unit tests (stress tests skipped as they would overflow with export tasks)
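A hedged model of the "export only if it was slow" behaviour (the real ScopeRecorder exports a full captured profile and manages recorder state; this only shows the shape of a disposable scope that persists something when it exceeds a TimeSpan threshold):

```csharp
using System;
using System.Diagnostics;
using System.IO;

// Sketch: a disposable scope that measures itself and only writes a report
// if it ran longer than a caller-supplied threshold, mirroring the
// RecordScopeIfSlow(..., TimeSpan, ...) idea.
public sealed class SlowScope : IDisposable
{
    private readonly string name;
    private readonly TimeSpan threshold;
    private readonly Stopwatch watch = Stopwatch.StartNew();

    public SlowScope(string name, TimeSpan threshold)
    {
        this.name = name;
        this.threshold = threshold;
    }

    public void Dispose()
    {
        watch.Stop();
        if (watch.Elapsed < threshold)
            return; // fast enough: drop the recording, nothing is exported

        // Hypothetical export path; the real recorder exports the captured
        // profile, not just a timing line.
        File.AppendAllText("slow_scopes.json",
            $"{{\"scope\":\"{name}\",\"ms\":{watch.Elapsed.TotalMilliseconds:F2}}}\n");
    }
}

// Usage: only produces output when the body took longer than 50 ms.
// using (new SlowScope("SaveWorld", TimeSpan.FromMilliseconds(50))) { ... }
```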
Yesterday
Update: ServerProfiler.Core - various improvements and fixes
- debug windows binary from f50b4fc9
- change internal constants to be more sensible (assumed worker thread count 4 -> 32, max recorders 64 -> 16, max alloc 1GB -> 512MB)
- bugfix for not cleaning up dead thread state when running immediate mode recording
- MemoryPool no longer allocates from the heap as a fallback when it's over capacity
Think the core lib is done enough for now, gonna move on to finishing the Rust side
Tests: ran unit tests
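The "no heap fallback" change can be pictured as a fixed-capacity pool that refuses to hand out buffers once exhausted instead of allocating more (a simplified managed sketch; the real MemoryPool lives in native ServerProfiler.Core):

```csharp
using System.Collections.Concurrent;

// Simplified fixed-capacity buffer pool: once the preallocated buffers are
// gone, TryRent fails instead of falling back to a fresh heap allocation.
public sealed class FixedPool
{
    private readonly ConcurrentBag<byte[]> free = new ConcurrentBag<byte[]>();

    public FixedPool(int bufferCount, int bufferSize)
    {
        for (int i = 0; i < bufferCount; i++)
            free.Add(new byte[bufferSize]);
    }

    // Returns false when over capacity; the caller must cope (e.g. drop
    // the sample) rather than rely on an unbounded fallback.
    public bool TryRent(out byte[] buffer) => free.TryTake(out buffer);

    public void Return(byte[] buffer) => free.Add(buffer);
}
```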
Yesterday
Update: add TextContextExhaustionTest
- reduce TestDeferCleanup internal loop count to 8 from 16 (as it was still possible to starve the pool)
Tests: ran unit tests, pass (got local unsubmitted fixes)
Yesterday
Update: add TestDeferCleanup test Works, but discovered that I forgot to clean up threads in ServerProfiler.Core, so I'm starving out the pool Tests: ran new test
Yesterday
Update: minor changes
- MakeScopeRecording -> RecordScope
- fail starting to record if the profiler isn't initialized
Tests: unit tests
Yesterday
Update: ServerProfiler.Core - MemoryReadings are now implemented via MemoryPool
- debug windows bins from 47635f61
- ABI break for MemoryData
Tests: unit tests + 10x of StressTestImmediateCaptureMT
2 Days Ago
Update: ServerProfiler.Core - use memory pooling
- debug windows binary built from af80ca2c
- this fixes/reduces occurrence of the MT race
- also reduces capture overhead (at least in debug, 2.2s -> 0.75ms)
- added MPMCQueue license file
Need to revive support for MemoryReadings, will do that next.
Tests: unit tests + StressTestImmediateCaptureMT 10 times
2 Days Ago
Update: ServerProfiler.Core - replaced my own MPSC queue with a third-party MPMC queue
- debug windows binary from 268ce0c3
Needed to add memory pooling, and my own version couldn't handle non-integral types
Tests: unit tests
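For illustration only, a bounded channel gives comparable multi-producer/multi-consumer semantics on the managed side; the actual core uses a third-party native MPMC queue, not this:

```csharp
using System.Threading.Channels;

// Illustration only: a bounded channel supports many concurrent producers
// and consumers without growing past its capacity.
public static class MpmcIllustration
{
    public static Channel<T> CreateBounded<T>(int capacity) =>
        Channel.CreateBounded<T>(new BoundedChannelOptions(capacity)
        {
            SingleReader = false,                       // many consumers
            SingleWriter = false,                       // many producers
            FullMode = BoundedChannelFullMode.DropWrite // never exceed capacity
        });
}

// Usage: var q = MpmcIllustration.CreateBounded<string>(1024);
//        q.Writer.TryWrite("sample");     // any producer thread
//        q.Reader.TryRead(out var item);  // any consumer thread
```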
5 Days Ago
Update: add StressTestImmediateCaptureMT test
It smashes the profiler from all 20 threads doing allocations and calling methods, while main tries to record just 1 method in a loop. This triggers heap corruption - think allocation pooling would solve this.
Tests: ran the extra unit test - it failed drastically
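The stress pattern described here - many threads allocating and calling methods while the main thread records one method in a loop - could be skeletoned like this (thread count taken from the commit, everything else a stand-in):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Skeleton of the stress pattern: worker threads churn allocations and
// method calls while the main thread records a single method repeatedly.
public static class ImmediateCaptureStress
{
    public static void Run(int workerCount = 20, int iterations = 10_000)
    {
        using var stop = new CancellationTokenSource();

        var workers = new List<Task>();
        for (int i = 0; i < workerCount; i++)
        {
            workers.Add(Task.Run(() =>
            {
                var junk = new List<byte[]>();
                while (!stop.IsCancellationRequested)
                {
                    junk.Add(new byte[256]);   // churn allocations
                    if (junk.Count > 64) junk.Clear();
                    DoSomeWork();              // churn method calls
                }
            }));
        }

        // Main thread: repeatedly record just one method.
        for (int i = 0; i < iterations; i++)
        {
            using (Profiler.RecordScope("DoSomeWork"))
                DoSomeWork();
        }

        stop.Cancel();
        Task.WaitAll(workers.ToArray());
    }

    private static void DoSomeWork() => Math.Sqrt(42.0).ToString();

    // Stand-in so the skeleton is self-contained.
    private static class Profiler
    {
        public static IDisposable RecordScope(string name) => new Noop();
        private sealed class Noop : IDisposable { public void Dispose() { } }
    }
}
```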
5 Days Ago
Bugfix: kind-of-fix the thread race with the Immediate Mode API (a late profiler callback might still be in progress as we're releasing resources, leading to an invalid write)
- built debug binaries from a3312fa9
- Added a mini stress test for the main thread only, needs multithreading to fully validate
I need to optimize internals a bit to avoid allocation overhead
Tests: ran unit tests on repeat 10 times - no issues
5 Days Ago
Update: first working version of immediate capture API - binaries built from b3a39bd2 commit Has a bug with a race, will fix next Tests: passes unit tests
6 Days Ago
Update: blockout Immediate-Record API - Added unit tests to validate usage Tests: ran unit tests, has expected failures
6 Days Ago
Merge: from playerinventory_oncycle_optim - Bugfix for leaking onCycle items when calling Item::Remove Tests: unit tests + cooked meat, consumed, cooked again
6 Days Ago
Bugfix: fix leaking onCycle items when calling Item::Remove
- Consolidated onCycle callback cleanup into DoRemove
- ItemManager::DoRemoves(bool) can now force remove all items
- Added a unit test to validate the logic
Tests: ran unit test, cooked meat on a campfire, ate it, cooked again - no exception
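A simplified model of the consolidation (only the DoRemove / DoRemoves(bool) names come from the commit; the rest is a stand-in): every removal funnels through one method that also drops the item from the onCycle set, and the bool overload can flush everything at once.

```csharp
using System.Collections.Generic;

// Simplified model: deferred item removal where the onCycle bookkeeping is
// cleaned up in exactly one place (DoRemove), so items can't leak out of
// the onCycle set regardless of how removal was requested.
public sealed class ItemManagerModel
{
    private readonly HashSet<Item> itemsWithOnCycle = new HashSet<Item>();
    private readonly List<Item> pendingRemoval = new List<Item>();
    private readonly List<Item> allItems = new List<Item>();

    public void ScheduleRemove(Item item) => pendingRemoval.Add(item);

    // force == true removes every live item (not just the scheduled ones).
    public void DoRemoves(bool force)
    {
        var toRemove = force ? allItems : pendingRemoval;
        foreach (var item in new List<Item>(toRemove))
            DoRemove(item);
        pendingRemoval.Clear();
    }

    // Single point of truth for removal side effects.
    private void DoRemove(Item item)
    {
        itemsWithOnCycle.Remove(item); // consolidated onCycle cleanup
        allItems.Remove(item);
    }

    public sealed class Item { }
}
```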
7 Days Ago
Merge: from useplayertasks_removegroupocludee_nre - Bugfix for an edge case of moving players during load of a save Tests: ran unit tests
7 Days Ago
Bugfix: OcclusionGroup - account for the server loading a save potentially recalculating the network group
- added a unit test to stress this scenario
Seems like a weird edge case, but it means we gotta work around it
Tests: ran unit tests
7 Days Ago
Merge: from playerinventory_oncycle_optim - Buildfix for client Tests: none, trivial change
7 Days Ago
Buildfix: add SERVER guards Tests: none, trivial change
7 Days Ago
Merge: from playerinventory_oncycle_optim - Bugfix for exception of duplicate key when loading container with cookables Tests: unit tests
7 Days Ago
Bugfix: ItemContainer loading items no longer throws due to stale itemsWithOnCycle
Fixed by resetting itemsWithOnCycle before population
Tests: unit tests
7 Days Ago
Update: add TestLoad that exposes a bug for caching onCycle Still not the one I'm looking for, but would bite eventually Tests: ran unit test - fails as expected
8 Days Ago
Update: add TestOnCycleStackables test to validate onCycle caching Weirdly it passes with no duplicate-key exceptions. Maybe the exception is just a symptom, gonna check elsewhere Tests: ran unit test
8 Days Ago
Merge: from main
8 Days Ago
Merge: from useplayertasks_removegroupocludee_nre - Bugfix for player connecting to a sleeper from a save emitting an error Tests: unit tests + 2p on Craggy
8 Days Ago
Merge: from main
8 Days Ago
Update: OcclusionValidateGroups now also checks all active players and all sleepers Tests: none, trivial change
8 Days Ago
Bugfix: OcclusionGroup - handle connecting to a sleeper loaded from a save Done by initializing the sleeper in PostServerLoad if it supports server occlusion Tests: unit tests + 2p on Craggy with a sleeper in a save
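The fix as described boils down to a post-load hook; roughly (PostServerLoad is from the commit, the other members are assumptions):

```csharp
// Sketch of the described fix: after a save finishes loading, a sleeper
// that participates in server occlusion gets its occlusion state set up,
// so a player connecting to it later finds a valid group.
public class SleeperOcclusionSketch
{
    public bool SupportsServerOcclusion; // stand-in flag
    public bool IsSleeping;              // stand-in flag

    public virtual void PostServerLoad()
    {
        if (IsSleeping && SupportsServerOcclusion)
            InitServerOcclusion(); // stand-in for the real initialization
    }

    private void InitServerOcclusion() { /* create/join the occlusion group */ }
}
```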
8 Days Ago
Update: another flow change to ReconnectToASleeperFromSave - added missing PostServerLoad call - adjusted expectations Tests: ran unit test - still fails as expected
8 Days Ago
Update: adjusted the expectations for the ReconnectToASleeperFromSave test Realized that the original scenario was slightly misimplemented; the flow doesn't exist in our code Tests: ran unit test, it fails as expected
8 Days Ago
Update: OcclusionGroupTest - add TestNew_ReconnectToASleeperFromSave test Catches a bug in how server occlusion handles this specific initialization path Tests: ran unit test, it fails
9 Days Ago
Merge: from useplayertasks_removegroupocludee_nre
- Bugfix for NREs and errors when using -enable-new-server-occlusion-groups
- Unit tests covering parts of the old logic and the entirety of the new logic's behavior. 20 tests totaling 255 permutations.
Tests: unit tests + 2p on Craggy with noclip, teleportation, disconnect, killing sleepers and using OcclusionValidateGroups
9 Days Ago
Clean: Clarify in a comment what provisions the new logic can guarantee Tests: none, trivial change
9 Days Ago
Merge: from main Tests: none
9 Days Ago
Merge: from useplayertasks_removegroupocludee_nre Tests: unit tests + bunch of manual tests
9 Days Ago
Bugfix: OcclusionValidateGroups - fix a false positive by inverting a subscription check
Was testing subscriptions from the perspective of the occlusion group participants, not the owner of the group (and the group tracks what the owner is subbed to)
Tests: teleported away then spammed OcclusionValidateGroups - no more short-lived "stale participants" messages
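A toy model of the corrected check - staleness judged against the group owner's subscriptions rather than each participant's own (all types here are illustrative stand-ins):

```csharp
using System.Collections.Generic;

// Toy model of the corrected validation: the occlusion group tracks what
// its OWNER is subscribed to, so staleness is judged against the owner's
// subscriptions, not against each participant's subscriptions.
public static class OcclusionValidationSketch
{
    public sealed class PlayerModel
    {
        public readonly HashSet<int> SubscribedGroups = new HashSet<int>();
    }

    public sealed class OcclusionGroupModel
    {
        public PlayerModel Owner;
        public readonly Dictionary<PlayerModel, int> Participants =
            new Dictionary<PlayerModel, int>(); // participant -> network group id
    }

    public static IEnumerable<PlayerModel> FindStaleParticipants(OcclusionGroupModel group)
    {
        foreach (var kv in group.Participants)
        {
            // Correct: is the owner still subscribed to the network group
            // this participant was seen in?
            if (!group.Owner.SubscribedGroups.Contains(kv.Value))
                yield return kv.Key;

            // The old (buggy) check looked at kv.Key.SubscribedGroups instead,
            // which produced the false "stale participants" positives.
        }
    }
}
```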
9 Days Ago
Bugfix: OcclusionGroup - handle the case when a player reconnects and reclaims their sleeper
- Added unit tests - Reconnect(4), KillSleeper(4), KillSleeperAndReconnect(8) and DieAndRespawn(4)
Tests: unit tests + 2p on Craggy with p2 reconnecting near p1. Used OcclusionValidateGroups (needs a minor fix)
9 Days Ago
Update: adapt OcclusionValidateGroups to new logic Tests: used it in various scenarios in 2p on Craggy. Spotted a bug for reconnecting players, need to add a unit test and fix
9 Days Ago
Update: OcclusionGroupTests - added LaggyTeleport test Think I'm done, going to do a bit of manual testing and merge it back Tests: unit tests
9 Days Ago
Update: restore lastPlayerVisibility support in the new logic
- updated tests to validate correct removal of the timestamp
Just have one last test to implement, and then it's done
Tests: ran unit tests
9 Days Ago
Update: prep unit tests for lastPlayerVisibility validation Tests: ran unit tests
9 Days Ago
Update: OcclusionGroupTests now check if 2nd player in a test only references itself in the group Tests: ran unit tests
9 Days Ago
Clean: extract a bit of code to handle initialization Tests: ran all unit tests
9 Days Ago
Update: rewrite OcclusionGroups logic to be simpler
- updated unit tests with new expectations
Each player now has their own occlusion group that they modify as they navigate the network grid. OcclusionGroups are driven by network subscription logic.
Got a couple of TODOs and a couple of unit tests to add, then done
Tests: ran unit tests, all passed
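A very rough model of that shape - one occlusion group per player, mutated only by that player's network subscribe/unsubscribe events (all names are stand-ins):

```csharp
using System.Collections.Generic;

// Rough model: every player owns one occlusion group, and the group is
// mutated purely by that player's own network subscribe/unsubscribe events
// as they move across the network grid.
public sealed class PlayerOcclusionModel
{
    private readonly HashSet<int> subscribedGroups = new HashSet<int>();
    private readonly HashSet<PlayerOcclusionModel> occludees = new HashSet<PlayerOcclusionModel>();

    // Called when network subscription logic subscribes this player to a grid cell.
    public void OnSubscribed(int networkGroup, IEnumerable<PlayerOcclusionModel> playersInGroup)
    {
        subscribedGroups.Add(networkGroup);
        foreach (var other in playersInGroup)
            if (other != this)
                occludees.Add(other);
    }

    // Called when the player leaves a grid cell. (Simplified: this drops
    // everyone from that cell; the real logic also has to keep players that
    // are still visible through another subscribed cell.)
    public void OnUnsubscribed(int networkGroup, IEnumerable<PlayerOcclusionModel> playersInGroup)
    {
        subscribedGroups.Remove(networkGroup);
        foreach (var other in playersInGroup)
            occludees.Remove(other);
    }

    public IReadOnlyCollection<PlayerOcclusionModel> Occludees => occludees;
}
```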
9 Days Ago
Clean: reclaim occludees name Tests: compiles