2,239 Commits over 549 Days - 0.17cph!
Merge: from leavedeepsea_teleport_fix
- Bugfix: using leavedeepsea should no longer cause random bugs/random wake up positions
Tests: went onto a ghostship, then used leavedeepsea
Bugfix: unparent player if running leavedeepsea
This fixes player waking up in random location, potentially being killed for going out of bounds
Tests: on Craggy, went up to the ghostship top and used leavedeepsea a couple of times
Merge: from serverprofiler_codeapi
- New: immediate mode profiling API for capturing specific regions of code, controlled by server vars in the "profile" group
- Unit tests covering all new logic
Tests: compile test + ran unit tests
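For illustration, a minimal C# sketch of what an immediate-mode scope capture looks like: a disposable that times a named region and hands it off on dispose. Only the RecordScope name comes from this log (see the rename further down); the types and bodies below are stand-ins, not the actual ServerProfiler implementation.

using System;
using System.Diagnostics;

// Stand-in scope recorder: wraps just the region the caller cares about instead of
// profiling whole frames. The real recorder is backed by native ServerProfiler.Core.
public sealed class ScopeRecording : IDisposable
{
    readonly string name;
    readonly Stopwatch watch = Stopwatch.StartNew();

    public ScopeRecording(string name) => this.name = name;

    public void Dispose()
    {
        watch.Stop();
        // The real implementation exports the captured data (see ProfileExporter.JSON below);
        // this sketch only reports the elapsed time.
        Console.WriteLine($"[profile] {name}: {watch.Elapsed.TotalMilliseconds:F3} ms");
    }
}

public static class ProfilerSketch
{
    public static ScopeRecording RecordScope(string name) => new ScopeRecording(name);
}

// Usage:
// using (ProfilerSketch.RecordScope("Save.World"))
//     SaveWorld();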
Update: update ServerProfiler.Core bins to Release
- built on 2a311df
Tests: ran all server profiler unit tests
Update: add profile.ImmediateModeEnabled feature flag
- codegen + unit test
Turns off all managed-side logic for new API
Tests: ran unit tests
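A small sketch of what the flag buys, reusing the ScopeRecording stand-in above; the field name is an assumption mirroring the profile.ImmediateModeEnabled convar.

public static class ProfilerFlagSketch
{
    // Assumed managed mirror of the profile.ImmediateModeEnabled server variable.
    public static bool ImmediateModeEnabled;

    public static System.IDisposable RecordScope(string name)
    {
        // Early-out: with the flag off, no recorder state and no native profiler code runs.
        if (!ImmediateModeEnabled)
            return null;                    // using (null) { ... } is a valid no-op in C#
        return new ScopeRecording(name);    // stand-in type from the sketch above
    }
}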
Update: introduce export interval (profile.ExportIntervalS, defaults to 30m) + ability to reset the interval (profile.ResetExportInterval)
- codegen and extra unit tests
Tests: unit tests
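A sketch of how such an interval could gate exports; the convar names come from the entry above, everything else (field names, UTC clock, the throttling shape) is illustrative.

using System;

public static class ExportThrottleSketch
{
    // Mirrors profile.ExportIntervalS, defaulting to 30 minutes as above.
    public static TimeSpan ExportInterval = TimeSpan.FromMinutes(30);

    static DateTime lastExport = DateTime.MinValue;

    public static bool TryExport(Action export)
    {
        var now = DateTime.UtcNow;
        if (now - lastExport < ExportInterval)
            return false;            // still inside the interval, skip this export

        lastExport = now;
        export();
        return true;
    }

    // Equivalent of profile.ResetExportInterval: allow the next export immediately.
    public static void ResetExportInterval() => lastExport = DateTime.MinValue;
}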
Bugfix: ProfileExporter.JSON can now export 0-frame main thread profiles
Tests: ran previously failing unit tests, checked their exported files - all good
Update: immediate mode API improvements
- debug windows binary built from 2a311dfb
- ScopeRecorder automatically exports to json and cleans up recorder state
- added RecordScopeIfSlow(..., TimeSpan, ...) API, same as above except it exports only if the scope exceeded the given delay
- updated unit tests since some scenarios are now impossible
Need to fix export next, wrap it with a couple server vars and update to release bins - then it's done
Tests: ran most of the unit tests (stress tests skipped as they would overflow with export tasks)
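A sketch of the RecordScopeIfSlow behaviour described above: the region is always timed, but the capture is only kept and exported when it exceeded the threshold. Only the method name and the TimeSpan parameter come from this log; the rest is a stand-in.

using System;
using System.Diagnostics;

public sealed class SlowScopeSketch : IDisposable
{
    readonly string name;
    readonly TimeSpan threshold;
    readonly Stopwatch watch = Stopwatch.StartNew();

    public SlowScopeSketch(string name, TimeSpan threshold)
    {
        this.name = name;
        this.threshold = threshold;
    }

    public void Dispose()
    {
        watch.Stop();
        // Fast runs clean up silently; only slow runs produce an export on disk.
        if (watch.Elapsed >= threshold)
            Console.WriteLine($"[profile] slow scope {name}: {watch.Elapsed.TotalMilliseconds:F1} ms - exporting");
    }
}

// Usage:
// using (new SlowScopeSketch("Save.World", TimeSpan.FromMilliseconds(50)))
//     SaveWorld();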
Update: ServerProfiler.Core - various improvements and fixes
- debug windows binary from f50b4fc9
- change internal constants to be more sensible (assumed worker thread count 4 -> 32, max recorders 64 -> 16, max alloc 1GB -> 512MB)
- bugfix for not cleaning up dead thread state when running immediate mode recording
- MemoryPool no longer allocates from heap as a fallback when it's over capacity
Think core lib is done enough for now, gonna move to finishing rust side
Tests: ran unit tests
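A sketch of the "no heap fallback" policy in C# (the real pool is native, inside ServerProfiler.Core): a fixed-capacity pool whose rent operation fails when it is exhausted, so the caller drops the capture instead of allocating.

using System.Collections.Concurrent;

// Stand-in for a fixed-capacity buffer pool that never grows: when it runs dry,
// TryRent fails and the caller drops the sample rather than heap-allocating.
public sealed class FixedPoolSketch
{
    readonly ConcurrentBag<byte[]> free = new ConcurrentBag<byte[]>();

    public FixedPoolSketch(int buffers, int bufferSize)
    {
        for (int i = 0; i < buffers; i++)
            free.Add(new byte[bufferSize]);
    }

    public bool TryRent(out byte[] buffer) => free.TryTake(out buffer);

    public void Return(byte[] buffer) => free.Add(buffer);
}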
Update: add TextContextExhaustionTest
- reduce TestDeferCleanup internal loop count to 8 from 16 (as it was still possible to starve the pool)
Tests: ran unit tests, they pass (with local unsubmitted fixes)
Update: add TestDeferCleanup test
Works, but discovered that I forgot to clean up threads in ServerProfiler.Core, so I'm starving out the pool
Tests: ran new test
Update: minor changes
- MakeScopeRecording -> RecordScope
- fail starting to record if profiler isn't initialized
Tests: unit tests
Update: ServerProfiler.Core - MemoryReadings are now implemented via MemoryPool
- debug windows bins from 47635f61
- ABI break for MemoryData
Tests: unit tests + 10x of StressTestImmediateCaptureMT
Update: ServerProfiler.Core - use memory pooling
- debug windows binary built from af80ca2c
- this fixes/reduces the occurrence of the MT race
- also reduces capture overhead (at least in debug, 2.2s -> 0.75ms)
- added MPMCQueue license file
Need to revive support for MemoryReadings, will do that next.
Tests: unit tests + StressTestImmediateCaptureMT 10 times
Update: ServerProfiler.Core - replaced my own MPSC queue with a third-party MPMC queue
- debug windows binary from 268ce0c3
Needed to add memory pooling, my own version couldn't handle non-integral types
Tests: unit tests
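For context, a C# sketch of why an MPMC queue fits this pool: every worker thread both rents and returns buffers, so both ends of the queue are contended. ConcurrentQueue<T> stands in here for the third-party native queue.

using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class MpmcSketch
{
    public static void Demo()
    {
        // Pre-fill the pool with buffers (a non-integral element type, which is
        // what the hand-rolled MPSC queue couldn't handle).
        var pool = new ConcurrentQueue<byte[]>();
        for (int i = 0; i < 64; i++)
            pool.Enqueue(new byte[256]);

        // 20 workers rent and return concurrently - multiple producers and consumers.
        Parallel.For(0, 20, _ =>
        {
            for (int n = 0; n < 10_000; n++)
            {
                if (pool.TryDequeue(out var buffer))
                    pool.Enqueue(buffer);
            }
        });
    }
}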
Update: add StressTestImmediateCaptureMT test
It smashes the profiler from all 20 threads doing allocations and calling methods, while main tries to record just 1 method in a loop. This triggers heap corruption - think allocation pooling would solve this.
Tests: ran extra unit test - it failed drastically
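The rough shape of that stress test, reusing the ProfilerSketch stand-in from the earlier sketch above instead of the real API; the thread count matches the description, everything else is illustrative.

using System;
using System.Threading;
using System.Threading.Tasks;

public static class StressSketch
{
    public static void Run()
    {
        using var stop = new CancellationTokenSource(TimeSpan.FromSeconds(5));

        // 20 worker threads allocate and call methods as fast as they can...
        var workers = Task.Run(() => Parallel.For(0, 20, _ =>
        {
            while (!stop.IsCancellationRequested)
            {
                var junk = new byte[128];   // constant allocation pressure
                junk[0] = 1;
            }
        }));

        // ...while the main thread records a single method in a loop.
        while (!stop.IsCancellationRequested)
        {
            using (ProfilerSketch.RecordScope("StressSketch.MainLoop"))
                Thread.SpinWait(1000);
        }

        workers.Wait();
    }
}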
Bugfix: kind-of-fix the thread race with Immediate Mode API (late profiler callback might be in progress as we're releasing resources, leading to invalid write)
- built debug binaries from a3312fa9
- Added a mini stress test for main thread only, needs multithreading to fully validate
I need to optimize internals a bit, to avoid allocation overhead
Tests: ran unit tests on repeat 10 times - no issues
Update: first working version of immediate capture API
- binaries built from b3a39bd2 commit
Has a bug with a race, will fix next
Tests: passes unit tests
Update: blockout Immediate-Record API
- Added unit tests to validate usage
Tests: ran unit tests, has expected failures
Merge: from playerinventory_oncycle_optim
- Bugfix for leaking onCycle items when calling Item::Remove
Tests: unit tests + cooked meat, consumed, cooked again
Bugfix: fix leaking onCycle items when calling Item::Remove
- Consolidated onCycle callback cleanup to DoRemove
- ItemManager::DoRemoves(bool) can now force remove all items
- Added a unit test to validate the logic
Tests: ran unit test, cooked meat on a campfire, ate it, cooked again - no exception
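A sketch of the shape of the leak and the fix, with stand-in types (SketchItem/SketchContainer are not the game's Item/ItemContainer): the container caches which items have an onCycle handler, and the fix funnels every removal through one method so that cache can't keep a removed item alive.

using System;
using System.Collections.Generic;

public sealed class SketchItem
{
    public Action onCycle;
}

public sealed class SketchContainer
{
    readonly List<SketchItem> items = new List<SketchItem>();
    readonly List<SketchItem> itemsWithOnCycle = new List<SketchItem>();

    public void Add(SketchItem item)
    {
        items.Add(item);
        if (item.onCycle != null)
            itemsWithOnCycle.Add(item);   // cached so the server tick doesn't scan every item
    }

    // The fix: all removal paths end up here, so the onCycle cache is always
    // cleaned up alongside the item itself and nothing leaks.
    public void DoRemove(SketchItem item)
    {
        items.Remove(item);
        itemsWithOnCycle.Remove(item);
    }
}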
Merge: from useplayertasks_removegroupocludee_nre
- Bugfix for an edge case of moving players during load of a save
Tests: ran unit tests
Bugfix: OcclusionGroup - account for the server potentially recalculating network groups while loading a save
- added a unit test to stress this scenario
Seems like a weird edge case, but it means we gotta work around it
Tests: ran unit tests
Merge: from playerinventory_oncycle_optim
- Buildfix for client
Tests: none, trivial change
Buildfix: add SERVER guards
Tests: none, trivial change
Merge: from playerinventory_oncycle_optim
- Bugfix for a duplicate-key exception when loading a container with cookables
Tests: unit tests
Bugfix: ItemContainer loading items no longer throws due to stale itemsWithOnCycle
Fixed by resetting itemsWithOnCycle before population
Tests: unit tests
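Continuing the hypothetical SketchContainer from the onCycle sketch above, the load-path fix amounts to clearing the cached list before repopulating it, so loading over existing state can't register the same item twice.

// A Load method for the SketchContainer above (illustrative only):
public void Load(IEnumerable<SketchItem> savedItems)
{
    items.Clear();
    itemsWithOnCycle.Clear();     // the actual fix: reset the cache before population

    foreach (var item in savedItems)
        Add(item);
}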
Update: add TestLoad that exposes a bug for caching onCycle
Still not the one I'm looking for, but would bite eventually
Tests: ran unit test - fails as expected
Update: add TestOnCycleStackables test to validate onCycle caching
Weirdly it passes with no duplicate-key exceptions. Maybe the exception is just a symptom, gonna check elsewhere
Tests: ran unit test
Merge: from useplayertasks_removegroupocludee_nre
- Bugfix for an error emitted when a player connects to a sleeper loaded from a save
Tests: unit tests + 2p on Craggy
Update: OcclusionValidateGroups now also checks all active players and all sleepers
Tests: none, trivial change
Bugfix: OcclusionGroup - handle connecting to a sleeper loaded from a save
Done by initializing the sleeper in PostServerLoad if it supports server occlusion
Tests: unit tests + 2p on Craggy with a sleeper in a save
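The rough shape of that fix, with hypothetical member names (not the actual player API): once the save has finished loading, a sleeper that supports server occlusion gets its occlusion group initialized, so a reconnecting player finds a valid group instead of triggering the error.

public class SleeperSketch
{
    public bool IsSleeping;
    public bool SupportsServerOcclusion;
    public object OcclusionGroup;

    // Called after the save has finished loading.
    public virtual void PostServerLoad()
    {
        // Sleepers restored from a save never went through the normal connect path,
        // so set up their occlusion group here.
        if (IsSleeping && SupportsServerOcclusion && OcclusionGroup == null)
            OcclusionGroup = new object();   // stand-in for the real group initialization
    }
}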
Update: another flow change to ReconnectToASleeperFromSave
- added missing PostServerLoad call
- adjusted expectations
Tests: ran unit test - still fails as expected
Update: adjusted the expectations for ReconnectToASleeperFromSave test
Realized the original scenario was slightly misimplemented - the flow doesn't exist in our code
Tests: ran unit test, it fails as expected
Update: OcclusionGroupTest - add TestNew_ReconnectToASleeperFromSave test
Catches a bug in how server occlusion handles this specific initialization path
Tests: ran unit test, it fails
Merge: from useplayertasks_removegroupocludee_nre
- Bugfix for NREs and errors when using -enable-new-server-occlusion-groups
- Unit tests covering parts of the old logic and the entirety of the new logic's behavior: 20 tests totaling 255 permutations
Tests: unit tests + 2p on Craggy with noclip, teleportation, disconnect, killing sleepers and using OcclusionValidateGroups
Clean: Clarify in a comment what provisions the new logic can guarantee
Tests: none, trivial change
Merge: from main
Tests: none
Merge: from useplayertasks_removegroupocludee_nre
Tests: unit tests + bunch of manual tests
Bugfix: OcclusionValidateGroups - fix a false positive by inverting a subscription check
Was testing subscriptions from the perspective of the occlusion group's participants rather than the group's owner (the group tracks what the owner is subscribed to)
Tests: teleported away, then spammed OcclusionValidateGroups - no more momentary "stale participants" messages
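A sketch of the corrected check with stand-in types: the validator asks whether the group's owner is still subscribed to each participant's network group, not whether the participants are subscribed to the owner's.

using System.Collections.Generic;

public sealed class PlayerSketch
{
    public int NetworkGroup;
    public HashSet<int> SubscribedGroups = new HashSet<int>();
}

public static class ValidateSketch
{
    public static IEnumerable<PlayerSketch> FindStaleParticipants(
        PlayerSketch owner, IEnumerable<PlayerSketch> participants)
    {
        foreach (var participant in participants)
        {
            // Correct direction: the group tracks what the *owner* is subscribed to,
            // so a participant is stale only when the owner no longer subscribes to its group.
            if (!owner.SubscribedGroups.Contains(participant.NetworkGroup))
                yield return participant;
        }
    }
}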
Bugfix: OcclusionGroup - handle case when player reconnects and reclaims his sleeper
- Added unit tests - Reconnect(4), KillSleeper(4), KillSleeperAndReconnect(8) and DieAndRespawn(4)
Tests: unit tests + 2p on Craggy with p2 reconnecting near p1. used OcclusionValidateGroups (need a minor fix)
Update: adapt OcclusionValidateGroups to new logic
Tests: used it in various scenarios in 2p on Craggy. Spotted a bug for reconnecting players, need to add a unit test and fix
Update: OcclusionGroupTests - added LaggyTeleport test
Think I'm done, going to do a bit of manual testing and merge it back
Tests: unit tests
Update: restore lastPlayerVisibility support in new logic
- updated tests to validate correct removal of the timestamp
Just have one last test to implement, and it's done
Tests: ran unit tests
Update: prep unit tests for lastPlayerVisibility validation
Tests: ran unit tests
Update: OcclusionGroupTests now check if 2nd player in a test only references itself in the group
Tests: ran unit tests