2,566 Commits over 639 Days - 0.17cph!
Update: dispose netgroups when NetVisGrid gets disabled
Tests: booted into craggy and stopped - no errors
Clean: remove Provider.Init method
Tests: ran UpdateSubscriptionConsistency test, booted into craggy
Optim: Get rid of NetGroup pre-allocation, use thread-safe lazy allocation
Comparing memory snapshots, we go from 2.1mil NetGroups down to 131, saving 140MB
Tests: booted to Craggy and ran around; ran the UpdateSubscriptionsConsistency unit test.
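The pre-allocation removal above boils down to a lazy, thread-safe get-or-create: groups only exist once something asks for them. A minimal Python sketch of the pattern (the real code is C#; `GroupCache`, `get_group`, and the group shape here are hypothetical):

```python
import threading

class GroupCache:
    """Lazily allocates groups on first request instead of pre-allocating the grid."""

    def __init__(self):
        self._groups = {}            # group id -> group object
        self._lock = threading.Lock()

    def get_group(self, group_id):
        # Fast path: already allocated.
        group = self._groups.get(group_id)
        if group is not None:
            return group
        # Slow path: allocate under the lock; setdefault re-checks so two
        # racing callers still end up sharing a single group instance.
        with self._lock:
            return self._groups.setdefault(
                group_id, {"id": group_id, "subscribers": set()})
```

Only groups that are actually touched ever exist, which is where the 2.1M-to-131 reduction comes from.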
Clean: move group initialization from PopulateCells to GetGroup
Tests: ran around on craggy
Clean: remove Visibility.Manager.groups
- also remove all provider == null checks
Tests: compiles
Update: add ClientVisibilityProvider - mostly a dud Provider, most methods do nothing
- Updated all call sites to use ClientVisibilityProvider instead of null
This allows removing groups from Network.Visibility.Manager
Tests: booted into craggy, ran GameTraceTests
Update: instead of iterating all netgroups, add Provider.ForEach(layer, callback) method
- get rid of Visibility.Manager.Groups accessor
This removes public access to groups dictionary, so almost there to remove it
Tests: booted to craggy, opened deep sea and ran deepsea.printentitycount
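The `ForEach(layer, callback)` change replaces a public dictionary accessor with internal iteration, so callers can visit groups without being able to mutate or retain the collection. A hedged sketch of the idea (names and layering are hypothetical, not the actual C# API):

```python
class Provider:
    """Owns its group dictionaries; exposes iteration, not the containers."""

    def __init__(self):
        self._groups_by_layer = {}   # layer -> {group_id: group}

    def add_group(self, layer, group_id, group):
        self._groups_by_layer.setdefault(layer, {})[group_id] = group

    def for_each(self, layer, callback):
        # Callers see each group in turn but never the dictionary itself,
        # so its storage can change without breaking any external code.
        for group in self._groups_by_layer.get(layer, {}).values():
            callback(group)
```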
Update: Visibility.Provider now can create network groups in a thread-safe fashion
- organized hardcoded groups into their own collection
- Network.Visibility.Manager now defers to provider's GetGroup and TryGetGroup
Not sure if I want to support NetworkVisibilityGrid on CLIENT or not, but will try to clean it up in a bit
Tests: Booted and ran around on craggy. compiled for all modes separately
Merge: from useplayerupdatejobs_purge
- Clean: Removal of UsePlayerUpdateJobs 0 and 1 code
- Optim: RelationshipManager now uses cached server occlusion results instead of running new ones
- Bugfix: stop NPCs/Bots writing tick history, corrupting internal memory
Tests: booted a server from a save and connected to it
Bugfix: avoid invalid tick transformation from NPCs/bots that would corrupt array header
- added an assert to GetPlayerTickIterator, only place where it's not checked by default
Tests: booted a standalone server with a save - was able to connect and run around
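The assert mentioned above guards the one entry point where bot/NPC ticks aren't filtered by default. A hypothetical Python sketch of that guard (the real signature and data layout are C# and differ; `is_bot`/`ticks` are stand-ins):

```python
def get_player_tick_iterator(players, index):
    """Yield tick history for a real player; bots have none."""
    player = players[index]
    # Bots/NPCs never record tick history; asserting here catches them
    # before an invalid transform can corrupt the shared array header.
    assert not player["is_bot"], "bots have no tick history"
    return iter(player["ticks"])
```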
Update: replace all unsafe usage with spans, should throw if I go out of bounds
Tests: compiles
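Swapping unsafe pointer arithmetic for spans trades silent out-of-bounds reads for a loud exception. Python has no `Span<T>`, but the same fail-fast idea can be sketched with a bounds-checked `memoryview` slice (the function and record layout here are illustrative, not the game's):

```python
import struct

def read_tick(buffer: bytes, index: int, tick_size: int = 8) -> int:
    """Read one fixed-size tick record with an explicit bounds check."""
    start = index * tick_size
    view = memoryview(buffer)[start:start + tick_size]
    # A slice that falls off the end comes back short; fail loudly instead
    # of silently reading garbage the way raw pointer arithmetic would.
    if len(view) != tick_size:
        raise IndexError(f"tick {index} is out of bounds")
    return struct.unpack("<q", view)[0]
```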
Subtract: roll back
148302 - merge from useplayerupdatejobs_purge
Release servers are crashing (but debug not). Will chase up next week
Merge: from useplayerupdatejobs_purge
- Clean: Removal of UsePlayerUpdateJobs 0 and 1 code
- Optim: RelationshipManager now uses cached server occlusion results instead of running new ones
Tests: unit tests + ran around on craggy, used heli, zipline, swam
Clean: remove dead using statements
Tests: none, trivial change
Update(tests): adding TickInterpolatorCache tests
- added overloads to accept index directly instead of entire baseplayer
Tests: ran unit tests
Clean: nuke TickInterpolator
We lose the consistency unit test, so I'll add a couple of basic ones in the next change
Tests: ran AH unit tests
Clean: minor variable replacements in TickInterpolatorCache
Tests: none
Clean: replace all usages of TickInterpolator with TickInterpolatorCache in AntiHack
Tests: ran AH unit tests
Clean: remove all uses of TickInterpolator in BasePlayer logic
Tests: compiles
Clean: update UsePlayerUpdateJobs servervar description with a new min level
- ran codegen
Tests: compiles
Clean: remove all ConVar.Server.UsePlayerUpdateJobs > 0 checks
Tests: compiles
Clean: remove TriggerParent.UsePlayerV2Shortcuts servervar
Tests: compiles
Clean: remove UsePlayerTasks alias, since it's now always true
Tests: compiles
Optim: RelationshipManager - replace active server occlusion query with a cached result fetch
Tests: compiles
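Replacing an active occlusion query with a cached fetch only works if staleness is handled; the natural shape is a per-frame cache that the batched pass fills and everyone else reads. A hedged Python sketch (class and method names are hypothetical, not the C# API):

```python
class OcclusionCache:
    """Per-frame cache of pairwise visibility results."""

    def __init__(self, frame_fn):
        self._frame = frame_fn      # callable returning the current frame number
        self._results = {}          # (a, b) -> (frame, visible)

    def store(self, a, b, visible):
        self._results[(a, b)] = (self._frame(), visible)

    def try_get(self, a, b):
        # Reuse a result only if the batched pass produced it this frame;
        # a miss means the caller falls back to a fresh line-of-sight query.
        entry = self._results.get((a, b))
        if entry is not None and entry[0] == self._frame():
            return entry[1]
        return None
```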
Clean: simplify serial OcclusionLineOfSight to match batched version
- removed all extra replication code that we no longer need
Tests: ran server occlusion consistency tests
Update: expose all BasePlayer state caches
Needed for Occlusion cleanup rewrite
Tests: compiles
Update: replace OcclusionCanUseFrameCache with true and simplify
This is a bit more than a pure cleanup: an extra section of code now always runs. Since we didn't hit any major bugs with jobs-2 player replication, we treat this cache as always valid.
Tests: compiles
Clean: remove BasePlayer.ServerUpdate and its entire sub-callgraph
This also removed most of occlusion v1, but got a bit more to clean there
Tests: compiles
Clean: rip out the serial player update (Jobs 0) flow
- moved idle kick logic into ServerUpdateParallel
Need to purge non-called methods next
Tests: compiles
Clean: remove server.EmergencyDisablePlayerJobs and relevant code
No more safety, where we're going we need bravery
Tests: compiles
Update: remove not-in-playercache checks around TickCache where appropriate
- also changed resizing to depend on player cache capacity rather than length; the old behavior was a bug that somehow never tripped
Tests: compiles
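The capacity-vs-length distinction matters because slot indices are stable while the live count fluctuates: a cache sized to the current count can be indexed past its end by a perfectly valid slot. A hypothetical Python sketch of the corrected sizing (the real TickCache is C# and shaped differently):

```python
class TickCache:
    """Per-player slots indexed by player-cache slot index."""

    def __init__(self):
        self._slots = []

    def ensure_capacity(self, player_capacity: int):
        # Size against the player cache's *capacity*, not its live count:
        # a high slot index stays valid even after other players disconnect.
        if len(self._slots) < player_capacity:
            self._slots.extend([None] * (player_capacity - len(self._slots)))

    def set(self, index, value):
        self._slots[index] = value
```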
Update: connected players always register with PlayerCache regardless of Jobs mode
- Cleaned up a couple checks that now become irrelevant
Tests: compiles
Clean: remove occlusion v1 path from jobs 2
Occlusion v1 path still exists for jobs 0 - I'll rip that out a bit later
Tests: compiles
Clean: remove Jobs 1 paths in BasePlayer.ServerUpdateParallel
Tests: compiles
Merge: from triggerparent_jobs_isinside
- Buildfix
Tests: built server locally
Clean: add a reference to unity's issue tracker
Tests: none, trivial change
Buildfix: use JobHandle.IsValid extension instead of comparing to default directly
- add the above extension
Tests: built server locally
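Wrapping the default-comparison in an `IsValid` helper centralizes the "is this a real handle or a default-constructed blank?" check. A rough Python analog of the idea, using a frozen dataclass as a stand-in for the value-type handle (names here are hypothetical, not Unity's API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JobHandle:
    """Stand-in value type; a default-constructed handle means 'no job'."""
    job_id: int = 0

def is_valid(handle: JobHandle) -> bool:
    # One shared helper instead of ad-hoc `handle == default` comparisons
    # scattered through call sites; the comparison lives in exactly one place.
    return handle != JobHandle()
```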
Update: turn GamePhysics.DefaultMaxResultsPerQuery into a server var
- breaking API changes for GamePhysics.CheckSpheres, CheckCapsules, EnvironmentManager.Get
- Ran codegen
Tests: none, trivial change
Merge: from hascloseconnections_fix
- Bugfix for BaseNetworkable.HasCloseConnections and GetCloseConnections not seeing outside of the small layer
Tests: ran unit tests
Bugfix: ensure players are detected on the border of the network cell
- expanded unit tests to cover these cases
Tests: ran unit tests
Bugfix: fix HasCloseConnections & GetCloseConnections missing players in medium and large ranges
- also add a unit test to validate Connections overload
Tests: ran unit tests
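The border bugs above share one root cause: a query that only visits the cells strictly inside its radius skips players standing exactly on a cell edge. One common fix, sketched here as a hedged Python illustration (the actual grid code is C# and the function is hypothetical), is to overshoot the cell span by one:

```python
def cells_in_range(x: float, y: float, cell_size: float, radius: float):
    """Every grid cell a query circle can touch, borders included."""
    # Overshoot by one cell so a player standing exactly on a cell
    # border is never skipped by integer truncation.
    span = int(radius // cell_size) + 1
    cx, cy = int(x // cell_size), int(y // cell_size)
    return [(cx + dx, cy + dy)
            for dx in range(-span, span + 1)
            for dy in range(-span, span + 1)]
```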
Bugfix(tests): adjust player spawn distance for TestNetworkRange.OutsideRange cases
They can land exactly on a cell boundary, which makes behavior harder to reason about across the small/medium/large layers
Tests: ran the tests, same results