v2.3.1
What's changed
- Introduce a simple heuristic to estimate the optimal `ConcurrentDictionary` bucket count for `ConcurrentLru`/`ConcurrentLfu`/`ClassicLru` based on the `capacity` constructor arg. When the cache is at capacity, the `ConcurrentDictionary` will have a prime number bucket count and a load factor of 0.75.
  - When capacity is less than 150 elements, start with a `ConcurrentDictionary` capacity that is a prime number 33% larger than the cache capacity. The initial size is large enough to avoid resizing.
  - For larger caches, pick the `ConcurrentDictionary` initial size using a lookup table. The initial size is approximately 10% of the cache capacity, such that 4 `ConcurrentDictionary` grow operations arrive at a hash table size that is a prime number approximately 33% larger than the cache capacity.
- `SingletonCache` sets the internal `ConcurrentDictionary` capacity to the next prime number greater than the capacity constructor argument.
- Fix an ABA concurrency bug in `Scoped` by changing `ReferenceCount` to use reference equality (via `object.ReferenceEquals`).
- The .NET 6 target is now compiled with `SkipLocalsInit`, giving minor performance gains.
- Simplified `AtomicFactory`/`AsyncAtomicFactory`/`ScopedAtomicFactory`/`ScopedAsyncAtomicFactory` by removing redundant reads, reducing code size.
- `ConcurrentLfu.Count` no longer locks the underlying `ConcurrentDictionary`, matching `ConcurrentLru.Count`.
- Use `CollectionsMarshal.AsSpan` to enumerate candidates within `ConcurrentLfu.Trim` on .NET 6.
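For small caches, the sizing heuristic above amounts to taking 133% of the cache capacity and rounding up to the next prime. A minimal sketch of that arithmetic (illustrative only; `BucketSize` and its methods are hypothetical, not the library's internals):

```csharp
using System;

static class BucketSize
{
    // Hypothetical sketch of the small-cache sizing rule: pick the next
    // prime at least 33% larger than the requested cache capacity.
    public static int InitialSize(int capacity)
    {
        return NextPrimeAtLeast((int)(capacity * 1.33));
    }

    static int NextPrimeAtLeast(int n)
    {
        for (int candidate = Math.Max(n, 2); ; candidate++)
        {
            if (IsPrime(candidate)) return candidate;
        }
    }

    static bool IsPrime(int n)
    {
        if (n % 2 == 0) return n == 2;
        for (int i = 3; i * i <= n; i += 2)
        {
            if (n % i == 0) return false;
        }
        return true;
    }
}
```

For example, a capacity of 100 yields an initial size of 137, which leaves the dictionary at roughly a 0.75 load factor (100/137 ≈ 0.73) when the cache is full.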
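The ABA fix above works because an instance that is torn down and then recreated with identical state is value-equal to the original but is a different object. A standalone illustration of why reference equality is the safe comparison (using a hypothetical record, not the library's `ReferenceCount` type):

```csharp
using System;

// Hypothetical stand-in: C# records use value equality by default, so two
// instances holding the same state compare equal with ==.
record RefCount(object Value, int Count);

class Demo
{
    static void Main()
    {
        var inner = new object();
        var original = new RefCount(inner, 1);
        var recreated = new RefCount(inner, 1); // the "ABA" case: same state, different object

        Console.WriteLine(original == recreated);                       // True: value equality masks the swap
        Console.WriteLine(object.ReferenceEquals(original, recreated)); // False: reference equality detects it
    }
}
```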
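`CollectionsMarshal.AsSpan` (available since .NET 5) exposes a `List<T>`'s backing array directly as a `Span<T>`, so enumeration skips the list enumerator's overhead. A minimal usage sketch (the candidate list here is illustrative, not `ConcurrentLfu.Trim`'s actual code):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

class AsSpanSketch
{
    static void Main()
    {
        // Illustrative list of eviction candidates.
        var candidates = new List<int> { 5, 3, 8, 1 };

        // Span over the list's backing array. Items must not be added to
        // or removed from the list while the span is in use.
        Span<int> span = CollectionsMarshal.AsSpan(candidates);

        int sum = 0;
        foreach (int item in span)
        {
            sum += item;
        }

        Console.WriteLine(sum); // 17
    }
}
```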
Full changelog: v2.3.0...v2.3.1