
v2.3.1

bitfaster released this 22 Oct 23:50 · 169 commits to main since this release · 60e78bf

What's changed

  • Introduced a simple heuristic to estimate the optimal ConcurrentDictionary bucket count for ConcurrentLru/ConcurrentLfu/ClassicLru based on the capacity constructor argument. When the cache is at capacity, the ConcurrentDictionary will have a prime-number bucket count and a load factor of 0.75.
    • When capacity is less than 150 elements, start with a ConcurrentDictionary capacity that is a prime number 33% larger than the cache capacity. The initial size is large enough that no resizing occurs.
    • For larger caches, pick the initial ConcurrentDictionary size from a lookup table. The initial size is approximately 10% of the cache capacity, such that 4 ConcurrentDictionary grow operations arrive at a hash table size that is a prime number approximately 33% larger than the cache capacity.
  • SingletonCache sets the internal ConcurrentDictionary capacity to the next prime number greater than the capacity constructor argument.
  • Fixed an ABA concurrency bug in Scoped by changing ReferenceCount to use reference equality (via object.ReferenceEquals).
  • The .NET6 target is now compiled with SkipLocalsInit, yielding minor performance gains.
  • Simplified AtomicFactory/AsyncAtomicFactory/ScopedAtomicFactory/ScopedAsyncAtomicFactory by removing redundant reads, reducing code size.
  • ConcurrentLfu.Count no longer locks the underlying ConcurrentDictionary, matching the behavior of ConcurrentLru.Count.
  • ConcurrentLfu.Trim now uses CollectionsMarshal.AsSpan to enumerate candidates on .NET6.
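The bucket-sizing arithmetic above can be sketched numerically. The following Python model is illustrative only, not the library's C# implementation: `next_prime`, `initial_buckets`, and `grow` are hypothetical names, and the real .NET ConcurrentDictionary resize policy differs in detail. It shows why a prime bucket count roughly 33% above capacity gives a load factor near 0.75 (capacity / (1.33 × capacity) ≈ 0.75), and how four doubling grows from a small initial size land near that target.

```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def next_prime(n):
    # Smallest prime >= n.
    while not is_prime(n):
        n += 1
    return n

def initial_buckets(capacity):
    # Target: a prime bucket count ~33% larger than capacity,
    # giving a load factor of capacity / target ~= 0.75 when full.
    target = next_prime(capacity * 4 // 3 + 1)
    if capacity < 150:
        # Small caches: start at the target so no resize is ever needed.
        return target
    # Larger caches: start near 1/16 of the target (~10% of capacity),
    # so that four doubling grow operations land near the target size.
    return next_prime(target // 16)

def grow(buckets):
    # Simplified grow step: next prime >= double the current size.
    return next_prime(buckets * 2)
```

For example, a cache of capacity 1000 targets a prime near 1334 buckets, starts at a small prime near 85, and reaches a prime above capacity after four grows.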
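The ABA fix can also be illustrated with a small model. This Python sketch uses hypothetical names and is not the library's code: it shows how a compare-and-swap that uses value equality can be fooled by a look-alike ReferenceCount object, while reference equality (Python's `is`, analogous to object.ReferenceEquals) rejects the substitution.

```python
class ReferenceCount:
    """Immutable holder pairing a scoped value with its reference count."""
    def __init__(self, value, count):
        self.value = value
        self.count = count

    # Value equality: a freshly allocated holder with the same fields
    # compares equal to the original -- the root of the ABA hazard.
    def __eq__(self, other):
        return (isinstance(other, ReferenceCount)
                and self.value == other.value
                and self.count == other.count)

def compare_exchange(cell, expected, new, by_reference):
    """Simulated interlocked compare-exchange on cell[0]."""
    current = cell[0]
    same = current is expected if by_reference else current == expected
    if same:
        cell[0] = new
        return True
    return False
```

With value equality, a CAS whose expected holder was already released and replaced by an equal-looking new object still succeeds, silently corrupting the count; comparing by reference detects the swap and forces a retry.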

Full changelog: v2.3.0...v2.3.1