Releases: bitfaster/BitFaster.Caching

v2.5.2

15 Sep 01:04
be68ba5

What's changed

  • Fix race between update and TryRemove(KeyValuePair) for both ConcurrentLru and ConcurrentLfu. Prior to this fix, values could be deleted if the value was updated to no longer match the TryRemove input argument while TryRemove was executing.
  • Fix ConcurrentLfu torn writes for large structs using SeqLock (see the sketch below).
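
A SeqLock protects readers from observing a half-written (torn) multi-word struct without taking a lock on the read path. Below is a minimal sketch of the general pattern, assuming a single writer (or an external write lock); it is illustrative only, not the library's internal type:

```csharp
using System.Threading;

public class SeqLockBox<T> where T : struct
{
    private int sequence;
    private T value;

    // Assumes one writer at a time (e.g. guarded by an external lock).
    public void Write(T newValue)
    {
        Interlocked.Increment(ref this.sequence); // odd: write in progress
        this.value = newValue;                    // struct copy may tear here
        Interlocked.Increment(ref this.sequence); // even: write complete
    }

    public T Read()
    {
        while (true)
        {
            int start = Volatile.Read(ref this.sequence);
            T copy = this.value;                  // may observe a torn value
            Thread.MemoryBarrier();               // order the copy before the re-check
            if (start == this.sequence && (start & 1) == 0)
            {
                return copy;                      // sequence unchanged and even: copy is consistent
            }
        }
    }
}
```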

Full changelog: v2.5.1...v2.5.2

v2.5.1

09 Jun 22:40
fb787c1

What's changed

  • Fix ConcurrentLfu time-based expiry policy failing to update the entry's expiry on read. Prior to this fix, expiry was only updated when the read buffer was processed (following a cache write, or when the read buffer was full).
  • Fix ConcurrentLru torn writes for large structs using SeqLock.
  • Fix torn writes for 64-bit current time on 32-bit platforms for ConcurrentLru AfterAccessPolicy and DiscretePolicy.
  • P/Invoke TickCount64 to evaluate the current time for .NET Standard on Windows; Duration.SinceEpoch is 5x faster, resulting in lower-latency lookups for ConcurrentTLru/ConcurrentTLfu (see the sketch below).
  • Use Stopwatch.GetTimestamp to evaluate the current time on macOS; Duration.SinceEpoch is about 20% faster, resulting in slightly lower-latency lookups for ConcurrentTLru/ConcurrentTLfu.
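
For reference, a minimal sketch of the P/Invoke shape for the Windows tick counter (the library's exact declaration may differ):

```csharp
using System;
using System.Runtime.InteropServices;

internal static class WindowsTickCount
{
    // GetTickCount64 returns milliseconds elapsed since system start. It is
    // monotonic and very cheap to call, which is what makes a time source
    // built on it faster than a DateTime.UtcNow-based one on .NET Standard.
    [DllImport("kernel32.dll")]
    private static extern ulong GetTickCount64();

    public static TimeSpan Uptime() => TimeSpan.FromMilliseconds(GetTickCount64());
}
```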

Full changelog: v2.5.0...v2.5.1

v2.5.0

09 May 02:02
1621b21

What's changed

  • Provide time-based expiry for ConcurrentLfu, matching ConcurrentLru. This closely follows the implementation in Java's Caffeine, using a port of Caffeine's hierarchical timer wheel to perform all operations in O(1) time. Expire after write, expire after access, and custom per-item expiry using IExpiryCalculator can be configured via ConcurrentLfuBuilder extension methods (see the example below).
  • Provide ICacheExt and IAsyncCacheExt to enable client code compiled against .NET Standard to use the builder APIs and cache methods added since v2.0. These new methods are excluded from the base interfaces for .NET Standard, since adding them would be a breaking change.
  • Provide the Duration convenience methods FromHours and FromDays.
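
As an illustration, configuring the new expiry modes via the builder might look like this (method names follow the notes above; check the current API for exact signatures):

```csharp
using System;
using BitFaster.Caching;
using BitFaster.Caching.Lfu;

// Expire entries a fixed duration after they were last written:
ICache<int, string> afterWrite = new ConcurrentLfuBuilder<int, string>()
    .WithCapacity(1024)
    .WithExpireAfterWrite(TimeSpan.FromMinutes(5))
    .Build();

// Expire entries a fixed duration after the most recent read or write:
ICache<int, string> afterAccess = new ConcurrentLfuBuilder<int, string>()
    .WithCapacity(1024)
    .WithExpireAfterAccess(TimeSpan.FromMinutes(5))
    .Build();
```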

Full changelog: v2.4.1...v2.5.0

v2.4.1

11 Dec 03:03
2d08830

What's changed

  • Fixed a race condition in ConcurrentLfu for add-remove-add of the same key.
  • MpscBoundedBuffer.Clear() is now thread safe, fixing a race in ConcurrentLfu clear.
  • Fixed ConcurrentLru Count and IEnumerable<KeyValuePair<K,V>> to filter out expired items when used with time-based expiry.
  • BitFaster.Caching is now compiled with <nullable>enable</nullable>, and APIs are annotated to support null reference type static analysis.

Full changelog: v2.4.0...v2.4.1

v2.4.0

24 Nov 22:26
87aad5b

What's changed

  • Provide two new time-based expiry schemes for ConcurrentLru:
    • Expire after access: evict after a fixed duration since an entry's most recent read or write. This is equivalent to MemoryCache's sliding expiry, and is useful for data bound to a session that expires due to inactivity.
    • Per item expiry time: evict after a duration calculated for each item using the specified IExpiryCalculator. Expiry time may be set independently at creation, after a read, and after a write (see the sketch after this list).
  • Align TryRemove overloads with ConcurrentDictionary for IAsyncCache and AsyncAtomicFactory, matching the implementation for ICache added in v2.3.0. This adds two new overloads:
    • bool TryRemove(K key, out V value) - enables getting the value that was removed.
    • bool TryRemove(KeyValuePair<K, V> item) - enables removing an item only when the key and value are the same.
  • Add extension methods to make it more convenient to use AsyncAtomicFactory with a plain ConcurrentDictionary. This is similar to storing an AsyncLazy<T> instead of T, but with the same exception propagation semantics and API as ConcurrentDictionary.GetOrAdd.
  • BitFaster.Caching assembly marked as trim compatible to enable trimming when used in native AOT applications.
  • AtomicFactory value initialization logic modified to mitigate lock convoys, based on the approach given here.
  • Fixed ConcurrentLru.Clear to correctly handle removed items present in the internal bookkeeping data structures.
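
As a sketch of the per-item scheme, an IExpiryCalculator might give failed lookups a short lifetime and successes a longer one. The Result type here is hypothetical, and the method shapes follow the interface described above; verify against the current API:

```csharp
using BitFaster.Caching;

public record Result(bool IsError);

public class ResultExpiryCalculator : IExpiryCalculator<string, Result>
{
    // Short lifetime for errors so they are retried sooner.
    public Duration GetExpireAfterCreate(string key, Result value)
        => value.IsError ? Duration.FromMinutes(1) : Duration.FromMinutes(10);

    // Keep the remaining lifetime unchanged on read.
    public Duration GetExpireAfterRead(string key, Result value, Duration current)
        => current;

    // Recompute the lifetime when the value is replaced.
    public Duration GetExpireAfterUpdate(string key, Result value, Duration current)
        => GetExpireAfterCreate(key, value);
}
```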

Full changelog: v2.3.3...v2.4.0

v2.3.3

11 Nov 21:36
532db75

What's changed

  • Eliminated all races in ConcurrentLru eviction logic, and the transition between the cold cache and warm cache eviction routines. This prevents a variety of rare 'off by one item count' situations that could needlessly evict items when the cache is within bounds.
  • Fix ConcurrentLru.Clear() to always clear the cache when items in the warm queue are marked as accessed.
  • Optimize ConcurrentLfu drain buffers logic to give ~5% better throughput (measured by the eviction throughput test).
  • Cache the ConcurrentLfu drain buffers delegate to prevent allocating a closure when scheduling maintenance (see the sketch after this list).
  • BackgroundThreadScheduler and ThreadPoolScheduler now use TaskScheduler.Default, instead of implicitly using TaskScheduler.Current (fixes CA2008).
  • ScopedAsyncCache now internally calls ConfigureAwait(false) when awaiting tasks (fixes CA2007).
  • Fix ConcurrentLru debugger display on .NET Standard.
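
The closure fix follows a common pattern: allocate the scheduling delegate once and reuse it, rather than creating a fresh delegate on every maintenance call. A hedged sketch of the shape (not the library's actual code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class MaintenanceSketch
{
    private readonly Action drainBuffers;

    public MaintenanceSketch()
    {
        // Allocate once; writing `this.DrainBuffers` at each call site would
        // allocate a new delegate on every schedule.
        this.drainBuffers = this.DrainBuffers;
    }

    public void Schedule()
    {
        // Passing TaskScheduler.Default explicitly avoids implicitly capturing
        // TaskScheduler.Current (the CA2008 fix mentioned above).
        Task.Factory.StartNew(this.drainBuffers, CancellationToken.None,
            TaskCreationOptions.DenyChildAttach, TaskScheduler.Default);
    }

    private void DrainBuffers() { /* process pending read/write buffer entries */ }
}
```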

Full changelog: v2.3.2...v2.3.3

v2.3.2

25 Oct 00:31
1352584

What's changed

  • Fix ConcurrentLru NullReferenceException when expiring and disposing null values (i.e. the cached value is a reference type and the caller cached a null value); a hypothetical repro sketch follows this list.
  • Fix ConcurrentLfu handling of updates to detached nodes, caused by concurrent reads and writes. Detached nodes could be re-attached to the probation LRU, pushing out fresh items prematurely, but would eventually expire since they could no longer be accessed.
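
A hypothetical repro shape for the null value fix (builder method names assumed from other releases in this changelog):

```csharp
using System;
using BitFaster.Caching;
using BitFaster.Caching.Lru;

ICache<int, string> cache = new ConcurrentLruBuilder<int, string>()
    .WithCapacity(128)
    .WithExpireAfterWrite(TimeSpan.FromSeconds(1))
    .Build();

// Caching a null reference-typed value is legal; prior to v2.3.2 the cache
// could throw NullReferenceException when this entry later expired and was
// handed to the dispose path.
string value = cache.GetOrAdd(42, k => null);
```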

Full changelog: v2.3.1...v2.3.2

v2.3.1

22 Oct 23:50
60e78bf

What's changed

  • Introduce a simple heuristic to estimate the optimal ConcurrentDictionary bucket count for ConcurrentLru/ConcurrentLfu/ClassicLru based on the capacity constructor argument. When the cache is at capacity, the ConcurrentDictionary will have a prime number bucket count and a load factor of about 0.75 (see the arithmetic sketch after this list).
    • When capacity is less than 150 elements, start with a ConcurrentDictionary capacity that is a prime number 33% larger than cache capacity. Initial size is large enough to avoid resizing.
    • For larger caches, pick ConcurrentDictionary initial size using a lookup table. Initial size is approximately 10% of the cache capacity such that 4 ConcurrentDictionary grow operations will arrive at a hash table size that is a prime number approximately 33% larger than cache capacity.
  • SingletonCache sets the internal ConcurrentDictionary capacity to the next prime number greater than the capacity constructor argument.
  • Fix ABA concurrency bug in Scoped by changing ReferenceCount to use reference equality (via object.ReferenceEquals).
  • .NET6 target is now compiled with SkipLocalsInit, yielding minor performance gains.
  • Simplified AtomicFactory/AsyncAtomicFactory/ScopedAtomicFactory/ScopedAsyncAtomicFactory by removing redundant reads, reducing code size.
  • ConcurrentLfu.Count no longer locks the underlying ConcurrentDictionary, matching ConcurrentLru.Count.
  • Use CollectionsMarshal.AsSpan to enumerate candidates within ConcurrentLfu.Trim on .NET6.
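
Illustrative arithmetic for the small-cache case (not the library's code): for capacity 100, 33% larger is 133, and the next prime is 137, so a full cache sees a load factor of 100/137 ≈ 0.73.

```csharp
// Hypothetical helpers demonstrating the small-cache sizing rule above.
internal static class SizingSketch
{
    public static int InitialBuckets(int capacity) => NextPrime((int)(capacity * 1.33));

    private static int NextPrime(int n)
    {
        while (!IsPrime(n)) n++;
        return n;
    }

    private static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
        {
            if (n % i == 0) return false;
        }
        return true;
    }
}

// SizingSketch.InitialBuckets(100) == 137: the next prime above 133, so a
// full cache of 100 items sees a load factor of 100 / 137 ≈ 0.73.
```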

Full changelog: v2.3.0...v2.3.1

v2.3.0

06 Oct 01:22
4330c16

What's changed

  • Align TryRemove overloads with ConcurrentDictionary for ICache (including WithAtomicGetOrAdd). This adds two new overloads (see the usage sketch after this list):
    • bool TryRemove(K key, out V value) - enables getting the value that was removed.
    • bool TryRemove(KeyValuePair<K, V> item) - enables removing an item only when the key and value are the same.
  • Fix ConcurrentLfu.Clear() to remove all values when using BackgroundThreadScheduler. Previously, values could be left behind after clear was called, due to removed items present in window/protected/probation polluting the list of candidates to remove.
  • Fix ConcurrentLru.Clear() to reset the isWarm flag. Now cache warmup behaves the same for a new instance of ConcurrentLru as for an existing instance that was full and then cleared. Previously, ConcurrentLru could have reduced capacity during warmup after calling Clear, depending on the access pattern.
  • Add extension methods to make it more convenient to use AtomicFactory with a plain ConcurrentDictionary. This is similar to storing a Lazy<T> instead of T, but with the same exception propagation semantics and API as ConcurrentDictionary.GetOrAdd.
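
A short usage sketch of the two overloads (cache construction shown for completeness; keys and values are arbitrary):

```csharp
using System;
using System.Collections.Generic;
using BitFaster.Caching;
using BitFaster.Caching.Lru;

ICache<string, string> cache = new ConcurrentLruBuilder<string, string>()
    .WithCapacity(128)
    .Build();

cache.AddOrUpdate("key", "value1");

// Remove and observe the value that was removed:
if (cache.TryRemove("key", out string removed))
{
    Console.WriteLine($"removed {removed}");
}

// Compare-and-remove: only removes when both key and value match the current
// entry, which is the operation whose update race was later fixed in v2.5.2.
cache.AddOrUpdate("key", "value1");
bool removedPair = cache.TryRemove(new KeyValuePair<string, string>("key", "value1"));
```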

Full changelog: v2.2.1...v2.3.0

v2.2.1

22 Aug 02:00
929b2cf

What's changed

  • Fix a ConcurrentLru bug where a repeated pattern of sequential key access could lead to unbounded growth.
  • Use Span APIs within MpscBoundedBuffer/StripedMpscBuffer/ConcurrentLfu on .NET6/.NETCore3.1 build targets. Reduces ConcurrentLfu lookup latency by about 5-7% in the lookup benchmark.

Full changelog: v2.2.0...v2.2.1