Releases · bitfaster/BitFaster.Caching
v2.5.2
What's changed
- Fix race between update and `TryRemove(KeyValuePair)` for both `ConcurrentLru` and `ConcurrentLfu`. Prior to this fix, values could be deleted if the value was updated to no longer match the `TryRemove` input argument while `TryRemove` was executing (see the sketch below).
- Fix `ConcurrentLfu` torn writes for large structs using SeqLock.
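
A minimal sketch of the conditional removal this fix hardens, using the `TryRemove(KeyValuePair<K, V>)` overload added in v2.3.0:

```csharp
using System.Collections.Generic;
using BitFaster.Caching.Lru;

var cache = new ConcurrentLru<string, int>(128);
cache.AddOrUpdate("a", 1);

// Remove "a" only if it still maps to 1. Prior to this fix, a concurrent
// AddOrUpdate("a", 2) racing with this call could delete the updated
// value even though the pair no longer matched.
bool removed = cache.TryRemove(new KeyValuePair<string, int>("a", 1));
```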
Full changelog: v2.5.1...v2.5.2
v2.5.1
What's changed
- Fix `ConcurrentLfu` time-based expiry policy failing to update the entry's expiry on read. Prior to this fix, expiry was only updated when the read buffer was processed (following a cache write, or when the read buffer was full).
- Fix `ConcurrentLru` torn writes for large structs using SeqLock.
- Fix torn writes for the 64-bit current time on 32-bit platforms for `ConcurrentLru` `AfterAccessPolicy` and `DiscretePolicy`.
- P/Invoke `TickCount64` to evaluate the current time for .NET Standard on Windows (see the sketch below). `Duration.SinceEpoch` is 5x faster, resulting in lower latency lookups for `ConcurrentTLru`/`ConcurrentTLfu`.
- Use `Stopwatch.GetTimestamp` to evaluate the current time on macOS. `Duration.SinceEpoch` is about 20% faster, resulting in slightly lower latency lookups for `ConcurrentTLru`/`ConcurrentTLfu`.
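
For context, a minimal sketch of the Win32 call involved; the library's actual declaration may differ:

```csharp
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // GetTickCount64 returns milliseconds since boot as a single 64-bit
    // value, avoiding GetTickCount's 49-day wraparound, and is far
    // cheaper than DateTime.UtcNow on .NET Standard targets.
    [DllImport("kernel32.dll")]
    internal static extern ulong GetTickCount64();
}
```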
Full changelog: v2.5.0...v2.5.1
v2.5.0
What's changed
- Provide time-based expiry for `ConcurrentLfu`, matching `ConcurrentLru`. This closely follows the implementation in Java's Caffeine, using a port of Caffeine's hierarchical timer wheel to perform all operations in O(1) time. Expire after write, expire after access, and expire after using `IExpiryCalculator` can be configured via `ConcurrentLfuBuilder` extension methods (see the sketch below).
- Provide `ICacheExt` and `IAsyncCacheExt` to enable client code compiled against .NET Standard to use the builder APIs and cache methods added since v2.0. These new methods are excluded from the base interfaces for .NET Standard, since adding them would be a breaking change.
- Provide the `Duration` convenience methods `FromHours` and `FromDays`.
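
A sketch of opting in to the new expiry through the builder; the `WithCapacity`, `WithExpireAfterWrite`, and `Build` names are assumed to follow the existing `ConcurrentLruBuilder` conventions:

```csharp
using System;
using BitFaster.Caching;
using BitFaster.Caching.Lfu;

// Assumed builder extension naming, mirroring ConcurrentLruBuilder.
ICache<string, string> cache = new ConcurrentLfuBuilder<string, string>()
    .WithCapacity(1024)
    .WithExpireAfterWrite(TimeSpan.FromMinutes(5))
    .Build();
```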
Full changelog: v2.4.1...v2.5.0
v2.4.1
What's changed
- Fixed a race condition in `ConcurrentLfu` for add-remove-add of the same key. `MpscBoundedBuffer.Clear()` is now thread safe, fixing a race in `ConcurrentLfu` clear.
- Fixed `ConcurrentLru` `Count` and `IEnumerable<KeyValuePair<K,V>>` to filter out expired items when used with time-based expiry.
- BitFaster.Caching is now compiled with `<Nullable>enable</Nullable>`, and APIs are annotated to support nullable reference type static analysis.
Full changelog: v2.4.0...v2.4.1
v2.4.0
What's changed
- Provide two new time-based expiry schemes for `ConcurrentLru`:
  - Expire after access: evict after a fixed duration since an entry's most recent read or write. This is equivalent to MemoryCache's sliding expiry, and is useful for data bound to a session that expires due to inactivity.
  - Per-item expiry time: evict after a duration calculated for each item using the specified `IExpiryCalculator` (see the sketch after this list). Expiry time may be set independently at creation, after a read, and after a write.
- Align `TryRemove` overloads with `ConcurrentDictionary` for `IAsyncCache` and `AsyncAtomicFactory`, matching the implementation for `ICache` added in v2.3.0. This adds two new overloads:
  - `bool TryRemove(K key, out V value)` - enables getting the value that was removed.
  - `bool TryRemove(KeyValuePair<K, V> item)` - enables removing an item only when the key and value are the same.
- Add extension methods to make it more convenient to use `AsyncAtomicFactory` with a plain `ConcurrentDictionary`. This is similar to storing an `AsyncLazy<T>` instead of `T`, but with the same exception propagation semantics and API as `ConcurrentDictionary.GetOrAdd`.
- BitFaster.Caching assembly marked as trim compatible to enable trimming when used in native AOT applications.
- `AtomicFactory` value initialization logic modified to mitigate lock convoys, based on the approach given here.
- Fixed `ConcurrentLru.Clear` to correctly handle removed items present in the internal bookkeeping data structures.
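
A sketch of per-item expiry; the `IExpiryCalculator` member names and signatures below are assumptions based on the create/read/update hooks described above:

```csharp
using BitFaster.Caching;

// Assumed interface shape: one hook per event, each returning the
// Duration until the entry should expire.
public class TieredExpiry : IExpiryCalculator<string, string>
{
    // Short TTL for transient "tmp:" keys, one hour for everything else.
    public Duration GetExpireAfterCreate(string key, string value)
        => key.StartsWith("tmp:") ? Duration.FromMinutes(1) : Duration.FromMinutes(60);

    // Reads leave the remaining time unchanged.
    public Duration GetExpireAfterRead(string key, string value, Duration current)
        => current;

    // Updates reset the expiry using the creation policy.
    public Duration GetExpireAfterUpdate(string key, string value, Duration current)
        => GetExpireAfterCreate(key, value);
}
```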
Full changelog: v2.3.3...v2.4.0
v2.3.3
What's changed
- Eliminated all races in `ConcurrentLru` eviction logic, and in the transition between the cold cache and warm cache eviction routines. This prevents a variety of rare 'off by one item count' situations that could needlessly evict items when the cache is within bounds.
- Fix `ConcurrentLru.Clear()` to always clear the cache when items in the warm queue are marked as accessed.
- Optimize `ConcurrentLfu` drain buffers logic to give ~5% better throughput (measured by the eviction throughput test).
- Cache the `ConcurrentLfu` drain buffers delegate to prevent allocating a closure when scheduling maintenance.
- `BackgroundThreadScheduler` and `ThreadPoolScheduler` now use `TaskScheduler.Default`, instead of implicitly using `TaskScheduler.Current` (fixes CA2008; see the sketch below).
- `ScopedAsyncCache` now internally calls `ConfigureAwait(false)` when awaiting tasks (fixes CA2007).
- Fix `ConcurrentLru` debugger display on .NET Standard.
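
For reference, the CA2008 pattern is to pass an explicit scheduler rather than inheriting the ambient one; a minimal sketch, not the library's actual code:

```csharp
using System.Threading;
using System.Threading.Tasks;

// Passing TaskScheduler.Default explicitly pins the work to the thread
// pool, rather than whatever TaskScheduler.Current happens to be.
Task task = Task.Factory.StartNew(
    () => { /* drain cache buffers */ },
    CancellationToken.None,
    TaskCreationOptions.DenyChildAttach,
    TaskScheduler.Default);
```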
Full changelog: v2.3.2...v2.3.3
v2.3.2
What's changed
- Fix `ConcurrentLru` `NullReferenceException` when expiring and disposing null values (i.e. the cached value is a reference type, and the caller cached a null value).
- Fix `ConcurrentLfu` handling of updates to detached nodes, caused by concurrent reads and writes. Detached nodes could be re-attached to the probation LRU, pushing out fresh items prematurely, but would eventually expire since they can no longer be accessed.
Full changelog: v2.3.1...v2.3.2
v2.3.1
What's changed
- Introduce a simple heuristic to estimate the optimal `ConcurrentDictionary` bucket count for `ConcurrentLru`/`ConcurrentLfu`/`ClassicLru` based on the `capacity` constructor arg (see the sketch below). When the cache is at capacity, the `ConcurrentDictionary` will have a prime number bucket count and a load factor of 0.75.
  - When capacity is less than 150 elements, start with a `ConcurrentDictionary` capacity that is a prime number 33% larger than cache capacity. Initial size is large enough to avoid resizing.
  - For larger caches, pick the `ConcurrentDictionary` initial size using a lookup table. Initial size is approximately 10% of the cache capacity, such that 4 `ConcurrentDictionary` grow operations will arrive at a hash table size that is a prime number approximately 33% larger than cache capacity.
- `SingletonCache` sets the internal `ConcurrentDictionary` capacity to the next prime number greater than the capacity constructor argument.
- Fix ABA concurrency bug in `Scoped` by changing `ReferenceCount` to use reference equality (via `object.ReferenceEquals`).
- .NET6 target now compiled with `SkipLocalsInit`. Minor performance gains.
- Simplified `AtomicFactory`/`AsyncAtomicFactory`/`ScopedAtomicFactory`/`ScopedAsyncAtomicFactory` by removing redundant reads, reducing code size.
- `ConcurrentLfu.Count` now does not lock the underlying `ConcurrentDictionary`, matching `ConcurrentLru.Count`.
- Use `CollectionsMarshal.AsSpan` to enumerate candidates within `ConcurrentLfu.Trim` on .NET6.
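
A sketch of the small-cache sizing arithmetic described above; `NextPrime` is a hypothetical helper, and the library's actual table-driven implementation differs:

```csharp
public static class DictionarySizing
{
    // Target a prime bucket count ~33% larger than the cache capacity,
    // giving a ~0.75 load factor once the cache is full.
    public static int EstimateBucketCount(int cacheCapacity)
        => NextPrime((int)(cacheCapacity * 1.33));

    static int NextPrime(int n)
    {
        while (!IsPrime(n)) { n++; }
        return n;
    }

    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
        {
            if (n % i == 0) return false;
        }
        return true;
    }
}
```

For example, `EstimateBucketCount(128)` yields 173, so a full cache of 128 items sits at a load factor of 128/173 ≈ 0.74.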
Full changelog: v2.3.0...v2.3.1
v2.3.0
What's changed
- Align `TryRemove` overloads with `ConcurrentDictionary` for `ICache` (including `WithAtomicGetOrAdd`). This adds two new overloads:
  - `bool TryRemove(K key, out V value)` - enables getting the value that was removed.
  - `bool TryRemove(KeyValuePair<K, V> item)` - enables removing an item only when the key and value are the same.
- Fix `ConcurrentLfu.Clear()` to remove all values when using `BackgroundThreadScheduler`. Previously, values could be left behind after clear was called, due to removed items present in window/protected/probation polluting the list of candidates to remove.
- Fix `ConcurrentLru.Clear()` to reset the isWarm flag. Now cache warmup behaves the same for a new instance of `ConcurrentLru` vs an existing instance that was full then cleared. Previously `ConcurrentLru` could have reduced capacity during warmup after calling clear, depending on the access pattern.
- Add extension methods to make it more convenient to use `AtomicFactory` with a plain `ConcurrentDictionary` (see the sketch below). This is similar to storing a `Lazy<T>` instead of `T`, but with the same exception propagation semantics and API as `ConcurrentDictionary.GetOrAdd`.
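
A sketch of the extension in use; the `GetOrAdd` extension name and the `BitFaster.Caching.Atomic` namespace are assumptions:

```csharp
using System.Collections.Concurrent;
using BitFaster.Caching.Atomic;

var dictionary = new ConcurrentDictionary<string, AtomicFactory<string, string>>();

// The factory runs at most once per key under concurrent callers, and
// exceptions propagate to the caller, matching the semantics and shape
// of ConcurrentDictionary.GetOrAdd.
string value = dictionary.GetOrAdd("key", k => k.ToUpperInvariant());
```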
Full changelog: v2.2.1...v2.3.0
v2.2.1
What's changed
- Fix a `ConcurrentLru` bug where a repeated pattern of sequential key access could lead to unbounded growth.
- Use Span APIs within `MpscBoundedBuffer`/`StripedMpscBuffer`/`ConcurrentLfu` on the .NET6/.NET Core 3.1 build targets. Reduces `ConcurrentLfu` lookup latency by about 5-7% in the lookup benchmark.
Full changelog: v2.2.0...v2.2.1