Home
BitFaster.Caching is a high-performance in-memory caching library for .NET.
The established caching library for .NET is Microsoft.Extensions.Caching.Memory.MemoryCache. BitFaster.Caching provides bounded-size caches with a focus on performance to address the limitations of MemoryCache.
In most practical applications, only a subset of items can be kept in the cache. The role of the replacement policy is to choose the optimal items to keep in the cache, such that the hit rate is maximized and latency is minimized.
The MemoryCache replacement policy is not documented. Decompiling the code shows that it periodically estimates memory consumption and, if the estimate is above the configured threshold, first attempts to remove items whose absolute/relative expiry time has passed (time-based expiry is optional), then evicts items based on priority. Within each priority, eviction order is undefined. Undefined is not an optimal strategy.
In practice, the probability distribution of key lookups is heavily skewed towards a subset of keys; in other words, some keys are more likely to be accessed than others. When the access pattern is constant over time, a Least Frequently Used (LFU) replacement policy yields the highest hit rate. However, a precise LFU is prohibitively expensive in memory and adapts poorly when the workload changes. A simpler alternative is Least Recently Used (LRU), which is easier to implement and adapts well to changes in access patterns.
BitFaster.Caching provides ClassicLru, with a strict LRU policy, and ConcurrentLru, with a hybrid LRU/LFU policy that significantly outperforms pure LRU.
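For example, creating a bounded ConcurrentLru and fetching a value looks roughly like this (a minimal sketch; the capacity and value factory are placeholders):

```csharp
using BitFaster.Caching.Lru;

// Hold at most 128 items; items are evicted automatically once capacity is reached.
var lru = new ConcurrentLru<int, string>(128);

// Look up the key, invoking the value factory only on a cache miss.
string value = lru.GetOrAdd(42, key => key.ToString());
```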
MemoryCache is bounded by the SizeLimit property. Cache memory consumption is estimated by comparing changes in the cache item count against the total memory consumption of the process, and the estimate is often inaccurate. The worst case for MemoryCache is a fast index scan where the fetched values do not fit in memory. Depending on the number of keys fetched and the rate of incoming lookups, a scan against MemoryCache either wastes memory (expensive: low-value data is cached in memory), causes thrashing (degraded performance), or, worse, runs out of memory (total failure). Avoiding these problems requires tuning ExpirationScanFrequency and CompactionPercentage. Earlier implementations (e.g. System.Runtime.Caching.MemoryCache) attempt to estimate these parameters at runtime and are prone to scan failures.
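For reference, the MemoryCache knobs mentioned above are set via MemoryCacheOptions; a rough sketch (the values are illustrative, not recommendations):

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions
{
    // SizeLimit is in arbitrary units; every entry must then declare its own Size,
    // so the bound is only as good as the sizes the caller supplies.
    SizeLimit = 1024,

    // Fraction of the cache to trim once SizeLimit is exceeded.
    CompactionPercentage = 0.25,

    // How often expired entries are scanned for and removed.
    ExpirationScanFrequency = TimeSpan.FromSeconds(30),
});

// With SizeLimit set, each entry must specify a Size or Set will throw.
cache.Set("key", "value", new MemoryCacheEntryOptions { Size = 1 });
```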
In BitFaster.Caching the cache size is bounded by item count, and limits are enforced on write operations. By explicitly choosing how many items to cache, the developer is in control of the cache budget and runaway memory usage is prevented.
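A quick way to see the bound in action (a sketch; the capacity and loop count are arbitrary, and the final check assumes the cache's Count property):

```csharp
using System;
using BitFaster.Caching.Lru;

var lru = new ConcurrentLru<int, int>(100);

// Eviction happens on the write path, so writing far more items than the
// capacity never grows the cache beyond its configured size.
for (int i = 0; i < 10_000; i++)
{
    lru.GetOrAdd(i, k => k);
}

Console.WriteLine(lru.Count); // bounded by the configured capacity (100)
```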
Since BitFaster.Caching is generic, non-string keys can be used without allocating a new key string for each lookup. A cache provides a speedup when a lookup is faster than computing, querying, or fetching the value. With faster lookups that don't allocate, caching can achieve speedups in lower-level code (e.g. fast computations such as parsing or JSON formatting of small objects), in addition to RPC calls or disk reads. This enables caching to be plugged into several layers of the program without exploding ephemeral memory allocations.
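For instance, a composite value-type key can be used directly, so hot-path lookups build no key string and box nothing (a sketch; the key struct and loader are hypothetical):

```csharp
using BitFaster.Caching.Lru;

// Hypothetical composite key. Because the cache is generic, the struct is used
// directly: no key string is built and the key is never boxed.
public readonly record struct DocumentKey(int TenantId, long ObjectId);

public class DocumentCache
{
    private readonly ConcurrentLru<DocumentKey, Document> cache = new(1024);

    public Document Get(int tenantId, long objectId)
    {
        // The lookup itself does not allocate; the loader runs only on a miss.
        return cache.GetOrAdd(new DocumentKey(tenantId, objectId), static k => Load(k));
    }

    // Hypothetical loader standing in for a parse, query, or fetch.
    private static Document Load(DocumentKey key) => new();
}

public record Document;
```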
MemoryCache has these limitations:
- Lookups require heap allocations when the natural key type is not a string: value-type keys must be boxed or converted to strings (see the sketch after this list).
- Is not 'scan' resistant: fetching all keys will attempt to load everything into memory, displacing useful items and inflating memory use.
- Non-optimal eviction policy. MemoryCache uses a heuristic to estimate memory used, and evicts items using a timer-based background thread. The 'trim' process may remove useful items, and if the timer does not fire fast enough the resulting memory pressure can be problematic (e.g. thrashing, out of memory, increased GC).
- Does not scale well with concurrent writes.
- Contains perf counters that can't be disabled.
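To illustrate the first point: IMemoryCache keys are typed as object, so a value-type key is boxed on every call, while a generic ConcurrentLru keeps it unboxed (a sketch for contrast; keys and values are arbitrary):

```csharp
using BitFaster.Caching.Lru;
using Microsoft.Extensions.Caching.Memory;

var memoryCache = new MemoryCache(new MemoryCacheOptions());
var lru = new ConcurrentLru<int, string>(128);

int key = 42;

// IMemoryCache keys are object: the int key is boxed on every Set/TryGetValue.
memoryCache.Set(key, "value");
memoryCache.TryGetValue(key, out string cached);

// The generic cache uses the int key directly; no per-lookup allocation.
lru.GetOrAdd(key, k => "value");
lru.TryGet(key, out string hit);
```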