# Home
BitFaster.Caching is a high performance in-memory caching library for .NET.
The most popular caching library for .NET is arguably System.Runtime.Caching.MemoryCache, but it is a heavyweight option with limitations (see below). In particular, MemoryCache is a poor fit when the set of all possible cached values does not fit in memory. In the worst case, a burst of requests can cause everything to be cached; at best this wastes memory (expensive), and at worst it causes thrashing (degraded performance) or out-of-memory failures. I have seen both thrashing and out-of-memory failures in production services. BitFaster.Caching provides bounded-size caches with a focus on performance. By explicitly choosing how many items to cache, the developer controls the cache budget and prevents runaway memory usage.
Since BitFaster.Caching is generic, non-string keys can be used without allocating a new key string for each lookup. A cache provides a speedup when a lookup is faster than computing, querying, or fetching the value. With fast, allocation-free lookups, caching can achieve speedups in low-level code (e.g. fast computations such as parsing or formatting small JSON objects), in addition to RPC calls or disk reads. This allows caching to be plugged into several layers of a program without exploding memory consumption.
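As a minimal sketch of the two points above (a bounded cache with a generic, non-string key), using the library's `ConcurrentLru` class; the capacity and the lookup function here are illustrative placeholders:

```csharp
using System;
using BitFaster.Caching.Lru;

class Example
{
    static string ExpensiveLookup(int key) => $"value-{key}";

    static void Main()
    {
        // Bounded cache: at most 128 items are retained. When capacity is
        // exceeded, least recently used items are evicted, so memory usage
        // cannot grow without limit.
        var lru = new ConcurrentLru<int, string>(128);

        // The key is a plain int; no key string needs to be allocated
        // on the lookup path. The value factory runs only on a miss.
        string value = lru.GetOrAdd(42, k => ExpensiveLookup(k));

        Console.WriteLine(value);
    }
}
```

Because the cache size is an explicit constructor argument, the memory budget is a deliberate design decision rather than an emergent property of traffic.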
MemoryCache is perfectly serviceable, but it has some limitations:
- Lookups require a heap allocation when the natural key is not already a string.
- Is not 'scan' resistant: fetching all keys will load everything into memory.
- Does not scale well with concurrent writes.
- Contains perf counters that can't be disabled.
- Uses a heuristic to estimate memory used, and evicts items using a timer. The 'trim' process may remove useful items, and if the timer does not fire quickly enough, the resulting memory pressure can be problematic (e.g. thrashing, out of memory, increased GC).
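To make the first and last limitations concrete, here is a hedged sketch against `System.Runtime.Caching.MemoryCache`; the key and expiration values are illustrative:

```csharp
using System;
using System.Runtime.Caching;

class Example
{
    static void Main()
    {
        var cache = MemoryCache.Default;
        int key = 42;

        // MemoryCache keys must be strings, so a non-string key forces a
        // new string allocation on every lookup.
        cache.Set(key.ToString(), "value", DateTimeOffset.Now.AddMinutes(5));
        object value = cache.Get(key.ToString());

        // There is no explicit item-count bound: eviction is driven by a
        // background timer and a memory-usage estimate (CacheMemoryLimit /
        // PhysicalMemoryLimit), so under a burst of inserts memory can grow
        // faster than the trim process reclaims it.
        Console.WriteLine(value);
    }
}
```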