Latency

In these benchmarks, a cache miss is essentially free. These tests exist purely to compare the raw execution speed of the cache bookkeeping code. In a real setting, where a cache miss is presumably quite expensive, the relative overhead of the cache will be very small.

DISCLAIMER: Always measure performance in the context of your application. The results provided here are intended as a guide.

Benchmarks are based on BenchmarkDotNet, so they are single threaded. Both ConcurrentLru and ConcurrentLfu scale well under concurrent workloads. The relative ranking of each cache implementation is stable across .NET Framework/Core/5/6 and across the CPU architectures available in Azure (e.g. Intel Skylake, AMD Zen); absolute performance can vary.
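
As a rough illustration of how such a lookup benchmark can be structured, the sketch below pre-populates each cache with a single key and measures repeated lookups of that key against a ConcurrentDictionary baseline. This is a minimal sketch assuming the BitFaster.Caching and BenchmarkDotNet packages are referenced; the class and method names (LookupBenchmark, FastConcurrentLruGet) are illustrative, not the actual benchmark source.

```csharp
using System.Collections.Concurrent;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using BitFaster.Caching.Lru;

// Minimal sketch: every lookup hits the same pre-populated key, so only the
// cache bookkeeping cost is measured (a miss never occurs).
[MemoryDiagnoser]
public class LookupBenchmark
{
    private readonly ConcurrentDictionary<int, int> dictionary = new ConcurrentDictionary<int, int>();
    private readonly FastConcurrentLru<int, int> fastLru = new FastConcurrentLru<int, int>(9);

    [GlobalSetup]
    public void Setup()
    {
        dictionary.TryAdd(1, 1);
        fastLru.GetOrAdd(1, k => k);
    }

    [Benchmark(Baseline = true)]
    public int ConcurrentDictionaryGet() => dictionary.TryGetValue(1, out var value) ? value : 0;

    [Benchmark]
    public int FastConcurrentLruGet() => fastLru.GetOrAdd(1, k => k);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<LookupBenchmark>();
}
```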

Raw lookup speed

In this test the same items are fetched repeatedly and no items are evicted. This is representative of a high hit rate scenario with a small number of hot items. ConcurrentDictionary lookup is used to establish a baseline.

  • The ConcurrentLru family does not move items within its queues on a pure cache hit; it simply marks the item as accessed and is therefore very fast. This test highlights the difference in performance between FastConcurrentLru, ConcurrentLru, a FastConcurrentLru with atomic GetOrAdd (AtomicFastLru), FastConcurrentTLru and ConcurrentTLru. All share the same underlying cache algorithm but have successively more features enabled (see the construction sketch after this list).
  • ConcurrentLfu performs a dictionary lookup and records the read in its read buffer. It is configured with the BackgroundThreadScheduler in this test.
  • ClassicLru must maintain item order, so each hit internally splices the fetched item to the head (MRU position) of the linked list.
  • Both the older runtime MemoryCache and the newer extensions MemoryCache are tested. Since the key is an integer, the runtime cache must convert it to a string and the extensions version must box it, hence the memory allocations.

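The variants above differ mainly in which features are enabled at construction time. The sketch below shows one way the compared caches could be instantiated; capacity and TTL values are arbitrary placeholders, and the WithAtomicGetOrAdd builder option used for the AtomicFastLru case is an assumption that may differ between library versions.

```csharp
using System;
using BitFaster.Caching;
using BitFaster.Caching.Lfu;
using BitFaster.Caching.Lru;

// Illustrative construction of the cache variants compared above.
// Capacity and TTL values are arbitrary placeholders.
int capacity = 128;
var ttl = TimeSpan.FromMinutes(5);

var fastLru  = new FastConcurrentLru<int, string>(capacity);        // no hit rate metrics
var lru      = new ConcurrentLru<int, string>(capacity);            // adds hit rate metrics
var fastTlru = new FastConcurrentTLru<int, string>(capacity, ttl);  // adds time-based expiry
var tlru     = new ConcurrentTLru<int, string>(capacity, ttl);      // expiry plus metrics
var lfu      = new ConcurrentLfu<int, string>(capacity);            // scheduler overload omitted here;
                                                                    // the benchmark uses BackgroundThreadScheduler
var classic  = new ClassicLru<int, string>(capacity);               // lock + linked list LRU

// AtomicFastLru corresponds to a builder-configured cache with atomic GetOrAdd.
// WithAtomicGetOrAdd is assumed here and may differ between library versions.
ICache<int, string> atomicLru = new ConcurrentLruBuilder<int, string>()
    .WithCapacity(capacity)
    .WithAtomicGetOrAdd()
    .Build();

// On a pure cache hit, GetOrAdd only marks the item as accessed; no queue reordering occurs.
string value = fastLru.GetOrAdd(42, k => k.ToString());
```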

Tabular Benchmark Data
| Method | Runtime | Mean | StdDev | Ratio | Allocated |
|---|---|---|---|---|---|
| ConcurrentDictionary | .NET 6.0 | 7.660 ns | 0.0591 ns | 1.00 | - |
| FastConcurrentLru | .NET 6.0 | 10.238 ns | 0.0379 ns | 1.34 | - |
| ConcurrentLru | .NET 6.0 | 16.274 ns | 0.0923 ns | 2.12 | - |
| AtomicFastLru | .NET 6.0 | 19.876 ns | 0.2489 ns | 2.59 | - |
| FastConcurrentTLru | .NET 6.0 | 27.807 ns | 0.0966 ns | 3.63 | - |
| ConcurrentTLru | .NET 6.0 | 34.152 ns | 0.2927 ns | 4.45 | - |
| ConcurrentLfu | .NET 6.0 | 37.237 ns | 3.5232 ns | 4.79 | - |
| ClassicLru | .NET 6.0 | 48.369 ns | 0.2581 ns | 6.31 | - |
| RuntimeMemoryCacheGet | .NET 6.0 | 111.611 ns | 0.3618 ns | 14.57 | 32 B |
| ExtensionsMemoryCacheGet | .NET 6.0 | 53.526 ns | 0.3109 ns | 6.99 | 24 B |
| ConcurrentDictionary | .NET Framework 4.8 | 16.301 ns | 0.1134 ns | 1.00 | - |
| FastConcurrentLru | .NET Framework 4.8 | 14.965 ns | 0.1745 ns | 0.92 | - |
| ConcurrentLru | .NET Framework 4.8 | 19.015 ns | 0.1349 ns | 1.17 | - |
| AtomicFastLru | .NET Framework 4.8 | 38.151 ns | 0.5447 ns | 2.34 | - |
| FastConcurrentTLru | .NET Framework 4.8 | 44.523 ns | 0.2839 ns | 2.73 | - |
| ConcurrentTLru | .NET Framework 4.8 | 48.666 ns | 0.4090 ns | 2.98 | - |
| ConcurrentLfu | .NET Framework 4.8 | 69.386 ns | 3.8416 ns | 4.05 | - |
| ClassicLru | .NET Framework 4.8 | 60.704 ns | 0.6360 ns | 3.72 | - |
| RuntimeMemoryCacheGet | .NET Framework 4.8 | 290.357 ns | 4.0296 ns | 17.80 | 32 B |
| ExtensionsMemoryCacheGet | .NET Framework 4.8 | 114.464 ns | 1.1239 ns | 7.02 | 24 B |

Lookup keys with a Zipf distribution

Take 1000 samples of a Zipfian distribution over a set of keys of size N and use the keys to look up values in the cache. If there are N items, the probability of accessing an item numbered i or less is (i / N)^s.

s = 0.86 (yields approx 80/20 distribution)
N = 500

Cache size = N / 10 (so we can cache 10% of the total set). ConcurrentLru has approximately the same computational overhead as a standard LRU in this single threaded test.
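
The sample generation can be sketched by inverting the cumulative distribution given above: since P(rank ≤ i) = (i / N)^s, drawing u uniformly from (0, 1] and taking i = ⌈N · u^(1/s)⌉ produces keys with the desired distribution. The snippet below is a minimal illustration of this inverse-transform step, not the benchmark's actual key generator.

```csharp
using System;

// Inverse-transform sampler for the key distribution described above:
// P(rank <= i) = (i / N)^s, so rank = ceil(N * u^(1/s)) for uniform u in (0, 1].
static int[] SampleZipfKeys(int sampleCount, int n, double s, Random random)
{
    var keys = new int[sampleCount];
    for (int i = 0; i < sampleCount; i++)
    {
        double u = 1.0 - random.NextDouble();                 // uniform in (0, 1]
        keys[i] = (int)Math.Ceiling(n * Math.Pow(u, 1.0 / s));
    }
    return keys;
}

// 1000 samples over N = 500 keys with s = 0.86, matching the parameters above.
int[] samples = SampleZipfKeys(1000, 500, 0.86, new Random(42));
```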

| Method | Mean | Error | StdDev | Ratio | RatioSD |
|---|---|---|---|---|---|
| ClassicLru | 175.7 ns | 2.75 ns | 2.43 ns | 1.00 | 0.00 |
| FastConcurrentLru | 180.2 ns | 2.55 ns | 2.26 ns | 1.03 | 0.02 |
| ConcurrentLru | 189.1 ns | 3.14 ns | 2.94 ns | 1.08 | 0.03 |
| FastConcurrentTLru | 261.4 ns | 4.53 ns | 4.01 ns | 1.49 | 0.04 |
| ConcurrentTLru | 266.1 ns | 3.96 ns | 3.51 ns | 1.51 | 0.03 |