hi @ben-manes, I have read your answer in this topic and I agree with you. I ran into a similar problem: I use Caffeine to cache plan entries (200K per object) in a SQL engine (Presto). Is there a GC-friendly policy for Caffeine? I used to implement this by hand, but `maximumSize` seems to be statically initialized with the Cache and cannot be updated at runtime. Any tips?
The cache does not monitor available memory to adjust any setting. If you wish to do so, then the maximum can be changed dynamically:

```java
cache.policy().eviction().ifPresent(eviction -> {
  eviction.setMaximum(2 * eviction.getMaximum());
});
```

You can combine size-based eviction with soft references if you want a failsafe. This does add a small amount of per-entry memory overhead, though.

The cache's size eviction policy should be more GC friendly than a typical LRU, which violates the generational hypothesis (making it a GC benchmark for worst-case behavior). A SQL database is MRU / LFU biased, so the cache will auto-configure itself such that the admission policy more readily rejects a recent but infrequently used entry. With that MRU bias, the evicted entry is more likely to still be in the young generation, which should reduce GC pressure.
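For reference, here is a minimal sketch that combines size-based eviction with soft values as a failsafe and resizes the maximum at runtime. The class name, the 10,000 starting bound, and the halving trigger are illustrative assumptions, not values from this thread:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class ResizablePlanCache {
  public static void main(String[] args) {
    // Size-based eviction bounds the cache, while softValues() lets the GC
    // reclaim values under memory pressure as a failsafe (at a small
    // per-entry overhead).
    Cache<String, Object> cache = Caffeine.newBuilder()
        .maximumSize(10_000)   // illustrative starting bound
        .softValues()
        .recordStats()
        .build();

    cache.put("query-1", new Object());

    // The maximum can be adjusted at runtime through the policy API,
    // e.g. halved here; a real application would derive the new bound
    // from its own memory heuristics.
    cache.policy().eviction().ifPresent(eviction ->
        eviction.setMaximum(eviction.getMaximum() / 2));
  }
}
```

Note that `softValues()` causes values to be compared by identity, and soft references add GC churn of their own, so whether the failsafe is worth it depends on the workload.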