Is it possible to share the cache capacity for multiple caches? #1249
-
My question is about sharing a single cache capacity across multiple cache instances. For example, I have one global cache size limit and multiple cache instances whose combined size must not exceed that global limit. If there were a way to do that, I could also drop a particular cache instance, which would be equivalent to iterating, filtering, and evicting the items manually if I used a single cache instance. Alternatively, could there be a way to provide an external expiration predicate, say, a lambda that accepts the key and tells whether the item is expired regardless of other conditions? That would make it possible to evict those entries earlier.
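(Editor's note: the closest existing mechanism to a per-entry expiration predicate in Caffeine is the `Expiry` interface passed to `Caffeine.expireAfter`, which computes each entry's lifetime from its key and value. A minimal sketch; the `tmp:` key prefix and the chosen durations are arbitrary illustrations, not part of the question:)

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.Expiry;
import java.util.concurrent.TimeUnit;

public class PredicateExpiry {
  // Hypothetical policy: derive an entry's lifetime from its key alone.
  static long lifetimeNanos(String key) {
    return key.startsWith("tmp:")
        ? TimeUnit.SECONDS.toNanos(1)   // short-lived entries
        : TimeUnit.HOURS.toNanos(1);    // everything else
  }

  // Build a cache whose per-entry expiration is driven by lifetimeNanos.
  static Cache<String, String> build() {
    return Caffeine.newBuilder()
        .expireAfter(new Expiry<String, String>() {
          @Override
          public long expireAfterCreate(String key, String value, long currentTime) {
            return lifetimeNanos(key);
          }

          @Override
          public long expireAfterUpdate(String key, String value, long currentTime,
              long currentDuration) {
            return lifetimeNanos(key);
          }

          @Override
          public long expireAfterRead(String key, String value, long currentTime,
              long currentDuration) {
            return currentDuration; // reads keep the remaining lifetime unchanged
          }
        })
        .build();
  }

  public static void main(String[] args) {
    Cache<String, String> cache = build();
    cache.put("tmp:a", "x");   // expires after ~1 second
    cache.put("user:1", "y");  // expires after ~1 hour
  }
}
```

This does not consult external state at eviction time the way a true predicate would, but combined with `cache.invalidate(key)` for ad-hoc early removal it covers much of the same ground.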
Replies: 1 comment
-
I think you might be able to come up with something by using the Cache.policy() APIs, e.g. to adjust the maximum size. I don't think there is anything the cache could do more easily or efficiently than you could yourself. I believe the hard part is deciding what the correct cache sizes should be, which is usually done in response to the observed workload and performance. You might consider using RobinHood caching to drive the reallocations.
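A minimal sketch of that suggestion, assuming the caller owns the global budget and rebalances via `Cache.policy().eviction().setMaximum(long)`. The `rebalance` helper and the 150/50 split are illustrative assumptions, not part of the reply:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.List;

public class SharedBudget {
  // Reassign each cache's maximum size; the caller is responsible for
  // ensuring the shares sum to the global budget.
  static void rebalance(List<Cache<String, String>> caches, long[] shares) {
    for (int i = 0; i < caches.size(); i++) {
      long share = shares[i];
      // eviction() is empty if the cache was built without a size bound.
      caches.get(i).policy().eviction().ifPresent(e -> e.setMaximum(share));
    }
  }

  public static void main(String[] args) {
    // Two caches splitting a global budget of 200 entries, 100 each.
    Cache<String, String> a = Caffeine.newBuilder().maximumSize(100).build();
    Cache<String, String> b = Caffeine.newBuilder().maximumSize(100).build();

    // Shift the shared budget: give 'a' 150 entries and 'b' 50.
    rebalance(List.of(a, b), new long[] {150, 50});
  }
}
```

Shrinking a cache's maximum below its current size triggers eviction of the excess entries, so "dropping" one cache's allocation in favor of another is just a pair of `setMaximum` calls.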