Do I understand correctly that the write time is currently the time the value is inserted into the cache, and that it does not take into account the time it takes to compute the value? I am using Caffeine through another library, and I seem to have a problem where slow value computations started before invalidating all entries end up in the cache. I was wondering if it is possible to expire them based on the time the value computation started.
Correct, expiration is measured from the time the write completes, not from when the load starts. You could track the load start time yourself and use a custom Expiry to adjust the entry's duration accordingly.
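A minimal sketch of that idea: wrap each value with the nanoTime captured when its computation began, and have a custom Expiry subtract the already-elapsed load time from the desired lifetime. The class and record names, the five-minute TTL, and the `toUpperCase` placeholder computation are all illustrative assumptions, not part of Caffeine's API.

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.Expiry;
import com.github.benmanes.caffeine.cache.LoadingCache;

import java.time.Duration;

public class LoadStartExpiry {
  // Wrapper pairing the value with the nanoTime at which its load began.
  public record Timed<V>(V value, long loadStartNanos) {}

  public static LoadingCache<String, Timed<String>> build(Duration ttl) {
    return Caffeine.newBuilder()
        .expireAfter(new Expiry<String, Timed<String>>() {
          @Override public long expireAfterCreate(String key, Timed<String> value,
              long currentTime) {
            // Charge the time already spent computing against the ttl, so the
            // entry's lifetime is effectively measured from the load's start.
            long elapsed = System.nanoTime() - value.loadStartNanos();
            return Math.max(0, ttl.toNanos() - elapsed);
          }
          @Override public long expireAfterUpdate(String key, Timed<String> value,
              long currentTime, long currentDuration) {
            long elapsed = System.nanoTime() - value.loadStartNanos();
            return Math.max(0, ttl.toNanos() - elapsed);
          }
          @Override public long expireAfterRead(String key, Timed<String> value,
              long currentTime, long currentDuration) {
            return currentDuration; // reads do not extend the lifetime
          }
        })
        .build(key -> {
          long start = System.nanoTime();
          String computed = key.toUpperCase(); // placeholder for a slow computation
          return new Timed<>(computed, start);
        });
  }
}
```

Note that this only shifts the expiration clock; it does not prevent a stale in-flight load from briefly appearing in the cache, which is the linearization issue discussed next.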
If what you’re observing is a linearization problem rather than expiration, then you’ll need another approach. An invalidateAll (Map.clear) is unable to observe in-flight loads because they are suppressed by ConcurrentHashMap’s iterator, so those would have to be tracked and removed separately, as per-key operations like Map.remove are linearizable. Another approach is to use a generation id as part of the key, so that entries from previous generations can no longer be fetched and are eventually evicted.
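A rough sketch of the generation-id approach, under the assumption that a size bound is acceptable for letting orphaned entries from old generations age out. The GenerationalCache class, its GenKey record, and the `toUpperCase` stand-in loader are hypothetical names for illustration.

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

import java.util.concurrent.atomic.AtomicLong;

public class GenerationalCache {
  // Composite key: a slow load started under an old generation is stored
  // under that generation's key and can never be seen by later readers.
  public record GenKey(long generation, String key) {}

  private final AtomicLong generation = new AtomicLong();
  private final Cache<GenKey, String> cache = Caffeine.newBuilder()
      .maximumSize(10_000) // unreachable old-generation entries are evicted
      .build();

  public String get(String key) {
    long gen = generation.get();
    return cache.get(new GenKey(gen, key), k -> compute(k.key()));
  }

  // "Invalidation" is just bumping the generation; no racy iteration needed.
  public void invalidateAll() {
    generation.incrementAndGet();
  }

  private String compute(String key) {
    return key.toUpperCase(); // placeholder for the real computation
  }
}
```

The design trade-off is that stale entries are not removed eagerly; they simply become unreachable and are reclaimed by size- or time-based eviction.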