Question about TinyLFU #663
-
If I understand it correctly, the TinyLFU algorithm may result in some inserted entries not being retained in the cache. Is this right, and if so, is there a way to know when this happens? I would like to store these rejected entries in an L2 cache tier. Are they treated as evicted, so I can catch them that way?
-
Yes, the policy will make choices that could cause it to aggressively remove recent arrivals. A good scenario is a loop, where the most recent entry's next access is the furthest out in the future. This would cause an LRU to fully miss, whereas MRU is optimal. Caffeine's policy is adaptive and can optimize to the workload.
In your case an EvictionListener is likely the most suitable, as it is called atomically with the hash table operation. The other alternative is a RemovalListener, which is called asynchronously afterwards; that is ideal for reducing critical sections but may not be suitable if you cannot tolerate out-of-order events for a given key.
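As a rough sketch of the demotion (assuming a plain in-memory map stands in for your L2 tier, and noting that evictionListener only exists in the 3.x line, while removalListener is also available in 2.x):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;

public class TieredCacheSketch {
  public static void main(String[] args) {
    // Stand-in for the L2 tier (in practice an off-heap or remote cache).
    Map<String, String> l2 = new ConcurrentHashMap<>();

    Cache<String, String> l1 = Caffeine.newBuilder()
        .maximumSize(10_000)
        // Invoked atomically with the hash table operation. RemovalCause.SIZE
        // covers both ordinary size evictions and recent arrivals that the
        // admission policy chose not to retain.
        .evictionListener((String key, String value, RemovalCause cause) -> {
          if (cause == RemovalCause.SIZE) {
            l2.put(key, value); // demote to the slower tier
          }
        })
        .build();

    l1.put("a", "1");
  }
}
```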
-
Thanks - I have looked at Hazelcast in the past and found it less stable than Coherence, but I have not at all investigated Infinispan, so I will check it out as well.
On Sun, Feb 6, 2022 at 10:36 AM Ben Manes wrote:
From your comment on the Coherence issue, but so as not to evangelize competitors on their GitHub, just an FYI that you might also look at Infinispan. They use Caffeine for the on-heap cache, have a custom LRU off-heap cache, support RocksDB for local persistence, and have an option called "passivation" that appears to decide whether a tier is inclusive or exclusive. I have not used either Coherence or Infinispan (or Ignite, Hazelcast, etc.), so I don't know how they behave in practice or how active their communities are.
-
Hi Ben!
I am in a project where we are "stuck" with JDK 8 and Coherence 14 but have some problems with the Coherence local cache implementation. Based on the work you did adapting Caffeine to Coherence, would you say it will most likely work to use the latest 2.x version of Caffeine with an older Coherence release (adding your own cache map implementation has always been possible)?
Best Regards
Magnus
-
Thanks for the quick reply - maybe we'll give it a try!
Best Regards
Magnus
On Thu, Feb 23, 2023, 17:42 Ben Manes wrote:
It would, except for one caveat: I added a small feature to improve their integration. Coherence offers a put(key, value, long timeToLiveMillis) method for expiration. This is actually a Map compute because it has to atomically invoke their listeners, which might enqueue the work asynchronously but must do so in the key's event order (e.g. for replication consistency). The compute(key, mappingFunction, duration) method was added for this.
The alternatives are not too bad. Their older cache synchronizes all writes, so if that is not a performance issue then it's an easy change. The performance issue that may have prompted Coherence was slow eviction, as this blocks writes, and in my benchmarks their old cache was incredibly slow at 1M+ entries.
The other way would be to wrap the value to hold the timestamp, e.g. Expirable<V>, and have Caffeine's expiration policy extract the duration from this wrapper. Then an asMap().compute could modify the TTL rather than needing the new method. The implication is a small memory footprint added per entry and noisy code to handle the wrapper at all usages.
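A rough sketch of that wrapper approach (the Expirable and PerEntryTtlCache names are only illustrative, and the anonymous Expiry keeps it Java 8 friendly) might look like:

```java
import java.time.Duration;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.Expiry;

// Hypothetical wrapper carrying the per-entry time to live.
final class Expirable<V> {
  final V value;
  final long ttlNanos;

  Expirable(V value, Duration ttl) {
    this.value = value;
    this.ttlNanos = ttl.toNanos();
  }
}

class PerEntryTtlCache {
  final Cache<String, Expirable<String>> cache = Caffeine.newBuilder()
      .expireAfter(new Expiry<String, Expirable<String>>() {
        @Override public long expireAfterCreate(String key, Expirable<String> value,
            long currentTime) {
          return value.ttlNanos;
        }
        @Override public long expireAfterUpdate(String key, Expirable<String> value,
            long currentTime, long currentDuration) {
          return value.ttlNanos; // a write replaces the wrapper, so take its new TTL
        }
        @Override public long expireAfterRead(String key, Expirable<String> value,
            long currentTime, long currentDuration) {
          return currentDuration; // reads do not extend the lifetime
        }
      })
      .build();

  // Mimics Coherence's put(key, value, ttlMillis) by doing an atomic compute.
  void put(String key, String value, long ttlMillis) {
    cache.asMap().compute(key,
        (k, old) -> new Expirable<>(value, Duration.ofMillis(ttlMillis)));
  }
}
```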
So yes, but it would require that you do some refactoring to backport.
-
We found the bug in Coherence - during eviction they iterated an ArrayList of the keys to drop and removed the entries from it as they were processed, to free memory. Not a good idea with an ArrayList that in our case was very large - it resulted in an O(n^2) effect...
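For anyone curious, an illustrative sketch of that anti-pattern (not Coherence's actual code) and why it degrades to quadratic time:

```java
import java.util.ArrayList;
import java.util.List;

public class QuadraticDrainSketch {
  public static void main(String[] args) {
    List<Integer> keysToDrop = new ArrayList<>();
    for (int i = 0; i < 200_000; i++) {
      keysToDrop.add(i);
    }

    long start = System.nanoTime();
    while (!keysToDrop.isEmpty()) {
      // remove(0) shifts every remaining element: O(n) per call, O(n^2) overall
      Integer key = keysToDrop.remove(0);
      // cache.remove(key) would happen here
    }
    System.out.printf("drained in %d ms%n", (System.nanoTime() - start) / 1_000_000);
    // Iterating the list read-only and clearing it once afterwards is O(n).
  }
}
```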
Best Regards
Magnus
-
Thanks for the info - very interesting!
Best Regards
Magnus
On Thu, Feb 23, 2023 at 8:21 PM Ben Manes wrote:
That makes sense and correlates well to my benchmarks.
[image: eviction] <https://user-images.githubusercontent.com/378614/221008819-0d5612fc-c82d-43a0-b0b1-303dde3ca96e.png>
[image: concurrency] <https://user-images.githubusercontent.com/378614/221008847-819ed96d-4c4d-49a5-a03e-07cb74e95d16.png>