Question: difference in performance between Cache#get() and Cache.asMap()#compute #1761
Comments
Since this is always a write, and a write resets the entry's expiration time, that update needs to propagate to the expiration policy, which maintains a time-ordered queue. To avoid global locks we use an intermediate buffer to publish an event (a pub-sub message), which is typically consumed asynchronously so that this shared state is maintained with minimal blocking. However, if that buffer is full then, since we can't drop writes, it applies back pressure by forcing the writer to assist in processing the pending work. At a high write rate it fills up quickly and becomes your bottleneck by serializing the operations.

A small configuration change that might help is to use […]. Otherwise, there is pending work in #1320, which optimizes for your exact use case. That is to bring in an optimization on […]. The last option is to perform this TTL yourself by lazily relying only on the […].
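As a minimal sketch of one way such a do-it-yourself lazy TTL can look, assuming the cache keeps only size-based eviction and each value carries its own write time (the `Timestamped` wrapper, `loadFresh` helper, and all values below are hypothetical, not what the comment above necessarily refers to):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

final class LazyTtlSketch {
  // Hypothetical value wrapper that remembers when it was written.
  static final class Timestamped {
    final String value;
    final long writeNanos;
    Timestamped(String value, long writeNanos) {
      this.value = value;
      this.writeNanos = writeNanos;
    }
  }

  private static final Duration TTL = Duration.ofMinutes(10);

  // No expireAfterWrite: only size-based eviction, so the expiration policy's
  // time-ordered queue is never involved in a write.
  private final Cache<String, Timestamped> cache = Caffeine.newBuilder()
      .maximumSize(1_000_000)
      .build();

  String get(String key) {
    // Common case: a hit on a fresh entry is a plain read (lossy read buffer,
    // no write-buffer back pressure).
    Timestamped entry = cache.get(key,
        k -> new Timestamped(loadFresh(k), System.nanoTime()));
    if (System.nanoTime() - entry.writeNanos <= TTL.toNanos()) {
      return entry.value;
    }
    // Stale: refresh through compute so concurrent callers reload at most once.
    return cache.asMap().compute(key, (k, existing) -> {
      boolean stale = (existing == null)
          || (System.nanoTime() - existing.writeNanos > TTL.toNanos());
      return stale ? new Timestamped(loadFresh(k), System.nanoTime()) : existing;
    }).value;
  }

  private String loadFresh(String key) {
    return "value-for-" + key; // stand-in for the real loader
  }
}
```

With this shape only genuine refreshes pay the write-path cost, while fresh hits stay on the cheap read path.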
Thank you so much @ben-manes for taking the time not only to dispel the mystery but also to educate at such a level. We indeed did not need […]
And I guess it's redundant to say, Ben, but Caffeine has been an indispensable tool.
Thank you! I'm glad it's helpful. Always feel more than welcome to reach out with questions; it keeps me motivated 🙂
This is more a question than anything else.
After optimizing a service heavily, Caffeine is now appearing in our CPU hot spot reports, especially scheduleDrainBuffers, which is invoked by compute.

Cache config:
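As an illustrative sketch only (the types and values below are assumptions, not the actual configuration), a setup matching this description might look like:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

class CacheConfigSketch {
  // Assumed values: a generous size bound purely as a memory safety net,
  // plus a write-based TTL (the expiration that compute() keeps resetting).
  final Cache<String, String> cache = Caffeine.newBuilder()
      .maximumSize(1_000_000)
      .expireAfterWrite(Duration.ofMinutes(10))
      .build();
}
```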
The size is more than enough for the expected number of items; it's only there as a defense mechanism against memory catastrophes.
Currently, when accessing the cache, we're always going through Cache.asMap()#compute. And I wonder if we can gain anything in terms of CPU usage if we use Cache#get() instead (a sketch of both patterns follows at the end of this question).
If so, it'd be lovely to understand why.
I assume both are thread-safe in the same manner.
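As a minimal, self-contained sketch of the two access patterns being compared, using a hypothetical String-keyed cache, key, and load function (the comments summarize the behaviour described in the reply above):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

public class GetVsComputeSketch {
  public static void main(String[] args) {
    Cache<String, String> cache = Caffeine.newBuilder()
        .maximumSize(1_000_000)
        .expireAfterWrite(Duration.ofMinutes(10))
        .build();
    String key = "some-key";

    // Cache.asMap()#compute: always a write. Even on a hit it locks the entry,
    // resets its write/expiration time, and must publish to the non-lossy
    // write buffer, which is what shows up as scheduleDrainBuffers.
    String viaCompute = cache.asMap()
        .compute(key, (k, v) -> (v == null) ? load(k) : v);

    // Cache#get(key, mappingFunction): loads only on a miss. A hit is a plain
    // read, recorded in the lossy read buffer, so it never forces the caller
    // to help drain pending work.
    String viaGet = cache.get(key, GetVsComputeSketch::load);

    System.out.println(viaCompute + " / " + viaGet);
  }

  private static String load(String key) {
    return "value-for-" + key; // stand-in for the real loader
  }
}
```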