
Items not removed when CPU bound (using Spark) #795

Answered by ben-manes
stemill asked this question in Q&A

The cache should induce backpressure on writes if eviction cannot keep up. Can you try running our stress test?

By default the cache uses ForkJoinPool.commonPool() for any async work, which includes evictions. There have been bugs in some JDKs where the ForkJoinPool could drop tasks (race conditions causing internal data loss), and we've gradually made Caffeine more robust to those cases. When the cache induces backpressure on writes because too many evictions are pending, it blocks writers on the eviction lock, thereby unscheduling those threads or letting them assist with the eviction if it wasn't actually run.
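One way to take ForkJoinPool.commonPool() out of the picture entirely is to supply a caller-runs executor, so maintenance (including eviction) happens on the writing thread itself. A minimal sketch, assuming the Caffeine dependency is on the classpath and that same-thread maintenance is acceptable for the workload:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class SameThreadEviction {
  public static void main(String[] args) {
    // Run maintenance work on the calling thread instead of
    // ForkJoinPool.commonPool(), so a CPU-saturated common pool
    // cannot delay evictions.
    Cache<Integer, Integer> cache = Caffeine.newBuilder()
        .maximumSize(100)
        .executor(Runnable::run) // caller-runs executor
        .build();

    for (int i = 0; i < 1_000; i++) {
      cache.put(i, i);
    }
    cache.cleanUp(); // flush any pending maintenance
    // With eviction running inline, the cache should not exceed its bound.
    System.out.println(cache.estimatedSize());
  }
}
```

This trades a little write latency for predictable eviction; whether that trade is right for a Spark executor's threads depends on the workload.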

The async eviction isn't necessary for our own logic, but we do have callbacks (like Caffeine.evictionListener).
