Clarify the behavior of getAll if additional keys are loaded
This is already documented on the cache loaders for their bulk
loading methods:
```
If the returned map contains extra keys not present in {@code keys}
then all returned entries will be cached, but only the entries for
{@code keys} will be returned from {@code getAll}.
```
ben-manes committed Dec 2, 2021 · 1 parent 8c7160d · commit 05a040c
Showing 3 changed files with 10 additions and 6 deletions.
```diff
@@ -108,8 +108,9 @@ CompletableFuture<V> get(@NonNull K key,
  * <p>
  * A single request to the {@code mappingFunction} is performed for all keys which are not already
  * present in the cache. If another call to {@link #get} tries to load the value for a key in
- * {@code keys}, that thread retrieves a future that is completed by this bulk computation. Note
- * that multiple threads can concurrently load values for distinct keys.
+ * {@code keys}, that thread retrieves a future that is completed by this bulk computation. Any
+ * loaded values for keys that were not specifically requested will not be returned, but will be
+ * stored in the cache. Note that multiple threads can concurrently load values for distinct keys.
  * <p>
  * Note that duplicate elements in {@code keys}, as determined by {@link Object#equals}, will be
  * ignored.
```
```diff
@@ -138,8 +139,9 @@ default CompletableFuture<Map<K, V>> getAll(@NonNull Iterable<? extends @NonNull
  * <p>
  * A single request to the {@code mappingFunction} is performed for all keys which are not already
  * present in the cache. If another call to {@link #get} tries to load the value for a key in
- * {@code keys}, that thread retrieves a future that is completed by this bulk computation. Note
- * that multiple threads can concurrently load values for distinct keys.
+ * {@code keys}, that thread retrieves a future that is completed by this bulk computation. Any
+ * loaded values for keys that were not specifically requested will not be returned, but will be
+ * stored in the cache. Note that multiple threads can concurrently load values for distinct keys.
  * <p>
  * Note that duplicate elements in {@code keys}, as determined by {@link Object#equals}, will be
  * ignored.
```
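The asynchronous contract documented above can be sketched with a plain `CompletableFuture`-based map (a hypothetical `AsyncGetAllSketch`, not Caffeine's implementation): the bulk computation may return more entries than were requested, every returned entry is stored, yet the completed future exposes only the requested keys.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

// Minimal sketch of the documented async getAll contract (not Caffeine's code):
// all entries returned by the bulk load are cached, but the resulting future's
// map contains only the keys that were actually requested.
public class AsyncGetAllSketch {
    static final ConcurrentMap<String, CompletableFuture<Integer>> cache =
        new ConcurrentHashMap<>();

    static CompletableFuture<Map<String, Integer>> getAll(
            Iterable<String> keys,
            Function<Set<String>, Map<String, Integer>> bulkLoader) {
        Set<String> requested = new LinkedHashSet<>();
        keys.forEach(requested::add);            // duplicate keys are ignored
        Set<String> missing = new LinkedHashSet<>(requested);
        missing.removeAll(cache.keySet());       // load only absent keys
        return CompletableFuture.supplyAsync(() -> bulkLoader.apply(missing))
            .thenApply(loaded -> {
                // Cache everything the loader returned, extras included.
                loaded.forEach((k, v) ->
                    cache.put(k, CompletableFuture.completedFuture(v)));
                // But hand back only the entries that were asked for.
                Map<String, Integer> result = new LinkedHashMap<>();
                for (String k : requested) {
                    CompletableFuture<Integer> f = cache.get(k);
                    if (f != null) { result.put(k, f.join()); }
                }
                return result;
            });
    }

    public static void main(String[] args) {
        Map<String, Integer> result = getAll(List.of("a", "b"),
            ks -> Map.of("a", 1, "b", 2, "extra", 3)).join();
        System.out.println(result.keySet());        // prints [a, b]
        System.out.println(cache.containsKey("extra"));  // prints true
    }
}
```

The real implementation additionally shares the in-flight future with concurrent `get` calls for the same keys; that coordination is omitted here to keep the semantics of the result map visible.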
```diff
@@ -1373,7 +1373,7 @@ void afterWrite(Runnable task) {
     scheduleDrainBuffers();
   }

-  // The maintenance task may be scheduled but not running due. This might occur due to all of the
+  // The maintenance task may be scheduled but not running. This might occur due to all of the
   // executor's threads being busy (perhaps writing into this cache), the write rate greatly
   // exceeds the consuming rate, priority inversion, or if the executor silently discarded the
   // maintenance task. In these scenarios then the writing threads cannot make progress and
```
```diff
@@ -105,7 +105,9 @@ public interface Cache<K, V> {
  * the value for a key in {@code keys}, implementations may either have that thread load the entry
  * or simply wait for this thread to finish and return the loaded value. In the case of
  * overlapping non-blocking loads, the last load to complete will replace the existing entry. Note
- * that multiple threads can concurrently load values for distinct keys.
+ * that multiple threads can concurrently load values for distinct keys. Any loaded values for
+ * keys that were not specifically requested will not be returned, but will be stored in the
+ * cache.
  * <p>
  * Note that duplicate elements in {@code keys}, as determined by {@link Object#equals}, will be
  * ignored.
```
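The synchronous `Cache.getAll` contract admits the same sketch without futures (a hypothetical `GetAllSketch`, not Caffeine's code): extra entries produced by the bulk load are stored, while the returned map is filtered down to the requested keys.

```java
import java.util.*;
import java.util.function.Function;

// Minimal sketch of the documented Cache.getAll contract (not Caffeine's code):
// the bulk load runs once for the absent keys; extras it returns are stored,
// but the returned map holds only the requested keys.
public class GetAllSketch {
    static final Map<String, Integer> cache = new HashMap<>();

    static Map<String, Integer> getAll(Collection<String> keys,
            Function<Set<String>, Map<String, Integer>> bulkLoader) {
        Set<String> requested = new LinkedHashSet<>(keys); // duplicates ignored
        Set<String> missing = new LinkedHashSet<>(requested);
        missing.removeAll(cache.keySet());
        if (!missing.isEmpty()) {
            cache.putAll(bulkLoader.apply(missing)); // extras are cached too
        }
        Map<String, Integer> result = new LinkedHashMap<>();
        for (String k : requested) {
            if (cache.containsKey(k)) { result.put(k, cache.get(k)); }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> r = getAll(List.of("x", "y", "y"),
            ks -> Map.of("x", 1, "y", 2, "z", 9));
        System.out.println(r);                      // prints {x=1, y=2}
        System.out.println(cache.containsKey("z")); // prints true
    }
}
```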
