IAM: Add caching to HTTP client #3148
Conversation
Also add an issue for adding metrics to the client cache for hits vs misses. (enterprise feature ;))
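A minimal sketch of what such hit/miss counters could look like; the type, method names, and package are hypothetical and not part of this PR:

```go
// Hypothetical hit/miss counters for the client cache; names and wiring
// are illustrative only, not the PR's actual code.
package iam

import "sync/atomic"

type cacheMetrics struct {
	hits   atomic.Int64
	misses atomic.Int64
}

func (m *cacheMetrics) recordHit()  { m.hits.Add(1) }
func (m *cacheMetrics) recordMiss() { m.misses.Add(1) }
```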
auth/client/iam/caching.go
if err != nil {
	return nil, fmt.Errorf("error while reading response body for caching: %w", err)
}
h.mux.Lock()
For readability purposes, place a single lock/unlock at the beginning of the only public method.
That would include I/O in the lock scope, which causes scalability/performance issues. I refactored the function a bit to make it seem less random.
Out of interest / sanity check: how many simultaneous requests (order of magnitude) do you think it takes before the mutex is slower than ignoring the cache? A successful authorization_code flow locks around 15-20 times for the verifier and for the requester/holder.
Locking is only done while checking whether there's a cached copy and when inserting an item into the cache (memory/CPU operations only), not during HTTP requests/IO. So caching with mutexes will be much, much faster than performing HTTP requests, even with many parallel connections (I'd say especially with many parallel connections).
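To illustrate the pattern being described, here is a rough sketch of a caching `http.RoundTripper` where the mutex only guards the in-memory map, never the network round trip. All type and field names are illustrative, not the PR's actual code:

```go
// Sketch of the locking pattern discussed above: the mutex only guards the
// in-memory map lookups/inserts, never the HTTP round trip itself.
// Types and field names are illustrative, not the PR's actual code.
package iam

import (
	"bytes"
	"io"
	"net/http"
	"sync"
	"time"
)

type cachedResponse struct {
	body    []byte
	header  http.Header
	status  int
	expires time.Time
}

type cachingRoundTripper struct {
	wrapped http.RoundTripper
	mux     sync.Mutex
	entries map[string]cachedResponse
}

func (h *cachingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	key := req.URL.String()

	// Fast path: check the cache under the lock (CPU/memory only).
	h.mux.Lock()
	entry, ok := h.entries[key]
	h.mux.Unlock()
	if ok && time.Now().Before(entry.expires) {
		return &http.Response{
			StatusCode: entry.status,
			Header:     entry.header.Clone(),
			Body:       io.NopCloser(bytes.NewReader(entry.body)),
			Request:    req,
		}, nil
	}

	// Slow path: perform the HTTP request outside the lock (I/O).
	resp, err := h.wrapped.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	body, err := io.ReadAll(resp.Body)
	_ = resp.Body.Close()
	if err != nil {
		return nil, err
	}

	// Insert into the cache under the lock, again CPU/memory only.
	h.mux.Lock()
	h.entries[key] = cachedResponse{
		body:    body,
		header:  resp.Header.Clone(),
		status:  resp.StatusCode,
		expires: time.Now().Add(time.Minute), // TTL derivation and eviction omitted here
	}
	h.mux.Unlock()

	resp.Body = io.NopCloser(bytes.NewReader(body))
	return resp, nil
}
```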
@woutslakhorst I noticed the v5 functionality does not use the caching HTTP client.
Fixes #3142
@woutslakhorst @gerardsn is this an approach you agree on?
I think the Nuts node should have client-side caching features on board, since I've not often seen an outbound proxy, let alone one that performs caching. So I suspect most parties won't be considering an outbound proxy for caching capabilities. This could severely degrade performance/UX, given how often the same resources are requested.
There are a few libraries that do this, but all of them have been archived or are otherwise unmaintained.
This solution just looks at the expiry time (typically `max-age`), keeps cached resources around for at most 1 hour (or `max-age` if less), and caps the total cache size at 1 MB (which of course should be configurable).
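A sketch of the TTL rule described above, assuming a helper that reads the `Cache-Control: max-age` directive and caps it at one hour; the function name and constant are hypothetical:

```go
// Illustrative sketch of the TTL rule: use Cache-Control max-age when present,
// capped at one hour. Function name and constant are hypothetical, not the PR's code.
package iam

import (
	"net/http"
	"strconv"
	"strings"
	"time"
)

const maxCacheTTL = time.Hour

// cacheTTL returns how long a response may be cached: max-age if present,
// but never more than maxCacheTTL; zero means "do not cache".
func cacheTTL(resp *http.Response) time.Duration {
	for _, directive := range strings.Split(resp.Header.Get("Cache-Control"), ",") {
		directive = strings.TrimSpace(directive)
		if value, ok := strings.CutPrefix(directive, "max-age="); ok {
			seconds, err := strconv.Atoi(value)
			if err != nil || seconds <= 0 {
				return 0
			}
			ttl := time.Duration(seconds) * time.Second
			if ttl > maxCacheTTL {
				return maxCacheTTL
			}
			return ttl
		}
	}
	return 0
}
```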