Cache strategy to reduce calls to the people API #447
Some comments:
The ESN already provides a Cache service, but it caches objects in memory with a TTL. When an item is not in the cache, or when its entry has expired, the service calls a function which resolves with the resource (the person, in our case). I suggest that the ESN provide a new Cache storage based on some browser storage, which means we can keep the same Cache API but choose between the in-memory backend and the storage-based one. There are pretty good libraries which abstract storage and support queries; one of them is Dexie (an IndexedDB wrapper; I already used it in a side project, pretty cool to use). You can use whatever you want, but please do not write your own serializer (i.e. do not JSON.stringify things yourself, that is time-consuming and useless); use a library which does it for you.
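To illustrate the suggestion above, here is a minimal sketch of a TTL cache whose storage backend is pluggable, so the same Cache API could sit on an in-memory Map today and an IndexedDB wrapper such as Dexie later. All class and method names here are illustrative assumptions, not the actual ESN API:

```javascript
// Hypothetical sketch: same cache API, swappable storage backend.
// A Dexie-based store would expose the same get/set/delete surface.
class MemoryStore {
  constructor() { this.map = new Map(); }
  async get(key) { return this.map.get(key); }
  async set(key, value) { this.map.set(key, value); }
  async delete(key) { this.map.delete(key); }
}

class TtlCache {
  // loader: async function that resolves the resource on a cache miss
  constructor({ store = new MemoryStore(), ttlMs = 60000, loader }) {
    this.store = store;
    this.ttlMs = ttlMs;
    this.loader = loader;
  }

  async get(key) {
    const entry = await this.store.get(key);
    if (entry && entry.expiresAt > Date.now()) {
      return entry.value; // cache hit, still fresh
    }
    const value = await this.loader(key); // miss or expired: reload
    await this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}
```

Swapping `MemoryStore` for a Dexie-backed store would not change callers, which is the point of keeping one Cache API over two backends.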
Please specify the calls that we want to cache...
Anyway, we should plan a way to review the usefulness of such a cache and closely monitor hit rates. I imagine we need application-level metrics to monitor this... Also, a cache only partially solves the problem for search: suppose that I search for something the cache has never seen, we still wait on every source. To be fair, a good idea would be to stream the results (e.g. over WebSocket). Each source of the people API would stream its search results at its own pace, and we would only need to add a search result provider for the PEOPLE CACHE. That way, we would no longer be impacted by the slowest source.
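The streaming idea above could be sketched like this: each people source pushes its results through a callback as soon as it has them, so the UI renders incrementally instead of waiting on the slowest source. The function name and the `onResult`/`onDone` shape are assumptions for illustration, not the ESN API; a real implementation would push over a WebSocket instead of local callbacks:

```javascript
// Illustrative sketch: fan out a query to independent sources and
// surface each source's results as they arrive. A failing source
// must not block the others.
function streamPeopleSearch(query, sources, onResult, onDone) {
  let pending = sources.length;
  if (pending === 0) return onDone();
  for (const source of sources) {
    source.search(query)
      .then((people) => { for (const p of people) onResult(source.name, p); })
      .catch(() => { /* ignore: one slow/broken source should not block the rest */ })
      .finally(() => { if (--pending === 0) onDone(); });
  }
}
```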
👍
@chibenwa: I think @vincent-zurczak was initially quite clear on the scope of this issue. By search, he meant retrieving people's profiles, for example to match an author's email with the name and avatar you see in your message list in Inbox. He is not talking about the actual search function :) Which means we have the mail author, we know which user it is, so it should be easy to cache and retrieve instead of constantly calling the API, which is what Inbox seems to do for now.
True, but I remember the audit also pinpointed the people search API.
That should be another issue then, I believe, as it is much more complex, as you explained above.
There are several actions underway to improve the people API. Right now, it is used for both purposes. As @Arsnael said, this issue is about improving the display of user information (users are retrieved one by one), not about searching users by general criteria. We are not redesigning this API at this stage. :)
OK, so we are not talking about search here; we are talking about the people resolve endpoint (https://host:port/api/people/resolve/emailaddress/[email protected]), which is called for each element of the email list.
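Since the resolve endpoint is hit once per email address in the list, even a tiny per-session memoization would collapse duplicate calls. A minimal sketch, assuming a `fetchPerson` function standing in for the real HTTP call (its name and shape are illustrative):

```javascript
// Hedged sketch: memoize resolve calls per email address. Storing the
// Promise (not the value) also deduplicates concurrent requests for
// the same address while the first one is still in flight.
function createPeopleResolver(fetchPerson) {
  const inFlight = new Map(); // email -> Promise<person>
  return function resolve(email) {
    if (!inFlight.has(email)) {
      inFlight.set(email, fetchPerson(email));
    }
    return inFlight.get(email);
  };
}
```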
Then I completely buy it.
Sorry for mixing topics @chamerling , I only saw the other ticket after... |
With Inbox, all the email authors are searched through the people API.
It occurs when the page opens up or when we scroll and retrieve older emails. This API is used to find people names and avatars.
It is not clear at the moment whether the people API will remain and whether there will be common services. But the short-term deadlines require improving the current implementation, and using the browser's local storage to store this information would save some requests. When there are thousands of users, saving invocations makes a difference.
The idea is that Inbox should keep a map of user names and their avatar addresses. If the author of an email is not found locally, it will be requested from the people API and added to the cache. Even users that are not known by the people API (i.e. they are not part of the domain, work for another company, ...) should be added to the local cache.
Cache entries should have a time-to-live (3 days seems reasonable).
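The proposal above could look like the following sketch: a map from email address to person details persisted in key/value storage (`localStorage` in the browser; anything with `getItem`/`setItem` works), with a three-day TTL, and with unknown users cached as `null` so we do not re-query the API for them. The key prefix and field names are illustrative assumptions:

```javascript
// Minimal sketch of the proposed local cache. A null person means
// "we asked the people API and it does not know this user" — still
// worth caching. undefined means "never cached or expired".
const THREE_DAYS_MS = 3 * 24 * 60 * 60 * 1000;

function putPerson(storage, email, person, now = Date.now()) {
  storage.setItem('people:' + email,
    JSON.stringify({ person, expiresAt: now + THREE_DAYS_MS }));
}

function getPerson(storage, email, now = Date.now()) {
  const raw = storage.getItem('people:' + email);
  if (!raw) return undefined;                    // never cached
  const entry = JSON.parse(raw);
  if (entry.expiresAt <= now) return undefined;  // expired
  return entry.person;                           // may be null: known unknown
}
```

Distinguishing `null` (cached negative result) from `undefined` (cache miss) is what lets the "users not known by the people API" case above avoid repeated lookups.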