High memory consumption #3427
Hi Aroune, that depends entirely on the number and size of the documents you are indexing. Could you give us some more context?
That is definitely a lot of docs; it may be necessary to add RAM to support that. You can get a better feel for this by inspecting Vespa's memory usage via the metrics endpoint: https://docs.vespa.ai/en/operations/metrics.html
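As a rough sketch of how you might pull memory-related figures out of that endpoint: the script below assumes Vespa's node metrics API is reachable at port 19092 with the `/metrics/v2/values` path (both may differ in your deployment; check the docs page above), and simply collects every metric whose name contains "memory".

```python
import json
import urllib.request

# Assumed endpoint: Vespa's node metrics API. The port (19092) and path
# may differ in your deployment; see docs.vespa.ai/en/operations/metrics.html.
METRICS_URL = "http://localhost:19092/metrics/v2/values"

def memory_metrics(payload: dict) -> dict:
    """Collect metric values whose names mention 'memory' from a
    /metrics/v2/values-style payload (nodes -> services -> metrics)."""
    found = {}
    for node in payload.get("nodes", []):
        for service in node.get("services", []):
            for metric in service.get("metrics", []):
                for name, value in metric.get("values", {}).items():
                    if "memory" in name:
                        found[name] = value
    return found

if __name__ == "__main__":
    with urllib.request.urlopen(METRICS_URL) as resp:
        payload = json.load(resp)
    for name, value in sorted(memory_metrics(payload).items()):
        print(f"{name}: {value}")
```

Watching these values over time should tell you whether memory grows with the document count or something else is spiking.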
Ok, so as I understand it, this seems pretty legit.
@rkuo-danswer we are experiencing a similar issue. Is it expected that the index container's memory usage grows proportionally to the number of indexed documents? That seems like a poor design decision if so, since it acts more like a memory leak. Isn't the Vespa database meant to avoid loading every document into RAM?
Not really ... in fact, keeping the data in memory is a key component of being able to perform similarity searches across documents quickly. There are probably some significant optimizations we can apply here, but generally speaking this is expected behavior.
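To make the "memory grows with documents" point concrete, here is a back-of-envelope estimate for the dense vectors alone. The chunk count, 768 dimensions, and float32 storage are illustrative assumptions, not Danswer's or Vespa's actual configuration:

```python
def embedding_memory_gib(num_chunks: int, dims: int, bytes_per_float: int = 4) -> float:
    """Raw size of dense embedding vectors kept resident in memory
    for fast similarity search (ignores indexes and other overhead)."""
    return num_chunks * dims * bytes_per_float / 2**30

# Illustrative: 10 million chunks at 768 dimensions, float32
# -> 10_000_000 * 768 * 4 bytes, roughly 28.6 GiB for the vectors alone
print(round(embedding_memory_gib(10_000_000, 768), 1))
```

Real deployments add index structures, document metadata, and working buffers on top of this, so observed usage will be higher.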
Hello,
We have a Danswer host installed with Docker. The host has 64GB of RAM, but Vespa keeps getting killed for OOM. Then, while it is restarting, the backend cannot reach Vespa again.
Memory usage (screenshot):
Vespa getting killed for OOM (screenshot):
Backend cannot reach Vespa anymore (screenshot):
If I restart the backend container it's working again.
Is it normal for Danswer to require more than 64GB of memory?
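For reference, one way to at least make the OOM behavior explicit is to cap the Vespa container's memory in the compose file, so the kernel kills only that container rather than starving the host. This is a sketch only: the service name `index` and the limit value are assumptions and may not match your Danswer docker-compose setup.

```yaml
# Hypothetical docker-compose fragment; adjust the service name and
# limit to match your actual deployment.
services:
  index:
    mem_limit: 48g        # hard cap for the Vespa container
    restart: unless-stopped  # bring it back automatically after an OOM kill
```

Note this does not fix the underlying memory demand; if the index genuinely needs more than the limit, Vespa will still be killed, just more predictably.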