We have an issue where zombie Mongo queries can run for several hours, long after the client that requested them has died. Unfortunately, a global query timeout seems impossible, but maybe we could wrap pymongo somehow so that it sends maxTimeMS with all queries: https://stackoverflow.com/a/60542564/2402324
Whatever solution we arrive at, we should implement it in KoBoCAT as well.
Somewhat related to kobotoolbox/kobocat#696, in that the two issues work together to cause MongoDB slowdowns, which lead to users getting 502s.
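For illustration, here's a rough sketch of the kind of wrapper that could do this: a thin proxy around the pymongo collection that injects maxTimeMS into find() and aggregate(). The class name, default timeout, and the database/collection names in the wiring at the bottom are all hypothetical, not what we actually ship:

```python
from pymongo import MongoClient

# Hypothetical default; tune to taste.
DEFAULT_MAX_TIME_MS = 2 * 60 * 1000  # 2 minutes

class TimeLimitedCollection:
    """
    Thin proxy around a pymongo Collection that injects maxTimeMS into
    find() and aggregate(), so the server aborts runaway queries even
    after the requesting client has died.
    """

    def __init__(self, collection, max_time_ms=DEFAULT_MAX_TIME_MS):
        self._collection = collection
        self._max_time_ms = max_time_ms

    def find(self, *args, **kwargs):
        # pymongo's find() accepts max_time_ms as a keyword argument.
        kwargs.setdefault('max_time_ms', self._max_time_ms)
        return self._collection.find(*args, **kwargs)

    def aggregate(self, pipeline, **kwargs):
        # aggregate() forwards maxTimeMS to the server-side command.
        kwargs.setdefault('maxTimeMS', self._max_time_ms)
        return self._collection.aggregate(pipeline, **kwargs)

    def __getattr__(self, name):
        # Everything else falls through to the real collection.
        return getattr(self._collection, name)

# Hypothetical wiring; the actual database/collection names may differ.
instances = TimeLimitedCollection(MongoClient()['formhub']['instances'])
```

Callers would then use `instances.find(...)` exactly as before, with the timeout applied transparently unless they pass their own.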
Elevated the priority because the servers are really struggling under the load.
As I mentioned earlier, I don't see a way to set maxTimeMS on every query used with our pymongo MONGO_DB, but maybe the easiest thing to do is to make a helper function that wraps MONGO_DB.instances.find() and adds the maxTimeMS argument.
The limit should be CELERY_TASK_TIME_LIMIT (converted to milliseconds) plus some grace period.
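A minimal sketch of that helper, assuming CELERY_TASK_TIME_LIMIT lives in Django settings (in seconds, as Celery expects) and picking an arbitrary 30-second grace period; the function names are hypothetical:

```python
from django.conf import settings

# Hypothetical grace period beyond Celery's hard time limit.
GRACE_PERIOD_MS = 30 * 1000

def query_time_limit_ms():
    """maxTimeMS value: the Celery hard time limit plus a grace period."""
    return settings.CELERY_TASK_TIME_LIMIT * 1000 + GRACE_PERIOD_MS

def find_instances(*args, **kwargs):
    """
    Wrapper for MONGO_DB.instances.find() that always sends maxTimeMS,
    so Mongo kills any query that outlives the task that requested it.
    """
    # Assumes MONGO_DB is reachable via settings; adjust the reference
    # to wherever MONGO_DB is actually defined.
    kwargs.setdefault('max_time_ms', query_time_limit_ms())
    return settings.MONGO_DB.instances.find(*args, **kwargs)
```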