Because `threading.Lock.acquire()` is called inside `async def __send_batch`, the event loop can be blocked from progressing. This becomes problematic when multiple threads call `batch.add_object`, since lock contention can occur between the many locks involved in synchronising the batching algorithm.

In certain scenarios this contention leads to a deadlock: the event loop is blocked by a call to `self.__active_requests_lock.acquire()` inside `async def __send_batch`, which stops the loop from progressing, so `self.__active_requests` never increases or decreases in value because all active requests are stuck on the blocked event loop.
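A minimal sketch of the underlying hazard (hypothetical names, not the actual client code): a blocking `threading.Lock.acquire()` inside a coroutine freezes the entire event-loop thread, so nothing else scheduled on that loop can run while it waits.

```python
import asyncio
import threading
import time

lock = threading.Lock()

async def heartbeat():
    # Stand-in for other work on the loop, e.g. the coroutines handling
    # the in-flight requests tracked by __active_requests.
    for _ in range(5):
        print("loop is alive")
        await asyncio.sleep(0.2)

async def send_batch():
    await asyncio.sleep(0.1)
    # Blocking acquire inside a coroutine: the whole event-loop thread
    # stops here until another thread releases the lock.
    lock.acquire()
    try:
        print("batch sent")
    finally:
        lock.release()

def worker_holding_lock():
    # Another thread (think: a thread calling batch.add_object) holding
    # the same lock for a while.
    with lock:
        time.sleep(1.0)

threading.Thread(target=worker_holding_lock).start()

async def main():
    await asyncio.gather(heartbeat(), send_batch())

asyncio.run(main())
```

While `send_batch` sits in `lock.acquire()`, the heartbeat output pauses for roughly a second: the whole loop is frozen, not just that one coroutine. If the lock holder in turn needed the loop to make progress before releasing, this would be a deadlock.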
The same may occur with the other locks, so it would be best to migrate all `threading.Lock`s to `asyncio.Lock`s and, from synchronous functions, to acquire them by running `asyncio.Lock.acquire()` within the sidecar event loop (see the sketch below).
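A rough sketch of that direction, assuming a handle to the sidecar event loop; `BatchState` and `incr_from_sync` are illustrative names, not the client's API. Synchronous callers submit the lock-acquiring coroutine to the sidecar loop with `asyncio.run_coroutine_threadsafe` instead of blocking the loop thread directly.

```python
import asyncio

class BatchState:
    """Illustrative only: assumed names, not the client's real classes."""

    def __init__(self, loop: asyncio.AbstractEventLoop) -> None:
        self._loop = loop                            # the sidecar event loop
        self._active_requests = 0
        # On Python 3.10+ the lock binds to the loop on first await.
        self._active_requests_lock = asyncio.Lock()

    async def _incr_active_requests(self) -> None:
        # Awaiting an asyncio.Lock yields to the loop while waiting,
        # so other coroutines keep progressing.
        async with self._active_requests_lock:
            self._active_requests += 1

    def incr_from_sync(self) -> None:
        # Called from a synchronous context (e.g. the thread running
        # batch.add_object): schedule the coroutine on the sidecar loop
        # and wait on a concurrent.futures.Future, never blocking the
        # loop thread itself.
        fut = asyncio.run_coroutine_threadsafe(
            self._incr_active_requests(), self._loop
        )
        fut.result()
```

Because `async with` on an `asyncio.Lock` yields control while waiting, the coroutines that eventually release the lock and decrement `self.__active_requests` keep running, which removes the deadlock described above.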
Once this is dealt with, the batching loop sends objects much faster. However, this has unintended consequences for the heuristics in the background thread that calculates the dynamic batch size: those heuristics will need to be refactored to account for the change in timing behaviour between `threading.Lock` and `asyncio.Lock`.
Draft PR for reference: #1270