A run is active - exception (C#, OpenAI Assistant) #9633
-
Hi team, not sure if anyone else has noticed this, but we are seeing the following exception more and more often:
The scenario is: a user asks a question (new chat), we create an assistant and start a thread. It is completely random; I would say roughly one in 7 chats gets the error message.
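Roughly, the flow looks like this (a simplified sketch based on the Semantic Kernel `OpenAIAssistantAgent` API, not our exact production code; method names may differ slightly by version):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents.OpenAI;
using Microsoft.SemanticKernel.ChatCompletion;

// Simplified chat flow; agent creation is omitted.
async Task RunChatAsync(OpenAIAssistantAgent agent, string firstQuestion, string followUp)
{
    string threadId = await agent.CreateThreadAsync();

    // First turn: add the user message and stream the response.
    await agent.AddChatMessageAsync(threadId, new ChatMessageContent(AuthorRole.User, firstQuestion));
    await foreach (StreamingChatMessageContent update in agent.InvokeStreamingAsync(threadId))
    {
        Console.Write(update.Content);
    }

    // Second turn: this is where the "a run is active" error intermittently appears.
    await agent.AddChatMessageAsync(threadId, new ChatMessageContent(AuthorRole.User, followUp));
    await foreach (StreamingChatMessageContent update in agent.InvokeStreamingAsync(threadId))
    {
        Console.Write(update.Content);
    }
}
```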
Replies: 3 comments
-
Hi @JP-droidic - On the surface, this seems to indicate that the user message is being added while the run is still active. I would double-check for any potential concurrency issues and also verify that all of the `IAsyncEnumerable` streamed responses have been processed. Also, can you confirm whether you are targeting the OpenAI endpoint or an Azure AI endpoint? In the meantime, I'll set up a local stress test to explore this dynamic further. If I am able to reproduce this locally, it will aid in understanding what may be at play here.
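To illustrate the kind of guard I mean, here is a minimal sketch, assuming you are on the `OpenAIAssistantAgent` (the method names here reflect that API and may need adjusting for your code): serialize turns per thread and drain the stream completely before the next message is added.

```csharp
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents.OpenAI;
using Microsoft.SemanticKernel.ChatCompletion;

// Serializes turns on a single thread so a new user message can never be added
// while a previous streamed run is still being consumed.
public sealed class SerializedAssistantChat
{
    private readonly OpenAIAssistantAgent _agent;
    private readonly string _threadId;
    private readonly SemaphoreSlim _turnLock = new(1, 1);

    public SerializedAssistantChat(OpenAIAssistantAgent agent, string threadId)
    {
        _agent = agent;
        _threadId = threadId;
    }

    public async Task<string> AskAsync(string userInput, CancellationToken cancellationToken = default)
    {
        await _turnLock.WaitAsync(cancellationToken);
        try
        {
            await _agent.AddChatMessageAsync(_threadId, new ChatMessageContent(AuthorRole.User, userInput));

            StringBuilder buffer = new();

            // Drain the IAsyncEnumerable completely; breaking out early can leave the
            // run active on the server even though the client has moved on.
            await foreach (StreamingChatMessageContent update in
                _agent.InvokeStreamingAsync(_threadId).WithCancellation(cancellationToken))
            {
                buffer.Append(update.Content);
            }

            return buffer.ToString();
        }
        finally
        {
            _turnLock.Release();
        }
    }
}
```

If your UI or pipeline can trigger the next turn before the previous stream has finished (for example, a user sending a second message quickly), a gate like this should stop a new message from racing the active run.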
-
It is very strange indeed. The LLM stopped returning streamed results, maybe even a minute after the second message was sent. We use the OpenAI endpoint. Thanks!
-
I reviewed the API docs and saw this confirmation of what is going on: https://platform.openai.com/docs/assistants/deep-dive#thread-locks The error message is pretty clear; I suppose the question is why the run is still active. I reviewed the code and the termination states. My only thought on how a run might remain active is if an exception was thrown, although I'd expect such a failure to bubble up into your call stack. I'll keep looking, but can you also please examine your logs to ascertain whether an exception has occurred? If so, this will help guide the fix.
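As an additional diagnostic, you could inspect the latest run on the thread before adding the next message and log its status. A rough sketch using the OpenAI .NET SDK's `AssistantClient` (the exact method and property names, e.g. `GetRunsAsync` and `Status.IsTerminal`, may vary between SDK versions):

```csharp
#pragma warning disable OPENAI001 // the Assistants surface in the OpenAI .NET SDK is experimental

using System;
using System.Threading.Tasks;
using OpenAI.Assistants;

// Logs any non-terminal run on the thread before the next user message is added.
static async Task LogActiveRunAsync(AssistantClient client, string threadId)
{
    // Runs are listed newest-first by default, so the first item is the latest run.
    await foreach (ThreadRun run in client.GetRunsAsync(threadId))
    {
        if (!run.Status.IsTerminal)
        {
            // A queued / in_progress / requires_action run will block new messages
            // (see the thread-locks doc linked above).
            Console.WriteLine($"Run {run.Id} is still '{run.Status}' on thread {threadId}.");
        }

        break;
    }
}
```

Seeing which status the stuck run is left in (for example `requires_action` versus `in_progress`) would narrow down whether an unhandled exception or an abandoned stream is the cause.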