When an interruption occurs while the LLM is still generating sentences (e.g. after "tell me a long story"), any currently queued TTS messages are aborted; however, the LLM continues to generate sentences, causing the bot to start speaking again.
This makes the experience SUPER painful for users, leaving them wanting to scream at the bot until it shuts up.
Environment
pipecat-ai version: 0.0.52
Repro steps
Ask the bot to say something long, then interrupt it while it's still generating messages.
Expected behavior
LLM stops generating when interrupted.
Actual behavior
LLM keeps generating, pushing more TTS frames, which causes weird audio skipping.
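The expected behavior amounts to cancelling the in-flight LLM stream and flushing queued TTS output the moment the interruption arrives. A minimal asyncio sketch of that pattern (names are illustrative, not pipecat's actual API):

```python
import asyncio

async def generate_sentences(queue: asyncio.Queue) -> None:
    # Stand-in for streamed LLM output: keeps emitting sentences until cancelled.
    for i in range(100):
        await asyncio.sleep(0.01)
        await queue.put(f"sentence {i}")

async def main() -> int:
    tts_queue: asyncio.Queue = asyncio.Queue()
    llm_task = asyncio.create_task(generate_sentences(tts_queue))

    # Simulate the user interrupting mid-generation.
    await asyncio.sleep(0.05)
    llm_task.cancel()           # stop the LLM stream itself...
    while not tts_queue.empty():
        tts_queue.get_nowait()  # ...and drop any already-queued TTS sentences
    try:
        await llm_task
    except asyncio.CancelledError:
        pass
    return tts_queue.qsize()

print(asyncio.run(main()))  # 0: nothing left to speak after the interruption
```

The bug described here is that only the second half (flushing the queue) happens, while the generation task keeps running and refills the queue.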
Logs
The following logs mix our custom logging and processors, but they accompany the recording so you can see what I mean.
The Got frame StartInterruptionFrame entries mark where we send proper interruptions. As the attached video and the logs below show, audio frames are sent between Generating TTS logs, which means the LLM is still generating frames. Because the LLM is not interrupted, you can hear that previously generated frames are skipped, but the newly generated ones still play through TTS despite the interruption.
Sometimes I get a sort of "inverse" behavior, where the LLM stops generating new frames, but the TTS just continues to play audio that has already been generated.
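Even when the LLM stream cannot be cancelled promptly, the symptom above could be avoided by stamping frames with a generation counter and dropping anything from a stale generation. A hypothetical sketch of that gating idea (not pipecat's actual API):

```python
class InterruptionGate:
    """Tags playback with a generation number; interruptions make older generations stale."""

    def __init__(self) -> None:
        self.current_gen = 0

    def interrupt(self) -> None:
        # Bump the generation; any frame tagged with an older number is stale.
        self.current_gen += 1

    def should_play(self, frame_gen: int) -> bool:
        return frame_gen == self.current_gen

gate = InterruptionGate()
frames = [(0, "old sentence")]                             # produced before the interruption
gate.interrupt()
frames.append((0, "late sentence from the old LLM run"))   # LLM kept going after the interrupt
frames.append((1, "reply to the new utterance"))           # fresh generation

played = [text for gen, text in frames if gate.should_play(gen)]
print(played)  # ['reply to the new utterance']
```

With this kind of gate, late frames from an interrupted generation are silently dropped instead of being spoken.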
Attached recording: tangia_dantest.2025-01-08.20_19_02.mp4