TTS inference already runs in batch mode, but right now it runs the same transcript multiple times and selects the shortest output. The code needs some tweaks to support batching different transcripts.
Hi, is it possible to batch inference the way LLMs do? For example, provide 10 transcripts and batch the requests to increase total throughput?
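The kind of batching being asked about usually comes down to padding several tokenized transcripts to a common length and tracking a mask so the model can ignore the padding. A minimal sketch, assuming a hypothetical `tokenize` stand-in for the project's real tokenizer (not this repo's actual API):

```python
def tokenize(text: str) -> list[int]:
    # Placeholder tokenizer for illustration: one ID per character.
    return [ord(c) for c in text]

def pad_batch(transcripts: list[str], pad_id: int = 0):
    """Tokenize each transcript and right-pad to the longest one.

    Returns (batch, mask): `batch` holds the padded ID sequences,
    `mask` marks real tokens (1) vs padding (0) so a batched
    forward pass can ignore the padded positions.
    """
    ids = [tokenize(t) for t in transcripts]
    max_len = max(len(seq) for seq in ids)
    batch = [seq + [pad_id] * (max_len - len(seq)) for seq in ids]
    mask = [[1] * len(seq) + [0] * (max_len - len(seq)) for seq in ids]
    return batch, mask

# All rows end up the same length, so they can go through the
# model together in a single forward pass.
batch, mask = pad_batch(["hello", "hi"])
```

The remaining work on the model side would be accepting the mask (or per-item lengths) so generation stops independently for each transcript rather than selecting one output for the whole batch.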