I'm interested in contributing to VoiceCraft by adding emotion control functionality. My goal is to enable the model to generate audio with a specified emotion while cloning a voice from a reference audio clip. I have a few questions about implementing this feature:
1. I'm considering using the Emotional Speech Dataset (ESD) for training. Is this dataset suitable, or would you recommend alternatives?
2. Should the loss function be modified to account for the emotion tag? (I sketch one possible conditioning approach below.)
3. The paper mentions that training VoiceCraft took around 240 hours. Would implementing this new feature require a similar training time, or could it be done more efficiently?
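For context, here is roughly the kind of conditioning I have in mind, as a minimal PyTorch sketch. The class name, the five-emotion label set (taken from ESD), and the hidden size are my own placeholders rather than anything from the VoiceCraft codebase:

```python
import torch
import torch.nn as nn

class EmotionConditioning(nn.Module):
    """Adds a learned emotion embedding to the decoder's input embeddings."""

    def __init__(self, num_emotions: int = 5, d_model: int = 1024):
        super().__init__()
        # ESD covers five categories: neutral, happy, angry, sad, surprise.
        # d_model is a placeholder; it would need to match VoiceCraft's hidden size.
        self.emotion_embedding = nn.Embedding(num_emotions, d_model)

    def forward(self, token_embeddings: torch.Tensor, emotion_id: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, d_model)
        # emotion_id: (batch,) integer emotion label per utterance
        emo = self.emotion_embedding(emotion_id).unsqueeze(1)  # (batch, 1, d_model)
        return token_embeddings + emo

# Toy usage: two sequences, both tagged with emotion id 1 ("happy" in my labeling).
cond = EmotionConditioning()
x = torch.randn(2, 100, 1024)
out = cond(x, torch.tensor([1, 1]))
print(out.shape)  # torch.Size([2, 100, 1024])
```

Under this approach the training objective could stay the existing token prediction loss, with the emotion tag influencing the model only through the added embedding, but I'm not sure that would be sufficient, hence question 2.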
As someone new to open-source contributions, I'd appreciate any guidance on how to proceed with this feature addition. Thank you for your help!