Add standalone python scripts for local usage #95
Conversation
Did you see #34?
This is planned to supersede that, since I'd like to avoid attempting to do environment setup in the script itself. I also want to provide scripts for both speech editing and TTS. Will start after I finish my current PR.
Great, just wanted to be sure you knew about it / would re-use anything that's useful if you can/want. Good luck on your work.
Definitely feel free to use whatever you can from this to save yourself work or time! Unfortunately, I got to a spot where I couldn't dedicate more time to the script I put up. I kept getting errors about audiocraft not being found when running it, and since Python isn't my forte, I wasn't sure how to rectify that between the parent environment and the inner Conda environment. You should still be able to reuse the environment setup work if you want, especially the conditional installation of Python modules and the pip handling. I know @pgosar mentioned not doing setup stuff, but just to throw the idea out there: you could put it behind an --install-deps flag or something. Best of luck!
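As a rough illustration of that idea (the flag name and package list are hypothetical, not anything actually in this PR), conditional dependency installation could be gated behind a flag like this:

```python
import argparse
import importlib.util
import subprocess
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--install-deps", action="store_true",
                    help="install missing Python dependencies before running")
args, _ = parser.parse_known_args()

if args.install_deps:
    # hypothetical package list; a real script would pin the project's actual deps
    for pkg in ("audiocraft", "phonemizer"):
        if importlib.util.find_spec(pkg) is None:
            print(f"installing missing dependency: {pkg}")
            subprocess.check_call([sys.executable, "-m", "pip", "install", pkg])
```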
The PR should be functional now. Tomorrow I will take a pass through to clean up the code a little and make sure I didn't miss any potential breakages. Every hardcoded variable concerning inference, outputs, inputs, etc. has been turned into a command line argument. They are all optional, and the default values are whatever they were set to originally. This should be merged before my other PR #94, because I'll need to make changes to the speech editing script based on those changes.
I don't know if this is your exact issue, but when I wrote the Google Colabs I had to clone Audiocraft into the VoiceCraft folder. Regardless, my scripts work without any special environment setup beyond what's currently in the README.
@jasonppy Hi, I should be ready on my side |
Thanks, I'll test it in the next two days |
```python
align_temp = f"{temp_folder}/mfa_alignments"
beam_size = args.beam_size
retry_beam_size = args.retry_beam_size
os.system("source ~/.bashrc && \
```
The forced alignment output is not really used, because the user needs to specify cut_off_sec when calling the script.
tts_demo.py (Outdated)
```python
# take a look at demo/temp/mfa_alignments, decide which part of the audio to use as the prompt
cut_off_sec = args.cut_off_sec  # NOTE: according to the forced-alignment file demo/temp/mfa_alignments/5895_34622_000026_000002.wav, the word "strength" stops at 3.561 sec, so we use the first 3.6 sec as the prompt. This should be different for different audio.
target_transcript = args.target_transcript
```
Add something like:

```python
cut_off_sec, cut_off_word_idx = find_closest_word_boundary(cut_off_sec, word_alignment_fn, margin)
target_transcript = " ".join(orig_transcript.split(" ")[:cut_off_word_idx]) + " " + args.target_transcript
```

where `find_closest_word_boundary` finds the word-end boundary (in the word_alignment_fn file) that is closest to the user-specified cut_off_sec and that also has some gap (specified by margin) before the next word's start boundary, and returns word_end_boundary + margin/2 as the new cut_off_sec. margin can be a user-specified parameter.
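A minimal sketch of what such a helper could look like. It assumes the alignment file is a CSV with Begin/End/Label/Type columns (the column names and layout of the MFA output are assumptions here, not confirmed by this thread):

```python
import csv

def find_closest_word_boundary(cut_off_sec, word_alignment_fn, margin):
    """Find the word-end boundary closest to cut_off_sec that also leaves
    at least `margin` seconds of gap before the next word starts.
    Returns (new_cut_off_sec, cut_off_word_idx), where cut_off_word_idx is
    the number of words to keep as the prompt."""
    words = []
    with open(word_alignment_fn) as f:
        for row in csv.DictReader(f):
            # assumed MFA CSV schema: Begin, End, Label, Type
            if row["Type"] == "words":
                words.append((float(row["Begin"]), float(row["End"]), row["Label"]))

    best_end, best_idx = None, None
    for i, (_, end, _) in enumerate(words):
        next_start = words[i + 1][0] if i + 1 < len(words) else end + margin
        # only consider boundaries with enough silence before the next word
        if next_start - end >= margin:
            if best_end is None or abs(end - cut_off_sec) < abs(best_end - cut_off_sec):
                best_end, best_idx = end, i

    if best_end is None:
        raise ValueError("no word boundary with enough margin found")
    # nudge the cut-off into the gap after the chosen word
    return best_end + margin / 2, best_idx + 1
```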
parser.add_argument("-ot", "--original_transcript", type=str, | ||
default="But when I had approached so near to them The common object, which the sense deceives, Lost not by distance any of its marks,", | ||
help="original transcript") | ||
parser.add_argument("-tt", "--target_transcript", type=str, |
Can you make target_transcript not be a concatenation of the prompt's transcript and the real target transcript? The user won't be able to specify the prompt and cut_off_sec without checking the MFA alignment output. A workaround is written in the comments below.
Thanks!
The main thing I'm concerned about is that the user needs to specify cut_off_sec (which determines the prompt we cut from the input audio) and specify target_transcript as a concatenation of the prompt's transcript and the real target transcript. However, one can't do that without looking at the MFA output, which is itself produced by running the script. A workaround is specified in the comments.
I'll take a look at these in a day or two.
Sorry for the delay - had to complete my final projects/exams. I implemented `find_closest_word_boundary` such that, based on the specified cut-off seconds, it outputs a new one that takes the margins into account. Combined with your suggestion about target_transcript, this means the user should be able to input only the new speech they want to generate and the cut-off point of the original audio to replace. I'm a little confused, though: is the behavior you want that the user can specify only a target transcript, and the script will figure out the cut-off seconds? That would be quite easy to adjust my current implementation to do: all I'd need to do is search for the last matching word between the original and target transcripts and set that point as the cut_off_sec.
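A rough sketch of what that last-matching-word lookup could be. The function name and the word_end_times structure are hypothetical, and it assumes word-level end times are available from the forced alignment:

```python
def infer_cut_off_sec(orig_words, target_words, word_end_times):
    """Find how many leading words the original and target transcripts share,
    and use the end time of the last shared word as the cut-off.
    word_end_times[i] is the forced-alignment end time of orig_words[i]."""
    n_shared = 0
    for ow, tw in zip(orig_words, target_words):
        if ow.lower() != tw.lower():
            break
        n_shared += 1
    if n_shared == 0:
        raise ValueError("transcripts share no leading words")
    return word_end_times[n_shared - 1]
```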
Work in progress to create a Python script to run inference for speech editing and TTS, separate from the Jupyter notebooks.
Will handle #56.
TODO