
Working in WSL but 10min+ inference #153

Open
holunzoo12 opened this issue Aug 15, 2024 · 3 comments

Comments

@holunzoo12

Windows 11 64-bit, WSL Ubuntu, RTX 3080 (10 GB VRAM)

Hey, not sure anyone still checks this, but I've cobbled the environment together after a lot of trial and error and have it working in Gradio. The only problem is that generating audio with either model takes at least 10 minutes, and it uses almost all of my VRAM. I saw a person on Reddit saying they have it running locally through a Jupyter notebook in near real time on their 3080; they just needed to change a line in "inference_tts.ipynb" so it recognizes their GPU. Is this possible in Gradio? What line would I need to edit, and in which file?
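For anyone else hitting this: 10-minute inference in WSL usually means the model is silently running on CPU. I don't know the exact line the Reddit post meant, but the common PyTorch pattern is a device assignment like `device = "cuda" if torch.cuda.is_available() else "cpu"`. A minimal check you can run in the same environment as the Gradio app (`pick_device` is a hypothetical helper for illustration; `torch.cuda.is_available()` is the real API):

```python
# Sketch: verify that PyTorch inside WSL can actually see the GPU.
# If this prints "cpu", the slow inference is expected and the fix is
# making the notebook's/app's device line select "cuda".
def pick_device(cuda_ok: bool) -> str:
    """Return "cuda" when a GPU is visible to PyTorch, else "cpu"."""
    return "cuda" if cuda_ok else "cpu"

try:
    import torch
    device = pick_device(torch.cuda.is_available())
except ImportError:  # torch not installed in this environment
    device = pick_device(False)

print(device)  # "cuda" means the 3080 is visible from WSL
```

If it prints "cpu" even with torch installed, check that you installed a CUDA-enabled torch build and that `nvidia-smi` works inside WSL.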

@ajkessel

@holunzoo12 can you provide any detail on how you got it working under WSL? I've had no luck so far.

@holunzoo12
Author

> @holunzoo12 can you provide any detail on how you got it working under WSL? I've had no luck so far.

Sorry, it's been a bit too long, and it took hours of confused tinkering to put things together so it would run without throwing errors. I can't remember exactly what I did to get it working. I've given up on it for now, since it seems I don't have the hardware to run it anyway. Very sorry.

@ajkessel

Yeah, I gave up on this but whisperspeech seems solid on WSL.
