How to use multi-gpu to train #153
Comments
You could use this fork: #152
Hi @fire-python, thanks for reaching out! I am not sure whether you are requesting multi-GPU training, or a way to limit which GPUs a model uses so that you can run several different Tacotron trainings at the same time.
I hope this answers your question. In any case, I will be adding both of those enhancements to the code, so I am leaving your issue open until I upload them :) Thanks for reaching out @fire-python!
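(For the second interpretation, a quick illustration of restricting which GPUs a single run can see, using the standard CUDA environment variable; this is just a sketch, not code from the Tacotron-2 repo.)

```python
import os
# Must be set before TensorFlow initializes CUDA, i.e. before importing it.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # this process only sees GPU 0

import tensorflow as tf
# A second training launched with CUDA_VISIBLE_DEVICES=1 would then use the other GPU,
# so two independent Tacotron trainings can run side by side.
```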
It is simply the first request (multi-GPU training). I will try the fork you mentioned. Thanks!
Hi @Rayhane-mamah,
I have two GPUs and I want to split the training across both of them in parallel. Could you give some advice on where to change the code, for example where to insert `with tf.device(...):`, so that I can do that? Thanks.
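Not from the Tacotron-2 repo itself, but a minimal sketch of the usual TensorFlow 1.x pattern for data-parallel training with `tf.device`: split each batch across the GPUs, build one model "tower" per GPU with shared variables, then average the per-tower gradients and apply them once. `NUM_GPUS`, `build_tower_loss`, and the toy placeholder model are illustrative stand-ins for the actual Tacotron feeder and model.

```python
import tensorflow as tf

NUM_GPUS = 2

def build_tower_loss(inputs, targets):
    # Stand-in model: a single dense layer; the real Tacotron-2 model would go here.
    predictions = tf.layers.dense(inputs, units=1)
    return tf.losses.mean_squared_error(targets, predictions)

def average_gradients(tower_grads):
    # tower_grads: one list of (grad, var) pairs per GPU; average the grads per variable.
    averaged = []
    for grads_and_vars in zip(*tower_grads):
        grads = tf.stack([g for g, _ in grads_and_vars], axis=0)
        averaged.append((tf.reduce_mean(grads, axis=0), grads_and_vars[0][1]))
    return averaged

with tf.device('/cpu:0'):
    inputs = tf.placeholder(tf.float32, [None, 8])
    targets = tf.placeholder(tf.float32, [None, 1])
    optimizer = tf.train.AdamOptimizer(1e-3)

    # Split the batch so each GPU processes its own slice.
    input_splits = tf.split(inputs, NUM_GPUS, axis=0)
    target_splits = tf.split(targets, NUM_GPUS, axis=0)

    tower_grads = []
    for i in range(NUM_GPUS):
        with tf.device('/gpu:%d' % i):
            # AUTO_REUSE makes all towers share the same variables.
            with tf.variable_scope('model', reuse=tf.AUTO_REUSE):
                loss = build_tower_loss(input_splits[i], target_splits[i])
                tower_grads.append(optimizer.compute_gradients(loss))

    # Average gradients across towers and apply a single update step.
    train_op = optimizer.apply_gradients(average_gradients(tower_grads))
```

The same idea would apply inside the repo's training graph construction; whether the maintainer's eventual multi-GPU support looks like this is up to them.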