Clarification on Number of Training Steps #65
In the paper and in the config comment, you state that you train the model for 1M steps each for the discriminator and generator. However, the config itself uses 20,000,000. Could you clarify which of these numbers is a typo?
Comments
Thank you for your attention. We set an upper limit of 2 million steps for training. In practice, the training process is often terminated earlier based on observations from TensorBoard.
So you mean to say that the number of training steps is 2 million, not 20 million?
2 million, not 20 million.
I'm not sure I understand. The number in your config is not 2 million, it is 20 million (there are 7 zeros, not 6). Are you saying that the number specified in the paper and the comments is wrong, or that the config is wrong? These two numbers are not consistent with each other.
We have further updated the config for clarity. Thank you.
Thank you for updating the config - did you use 2M or 20M when training the models presented in the paper? This affects things like the learning rate scheduler.
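For context on the scheduler point above, here is a minimal sketch (hypothetical `base_lr`, warmup, and function name; not this repository's actual code) of a warmup-plus-cosine schedule keyed to the configured total step count. It illustrates why the 2M vs 20M distinction matters: the same training step maps to very different learning rates depending on the horizon.

```python
# Sketch only: assumes a linear-warmup + cosine-decay schedule driven by the
# total step count from the config. Values below are illustrative.
import math

def cosine_lr(step, total_steps, base_lr=2e-4, warmup_steps=10_000, min_lr=1e-6):
    """Linear warmup to base_lr, then cosine decay to min_lr at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    progress = min(progress, 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

# At step 1,000,000 the schedule is roughly halfway through its decay with a
# 2,000,000-step horizon, but has barely begun decaying with a 20,000,000-step
# horizon, so the model trains at a much higher learning rate in the latter case.
for total in (2_000_000, 20_000_000):
    print(f"total_steps={total:>10,}  lr at step 1M = {cosine_lr(1_000_000, total):.2e}")
```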