Training loss logs #393

Open
FJonske opened this issue Sep 12, 2024 · 0 comments
FJonske commented Sep 12, 2024

Hi,

Do you happen to have any of your training/validation losses logged somewhere, and if so, is there any chance I could get that log?

I'd like to get a rough feel for whether my hyperparameter changes have a positive or negative impact on convergence speed, without committing to a full training run. The background: I have a more modern DGX available and can potentially train with more data, and I intend to recover some speed by scaling the batch size and learning rate upwards. I'd just like to know whether my runs look roughly similar to yours in terms of training behavior, i.e. loss decrease per computation step.
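For concreteness, here is a minimal sketch of the batch-size/learning-rate adjustment I have in mind, following the linear scaling rule (Goyal et al., 2017). The base values below are placeholders for illustration, not your actual hyperparameters:

```python
def scaled_lr(base_lr: float, base_batch_size: int, new_batch_size: int) -> float:
    """Scale the learning rate linearly with the batch size (linear scaling rule)."""
    return base_lr * new_batch_size / base_batch_size

# Example: quadrupling the batch size quadruples the learning rate.
# Placeholder base values; substitute the repository's actual settings.
new_lr = scaled_lr(base_lr=1e-4, base_batch_size=8, new_batch_size=32)
print(new_lr)  # 4e-04
```

In practice this is usually paired with a learning-rate warmup phase, since the larger step size can destabilize early training.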

Best regards,
Frederic
