[low-bit optim] Add COAT optimizer #1190
I would like to work on this @gau-nernst!!
@MirMustafaAli Go ahead and submit a PR 😄. Let me know if you face any problems.
@gau-nernst Which section should I look at to implement "dynamic range expansion"? From my understanding of the repo, it should live in the float8 folder, since it targets the Hopper architecture using the float8 dtype. Any pointers, PRs, or reference methods I can follow would be very helpful.
You can park it under … You can extend our current … Another option is to create a separate optimizer. See https://github.com/pytorch/ao/blob/000a49026459dd1dadf5ca34322d98e7b1680250/torchao/prototype/low_bit_optim/adam.py. You can wrap all of the logic in a functional way (see …).
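The "functional" style mentioned above can be sketched with plain floats (the real adam.py operates on tensors and quantized states; the function name, signature, and defaults here are illustrative, not the repo's API):

```python
import math

def single_param_adam(p, grad, exp_avg, exp_avg_sq, step,
                      lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step for a single parameter, written as a pure function:
    all state goes in as arguments and comes back out as return values.
    This is the shape that makes it easy to swap in quantized state
    (dequantize on entry, quantize on exit)."""
    exp_avg = beta1 * exp_avg + (1 - beta1) * grad
    exp_avg_sq = beta2 * exp_avg_sq + (1 - beta2) * grad * grad
    bias1 = 1 - beta1 ** step          # bias correction for first moment
    bias2 = 1 - beta2 ** step          # bias correction for second moment
    denom = math.sqrt(exp_avg_sq / bias2) + eps
    p = p - lr * (exp_avg / bias1) / denom
    return p, exp_avg, exp_avg_sq
```

A COAT variant would keep the same functional shape and insert the dynamic range expansion/contraction around the fp8 quantize/dequantize of `exp_avg` and `exp_avg_sq`.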
Thanks!! I'll work on your advice.
Paper: https://arxiv.org/abs/2410.19313
Code: https://github.com/NVlabs/COAT (not available yet)
It seems like we already have most of the building blocks. The only new logic is "dynamic range expansion".
We can start implementing it first, then wait for the official code release for numeric checks.
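Based on my reading of the paper, dynamic range expansion raises optimizer-state magnitudes to a power k, chosen so the tensor's dynamic range matches the fp8 format's, before quantizing; the inverse power is applied on dequantization. The e4m3 bounds and function names below are assumptions for illustration, pending the official code release:

```python
import math

def expand_k(absmax, absmin, fp8_max=448.0, fp8_min=2.0**-9):
    """Exponent k such that raising |x| to the k-th power stretches the
    tensor's dynamic range (absmax/absmin) to the fp8 dynamic range.
    fp8_max/fp8_min are assumed e4m3 bounds (max normal / min subnormal)."""
    dr_fp8 = math.log(fp8_max / fp8_min)
    dr_x = math.log(absmax / absmin)
    return dr_fp8 / dr_x

def expand(x, k):
    """Apply sign-preserving power expansion before fp8 quantization."""
    return math.copysign(abs(x) ** k, x)

def contract(x, k):
    """Inverse transform, applied after dequantization."""
    return math.copysign(abs(x) ** (1.0 / k), x)
```

The expand/contract pair is an exact inverse in real arithmetic, so any accuracy loss comes only from the fp8 quantization in between; the expansion just spends more of e4m3's range on the values the optimizer state actually takes.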