Dear authors,
thanks for your great work!
I would like to know whether you use mixed precision for training, e.g. `torch.amp = True` or `torch.cuda.amp.autocast(enabled=use_amp)`?
I found that the mip-splatting rasterization only supports fp32, so I guess you didn't enable AMP? But flash attention only supports fp16, so I'm confused.
Best,
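For context on what such a setup could look like: a common PyTorch pattern (not confirmed as what the authors actually do) is to scope `autocast` per region, running the attention blocks in half precision and dropping back to fp32 around the rasterizer. A minimal sketch, with hypothetical stand-in modules in place of the real model:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real pipeline; these names are not from the repo.
backbone = nn.Linear(16, 16).cuda()   # e.g. the transformer blocks using flash attention
head = nn.Linear(16, 3).cuda()        # e.g. the Gaussian-parameter head feeding the rasterizer
optimizer = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()))

use_amp = True
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x = torch.randn(8, 16, device="cuda")
gt = torch.randn(8, 3, device="cuda")

optimizer.zero_grad(set_to_none=True)

# Half-precision region: flash attention requires fp16/bf16, so run it under autocast.
with torch.cuda.amp.autocast(enabled=use_amp):
    feats = backbone(x)

# Full-precision region: an fp32-only rasterizer runs with autocast disabled,
# and the half-precision features are cast back to float32 first.
with torch.cuda.amp.autocast(enabled=False):
    pred = head(feats.float())
    loss = nn.functional.mse_loss(pred, gt)

# GradScaler handles loss scaling for the mixed-precision backward pass.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```

With this kind of per-region scoping, the fp32-only rasterizer never sees half-precision tensors, while the attention layers still get the fp16 inputs they require.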
I have the same question. Looking forward to a response.