
About training precision #123

Open
guanjunwu opened this issue Jan 7, 2025 · 1 comment

Comments

@guanjunwu

Dear authors,

thanks for your great work!

I would like to know whether you use mixed precision for training, e.g. torch.amp = True or torch.cuda.amp.autocast(enabled=use_amp)?

I found that the mip-splatting rasterization only supports fp32, so I guess you didn't enable AMP? But flash attention only supports fp16, so I'm confused.
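For reference, here is a minimal sketch of what I imagine such a mixed setup could look like: autocast around the attention part so flash attention can run in fp16, then an explicit cast back to fp32 before the rasterizer. The names `attention_backbone`, `rasterize`, and the batch keys are hypothetical placeholders, not the repo's actual API.

```python
import torch
import torch.nn as nn

def training_step(attention_backbone: nn.Module, rasterize, batch, use_amp: bool = True):
    # Run the attention/transformer part under autocast so flash attention
    # can use fp16 kernels.
    with torch.cuda.amp.autocast(enabled=use_amp, dtype=torch.float16):
        gaussian_params = attention_backbone(batch["images"])

    # Leave autocast and cast back to fp32 explicitly before rasterizing,
    # since the mip-splatting CUDA kernels only accept fp32 tensors.
    gaussian_params = {k: v.float() for k, v in gaussian_params.items()}
    rendered = rasterize(gaussian_params, batch["cameras"])

    loss = nn.functional.l1_loss(rendered, batch["targets"])
    return loss
```

Is something along these lines what you do, or is training done entirely in fp32?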

Best,

@zjh21

zjh21 commented Jan 13, 2025

I have the same question. Looking forward to a response.
