configurability

- ability to apply `Float8Linear` to individual modules (sketched below)
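For context on the per-module item above, here is a minimal sketch of what module-level selection could look like. It is not the library's actual API: `to_float8_linear` is a placeholder conversion (real code would construct a `Float8Linear`, e.g. via a `from_float`-style helper), and the filter signature is illustrative.

```python
import copy
import torch.nn as nn

def to_float8_linear(linear: nn.Linear) -> nn.Module:
    # placeholder: real code would build a Float8Linear from `linear`
    return copy.deepcopy(linear)

def swap_selected_linears(model: nn.Module, should_convert) -> nn.Module:
    # walk the model and convert only the nn.Linear modules selected by the
    # filter, leaving everything else in high precision
    for name, child in model.named_children():
        if isinstance(child, nn.Linear) and should_convert(child, name):
            setattr(model, name, to_float8_linear(child))
        else:
            swap_selected_linears(child, should_convert)
    return model

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))
# convert every linear except the final projection (child name "2" here)
swap_selected_linears(model, should_convert=lambda mod, name: name != "2")
```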
performance

- `torch._scaled_mm` support for rowwise scaled float8 gemm
- fusion of the pattern `output1 = pointwise(input); output2 = transpose(output1)` (pytorch#130015; a small repro is sketched below)
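As a small illustration of the second performance item, the snippet below only reproduces the shape of the pattern whose fusion pytorch#130015 tracks; `torch.relu` is an arbitrary stand-in for the pointwise op, and the snippet does not demonstrate the fix.

```python
import torch

@torch.compile
def pointwise_then_transpose(x: torch.Tensor):
    output1 = torch.relu(x)  # stand-in for an arbitrary pointwise op
    output2 = output1.t()    # transpose of the pointwise output
    return output1, output2

y1, y2 = pointwise_then_transpose(torch.randn(128, 256))
```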
distributed

other

- add the `use_fast_accum` (float8 accumulation of gemm) option to the UX (Allow for modifying the scaled_mm compute, pytorch-labs/float8_experimental#144); an example call is sketched below
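To make the `use_fast_accum` item concrete, here is a minimal sketch of a `torch._scaled_mm` call with fast accumulation enabled. It assumes a recent PyTorch build and a GPU with float8 support (e.g. H100); since `_scaled_mm` is a private op whose signature has shifted across releases, the argument names are illustrative.

```python
import torch

device = "cuda"
M, K, N = 64, 128, 32
a_hp = torch.randn(M, K, device=device)
b_hp = torch.randn(K, N, device=device)

# quantize both operands to float8 with per-tensor scales
fp8_max = torch.finfo(torch.float8_e4m3fn).max
a_scale = fp8_max / a_hp.abs().max()
b_scale = fp8_max / b_hp.abs().max()
a_fp8 = (a_hp * a_scale).to(torch.float8_e4m3fn)
# the second operand of the float8 gemm must be column-major
b_fp8 = (b_hp * b_scale).to(torch.float8_e4m3fn).t().contiguous().t()

out = torch._scaled_mm(
    a_fp8,
    b_fp8,
    scale_a=a_scale.reciprocal(),  # dequantization scales applied to the result
    scale_b=b_scale.reciprocal(),
    out_dtype=torch.bfloat16,
    use_fast_accum=True,           # reduced-precision (fast) accumulation in the gemm
)
```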
copied from pytorch-labs/float8_experimental#187