Thank you for the awesome work. I want to allocate a new uint64_t type tensor.
```slang
uint P = conic.size(0);
TorchTensor<uint64_t3> dL_duvd_int = TorchTensor<uint64_t3>.alloc(P, 3);
TorchTensor<uint64_t3> dL_dconic_int = TorchTensor<uint64_t3>.alloc(P, 3);
TorchTensor<uint64_t> dL_dopacity_int = TorchTensor<uint64_t>.alloc(P);
TorchTensor<uint64_t3> dL_dfeature_int = TorchTensor<uint64_t3>.alloc(P, 3);
TorchTensor<uint64_t2> dL_dndc_int = TorchTensor<uint64_t2>.alloc(P, 2);
```
However, it seems that the scalar type is not supported.
I am using CUDA 12.2, PyTorch 2.1.1, and slangtorch 1.2.6.
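A possible interim workaround, sketched below under the assumption that the kernel can reinterpret the bits as unsigned: PyTorch 2.1 does not expose a `uint64` dtype, but `torch.int64` has the same 64-bit width, so the buffers could be allocated as `int64` on the Python side and passed to the kernel. The buffer names and the value of `P` mirror the snippet above (`P` would really come from `conic.size(0)`); this is only an illustration, not a confirmed slangtorch pattern.

```python
import torch

P = 8  # hypothetical stand-in for conic.size(0)
device = "cuda" if torch.cuda.is_available() else "cpu"

# Allocate 64-bit signed buffers; the bit pattern is what matters,
# and the kernel side would treat them as unsigned.
dL_duvd_int     = torch.zeros((P, 3), dtype=torch.int64, device=device)
dL_dconic_int   = torch.zeros((P, 3), dtype=torch.int64, device=device)
dL_dopacity_int = torch.zeros((P,),   dtype=torch.int64, device=device)
dL_dfeature_int = torch.zeros((P, 3), dtype=torch.int64, device=device)
dL_dndc_int     = torch.zeros((P, 2), dtype=torch.int64, device=device)
```

The reinterpret-as-unsigned step would have to happen inside the Slang kernel, since the Python-side dtype stays `int64`.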
Thanks for raising this. Is this issue blocking you? We plan to get to this one in Q1 2025. Any concerns with that timeline?
No feedback, so this is planned for this quarter.