Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! #1
Comments
I solved that by transferring it into
I think you have to reduce the batch size. Even though I have two 2080 Ti GPUs, I set the batch size to 2.
My GPU is a Titan Xp with 12 GB of memory and the image size is 576×576, but I still get an "out of memory" error even when I set the batch size to 1.
I am facing the same issue; can you share your solution? @afpapqy @landiaokafeiyan
I modified it a little and I can now run without the device error. In models/swin_transformer_v2.py, line 294 is an example: you can use another variable to change the tensor's device.
Thank you for this solution! Original: logit_scale = torch.clamp(self.logit_scale, max=torch.log(torch.tensor(1. / 0.01))).exp()
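The modified line itself is not shown above, so here is a minimal sketch of the kind of change that removes the mismatch, assuming the fix is to keep the clamp bound on the same device as the parameter (logit_scale below is a hypothetical stand-in for self.logit_scale):

```python
import math

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in for self.logit_scale, which lives on the GPU with the model.
logit_scale = torch.log(10 * torch.ones(1, 1, 1, device=device))

# Original line 294: torch.tensor(1. / 0.01) is created on the CPU, so torch.clamp
# sees cuda:0 and cpu at the same time and raises the error reported in this issue:
#   torch.clamp(logit_scale, max=torch.log(torch.tensor(1. / 0.01))).exp()

# Option 1: build the clamp bound on the parameter's device.
bound = torch.log(torch.tensor(1. / 0.01, device=logit_scale.device))
scale = torch.clamp(logit_scale, max=bound).exp()

# Option 2: pass a plain Python float, which avoids the extra tensor entirely.
scale = torch.clamp(logit_scale, max=math.log(1. / 0.01)).exp()
```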
Hi there,
Thanks for your excellent work. I get this error when I train and test your code. Do you have any idea what is wrong? As far as I can tell, the data and the model are both on CUDA.
Thanks in advance!
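For reference, a minimal check that the model and a batch really share a device might look like the following (a sketch; the model and variable names are hypothetical, not taken from this repository):

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-ins for the real model and input batch.
model = torch.nn.Linear(8, 2).to(device)
images = torch.randn(4, 8, device=device)

# Every parameter and every input tensor should report the same device.
print(next(model.parameters()).device)
print(images.device)

# If they differ, move them explicitly; a tensor's .to() returns a new tensor,
# so the result must be reassigned (modules are moved in place, but reassigning is safe).
model = model.to(device)
images = images.to(device)
```

Even if this check passes, the error can still come from a tensor that the model creates on the CPU during the forward pass, as with the logit_scale line discussed in the comments above.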