Torch version #21
Comments
Hi, I guess it's a PyTorch version issue. Higher versions no longer allow in-place shorthands such as "loss += loss1" when accumulating losses. Try "loss = loss + loss1" instead of the shorthand. If it still doesn't work, downgrade PyTorch to 1.1 or 1.2. Thanks.
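A minimal sketch of the suggested change (the names loss1 and loss2 are placeholders, not taken from this repo's code):

```python
# Sketch: accumulate losses out-of-place so autograd's tensor version counter
# is not bumped on a tensor the backward pass still needs.
loss = loss1 + loss2        # out-of-place addition creates a new tensor

# Avoid the in-place form, which newer PyTorch versions may reject at backward time:
# loss = loss1
# loss += loss2

loss.backward()
```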
I will try, thank you!
Hi, there is another solution. d_loss should be back-propagated immediately once you have calculated it.
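A rough sketch of that ordering in a typical GAN-style loop (discriminator, generator, criterion, and the optimizers are placeholders, not this repo's actual code):

```python
# Discriminator step: compute d_loss and back-propagate it immediately,
# before any later update modifies the tensors its graph depends on.
d_optimizer.zero_grad()
d_real = criterion(discriminator(real_images), real_labels)
d_fake = criterion(discriminator(fake_images.detach()), fake_labels)
d_loss = d_real + d_fake
d_loss.backward()
d_optimizer.step()

# Generator step afterwards, with its own freshly computed loss.
g_optimizer.zero_grad()
g_loss = criterion(discriminator(fake_images), real_labels)
g_loss.backward()
g_optimizer.step()
```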
Hello, I installed torch 1.7.0, but I get this error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 512, 4, 4]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). I changed inplace=True to inplace=False, but the error persists. Can you give me some advice?
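In case it helps the debugging, a minimal way to follow the hint from the error message (this only enables PyTorch's built-in anomaly detection; nothing here is specific to this repo):

```python
import torch
import torch.nn as nn

# Enable anomaly detection before the training loop; the failing backward pass will
# then report the forward operation whose output was later modified in place.
torch.autograd.set_detect_anomaly(True)

# Note: nn.ReLU(inplace=True) is only one possible source of this error; an
# optimizer.step() or a "+=" on an activation between forward and backward can
# raise the same message, so flipping inplace to False is not always enough.
relu = nn.ReLU(inplace=False)
```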