out of memory: GPU memory keeps growing during training #252
Comments
Someone mentioned this further down the thread: at line 76 of train.py, if those two statements are in the wrong order, it apparently causes a GPU memory leak.
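The two statements at train.py line 76 are not quoted in the thread, so the sketch below is only the conventional PyTorch update order that this kind of fix usually restores; the model, optimizer, and data are placeholders, not taken from the repo:

```python
import torch

model = torch.nn.Linear(4, 1)           # stand-in for the repo's model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    inputs = torch.randn(8, 4)
    targets = torch.randn(8, 1)

    optimizer.zero_grad()                # clear stale gradients first
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    loss.backward()                      # compute grads, then free the graph
    optimizer.step()                     # finally apply the update
    # Reordering these calls, or keeping a reference to `loss` across
    # iterations, can retain autograd state and make memory grow.
```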
@deepxzy Hi! Thanks for your answer. I tried your fix, but it doesn't work for me.
I have a similar problem of memory steadily increasing during training. After debugging, I found it is the eval phase whose memory usage keeps growing.
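A common cause of memory growth during evaluation is running the forward pass with autograd enabled, so each batch's computation graph stays alive as long as its outputs are referenced. A minimal sketch of the usual safeguard, assuming a standard classification setup (the function, model, and loader names are illustrative, not from this repo):

```python
import torch

@torch.no_grad()  # do not build autograd graphs during evaluation
def evaluate(model, loader, device):
    model.eval()  # disable dropout, use running BatchNorm stats
    total, correct = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        logits = model(images)
        # .item() detaches results to Python scalars, so no tensors
        # (or graphs) accumulate across batches
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    model.train()  # restore training mode for the caller
    return correct / total
```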
I train with the NVIDIA PyTorch Docker image and also have this problem. Not using pin_memory resolved it for me.
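For reference, pin_memory is a torch.utils.data.DataLoader flag that stages batches in page-locked host RAM to speed up host-to-GPU copies. Disabling it as the commenter suggests looks like the following; the dataset here is a dummy stand-in, since the repo's DataLoader arguments are not shown in the thread:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for the repo's dataset; shapes are arbitrary.
train_dataset = TensorDataset(torch.randn(128, 3, 32, 32),
                              torch.randint(0, 10, (128,)))

loader = DataLoader(train_dataset, batch_size=32, shuffle=True,
                    num_workers=4,
                    pin_memory=False)  # the suggested change
```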
Things already checked:
loss.item() is used correctly, so that is not the problem.
The dataloader is not accumulating data as it loads, either.
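For context, "loss.item() is used correctly" refers to the classic PyTorch leak where the loss tensor, still attached to its autograd graph, is accumulated across iterations; calling .item() avoids it. A minimal illustration of the difference (the variable names are illustrative):

```python
import torch

running_loss = 0.0
for step in range(100):
    x = torch.randn(8, 4, requires_grad=True)
    loss = (x ** 2).mean()

    # Leaky: accumulating the tensor keeps every iteration's autograd
    # graph alive, so GPU/CPU memory grows without bound.
    # running_loss = running_loss + loss

    # Correct: .item() converts to a Python float, dropping the graph.
    running_loss += loss.item()
```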