I use four A100s to run `python -m prover.launch --config=configs/RMaxTS.py --log_dir=logs/RMaxTS_results`,
but I always get a CUDA out of memory error.
I added some print messages:
Then I get:
All models try to load on the first GPU. Is this expected? Or maybe PyTorch behavior is different in the newer version (I am using '2.5.1+cu121', while requirements.txt pins 2.2.1).
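One common cause of this symptom: every worker process defaults to `cuda:0` unless it is explicitly pinned to its own device. A minimal sketch of a round-robin worker-to-GPU assignment (this is illustrative only, not the repo's actual launcher code; `assign_devices` is a hypothetical helper):

```python
import os

def assign_devices(num_workers: int, num_gpus: int) -> dict:
    """Map each worker rank to a GPU id, round-robin."""
    return {rank: rank % num_gpus for rank in range(num_workers)}

if __name__ == "__main__":
    mapping = assign_devices(num_workers=4, num_gpus=4)
    print(mapping)  # {0: 0, 1: 1, 2: 2, 3: 3}
    # Inside each spawned worker, before importing torch, one could pin the
    # visible device so the model loads on that GPU rather than on cuda:0:
    # os.environ["CUDA_VISIBLE_DEVICES"] = str(mapping[rank])
```

Setting `CUDA_VISIBLE_DEVICES` per process before `torch` initializes CUDA makes each worker see only its assigned GPU as `cuda:0`, which would explain why the unpinned version piles everything onto the first card.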