Hi,
I'm trying to understand device allocation here. I have GPUs with different capacities, and the program stops with an OOM error during `backward()` even when free memory is available.

In the code, I see two critical parts for GPU allocation. In class `StyleTransfer`, you create a device plan to spread the load of the ~27 layers of VGG across the GPUs, meaning you send the first 5 layers to GPU 0 and all the others to GPU 1 (my reading of the plan is sketched below), so I'm not sure whether the load is spread at all during the backward pass.
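For reference, here is that plan written out as I understand it (just a sketch to check my reading, not a quote of the repo's code; `device_plan` and `device_for_layer` are illustrative names):

```python
import torch

# A two-GPU plan as I understand it: a dict mapping a starting layer
# index to the device that layer and all following layers live on.
devices = [torch.device('cuda:0'), torch.device('cuda:1')]
device_plan = {0: devices[0], 5: devices[1]}  # layers 0-4 -> GPU 0, layers 5+ -> GPU 1

def device_for_layer(i, plan):
    """Return the device for layer i: the plan entry with the largest start <= i."""
    return plan[max(start for start in plan if start <= i)]

print(device_for_layer(3, device_plan))   # cuda:0
print(device_for_layer(20, device_plan))  # cuda:1
```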
How would you see a version where the load is spread in proportion to each GPU's capacity?
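Just to make the question concrete, something along these lines is what I have in mind (a rough sketch under my own assumptions; `capacity_weighted_plan` is a name I made up, and it reads free memory per GPU with `torch.cuda.mem_get_info`):

```python
import torch

def capacity_weighted_plan(num_layers, devices):
    """Assign contiguous layer ranges to devices in proportion to each
    device's currently free memory. Returns {start_layer_index: device}."""
    free = [torch.cuda.mem_get_info(d)[0] for d in devices]  # free bytes per GPU
    total = sum(free)
    plan, start = {}, 0
    for dev, f in zip(devices, free):
        plan[start] = dev
        # Give each device a layer share proportional to its free memory;
        # a split point past num_layers would simply go unused.
        start += max(1, round(num_layers * f / total))
    return plan

devices = [torch.device('cuda:0'), torch.device('cuda:1')]
plan = capacity_weighted_plan(27, devices)  # e.g. {0: cuda:0, 14: cuda:1}
```

One caveat I'm aware of: the memory cost per layer is not uniform (the early VGG layers hold the largest activation maps, which the backward pass keeps around), so splitting by layer count is only a first approximation; weighting the split by activation size would probably be closer.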
Regards,
J