By default, joblib.Parallel memmaps any numpy ndarray shared between tasks when it is larger than 1e6 bytes (the `max_nbytes` threshold), to avoid excessive memory allocation. In any experiment involving the big MSD PSE matrix (~1 GB = 1e9 bytes), that matrix gets memmapped, and I think this is preventing the script from running fully in parallel. We have plenty of memory, so it's totally safe for this object to be copied across tasks. Setting `max_nbytes=2e9` is probably safe and may avoid this locking issue.
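For concreteness, here's a minimal sketch of the proposed fix. `pse_matrix` and `row_sum` are hypothetical stand-ins for the real MSD PSE matrix and per-task work; only the `max_nbytes` argument is the actual change being proposed:

```python
import numpy as np
from joblib import Parallel, delayed

# Stand-in for the ~1 GB MSD PSE matrix (kept small so the sketch runs
# quickly; anything over joblib's default 1e6-byte threshold would
# otherwise get memmapped).
pse_matrix = np.random.rand(2000, 2000)  # ~32 MB of float64

def row_sum(i, mat):
    # Placeholder for the real per-task work on the shared matrix.
    return mat[i].sum()

# Raising max_nbytes above the array size (or passing max_nbytes=None)
# stops joblib from swapping the array out to a disk-backed memmap, so
# each worker receives an in-memory copy instead.
results = Parallel(n_jobs=4, max_nbytes=int(2e9))(
    delayed(row_sum)(i, pse_matrix) for i in range(10)
)
```

The trade-off is that without memmapping each worker process gets its own serialized copy of the array, which multiplies memory use by the number of workers, hence the "we have tons of memory" caveat above.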