I'm coding the BFGS algorithm in DENISE, but I have an issue with matrix initialization using the function fmatrix(). Could you please check this error:
Message from PE 0
R U N - T I M E E R R O R:
allocation failure 2 in function fmatrix()
...now exiting to system.
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
I initialize three matrices for the BFGS method:
identity = fmatrix(1, NLBFGS_vec, 1, NLBFGS_vec);
inv_hes = fmatrix(1, NLBFGS_vec, 1, NLBFGS_vec);
h_inv_plus = fmatrix(1, NLBFGS_vec, 1, NLBFGS_vec);
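For reference, a minimal sketch (not part of DENISE; the helper name, the assumed NLBFGS_vec value and the 1 GB warning threshold are illustrative assumptions) of how the size of such an allocation can be estimated before calling fmatrix():

#include <stdio.h>

/* Illustrative helper: report how much memory an n x n matrix of
   single-precision floats would need before fmatrix() is asked for it. */
void report_matrix_size(long n)
{
    double bytes = (double)n * (double)n * sizeof(float);
    printf("Requested matrix: %ld x %ld -> %.2f GB\n", n, n, bytes / 1e9);
    if (bytes > 1e9)  /* arbitrary 1 GB warning threshold */
        printf("Warning: an allocation of this size is likely to fail.\n");
}

int main(void)
{
    long NLBFGS_vec = 500 * 174 * 3;  /* assumed: full model-vector length of the Marmousi-2 example */
    report_matrix_size(NLBFGS_vec);
    return 0;
}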
I don't think that implementing BFGS instead of l-BFGS for a large-scale optimization problem like FWI in DENISE Black-Edition is a good idea, due to the significant memory requirements for storing the inverse Hessian. For the isotropic elastic PSV problem with three parameter classes (vp, vs, rho), the inverse Hessian would be of size (NX * NY * 3) x (NX * NY * 3), not NLBFGS_vec x NLBFGS_vec. In the case of the small Marmousi-2 problem with
NX = 500, NY = 174
and assuming 4 bytes per single-precision float, storing the inverse Hessian would require
(500 * 174 * 3)**2 * 4 * 1e-9 ~ 272 GB RAM
What are the benefits of using BFGS instead of l-BFGS optimization?
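To make the comparison concrete, here is a small back-of-the-envelope sketch in plain C (the number of stored l-BFGS vector pairs, m = 20, is an assumption for illustration): full BFGS stores the N x N inverse Hessian, whereas l-BFGS only keeps m pairs of model/gradient difference vectors, i.e. roughly 2 * m * N floats.

#include <stdio.h>

/* Illustrative sketch: memory footprint of full BFGS vs l-BFGS for the
   Marmousi-2 example quoted above. Grid size and m are assumptions. */
int main(void)
{
    long NX = 500, NY = 174, NPAR = 3;  /* grid size and parameter classes */
    long N  = NX * NY * NPAR;           /* length of the model vector */
    long m  = 20;                       /* assumed number of stored l-BFGS pairs */

    double bfgs_gb  = (double)N * N * sizeof(float) / 1e9;  /* full inverse Hessian */
    double lbfgs_gb = 2.0 * m * N * sizeof(float) / 1e9;    /* m (s, y) vector pairs */

    printf("full BFGS : %.1f GB\n", bfgs_gb);   /* ~272 GB   */
    printf("l-BFGS    : %.3f GB\n", lbfgs_gb);  /* ~0.042 GB */

    return 0;
}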