Exodus inside MPI fails #174
Comments
Can you let me know the version numbers of MPI, Exodus, Julia, NetCDF_jll and HDF5_jll that are being used in this example?
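For reference, the relevant versions could be collected with something like the following (a sketch; `MPI.versioninfo()` is assumed to be available in recent MPI.jl releases):

```julia
# Sketch: gather the requested version information from the active project.
import Pkg
Pkg.status(; mode = Pkg.PKGMODE_MANIFEST)   # resolved versions, incl. NetCDF_jll and HDF5_jll

using InteractiveUtils
versioninfo()          # Julia version and platform details

using MPI
MPI.versioninfo()      # which MPI binary MPI.jl is bound to (assumed API in recent MPI.jl)
```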
And I forgot to mention: if I put `using Exodus` below `MPI.Init()`, it doesn't crash.
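For illustration, the ordering that avoids the crash would look roughly like this (a hypothetical minimal script, not the one from the report):

```julia
# Sketch of the working ordering: initialize MPI before loading Exodus.
using MPI
MPI.Init()

using Exodus   # loading Exodus after MPI.Init() reportedly avoids the crash

rank = MPI.Comm_rank(MPI.COMM_WORLD)
println("rank $rank running")

MPI.Finalize()
```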
There could potentially be conflicting MPI versions then. Have you tried running the example with `mpiexecjl` rather than `mpiexec`? See the MPI.jl docs if you're not sure what I mean.
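Roughly, installing and using the wrapper looks like this (a sketch; the destination directory is only an example, see the MPI.jl docs for the exact options):

```julia
# Install the mpiexecjl wrapper once, from Julia (destdir is only an example location).
using MPI
MPI.install_mpiexecjl(destdir = joinpath(homedir(), ".local", "bin"))

# Afterwards the example would be launched with something like:
#   mpiexecjl --project=. -n 3 julia script.jl
```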
Yes, you could be right: `mpiexecjl` is indeed working, but unfortunately we can't use it on our HPC system. Is there maybe another solution?
I'm no MPI expert, but I think the solution is to rebuild Exodus_jll.jl locally, built against your system MPI. This will likely involve modifying the build_tarballs.jl file for Exodus in Yggdrasil (https://github.com/JuliaPackaging/Yggdrasil). Building jll packages with BinaryBuilder against MPI isn't well documented, but there are examples that can be followed, such as Trilinos or HDF5. Alternatively, you can bypass Exodus_jll altogether, build exodus locally from the SEACAS GitHub page, and then link things appropriately in a fork of Exodus.jl. I'm not sure if there's a better solution.
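If you go the local-build route, one way to "link things appropriately" without forking Exodus.jl might be Pkg's artifact override mechanism: an `Overrides.toml` in `~/.julia/artifacts/` that points Exodus_jll's artifact at your local SEACAS install. This is only a sketch; the UUID and the artifact name below are placeholders that would have to be taken from Exodus_jll's Project.toml and Artifacts.toml:

```toml
# ~/.julia/artifacts/Overrides.toml -- sketch only.
# Replace the quoted key with the actual UUID of Exodus_jll (from its Project.toml)
# and "Exodus" with the artifact name listed in Exodus_jll's Artifacts.toml.
["00000000-0000-0000-0000-000000000000"]
Exodus = "/path/to/local/seacas/install"
```

Note that this only redirects where the binaries come from; the local SEACAS build would still need to be compiled against the system MPI.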
Thank you, I will look into that.
Hi there, if I run this MPI example and use the Exodus package, MPI crashes:
```shell
mpiexec -n 3 julia --project script.jl
```
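script.jl itself is not reproduced here; a minimal sketch of the kind of script that triggers the crash, assuming Exodus is loaded before MPI is initialized (as discussed in the comments above), would be:

```julia
# script.jl -- hypothetical minimal reproducer, not the original script.
# The problematic ordering: Exodus is loaded before MPI.Init().
using Exodus
using MPI

MPI.Init()
rank = MPI.Comm_rank(MPI.COMM_WORLD)
println("Hello from rank $rank")
MPI.Finalize()
```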
Error:
I'm pretty sure that the package used to work with MPI, but I also tried older releases of the package.
Maybe I'm missing something. How can I keep using Exodus?