
PowerGridPcSenseMPI_TS Seg Fault #15

Open
aaronta opened this issue Mar 9, 2021 · 3 comments

@aaronta

aaronta commented Mar 9, 2021

PowerGridPcSenseMPI_TS.cpp is failing with a segmentation fault at the end of the main loop.

The data has dimensions 192x192x92, and I am using the PowerGrid Docker container with the following Nextflow command:

mpirun --allow-run-as-root -n !{cores} /opt/PowerGrid/bin/PowerGridPcSenseMPI_TS -i !{prepFile} -n 30 -D2 -B 1000 -o ./

where cores=5.

The threads for the 5 MPI ranks and the main loop seem to run all the way to the end, because all 30 iterations for each of the 24 slices and 24 reps save out the mag and phs. Here are the last 2000 lines of the logfile:
command_last2000.log

The only statement left in the main function after the main loop structure is:

closeISMRMRDData(d, hdr, acqTrack);

The segmentation fault seems to happen at the end of the loop or in the ISMRMRD close call.
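For what it's worth, here is a minimal sketch of the kind of guarded teardown I would try if the crash really is in the close path, assuming closeISMRMRDData ultimately destroys an ISMRMRD::Dataset handle plus the header and acquisition-tracking bookkeeping (the helper name, signature, and types below are my guesses, not the actual PowerGrid code):

#include <mpi.h>
#include <vector>
#include <cstddef>
#include <ismrmrd/dataset.h>
#include <ismrmrd/xml.h>

// Hypothetical guarded replacement for closeISMRMRDData: barrier all
// ranks before tearing down the handle to the shared HDF5 file, and
// guard against a double close. The types of hdr and acqTrack are
// assumptions for the sake of the sketch.
void closeISMRMRDDataGuarded(ISMRMRD::Dataset*& d,
                             ISMRMRD::IsmrmrdHeader& hdr,
                             std::vector<std::size_t>& acqTrack) {
    // Make sure every rank has finished reading before any rank closes
    // its handle to the input file.
    MPI_Barrier(MPI_COMM_WORLD);

    if (d != nullptr) {
        delete d;     // ISMRMRD::Dataset destructor closes the HDF5 file
        d = nullptr;  // prevents a second close at program exit
    }
    acqTrack.clear();
    (void)hdr;        // the header is plain data; nothing to release here
}

Something along those lines would at least show whether this is an ordering or double-close problem rather than something deeper in the HDF5 teardown.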

@acerjanic acerjanic added the bug label Apr 1, 2021
@acerjanic acerjanic self-assigned this Apr 1, 2021
@acerjanic
Contributor

This has been an issue for a while that we've been working around.

Previously the reconstructed images would get written to disk and the segmentation fault only happened as we were exiting. Since the data was written out okay (at least previously), the only problem was the bad exit code we were getting.

Is the data still getting written out okay? If not, then this is a higher-priority issue. If it is, it's still an issue, but not a critical one.

@aaronta
Author

aaronta commented Apr 1, 2021

When I wrote the issue, I was using 92 slices instead of 96 (the normal number for 24 slices with a multi-band factor of 4). I acquired new data with 96 slices and still got the seg fault after the last slice and repetition. It seems to get all the way through the looping structure, and the only thing left is to close the file (closeISMRMRDData).

I looked at a different-resolution but successful recon and realized I get the same seg fault there too, so I must be looking at the wrong thing. I will do some more digging and follow up.
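Part of the digging will be getting an actual backtrace out of the failing rank. Here is a minimal sketch of the instrumentation I have in mind, a per-rank SIGSEGV handler built on glibc's backtrace (generic debugging code, not anything that currently exists in PowerGrid):

#include <csignal>
#include <cstdio>
#include <execinfo.h>
#include <mpi.h>
#include <unistd.h>

static int g_rank = -1;  // cached at startup; MPI calls are not safe inside a signal handler

// Dump a backtrace for the faulting rank, then exit with a signal-style code.
// dprintf is not strictly async-signal-safe either, but it is fine for debugging.
static void segvHandler(int sig) {
    void* frames[64];
    int n = backtrace(frames, 64);
    dprintf(STDERR_FILENO, "rank %d caught signal %d, backtrace:\n", g_rank, sig);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
    _exit(128 + sig);
}

// Installed in main() right after MPI_Init:
//   MPI_Comm_rank(MPI_COMM_WORLD, &g_rank);
//   std::signal(SIGSEGV, segvHandler);

That should at least pin down whether the fault is inside closeISMRMRDData, somewhere in HDF5, or in MPI finalization.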

@acerjanic
Contributor

Can you send me a message outside of GitHub as to which machines are good to do the troubleshooting on?
