GM injection in lumbar T2w imaging #5
Preliminary results on
@Nilser3 This is a very interesting and original approach. At first, I would go against this 'fake' creation of GM masks, but if you can prove that it provides better model performance, then I would say go for it!
Both models will be tested on a new dataset coming soon:
When using the template, you will always add the "same" information (about the GM shape), so the information will be redundant, without creating or using variability. This is probably a strong bias to take into account.

To test on an external dataset:
about #5 (comment)
Important to also test on out-of-distribution. Two possibilities for now:
Model Fully Trained on Synthetically Generated Data
This is a model trained exclusively with synthetic data, using the following datasets:
After applying the synth_gm_data.py script, two models were trained for 250 epochs (a training-command sketch is given after the two configurations below):

2D

    "2d": {
        "data_identifier": "nnUNetPlans_2d",
        "preprocessor_name": "DefaultPreprocessor",
        "batch_size": 46,
        "patch_size": [448, 448],
        "median_image_size_in_voxels": [410.0, 410.0],
        "spacing": [0.29296875, 0.29296875],
        ...
    }

3D-fullres

    "3d_fullres": {
        "data_identifier": "nnUNetPlans_3d_fullres",
        "preprocessor_name": "DefaultPreprocessor",
        "batch_size": 2,
        "patch_size": [160, 224, 192],
        "median_image_size_in_voxels": [330.0, 410.0, 410.0],
        "spacing": [0.5000009536743164, 0.29296875, 0.29296875],
        ...
    }
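A minimal sketch of how these two configurations could be launched, assuming nnU-Net v2's CLI and its 250-epoch trainer variant; the dataset ID, fold, and trainer name are placeholders, not values taken from this work:

```python
# Hypothetical launcher for the two nnU-Net trainings whose plan excerpts are
# shown above. Dataset ID and fold are placeholders, and the 250-epoch trainer
# variant name is an assumption about the nnU-Net version used.
import subprocess

DATASET_ID = "501"  # placeholder ID for the synthetic-GM dataset

for config in ("2d", "3d_fullres"):
    subprocess.run(
        ["nnUNetv2_train", DATASET_ID, config, "0",
         "-tr", "nnUNetTrainer_250epochs"],
        check=True,
    )
```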
Both models show promising results.

Results
Following the philosophy of the arXiv paper Can segmentation models be trained with fully synthetically generated data?, I consider that we can add this synthetically generated data to the training of a contrast- and region-agnostic GM segmentation model.
Description
The GM at the lumbo-sacral level looks very different from the cervical GM, so more lumbar image data is required when training a region-agnostic GM segmentation algorithm.
We only have one dataset with visible lumbar GM (lumbar-vanderbilt), with images acquired with a T2S sequence.

Proposition
There is a dataset published in this paper: An open-access lumbosacral spine MRI dataset with enhanced spinal nerve root structure resolution. It contains sagittal T2w whole-spine images and axial T2w images of the lumbo-sacral spinal cord (GM not visible):
The idea is to register the axial T2w images to the PAM50 template, in order to obtain a probabilistic GM mask in the subject space.
For this, we can use the seg_sc_contrast_agnostic model to segment the SC in the axial T2w image, and we can extract the intervertebral disc labels by applying the totalspineseg model to the sagittal T2w image. With these inputs, we could apply the registration to the PAM50 template:
By extracting the probabilistic GM mask of the PAM50 template and warping it to the subject space, we would obtain a soft GM mask on the axial T2w image (see the sketch below):
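A minimal sketch of the registration and template-warping steps, assuming the Spinal Cord Toolbox CLI (sct_register_to_template and sct_warp_template) is called from Python; the file names are placeholders, the flags are written from memory and should be checked against the installed SCT version, and the SC segmentation and disc labels are assumed to have been produced beforehand with the models mentioned above:

```python
# Sketch only: register the axial T2w image to PAM50 and bring the template's
# probabilistic GM mask back into subject space. File names are placeholders;
# the SC segmentation and disc labels are assumed to already exist in the
# axial image space (produced by seg_sc_contrast_agnostic and totalspineseg).
import subprocess

img = "sub-01_acq-ax_T2w.nii.gz"                   # placeholder axial T2w image
sc_seg = "sub-01_acq-ax_T2w_seg.nii.gz"            # SC segmentation (placeholder)
discs = "sub-01_acq-ax_T2w_labels-disc.nii.gz"     # disc labels (placeholder)

# Register the subject image to the PAM50 template.
subprocess.run(["sct_register_to_template", "-i", img, "-s", sc_seg,
                "-ldisc", discs, "-c", "t2"], check=True)

# Warp the template (including its probabilistic GM mask) into subject space;
# the warping-field name below is the expected default output of the previous step.
subprocess.run(["sct_warp_template", "-d", img,
                "-w", "warp_template2anat.nii.gz"], check=True)

# The warped probabilistic GM mask is then expected under label/template/,
# e.g. label/template/PAM50_gm.nii.gz.
```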
By multiplying the soft mask by the axial T2w image and adding this product back to the axial T2w image, we would inject GM contrast into the image; by binarizing the soft mask, we would also obtain a GM segmentation (see my code, and the sketch below).
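A minimal sketch of that injection step, assuming nibabel/numpy and a soft GM mask already warped into the axial T2w space; the injection strength and binarization threshold are illustrative assumptions, not the values used in synth_gm_data.py:

```python
# Sketch of the GM injection: image + k * soft_mask * image brightens the GM
# region, and thresholding the soft mask yields a GM segmentation label.
# The factor k and the 0.5 threshold are assumptions for illustration.
import nibabel as nib
import numpy as np

img_nii = nib.load("sub-01_acq-ax_T2w.nii.gz")            # placeholder file name
gm_soft_nii = nib.load("label/template/PAM50_gm.nii.gz")  # soft GM mask in subject space

img = img_nii.get_fdata()
gm_soft = gm_soft_nii.get_fdata()

k = 0.5  # injection strength (assumption)
img_injected = img + k * gm_soft * img     # add the mask-weighted image to itself

gm_seg = (gm_soft > 0.5).astype(np.uint8)  # binarize the soft mask -> GM label

nib.save(nib.Nifti1Image(img_injected, img_nii.affine, img_nii.header),
         "sub-01_acq-ax_T2w_gm-injected.nii.gz")
nib.save(nib.Nifti1Image(gm_seg, img_nii.affine),
         "sub-01_acq-ax_T2w_label-GM_seg.nii.gz")
```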
Preliminary results:
Related issues:
#2