
GM injection in lumbar T2w imaging #5

Open · Nilser3 opened this issue Dec 15, 2024 · 8 comments

@Nilser3 (Collaborator) commented Dec 15, 2024

Description

The GM at the lumbo-sacral level looks very different from the cervical GM, so training a region-agnostic GM segmentation algorithm requires more lumbar image data.

[image]

We have only one dataset with visible lumbar GM (lumbar-vanderbilt), acquired with a T2S sequence.

Proposition

There is data published in this paper: An open-access lumbosacral spine MRI dataset with enhanced spinal nerve root structure resolution. It contains sagittal T2w whole-spine images and axial T2w images of the lumbo-sacral spinal cord (GM not visible):

[image]

The idea is to register the axial T2w images to the PAM50 template, in order to obtain a probabilistic GM mask in the subject space.

For this, we can use the seg_sc_contrast_agnostic model to segment the SC in the axial T2w image, and obtain the intervertebral disc labels by applying the totalspineseg model to the sagittal T2w image.

We could then run:

```bash
sct_register_to_template -i "${input}_ax-T2w.nii.gz" -s "${input}_ax-T2w_SC.nii.gz" -ldisc "${input}_sag-T2w_labels.nii.gz"
```

and by warping the probabilistic GM mask of the PAM50 into subject space, we would obtain:

[image]
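The warping step could look like this; a minimal sketch, assuming SCT is installed with SCT_DIR set, using the warp_template2anat.nii.gz transform that sct_register_to_template writes (the subject file names are hypothetical):

```python
import os
import subprocess

# Probabilistic GM atlas shipped with SCT (assumes SCT_DIR points at the install).
pam50_gm = os.path.join(os.environ["SCT_DIR"], "data", "PAM50", "template", "PAM50_gm.nii.gz")

# Bring the template GM into the subject's axial T2w space; linear
# interpolation keeps the mask soft (probabilistic).
subprocess.run([
    "sct_apply_transfo",
    "-i", pam50_gm,                      # moving image: PAM50 probabilistic GM
    "-d", "sub-01_ax-T2w.nii.gz",        # destination grid: subject axial T2w
    "-w", "warp_template2anat.nii.gz",   # warp from sct_register_to_template
    "-x", "linear",
    "-o", "gm_soft_in_subject.nii.gz",
], check=True)
```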

Multiplying the soft mask by image_T2w_ax and adding this product back onto image_T2w_ax injects a GM appearance into the image; binarizing the soft mask also yields a GM segmentation (see my code).
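A minimal sketch of that injection with nibabel, using the warped soft mask from the previous snippet (not the exact code linked above; file names and the 0.5 threshold are assumptions):

```python
import nibabel as nib
import numpy as np

img_nii = nib.load("sub-01_ax-T2w.nii.gz")
gm_nii = nib.load("gm_soft_in_subject.nii.gz")
img = img_nii.get_fdata()
gm_soft = np.clip(gm_nii.get_fdata(), 0.0, 1.0)

# Inject GM contrast: add the (image x soft mask) product back onto the
# image, brightening each voxel in proportion to its GM probability.
injected = img + img * gm_soft

# Binarize the soft mask to get the matching GM "ground truth".
gm_bin = (gm_soft > 0.5).astype(np.uint8)

nib.save(nib.Nifti1Image(injected.astype(np.float32), img_nii.affine), "sub-01_ax-T2w_gm-injected.nii.gz")
nib.save(nib.Nifti1Image(gm_bin, gm_nii.affine), "sub-01_ax-T2w_gmseg.nii.gz")
```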

Preliminary results:

[image]

[image]

Related issues:

#2

@Nilser3 (Collaborator, Author) commented Dec 16, 2024

Preliminary results on lumbar-marseille dataset:

The difference in this case is that both the seg_sc_contrast_agnostic and totalspineseg models were applied to the ax-T2w image (with good results in labeling the intervertebral discs).

sub-CTS09

[image]

[image]

Next steps:

  • Train a lumbar GM-seg model on real contrast only (lumbar-vanderbilt)
  • Train a lumbar GM-seg model on real + synthetic contrast
  • Compare both models

Feedback please, @jcohenadad

@jcohenadad (Member) commented:

@Nilser3 This is a very interesting and original approach. My first instinct would be to advise against this 'fake' creation of GM masks, but if you can show that it yields better model performance, then I would say go for it!

@Nilser3 (Collaborator, Author) commented Dec 17, 2024

As a proof of concept, I trained two models:

  1. Model with real data (lumbar-vanderbilt)
     • N of 2D images: 561 for train/valid
  2. Model with real + synth data (lumbar-vanderbilt + shanghai-synth + lumbar-marseille-synth)
     • N of 2D images: 561 + 483 + 960 = 2004 for train/valid

Both are 2D nnUNetV2 models, trained on the same fold 0 and tested on the same real data: 12 subjects (166 2D images) from lumbar-vanderbilt.

Preliminary results:

Comparison of Dice scores measured on 2D slices (statistics using the Wilcoxon test; a sketch of the comparison follows the table below)

[image]

Stat description:

| Statistic | Model real data | Model real + synth data |
|-----------|-----------------|-------------------------|
| Count     | 166.000000      | 166.000000              |
| Mean      | 0.883873        | 0.888816                |
| Std       | 0.057540        | 0.053268                |
| Min       | 0.620155        | 0.588235                |
| 25%       | 0.862014        | 0.871718                |
| 50%       | 0.900974        | 0.899782                |
| 75%       | 0.919583        | 0.920898                |
| Max       | 0.969231        | 0.963563                |
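A minimal sketch of this per-slice comparison, with placeholder scores standing in for the 166 test slices (in practice each score is the Dice between a model's prediction and the GT on one slice):

```python
import numpy as np
from scipy.stats import wilcoxon

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice score between two binary 2D masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

# Placeholder per-slice Dice scores drawn from the reported mean/std;
# replace with dice(pred_slice, gt_slice) over the real test set.
rng = np.random.default_rng(0)
dice_real = rng.normal(0.884, 0.058, size=166).clip(0.0, 1.0)
dice_synth = rng.normal(0.889, 0.053, size=166).clip(0.0, 1.0)

# Paired, non-parametric comparison of the two models on the same slices.
stat, p = wilcoxon(dice_real, dice_synth)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```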

Qualitative results on sub-242174

Color legend:

  • red -> manual GT
  • white -> Model real data
  • green -> Model real + synth data

[images]

But in the same subject, there are slices where both models fail:

[images]

Although the GTs in these slices look odd, I think the synth data does contribute to lumbar GM segmentation.

Maybe we should add it to the contrast-region-agnostic training database.

@Nilser3 (Collaborator, Author) commented Dec 17, 2024

Both models will be tested on a new dataset coming soon: basel-rAMIRA

[image]
From: Cervical and thoracic spinal cord gray matter atrophy is associated with disability in patients with amyotrophic lateral sclerosis

@vcallot commented Dec 17, 2024

When using the template, you will always add the "same" information (about the GM shape), so there will be an over-redundancy of information, without creating/using variability. This is probably a strong bias to take into account.

@Nilser3
Copy link
Collaborator Author

Nilser3 commented Dec 17, 2024

To test on an external dataset: marseille-t2s-template

@jcohenadad (Member) commented:

About #5 (comment):

> Both are 2D nnUNetV2 models, same fold 0, and same real data for testing: 12 subjects (166 2D images) from lumbar-vanderbilt.

Important to also test on out-of-distribution data. Two possibilities for now:

  • PAM50 lumbar T2*
  • train only with synthetic data and test on vanderbilt

@Nilser3 (Collaborator, Author) commented Jan 7, 2025

Model Fully Trained on Synthetically Generated Data

This is a model trained exclusively with synthetic data, using the following datasets:

  • marseille-lumbar: 10 subjects / 1180 2D denoised and ghosted images
  • shanghai-lumbar: 14 subjects / 617 2D denoised and ghosted images
  • bavaria-quebec-spine-ms-unstitched (inferior chunk): 65 subjects / 1283 2D ghosted images (ghosting is illustrated in the sketch below)
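Ghosting can be simulated by modulating alternate phase-encode lines in k-space, which creates a replica shifted by half the field of view; a minimal numpy sketch of that idea (not necessarily what synth_gm_data.py does):

```python
import numpy as np

def add_ghosting(slice2d: np.ndarray, intensity: float = 0.3, pe_axis: int = 0) -> np.ndarray:
    """Simulate MRI ghosting: a period-2 amplitude modulation of k-space
    lines along the phase-encode axis produces a half-FOV ghost."""
    k = np.fft.fft2(slice2d)
    mod = np.ones(k.shape[pe_axis])
    mod[::2] -= intensity  # attenuate every other phase-encode line
    k *= np.expand_dims(mod, axis=1 - pe_axis)
    return np.abs(np.fft.ifft2(k))

# Example on a dummy slice standing in for a real 2D T2w slice.
dummy = np.zeros((64, 64))
dummy[24:40, 24:40] = 1.0
ghosted = add_ghosting(dummy, intensity=0.3)
```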

After applying the synth_gm_data.py script, two models were trained for 250 epochs:

2D:

```json
"2d": {
    "data_identifier": "nnUNetPlans_2d",
    "preprocessor_name": "DefaultPreprocessor",
    "batch_size": 46,
    "patch_size": [448, 448],
    "median_image_size_in_voxels": [410.0, 410.0],
    "spacing": [0.29296875, 0.29296875],
    ...
```

3D-fullres:

```json
"3d_fullres": {
    "data_identifier": "nnUNetPlans_3d_fullres",
    "preprocessor_name": "DefaultPreprocessor",
    "batch_size": 2,
    "patch_size": [160, 224, 192],
    "median_image_size_in_voxels": [330.0, 410.0, 410.0],
    "spacing": [0.5000009536743164, 0.29296875, 0.29296875],
    ...
```
Both show promising results.
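These excerpts follow the layout of the nnUNetPlans.json file that nnUNetv2 preprocessing writes; a minimal sketch to pull out the same fields (the dataset folder name is hypothetical):

```python
import json
from pathlib import Path

# Hypothetical preprocessed-dataset folder; nnUNetv2 writes nnUNetPlans.json there.
plans = json.loads(Path("nnUNet_preprocessed/Dataset101_gmseg/nnUNetPlans.json").read_text())

for name in ("2d", "3d_fullres"):
    cfg = plans["configurations"][name]
    print(name, "patch:", cfg["patch_size"], "batch:", cfg["batch_size"], "spacing:", cfg["spacing"])
```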

Results

  1. lumbar-vanderbilt: 53 subjects / 727 2D images
     [image]
  2. marseille-t2s-template: 25 subjects / 345 2D images
     [image]
  3. PAM50_t2s.nii.gz image: 1 volume / 828 2D images
     [image]

I find it interesting that the 2D model performs better at lower levels, while the 3D_fullres model segments better at higher levels because it generalizes using 3D context.

Following the philosophy of the arXiv paper Can segmentation models be trained with fully synthetically generated data?, I think we can add this synthetically generated data to the training of a contrast-region-agnostic GM segmentation model.
