Use nibabel instead of freesurfer for conforming and unconforming #5

satra opened this issue Aug 16, 2019 · 18 comments

@satra

satra commented Aug 16, 2019

@effigies - is there any nibabel based code that does the same thing that mri_convert -c does?

@effigies

I don't know of it... Do we have a list of everything that mri_convert -c does? I think it's:

  • Crop/extend to (256mm)^3 bounding box
  • Resample to (1mm)^3 voxels
  • Rescale to 0-255

I think you can do the first two with nibabel.processing, and the third is trivial as long as you mind your dtypes.

Does it roll axes to RAS/LPS or anything? We have some stuff there, though I'd need to look up the methods.
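A rough sketch of those steps using nibabel.processing.resample_from_to plus a naive 0-255 rescale; `img` is assumed to be an in-memory nibabel image, the helper name rough_conform is made up, and this is not a full replica of mri_convert -c (no orientation handling):

import numpy as np
import nibabel as nib
from nibabel.processing import resample_from_to


def rough_conform(img, out_shape=(256, 256, 256), voxel_size=1.0):
    # Build a 1mm isotropic target affine centered on the input image's center.
    center = img.affine @ np.append(np.array(img.shape[:3]) / 2.0, 1.0)
    target_affine = np.eye(4)
    target_affine[:3, :3] *= voxel_size
    target_affine[:3, 3] = center[:3] - voxel_size * np.array(out_shape) / 2.0
    # Resample onto the (256 mm)^3 grid, cropping or padding as needed.
    resampled = resample_from_to(img, (out_shape, target_affine))
    # Rescale intensities to 0-255, minding the dtype.
    data = resampled.get_fdata()
    rng = np.ptp(data) or 1.0
    data = np.clip(255.0 * (data - data.min()) / rng, 0, 255)
    return nib.Nifti1Image(data.astype(np.uint8), target_affine)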

@satra

satra commented Aug 16, 2019

@ahoopes - is there a summary somewhere of what mri_convert -c does? or is the best thing to look at the code?

@effigies

IM(paranoid)O the best thing to do is to generate a battery of test files...

@kaczmarj

for the purposes of kwyk, i don't think we need to fully replicate mri_convert --conform. we need to:

  • resample volume to size 256x256x256 and 1mm^3 voxels
  • (probably) re-orient to some common orientation

we could also rescale the data to 0-255, but we standard score the full volumes anyway.
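A minimal sketch of that standard scoring (zero mean, unit variance over the full volume); the helper name is illustrative and `volume` is assumed to be a numpy array:

import numpy as np


def standard_score(volume):
    # Zero-mean, unit-variance scaling over the whole volume.
    volume = np.asarray(volume, dtype=np.float32)
    return (volume - volume.mean()) / volume.std()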

@effigies

effigies commented Aug 16, 2019

Since pretty much anything y'all write is bound to be more broadly useful, you could consider nibabel.cmdline as a home: https://github.com/nipy/nibabel/tree/master/nibabel/cmdline

@ahoopes

ahoopes commented Aug 19, 2019

@satra best bet is probably to check the code for specifics. Aside from the very vague help text, I don't know of any detailed documentation. Chris pretty much got all of it though - --conform trilinearly resamples the input onto a 256^3 template with 1mm^3 resolution, reformats to UCHAR, and converts to LIA orientation (unless direction cosines are preserved with --conform-dc).
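For the orientation piece, a hedged sketch of reorienting an in-memory nibabel image to LIA axis codes with nibabel's orientation utilities; the resampling and UCHAR conversion would be handled separately, and the helper name to_lia is made up:

import nibabel as nib
from nibabel.orientations import axcodes2ornt, io_orientation, ornt_transform


def to_lia(img):
    # Compute the axis permutation and flips from the current orientation to LIA.
    current = io_orientation(img.affine)
    target = axcodes2ornt(("L", "I", "A"))
    return img.as_reoriented(ornt_transform(current, target))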

@kaczmarj

kaczmarj commented Dec 8, 2019

@satra @effigies - i wrote what i think is a good conform script (though it does not change orientation or reformat to UCHAR yet). this uses an updated version of nobrainer.transform._trilinear_interpolation that i have in the add/conform branch of nobrainer.

import nibabel as nib
import numpy as np
import nobrainer
from nobrainer.transform import _trilinear_interpolation, _warp_coords
import tensorflow as tf


def conform(volume, affine, out_shape=None, out_zooms=None):
    if out_shape is None:
        out_shape = (256, 256, 256)
    if out_zooms is None:
        out_zooms = (1.0, 1.0, 1.0)

    # Set up the destination header and affine.
    hdr = nib.Nifti1Header()
    hdr.set_data_shape(out_shape)
    hdr.set_zooms(out_zooms)  # set voxel size
    hdr.set_xyzt_units(2)  # millimeters
    dst_aff = hdr.get_best_affine()

    src_aff = affine
    if np.linalg.det(src_aff) == 0:
        raise ValueError("determinant of source affine should not be 0")
    if np.linalg.det(dst_aff) == 0:
        raise ValueError("determinant of destination affine should not be 0")

    # Voxel-to-voxel transform from the destination grid to the source grid.
    src_aff_inv = np.linalg.inv(src_aff)
    transform = (src_aff_inv @ dst_aff).astype(np.float32)

    # Transform the coordinates to fit the new space.
    coords = _warp_coords(transform, volume_shape=out_shape)

    volume = tf.cast(volume, dtype=tf.float32)
    x = _trilinear_interpolation(volume, coords, output_shape=out_shape)

    img = nib.Nifti1Image(np.asarray(x), affine=dst_aff)
    img.header.set_xyzt_units(2)  # set xyz as mm
    return img


volfile = "/home/jakub/jk-anat.nii.gz"
x, affine = nobrainer.io.read_volume(volfile, return_affine=True)

new_img = conform(x, affine)
nib.save(new_img, "foobar.nii.gz")

@satra

satra commented Dec 10, 2019

@kaczmarj - looks like the hdr is not being set when the new image is being created.

@satra

satra commented Dec 10, 2019

also i think it would be great if this were written without tensorflow and could be included in nibabel itself.

@kaczmarj

@satra - yes, i had some issues copying over the header information. the previous affine was being copied over instead of the new one for some reason, and i haven't looked into it much further. i tried adding the header to the nifti image instantiation, but that caused problems as well. will look into it more.

img = nib.Nifti1Image(np.asarray(x), affine=dst_aff, header=hdr)

yes i will submit a pr to nibabel with the numpy implementation. i used tensorflow here because i had already implemented trilinear interpolation in nobrainer with tensorflow, so that affine transformations could be part of the tf.data.Dataset pipeline.

@satra

satra commented Dec 10, 2019

@kaczmarj - i think @oesteban and @mgxd added / are adding transformations to nibabel, which would likely include interpolation.

@effigies

For interpolation we can use scipy. Check out nibabel.processing.

If your affine isn't getting written as expected, set both the qform and the sform.
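A minimal tensorflow-free sketch along those lines, using scipy.ndimage.affine_transform for the interpolation and setting both the qform and sform as suggested; the name conform_scipy is illustrative, not the eventual nibabel implementation:

import numpy as np
import nibabel as nib
from scipy.ndimage import affine_transform


def conform_scipy(img, out_shape=(256, 256, 256), out_zooms=(1.0, 1.0, 1.0)):
    # Destination header and affine on a 256^3, 1mm isotropic grid.
    hdr = nib.Nifti1Header()
    hdr.set_data_shape(out_shape)
    hdr.set_zooms(out_zooms)
    hdr.set_xyzt_units("mm")
    dst_aff = hdr.get_best_affine()
    # Voxel-to-voxel map from the destination grid to the source grid.
    vox2vox = np.linalg.inv(img.affine) @ dst_aff
    data = affine_transform(
        img.get_fdata(),
        matrix=vox2vox[:3, :3],
        offset=vox2vox[:3, 3],
        output_shape=out_shape,
        order=1,  # trilinear
    )
    out = nib.Nifti1Image(data.astype(np.float32), dst_aff, header=hdr)
    # Set both qform and sform explicitly, per the suggestion above.
    out.set_qform(dst_aff)
    out.set_sform(dst_aff)
    return out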

@kaczmarj

excellent, thank you @satra and @effigies -- i will submit a pr to nibabel in the coming days

@neurolabusc

fastsurfer now includes a Python port of the FreeSurfer conform function under the permissive Apache license. It might be worth adding this to nibabel (perhaps calling it fastsurfer_conform to disambiguate it from the nibabel implementation linked here).

While both nibabel's and FastSurfer's conform functions create 1mm isotropic 256x256x256 volumes, there are important differences. The FreeSurfer approach creates a uint8 image with intensity scaling, clamps voxel intensities, and applies the transform so the output grid is aligned to the NIfTI LIA space. In contrast, the nibabel code creates a float32 image and only rotates to the nearest orthogonal orientation relative to RAS space.

You can see the difference by looking at the SForm of the fslmean.nii.gz image provided with NiiVue. The original image is not isotropic and is not orthogonal to the world space:

sto_xyz:1	2.993839 -0.012764 -0.230091 -92.453972 
sto_xyz:2	-0.007595 2.983177 -0.380603 -85.080360 
sto_xyz:3	0.192016 0.317003 3.572422 -56.159145 
sto_xyz:4	0.000000 0.000000 0.000000 1.000000 
sform_xorient	Left-to-Right
sform_yorient	Posterior-to-Anterior
sform_zorient	Inferior-to-Superior

FreeSurfer and FastSurfer remove the rotation and save as LIA:

sto_xyz:1	-1.000000 0.000000 0.000000 126.798790 
sto_xyz:2	0.000000 0.000000 1.000000 -124.712578 
sto_xyz:3	0.000000 -1.000000 0.000000 152.433075 
sto_xyz:4	0.000000 0.000000 0.000000 1.000000 
sform_xorient	Right-to-Left
sform_yorient	Superior-to-Inferior
sform_zorient	Posterior-to-Anterior

while nibabel still maintains a rotation relative to the RAS coordinates:

sto_xyz:1	0.997946 -0.004255 -0.063914 -122.033951 
sto_xyz:2	-0.002532 0.994392 -0.105723 -111.847046 
sto_xyz:3	0.064005 0.105668 0.992339 -127.223976 
sto_xyz:4	0.000000 0.000000 0.000000 1.000000 
sform_xorient	Left-to-Right
sform_yorient	Posterior-to-Anterior
sform_zorient	Inferior-to-Superior

I can see the merit in minimizing interpolation artifacts associated with removing the rotation, and RAS is the identity matrix for NIfTI space. However, I do think these differences might have an impact in some situations (e.g. machine learning models assuming the rotation is removed).
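A small hedged snippet for reproducing this kind of comparison with nibabel; the file names are placeholders:

import nibabel as nib

for path in ("fslmean.nii.gz", "conformed_fs.nii.gz", "conformed_nib.nii.gz"):
    img = nib.load(path)
    sform, code = img.get_sform(coded=True)
    # Axis codes summarize the orientation; the sform carries any residual rotation.
    print(path, nib.aff2axcodes(img.affine))
    print(sform)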

@oesteban

I may be wrong, but I think models will just take the data array from the NIfTI, so whether there is or there isn't a rotation of the world coordinates w.r.t. index coordinates is not relevant (other than axes ordering and flips, which are important, but the requirement is that the conform function operates consistently across training and inference).

In this case, minimizing interpolation artifacts seems a very much desirable property if it can be included.

That said, the fastsurfer implementation could be a good intermediate point to move this issue forward.

@neurolabusc

@oesteban consider this FLAIR scan from the Stroke Outcome Optimization Project (SOOP), which illustrates image angulation typical for individuals with kyphosis. The images below show the voxel space of the images (ignoring the residual angulation encoded in the nibabel s-form).

The FreeSurfer conform reslices the data so the image grid matches world space:
[image: FreeSurfer conform result]

In contrast, the nibabel approach only rotates voxels to the closest oblique of RAS space:
[image: nibabel conform result]

You can also see differences in the image contrast. nibabel roughly preserves voxel intensity of the source image, while FreeSurfer clamps bright outliers and outputs in the range 0..255.

Another difference is that FreeSurfer defaults to trilinear interpolation, while nibabel defaults to a higher-order interpolation. A consequence of this choice is that the nibabel image shows ringing artefacts, with some negative voxel values (whereas, by definition, typical MR magnitude images cannot have negative voxel values). While higher-order interpolation can preserve higher frequencies, it might be worth clamping voxel intensities so they do not exceed the range of the input image.
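A minimal sketch of that clamping, assuming numpy arrays for the resampled and source data (the helper name is illustrative):

import numpy as np


def clamp_to_source_range(resampled, source):
    # Limit interpolation overshoot and undershoot to the source intensity range.
    return np.clip(resampled, source.min(), source.max())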

I think conform functions are often used for tools that ignore the spatial transform. I can certainly see the rationale for the two approaches, but I do think the FreeSurfer approach is appropriate for many applications where images were intentionally acquired with oblique angulations and the processing tools ignore spatial transforms assuming all images are resliced to a common space. I also think it is important to disambiguate these two approaches.

@oesteban

I think conform functions are often used for tools that ignore the spatial transform. I can certainly see the rationale for the two approaches, but I do think the FreeSurfer approach is appropriate for many applications where images were intentionally acquired with oblique angulations and the processing tools ignore spatial transforms assuming all images are resliced to a common space. I also think it is important to disambiguate these two approaches.

Oh, yes -- totally agree with this. My comment is more in the context of kwyk/nobrainer models. These models are typically going to work in voxel coordinates, so perhaps the nibabel option seems better to me.

That said, I totally agree with the clipping of negative values -- that should be an option, or at least be done before feeding the image into the model.

@satra

satra commented Apr 23, 2024

thanks for chiming in here. it would be great to get a common conform step. i think we can update the nibabel conform to include additional keywords so it can match the freesurfer-style behavior without losing backward compatibility?
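For reference, a hedged example of calling the current nibabel.processing.conform with its existing keywords; the file names are placeholders, and any freesurfer-style extras (e.g. intensity clamping to uint8) would still need new options:

import nibabel as nib
from nibabel.processing import conform

img = nib.load("anat.nii.gz")
# order=1 requests trilinear interpolation; orientation="LIA" matches the
# FreeSurfer axis codes.
conformed = conform(img, out_shape=(256, 256, 256), voxel_size=(1.0, 1.0, 1.0),
                    order=1, orientation="LIA")
nib.save(conformed, "anat_conformed.nii.gz")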
