/**
\page How_To_Guides How To Guides
This section provides step-by-step guidance to apply the CaPTk functionalities:
- \subpage ht_Preprocessing "Pre-processing"
- [DICOM to NIfTI conversion](preprocessing_dcm2nii.html)
- [Image Registration](preprocessing_reg.html)
- [Image Denoising (SUSAN - ITK filter)](preprocessing_susan.html)
- [N3 Bias Correction (ITK filter)](preprocessing_bias.html)
- [N4 Bias Correction (ITK filter)](preprocessing_bias.html)
- [Histogram Matching](preprocessing_histoMatch.html)
- [Z-Score Normalization](preprocessing_zScoreNorm.html)
- [BraTS Pre-processing Pipeline](preprocessing_brats.html)
- \subpage ht_Segmentation "Segmentation"
- [Interactive Machine Learning based - Utilizing Geodesic Distance Transform and SVM](seg_GeoTrain.html)
- [ITK-SNAP](seg_SNAP.html)
- [Deep Learning based - pre-trained models offered for a) skull-stripping and b) brain tumors](seg_DL.html)
- \subpage ht_FeatureExtraction "Feature Extraction"
- [<b>ML Training Module</b>](Training_Module.html)
- \subpage ht_SpecialApps "Specialized Applications"
- [Brain Cancer: Novel MRI Normalization - WhiteStripe](Glioblastoma_WhiteStripe.html)
- [Brain Cancer: Population Atlas](Glioblastoma_Atlas.html)
- [Brain Cancer: Confetti - Diffusion track visualization](Glioblastoma_Confetti.html)
- [Brain Cancer: Glioblastoma Infiltration Index (Recurrence Predictor)](Glioblastoma_Recurrence.html)
- [Brain Cancer: Glioblastoma Pseudo-progression Index](Glioblastoma_Pseudoprogression.html)
- [Brain Cancer: Glioblastoma Survival Prediction Index](Glioblastoma_Survival.html)
- [Brain Cancer: Glioblastoma EGFRvIII Surrogate Index (PHI Estimator)](Glioblastoma_PHI.html)
- [Brain Cancer: Glioblastoma EGFRvIII SVM Prediction](Glioblastoma_EGFRvIII.html)
- [Breast Cancer: Breast Segmentation](BreastCancer_breastSegmentation.html)
- [Breast Cancer: Breast Density Estimation (LIBRA)](BreastCancer_LIBRA.html)
- [Breast Cancer: Texture Feature Extraction](BreastCancer_texture.html)
- [Lung Cancer: Radiomic Analysis of Lung Cancer (SBRT Lung)](LungCancer_SBRT.html)
- [Miscellaneous: Directionality Estimator](Glioblastoma_Directionality.html)
- [Miscellaneous: Diffusion Derivatives](Diffusion_Derivatives.html)
- [Miscellaneous: Perfusion Derivatives](Perfusion_Derivatives.html)
- [Miscellaneous: Perfusion PCA Parameter Extractor](PCA_Extraction.html)
- \subpage ht_utilities "Utilities (CLI only)": Collection of helper utilities such as resampling, thresholding, image information, casting, label statistics, and more.
- [BraTS Metrics Computation](BraTS_Metrics.html)
Notes:
- Training videos can be accessed in our <a href="https://www.youtube.com/channel/UC69N7TN5bH2onj4dHcPLxxA"><b>YouTube Channel</b></a>, specifically in the <a target="_blank" rel="noopener noreferrer" href="https://www.youtube.com/playlist?list=PLXdcXDD5czvjFFQGX9Jm3KouP0H9cWowu"><b>original</b></a> and <a target="_blank" rel="noopener noreferrer" href="https://www.youtube.com/playlist?list=PLXdcXDD5czvgCOfo_LLhAwFRNlpsBfjFx"><b>ITCR 2020</b></a> CaPTk Playlists.
- All applications are available as command-line applications (use parameter "-h" for detailed help messages).
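For example, the options of the helper Utilities executable can be listed with:
\verbatim
${CaPTk_InstallDir}/bin/Utilities -h
\endverbatim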
--------
*/
/**
\page ht_Preprocessing Preprocessing
The following applications can be called from the GUI:
- \subpage preprocessing_dcm2nii "DICOM to NIfTI conversion"
- \subpage preprocessing_reg "Registration"
- \subpage preprocessing_susan "Denoise-SUSAN (ITK filter)"
- \subpage preprocessing_bias "Bias Correction (ITK filter)"
- \subpage preprocessing_histoMatch "Histogram Matching"
- \subpage preprocessing_zScoreNorm "Z-Score Normalization"
- \subpage preprocessing_brats "BraTS Pre-processing Pipeline"
For a full list of available applications and examples, please use the command:
\verbatim
${CaPTk_InstallDir}/bin/Preprocessing -h
\endverbatim
--------
*/
/**
\page preprocessing_dcm2nii DICOM to NIfTI conversion
This tool converts a DICOM series into the Neuroimaging
Informatics Technology Initiative (NIfTI) file format.
<b>REQUIREMENTS:</b>
-# A folder containing the contiguous sequence of a single DICOM series.
<b>USAGE:</b>
-# Launch the tool using the 'Preprocessing' -> 'DICOM to NIfTI' menu option.
-# Specify the directory containing the DICOM images in the series, and the output filename.
-# Click on "Confirm".
-# This can also be used from the command line:
\verbatim
${CaPTk_InstallDir}/bin/Utilities.exe -i C:/test/dicomFolderWithSingleSubject -o C:/PathToOutputFolder/result.nii.gz -d2n
\endverbatim
--------
*/
/**
\page preprocessing_reg Image Co-registration
This tool registers multiple moving images to a single target image using the Greedy Registration technique [1].
This tool is also available from the web on the [CBICA Image Processing Portal](https://ipp.cbica.upenn.edu/). Please see the experiment on the portal for details.
<b>REQUIREMENTS:</b>
-# A Target image.
-# Up to 5 moving images.
<b>USAGE:</b>
-# Launch the "Registration" dialog from the "Preprocessing" menu.
-# Customize the parameters (Metrics, Radius and Iterations) as needed; for details regarding the parameters and their usage, please refer to the Greedy manual [2].
-# Specify the moving images (up to 5) and their respective output images.
-# Click on 'Register'.
-# The output images are saved in the specified locations.
-# This can also be used from the command line:
\verbatim
${CaPTk_InstallDir}/bin/Preprocessing.exe -i moving.nii.gz -rFI fixed.nii.gz -o output.nii.gz -reg Affine -rME NCC-2x2x2 -rIS 1 -rNI 100,50,5
\endverbatim
More usage options are available via the ```greedy``` command-line executable (see documentation for it [here](https://sites.google.com/view/greedyreg/documentation)), which is present in ```${CaPTk_InstallDir}/bin/```.
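As an illustrative sketch (not a CaPTk-specific command), a 3D affine registration with the bundled greedy executable might look like the following; please verify the flags against the greedy documentation linked above, and treat the paths as placeholders:
\verbatim
# minimal greedy affine registration sketch; confirm flags with the greedy documentation
${CaPTk_InstallDir}/bin/greedy -d 3 -a -m NCC 2x2x2 -ia-image-centers -n 100x50x10 -i fixed.nii.gz moving.nii.gz -o affine.mat
\endverbatim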
--------
References:
-# P.A.Yushkevich, J.Pluta, H.Wang, L.E.Wisse, S.Das, D.Wolk, "Fast Automatic Segmentation of Hippocampal Subfields and Medial Temporal Lobe Subregions in 3 Tesla and 7 Tesla MRI", Alzheimer's & Dementia: The Journal of Alzheimer's Association, 12(7), P126-127, DOI:10.1016/j.jalz.2016.06.205
-# www.github.com/pyushkevich/greedy
--------
*/
/**
\page preprocessing_susan Denoise-SUSAN (ITK filter)
This tool smooths an image already loaded in CaPTk, to remove any high frequency intensity variations (i.e., noise), using the SUSAN algorithm [1].
<b>REQUIREMENTS:</b>
-# An image loaded in CaPTk.
<b>USAGE:</b>
-# Load an image in CaPTk.
-# Launch the tool using the 'Preprocessing' -> 'Denoise-SUSAN' menu option.
-# Specify the output filename.
-# Click on 'Save'.
-# The output image is saved in the specified folder and automatically loaded in CaPTk.
-# This can also be used from the command line:
\verbatim
${CaPTk_InstallDir}/bin/Preprocessing.exe -i C:/test/image.nii.gz -o C:/test/image_smooth.nii.gz -ss
\endverbatim
--------
Reference:
-# S.M.Smith, J.M.Brady. "SUSAN-A new approach to low level image processing", International Journal of Computer Vision. 23(1):45-78, 1997, DOI:10.1023/A:1007963824710
*/
/**
\page preprocessing_bias Bias Correction (ITK filter)
This tool corrects an image, already loaded in CaPTk, for magnetic field inhomogeneities using a non-parametric method [1]. Users have the option to switch between N3 [1] and N4 [2] correction.
<b>REQUIREMENTS:</b>
-# An image loaded in CaPTk.
<b>USAGE:</b>
-# Load an image in CaPTk.
-# Launch the tool using the 'Preprocessing' -> 'BiasCorrection' menu option.
-# Specify the output filename.
-# Click on 'Save'.
-# The output image is saved in the specified folder and automatically loaded in CaPTk.
-# This can also be used from the command line:
\verbatim
${CaPTk_InstallDir}/bin/Preprocessing.exe -i C:/test/image.nii.gz -o C:/test/image_biasCorr.nii.gz -n4
\endverbatim
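The example above runs N4 correction. Assuming an analogous N3 flag in the Preprocessing executable (please confirm with the "-h" help output), an N3 run might look like:
\verbatim
# assumes an -n3 flag analogous to -n4; confirm with "Preprocessing -h"
${CaPTk_InstallDir}/bin/Preprocessing.exe -i C:/test/image.nii.gz -o C:/test/image_biasCorr.nii.gz -n3
\endverbatim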
--------
References:
-# J.G.Sled, A.P.Zijdenbos, A.C.Evans. "A nonparametric method for automatic correction of intensity nonuniformity in MRI data", IEEE Trans Med Imaging. 17(1):87-97, 1998, DOI:10.1109/42.668698
-# N.J.Tustison, B.B.Avants, P.A.Cook, Y.Zheng, A.Egan, P.A.Yushkevich, J.C.Gee. "N4ITK: improved N3 bias correction." IEEE Trans Med Imaging. 29(6): 1310-1320, 2010, DOI:10.1109/TMI.2010.2046908
*/
/**
\page preprocessing_histoMatch Histogram Matching
This tool normalizes the intensity profile of an input image based on the intensity profile of a reference image [1].
<b>REQUIREMENTS:</b>
-# A reference and an input image.
<b>USAGE:</b>
-# Launch the tool using the 'Preprocessing' -> 'HistogramMatching' menu option.
-# Specify the Reference and Input images, and the output filename.
-# Click on 'Confirm'.
-# The output image is saved in the specified folder.
-# This can also be used from the command line:
\verbatim
${CaPTk_InstallDir}/bin/Preprocessing.exe -i C:/test/input.nii.gz -o C:/test/output.nii.gz -hi C:/test/target.nii.gz -hb 100 -hq 50
\endverbatim
--------
Reference:
-# L.G.Nyul, J.K.Udupa, X.Zhang, "New Variants of a Method of MRI Scale Standardization", IEEE Trans Med Imaging. 19(2):143-50, 2000, DOI:10.1109/42.836373
*/
/**
\page preprocessing_zScoreNorm Z-Score Normalization
This tool performs z-score based normalization on the loaded image.
<b>REQUIREMENTS:</b>
-# An input image.
<b>USAGE:</b>
-# Launch the tool using the 'Preprocessing' -> 'ZScoringNormalizer' menu option.
-# Specify the input image and the (optional) mask.
-# Specify the parameters (quantization upper/lower and cut-off upper/lower) to remove the outliers.
-# Specify the output image.
-# Click on 'Confirm'.
-# The output image is saved in the specified folder.
-# This can also be used from the command line:
\verbatim
${CaPTk_InstallDir}/bin/Preprocessing.exe -i C:/test/input.nii.gz -m C:/test/input_binaryMask.nii.gz -o C:/test/output.nii.gz -zn 1 -zq 5,95 -zc 3,3
\endverbatim
--------
Reference:
-# T.Rohlfing, N.M.Zahr, E.V.Sullivan, A.Pfefferbaum, "The SRI24 multichannel atlas of normal adult human brain structure", Human Brain Mapping, 31(5):798-819, 2010, DOI:10.1002/hbm.20906
*/
/**
\page preprocessing_brats BraTS Pre-processing Pipeline
This pipeline is also available from the web on the [CBICA Image Processing Portal](https://ipp.cbica.upenn.edu/). Please see the experiment on the portal for details.
<b>REQUIREMENTS:</b>
-# 4 structural MRI images (T1, T1CE, T2, FLAIR), preferably in NIfTI format
- For DICOM images, please pass the first image in each of the series as input, not the folder.
<b>USAGE:</b>
This CLI-only application takes 4 structural brain MRIs as input and performs the following steps [1-3]:
-# Re-orientation to LPS/RAI
-# Image registration to the SRI-24 atlas [4], which includes the following steps:
  -# [N4 Bias correction](preprocessing_bias.html) (This is a TEMPORARY step and is not applied to the final co-registered output images; it is only used to facilitate optimal registration.)
  -# [Rigid Registration](preprocessing_reg.html) of T1, T2, FLAIR to T1CE
  -# [Rigid Registration](preprocessing_reg.html) of T1CE to the SRI-24 atlas [4]
  -# Applying the transformations to the re-oriented images
-# OPTIONAL: [Deep-Learning based Skull-stripping](seg_DL.html) [5]
-# OPTIONAL: [Deep-Learning based Tumor Segmentation](seg_DL.html) [6]
Example command:
\verbatim
${CaPTk_InstallDir}/bin/BraTSPipeline.exe -t1 C:/test/t1.nii.gz -t1c C:/test/t1ce.nii.gz -t2 C:/test/t2.nii.gz -fl C:/test/flair.nii.gz -o C:/test/outputDir
\endverbatim
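When starting from DICOM (as noted in the requirements above), the same flags can point to the first file of each series; the filenames below are placeholders:
\verbatim
# DICOM input: pass the first image of each series, not the folder
${CaPTk_InstallDir}/bin/BraTSPipeline.exe -t1 C:/test/T1_dicom/image_0001.dcm -t1c C:/test/T1CE_dicom/image_0001.dcm -t2 C:/test/T2_dicom/image_0001.dcm -fl C:/test/FLAIR_dicom/image_0001.dcm -o C:/test/outputDir
\endverbatim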
<b>NOTE</b>: This application takes ~30 minutes to finish on an 8-core Intel i7 with 16GB of RAM.
<b>Explanation of output files:</b>
Final co-registered images:
- T1_to_SRI.nii.gz : Co-registered T1 image
- T1CE_to_SRI.nii.gz: Co-registered T1CE image
- T2_to_SRI.nii.gz: Co-registered T2 image
- FL_to_SRI.nii.gz: Co-registered FLAIR image
(Optional) Deep Learning based masks:
- brainMask_SRI.nii.gz
- brainTumorMask_SRI.nii.gz
(Optional) Co-registered images, with brain mask applied:
- T1_to_SRI_brain.nii.gz
- T1CE_to_SRI_brain.nii.gz
- T2_to_SRI_brain.nii.gz
- FL_to_SRI_brain.nii.gz
(Optional) Intermediate files (similar for T1CE, T2, FLAIR):
- T1_raw.nii.gz : NIfTI file converted from the input DICOM, or a copy of the input NIfTI file
- T1_rai.nii.gz : Image re-oriented to LPS/RAI
- T1_rai_n4.nii.gz : Image with N4 bias correction applied to T1_rai.nii.gz
- T1_to_T1CE.mat : transformation matrix of Rigid registration, T1 to T1CE
- T1CE_to_SRI.mat : transformation matrix of Rigid registration, T1CE to SRI
--------
References:
-# B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al. "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)", IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2015) DOI:10.1109/TMI.2014.2377694
-# S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J.S. Kirby, et al., "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features", Nature Scientific Data, 4:170117 (2017) DOI:10.1038/sdata.2017.117
-# S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, et al., "Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge", arXiv preprint arXiv:1811.02629 (2018)
-# T.Rohlfing, N.M.Zahr, E.V.Sullivan, A.Pfefferbaum, "The SRI24 multichannel atlas of normal adult human brain structure", Human Brain Mapping, 31(5):798-819, 2010, DOI:10.1002/hbm.20906
-# S.Thakur, J.Doshi, S.Pati, S.Rathore, C.Sako, M.Bilello, S.M.Ha, G.Shukla, A.Flanders, A.Kotrotsou, M.Milchenko, S.Liem, G.S.Alexander, J.Lombardo, J.D.Palmer, P.LaMontagne, A.Nazeri, S.Talbar, U.Kulkarni, D.Marcus, R.Colen, C.Davatzikos, G.Erus, S.Bakas, "Brain Extraction on MRI Scans in Presence of Diffuse Glioma: Multi-institutional Performance Evaluation of Deep Learning Methods and Robust Modality-Agnostic Training", NeuroImage, Epub-ahead-of-print, 2020.
-# K.Kamnitsas, C.Ledig, V.F.J.Newcombe, J.P.Simpson, A.D.Kane, D.K.Menon, D.Rueckert, B.Glocker, "Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation", Medical Image Analysis, 2016.
*/
/**
\page ht_Segmentation Segmentation
There are various segmentation tools available within CaPTk. They are enumerated as follows:
- \subpage seg_GeoTrain "Geodesic Training Segmentation"
- \subpage seg_Geodesic "Geodesic Distance Transform-based Segmentation"
- \subpage seg_SNAP "ITK-SNAP"
- \subpage seg_DL "Deep Learning Segmentation"
*/
/**
\page seg_GeoTrain Geodesic Training Segmentation
'GeodesicTraining' builds upon the geodesic distance based segmentation by using machine learning (SVMs) [1].
It also adds support for multiple areas of interest and multiple modalities, allows the algorithm to be iterated until a desired outcome is achieved, and removes the need for thresholding.
<b> REQUIREMENTS:</b>
One or more co-registered images of the same subject. An ROI image has to be drawn, containing sample labels for the different areas the user wants to segment (see usage for details).
<b> USAGE:</b>
-# Load the images (different modalities of the same subject) into CaPTk. It doesn't matter which of the loaded images is currently selected; everything that is loaded is passed to the algorithm.
-# Draw over the images using CaPTk's drawing tools. Example: suppose you have an image of a brain tumor and you want to segment it into 3 different regions: tumor core (the main tumor), edema (the fluid surrounding the tumor core) and healthy tissue (the rest of the brain). You have to use three different colors; suppose you use RED for tumor core, GREEN for edema and BLUE for healthy tissue (the choice of colors is up to you). Draw a little over the tumor core with the red marker, a little over the edema with the green one, and a little anywhere that is not an affected area with the blue marker. Keep in mind to always use a color for areas that are not of interest (as we did here for healthy tissue); otherwise the algorithm will try to classify the healthy tissue as one of the colors you have actually drawn. Obviously, you have to use at least two different colors. You don't need to draw excessively; in fact it is advised not to go overboard, as this leaves you room for better corrections later on (see below). It's best to stick to the free-hand drawing tool with a 1x1 or 3x3 marker size.
-# After you have drawn, click Applications > 'Geodesic Training Segmentation' from the menu and wait ~30 seconds (depending on the number of images, the images' size and your computer specs).
-# You will now see the output segmentation where your labels were previously drawn. You can keep this segmentation if it is OK; chances are, though, that it contains mistakes. Using the drawing tools again, draw over *some* of those mistakes (right on the output segmentation!) and then click Applications > 'Geodesic Training Segmentation' again. You can repeat this as many times as you want (there probably won't be a need for more than 2-3 runs, though). Something that you might not know about CaPTk is that you can change the opacity of the ROI by clicking an image, then the 'opacity' checkbox next to the same image, and using the slider.
-# Once you are satisfied with the segmentation, you can save it using File > Save > ROI. Most of the time, people don't want their segmentations to contain labels for the healthy tissue. If that is the case, then before saving, select the color you used for healthy tissue from the 'label selector' in the drawing tools, click 'Clear selected label', and then save.
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, and can run in the following format:
\verbatim
${CaPTk_InstallDir}/bin/GeodesicTraining.exe -i C:/inputImage1.nii.gz,C:/inputImage2.nii.gz,... -l C:/maskWithAtLeastTwoDifferentClasses.nii.gz -o C:/outputDirectoryNotFilename
\endverbatim
To use the CLI executable, you have to create the ROI image elsewhere. This image should be zero everywhere, except for the voxels that you have drawn. Voxels that belong to the same area should have the same value. For instance, in the example we discussed before, you can use value 1 for tumor core, value 2 for edema and value 3 for healthy tissue. If you don't want the output to contain the healthy tissue, you can use the parameter "-cl 3/0", which means that value 3 will be changed to 0. Keep in mind that values for healthy tissue are still needed; they are just not going to show up in the output segmentation. If you want to iterate (correct mistakes in the segmentation), it's a bit harder to do with the CLI: you have to add new values to the input ROI (not the output segmentation) and run again. If you are going to use the CLI, it is recommended to spend a little more time when you draw your input ROI, so that the first segmentation you get is good and you don't have to iterate the algorithm often.
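For instance, reusing the command format shown above together with the label-change option described here (paths and label values are only illustrative):
\verbatim
# value 3 (healthy tissue) is remapped to 0 in the output segmentation
${CaPTk_InstallDir}/bin/GeodesicTraining.exe -i C:/t1.nii.gz,C:/flair.nii.gz -l C:/roiWithLabels_1_2_3.nii.gz -o C:/outputDirectory -cl 3/0
\endverbatim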
Note that the CLI allows users to configure a pixel limit. Images above this size (after normalization) will be downsampled for processing, then upsampled when done. This produces a faster, but less precise, result. The GUI does not currently perform this downscaling step.
--------
References:
-# B.Gaonkar, L.Shu, G.Hermosillo, Y.Zhan, "Adaptive geodesic transform for segmentation of vertebrae on CT images", Proceedings Volume 9035, Medical Imaging 2014: Computer-Aided Diagnosis; 9035:16, 2014, DOI:10.1117/12.2043527.
--------
*/
/**
\page seg_Geodesic Geodesic Distance Transform-based Segmentation
The geodesic distance transform based segmentation is a semi-automatic technique to delineate structures of distinct intensity.
<b> REQUIREMENTS:</b>
A single image with distinct boundaries for the structure that needs to be segmented [1].
<b> USAGE:</b>
-# Load in CaPTk the image that you want to segment.
-# Using Label 1 from the drawing tab, annotate a region of the tissue you would like to segment in the image.
-# Launch the application using the 'Applications' -> 'Geodesic Segmentation' menu option.
-# The mask is populated within ~5 minutes, showing the progress at the bottom right corner of CaPTk.
-# The mask is visualized automatically in the visualization panels.
-# You can revise the resulting segmentation mask (Label:1), by selecting the "Geodesic" preset and changing the "Threshold" at the bottom right corner of CaPTk.
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, and can run in the following format:
\verbatim
${CaPTk_InstallDir}/bin/GeodesicSegmentation.exe -i C:/inputImage.nii.gz -m C:/maskWithOneLabel.nii.gz -o C:/outputImage.nii.gz -t 20
\endverbatim
--------
*/
/**
\page seg_SNAP ITK-SNAP
ITK-SNAP is a stand-alone software application used to segment structures in 3D medical images, which also provides other utilities [1] - http://www.itksnap.org/pmwiki/pmwiki.php.
Within CaPTk specifically, ITK-SNAP is tightly integrated as a tool used for segmentation, accepting files chosen through the CaPTk interface and returning results for further use within CaPTk. ITK-SNAP uses a combination of random forests and level sets to obtain precise segmentations of structures [1]. Please see the following video for detailed instructions: https://www.youtube.com/watch?v=-gBcFxKf-7Q
--------
References:
-# P.Yushkevich, Y.Gao, G.Gerig, "ITK-SNAP: An interactive tool for semi-automatic segmentation of multi-modality biomedical images", Conf Proc IEEE Eng Med Biol Soc. 2016:3342-3345, 2016, DOI:10.1109/EMBC.2016.7591443.
--------
*/
/**
\page seg_DL Deep Learning Segmentation
For our Deep Learning based segmentation, we use DeepMedic [1,2], and users can run inference with CaPTk using pre-trained models (trained on BraTS 2017 Training Data) for Brain Tumor Segmentation or Skull Stripping [3]. Users also have the option to train their own models using DeepMedic and use those models for their own tasks (be mindful of the preprocessing).
<b> REQUIREMENTS:</b>
- For the tumor segmentation model and the 4-modality ("multi-4") skull-stripping model:
- The 4 basic MRI modalities (T1, T1-Gd, T2 and T2-FLAIR) for a subject which are co-registered.
- For modality-agnostic skull-stripping model:
- A single structural MRI modality (can be either T1, T1-Gd, T2 or T2-FLAIR).
<b> USAGE:</b>
-# Load the images that you want to segment in CaPTk.
-# [OPTIONAL] Load the brain mask - this is used for normalization.
-# Select the appropriate pre-trained model folder (either brain tumor segmentation or skull stripping is available): for custom models, select appropriate option and browse to the model directory.
-# Select the output folder.
-# Click on 'Applications' -> 'Brain Tumor Segmentation' or 'Skull Stripping'
-# This can also be used from the command line:
\verbatim
${CaPTk_InstallDir}/bin/DeepMedic.exe -md ${CaPTk_InstallDir}/data/deepMedic/saved_models/brainTumorSegmentation -i C:/data/t1.nii.gz,C:/data/t1ce.nii.gz,C:/data/t2.nii.gz,C:/data/fl.nii.gz -m C:/data/optionalMask.nii.gz -o C:/data/outputSegmentation.nii.gz
${CaPTk_InstallDir}/bin/DeepMedic.exe -i c:/t1_withSkull.nii.gz -o c:/output -md c:/CaPTk_install/data/deepMedic/saved_models/skullStripping_modalityAgnostic # modality-agnostic skull-stripping
\endverbatim
<br>
--------
References:
-# K.Kamnitsas, C.Ledig, V.F.J.Newcombe, J.P.Simpson, A.D.Kane, D.K.Menon, D.Rueckert, B.Glocker, "Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation", Medical Image Analysis, 2016.
-# K.Kamnitsas, L.Chen, C.Ledig, D.Rueckert, B.Glocker, "Multi-Scale 3D CNNs for segmentation of brain Lesions in multi-modal MRI", in proceeding of ISLES challenge, MICCAI 2015.
-# S.P.Thakur, J.Doshi, S.Pati, S.M.Ha, C.Sako, S.Talbar, U.Kulkarni, C.Davatzikos, G.Erus, S.Bakas, "Skull-Stripping of Glioblastoma MRI Scans Using 3D Deep Learning", Springer - BrainLes 2019 - LNCS, Vol.11992, 57-68, 2020, DOI: 10.1007/978-3-030-46640-4_6
-# S.P.Thakur, J.Doshi, S.Pati, S.M.Ha, C.Sako, S.Talbar, U.Kulkarni, C.Davatzikos, G.Erus, S.Bakas, "Brain Extraction on MRI Scans in Presence of Diffuse Glioma: Multi-institutional Performance Evaluation of Deep Learning Methods and Robust Modality-Agnostic Training", NeuroImage 2020 [Accepted]
--------
*/
/**
\page ht_FeatureExtraction Feature Extraction
<b>REQUIREMENTS:</b>
-# An image or a set of co-registered images.
-# An ROI file containing various labels, for which features will be extracted.
-# NOTE: CaPTk can also extract COLLAGE features [1] using the [Python implementation](https://github.com/radxtools/collageradiomics/), but this functionality must be invoked from the separate "CollageFeatures" executable.
-# By contrast, IBSI-2 convolutional features based on the [CBICA Python implementation](https://github.com/CBICA/CaPTK_IBSI2) are fully available from the CLI and GUI.
<b>USAGE:</b>
-# Once image(s) and an ROI file are loaded, go to the "Feature Extraction" panel.
-# In the "Customization" section, you can select one of the preset of features to extract from the drop-down menu:
- <b>Custom</b>: allows the manual selection & customization of specific features.
- <b>Custom_Lattice</b>: allows the manual selection & customization of specific features using a lattice-based strategy for feature extraction [2]. A regular lattice is virtually overlaid on the ROI and features are computed on local square (for 2D images) or cubic (for 3D images) regions centered on each lattice point. The final feature estimates are then calculated as summary statistics of the corresponding feature measurements across all regions defined by the lattice. The parameterization of the lattice is described in the <a href="tr_Apps.html#appsFeatures">Technical Reference</a> section.
- <b>Lung_SBRT</b>: enables the extraction of features that are used in [3].
-# For the "Custom" and "Custom_lattice" presets, and once specific features are selected, you can use the <b>Advanced</b> button (in "Customization") to parameterize further the individual selected features.
-# In the <b>Image Selection</b> section, you can select the radio button of "<b>Selected Image</b>" or "<b>All Images</b>" to extract features for either the visualized image or all the images loaded in CaPTk, respectively.
-# In the <b>Mask Selection</b> section, you have the option to define the label number(s) for which you want to extract features. Note that when more than one label number is entered, the 'pipe' symbol (|) should be used as a separator, e.g., 2|4|5. Equivalently, text labels should be provided corresponding to each label number, again separated by a 'pipe', e.g., ED|EN|NE.
-# Once the output CSV file is defined you can click on <b>Compute + Save</b>. (Note that the CSV file should be in ASCII format)
-# For the command line interface, a user can copy the file <b>${CaPTk_Installation_Folder}/data/features/1_params_default.csv</b> to a location where they have write access, change the parameters as they see fit, and then pass that file to the CLI under the "-p" parameter (example shown below).
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, and can run in the following format:
\verbatim
${CaPTk_InstallDir}/bin/FeatureExtraction.exe -n AAC0_timestamp -i /usr/path/T1.nii.gz,/usr/path/T2.nii.gz -t T1,T2 -m /user/path/mask.nii.gz -r 2,4,5 -l ED,EN,NE -p /usr/path/features.csv -o /usr/path/output.csv
\endverbatim
-# Batch processing is also available (for more details, please see the relevant <a href="ht_FeatureExtraction.html">technical reference</a> page):
\verbatim
${CaPTk_InstallDir}/bin/FeatureExtraction.exe -b /usr/path/batch.csv -p /usr/path/features.csv -o /usr/path/output.csv
\endverbatim
-# As intermediate results of the "Custom_Lattice" preset, the user also has the option to save feature maps, i.e., images that represent the spatial distribution of the corresponding feature measurements as sampled by the lattice over the ROI, using the following CLI format:
\verbatim
${CaPTk_InstallDir}/bin/FeatureExtraction.exe -n AAC0_timestamp -i /usr/path/T1.nii.gz,/usr/path/T2.nii.gz -t T1,T2 -m /user/path/mask.nii.gz -r 2,4,5 -l ED,EN,NE -p /usr/path/features.csv -o /usr/path/output.csv -f 1
\endverbatim
The above command extracts features defined in the parameter file defined in "-p" for the input images defined in "-i" in the regions defined in "-m" and "-r". The output is populated using the subject ID from "-n", modality information from "-t" and annotation label information from "-l" and saves the results in the path defined in "-o".
-# For more information on the defaults of the feature extraction and how to customize it, please see our <a href="tr_FeatureExtraction.html#tr_fe_defaults">technical reference</a> page.
-# The output of Feature Extraction can be used to run the <a href="Training_Module.html">Training Module</a>.
--------
References:
-# P.Prasanna, P.Tiwari, A.Madabhushi, "Co-occurrence of Local Anisotropic Gradient Orientations (CoLlAGe): A new radiomics descriptor", Nature Scientific Reports, 2016.
-# Y.Zheng, B.M.Keller, S.Ray, Y.Wang, E.F.Conant, J.C.Gee, D.Kontos, "Parenchymal texture analysis in digital mammography: A fully automated pipeline for breast cancer risk assessment", Medical Physics, 2015.
-# H.Li, M.Galperin-Aizenberg, D.Pryma, C.Simone, Y.Fan, "Predicting treatment response and survival of early-stage non-small cell lung cancer patients treated with stereotactic body radiation therapy using unsupervised two-way clustering of radiomic features", Int. Workshop on Pulmonary Imaging, 2017.
--------
*/
/**
\page ht_SpecialApps Specialized Applications (SAs) Usage
Contains the following applications:
- \subpage Glioblastoma_WhiteStripe "Brain Cancer: WhiteStripe Normalization"
- \subpage Glioblastoma_Atlas "Brain Cancer: Population Atlas"
- \subpage Glioblastoma_Confetti "Brain Cancer: Confetti"
- \subpage Glioblastoma_Recurrence "Brain Cancer: Glioblastoma Infiltration Index (Recurrence)"
- \subpage Glioblastoma_Pseudoprogression "Brain Cancer: Glioblastoma Pseudo-progression Index"
- \subpage Glioblastoma_Survival "Brain Cancer: Glioblastoma Overall Survival Prediction"
- \subpage Glioblastoma_PHI "Brain Cancer: Glioblastoma EGFRvIII Surrogate Index (PHI Estimator)"
- \subpage Glioblastoma_EGFRvIII "Brain Cancer: Glioblastoma EGFRvIII SVM Index"
- \subpage BreastCancer_breastSegmentation "Breast Cancer: Breast Segmentation"
- \subpage BreastCancer_LIBRA "Breast Cancer: Breast Density Estimation (LIBRA)"
- \subpage BreastCancer_texture "Breast Cancer: Texture Feature Extraction"
- \subpage LungCancer_SBRT "Lung Cancer: Radiomics Analysis of Lung Cancer (SBRT Lung)"
- \subpage Glioblastoma_Directionality "Miscellaneous: Directionality Estimator"
- \subpage Perfusion_Alignment "Miscellaneous: Perfusion Alignment"
- \subpage Diffusion_Derivatives "Miscellaneous: Diffusion Derivatives"
- \subpage Perfusion_Derivatives "Miscellaneous: Perfusion Derivatives"
- \subpage PCA_Extraction "Miscellaneous: PCA Volume Extraction"
- \subpage Training_Module "Miscellaneous: Training Module"
--------
*/
/**
\page Glioblastoma_WhiteStripe Brain Cancer: WhiteStripe Normalization
This algorithm normalizes conventional brain MRI scans [1] by detecting a latent subdistribution of normal tissue and linearly scaling the histogram of images. It is to be used on structural modalities only.
<b>REQUIREMENTS:</b>
- Bias-corrected (N3 or N4) T1-weighted or T2-weighted images, ideally either skull-stripped or rigidly aligned to MNI space.
<b>USAGE:</b>
-# Launch the WhiteStripe UI using the 'Applications -> WhiteStripe Normalization' menu option.
-# Specify the Input and Output files and different parameters (defaults are populated).
-# For T1-Gd images, use the "T1" option and for T2-FLAIR images, use the "T2" option.
-# Click on 'Run WhiteStripe' and the results can be seen in a slice format using the "Toggle Mask/Image" checkbox.
-# Use 'Level Display' when needed.
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example command:
\verbatim
${CaPTk_InstallDir}/bin/WhiteStripe.exe -i C:/inputImage.nii.gz -o C:/outputImage.nii.gz
\endverbatim
NOTE: WhiteStripe uses the KernelFit library from Lentner (https://github.com/glentner/KernelFit).
--------
References:
-# R.T.Shinohara, E.M.Sweeney, J.Goldsmith, N.Shiee, F.J.Mateen, P.A.Calabresi, S.Jarso, D.L.Pham, D.S.Reich, C.M.Crainiceanu, Australian Imaging Biomarkers Lifestyle Flagship Study of Ageing, Alzheimer's Disease Neuroimaging Initiative. "Statistical normalization techniques for magnetic resonance imaging", Neuroimage Clin. 6:9-19, 2014, DOI:10.1016/j.nicl.2014.08.008
--------
*/
/**
\page Glioblastoma_PHI Brain Cancer: Glioblastoma EGFRvIII Surrogate Index (PHI Estimator)
This application evaluates the Epidermal Growth Factor Receptor splice variant III (EGFRvIII) status in primary glioblastoma, by quantitative pattern analysis of the spatial heterogeneity of peritumoral perfusion dynamics from Dynamic Susceptibility Contrast (DSC) MRI scans, through the Peritumoral Heterogeneity Index (PHI / φ-index) [1-3].
<b>REQUIREMENTS:</b>
-# T1-Gd: To annotate the immediate peritumoral ROI.
-# T2-FLAIR: To annotate the distant peritumoral ROI.
-# DSC-MRI: To perform the analysis and extract the PHI.
-# (Command-line only) ROI Label file consisting of near-region (Label=1) and far-region (Label=2)
<b>USAGE:</b>
-# Annotate 2 ROIs: one near (Label:1) the enhancing tumor and another far (Label:2) from it (but still within the peritumoral region). Rules for ROI annotation are provided below [3].
-# Once the 2 ROIs are annotated, the application can be launched by using the menu option: 'Applications -> EGFRvIII Surrogate Index'.
-# A pop-up window appears displaying the results (within <1 minute).
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example command:
\verbatim
${CaPTk_InstallDir}/bin/EGFRvIIISurrogateIndex.exe -i C:/inputDSCImage.nii.gz -m C:/maskWithNearAndFarLabels.nii.gz
\endverbatim
- Additionally, you can process a batch directory (subdirectories are subjects, and each must have a perfusion and mask image). Pass "--help" to the CLI application for more details.
\verbatim
${CaPTk_InstallDir}/bin/EGFRvIIISurrogateIndex.exe -b C:/inputDirectory/ -o C:/output.csv
\endverbatim
<b>RULES FOR ROI ANNOTATION</b>:
- These two ROIs are used to sample tissue located on the two boundaries of the peritumoral edema/invasion area: near to and far from the tumor, respectively, and hence to evaluate the heterogeneity or spatial gradient of perfusion signals [1-3].
- The "near" ROI is initially defined in the T1-Gd volume, adjacent to the enhancing part of the tumor, described by hyperintense signal on T1-Gd. The T2-FLAIR volume is then used to revise this ROI in terms of all its voxels being within the peritumoral edematous tissue, described by hyperintense signal on the T2-FLAIR volume.
- The T2-FLAIR volume is also used to define the ROI at the farthest from the tumor but still within the edematous tissue, i.e., the enhancing FLAIR abnormality signal.
- These ROIs are described by lines drawn in multiple slices of each image (T1-Gd and T2-FLAIR) for each subject.
- Please note that during annotation:
- Both ROIs are always within the peritumoral edema/invasion area,
- None of the ROIs are in proximity to the ventricles,
- Both ROIs are representative of infiltration into white matter and not into gray matter,
- The distant ROI is at the farthest possible distance from the enhancing part of the tumor while still within edema, and
- No vessels are involved within any of the defined ROIs, as denoted in the T1-Gd volume.
--------
References:
-# S.Bakas, H.Akbari, J.Pisapia, M.Rozycki, D.M.O'Rourke, C.Davatzikos. "Identification of Imaging Signatures of the Epidermal Growth Factor Receptor Variant III (EGFRvIII) in Glioblastoma", Neuro Oncol. 17(Suppl 5):v154, 2015, DOI:10.1093/neuonc/nov225.05
-# S.Bakas, Z.A.Binder, H.Akbari, M.Martinez-Lage, M.Rozycki, J.J.D.Morrissette, N.Dahmane, D.M.O'Rourke, C.Davatzikos, "Highly-expressed wild-type EGFR and EGFRvIII mutant glioblastomas have similar MRI signature, consistent with deep peritumoral infiltration", Neuro Oncol. 18(Suppl 6):vi125-vi126, 2016, DOI:10.1093/neuonc/now212.523
-# S.Bakas, H.Akbari, J.Pisapia, M.Martinez-Lage, M.Rozycki, S.Rathore, N.Dahmane, D.M.O'Rourke, C.Davatzikos, "In vivo detection of EGFRvIII in glioblastoma via perfusion magnetic resonance imaging signature consistent with deep peritumoral infiltration: the phi-index", Clin Cancer Res. 23(16):4724-4734, 2017, DOI:10.1158/1078-0432.CCR-16-1871
--------
*/
/**
\page Glioblastoma_Recurrence Brain Cancer: Glioblastoma Infiltration Index (Recurrence)
This functionality has been removed from this CaPTk release, and we are actively working on a more optimized robust implementation that should enable generalization in multi-institutional data.
*/
/**
\page Glioblastoma_Survival Glioblastoma Overall Survival Prediction
This functionality has been removed from this CaPTk release, and we are actively working on an optimized robust implementation that would enable generalization in multi-institutional data.
*/
/**
\page Glioblastoma_EGFRvIII Brain Cancer: Glioblastoma EGFRvIII SVM Index
This application detects the EGFRvIII mutation status of <i>de novo</i> glioblastoma patients using baseline pre-operative multi-parametric MRI analysis [1].
<b>REQUIREMENTS:</b>
-# Co-registered Multimodal MRI: T1, T1-Gd, T2, T2-FLAIR, DTI-AX, DTI-FA, DTI-RAD, DTI-TR, DSC-PH, DSC-PSR, DSC-rCBV.
-# Segmentation labels of the tumor sub-regions: Non-enhancing tumor core (Label=1), Enhancing tumor core (Label=4), as well as the Edema (Label=2)
-# Segmentation labels in a common atlas space: Non-enhancing tumor core (Label=1), Enhancing tumor core (Label=4), Edema (Label=2), as well as the Ventricle (Label=7)
-# Clinical data: A CSV file containing the patients' demographics (note that the CSV file should be in ASCII format). It should have age (in the first column) and EGFRvIII status (in the second column, as binarized values such as 0 and 1) for training a new model, and age only for mutation detection of new patients (see the sketch after this list).
-# The data for each patient should be organized in the following directory structure. When running from the command line, filenames must include the words in <b>BOLD</b> to be identified as the respective required files.
- Subject_ID
-# <b>features.csv</b>
-# CONVENTIONAL
- my_<b>t1</b>_file.nii.gz
- my_<b>t2</b>_file.nii.gz
- my_<b>t1ce</b>_file.nii.gz
- my_<b>flair</b>_file.nii.gz
-# DTI
- my_<b>axial</b>_file.nii.gz
- my_<b>fractional</b>_file.nii.gz
- my_<b>radial</b>_file.nii.gz
- my_<b>trace</b>_file.nii.gz
-# PERFUSION
- my_<b>rcbv</b>_file.nii.gz
- my_<b>psr</b>_file.nii.gz
- my_<b>ph</b>_file.nii.gz
-# SEGMENTATION
- my_<b>label</b>_file.nii.gz (in same space as above images)
- my_<b>atlas</b>_file.nii.gz (in common atlas space)
-# The data of single or multiple patients should be organized in the above-mentioned structure and reside under the same folder, e.g.:
- Input_Directory
-# Subject_ID1
-# Subject_ID2
-# ...
-# Subject_IDn
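As a minimal sketch of the clinical data CSV described in the requirements above (all values are placeholders; age in the first column, binarized EGFRvIII status in the second column when training a new model, and age only when applying an existing model):
\verbatim
62,1
71,0
58,0
\endverbatim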
<b>USAGE:</b>
- Use Existing Model
-# "Model Directory". Choose the directory of a saved model.
-# "Test Subjects". Select the input directory that follows the folder structure described above.
-# "Output". Select the output directory where a .csv file with the mutation status for all patients will be saved, and click on 'Confirm'.The first and the second column of .csv will be subject's ID and distancce of the subject from the hyperplance of EGFRvIII model.
-# A pop-up window appears displaying the completion of result calculation. The window will also show the detected mutation status of the first subject in the Data_of_multiple_patients folder (runtime depends on the number of patients: ~2*patients minutes).
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example command:
\verbatim
${CaPTk_InstallDir}/bin/EGFRvIIIIndexPredictor.exe -t 1 -i C:/EGFRvIIIInputDirectory -m C:/EGFRvIIIInputModel -o C:/EGFRvIIIOutputDirectory
\endverbatim
--------
References:
-# H. Akbari, S. Bakas, J.M. Pisapia, M.P. Nasrallah, M. Rozycki, M. Martinez-Lage, J.J.D. Morrissette, N. Dahmane, D.M.O’Rourke, C. Davatzikos. "In vivo evaluation of EGFRvIII mutation in primary glioblastoma patients via complex multiparametric MRI signature", Neuro Oncol. 20(8):1068-1079, 2018
--------
*/
/**
\page Glioblastoma_Pseudoprogression Glioblastoma Pseudo-progression Index
This functionality has been removed from this CaPTk release, and we are actively working on an optimized robust implementation that should enable generalization in multi-institutional data. We expect this to be released in our next patch release, in Q4 2020.
*/
/**
\page Glioblastoma_Atlas Brain Cancer: Population Atlas
This application enables users to generate population atlases for patients of different tumor subgroups [1], to emphasize their heterogeneity.
<b>REQUIREMENTS:</b>
-# Segmentation labels of the tumor sub-regions: Non-enhancing tumor core (Label=1), Enhancing tumor core (Label=4)
-# Standard atlas: A standard atlas to map all the patients in a unified space. Atlas should have region numbers in the ascending order, such as 1,2,3,...,n.
-# Batch File: A CSV file containing the patients' IDs, group (atlas) number, and path to the actual segmented image. Atlas numbers should be in ascending order, such as 1,2,3,...,n. Segmentation images should be in the same space as the standard atlas. (Note that the CSV file should be in ASCII format, and should have the following column headers in any sequence: PATIENT_IDS, IMAGES, ATLAS_LABELS.)
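For illustration, a minimal batch file using the column headers above might look as follows (subject IDs and image paths are placeholders):
\verbatim
PATIENT_IDS,IMAGES,ATLAS_LABELS
Subject_01,C:/data/Subject_01_segmentation.nii.gz,1
Subject_02,C:/data/Subject_02_segmentation.nii.gz,1
Subject_03,C:/data/Subject_03_segmentation.nii.gz,2
\endverbatim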
<b>USAGE:</b>
-# Launch the Population Atlases application using the 'Applications -> Population Atlas' menu option.
-# Specify the Input batch file (e.g., Data_of_multiple_patients) in .csv format, Atlas file with delineated region-of-interests in .nii.gz format, and Output directories.
-# Click on 'Run PopulationAtlas' and the atlases will be displayed in the visualization window.
-# The atlases (.nii.gz files) and location features i.e., percentage distribution of the tumors in different sub-regions of the standard atlas template (.csv file) will be saved in the Output directory.
-# The output atlases can be overlaid on the Jacobs atlas (jakob_stripped_with_cere_lps_256256128.nii.gz) given in the corresponding sample data folder.
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example command:
\verbatim
${CaPTk_InstallDir}/bin/PopulationAtlases.exe -i C:/LabelsFile -a C:/AtlasFile -o C:/populationAtlasesOutput
\endverbatim
--------
References:
-# M. Bilello, H. Akbari, X. Da, J.M.Pisapia, S.Mohan, R.L.Wolf, D.M.O'Rourke, M.Martinez-Lage, C.Davatzikos. "Population-based MRI atlases of spatial distribution are specific to patient and tumor characteristics in glioblastoma", Neuroimage Clin. 12:34-40, 2016, DOI:10.1016/j.nicl.2016.03.007
--------
*/
/**
\page Glioblastoma_Confetti Brain Cancer: Confetti
This is a method for automated extraction of white matter tracts of interest in a consistent and comparable manner over a large group of subjects without drawing the inclusion and exclusion ROIs, facilitating an easy correspondence between different subjects, as well as providing a representation that is robust to edema, mass effect, and tract infiltration [1-3]. Confetti includes three main steps:
-# Connectivity signature generation for fibers
-# Clustering of fibers using a mixture of multinomials (MNs) clustering method and Expectation-Maximization (EM) optimization framework
-# Extraction of predefined white matter tracts.
<b>REQUIREMENTS:</b>
-# Fiber set (Streamline) to be clustered: The fiber set can be generated using any tractography model, but the file should be saved in .Bfloat format (i.e., the fiber format of the Camino package). Different converters can be used to convert .trk to .Bfloat and vice-versa.
-# Track Density Images (TDI): When using GUI, it needs to be generated in the manner explained below; this constraint is not present when using Confetti via the command line.
-# Parcellation of the brain into 87 Desikan/Freesurfer gray matter (GM) regions [4]
<b>Generation of TDI Images with GUI:</b>
-# Freesurfer [4] is used with the Desikan atlas [5] to define 87 gray matter ROIs in the user diffusion space.
-# The region IDs of the ROIs, as used by Freesurfer, are provided in the example file "{CaPTk_Sample_Data}/Confetti/input/freesurfer_ROIs.csv". (Note that the CSV file should be in ASCII format)
-# TDIs must be generated using the probtrackx utility of FSL package [6] with its default parameters and 5000 seeds per voxel.
-# Each TDI image is a whole brain voxel-map, with each voxel having the number of fibers passing through this voxel and reaching to one of the 87 gray matter ROIs defined by Freesurfer.
-# In total, you should have 87 TDI, each corresponding to one ROI.
<b>USAGE:</b>
-# Open Confetti UI using the 'Applications -> Confetti' menu option.
-# Load the required images using "Streamline File" and "TDI Directory".
-# Specify the output directory and click on 'Run Confetti'.
-# Visually review the output tracts by double-clicking the respective fields in the populated list view.
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example commands:
-# Generation of connectivity signatures of the fibers that will be clustered into bundles:
\verbatim
${CaPTk_InstallDir}/bin/Confetti signature -i tdi_paths_freesurfer_87ROIs.csv -f fibers.Bfloat -o signatures.csv
\endverbatim
-# Clustering of the generated fibers into bundles:
\verbatim
${CaPTk_InstallDir}/bin/Confetti cluster -s signatures.csv -k 200 -o clusterIDs.csv
\endverbatim
-# Identification of specific tracts (requires an annotated example):
\verbatim
${CaPTk_InstallDir}/bin/Confetti extract -t template/ -f fibers.Bfloat -c clusterIDs.csv -o tracts_
\endverbatim
--------
References:
-# B.Tunc, M.Ingalhalikar, W.A.Parker, J.Lecoeur, N.Singh, R.L.Wolf, L.Macyszyn, S.Brem, R.Verma, "Individualized Map of White Matter Pathways: Connectivity-based Paradigm for Neurosurgical Planning", Neurosurgery. 79(4):568-77, 2016, DOI:10.1227/NEU.0000000000001183.
-# B.Tunc, W.A.Parker, M.Ingalhalikar, R.Verma, "Automated tract extraction via atlas based Adaptive Clustering", NeuroImage. 102(2):596-607, 2014, DOI:10.1016/j.neuroimage.2014.08.021
-# B.Tunc, A.R.Smith, D.Wasserman, X.Pennec, W.M.Wells, R.Verma, K.M.Pohl, "Multinomial Probabilistic Fiber Representation for Connectivity Driven Clustering", Inf Process Med Imaging. 23:730-41, 2013.
-# B.Fischl, M.I.Sereno, A.M.Dale, "Cortical surface-based analysis. II: Inflation, flattening, and a surface-based coordinate system", NeuroImage. 9:195-207, 1999, DOI:10.1006/nimg.1998.0396
-# R.S.Desikan, F.Segonne, B.Fischl, B.Quinn, B.Dickerson, D.Blacker, R.Buckner, A.Dale, R.Maguire, B.Hyman, M.Albert, R.Killiany, "An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest", NeuroImage. 31(3):968-80, 2006, DOI:10.1016/j.neuroimage.2006.01.021
-# M.Jenkinson, C.F.Beckmann, T.E.J.Behrens, M.W.Woolrich, S.M.Smith, "FSL", Neuroimage. 62(2):782-790, 2012, DOI:10.1016/j.neuroimage.2011.09.015
--------
*/
/**
\page Glioblastoma_Directionality Miscellaneous: Directionality Estimator
This application estimates the volumetric changes and their directionality for a given ROI across two timepoints [1] and also returns the projections of the boundary point with the maximum distance from a seedpoint (with label <b>TU</b>) in each of the 3 2D visualized planes.
<b> REQUIREMENTS:</b>
-# The image of timepoint 1 and a tissue seedpoint with label <b>TU</b> within the ROI at timepoint 1.
-# [OPTIONAL] If the second time point image is not perfectly aligned, a second tissue seedpoint with label <b>NCR</b> can be initialized and the algorithm will take that translation into consideration during computation.
-# Two ROI files, one for each timepoint (note that the populated voxels of the ROIs should be of Label 1).
<b> USAGE:</b>
-# Load the image of timepoint 1 in CaPTk and initialize the (or load an already initialized) seedpoint with label <b>TU</b> defining the point within the ROI from where you want to estimate the expansion.
-# [OPTIONAL] Load the image of timepoint 2 in CaPTk and initialize the (or load an already initialized) seedpoint with label <b>NCR</b> defining the point within the ROI from where you want to estimate the expansion.
-# Launch the Directionality Estimator application using the 'Applications -> Directionality Estimator' menu option.
-# Specify the Input ROI files for the 2 timepoints and the Output directory where the output masks and text files will be saved.
-# Click on 'Confirm" and a pop-up window will be displayed showing the total volumetric change, as well as the change per 3D octant.
-# A parametric map is automatically loaded as an image in the visualization panels of CaPTk dividing the ROI of timepoint 1 in octants and assigning a value in the range [0,1] on each octant. Smaller and larger values represent the octants with smaller and larger relative volumetric change.
-# A multi-label map is also automatically loaded as an ROI in the visualization panels of CaPTk depicting with i) label 1 (red) and label 3 (yellow) the intersection of the timepoint 1 and timepoint 2 ROIs, ii) label 3 (yellow) the octant of maximum expansion, and iii) label 2 (green) the voxels of expansion in timepoint 2, i.e. the voxels that did not exist in the timepoint 1 ROI but only in the ROI of timepoint 2. Note that the visualization of the multi-label map can be controlled from the "Drawing" panel.
-# The tissue point table gets updated with 3 points marked "CSF" (which correspond to the projections of the max-distance point along the different planes) and 1 point marked "RTN" (which is the actual 3D point of max distance from the initialized seed point).
-# The results as well as the actual distances are automatically saved in a text file in the Output folder together with the multi-label ROI images.
-# The user can save both the actual and the projection points of max distance in a location of their choosing.
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example command:
\verbatim
${CaPTk_InstallDir}/bin/DirectionalityEstimate.exe -l C:/labelMap.nii.gz -o C:/output.txt -i 40,25,60
\endverbatim
<b> OUTPUT:</b>
The following files are saved in the output folder:
- <b>directionalityOutput.txt</b>, which includes the total volumetric change and the change per 3D octant (as shown in the output window), as well as the coordinates of the point of the largest distance (i.e., RTN) from the TU seedpoint together with its projections in the 3 2D planes (i.e., CSF).
- <b>roi_DE_octantImage.nii.gz</b>, which describes the partitioned version of timepoint_2_ROI in octants with the corresponding numbering of octants accompanying the numbering included in the output text file.
- <b>roi_DE_visualizationRatio.nii.gz</b>, which describes the ratio of the timepoint_2_ROI over timepoint_1_ROI of the volumetric differences per octant.
- <b>roiDE_visualizationProbability.nii.gz</b>, which describes the parametric map dividing the ROI of timepoint 1 in octants and assigning a value in the range [0,1] on each octant. Smaller and larger values represent the octants with smaller and larger relative volumetric change. This is the same result as roi_DE_visualizationRatio.nii.gz but scaled between 0 and 1.
- <b>roi_DE_visualizationIncrease.nii.gz</b>, which describes the multi-label map depicting the intersection of the timepoint 1 and timepoint 2 ROIs using i) label 1 (red) and label 3 (yellow), ii) the octant of maximum expansion based on distance and not based on volumetric difference using label 4 (blue), and iii) the voxels of expansion in timepoint 2, i.e. the voxels that did not exist in the timepoint 1 ROI but only in the ROI of timepoint 2 using label 2 (green). Note that the visualization of the multi-label map can be controlled from the "Drawing" panel.
--------
References:
-# M.E.Schweitzer, M.A.Stavarache, N.Petersen, S.Bakas, A.J.Tsiouris, C.Davatzikos, M.G.Kaplitt, M.M.Souweidane, "Modulation of Convection Enhanced Delivery (CED) distribution using Focused Ultrasound (FUS)", Neuro Oncol. 19(Suppl 6):vi272, 2017, DOI:10.1093/neuonc/nox168.1118
--------
*/
/**
\page BreastCancer_LIBRA Breast Cancer: Breast Density Estimation (LIBRA)
The Laboratory for Individualized Breast Radiodensity Assessment (LIBRA) is a software application for fully-automated breast density evaluation in full-field digital mammography (FFDM) [1]. Besides breast density estimates, LIBRA also provides breast segmentation masks which indicate the dense (i.e., fibroglandular) versus fatty tissue regions of the breast. LIBRA is available in single and batch mode.
<b>REQUIREMENTS:</b>
- Raw (i.e., "FOR PROCESSING") or vendor post-processed (i.e., "FOR PRESENTATION") FFDM images.
- FFDM vendors currently supported: Hologic and GE Healthcare.
- For single mode: Each image stored in a separate folder.
- For batch mode: All images stored in a single folder.
<b>USAGE:</b>
- Single mode:
-# Load the FFDM image using the 'File -> Load -> Images' menu option.
-# Launch LIBRA using the 'Applications -> LIBRA_SingleImage' menu option.
-# LIBRA is executed and the breast segmentation mask is automatically loaded back to CaPTk.
-# LIBRA outputs (described in the Outputs section below) are automatically saved in a temp folder under the CaPTk directory.
- Batch mode:
-# Launch LIBRA using the 'Applications -> LIBRA_BatchMode' menu option.
-# The user is prompted to select the folder which contains the images for analysis.
-# The user is then prompted to select an output folder where the breast density estimates and segmentation images will be stored. Note: If the folder already exists and contains breast density estimates (e.g., from a previous run of LIBRA), it will append the new results to the results file.
-# Lastly, the user will be asked whether they wish to store a processing log file and intermediate graphics in addition to the breast segmentation masks; these may be of interest for publication or visualization purposes and are further described in the Outputs section below.
-# LIBRA will then begin processing all the FFDM images; the progress of the software can be monitored via the related command-prompt window that opens when LIBRA starts running. When analysis is complete, a prompt appears asking the user if they want to open the results folder and if they want to perform additional analyses.
<b>OUTPUT:</b>
- The following outputs are always generated:
- Masks_<Image-Analyzed>.mat: A MATLAB datafile containing a structure array that stores the breast area (res.BreastArea), dense area (res.DenseArea), and area percent density estimates (res.PD_SVM) for the FFDM analyzed (res.dcm_fname). This structure array also stores the binary masks of the breast (res.BreastMask) and dense tissue segmentations (res.DenseMask), which may be useful for further processing and analysis.
- Density.csv: A comma-separated file (openable by spreadsheet programs such as Excel) that stores the breast area, dense area, and percent density estimates for each FFDM image analyzed, listed by file name. The Manufacturer, Laterality and View-Position of the mammogram are also provided for reference. (Note that the CSV file should be in ASCII format.)
- <Image-Analyzed-Filename>_density_segmentation.jpg: In the Result_Images sub-directory, the breast and density segmentation results are provided for each image analyzed; the breast is outlined in red and the density segmentation in green. Note that for visualization purposes, the image is window-leveled to between the 5th and 95th percentile of the intensity values of the pixels within the breast region.
- In single mode the following outputs are generated by default, while in batch mode they are only generated if the user specifies to save intermediate files:
- LIBRA-logfile_<time-stamp>.txt: A log of the program's output during the session. This text file is time-stamped at the start of the LIBRA session so that multiple sessions writing to the same output folder each have their own unique log of events. The <time-stamp> takes the form of <Month>-<DD>-<YYYY>_HH-MM-SS, in 24-hour format.
- <Image-Analyzed-Filename>_Windowed_Original.jpg: In the Result_Images sub-directory, the analyzed image, window-leveled to between the 5th and 95th percentile of the intensity values of the pixels within the breast region, is also provided without the breast density segmentation overlay for comparison purposes.
- <Image-Analyzed>_density_imagesc.jpg: In the Result_Images sub-directory, the image clusters (grouped by color) generated by the Fuzzy C-Means stage of the LIBRA algorithm are provided.
- <Image-Analyzed>_intensity_histogram.jpg: In the Result_Images sub-directory, the breast intensity histogram (z-scored) and FCM-cluster centers are plotted in this image.
- For more details, please see the LIBRA software manual: https://upenn.box.com/v/manual-libra104 (a sketch for reading the outputs described above programmatically is given after the example commands below).
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example commands:
\verbatim
${CaPTk_InstallDir}/bin/libra.bat --input C:/inputDICOMDir --output C:/outputDir # windows
${CaPTk_InstallDir}/bin/libra --input C:/inputDICOMDir --output C:/outputDir # linux
\endverbatim
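The tabular and MATLAB outputs described above can then be read in downstream analysis scripts. The following is a minimal sketch only (assuming Python with the pandas and scipy packages; all paths are placeholders, the Masks file name is illustrative, and the exact CSV column names and MATLAB struct layout may differ):
\verbatim
# Minimal sketch (assumptions: Python with pandas/scipy installed; placeholder paths;
# file names follow the OUTPUT section above; exact field/column names may differ).
import pandas as pd
from scipy.io import loadmat

# Breast area, dense area and percent density estimates for all analyzed images
density = pd.read_csv("outputDir/Density.csv")
print(density.head())

# Segmentation masks and density measures stored in the per-image MATLAB datafile
mat = loadmat("outputDir/Masks_example_image.mat")
res = mat["res"]
breast_mask = res["BreastMask"][0, 0]   # binary breast segmentation
dense_mask = res["DenseMask"][0, 0]     # binary dense-tissue segmentation
print("Percent density (PD_SVM):", res["PD_SVM"][0, 0].item())
\endverbatim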
--------
References:
-# B.M.Keller, D.L.Nathan, Y.Wang, Y.Zheng, J.C.Gee, E.F.Conant, D.Kontos, "Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation", Med Phys. 39(8):4903-4917, 2012, DOI:10.1118/1.4736530
--------
*/
/**
\page BreastCancer_texture Breast Cancer: Texture Feature Extraction
This application extracts radiomic features related to breast cancer risk, as described in [1]. It builds upon [LIBRA](BreastCancer_LIBRA.html) for segmentation of the breast and uses the lattice-based mode of the <a href="ht_FeatureExtraction.html">Feature Extraction</a> backend to perform the radiomic feature calculations.
<b>REQUIREMENTS:</b>
- Raw (i.e., "FOR PROCESSING") or vendor post-processed (i.e., "FOR PRESENTATION") FFDM images.
- FFDM vendors currently supported: Hologic and GE Healthcare.
- Each image stored in a separate folder (required only in GUI).
<b>USAGE:</b>
-# Load the FFDM image using the 'File -> Load -> Images' menu option.
-# Launch <b>Breast Texture Feature Extraction</b> using the 'Applications -> [Breast Cancer] Texture Pipeline' menu option.
-# Specify the output directory.
<b>OUTPUT:</b>
- output.csv: CSV file with the extracted radiomic features.
- featureMap/: Folder of feature maps, one for each radiomic feature, reflecting the spatial feature distributions within the breast.
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example command:
\verbatim
${CaPTk_InstallDir}/bin/BreastTexturePipeline.exe -i C:/input/image.dcm -o C:/outputDir -d 1
\endverbatim
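For processing many images, the same executable can be wrapped in a small batch script. The sketch below is only illustrative (assuming Python 3; the executable location, input folder, and output folder are placeholders, and the flags are those shown in the example command above):
\verbatim
# Minimal batch-processing sketch (assumptions: Python 3; placeholder paths;
# executable and flags as in the example command above).
import subprocess
from pathlib import Path

captk_bin = Path("C:/CaPTk/bin/BreastTexturePipeline.exe")  # adjust to your installation
input_dir = Path("C:/input")
output_root = Path("C:/outputDir")

for dicom in sorted(input_dir.glob("*.dcm")):
    out_dir = output_root / dicom.stem
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run([str(captk_bin), "-i", str(dicom), "-o", str(out_dir), "-d", "1"], check=True)
\endverbatim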
--------
References:
-# Y.Zheng, B.M.Keller, S.Ray, Y.Wang, E.F.Conant, J.C.Gee, D.Kontos, "Parenchymal texture analysis in digital mammography: A fully automated pipeline for breast cancer risk assessment", Medical Physics, 2015.
--------
*/
/**
\page BreastCancer_breastSegmentation Breast Cancer: Breast Segmentation
This application uses LIBRA [1] to extract the breast region in the loaded image and is only available via the GUI (the [Breast Texture Feature Extraction](BreastCancer_texture.html) application can be used to generate a mask and perform radiomic analysis via the command line).
<b>REQUIREMENTS:</b>
- Raw (i.e., "FOR PROCESSING") or vendor post-processed (i.e., "FOR PRESENTATION") FFDM images.
- FFDM vendors currently supported: Hologic and GE Healthcare.
- Each image stored in a separate folder.
<b>USAGE:</b>
-# Load the FFDM image using the 'File -> Load -> Images' menu option.
-# Launch LIBRA using the 'Applications -> Breast Segmentation' menu option.
-# Output, i.e., breast mask, is loaded automatically.
--------
References:
-# B.M.Keller, D.L.Nathan, Y.Wang, Y.Zheng, J.C.Gee, E.F.Conant, D.Kontos, "Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation", Med Phys. 39(8):4903-4917, 2012, DOI:10.1118/1.4736530
--------
*/
/**
\page LungCancer_SBRT Lung Cancer: Radiomics Analysis of Lung Cancer
This application provides a fully automatic segmentation of lung nodules and prediction of survival and nodal failure risks as a three-step workflow [1].
<b>REQUIREMENTS:</b>
-# CT image
-# PET image (co-registered to the CT image)
-# Additional requirements for each of the three steps (Lung Field Segmentation, Lung Nodule Segmentation, Prognostic Modeling), as described in Usage.
<b>USAGE:</b>
<b>Step 1: Lung Field Segmentation</b>
-# Load the required co-registered CT and PET images.
-# Load the optional mask image (for example, body mask) within which the lung field will be extracted.
-# Launch the segmentation step using the 'Applications -> <b>Lung Field Segmentation</b>' menu.
-# The lung field is automatically generated and displayed.
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example command:
\verbatim
${CaPTk_InstallDir}/bin/SBRT_LungField.exe -p C:/PET.nii.gz -c C:/CT.nii.gz -o C:/outputBasename -m C:/foregroundMask.nii.gz # optional mask
\endverbatim
It will generate the lung field mask image named C:/outputBasename_lf.nii.gz. The mask image will contain 2 labels: label 3 for the foreground and label 2 for the lung field.
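Since the lung field carries label 2 in this mask, a binary lung field mask for use in the next step can be derived with a few lines of code. The following is a minimal sketch only (assuming Python with the nibabel and numpy packages; the input file name is the one from the example above, and the output file name is a placeholder):
\verbatim
# Minimal sketch (assumptions: Python with nibabel/numpy installed; labels as stated
# above: 3 = foreground, 2 = lung field; output file name is a placeholder).
import nibabel as nib
import numpy as np

lf = nib.load("C:/outputBasename_lf.nii.gz")
data = np.asarray(lf.get_fdata(), dtype=int)

# Keep only the lung field, preserving its label value of 2 (the default expected in Step 2)
lung_field = np.where(data == 2, 2, 0).astype(np.uint8)
nib.save(nib.Nifti1Image(lung_field, lf.affine), "C:/lungFieldMask.nii.gz")
\endverbatim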
<b>Step 2: Lung Nodule Segmentation</b>
-# Load the required co-registered CT and PET images.
-# Load the required mask image (for example, a lung field mask, or a region within the lung field containing the nodule, with a default label value of 2) within which the lung nodule will be extracted.
-# Load the optional seed image containing foreground seeds (for the nodule) and background seeds for nodule segmentation. The foreground seeds should have label 2, the background seeds label 1, and all other voxels label 0.
-# Specify the label value for the foreground seeds (default value of 2).
-# Launch the segmentation step using 'Applications -> <b>Lung Nodule Segmentation</b>' menu.
-# The nodule is automatically generated and displayed.
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example commands:
- The following command will generate two output images (seed image for nodule segmentation and nodule mask image) with names `C:/outputBasename_seeds.nii.gz` and `C:/outputBasename_segmentation.nii.gz`:
\verbatim
${CaPTk_InstallDir}/bin/SBRT_Nodule.exe -p C:/PET.nii.gz -c C:/CT.nii.gz -m C:/mask.nii.gz -o C:/outputBasename
\endverbatim
- The following command will generate the nodule mask image with name `C:/outputBasename_segmentation.nii.gz`:
\verbatim
${CaPTk_InstallDir}/bin/SBRT_Nodule.exe -p C:/PET.nii.gz -c C:/CT.nii.gz -m C:/mask.nii.gz -o C:/outputBasename -s C:/seedImage.nii.gz
\endverbatim
"Label_value" indicates the label of lung field in the input mask image, default value is 2.
<b>Step 3: Prognostic Modeling</b>
-# Load the required PET image.
-# Load the required nodule mask generated from step 2.
-# Supply the model directory. Note: A model trained on PENN data can be downloaded from ftp://www.nitrc.org/home/groups/captk/downloads/models/SBRT.zip
-# Launch the modeling step using 'Applications -> <b>Prognostic Modeling</b>' menu.
-# The predicted risks for survival and nodal failure are automatically calculated and displayed. [Note: The prediction models were obtained based on PET images with a spatial resolution of 4mm x 4mm x 4mm.]
- This application is also available as a stand-alone CLI for data analysts to build pipelines around, using the following example commands:
- The following command will calculate and print the predicted risks regarding survival and nodal failure:
\verbatim
${CaPTk_InstallDir}/bin/SBRT_Analysis.exe -i C:/PET.nii.gz -m C:/mask.nii.gz -l 1 -D C:/Model
\endverbatim
- The following command will calculate and print the predicted risks regarding survival and nodal failure; it will also save the radiomic features used for the prediction into the specified file:
\verbatim
${CaPTk_InstallDir}/bin/SBRT_Analysis.exe -i C:/PET.nii.gz -m C:/mask.nii.gz -l 1 -o C:/outputFile -D C:/Model
\endverbatim
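The three CLI steps can also be chained into a single script when building a pipeline. The following is a minimal sketch only (assuming Python 3; all paths, the model directory, and the output file are placeholders, and the flags are those shown in the example commands above):
\verbatim
# Minimal pipeline sketch chaining the three steps (assumptions: Python 3; placeholder
# paths/model directory; executables and flags as in the example commands above).
import subprocess

bin_dir = "C:/CaPTk/bin"                      # adjust to your installation
pet, ct = "C:/PET.nii.gz", "C:/CT.nii.gz"
base = "C:/outputBasename"

# Step 1: lung field segmentation -> <base>_lf.nii.gz (label 2 = lung field)
subprocess.run([f"{bin_dir}/SBRT_LungField.exe", "-p", pet, "-c", ct, "-o", base], check=True)

# Step 2: nodule segmentation within the lung field mask -> <base>_segmentation.nii.gz
subprocess.run([f"{bin_dir}/SBRT_Nodule.exe", "-p", pet, "-c", ct,
                "-m", f"{base}_lf.nii.gz", "-o", base], check=True)

# Step 3: prognostic modeling on the PET image and the nodule mask
subprocess.run([f"{bin_dir}/SBRT_Analysis.exe", "-i", pet, "-m", f"{base}_segmentation.nii.gz",
                "-l", "1", "-o", "C:/outputFile", "-D", "C:/Model"], check=True)
\endverbatim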
--------
References:
-# H.Li, M.Galperin-Aizenberg, D.Pryma, C.Simone, Y.Fan, "Predicting treatment response and survival of early-stage non-small cell lung cancer patients treated with stereotactic body radiation therapy using unsupervised two-way clustering of radiomic features", Int. Workshop on Pulmonary Imaging, 2017.
--------
*/
/**
\page Diffusion_Derivatives Miscellaneous: Diffusion Derivatives
This application extracts various measurements from a Diffusion Weighted MRI scan. The exact measurements comprise i) fractional anisotropy (FA), ii) radial diffusivity (RAD), iii) axial diffusivity (AX), iv) apparent diffusion coefficient (ADC), and v) averaged b0 image (B0).
This application is also available from the web on the [CBICA Image Processing Portal](https://ipp.cbica.upenn.edu/). Please see the experiment on the portal for details.
<b> REQUIREMENTS:</b>
-# A single DW-MRI image
-# Its accompanying BVec and BVal files [Note: these files should be calculated by 'dcm2nii' or any other appropriate software.]
-# A mask for which the measurements will be extracted
<b> USAGE:</b>
-# Launch the application from "Applications" -> "Diffusion Derivatives".