From 7585b2b1761acd33525a60141a1e9fb4c2cba2b3 Mon Sep 17 00:00:00 2001
From: Ian-Erickson
Date: Mon, 16 Oct 2023 13:18:27 -0400
Subject: [PATCH] Updating Readme to use proper Lseg.png

---
 model_zoo/cardiac_mri_derived_left_ventricular_mass/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/model_zoo/cardiac_mri_derived_left_ventricular_mass/README.md b/model_zoo/cardiac_mri_derived_left_ventricular_mass/README.md
index 1f3c750ba..50aa81aef 100644
--- a/model_zoo/cardiac_mri_derived_left_ventricular_mass/README.md
+++ b/model_zoo/cardiac_mri_derived_left_ventricular_mass/README.md
@@ -12,7 +12,7 @@ Here batch size, N, was 4 random samples from the training set of 3178 after exc
 ML4Hseg, is a 3D semantic segmenter. To facilitate model development in the absence of hand-labeled segmentations, the models were trained with the InlineVF contours to minimize Lseg; the per-pixel cross-entropy between the label and the model’s prediction.
 
-![Loss of ML4Hseg](LSeg.png)
+![Loss of ML4Hseg](Lseg.png)
 
 Here the batch size, N, was 4 from the total set of 33,071. Height, H, and width, W, are 256 voxels and there was a maximum of 13 Z slices along the short axis. There is a channel for each of the 3 labels, which were one-hot encoded in the training data, InlineVF (IVF), and probabilistic values from the softmax layer of ML4Hseg. Segmentation architectures used U-Net-style long-range connections between early convolutional layers and deeper layers. Since not all CMR images used the same pixel dimensions, models were built to incorporate pixel size values with their fully connected layers before making predictions.
 
 # Results
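
The README context in this hunk describes Lseg as the per-pixel cross-entropy between one-hot labels and ML4Hseg's softmax output over tensors with N=4 samples, H=W=256, up to 13 Z slices, and 3 label channels. Below is a minimal NumPy sketch of that loss under those shape assumptions; the function name and the random example tensors are illustrative only and are not part of the patched repository.

```python
import numpy as np

def per_pixel_cross_entropy(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-7) -> float:
    """Mean over all voxels of -sum_c y_true[..., c] * log(y_pred[..., c])."""
    y_pred = np.clip(y_pred, eps, 1.0)                      # guard against log(0)
    voxel_loss = -np.sum(y_true * np.log(y_pred), axis=-1)  # collapse the channel axis -> (N, H, W, Z)
    return float(voxel_loss.mean())

# Illustrative tensors only; the README describes N=4, H=W=256, up to 13 Z slices,
# and 3 one-hot label channels. Smaller H and W are used here to keep the demo light.
rng = np.random.default_rng(0)
labels = np.eye(3)[rng.integers(0, 3, size=(4, 64, 64, 13))]  # one-hot ground truth
logits = rng.random((4, 64, 64, 13, 3))
probs = logits / logits.sum(axis=-1, keepdims=True)           # stand-in for a softmax output
print(per_pixel_cross_entropy(labels, probs))
```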