Update README.md
nsaharkhiz-fda authored Aug 7, 2024
1 parent c97a6de commit e9e4bdd
Showing 1 changed file with 13 additions and 8 deletions.
21 changes: 13 additions & 8 deletions code/processing/README.md
@@ -33,6 +33,16 @@ model.
```
wget https://isic-challenge-data.s3.amazonaws.com/2018/ISIC2018_Task1_Training_GroundTruth.zip
unzip ISIC2018_Task1_Training_GroundTruth.zip
```
- You need `.txt` files that list the paths for the train, validation, and test splits of the image dataset. You can either download these text files from Hugging Face or generate them yourself. These files must be placed under `../../data/dataset_splits/`.
- To use the pre-generated dataset split text files (e.g. `all_tones_real_ISIC_1.0_add_synth_0.2`):
  - Download `all_tones_real_ISIC_1.0_add_synth_0.2` from Hugging Face and place it within the GitHub repo:
```
cd $CWD/code/processing
NAME="all_tones_real_ISIC_1.0_add_synth_0.2"
python download_split.py --name $NAME --saveDir '../../'
```
- To generate the dataset split text files, follow the instructions provided under “Split the Datasets into Training, Validation, and Test Sets.”
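Whichever route you take, the training step below expects the split files to be present. A minimal sanity check could look like the following sketch; the exact filenames (`train.txt`, `val.txt`, `test.txt`) are an assumption here — substitute the names produced by your download or generation step:

```python
from pathlib import Path

# Assumed filenames for illustration; adjust to the names your
# download or data_split script actually produces.
EXPECTED = ["train.txt", "val.txt", "test.txt"]

def missing_split_files(split_dir, expected=EXPECTED):
    """Return the expected split files that are not present in split_dir."""
    d = Path(split_dir)
    return [name for name in expected if not (d / name).is_file()]

if __name__ == "__main__":
    missing = missing_split_files("../../data/dataset_splits/")
    if missing:
        print("Missing split files:", missing)
```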
## Training/Testing Segmentation Model
The code below is heavily based on the [DermoSegDiff](https://github.com/xmindflow/DermoSegDiff) implementation of [1].
@@ -73,12 +83,7 @@ jupyter notebook visualization.ipynb
### Training Segmentation Model (DermoSegDiff):
- Make sure the dataset split text files are located in `../../data/dataset_splits/`, as outlined in the "Setup" section.
- Run training:
@@ -104,14 +109,14 @@ jupyter notebook visualization.ipynb
```
cd $CWD/code/processing
python -u data_split_ISIC.py --real_data_ratio 1.0 --synth_data_ratio 0.2 --saveDir '../../data/dataset_splits/' --add_synth
```
- Run `data_split_HAM.py` to generate your own synthetic/HAM10k data splits:
```
cd $CWD/code/processing
python -u data_split_HAM.py --real_data_ratio 1.0 --synth_data_ratio 0.2 --saveDir '../../data/dataset_splits/' --add_synth
```
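To make the ratio flags concrete, here is a hedged sketch of the kind of sampling that `--real_data_ratio`, `--synth_data_ratio`, and `--add_synth` suggest. This is an illustration only, not the actual `data_split_ISIC.py`/`data_split_HAM.py` logic:

```python
import random

def build_split(real_paths, synth_paths, real_data_ratio, synth_data_ratio,
                add_synth=False, seed=0):
    """Sketch: keep a fraction of the real images and, when add_synth is
    set, append synthetic images sized relative to the full real pool."""
    rng = random.Random(seed)
    split = rng.sample(real_paths, int(len(real_paths) * real_data_ratio))
    if add_synth:
        n_synth = min(int(len(real_paths) * synth_data_ratio), len(synth_paths))
        split += rng.sample(synth_paths, n_synth)
    return split

# e.g. with 1000 real images, real_data_ratio=1.0, synth_data_ratio=0.2 and
# add_synth, the split would hold 1000 real + 200 synthetic paths
# (assuming at least 200 synthetic images are available).
```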
These scripts accept the following arguments:
