This repository contains code used in the paper:
"S-SYNTH: Knowledge-Based, Synthetic Generation of Skin Images"
Andrea Kim, Niloufar Saharkhiz, Elena Sizikova, Miguel Lago, Berkman Sahiner, Jana Delfino, Aldo Badano
International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2024
- Paper: https://arxiv.org/abs/2408.00191
- Code: https://github.com/DIDSR/ssynth-release
- Data: https://huggingface.co/datasets/didsr/ssynth_data
- Demo: https://didsr.github.io/ssynth-release/
The contributions of our work are:
- We describe S-SYNTH, an open-source, flexible framework for the creation of highly detailed 3D skin models and digitally rendered synthetic images of diverse human skin tones, with full control of the underlying parameters and the image formation process.
- We systematically evaluate S-SYNTH synthetic images for training and testing applications. Specifically, we show that S-SYNTH synthetic images improve segmentation performance when only a limited set of real images is available for training, and that comparative trends between S-SYNTH synthetic images and real-patient examples (according to skin color and lesion size) are similar.
- Framework
- Code
- Data
- Citation
- Related Links
- Disclaimer
We present S-SYNTH, the first knowledge-based, adaptable open-source skin simulation framework to rapidly generate synthetic skin models and images using digital rendering of an anatomically inspired multi-layer, multi-component skin and growing lesion model. The skin model allows for controlled variation in skin appearance, such as skin color, presence of hair, lesion size, lesion color, and blood fraction, among other parameters. We use this framework to study the effect of possible variations on the development and evaluation of AI models for skin lesion segmentation, and show that results obtained using synthetic data follow comparative trends similar to those of real dermatologic images, while mitigating biases and limitations of existing datasets, including small dataset size, mislabeled examples, and lack of diversity.
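As a rough illustration of the controlled-variation idea, the sketch below enumerates a grid of skin-model configurations over the kinds of parameters mentioned above. The parameter names and value ranges are illustrative assumptions, not the actual S-SYNTH configuration keys.

```python
# Enumerate a hypothetical grid of skin-model variations.
# Parameter names and ranges are placeholders for illustration only.
from itertools import product

melanin_fractions = [0.01, 0.1, 0.3]   # drives skin color
blood_fractions = [0.002, 0.02]        # dermal blood fraction
lesion_sizes_mm = [2.0, 5.0, 10.0]     # lesion diameter
hair_present = [True, False]

model_variants = [
    {"melanin": m, "blood": b, "lesion_size_mm": s, "hair": h}
    for m, b, s, h in product(melanin_fractions, blood_fractions,
                              lesion_sizes_mm, hair_present)
]
print(f"{len(model_variants)} skin-model configurations to render")
```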
S-SYNTH can be used to generate synthetic skin images with annotations (including segmentation masks) under controlled variations of the skin and lesion parameters described above.
Usage: S-SYNTH relies on Houdini for creating the skin layers and on Mitsuba for rendering; a minimal rendering sketch is shown below.
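The following is a minimal sketch of rendering a scene with the Mitsuba 3 Python package. The scene file name and the `lesion_size`/`spp` parameters are placeholders (they would need matching `<default>` declarations in the scene XML), not the actual S-SYNTH file or parameter names.

```python
# Minimal Mitsuba 3 rendering sketch (pip install mitsuba).
import mitsuba as mi

mi.set_variant("scalar_rgb")  # CPU variant; use e.g. "cuda_ad_rgb" if available

# Load a scene description; keyword arguments override <default> parameters
# declared in the XML (names here are illustrative placeholders).
scene = mi.load_file("skin_scene.xml", lesion_size=2.5, spp=256)

# Render and save the resulting synthetic skin image.
image = mi.render(scene)
mi.util.write_bitmap("synthetic_skin.png", image)
```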
Please see the code directory for:
- Generating materials, skin models, and synthetic skin lesions
- Training a segmentation model using the associated synthetic images (a minimal sketch follows this list)
- Evaluating performance on real skin images from the HAM10000 and ISIC 2018 datasets
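The sketch below shows one way to train a lesion-segmentation model on synthetic image/mask pairs and evaluate it on real images with the Dice metric, assuming PyTorch and torchvision. The directory paths, dataset class, and choice of network (torchvision's `fcn_resnet50` as a stand-in) are illustrative assumptions, not the exact setup used in the paper.

```python
# Hedged sketch: train on synthetic image/mask pairs, evaluate Dice on real data.
import torch
import torch.nn as nn
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from torchvision.models.segmentation import fcn_resnet50


class SkinLesionDataset(Dataset):
    """Loads (image, binary mask) pairs from parallel directories (example layout)."""

    def __init__(self, image_dir, mask_dir, size=256):
        self.images = sorted(Path(image_dir).glob("*.png"))
        self.masks = sorted(Path(mask_dir).glob("*.png"))
        self.to_tensor = transforms.Compose(
            [transforms.Resize((size, size)), transforms.ToTensor()]
        )

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = self.to_tensor(Image.open(self.images[idx]).convert("RGB"))
        mask = self.to_tensor(Image.open(self.masks[idx]).convert("L"))
        return image, (mask > 0.5).float()


def dice_score(pred, target, eps=1e-6):
    """Dice overlap between a thresholded prediction and a binary mask."""
    pred = (pred > 0.5).float()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)


device = "cuda" if torch.cuda.is_available() else "cpu"
model = fcn_resnet50(weights=None, num_classes=1).to(device)  # stand-in architecture

train_loader = DataLoader(
    SkinLesionDataset("synthetic/images", "synthetic/masks"), batch_size=8, shuffle=True
)
test_loader = DataLoader(SkinLesionDataset("real/images", "real/masks"), batch_size=1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(10):
    model.train()
    for images, masks in train_loader:
        images, masks = images.to(device), masks.to(device)
        loss = loss_fn(model(images)["out"], masks)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Evaluate on real images (e.g. HAM10000) with the Dice metric.
model.eval()
scores = []
with torch.no_grad():
    for image, mask in test_loader:
        prob = torch.sigmoid(model(image.to(device))["out"]).cpu()
        scores.append(dice_score(prob, mask).item())
print(f"Mean Dice on real test images: {sum(scores) / len(scores):.3f}")
```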
Associated data for this repository, including pre-generated synthetic skin examples and their masks, can be found in the Hugging Face dataset repository (S-SYNTH data).
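One way to fetch the pre-generated examples is with the `huggingface_hub` package, as sketched below; the `repo_id` matches the dataset linked above, while the rest is a minimal example.

```python
# Download the S-SYNTH dataset snapshot from the Hugging Face Hub
# (pip install huggingface_hub).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="didsr/ssynth_data",
    repo_type="dataset",  # the data lives in a dataset (not model) repo
)
print("Dataset downloaded to:", local_dir)
```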
@inproceedings{kim2024ssynth,
  title={{S-SYNTH}: Knowledge-Based, Synthetic Generation of Skin Images},
  author={Kim, Andrea and Saharkhiz, Niloufar and Sizikova, Elena and Lago, Miguel and Sahiner, Berkman and Delfino, Jana G. and Badano, Aldo},
  booktitle={International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)},
  year={2024}
}
- FDA Catalog of Regulatory Science Tools to Help Assess New Medical Devices.
- A. Badano, M. Lago, E. Sizikova, J. G. Delfino, S. Guan, M. A. Anastasio, B. Sahiner. The stochastic digital human is now enrolling for in silico imaging trials—methods and tools for generating digital cohorts. Progress in Biomedical Engineering 2023.