Repository for "Inverse Design of Silicon Photonics Components: A Study from Deep Learning Perspective"
Whole-data evaluation
Our results show that the ResNet and InceptionNet models both perform reasonably well for PPS generation, especially when compared with VGG. The results obtained when training on 90% of the data and testing on the remaining 10% are consistent with those of our whole-data evaluation.
Install the requirements with:
python3 -m pip install -r requirements.txt
python3 -m train_forward [--path=PATH]
PATH defaults to ".."
python3 -m train_inverse --model=MODEL [--model_name=NAME] [--path=PATH]
MODEL can be ResNet, InceptionNet, or VGG
NAME defaults to "res_model.pth", "inception_model.pth", or "vgg_model.pth", depending on MODEL
PATH defaults to ".."
If NAME is not specified, the default filename is also the one recognized by the default tester for the chosen model.
Input structures and output characteristics are expected to be in folders at $path + "Machine learning inverse design/input_patterns" and $path + "Machine learning inverse design/output_characteristics", respectively.
This command trains on the entire dataset. To train with a 90:10 train-test split instead, replace train_inverse with train_inverse9010.
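The default checkpoint filename follows from the chosen MODEL. A minimal Python sketch of that mapping (the function name default_model_name is hypothetical; the repository's actual implementation may differ):

```python
# Hypothetical sketch of the MODEL -> default checkpoint-name lookup
# described in this README; not the repository's actual code.
DEFAULT_NAMES = {
    "ResNet": "res_model.pth",
    "InceptionNet": "inception_model.pth",
    "VGG": "vgg_model.pth",
}

def default_model_name(model: str) -> str:
    """Return the default .pth filename for a given MODEL choice."""
    try:
        return DEFAULT_NAMES[model]
    except KeyError:
        raise ValueError(
            f"MODEL must be one of {sorted(DEFAULT_NAMES)}, got {model!r}"
        )

print(default_model_name("ResNet"))  # res_model.pth
```

Passing --model_name overrides this default; the tester only finds the checkpoint automatically when the default name is kept.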
python3 -m test_inverse --model=MODEL [--model_name=NAME] [--path=PATH]
MODEL can be ResNet, InceptionNet, or VGG
NAME defaults to "res_model.pth", "inception_model.pth", or "vgg_model.pth", depending on MODEL
PATH defaults to ".."
To run this program, two models are required in the models directory: the inverse model under test and the forward model (forward_model.pth).
All models are saved to and loaded from the 'models' directory under the specified path; all figures are saved to the 'figures' directory at the same location.
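Assuming the directory conventions above (and assuming the paths are joined rather than concatenated as raw strings), the locations resolved from a given --path can be sketched as follows; resolve_dirs is a hypothetical helper for illustration only:

```python
from pathlib import Path

def resolve_dirs(path: str = "..") -> dict:
    """Build the data, model, and figure locations described in this README.

    Assumes path components are joined with a separator; the actual
    scripts may construct these paths differently.
    """
    base = Path(path)
    data = base / "Machine learning inverse design"
    return {
        "input_patterns": data / "input_patterns",
        "output_characteristics": data / "output_characteristics",
        "models": base / "models",
        "figures": base / "figures",
    }

dirs = resolve_dirs("..")
print(dirs["models"])  # ../models
```

With the default PATH of "..", the models and figures directories therefore sit one level above the repository checkout.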
Pre-trained models and data can be downloaded from: https://drive.google.com/drive/folders/1QRARk_RdEI-WgjzwzjKGTikWUeJLO2DH?usp=sharing