Code for the "ScrewNet: Category-Independent Articulation Model Estimation From Depth Images Using Screw Theory" paper. Full paper available here. [Project webpage]
Set up the conda environment:

```bash
cd /path/to/the/repository/
conda env create -f environment.yaml
conda activate screwNet
```
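Not stated in the README, but since `train_model.py` takes `--cuda` and `--device` flags, the environment presumably provides a GPU-enabled deep-learning stack. Assuming it is PyTorch, a quick sanity check could be:

```bash
# Assumes PyTorch is the backend (an assumption, not confirmed by this README).
python -c "import torch; print(torch.__version__, 'CUDA:', torch.cuda.is_available())"
```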
To evaluate a pretrained model, run:

```bash
python evaluate_model.py --model-dir <pretrained-model-dir> --model-name <model-name> --test-dir <test-dir-name> --model-type <model-type> --output-dir <output-dir>
```

where `<model-type>` is one of `screw`, `l2`, `noLSTM`, or `2imgs`.
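For example (all paths and names below are illustrative placeholders, not files shipped with the repository):

```bash
# Illustrative invocation; substitute your own model and dataset locations.
python evaluate_model.py \
    --model-dir ./pretrained_models \
    --model-name screwnet_best \
    --test-dir ./data/test_set \
    --model-type screw \
    --output-dir ./results/screwnet_eval
```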
To visualize the results:

- run `jupyter notebook`
- open the `visualize_results` notebook
- update the evaluation directories (the same `--output-dir` passed to the `evaluate_model.py` script)
- run the corresponding cells (a standalone loading sketch follows this list)
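If you want to inspect the evaluation outputs outside the notebook, a minimal sketch is below. The file name `pred_errs.npy` and its array layout are hypothetical, introduced only for illustration; check what `evaluate_model.py` actually writes to your `--output-dir`.

```python
# Hypothetical output file name and layout -- verify against what
# evaluate_model.py actually saves in your --output-dir.
import numpy as np
import matplotlib.pyplot as plt

errs = np.load("results/screwnet_eval/pred_errs.npy")  # assumed shape: (n_samples, n_params)
print("mean absolute error per parameter:", np.abs(errs).mean(axis=0))

plt.hist(np.abs(errs[:, 0]), bins=50)
plt.xlabel("absolute error, first screw parameter (assumed ordering)")
plt.ylabel("count")
plt.show()
```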
- Generate the dataset using our fork of the synthetic articulated dataset generator, available here
- Run the following command to train ScrewNet on the generated datasets (an illustrative invocation follows the command):

```bash
python train_model.py --name <model-name> --train-dir <training-dataset-dir> --test-dir <test-dataset-dir> --ntrain <no_of_training_samples> --ntest <no_of_validation_samples> --epochs <no_epochs> --cuda --batch <batch-size> --device 0 --fix-seed
```
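For example (values are placeholders, not recommended defaults from the paper or the repository):

```bash
# Placeholder values; tune sample counts, epochs, and batch size to your dataset.
python train_model.py \
    --name screwnet_run1 \
    --train-dir ./data/train \
    --test-dir ./data/val \
    --ntrain 10000 \
    --ntest 1000 \
    --epochs 100 \
    --cuda \
    --batch 128 \
    --device 0 \
    --fix-seed
```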