Before we begin, make sure you have followed the steps here to set up your dataset.
This README documents the additional steps required to test/evaluate a general dataset. If you would like to train on it instead, see the document for training a general dataset.
Note that there are 2 types of "testing" that you can do:
- Evaluation: you have ground-truth keypoint data, and want to evaluate against this ground truth
- Demo: you do not have ground-truth keypoints, and want to use this algorithm to generate them
To evaluate against ground truth, simply run:

```bash
python3 train.py \
    --eval --eval_dataset val \
    --config experiments/path/to/config_file.yaml \
    --logdir ./logs
```
The `--eval_dataset` argument can be `val` or `train`. Results can be seen in the `logs` directory or in TensorBoard.
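To browse the TensorBoard logs, point TensorBoard at the same log directory (assuming the default `./logs` path used in the command above):

```bash
# Launch TensorBoard against the log directory written during evaluation
tensorboard --logdir ./logs
```

Then open the URL it prints (typically http://localhost:6006) in a browser.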
To run a demo (no ground truth required), simply run:

```bash
python3 demo.py \
    --config experiments/path/to/config_file.yaml \
    --logdir ./logs
```
Results can be seen in the `logs` directory or in TensorBoard.
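If you run several demos, it may help to give each run its own log directory so the outputs do not get mixed together. A usage sketch, where the `demo_run1` subdirectory is just an example path:

```bash
# Write this demo's outputs to a dedicated subdirectory (example path)
python3 demo.py \
    --config experiments/path/to/config_file.yaml \
    --logdir ./logs/demo_run1
```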
To visualise your results, follow the instructions in the README. You can choose to run with or [without](README.md#visualising-results-without-tensorboard) TensorBoard.