Data annotation tool with proposal generation and person detection support. It highlights the differences between subsequent frames with bounding boxes to identify regions of interest. Developed during the Hackathon on Permafrost; this readme was added afterwards.
- Create a conda environment from the provided yml file:
conda env create -f environment.yml
- Download the detector models (RetinaNet, YOLOv3, TinyYOLOv3) to be used and place them under a folder called `models`.
- Download the full image dataset (6.7G!) or, alternatively, experiment only with the images from a single day (27M). Unzip the archive and place the image folders under a directory called `data`.
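After the download steps above, the repository layout should look roughly like this (model filenames depend on the downloads and are omitted here):

```
.
├── differ.py
├── environment.yml
├── labelDict.json
├── models/           # downloaded detector weights (RetinaNet, YOLOv3, TinyYOLOv3)
└── data/
    └── 2017-09-08/   # one folder of images per day
```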
To use the tool on the provided example data, simply call the script with the folder of interest:
python differ.py --day "2017-09-08" --saveCSV
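The proposal step boils down to flagging regions that changed between subsequent frames. A minimal sketch of that idea in pure NumPy (an illustrative simplification, not the actual code of `differ.py`; the function name and parameters are made up for this example):

```python
import numpy as np

def change_box(prev_frame, next_frame, thresh=30):
    """Bounding box (x, y, w, h) around pixels that changed between two
    grayscale frames, or None if nothing changed."""
    # Cast to a signed type so the subtraction cannot wrap around.
    delta = np.abs(prev_frame.astype(np.int16) - next_frame.astype(np.int16))
    ys, xs = np.nonzero(delta > thresh)
    if ys.size == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

# Two synthetic frames: a bright square appears in the second one.
a = np.zeros((100, 100), dtype=np.uint8)
b = a.copy()
b[20:40, 30:60] = 255
print(change_box(a, b))  # box around the square that appeared
```

The real tool additionally splits the changed area into multiple proposals and can run a person detector on them; this sketch only shows the differencing principle.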
Intermediate outputs for the last image can be found in the `temp` folder, which might come in handy if further visual inspection is needed. To work on your own dataset, place the image folder under the `data` folder and provide its name to the script. The parameters of the script are fine-tuned on the permafrost dataset, so please adjust them accordingly for other datasets. Be sure to also change `labelDict.json` to match the desired label and key pairs. One important thing to keep in mind: the files must be named according to their temporal ordering, i.e. if one frame immediately follows another, its filename should also sort directly after the other frame's filename.