[ Paper ] | [ Website ]
DrawMon - a distributed alert generation system (see figure above). Each game session is managed by a central Session Manager, which assigns a unique session id. For a given session, whenever a sketch stroke is drawn, the accumulated canvas content (i.e., the strokes rendered so far) is tagged with the session id and relayed to a shared Session Canvas Queue. For efficiency, the canvas content is represented as a lightweight Scalable Vector Graphics (SVG) object. The contents of the Session Canvas Queue are dequeued and rendered into corresponding 512×512 binary images by the Distributed Rendering Module in a distributed, parallel fashion. The rendered binary images, tagged with the session id, are placed in the Rendered Image Queue, whose contents are dequeued and processed by the Distributed Detection Module. Each detection module consists of our custom-designed deep neural network, CanvasNet.
CanvasNet processes the rendered image as input and outputs a list of atypical activities (if any) along with associated meta-information (atypical content category, 2-D spatial location).
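The end-to-end flow is essentially a producer-consumer pipeline over shared queues. Below is a minimal illustrative sketch of that flow, not the actual DrawMon implementation: it assumes `cairosvg` for SVG rasterisation, Python's `multiprocessing` queues in place of the real distributed queues, and a hypothetical `canvasnet_detect` stub standing in for CanvasNet inference.

```python
import io
import multiprocessing as mp

import cairosvg  # assumed rasteriser; the real system may use something else
import numpy as np
from PIL import Image


def canvasnet_detect(image):
    """Hypothetical stand-in for CanvasNet inference.

    Would return a list like [{"category": "text", "bbox": [x, y, w, h]}, ...].
    """
    return []


def rendering_worker(canvas_queue: mp.Queue, image_queue: mp.Queue) -> None:
    """Dequeue (session_id, svg_bytes) pairs and render 512x512 binary images."""
    while True:
        session_id, svg_bytes = canvas_queue.get()
        png = cairosvg.svg2png(bytestring=svg_bytes, output_width=512,
                               output_height=512, background_color="white")
        gray = np.array(Image.open(io.BytesIO(png)).convert("L"))
        binary = (gray < 128).astype(np.uint8)  # strokes -> 1, background -> 0
        image_queue.put((session_id, binary))


def detection_worker(image_queue: mp.Queue, alert_queue: mp.Queue) -> None:
    """Dequeue rendered images, run detection, and emit alerts (if any)."""
    while True:
        session_id, image = image_queue.get()
        detections = canvasnet_detect(image)
        if detections:
            alert_queue.put((session_id, detections))
```

Several rendering and detection workers can consume from the same queues concurrently, which is what makes the rendering and detection stages distributed and parallel.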
This repo contains the official codebase for CanvasNet.
The CanvasNet code is tested with:
- Python (3.7.x)
- Tensorflow (1.7.1)
- CUDA (10.0)
- CudNN (7.3-CUDA-10.0)
We have provided environment files for both Conda and pip. Please use either one of the following:
conda env create -f environment.yml
pip install -r requirements.txt
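After installing, a quick optional check (a snippet of ours, not part of the repo) confirms that the expected TensorFlow version is active and that it can see the GPU:

```python
import tensorflow as tf

# Should print 1.7.1 if the tested environment was installed correctly
print("TensorFlow version:", tf.__version__)

# True only if TensorFlow was built with CUDA support and a GPU is visible
print("GPU available:", tf.test.is_gpu_available())
```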
- Download AtyPict [ Dataset Link ]
- Place the
  - dataset under the `images` directory
  - COCO-pretrained model weights in the `init_weights` directory
    - Weights used: TBA
More information can be found in folder-specific READMEs.
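Before training, a small hypothetical helper like this (not part of the codebase) can verify that the expected layout is in place:

```python
from pathlib import Path

# Expected top-level directories, per the setup steps above
required = ["images", "init_weights"]

missing = [d for d in required if not Path(d).is_dir()]
if missing:
    raise SystemExit("Missing directories: " + ", ".join(missing))
print("Directory layout looks good.")
```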
If your compute cluster uses SLURM, please load these (or equivalent) modules at the start of your experiments. Ensure that all other modules are unloaded.
module add cuda/10.0
module add cudnn/7.3-cuda-10.0
Train the presented network
python train.py \
--num-gpus 4
Please refer to the README.md under the `configs` directory for ablative variants and baselines.
To perform inference and obtain quantitative results on the test set:
python train.py \
--eval-only \
MODEL.WEIGHTS <path-to-model-file>
- This outputs two JSON files in the output directory specified by the config.
  - `coco_instances_results.json` - an encoded format which must be parsed to obtain the qualitative results
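`coco_instances_results.json` follows the standard COCO results format: a list of records, each with an `image_id`, a `category_id`, a `bbox` in `[x, y, width, height]` order, and a confidence `score`. A minimal parsing sketch (the path and the 0.5 score threshold are assumptions):

```python
import json

# Assumed location inside the run's output directory
with open("output/coco_instances_results.json") as f:
    detections = json.load(f)

# Keep confident predictions only (0.5 is an illustrative threshold)
for det in detections:
    if det["score"] >= 0.5:
        x, y, w, h = det["bbox"]
        print(f"image {det['image_id']}: category {det['category_id']} "
              f"at ({x:.0f}, {y:.0f}), size {w:.0f}x{h:.0f}, "
              f"score {det['score']:.2f}")
```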
Visualisation can be executed only after quantitative inference, or on the validation outputs produced at the end of each training epoch.
This parses the output JSON and overlays predictions on the images.
python visualise_json_results.py \
--inputs <path-to-output-file-1.json> [... <path-to-output-file-2.json>] \
--output outputs/qualitative/
NOTE: To compare multiple models, multiple input JSON files can be passed. This produces a single vertically stitched image combining the predictions from each JSON file passed.
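For example, to compare two models side by side (the input paths below are hypothetical):

python visualise_json_results.py \
--inputs outputs/baseline/coco_instances_results.json outputs/canvasnet/coco_instances_results.json \
--output outputs/qualitative/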
Examples of atypical content detection. False negatives are shown as dashed rectangles and false positives as dotted rectangles. Color codes are: text, numbers, question marks, arrows, circles and other icons (e.g. tick marks, addition symbol).
If you make use of our work, please consider citing:
@InProceedings{DrawMonACMMM2022,
author="Bansal, Nikhil
and Gupta, Kartik
and Kannan, Kiruthika
and Pentapati, Sivani
and Sarvadevabhatla, Ravi Kiran",
title="DrawMon: A Distributed System for Detection of Atypical Sketch Content in Concurrent Pictionary Games",
booktitle = "ACM conference on Multimedia (ACMMM)",
year="2022"
}
For any queries, please contact Dr. Ravi Kiran Sarvadevabhatla.
This project is open-sourced under the MIT License.