
Commit

first commit
abhi1kumar committed Mar 30, 2021
0 parents commit d23d850
Showing 153 changed files with 78,194 additions and 0 deletions.
71 changes: 71 additions & 0 deletions .gitignore
@@ -0,0 +1,71 @@
# =========================
# User ignore files
# =========================

# folders/files
external/
.idea/
cache/
output/
data/
pathdef.m
.tmp_results/

# filetypes
*.pyc
*.mexa64
*.caffemodel
*.mat
*.jpg
*.png
*.pyc
*/output/*
*~
*.cpp
*.c
*.so

# =========================
# Windows
# =========================

# Windows image file caches
Thumbs.db
ehthumbs.db

# Folder config file
Desktop.ini

# Recycle Bin used on file shares
$RECYCLE.BIN/

# Windows Installer files
*.cab
*.msi
*.msm
*.msp

# Windows shortcuts
*.lnk

# =========================
# OSX
# =========================

.DS_Store
.AppleDouble
.LSOverride

# Thumbnails
._*

# Files that might appear on external disk
.Spotlight-V100
.Trashes

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
178 changes: 178 additions & 0 deletions README.md
@@ -0,0 +1,178 @@
# GrooMeD-NMS: Grouped Mathematically Differentiable NMS for Monocular 3D Object Detection

<img src="images/groomed_nms.png" width="1024">
<img src="images/demo.gif">

GrooMeD-NMS: Grouped Mathematically Differentiable NMS for Monocular 3D Object Detection, [CVPR 2021](http://cvpr2021.thecvf.com/)

[Abhinav Kumar](https://sites.google.com/view/abhinavkumar/), [Garrick Brazil](https://garrickbrazil.com/), [Xiaoming Liu](http://www.cse.msu.edu/~liuxm/index2.html)

[project], [supp], [slides], [1min_talk], [demo](https://www.youtube.com/watch?v=PWctKkyWrno), [arxiv]

This code is based on [Kinematic-3D](https://github.com/garrickbrazil/kinematic3d), so the setup and organization are very similar. A few of the implementations, such as classical NMS, are based on [Caffe](https://caffe.berkeleyvision.org/install_apt.html).
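The core idea in the title can be illustrated with a toy sketch: instead of the hard suppression of classical NMS, each box's score is decayed by a smooth, differentiable penalty based on its overlap with higher-scoring boxes (in the spirit of Gaussian Soft-NMS). This is not the paper's grouped matrix formulation; the function name, the penalty shape, and the temperature value below are illustrative assumptions only.

```python
import math

def soft_rescore(scores, ious, temperature=0.1):
    """Toy differentiable NMS-style rescoring (illustrative only).

    Rather than hard-removing boxes that overlap a higher-scoring box,
    each score is multiplied by a smooth Gaussian decay of its IoU with
    every higher-ranked box. All names and constants are hypothetical;
    see the paper for the actual grouped formulation.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    new_scores = list(scores)
    for rank, i in enumerate(order):
        for j in order[:rank]:  # boxes ranked above box i
            # smooth, differentiable decay instead of a hard IoU cut-off
            new_scores[i] *= math.exp(-ious[i][j] ** 2 / temperature)
    return new_scores
```

Because every operation here is differentiable, gradients can flow from the final scores back into the detector, which is what allows an NMS-like step to sit inside the training pipeline.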

## References

Please cite the following paper if you find this repository useful:
```
@inproceedings{kumar2021groomed,
title={{GrooMeD-NMS}: Grouped Mathematically Differentiable NMS for Monocular {$3$D} Object Detection},
author={Kumar, Abhinav and Brazil, Garrick and Liu, Xiaoming},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2021}
}
```


## Setup

- **Requirements**

1. Python 3.6
2. [PyTorch](http://pytorch.org) 0.4.1
3. Torchvision 0.2.1
4. CUDA 8.0
5. Ubuntu 18.04/Debian 8.9

This code has been tested with an NVIDIA 1080 Ti GPU; other platforms have not been tested. Unless otherwise stated, the scripts and instructions below assume the working directory is the project root.

Clone the repo first:
```bash
git clone https://github.com/abhi1kumar/groomed_nms.git
```

- **CUDA & Python**

Install some basic packages:
```bash
sudo apt-get install libopenblas-dev libboost-dev libboost-all-dev git
sudo apt install gfortran
# We need to compile with an older version of gcc and g++
sudo apt install gcc-5 g++-5
sudo ln -sf /usr/bin/gcc-5 /usr/local/cuda-8.0/bin/gcc
sudo ln -sf /usr/bin/g++-5 /usr/local/cuda-8.0/bin/g++
```

Next, install conda and then install the required packages:

```bash
wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
bash Anaconda3-2020.02-Linux-x86_64.sh
source ~/.bashrc
conda list
conda create --name py36 --file dependencies/conda.txt
conda activate py36
```

- **KITTI Data**

Download the following images of the full [KITTI 3D Object detection](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d) dataset:

- [left color images](https://s3.eu-central-1.amazonaws.com/avg-kitti/data_object_image_2.zip) of object data set (12 GB)
- [camera calibration matrices](https://s3.eu-central-1.amazonaws.com/avg-kitti/data_object_calib.zip) of object data set (16 MB)
- [training labels](https://s3.eu-central-1.amazonaws.com/avg-kitti/data_object_label_2.zip) of object data set (5 MB)

Then place a soft-link (or the actual data) in `data/kitti`:

```bash
ln -s /path/to/kitti data/kitti
```

The directory structure should look like this:

```bash
./groomed_nms
|--- cuda_env
|--- data
| |---kitti
| |---training
| | |---calib
| | |---image_2
| | |---label_2
| |
| |---testing
| |---calib
| |---image_2
|
|--- dependencies
|--- lib
|--- models
|--- scripts
```

Then, use the following scripts to extract the data splits. The splits use soft-links to the directory above for efficient storage:

```bash
python data/kitti_split1/setup_split.py
python data/kitti_split2/setup_split.py
```

Next, build the KITTI devkit eval:

```bash
sh data/kitti_split1/devkit/cpp/build.sh
```


- **Classical NMS**

Lastly, build the classical NMS modules:

```bash
cd lib/nms
make
cd ../..
```
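The module built here provides classical greedy NMS in compiled form. For reference, a pure-Python sketch of the same algorithm (illustrative only, not the repo's compiled implementation) looks like this:

```python
def iou_2d(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.4):
    """Greedy classical NMS: keep the highest-scoring box, discard all
    boxes overlapping it above iou_thresh, and repeat on the rest."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou_2d(boxes[i], boxes[j]) <= iou_thresh]
    return keep
```

The hard thresholding in the list comprehension is exactly the non-differentiable step that GrooMeD-NMS replaces with a smooth alternative.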

## Training

Training is carried out in two stages: a warmup stage followed by full training. Review the configurations in `scripts/config` for details.

```bash
chmod +x scripts_training.sh
./scripts_training.sh
```

If your training is accidentally stopped, you can resume from a saved snapshot using the `restore` flag. For example, to resume training from iteration 10k, use the following command:

```bash
source dependencies/cuda_8.0_env
CUDA_VISIBLE_DEVICES=0 python -u scripts/train_rpn_3d.py --config=groumd_nms --restore=10000
```
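The `restore` flag points the trainer at an existing snapshot. A hypothetical helper showing how a snapshot could be matched by iteration number (the filename pattern and extensions are assumptions, not the repo's actual naming scheme):

```python
import re

def find_snapshot(filenames, restore_iter):
    """Illustrative helper (not the repo's actual code): return the
    snapshot file whose trailing iteration count matches restore_iter,
    assuming names like 'model_10000.pth'."""
    pattern = re.compile(r'_(\d+)\.(?:pth|pkl)$')
    for name in filenames:
        m = pattern.search(name)
        if m and int(m.group(1)) == restore_iter:
            return name
    return None
```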


## Testing

We provide models for the main experiments on the KITTI Val 1, Val 2, and Test data splits, available for download [here](https://drive.google.com/file/d/1XjwHtkByOK9YEiK4MLn6B_s1GqLjP8M-/view?usp=sharing).

Make an `output` folder in the project directory:

```bash
mkdir output
```

Place different models in the `output` folder as follows:

```bash
./groomed_nms
|--- output
| |---groumd_nms
| |
| |---groumd_nms_split2
| |
| |---groumd_nms_full_train_2
|
| ...
```

To test, run the evaluation script:

```bash
chmod +x scripts_evaluation.sh
./scripts_evaluation.sh
```


## Contact
For questions, feel free to post here or drop an email to this address: `[email protected]`
134 changes: 134 additions & 0 deletions data/kitti_split1/determine_seqs.py
@@ -0,0 +1,134 @@
import re
import sys

# stop python from writing so much bytecode
sys.dont_write_bytecode = True

# -----------------------------------------
# custom modules
# -----------------------------------------
from lib.util import *

mapping_file = '/home/garrick/Desktop/detective/data/kitti_split1/devkit/mapping/train_mapping.txt'
rand_file = '/home/garrick/Desktop/detective/data/kitti_split1/devkit/mapping/train_rand.txt'
ids_file = '/home/garrick/Desktop/detective/data/kitti_split1/val.txt'
mapping = []
rand_map = []

# NOTE: this list is unused; it is overwritten by the second assignment below
with_tracklets = ['2011_09_26_drive_0086_sync',
'2011_09_26_drive_0064_sync',
'2011_09_26_drive_0070_sync',
'2011_09_26_drive_0022_sync',
'2011_09_26_drive_0039_sync',
'2011_09_26_drive_0032_sync',
'2011_09_26_drive_0014_sync',
'2011_09_26_drive_0009_sync',
'2011_09_26_drive_0023_sync',
'2011_09_26_drive_0052_sync',
'2011_09_26_drive_0093_sync',
'2011_09_26_drive_0002_sync',
'2011_09_26_drive_0017_sync',]


with_tracklets = ['2011_09_26_drive_0046_sync',
'2011_09_26_drive_0056_sync',
'2011_09_26_drive_0036_sync',
'2011_09_26_drive_0018_sync',
'2011_09_26_drive_0027_sync',
'2011_09_26_drive_0028_sync',
'2011_09_26_drive_0051_sync',
'2011_09_26_drive_0019_sync',
'2011_09_26_drive_0061_sync',
'2011_09_26_drive_0087_sync',
'2011_09_26_drive_0035_sync',
'2011_09_26_drive_0057_sync',
'2011_09_26_drive_0059_sync',
'2011_09_26_drive_0091_sync',
'2011_09_26_drive_0001_sync',
'2011_09_26_drive_0084_sync',
'2011_09_26_drive_0015_sync',
'2011_09_26_drive_0029_sync',
'2011_09_26_drive_0011_sync',
'2011_09_26_drive_0020_sync',
'2011_09_26_drive_0013_sync',
'2011_09_26_drive_0005_sync',
'2011_09_26_drive_0060_sync',
'2011_09_26_drive_0048_sync',
'2011_09_26_drive_0079_sync',]

# read mapping
text_file = open(mapping_file, 'r')

for line in text_file:

    # example line: 2011_09_26 2011_09_26_drive_0005_sync 0000000109
    parsed = re.search(r'(\S+)\s+(\S+)\s+(\S+)', line)

    if parsed is not None:

        date = str(parsed[1])
        seq = str(parsed[2])
        id = str(parsed[3])
        mapping.append([seq, id])

text_file.close()

# read rand
text_file = open(rand_file, 'r')

for line in text_file:
    parsed = re.findall(r'(\d+)', line)
    for p in parsed:
        rand_map.append(int(p))

text_file.close()

text_file = open(ids_file, 'r')

seqs_used = []
# compute total sequences available
for rand in rand_map:
    if not mapping[rand-1][0] in seqs_used:
        seqs_used.append(mapping[rand-1][0])

total_max = len(seqs_used)

im_count = 0
tr_count = 0

# compute sequences used!
seqs_used = []
for line in text_file:

    parsed = re.search(r'(\d+)', line)

    if parsed is not None:
        id = int(parsed[0])

        im_count += 1
        if mapping[rand_map[id]-1][0] in with_tracklets:
            tr_count += 1

        if not mapping[rand_map[id]-1][0] in seqs_used:
            seqs_used.append(mapping[rand_map[id]-1][0])
            print('\'{}\','.format(mapping[rand_map[id]-1][0]))

actual_used = len(seqs_used)

print('with tracking? {}/{}, {}'.format(tr_count, im_count, tr_count/im_count))

#print(seqs_used)
text_file.close()

print('{}/{} seqs used'.format(actual_used, total_max))
4 changes: 4 additions & 0 deletions data/kitti_split1/devkit/cpp/CMakeLists.txt
@@ -0,0 +1,4 @@
cmake_minimum_required (VERSION 2.6)
project(devkit_object)

add_executable(evaluate_object evaluate_object.cpp)