This repository contains the official implementation for the paper:
Global Matching with Overlapping Attention for Optical Flow Estimation
CVPR 2022
Shiyu Zhao, Long Zhao, Zhixing Zhang, Enyu Zhou, Dimitris Metaxas
The code has been tested with PyTorch 1.7 and CUDA 11.0; later PyTorch versions may also work.
conda create --name gmflownet
conda activate gmflownet
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch
conda install matplotlib tensorboard scipy opencv
Download the .zip file with the pretrained models from Google Drive and unzip pretrained_models.zip
in the repository root.
unzip pretrained_models.zip
You can demo a trained model on a sequence of frames:
python demo.py --model gmflownet --ckpt=pretrained_models/gmflownet-things.pth --path=demo-frames
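The demo renders the predicted flow as a color image. As a reference, one common flow-to-color mapping can be sketched in NumPy (a simplified HSV-style mapping for illustration, not the exact color wheel demo.py uses):

```python
import numpy as np

def flow_to_color(flow):
    """Map a flow field (H, W, 2) to an RGB uint8 image.

    Hue encodes flow direction, saturation encodes magnitude.
    Simplified sketch; not the repository's visualization code.
    """
    u, v = flow[..., 0], flow[..., 1]
    mag = np.sqrt(u**2 + v**2)
    ang = np.arctan2(v, u)                      # direction in radians, [-pi, pi]
    hue = (ang + np.pi) / (2 * np.pi)           # normalize direction to [0, 1]
    sat = np.clip(mag / (mag.max() + 1e-8), 0, 1)
    # crude HSV -> RGB via phase-shifted cosines, each channel in [0, 1]
    r = sat * (0.5 + 0.5 * np.cos(2 * np.pi * hue))
    g = sat * (0.5 + 0.5 * np.cos(2 * np.pi * hue - 2 * np.pi / 3))
    b = sat * (0.5 + 0.5 * np.cos(2 * np.pi * hue + 2 * np.pi / 3))
    return (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)
```

Zero flow maps to black, and larger displacements appear more saturated.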
To evaluate/train GMFlowNet, you need to download the following datasets:
- FlyingChairs
- FlyingThings3D
- Sintel
- KITTI
- HD1K (optional)
Place all datasets in a directory of your choice and symlink it to ./datasets with
ln -s <your_directory> ./datasets
so that your ./datasets folder looks like
├── datasets
    ├── Sintel
        ├── test
        ├── training
    ├── KITTI
        ├── testing
        ├── training
        ├── devkit
    ├── FlyingChairs_release
        ├── data
    ├── FlyingThings3D
        ├── frames_cleanpass
        ├── frames_finalpass
        ├── optical_flow
    ...
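As a quick sanity check, the expected layout above can be verified with a short script (a hypothetical helper, not part of the repository):

```python
from pathlib import Path

# Top-level dataset folders and the subfolders the tree above expects.
EXPECTED = {
    "Sintel": ["training", "test"],
    "KITTI": ["training", "testing"],
    "FlyingChairs_release": ["data"],
    "FlyingThings3D": ["frames_cleanpass", "frames_finalpass", "optical_flow"],
}

def missing_paths(root="./datasets"):
    """Return the expected subdirectories that are missing under root."""
    root = Path(root)
    missing = []
    for top, subs in EXPECTED.items():
        for sub in subs:
            p = root / top / sub
            if not p.is_dir():
                missing.append(str(p.relative_to(root)))
    return missing

if __name__ == "__main__":
    for m in missing_paths():
        print("missing:", m)
```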
Download the pretrained models described in Demo.
You can evaluate a pretrained model using evaluate.py. To get the best results:
On Sintel, evaluate the gmflownet_mix model as:
python evaluate.py --model gmflownet --use_mix_attn --ckpt=pretrained_models/gmflownet_mix-things.pth --dataset=sintel
On KITTI, evaluate the gmflownet model as:
python evaluate.py --model gmflownet --ckpt=pretrained_models/gmflownet-things.pth --dataset=kitti
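The evaluation script reports the standard optical-flow metrics. For reference, the two headline metrics can be sketched in a few lines of NumPy (these are the standard definitions, not the repository's exact code):

```python
import numpy as np

def epe(flow_pred, flow_gt):
    """Average end-point error: mean L2 distance between predicted and GT flow.
    Both arrays have shape (H, W, 2)."""
    return np.linalg.norm(flow_pred - flow_gt, axis=-1).mean()

def kitti_fl_all(flow_pred, flow_gt):
    """KITTI 'Fl-all': fraction of pixels whose end-point error exceeds
    3 px AND 5% of the ground-truth flow magnitude."""
    err = np.linalg.norm(flow_pred - flow_gt, axis=-1)
    mag = np.linalg.norm(flow_gt, axis=-1)
    outliers = (err > 3.0) & (err > 0.05 * mag)
    return outliers.mean()
```

Sintel leaderboards use EPE; the KITTI leaderboard ranks by Fl-all.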
Note: gmflownet_mix replaces half of the heads (4 out of 8) in each POLA attention block of gmflownet
with axial-attention heads, which achieves better results on Sintel.
We used the following training schedules in our paper (2 GPUs).
- Train gmflownet as:
  ./train_gmflownet.sh
- Train gmflownet_mix as:
  ./train_gmflownet_mix.sh
Training logs are written to the ./runs directory and can be visualized with TensorBoard as:
tensorboard --bind_all --port 8080 --logdir ./runs
The code is based on RAFT and Swin Transformer. We sincerely thank the authors for their great work.