
Could you provide your results in each config? #36

Closed

mrsempress opened this issue Sep 2, 2021 · 10 comments

@mrsempress
Could you provide your results in each config?
Thanks for your work. My results using your code are lower than those reported in the papers, so I would appreciate it if you could provide the results for each config. Also, if possible, could you provide a Docker image for easier installation? (Some machines cannot install the dependencies cleanly due to conflicts among packages.)

My results are as follows (something seems wrong with Yolo3D_example and KM3D_example).

In Ground-aware:

Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:91.76, 79.74, 62.81
bev  AP:22.39, 17.37, 13.54   
3d   AP:16.80, 12.73, 10.22    <----    In paper: 22.16 | 15.71 | 11.75
aos  AP:90.96, 78.19, 61.48
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:91.76, 79.74, 62.81
bev  AP:56.30, 41.20, 32.98
3d   AP:51.48, 37.65, 29.91
aos  AP:90.96, 78.19, 61.48

In monoflex:

Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:97.04, 91.58, 81.64
bev  AP:30.90, 22.72, 19.22
3d   AP:22.91, 16.49, 13.59    <----    In paper:   23.64 | 17.51 | 14.83
aos  AP:96.92, 91.25, 81.16
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:97.04, 91.58, 81.64
bev  AP:66.93, 49.88, 43.21
3d   AP:62.03, 46.13, 39.80
aos  AP:96.92, 91.25, 81.16

In RTM3d:

Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:96.98, 88.85, 78.72
bev  AP:16.05, 12.08, 9.98    <----    In paper:  27.83 | 23.38 | 21.69
3d   AP:10.20, 7.86, 6.26      <----    In paper:   22.50 | 19.60 | 17.12
aos  AP:96.45, 87.73, 77.50
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:96.98, 88.85, 78.72
bev  AP:50.02, 37.08, 30.29
3d   AP:43.69, 32.07, 26.78
aos  AP:96.45, 87.73, 77.50
@Owen-Liuyuxuan (Owner) commented Sep 2, 2021

  1. The results on Yolo3D_example are much lower than what I can get. Did you train on multiple GPUs? Also, version changes affect the results; the best way to reproduce them is to use the code from the release page.
  2. The results on MonoFlex are lower than the paper because the rotation implementation is different and the edge-aware convolution part is not implemented, so the results from the official repo "should" outperform the results here.
  3. The results on KM3D should be compared with the res18 results from "KM3D", not RTM3D. Also, the results reported in that paper are AP11, while this repo always tests with AP40. From my experiments (I manually changed the evaluation code to do this), I obtain the following AP11 results:
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:90.73, 89.88, 80.95
bev  AP:24.57, 19.67, 17.20
3d   AP:18.27, 15.52, 14.85
aos  AP:90.49, 89.35, 80.27
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:90.73, 89.88, 80.95
bev  AP:53.70, 40.93, 35.31
3d   AP:46.47, 37.94, 33.18

which basically confirms the val1 results from the paper:

3d  19.48 / 15.32 / 13.88
bev 24.48 / 19.10 / 16.54

and also the official repo's results with the right image included:

3d  17.50, 14.06, 12.62
bev 25.03, 18.53, 17.45
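The AP11 vs. AP40 distinction above is just the number of recall points at which interpolated precision is sampled. A minimal sketch of KITTI-style interpolated AP, for illustration only (this is not the repo's actual evaluation code):

```python
import numpy as np

def interpolated_ap(recalls, precisions, mode="AP40"):
    # AP11 samples recall at 11 points {0.0, 0.1, ..., 1.0} (old KITTI metric);
    # AP40 samples at 40 points {1/40, 2/40, ..., 1.0} and skips recall = 0.
    if mode == "AP11":
        points = np.linspace(0.0, 1.0, 11)
    else:
        points = np.linspace(1.0 / 40, 1.0, 40)
    ap = 0.0
    for r in points:
        above = precisions[recalls >= r]          # interpolated precision:
        ap += above.max() if above.size else 0.0  # best precision at recall >= r
    return ap / len(points)
```

Because AP11 includes the recall = 0 point, where precision is typically highest, AP11 numbers usually come out noticeably higher than AP40 on the same detections, which is part of why the AP40 numbers in this repo look lower than the papers' AP11 numbers.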

@Owen-Liuyuxuan (Owner) commented Sep 2, 2021

```dockerfile
FROM nvidia/cuda:11.1-cudnn8-devel-ubuntu20.04

# Refresh package lists before upgrading or installing
RUN apt-get update && apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python3.8 python3-pip nano libsm6 libxext6 libxrender-dev libgl1-mesa-glx libglib2.0-0 python3-tk
RUN apt-get install -y git htop
RUN pip3 install -U pip
RUN pip3 install -U future
RUN pip3 install tensorflow pandas matplotlib numpy pillow opencv-python scikit-image numba tqdm cython fire easydict cityscapesscripts pyquaternion

ARG CUDA_VER="110"
ARG TORCH_VER="1.7.1"
ARG VISION_VER="0.8.2"

RUN pip3 install torch==${TORCH_VER} torchvision==${VISION_VER} -f https://download.pytorch.org/whl/cu${CUDA_VER}/torch_stable.html
```

Dockers with basic configurations like this should work fine.
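A Dockerfile like the one above would typically be built and run along these lines; note the image tag and the dataset path below are placeholders, not names from this repo:

```shell
# Build the image (the tag "mono3d:cu110" is just an example name)
docker build -t mono3d:cu110 \
    --build-arg CUDA_VER=110 \
    --build-arg TORCH_VER=1.7.1 \
    --build-arg VISION_VER=0.8.2 \
    .

# Run with GPU access, mounting a local KITTI directory (path is a placeholder)
docker run --gpus all -it -v /path/to/kitti:/data mono3d:cu110 bash
```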

@mrsempress (Author) replied:
  1. I only used one GPU, since I saw issue multi-gpu training #6. I will try again with version 1.1 and give feedback as soon as possible.
  2. I can accept this result for MonoFlex.
  3. I tested my KM3D experiment with AP11. The results are as follows:
     3d  12.16, 10.21, 8.03
     bev 17.93, 13.90, 12.98

     I will also try again with version 1.1.

@mrsempress (Author)

Thanks for sharing.

@mrsempress (Author)

I'm sorry to report that the results got even worse after I tried release 1.0 and used your Dockerfile.

The new results are below (all experiments trained on one GPU, using the parameters in xx_example).
In Ground-aware:

Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:94.64, 77.15, 59.71
bev  AP:20.15, 15.08, 11.83
3d   AP:14.42, 10.93, 9.05    <----    In paper: 22.16 | 15.71 | 11.75; first try: 16.80, 12.73, 10.22
aos  AP:93.17, 75.48, 58.37
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:94.64, 77.15, 59.71
bev  AP:52.49, 38.20, 29.44
3d   AP:46.10, 34.50, 26.31
aos  AP:93.17, 75.48, 58.37

In monoflex:

Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:96.98, 91.55, 81.58
bev  AP:29.61, 21.93, 18.88
3d   AP:20.54, 15.38, 13.62   <----    In paper:   23.64 | 17.51 | 14.83; first try: 22.91, 16.49, 13.59
aos  AP:96.77, 91.20, 81.10
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:96.98, 91.55, 81.58
bev  AP:68.15, 51.53, 45.09
3d   AP:61.27, 46.01, 39.76
aos  AP:96.77, 91.20, 81.10

In KM3d:

Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:96.95, 88.55, 76.27
bev  AP:19.18, 13.20, 10.87    <----    In paper:  27.83 | 23.38 | 21.69; first try: 16.05, 12.08, 9.98
3d   AP:12.05, 8.26, 6.94      <----    In paper:   22.50 | 19.60 | 17.12; first try: 10.20, 7.86, 6.26
aos  AP:96.25, 87.61, 75.23
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:96.95, 88.55, 76.27
bev  AP:51.46, 35.90, 29.99
3d   AP:44.34, 31.37, 26.09
aos  AP:96.25, 87.61, 75.23

Do you have any other ideas about my bad results?

@Owen-Liuyuxuan (Owner)

The computation of the observation angle in v1.0 is not correct for MonoFlex/KM3D; you need v1.1 to have it correct for those two models.

The baseline result has gone rather wild; even the 2D detection result is worse than what should be expected.

I will take some time to investigate.
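For context, the observation angle alpha in KITTI is the global yaw `ry` corrected by the viewing direction of the object center. A minimal sketch of the conversion in KITTI camera coordinates (this illustrates the convention, not this repo's actual implementation):

```python
import numpy as np

def alpha_from_ry(ry, x, z):
    # KITTI convention: alpha = ry - arctan2(x, z), where (x, z) is the
    # object center in camera coordinates; wrap the result to [-pi, pi).
    alpha = ry - np.arctan2(x, z)
    return (alpha + np.pi) % (2.0 * np.pi) - np.pi
```

An object straight ahead (x = 0) has alpha equal to ry; getting this wrap or the arctan term wrong shifts every predicted orientation, which is consistent with a version-dependent orientation bug.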

@Owen-Liuyuxuan (Owner) commented Sep 6, 2021

mono3d_tensorboard_zip.zip

A tensorboard result is attached. This run was trained with the code downloaded from version 1.0.

You can check the loss and the configuration files.

Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:97.31, 82.13, 64.64
bev  AP:28.95, 20.11, 15.51
3d   AP:22.80, 15.41, 11.43
aos  AP:95.90, 79.39, 62.34
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:97.31, 82.13, 64.64
bev  AP:63.83, 44.91, 34.54
3d   AP:58.73, 41.12, 31.26

@mrsempress (Author)

mrsempress commented Sep 8, 2021

First, thanks for your reply. I checked the loss (results as follows):

| loss  | yours     | mine      |
| ----- | --------- | --------- |
| cls   | 2.2329e-4 | 2.1649e-4 |
| reg   | 0.03156   | 0.03385   |
| total | 0.03178   | 0.03407   |

As for the configuration file, I only changed the batch size to 4 (with num_workers=1) due to a limited machine, but I also enlarged the number of epochs. Maybe the small batch size is the main reason? [Oops! I forgot to change the learning rate accordingly. I will change it and try again.]
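On the batch-size point: a common heuristic when shrinking the batch is the linear scaling rule, i.e. scale the learning rate proportionally with the batch size. A sketch (the base batch size and learning rate below are assumed placeholder values, not this repo's config):

```python
def scaled_lr(base_lr, base_batch_size, new_batch_size):
    # Linear scaling rule: halving the batch size halves the learning rate.
    return base_lr * new_batch_size / base_batch_size

# e.g. if a config were tuned for batch size 8 at lr 1e-4 (assumed values),
# training with batch size 4 would suggest lr = 5e-5.
```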

@JensenGao

How can I get the mAP with AP11? Could you provide a script?

@Owen-Liuyuxuan (Owner) commented Jan 14, 2022

@JensenGao
I did not make it a "parameterized" option, so you need to modify the code directly.

Base_Reference

Directly modify the function here to this, without changing the function name.

Then launch the test script from the root, or run the evaluator directly.
