
pytorch vs onnx performance and accuracy #110

Open
devloper13 opened this issue Mar 18, 2021 · 0 comments

devloper13 commented Mar 18, 2021

So I tested the PyTorch model with tools/test.py and the exported ONNX model with tools/test_exported.py. Here are the results:

pytorch:

Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.549
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.886
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.605
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.130
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.491
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.709
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.213
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.595
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.618
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.556
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.770

onnx:

Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.581
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.928
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.626
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.156
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.531
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.723
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.246
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.631
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.656
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.228
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.579
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.780

So ONNX is a bit better than PyTorch? Is that possible? I also noticed that ONNX inference was very slow, at about 0.3 tasks per second, while PyTorch ran at about 3.6 tasks per second. I need some help understanding this. Thanks!
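For context, here is a minimal sketch of how one can check which execution provider onnxruntime actually picks up; `model.onnx` is just a placeholder path, not a file from this repo:

```python
import onnxruntime as ort

# Which providers does the installed onnxruntime package ship with?
# The CPU-only package reports just ['CPUExecutionProvider'].
print(ort.get_available_providers())

# Which providers will a concrete session actually run on?
sess = ort.InferenceSession("model.onnx")  # placeholder path
print(sess.get_providers())
```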

Edit 1: I think I figured it out: this is because ONNX is running inference on the CPU. Correct? Is there a specific reason why the CPU build of onnxruntime is installed rather than the GPU one?
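For reference, a minimal sketch of requesting the CUDA provider explicitly once onnxruntime-gpu is installed (again, `model.onnx` is a placeholder; onnxruntime falls back to the CPU provider if CUDA is unavailable):

```python
import onnxruntime as ort

# Ask for the CUDA provider first; onnxruntime silently falls back to the
# CPU provider if the GPU build or a visible GPU is not available.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # should start with CUDAExecutionProvider on a GPU machine
```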

Edit 2: Changed requirements/runtime to onnxruntime-gpu and now the speed is fine. So I'm still at a loss as to why ONNX gives better results.
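To dig into the accuracy gap, a rough, self-contained sketch of comparing raw PyTorch outputs against the exported ONNX graph on the same input; the tiny stand-in network, file name, and shapes are assumptions, not the actual detector from this repo:

```python
import numpy as np
import onnxruntime as ort
import torch

# Stand-in network; substitute the actual detector being evaluated.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
x = torch.randn(1, 3, 64, 64)

# Export and run the same input through both backends.
torch.onnx.export(model, x, "check.onnx", input_names=["input"], output_names=["output"])
with torch.no_grad():
    torch_out = model(x).numpy()

sess = ort.InferenceSession("check.onnx", providers=["CPUExecutionProvider"])
ort_out = sess.run(None, {"input": x.numpy()})[0]

# Tiny numerical drift is normal; a large gap points at differences in
# pre/post-processing between tools/test.py and tools/test_exported.py
# rather than at the exported weights themselves.
print("max abs diff:", np.abs(torch_out - ort_out).max())
np.testing.assert_allclose(torch_out, ort_out, rtol=1e-3, atol=1e-5)
```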
