So I tested the PyTorch model using tools/test.py and the exported ONNX model with tools/test_exported.py. Here are the results:
pytorch:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.549
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.886
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.605
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.130
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.491
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.709
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.213
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.595
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.618
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.556
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.770
onnx:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.581
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.928
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.626
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.156
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.531
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.723
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.246
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.631
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.656
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.228
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.579
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.780
So the ONNX model is a bit better than the PyTorch one? Is that possible? Also, I noticed the ONNX inference speed was very slow, at about 0.3 tasks per second, versus about 3.6 tasks per second for PyTorch. I need some help understanding this. Thanks!
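One quick way to check the speed issue is to see which execution provider onnxruntime actually loaded; a minimal sketch (the model path below is a placeholder, not the repo's actual file name):

```python
import onnxruntime as ort

# Providers this onnxruntime build supports.
# A CPU-only install lists only 'CPUExecutionProvider'.
print(ort.get_available_providers())

# Ask for CUDA first, with CPU as a fallback.
sess = ort.InferenceSession(
    "exported_model.onnx",  # placeholder path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Providers the session actually ended up with; if CUDA is not
# available it silently falls back to the CPU provider.
print(sess.get_providers())
```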
Edit1: I think I figured it out: it's because ONNX is running inference on the CPU. Correct? Is there a specific reason the CPU build of onnxruntime is installed rather than onnxruntime-gpu?
Edit2: Changed requirements/runtime to onnxruntime-gpu and now the speed is fine. So I'm still at a loss as to why ONNX gives better results.
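One way to narrow down where the accuracy gap comes from is to compare the raw network outputs of the PyTorch module and the exported ONNX graph on the same input, before any post-processing. Below is a self-contained sketch of that check using a stand-in model; in practice the real model would be built and loaded the same way tools/test.py does it:

```python
import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn

# Stand-in network just to demonstrate the check; substitute the real model.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).eval()

# Export it the same way the repo exports the real model (opset is an assumption).
torch.onnx.export(model, torch.randn(1, 3, 64, 64), "check.onnx",
                  input_names=["input"], output_names=["output"], opset_version=11)

sess = ort.InferenceSession("check.onnx", providers=["CPUExecutionProvider"])

# Same preprocessed input fed to both backends.
x = np.random.randn(1, 3, 64, 64).astype(np.float32)
with torch.no_grad():
    torch_out = model(torch.from_numpy(x)).numpy()
onnx_out = sess.run(None, {"input": x})[0]

# Raw tensors should agree to within float tolerance.
print(np.max(np.abs(torch_out - onnx_out)))
np.testing.assert_allclose(torch_out, onnx_out, rtol=1e-3, atol=1e-5)
```

If the raw outputs match, the AP difference most likely comes from post-processing, e.g. a different NMS implementation or different score/IoU thresholds in the exported test path, rather than from the network weights themselves.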