Hi, thanks for your code. I tested 3D-BoNet with my own data created from a Dynamic Vision Sensor and get some strange test results, as you can see in the image. The bounding boxes are the predicted instances, and the colors are the different semantic classes. Is it possible, as with instance 22, for points from different semantic classes to be grouped into a single instance? Moreover, I don't understand why the net predicted only one instance for 22 even though there is a large gap between the two point clusters. Do you know why this could happen? Did I miss something? A small check I used to confirm the mixed-class instances is sketched below.

Thanks for your help!
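In case it helps to reproduce the first issue, here is a minimal sketch of the per-instance check, assuming per-point arrays of predicted instance IDs and semantic labels (the file names `pred_instance_ids.npy` and `pred_sem_labels.npy` are placeholders for my own export, not part of the 3D-BoNet code):

```python
import numpy as np

# Hypothetical per-point outputs saved from the test script:
# pred_instance_ids[i] = predicted instance ID of point i, shape (N,)
# pred_sem_labels[i]   = predicted semantic class of point i, shape (N,)
pred_instance_ids = np.load("pred_instance_ids.npy")
pred_sem_labels = np.load("pred_sem_labels.npy")

# Report every predicted instance whose points carry more than one semantic class
for inst_id in np.unique(pred_instance_ids):
    mask = pred_instance_ids == inst_id
    classes, counts = np.unique(pred_sem_labels[mask], return_counts=True)
    if len(classes) > 1:
        print(f"instance {inst_id}: mixed classes "
              f"{dict(zip(classes.tolist(), counts.tolist()))}")
```

Instance 22 shows up in this output with points from two different semantic classes.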