Hi,
Thank you for sharing your artifact. This is excellent work. I have a question about how your depth image is implemented. When I run the original scenario, obstacles that are further away appear white, while obstacles close to the drone appear black. I also noticed that both the floor and the sky are rendered as black (and must therefore have similar values). You can see this below:
What confuses me is that the floor, the sky, and obstacles very close to the drone are all rendered as black (so they must all have similar values in the depth image). If that is the case, how is the drone able to avoid obstacles that are close to it, and avoid flying into the ground?
The reason I am asking is that I am trying to use my own depth image. In the example here you can see a Unity simulation in which the drone is highlighted by the red circle. Above that you can see both the image from the drone's camera and the generated depth image. The depth image is generated with the MiDaS monocular depth estimation algorithm.
You will notice that both the obstacle and the floor in my example are rendered as white in the depth image (as they are closer to the drone than the sky). This is the opposite of your approach, in which closer obstacles are black. In my example the drone flies into the obstacles, which can be explained by my depth image being inverted relative to yours. However, before I try inverting my depth image, I was hoping you could provide a few details on how your depth image is constructed so that I can replicate it, i.e. why the floor and the sky end up with the same depth as obstacles that are close to the drone.
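For reference, this is roughly how I produce the depth image from the camera frame (a minimal sketch following the standard torch.hub usage for MiDaS; the file name and the small model variant are just placeholders for my actual setup):

```python
import cv2
import torch

# Load MiDaS and its preprocessing transforms via torch.hub
# (the small model is used here as a placeholder for whichever variant you run).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

# "frame.png" is a placeholder for the image captured from the drone's camera.
img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
input_batch = transform(img)

with torch.no_grad():
    prediction = midas(input_batch)
    # Resize the prediction back to the original image resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

# Note: MiDaS outputs *relative inverse depth*, so larger values mean closer.
depth_map = prediction.cpu().numpy()
```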
Thanks again for this awesome work!
I have made some progress on this. Below I have attached a video in which I inspect different depths in the image; the depth value under the cursor is shown in the bottom right. Objects that are closer have smaller depth values, as expected, so the closer an object is to the drone, the darker it appears.
their_depth.mp4
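The inspection in the video is just a small OpenCV script along these lines (a rough sketch, assuming the depth map is available as a float NumPy array; the file name and window title are illustrative):

```python
import cv2
import numpy as np

# Placeholder path; load your own depth map (e.g. a float32 .npy or 16-bit PNG).
depth = np.load("depth_frame.npy").astype(np.float32)

# Normalise only for display; keep the raw values for inspection.
display = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def on_mouse(event, x, y, flags, param):
    # Print the raw depth value under the cursor.
    if event == cv2.EVENT_MOUSEMOVE:
        print(f"depth at ({x}, {y}) = {depth[y, x]:.3f}")

cv2.namedWindow("depth")
cv2.setMouseCallback("depth", on_mouse)
cv2.imshow("depth", display)
cv2.waitKey(0)
cv2.destroyAllWindows()
```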
Mine was the opposite, which is incorrect. It turns out MiDaS produces inverse depth maps, which explains why in the video below the part of the floor farthest away has the smallest value.
mydepth.mp4
So I realize that my depth map is inverted and needs to be fixed. However, I am still not sure how your approach handles the floor, since in your simulation demo the floor is rendered as 0.
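In case it helps anyone else hitting the same issue, this is the kind of conversion I am planning to apply to the MiDaS output before using it (a rough sketch, assuming the output is a float NumPy array; the function name is mine, and since MiDaS depth is relative rather than metric this only fixes the ordering, not the absolute scale):

```python
import numpy as np

def midas_inverse_depth_to_depth(inverse_depth: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Turn a MiDaS relative inverse-depth map (larger = closer) into a
    depth-style map (smaller = closer), normalised to [0, 1]."""
    inv = inverse_depth.astype(np.float32)
    # Shift so every value is strictly positive before taking the reciprocal.
    inv = inv - inv.min() + eps
    depth = 1.0 / inv  # now larger = farther, matching a conventional depth image
    # Normalise to [0, 1] so close obstacles come out dark and far ones bright.
    return (depth - depth.min()) / (depth.max() - depth.min() + eps)
```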