
How is your depth image implemented? #53

Open
hildebrandt-carl opened this issue Mar 17, 2023 · 1 comment

Comments

@hildebrandt-carl

Hi,

Thank you for sharing your artifact. This is excellent work. I have a question about how your depth image is implemented. I noticed that when I am using it in the original scenario, obstacles that are further away are white, while obstacles close to the drone become black. I also noticed that both the floor and the sky are rendered as black (and therefore must have the same values). You can see this below:

original

What confuses me is that the floor, the sky, and obstacles very close to the drone are all rendered as black (they must all have similar values in the depth image). If this is true, how is the drone able to avoid obstacles close to it, and avoid flying into the ground?

The reason I am asking is that I am trying to use my own depth image. For example, here you can see a Unity simulation where the drone is highlighted by the red circle. Above that you can see both the image from the drone's camera and the generated depth image. The depth image is generated using the MiDaS monocular depth estimation algorithm.

mine

You will notice that both the obstacle and the floor in my example are registered as white in the depth image (as they are closer to the drone than the sky). This is the opposite of your approach, which renders closer obstacles as black. In my example, the drone flies into the obstacles (which can be explained by my depth image being the inverse of yours). However, before I go and invert my depth image, I was hoping you could provide a few details on how your depth image is constructed so that I can replicate it, i.e. why the floor and the sky have the same depth as obstacles that are close to the drone, etc.

Thanks again for this awesome work!

@hildebrandt-carl
Author

hildebrandt-carl commented Mar 20, 2023

I have made some progress on this. Below I have attached a video where I inspect different depths in the image. The depth is shown in the bottom right. Objects that are closer have smaller depth values (as we would expect), and are rendered darker.

their_depth.mp4

Mine was the opposite (which is incorrect). It turns out MiDaS produces inverse depth maps, which explains why, in the video below, the floor farthest away has the smallest value.

mydepth.mp4

So I realize that my depth map is inverted and needs to be fixed. However, I am still not sure how your approach handles the floor (in your simulation demo, the floor is rendered as 0).
