Visualizing saved reconstruction. #75
Can we please get a guide on this?
I do not know how familiar you are with Open3D, but you can use RGBD integration to construct the scene. You can check this for more info.
@bkhanal-11 Hi. I wrote the following code.

```python
import numpy as np
import open3d as o3d


class CameraPose:
    def __init__(self, meta, mat):
        self.metadata = meta
        self.pose = mat

    def __str__(self):
        return (
            "Metadata : "
            + " ".join(map(str, self.metadata))
            + "\n"
            + "Pose : "
            + "\n"
            + np.array_str(self.pose)
        )


def read_trajectory(filename):
    """Read a trajectory log: one metadata line, then a 4x4 pose per frame."""
    traj = []
    with open(filename, "r", encoding="utf-8") as f:
        metastr = f.readline()
        while metastr:
            metadata = list(map(int, metastr.split()))
            mat = np.zeros(shape=(4, 4))
            for i in range(4):
                matstr = f.readline()
                mat[i, :] = np.fromstring(matstr, dtype=float, sep=" \t")
            traj.append(CameraPose(metadata, mat))
            metastr = f.readline()
    return traj


volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=4.0 / 512.0,
    sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
)

intrinsics = np.load("./results/intrinsics.npy")
fx = intrinsics[0][0]
fy = intrinsics[0][1]
cx = intrinsics[0][2]
cy = intrinsics[0][3]

color_images = np.load("./results/images.npy")
depth_images = np.load("./results/disps.npy")
camera_poses = read_trajectory("./results/log.txt")

camera_intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=576, height=336, fx=fx, fy=fy, cx=cx, cy=cy
)

for i in range(len(color_images)):
    print(f"Integrate {i}-th image into the volume.")
    color = color_images[i]
    color = np.ascontiguousarray(color.transpose(1, 2, 0))  # CHW -> HWC
    color = o3d.geometry.Image(color.astype(np.uint8))
    depth = depth_images[i]
    depth = np.ascontiguousarray(depth.transpose(0, 1))
    # disps.npy is used directly as depth here (see the follow-up comments)
    depth = o3d.geometry.Image(depth * 255)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_trunc=4.0, convert_rgb_to_intensity=False
    )
    volume.integrate(rgbd, camera_intrinsic, np.linalg.inv(camera_poses[i].pose))

print("Extract a triangle mesh from the volume and visualize it.")
mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
o3d.visualization.draw_geometries([mesh])
```

However, the resulting map is not correct. I am missing something but not sure what it is. Here is the result.
@bkhanal-11 This is what my log.txt looks like.
@pytholic Hi, I also tried to write a script for visualization and ended up with a similar result to yours. It may be due to either the intrinsics or the disparity (depth). I am also stuck on the same problem now.
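On that disparity point: if, as in DROID-SLAM's parameterization, disps.npy stores inverse depths, then a plausible fix is to convert them with depth = 1 / disp instead of scaling by 255, and to pass depth_scale=1.0 so Open3D does not rescale the values. A minimal sketch, assuming that inverse-depth convention and the file layout from the script above:

```python
import numpy as np
import open3d as o3d

disps = np.load("./results/disps.npy")    # (N, H, W), assumed inverse depths
colors = np.load("./results/images.npy")  # (N, 3, H, W) color images

i = 0  # frame index, as in the integration loop above
depth_np = np.where(disps[i] > 1e-6, 1.0 / disps[i], 0.0)  # invert, avoid div by zero

color = o3d.geometry.Image(
    np.ascontiguousarray(colors[i].transpose(1, 2, 0)).astype(np.uint8)
)
depth = o3d.geometry.Image(depth_np.astype(np.float32))

# depth_scale=1.0: the values are already depths, not millimeter counts
# (Open3D's default depth_scale of 1000.0 would shrink them 1000x).
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1.0, depth_trunc=4.0, convert_rgb_to_intensity=False
)
```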
@bkhanal-11 @pytholic Hi. Sorry, just wanted to ask if you ever figured out why this was happening? I am also running this on a remote server and have to use --disable_vis, but I am also getting a mess after trying the reconstruction.
@kelvinyankey6 My workaround was to save the points and poses inside the droid_visualization function.
Ah @pytholic, I did the same to save the points. My environment doesn't have access to an X server, but I didn't know I could do the same for the camera poses. This is really helpful. Thank you very much for getting back to me.
Hello, thanks for your code! I can't save the .ply that way, because running this on a remote server means the code never enters the droid_visualization function. I had a similar problem when I used Open3D to visualize the saved reconstruction on the sfm_bench dataset. My approach is to use the poses, intrinsics, and depths to directly recover the 3D coordinates of the points, as in the sketch below.
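A minimal sketch of that back-projection, assuming pinhole intrinsics, a per-frame depth map, and a 4x4 camera-to-world pose (the file names and fx/fy values here are hypothetical placeholders; cx/cy match the 576x336 frames used above):

```python
import numpy as np
import open3d as o3d


def backproject(depth, fx, fy, cx, cy, cam_to_world):
    """Lift a depth map to world-frame 3D points with a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    valid = z > 0
    x = (u.reshape(-1) - cx) / fx * z
    y = (v.reshape(-1) - cy) / fy * z
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)[valid]  # homogeneous
    return (cam_to_world @ pts_cam.T).T[:, :3]


depth = np.load("./results/depth_0.npy")  # hypothetical (H, W) depth map
pose = np.load("./results/pose_0.npy")    # hypothetical 4x4 camera-to-world matrix
pts = backproject(depth, fx=320.0, fy=320.0, cx=288.0, cy=168.0, cam_to_world=pose)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
```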
Use Open3D headless rendering and it works.
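For reference, once Open3D has been rebuilt with headless rendering enabled (see the tutorial linked further down), the usual Visualizer calls run without an X server. A minimal sketch with a hypothetical input mesh file:

```python
import open3d as o3d

# Hypothetical mesh saved earlier in the pipeline.
mesh = o3d.io.read_triangle_mesh("scene.ply")

vis = o3d.visualization.Visualizer()
vis.create_window(visible=False)  # no on-screen window needed
vis.add_geometry(mesh)
vis.poll_events()
vis.update_renderer()
vis.capture_screen_image("render.png")  # off-screen screenshot
vis.destroy_window()
```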
@pytholic Forgot to respond to this, but I used your method. It worked. Thanks!
@kelvinyankey6 cool!
Hi, thanks for sharing your vis_headless.py code. I can't do visualization on my device, so I would like to ask how to use your code. Should I download the ml_cv_scripts_guides/droid_slam/ folder, place the files under the original DROID-SLAM project, and then just run python demo.py --imagedir=data/exampledata --calib=calib/exampledata.txt --reconstruction_path=exampledata? Or is there any other special step to use your vis_headless.py to get the .ply?
Thank you so much for sharing your code. I was able to get the point cloud .ply file I wanted.
Hi, thanks for your work and explanations. I'm running the code in a Docker container and the headless script is not working there; from the error I get, it seems it cannot create the Open3D visualizer. Is the code supposed to be able to run in Docker, or am I missing something? Thanks.
@Zunhammer Not sure. I think I have used it with Docker in the past, but that was my own custom image. Here is my Docker setup for DROID-SLAM: I created a PR, but no one is maintaining the repo. You can check this:
@pytholic @Zunhammer Hi, I am also using the Docker image provided by @pytholic and trying to generate the .ply files.
First, thanks to @pytholic for the response and the hints. I tried with my own Docker image but had no luck; I didn't have time to test the one from @pytholic yet. I'll probably do that after Christmas. So, in short, I don't have a solution yet.
@Zunhammer I solved this issue already, so maybe it will be helpful to you too. @pytholic's Dockerfile works great, but to use it in a headless environment and still be able to save .ply files, you also need headless Open3D. After building the Docker image with @pytholic's Dockerfile, remove Open3D from the container, then recompile and install headless Open3D as described at http://www.open3d.org/docs/latest/tutorial/Advanced/headless_rendering.html. Then use the method from #75 (comment), and it works out well.
As mentioned by @HtutLynn, if you are using my Dockerfile in a headless environment, you also need the headless Open3D build. If you want to use it without headless rendering, then you might have to pick some parts from my script (the parts that save the points and poses); see the sketch of the saving step below.
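On that saving step: writing a point cloud to .ply needs no display at all, only o3d.io, so that part works on any server. A minimal sketch, with hypothetical input arrays of accumulated world-frame points and colors:

```python
import numpy as np
import open3d as o3d

# Hypothetical arrays gathered during reconstruction.
points = np.load("points.npy")  # (M, 3) float, world-frame coordinates
colors = np.load("colors.npy")  # (M, 3) float, RGB in [0, 1]

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_point_cloud("reconstruction.ply", pcd)  # no X server required
```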
@pytholic I am trying to use your implementation of the visualization code, but demo.py does not generate any log.txt camera trajectory for the poses. Do you know how to get the log.txt?
@jmunshi1990 Hi. It has been a long time since I used this. I don't remember how I was saving the final point cloud, but maybe you can take a look at this part of the official demo for the camera poses: https://github.com/pytholic/DROID-SLAM/blob/f68a25fa230730e3eaaaa6ab03a818ca5da6954f/demo.py#L59
@pytholic I am using the highlighted part of the demo script to generate the reconstructions, but the only problem is that I am not able to generate the log.txt file, which has the trajectory needed to reproduce the code you posted earlier. I am trying to find the read_trajectory input file among the saved files, but I am not finding it. Any suggestion on how to do that?
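For anyone stuck on the same gap: the log.txt parsed by read_trajectory above is just a trajectory log (one metadata line of three integers, then a 4x4 pose matrix per frame), so you can write one yourself from saved poses. A minimal sketch, assuming a hypothetical poses.npy of shape (N, 7) in a [tx, ty, tz, qx, qy, qz, qw] convention; whether you must invert the matrices first depends on whether your poses are camera-to-world or world-to-camera (the integration code above inverts what it reads):

```python
import numpy as np
from scipy.spatial.transform import Rotation

poses = np.load("./results/poses.npy")  # hypothetical (N, 7): tx ty tz qx qy qz qw

with open("./results/log.txt", "w", encoding="utf-8") as f:
    for i, p in enumerate(poses):
        mat = np.eye(4)
        mat[:3, :3] = Rotation.from_quat(p[3:7]).as_matrix()  # scipy expects x, y, z, w
        mat[:3, 3] = p[:3]
        f.write(f"{i} {i} {i + 1}\n")  # metadata line: three integers per frame
        for row in mat:
            f.write(" ".join(f"{v:.8f}" for v in row) + "\n")
```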
Hi. Thank you for the amazing work.
I am running the code on a remote server in --disable_vis mode. I have the saved reconstruction results in .npy format. I am not sure how to reconstruct the scene in Open3D using these .npy files. Before that, I was using visualization mode and directly saving .ply files, and I was able to visualize them in Open3D. But I don't want to do that anymore and want to use the default .npy files.
Thank you~