Different FPS of Replica office0 #28
Hi @jinmaodaomaye2021, thanks for your interest in our work. Interesting question. Each scene has 2000 frames in total, so what you describe is that Co-SLAM can run on 100 frames (1/20 subsampling) but fails at 65 frames (1/30). Assuming the Replica video is simulated at 10-30 Hz, your setting corresponds to a 0.3-1 Hz video sequence. Several reasons might lead to losing track in this case:
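As a back-of-envelope check of the numbers above (a sketch: the 20 Hz capture rate is an assumption picked from the 10-30 Hz range mentioned; only the 2000-frame total comes from the thread):

```python
# Effective frame count and rate when subsampling the Replica sequence.
# TOTAL_FRAMES comes from the thread; CAPTURE_HZ = 20 is an assumed
# value within the 10-30 Hz range quoted above.
TOTAL_FRAMES = 2000
CAPTURE_HZ = 20

for stride in (2, 3, 5, 10, 20, 30):
    frames = TOTAL_FRAMES // stride          # frames left after subsampling
    effective_hz = CAPTURE_HZ / stride       # effective playback rate
    print(f"1/{stride} FPS: {frames} frames, ~{effective_hz:.2f} Hz")
```

This reproduces the counts discussed in the thread: 1/20 leaves 100 frames, and 1/30 leaves 66 frames.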
I hope these suggestions help. If you have further questions or need more assistance, feel free to ask :)
@HengyiWang Thanks for your reply. Please check the logs below, which explain why I chose 100-500 tracking/mapping iterations for low-FPS videos (1/20 FPS, tracking_iters=400, mapping_iters=100, tracking_samples=1024, mapping_samples=2048). I printed the losses every 10 tracking/mapping iterations. The tracking stage seems to take many steps to converge for large camera movements (constant_speed=True, camera pose learning rate = 1e-3). If the tracking/mapping stage takes > 100 iterations, the processing time becomes very high (3-8 seconds per frame on my machine if mapping_map_every=1, mapping_keyframe_every=1).
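The slow convergence at lr=1e-3 can be illustrated with a toy stand-in (a sketch, not Co-SLAM's actual tracker: a 1-D pose parameter under a quadratic loss, with the same learning rate and iteration count quoted above):

```python
# Toy 1-D stand-in for per-frame pose tracking: minimise a quadratic
# loss on a single pose parameter with plain gradient descent.
# lr=1e-3 and iters=400 mirror the settings quoted above; the 1-D pose
# and quadratic loss are simplifications for illustration only.
def track(initial_offset, lr=1e-3, iters=400, log_every=10):
    pose, target = 0.0, initial_offset       # true pose of the new frame
    history = []
    for i in range(iters):
        loss = (pose - target) ** 2
        if i % log_every == 0:               # log every 10 iterations
            history.append((i, loss))
        pose -= lr * 2.0 * (pose - target)   # gradient step
    return pose, history

# A large inter-frame motion (low FPS) leaves a big residual even after
# 400 steps, since each step shrinks the error by only a factor 0.998.
pose, history = track(initial_offset=1.0)
print(f"final error: {abs(pose - 1.0):.3f}")  # ~0.449
```

The residual after 400 steps is `0.998**400 ≈ 0.45` of the initial offset, which is one way to see why low-FPS (large-motion) frames need far more iterations, or a larger learning rate, to converge.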
@jinmaodaomaye2021, thanks for providing the log. I assume you have tried different learning rates for this. Then, can you try 1 to see if that helps? A simple way is to select several frames and manually set a sample region that ensures overlap.
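The suggestion above might be sketched like this (a minimal illustration, assuming a hypothetical image size and region; Co-SLAM's actual sampling code is not shown here):

```python
import random

# Sketch: restrict pixel sampling for tracking to a manually chosen
# image region believed to overlap with the previous frame. The image
# size and region bounds below are made-up values for illustration.
W, H = 640, 480
region = (160, 120, 480, 360)            # (x0, y0, x1, y1), assumed overlap

def sample_pixels(n, region):
    """Draw n pixel coordinates uniformly from the given region."""
    x0, y0, x1, y1 = region
    return [(random.randrange(x0, x1), random.randrange(y0, y1))
            for _ in range(n)]

pixels = sample_pixels(1024, region)     # 1024 = tracking_samples above
assert all(160 <= x < 480 and 120 <= y < 360 for x, y in pixels)
```

Constraining the samples this way guarantees the tracking loss is only evaluated on pixels visible in both frames, which removes one failure mode of large inter-frame motion.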
Hi,
Thanks for your great work. I am not sure whether the algorithm has been tested with large view movements.
Since the Replica office0 video has a high frame rate, I tested the algorithm at different FPS settings.
The algorithm works fine when the FPS is reduced to 1/2, 1/3, 1/5, 1/10, and 1/20 with modified parameters, but it fails to estimate poses at 1/30.
Visually inspecting the 1/30 FPS data shows there is still large view overlap between frames; however, the results are very bad (wrong poses).
In my experiment, I modified the following parameters at 1/30:
Any idea why the algorithm doesn't work for low-FPS videos?
What I observed is that once the pose from tracking is incorrect, it is hard to correct during mapping.