Support for monocular SLAM #23
I would also be interested in this topic.
alperaydemir & mees,
Thanks, but I've tried that previously. The test cases seg-faulted in some …

Alper
@AlperAydemir, have you implemented support for mono in ScaViSLAM? If so, could you share it, please?
I wanted to add this link about MonoSLAM: http://hanmekim.blogspot.co.uk/2012/10/scenelib2-monoslam-open-source-library.html
If there is a group of us who want to work on making a GPL'd mono-SLAM, I'm definitely interested in contributing. I'm particularly interested in developing a dense SLAM that can handle movement and deforming objects by adding time as a highly smoothed fourth dimension.

I recommend the workshop that Prof Andy Davison is co-presenting at IEEE ICRA 2013 in Germany this May. For those interested in how MonoSLAM has developed over the last several years, this is a nice overview: http://videolectures.net/bmvc2012_davison_scene_perception/

The TU Graz variational optic flow algorithm mentioned in the lecture above is based on Thomas Pock's thesis and is available here.
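For anyone who wants to experiment with that family of optic-flow methods without building the thesis code, OpenCV ships a TV-L1 implementation based on the same Zach/Pock/Bischof duality formulation. A minimal sketch, assuming the OpenCV 3.x API (the class moved to the optflow contrib module in OpenCV 4, and the frame filenames here are placeholders):

```cpp
// TV-L1 variational optical flow as shipped in OpenCV 3.x.
#include <opencv2/opencv.hpp>
#include <opencv2/video/tracking.hpp>

int main() {
  cv::Mat prev = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
  cv::Mat next = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);

  cv::Ptr<cv::DualTVL1OpticalFlow> tvl1 = cv::DualTVL1OpticalFlow::create();
  cv::Mat flow;                  // CV_32FC2: per-pixel (dx, dy)
  tvl1->calc(prev, next, flow);
  return 0;
}
```

The flow field it returns is dense (one vector per pixel), which is what makes this family attractive as input for dense reconstruction.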
I'd love to use MonoSLAM in my research; is there any timetable for release? Will it include any DTAM (Dense Tracking and Mapping in Real-Time) functionality?
Q:"is there any time table for release" Why do you want monoSLAM? - that would be like wearing a patch over one eye... Probably the quickest way to make an online mono-SLAM would be to read into the code of Robotvision and ScaViSLAM, then reimplement the core of the Robotvision algorithm as a module of ScaViSLAM. (I did have Robotvision working on Ubuntu 10.04, but now get compile errors on Ubuntu 12.10, and haven't had time to find the cause.) To learn how modern SLAM algorithms work read the g2o.pdf in the documentation of the g2o library, which explains what hypergraphs are and how to use them to implement SLAM and 'bundle adjustment'. Also look at the tutorials at http://www.informatik.uni-bremen.de/agebv/en/Research and the links to conference papers. You could also look at VSLAM (not the same as vslam on ROS) http://www.informatik.uni-bremen.de/agebv/en/pub/hertzbergicra11 and the various SLAM codes at http://openslam.org/ Re DTAM-like functionality If all you need is a real-time depth map with good resolution, then block matching stereo vision may suffice see http://scholar.lib.vt.edu/theses/available/etd-12232009-222118/unrestricted/Short_NJ_T_2009.pdf or https://code.google.com/p/tjpstereovision/ . |
Thanks for the help. Are you actively working on ScaViSLAM, or just on related research?

"Why do you want monoSLAM? That would be like wearing a patch over one eye..." I'm working with an existing monoscopic data set. I don't need real-time mapping, but I would want to be able to save out a dense point cloud with camera positions and, ideally, an RGB texture map.
I'm just learning myself. I have visited Prof Andy Davison's lab, but that was after Hauke had finished there. My current work is making a robotic hand with good dynamics and tactile sensitivity. Later I need to do tactile SLAM with the hand, fused with visual SLAM.
I understand that Hauke just used the PTAM tracker as a frontend to implement monocular SLAM in ScaViSLAM (see the sketch below).
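For readers unfamiliar with that split: the tracker (frontend) does per-frame data association and pose prediction, while the graph optimizer (backend) refines keyframe poses and map points. A hypothetical interface sketch; every name here is made up for illustration and is not ScaViSLAM's or PTAM's actual API:

```cpp
// Hypothetical tracker-frontend / graph-backend boundary.
#include <cstdint>
#include <vector>
#include <Eigen/Core>
#include <Eigen/Geometry>

// One 2D observation of a 3D map point.
struct Observation {
  int point_id;             // which map point was seen
  Eigen::Vector2d pixel;    // where it was measured in the image
};

// What the tracker hands over when a frame is promoted to a keyframe.
struct Keyframe {
  Eigen::Isometry3d pose_guess;    // initializes a pose vertex in the graph
  std::vector<Observation> obs;    // become projection edges in the graph
};

// A PTAM-style tracker hidden behind a minimal frontend interface.
class MonoFrontend {
 public:
  virtual ~MonoFrontend() = default;
  // Returns true and fills kf when a new keyframe should be added
  // to the backend's optimization graph.
  virtual bool track(const std::uint8_t* gray, int width, int height,
                     Keyframe* kf) = 0;
};
```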
Thanks NH89, the links and your comments are very valuable. I have also been working on mono-SLAM and stereo-SLAM. Mono-SLAM because one of the robots I work on has a weight restriction, so having a single camera comes out of the requirements. I am using a webcam for it, which poses an additional interesting problem related to the rolling shutter.
I would strongly recommend the two tutorials given at the "Robust and Multimodal Inference in Factor Graphs" workshop at the IEEE ICRA 2013 conference. The papers are currently online at http://www.cc.gatech.edu/~dellaert/pubs/2013-05-10-ICRA-Tutorial.pdf

Why factor graphs are so useful: here are pages with links to two of the papers from the workshop (and other related work):
http://www.cc.gatech.edu/~dellaert/FrankDellaert/Frank_Dellaert/Frank_Dellaert.html
http://bnp.mit.edu/?page_id=12

Once you've worked through the tutorial papers, I reckon you'd have a good idea how to tackle the rolling shutter and other "robust and multimodal inference" problems. (Disclaimer: I'm not the author of these papers; those guys are much smarter than me :-)
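For readers who want to try the factor-graph formulation directly, Dellaert's group also distributes GTSAM, a factor-graph optimization library. A tiny sketch in the style of GTSAM's own odometry example; it assumes GTSAM's C++ API, whose header locations vary a little between releases:

```cpp
// A tiny factor graph in the style of GTSAM's OdometryExample.
#include <gtsam/geometry/Pose2.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

int main() {
  gtsam::NonlinearFactorGraph graph;

  // A prior on the first pose anchors the trajectory.
  auto priorNoise =
      gtsam::noiseModel::Diagonal::Sigmas(gtsam::Vector3(0.3, 0.3, 0.1));
  graph.add(gtsam::PriorFactor<gtsam::Pose2>(1, gtsam::Pose2(0, 0, 0),
                                             priorNoise));

  // Odometry factors between consecutive poses (2 m forward each).
  auto odomNoise =
      gtsam::noiseModel::Diagonal::Sigmas(gtsam::Vector3(0.2, 0.2, 0.1));
  graph.add(gtsam::BetweenFactor<gtsam::Pose2>(1, 2, gtsam::Pose2(2, 0, 0),
                                               odomNoise));
  graph.add(gtsam::BetweenFactor<gtsam::Pose2>(2, 3, gtsam::Pose2(2, 0, 0),
                                               odomNoise));

  // Deliberately perturbed initial estimates.
  gtsam::Values initial;
  initial.insert(1, gtsam::Pose2(0.5, 0.0, 0.2));
  initial.insert(2, gtsam::Pose2(2.3, 0.1, -0.2));
  initial.insert(3, gtsam::Pose2(4.1, 0.1, 0.1));

  gtsam::Values result =
      gtsam::LevenbergMarquardtOptimizer(graph, initial).optimize();
  result.print("optimized poses:\n");
  return 0;
}
```

One common way to handle rolling shutter in this framework is to treat the camera trajectory as continuous in time and interpolate a per-row pose, tying the pose variables together with motion factors.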
Thanks again NH89, I will go through it. I know there has been a shift towards the SLAM-based approach; for the stereo system we are working on the graph-based approach. I saw a few works by Prof. Siegwart and Kümmerle.
Hi,
Is there a roadmap for monocular SLAM (with a normal camera, not a Kinect)?
Or is it relatively easy to use ScaViSLAM for a monocular SLAM scenario, with minimal code changes?
If so, what parts essentially need to change? I can try to work on such a branch.