Visual Inertial Odometry
This video shows the Visual-Inertial Odometry I implemented from the ground up as part of beam_slam, using Locus's fuse repository along with libbeam, an in-lab library where many of my contributions lie. The goal of this VIO implementation was to provide a platform for further research and enhancement (learning-based feature tracking, MLPnP, semantic segmentation, etc.). It was also implemented with coupling to LiDAR odometry in mind, along with coupled visual-LiDAR place recognition for robust and accurate loop closures.
vio.mp4
Visual-Lidar Map Alignment
As part of my thesis work, I have implemented an offline tool to automatically align maps generated from SLAM. Because there is no real-time constraint, this approach to alignment allows for more robust, decoupled approaches to visual or LiDAR place recognition. The purpose of this work is to enable repeated inspections of the same area without being confined to one of the few multi-session SLAM packages (namely ORB-SLAM3, RTAB-Map, maplab, and lt-mapper). See my repository vl_traj_alignment for the implementation.
| Before Alignment | After Alignment |
|---|---|
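A common building block for this kind of offline map alignment is estimating a rigid or similarity transform between two corresponding trajectories. The sketch below is not the vl_traj_alignment implementation; it is a minimal NumPy version of the classic Umeyama (1991) closed-form solution, assuming the two trajectories have already been associated point-to-point (e.g. by timestamp or place recognition matches):

```python
import numpy as np

def umeyama_align(src, dst):
    """Least-squares similarity transform (s, R, t) mapping the
    source trajectory onto the destination: dst ≈ s * R @ src + t.

    src, dst: (N, 3) arrays of corresponding trajectory positions.
    """
    mu_src = src.mean(axis=0)
    mu_dst = dst.mean(axis=0)
    src_c = src - mu_src
    dst_c = dst - mu_dst

    # Cross-covariance between the centered point sets.
    cov = dst_c.T @ src_c / src.shape[0]
    U, D, Vt = np.linalg.svd(cov)

    # Reflection handling: force a proper rotation (det(R) = +1).
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / src.shape[0]
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

With the estimated transform, one map (and its point cloud) can be re-expressed in the other map's frame before any finer refinement such as ICP.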