ORB_SLAM2_CUDA

Modified version of ORB-SLAM2 with GPU enhancement and several ROS topics for NVIDIA Jetson TX1, TX2, Xavier, and Nano. Currently only the monocular camera is supported. Runs in real time.

Based on yunchih's ORB-SLAM2-GPU2016-final, a GPU-enhanced version of Raul Mur-Artal's ORB-SLAM2.

I struggled with a number of issues getting yunchih's work up and running on the TX1, so hopefully this can help anyone doing the same thing. The original work offers a ROS node to process live data, but it doesn't broadcast any messages, so I added ROS publishers for a few topics.

Implementation

  • Monocular (implemented)
  • Stereo (not yet supported)
  • RGB-D (not yet supported)

Published topics

  • tf
  • pose
  • pointcloud
  • current frame
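Once the node is running, you can sanity-check the publishers with standard rostopic commands (the exact topic names are defined by the node; check rostopic list):

rostopic list            # look for the tf, pose, point cloud and frame topics
rostopic echo -n 1 /tf   # print one tf message to confirm poses are being published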

Tested on:

  • Jetson TX1, TX2
  • Jetson Xavier
  • Jetson Nano

Installation on Jetson TX1, TX2, Xavier

Prerequisites

  • It might be better to start with a fresh flash without OpenCV4Tegra installed.
  • I recommend running from an SD card of at least 64GB, because the build (especially OpenCV) consumes a lot of disk space. You can follow JetsonHacks' post here to run the system from an SD card.

Installation

Build GPU enabled OpenCV3 ROS Kinetic

I followed this page for the ROS OpenCV replacement part. If you follow the commands on that page, remember to apply the patches afterwards; with OpenCV 3.2, which is what I use below, the patches are not needed.

First, check the CUDA compiler version:

nvcc --version 

If you get the error nvcc: command not found, check this page to solve it first before moving on.
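A common cause is that the CUDA toolkit's bin directory is missing from your PATH. A typical fix, assuming CUDA is installed under /usr/local/cuda, is:

export PATH=/usr/local/cuda/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}

Clone the OpenCV repo locally and check out version 3.2.0: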

sudo apt-get install git
cd
git clone https://github.com/opencv/opencv.git opencv
cd opencv
git checkout -b v3.2.0 3.2.0

Then install the necessary packages:

cd
sudo echo "deb-src http://packages.ros.org/ros/ubuntu xenial main" >> /etc/apt/sources.list.d/ros-latest.list
sudo apt-get update
sudo apt-get source ros-kinetic-opencv3
sudo apt-get install devscripts build-essential
cd ros-kinetic-opencv3-3.2.0
sudo apt-get build-dep ros-kinetic-opencv3-3.2.0
sudo dpkg-buildpackage -b -uc
cd ../
sudo mkdir /usr/src/deb
sudo cp ros-kinetic-opencv3_3.2.0-4xenial_arm64.deb /usr/src/deb/
cd /usr/src/deb/
sudo chmod a+wr /usr/src/deb
sudo apt-ftparchive packages . | gzip -c9 > Packages.gz
sudo apt-ftparchive sources . | gzip -c9 > Sources.gz
sudo chmod a+wr /etc/apt/sources.list.d/ros-latest.list
sudo echo "deb file:/usr/src/deb ./" >> /etc/apt/sources.list.d/ros-latest.list
sudo sed -i -e "1,2s/^/#/g" /etc/apt/sources.list.d/ros-latest.list
sudo apt-get update
sudo apt-get remove ros-kinetic-opencv3
sudo apt-get install ros-kinetic-opencv3

You can change ros-kinetic-desktop-full according to your needs:

sudo apt-get install ros-kinetic-desktop-full
sudo sed -i -e "s/#//g" /etc/apt/sources.list.d/ros-latest.list
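To confirm that apt picked up the locally built package rather than the upstream one, you can inspect its install source with standard apt tools:

apt-cache policy ros-kinetic-opencv3   # should list file:/usr/src/deb as the origin
dpkg -l | grep ros-kinetic-opencv3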

Build OpenCV with CUDA for Tegra

This one is pretty straightforward: just follow the instructions at this link, but change the version to 3.2.0. Below are all the commands I used; refer to the link above if you need clarification:

# The following command can be pasted into a shell in order to install the required packages:
sudo apt-get install libglew-dev libtiff5-dev zlib1g-dev libjpeg-dev libpng12-dev libjasper-dev libavcodec-dev libavformat-dev libavutil-dev libpostproc-dev libswscale-dev libeigen3-dev libtbb-dev libgtk2.0-dev pkg-config
# Appropriate packages for Python2 and Python3
sudo apt-get install python-dev python-numpy python-py python-pytest
# Optionally:
sudo apt-get install python3-dev python3-numpy python3-py python3-pytest
# If you want to use OpenCV Extra:
cd
git clone https://github.com/opencv/opencv_extra.git
cd opencv_extra
git checkout -b v3.2.0 3.2.0
# Preparing the build area
cd
mkdir build
cd build
cmake \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX=/usr \
    -DBUILD_PNG=OFF \
    -DBUILD_TIFF=OFF \
    -DBUILD_TBB=OFF \
    -DBUILD_JPEG=OFF \
    -DBUILD_JASPER=OFF \
    -DBUILD_ZLIB=OFF \
    -DBUILD_EXAMPLES=ON \
    -DBUILD_opencv_java=OFF \
    -DBUILD_opencv_python2=ON \
    -DBUILD_opencv_python3=OFF \
    -DENABLE_PRECOMPILED_HEADERS=OFF \
    -DWITH_OPENCL=OFF \
    -DWITH_OPENMP=OFF \
    -DWITH_FFMPEG=ON \
    -DWITH_GSTREAMER=OFF \
    -DWITH_GSTREAMER_0_10=OFF \
    -DWITH_CUDA=ON \
    -DWITH_GTK=ON \
    -DWITH_VTK=OFF \
    -DWITH_TBB=ON \
    -DWITH_1394=OFF \
    -DWITH_OPENEXR=OFF \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-8.0 \
    -DCUDA_ARCH_BIN=5.3 \
    -DCUDA_ARCH_PTX="" \
    -DINSTALL_C_EXAMPLES=ON \
    -DINSTALL_TESTS=OFF \
    -DOPENCV_TEST_DATA_PATH=../opencv_extra/testdata \
    ../opencv

Check the link to find the correct cmake command for your platform (there are three sets of them); the command above targets the NVIDIA Jetson TX1 (CUDA_ARCH_BIN=5.3, which also matches the Nano; use 6.2 for the TX2 and 7.2 for the Xavier). Then build and install:

make -j4
sudo make install
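As a quick sanity check that the installed OpenCV was built with CUDA, you can grep the build information via the Python bindings (built above with -DBUILD_opencv_python2=ON):

python -c "import cv2; print(cv2.getBuildInformation())" | grep -i cuda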

Install dependencies for ORB-SLAM2

Pangolin

Download and install instructions can be found at: https://github.com/stevenlovegrove/Pangolin.

BLAS and LAPACK

sudo apt-get install libblas-dev
sudo apt-get install liblapack-dev

Eigen3

Download and install instructions can be found at: http://eigen.tuxfamily.org. At least version 3.1.0 is required.

PCL for ROS

sudo apt-get install libopenni2-dev
sudo apt-get install python-vtk
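Depending on your setup you may also need the ROS PCL bindings. Mirroring the Melodic step in the Nano section below, the Kinetic equivalent would be (an untested assumption on my part):

sudo apt-get install ros-kinetic-pcl-ros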

Building ORB_SLAM2_CUDA

Clone the repo and execute the build script for normal ORB-SLAM2:

git clone https://github.com/hoangthien94/ORB_SLAM2_CUDA.git ORB_SLAM2_CUDA
cd ORB_SLAM2_CUDA
chmod +x build.sh
./build.sh
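The ROS build below links against the core library, so it can help to first confirm that it was produced:

ls -l lib/libORB_SLAM2_CUDA.so   # should exist after build.sh completes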

Remember to run build.sh before building the ROS node, because lib/libORB_SLAM2_CUDA.so needs to be created first. To build the ROS node:

export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/path/to/ORB_SLAM2_CUDA/Examples/ROS
chmod +x build_ros.sh
./build_ros.sh
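Optionally, make the package path persistent across shells (adjust the path to your clone):

echo 'export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/path/to/ORB_SLAM2_CUDA/Examples/ROS' >> ~/.bashrc
source ~/.bashrc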

Done. If you have problems with the build, it sometimes helps to remove the previous build folders:

# in ORB_SLAM2_CUDA folder
sudo rm -rf build
sudo rm -rf Thirdparty/DBoW2/build
sudo rm -rf Thirdparty/g2o/build
./build.sh
sudo rm -rf Examples/ROS/ORB_SLAM2_CUDA/build
./build_ros.sh

When the build is complete, you can try the examples following the instructions in the ORB-SLAM2 repo.

Installation on Jetson Nano

  • Follow the official getting started guide to get a working Nano with the latest image.
  • Install OpenCV 4.1.0 on the Jetson Nano (the 3.3.0 version installed with the default image has some issues). Following the instructions on this page worked well for me.
  • Install dependencies:
    • Pangolin: follow the instructions here.
    • Eigen 3:
      sudo apt install libeigen3-dev
      
    • PCL for ROS:
      sudo apt-get install libopenni2-dev
      sudo apt-get install ros-melodic-pcl-ros
      
  • Clone the jetson_nano branch of the code (with a modified CMakeLists for OpenCV 4.1.0 and fixes for some compatibility issues):
git clone https://github.com/hoangthien94/ORB_SLAM2_CUDA.git ORB_SLAM2_CUDA
cd ORB_SLAM2_CUDA 
git checkout jetson_nano
  • Build as normal:
chmod +x build.sh
./build.sh
  • Additionally, to run ROS on the Jetson Nano, first follow this JetsonHacks blog post to install ROS on the Nano. Then build this repo with:
cd /path/to/ORB_SLAM2_CUDA/
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/path/to/ORB_SLAM2_CUDA/Examples/ROS
chmod +x build_ros.sh
./build_ros.sh

Run non-ROS examples in Monocular node

Please refer to the ORB-SLAM2 repo for detailed step-by-step instructions, with two modifications:

  • The executable is located in the build folder instead of Examples/Monocular.
  • For TUM and KITTI examples, add a fourth argument at the end of the command, corresponding to bUseViewer, which enables or disables the Viewer window.

Example run:

$ cd /path/to/ORB_SLAM2_CUDA
$ ./build/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml Data/rgbd_dataset_freiburg1_desk true
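If you don't have the sequence yet, it can be downloaded from the TUM RGB-D benchmark site, for example:

cd /path/to/ORB_SLAM2_CUDA
mkdir -p Data && cd Data
wget https://vision.in.tum.de/rgbd/dataset/freiburg1/rgbd_dataset_freiburg1_desk.tgz
tar -xzf rgbd_dataset_freiburg1_desk.tgz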

Run ROS launch file for Monocular node

This launch file is my own addition. It requires the PCL library to run.

First, you need the camera's images published on the topic camera/image_raw.
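Any camera driver works as long as the images end up on that topic. As a hypothetical example using the usb_cam package (assuming it is installed, and remapping its default topic):

rosrun usb_cam usb_cam_node usb_cam/image_raw:=camera/image_raw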

Change the vocabulary and camera settings files accordingly. Their paths are set in the launch file, located at ORB_SLAM2_CUDA/Examples/ROS/ORB_SLAM2_CUDA/launch/ros_mono.launch.

Then launch:

roslaunch /path/to/ORB_SLAM2_CUDA/Examples/ROS/ORB_SLAM2_CUDA/launch/ros_mono.launch

This runs the ROS publisher node; the topics will now be published on the ROS network. Run RViz for visualization:

rosrun rviz rviz

Note that the Viewer is disabled by default.

Full usage:

roslaunch /path/to/ORB_SLAM2_CUDA/Examples/ROS/ORB_SLAM2_CUDA/launch/ros_mono.launch [bUseViewer:=(false by default)] [bEnablePublishROSTopic:=(true by default)]

Example:

  • To launch this node with Viewer enabled:
roslaunch /path/to/ORB_SLAM2_CUDA/Examples/ROS/ORB_SLAM2_CUDA/launch/ros_mono.launch bUseViewer:=true
  • To launch this node without publishing any ROS topics:
roslaunch /path/to/ORB_SLAM2_CUDA/Examples/ROS/ORB_SLAM2_CUDA/launch/ros_mono.launch bEnablePublishROSTopic:=false

This is a work in progress, so expect new features and bug fixes in future versions. Happy coding.

References and Useful links for troubleshooting

  • https://devtalk.nvidia.com/default/topic/1001801/orb_slam2-cuda-enhanced-running-on-a-tx2/
  • raulmur/ORB_SLAM2#202
  • raulmur/ORB_SLAM2#205
  • raulmur/ORB_SLAM2#209
  • raulmur/ORB_SLAM2#144
  • raulmur/ORB_SLAM2#317
