- Installation guide
- Perception
- Navigation
- Manipulation
- Demo in Sim
- Demo in Real-World
- Other troubleshooting tips
- MOVO repository for the Kinova mobile manipulator. A remote PC and the simulator do not need `movo_network` or `movo_robot`.
- Setup Instructions: https://github.com/Kinovarobotics/kinova-movo/wiki/Setup-Instructions
- Note: voice control requires installation of pocketsphinx, e.g. `sudo apt-get install ros-kinetic-pocketsphinx`. Voice navigation requires installation of SpeechRecognition, e.g. `pip install SpeechRecognition`.
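  A quick way to sanity-check the SpeechRecognition install is the sketch below, which listens on the default microphone and prints what it hears. It assumes PyAudio and the pocketsphinx Python package are available; it is only a standalone check, not the MOVO voice-control pipeline.

  ```python
  # Minimal sanity check for the SpeechRecognition install (assumes PyAudio and
  # the pocketsphinx Python package are installed). This is not the MOVO
  # voice-control pipeline itself, just a standalone test.
  import speech_recognition as sr

  recognizer = sr.Recognizer()
  with sr.Microphone() as source:                  # default microphone
      recognizer.adjust_for_ambient_noise(source)
      print("Say something...")
      audio = recognizer.listen(source)

  try:
      # Offline recognition via pocketsphinx; recognize_google() also works online.
      print("Heard: " + recognizer.recognize_sphinx(audio))
  except sr.UnknownValueError:
      print("Could not understand the audio.")
  except sr.RequestError as e:
      print("Recognition error: {}".format(e))
  ```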
- Time synchronization issue between movo1 and movo2: if you get error messages regarding time synchronization, do the following:
- Connect via ssh to movo1.
- In a terminal of movo1, enter `ntpdate 10.66.171.1` (the IP address should be the address of movo2).
- Battery-related issue:
- Connect to the Ethernet port of MOVO with a remote computer.
- Power on the robot and quickly do the following:
- SSH into MOVO2.
- Run `rosrun movo_ros movo_faultlog_parser`. This will produce a directory called `SI_FAULTLOGS` in the `~/.ros/` directory.
- Follow the steps located in `movo_common/si_utils/src/si_utils/setup_movo_pc_migration`. Start from "Install third parties and additional libraries", but do it line by line manually instead of running the `setup_movo_pc_migration` script.
- In the above steps, make sure you use gcc-5. When doing `cmake`, do `env CXX=g++-5 cmake` instead.
- Making AssImp gives a gtest-related error that I was not able to solve yet. However, the package compiled successfully anyway, and not having AssImp seems okay for now.
- For libfreenect2, follow the instructions given by Kinova: https://github.com/Kinovarobotics/kinova-movo/wiki/1.-Setup-Instructions.
- If the Kinect does not work in Gazebo, make sure to set the Gazebo reference to `${prefix}_ir_frame` in the `kinect_one_sensor.urdf.xacro` file located in `/movo_common/movo_description/urdf/sensors/`.
- Paper describing the MOVO software, hardware, and architecture: Snoswell et al.
- We have two fiducial marker systems installed (AprilTag is preferred).
- AprilTag:
- The `tag36h11` family is currently used and is set in `settings.yaml` along with other AprilTag-related parameters. Set the tag ID and size you want to use in `tags.yaml`, e.g. `standalone_tags: [{id: 0, size: 0.095}]` for a tag that is a 9.5 x 9.5 cm square. Otherwise, tags are not going to be recognized.
- If you want to use a new tag with a different size and ID, see apriltag-imgs. You will find the tag images very small. To increase the size of a tag, open a new document in Google Docs, copy and paste the tag into it, rescale it, and save the document as a PDF file. You will then have a clean image of the tag at the size you want.
- In `continuous_detection.launch`, set `camera_name=/movo_camera/color`, `camera_frame=movo_camera_color_optical_frame`, and `image_topic=image_color_rect`. See the apriltag_ros GitHub repository and the AprilTag tutorials. A minimal detection subscriber is sketched after the Aruco item below.
- Aruco: aruco_ros GitHub repository.
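  Once `continuous_detection.launch` is running with the settings above, apriltag_ros publishes `AprilTagDetectionArray` messages (on `/tag_detections` by default). The sketch below prints each detected tag's ID and position; verify the topic name with `rostopic list` on your setup.

  ```python
  #!/usr/bin/env python
  # Minimal listener for apriltag_ros detections (default topic /tag_detections;
  # verify with `rostopic list` on your setup).
  import rospy
  from apriltag_ros.msg import AprilTagDetectionArray

  def callback(msg):
      for det in msg.detections:
          # Each detection carries a PoseWithCovarianceStamped; unwrap to the Pose.
          pose = det.pose.pose.pose
          rospy.loginfo("tag id(s) %s at x=%.3f y=%.3f z=%.3f",
                        det.id, pose.position.x, pose.position.y, pose.position.z)

  if __name__ == "__main__":
      rospy.init_node("apriltag_listener")
      rospy.Subscriber("/tag_detections", AprilTagDetectionArray, callback)
      rospy.spin()
  ```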
- When catkin-making AprilTag, you may see the error `This workspace contains non-catkin packages in it, and catkin cannot build a non-homogeneous workspace without isolation. Try the catkin_make_isolated command instead.` due to the non-catkin apriltag package installed together with it. Since we must stick with `catkin_make`, not `catkin build`, install the `apriltag` package first by running `make`, then `PREFIX=/opt/ros/kinetic sudo make install`.
- Then do `catkin_make` inside `movo_ws`.
- The ROS package for Mask R-CNN: mask_rcnn_ros.
- If you get `ImportError: libcudnn.so.6: cannot open shared object file`, then see this issue.
- If you get `IOError: Unable to open file (Truncated file: eof = 47251456, sblock->base_addr = 0, stored_eoa = 257557808)`, then download `mask_rcnn_coco.h5` from here and place the file in `~/.ros/`.
- We use Xavier as a GPU machine to handle perception for MOVO. Xavier runs Ubuntu 18.04, ROS Melodic, and Python 3.6. To test the `example.launch` provided by Mask R-CNN, follow these steps (a bare-bones result subscriber is sketched after them):
- Activate the virtualenv to switch to Python 3: `source Workspace/python-virtualenv/venv/bin/activate`
- Source the Mask R-CNN package: `source Workspace/mask_rcnn_ros/devel/setup.bash`
- Source vision_opencv to be able to use cv_bridge: `source Workspace/catkin_build_ws/install/setup.bash --extend`
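  The sketch below only confirms that the Mask R-CNN node is publishing results. The topic name and message type (`/mask_rcnn/result`, `mask_rcnn_ros/Result`) are assumptions about the mask_rcnn_ros package; confirm them with `rostopic list` and `rosmsg show` before relying on specific fields.

  ```python
  #!/usr/bin/env python
  # Bare-bones check that mask_rcnn_ros is publishing. The topic name and message
  # type below are assumptions; confirm with `rostopic list` and
  # `rosmsg show mask_rcnn_ros/Result` on your setup.
  import rospy
  from mask_rcnn_ros.msg import Result

  def callback(msg):
      # Field names vary between forks; inspect them with `rosmsg show` before
      # accessing boxes, masks or class names.
      rospy.loginfo("Received a Mask R-CNN result message")

  if __name__ == "__main__":
      rospy.init_node("mask_rcnn_listener")
      rospy.Subscriber("/mask_rcnn/result", Result, callback)
      rospy.spin()
  ```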
- Conversion between the depth image and the point cloud: depth_image_proc.
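  depth_image_proc does this conversion as a nodelet; for intuition, the underlying operation is just per-pixel back-projection with the pinhole camera model, as in the standalone numpy sketch below. The intrinsics in the example are made-up placeholders; use the values from the depth stream's `camera_info` topic.

  ```python
  # Back-project a depth image into a point cloud with the pinhole camera model,
  # i.e. the same math depth_image_proc performs. The intrinsics below are
  # placeholders; take fx, fy, cx, cy from the depth stream's camera_info topic.
  import numpy as np

  def depth_to_points(depth_m, fx, fy, cx, cy):
      """depth_m: HxW array of depths in meters. Returns an Nx3 array of XYZ points."""
      h, w = depth_m.shape
      u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
      z = depth_m
      x = (u - cx) * z / fx
      y = (v - cy) * z / fy
      points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
      return points[points[:, 2] > 0]                  # drop pixels with no depth

  # Example with a fake 4x4 depth image; real images come from e.g. /movo_camera/sd/image_depth.
  fake_depth = np.full((4, 4), 1.5)                    # 1.5 m everywhere
  print(depth_to_points(fake_depth, fx=365.0, fy=365.0, cx=2.0, cy=2.0))
  ```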
- Bandwidth usage per message:
- `/movo_camera/point_cloud/points`: >300 MB/msg.
- `/movo_camera/sd/image_depth`: ~8 MB/msg. The compressed depth image is ~4 MB/msg.
- `/movo_camera/hd/image_depth_rect/compressed`: ~19 MB/msg.
- `/movo_camera/qhd/image_depth_rect/compressed`: ~6.1 MB/msg. The compressed color image is ~1.8 MB/msg.
- Refer to the How-Tos given by Kinova: for the real robot and for simulation.
- Most relevant parameters are called in `move_base.launch` located in `/movo_demos/launch/nav/`. `eband_planner_params.yaml` contains local-planner-related parameters.
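  Beyond tuning those parameters, goals can also be sent to the navigation stack programmatically through the standard `move_base` action interface, as in the sketch below; the target pose in the `map` frame is a made-up example.

  ```python
  #!/usr/bin/env python
  # Send a single navigation goal to the standard move_base action server.
  # The target pose below is a made-up example in the map frame.
  import rospy
  import actionlib
  from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

  if __name__ == "__main__":
      rospy.init_node("send_nav_goal")
      client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
      client.wait_for_server()

      goal = MoveBaseGoal()
      goal.target_pose.header.frame_id = "map"
      goal.target_pose.header.stamp = rospy.Time.now()
      goal.target_pose.pose.position.x = 1.0           # 1 m from the map origin
      goal.target_pose.pose.orientation.w = 1.0        # facing along +x
      client.send_goal(goal)
      client.wait_for_result()
      rospy.loginfo("Result state: %d", client.get_state())
  ```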
- RTAB-Map.
- Installation guide: here. You can follow the `Build from source` instructions, and make sure you do the following when cloning rtabmap: `git clone -b kinetic-devel https://github.com/introlab/rtabmap.git rtabmap`.
- How to run:
- In a terminal, do `roslaunch movo_demos sim_rtabmap_slam.launch`.
- In another terminal, do `roslaunch movo_demos rtabmap_slam.launch rtabmap_args:="--delete_db_on_start"`. Use `rtabmap_args:="--delete_db_on_start"` if you want to start the map over. Otherwise, take this out.
- Useful arguments (attach after calling the launch file):
- `rtabmap_args:="--delete_db_on_start"`: this deletes the database saved in `~/.ros/rtabmap.db` at each start.
- If you face the error `make[2]: *** No rule to make target '/usr/lib/x86_64-linux-gnu/libfreenect.so', needed by '/home/yoon/movo_ws/devel/lib/rtabmap_ros/pointcloud_to_depthimage'. Stop.` when catkin-making, do `sudo apt-get install libfreenect-dev`.
- Currently, this feature is not available. We only use vanilla MoveIt for now.
- Grasping largely consists of the following three packages (a rough end-to-end sketch is given after these bullets):
- simple_grasping.
- moveit_python.
- grasping_msgs.
- Refer to the Gazebo tutorial provided by Fetch Robotics: here.
- Grasping poses are hardcoded in `createGraspSeries()` and `createGrasp()` in `shape_grasp_planner.cpp`.
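  For reference, the Fetch tutorial linked above wires the three packages together roughly as in the sketch below: simple_grasping's `basic_grasping_perception` node segments objects and plans grasps (returned as grasping_msgs), and moveit_python adds the object to the planning scene and executes the pick. The planning-group, gripper-group, and fixed-frame names used here are assumptions for MOVO; check your MoveIt config/SRDF, and make sure the perception node is running.

  ```python
  #!/usr/bin/env python
  # Rough sketch of how simple_grasping, grasping_msgs and moveit_python fit
  # together, adapted from the Fetch Gazebo tutorial. The planning-group, gripper
  # and fixed-frame names are assumptions for MOVO; check your MoveIt config/SRDF.
  import rospy
  import actionlib
  from grasping_msgs.msg import FindGraspableObjectsAction, FindGraspableObjectsGoal
  from moveit_python import PlanningSceneInterface, PickPlaceInterface

  if __name__ == "__main__":
      rospy.init_node("simple_grasping_sketch")

      # Ask simple_grasping's perception node for segmented objects plus grasps.
      find_client = actionlib.SimpleActionClient(
          "basic_grasping_perception/find_objects", FindGraspableObjectsAction)
      find_client.wait_for_server()
      goal = FindGraspableObjectsGoal()
      goal.plan_grasps = True
      find_client.send_goal(goal)
      find_client.wait_for_result(rospy.Duration(10.0))
      result = find_client.get_result()

      if not result or not result.objects:
          rospy.loginfo("No graspable objects found.")
      else:
          obj = result.objects[0]                        # take the first object
          obj.object.name = "target_object"
          # The object must exist in the planning scene under the same name.
          scene = PlanningSceneInterface("base_link")    # assumed fixed frame
          scene.addSolidPrimitive(obj.object.name,
                                  obj.object.primitives[0],
                                  obj.object.primitive_poses[0])
          pick = PickPlaceInterface("left_arm", "left_gripper")  # assumed groups
          success, _ = pick.pick_with_retry(obj.object.name, obj.grasps)
          rospy.loginfo("Pick %s", "succeeded" if success else "failed")
  ```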
- Detailed description of actionlib: here.
Demo-related files are located in `/movo_demos`.
MoveIt-based demo. As a simulator, only RViz is used, not Gazebo. Do the following to run the demo:
- `roslaunch movo_7dof_moveit_config demo.launch`
- `rosrun movo_demos sim_moveit_pick_place.py`
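As a starting point for writing your own script against `demo.launch`, the sketch below uses moveit_commander to move the end effector 5 cm up from its current pose. The planning-group name `left_arm` is an assumption for MOVO; check the available groups in the MoveIt RViz plugin or the SRDF.

```python
#!/usr/bin/env python
# Minimal moveit_commander sketch to run against demo.launch. The planning-group
# name "left_arm" is an assumption for MOVO; check the available groups in the
# MoveIt RViz plugin or the SRDF.
import sys
import rospy
import moveit_commander

if __name__ == "__main__":
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("moveit_sketch")

    group = moveit_commander.MoveGroupCommander("left_arm")
    pose = group.get_current_pose().pose
    pose.position.z += 0.05                # move the end effector 5 cm up
    group.set_pose_target(pose)
    group.go(wait=True)                    # plan and execute
    group.stop()
    group.clear_pose_targets()
    moveit_commander.roscpp_shutdown()
```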
TBD
Check whether the light under the Ethernet port on both arm bases is blinking. If not, one of the following could be the reason:
- If you can manually move the arm after powering on MOVO, this implies one or more fuses are blown. Check the status of the fuses with a multimeter and replace any blown fuse with a spare.
- If the arm is stiff after powering on MOVO and cannot be moved manually, this may imply that the arm is stuck in a bootloader state. To fix this, ask Kinova for the Base Bootloader Upgrade service bulletin (and see Section 11 in it) as well as the latest version of the firmware. During the bootloader upgrade, you may need to short-circuit two pins, which can be highly risky. Make sure you triple-check the right pins to short-circuit.
If none of the above works, ask Kinova for help.
Two fuse types: 028707.5PXCN (7.5 A, 32 V DC) and 0287002.PXCN (2 A, 32 V DC).