This project implements reinforcement learning, specifically Q-learning, to train a two-wheeled robot to navigate an environment with obstacles. ROS 2 Foxy and Gazebo are used.
The robot is the TurtleBot3 Burger model. Its goal is to learn the appropriate action for each state so that it avoids obstacles.
Once the robot is trained to avoid obstacles, the final Q-table is combined with a go-to-goal behavior into a hybrid go-to-goal/obstacle-avoidance algorithm that navigates the environment to reach the goal while avoiding obstacles.
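The core of the training described above is the tabular Q-learning update. As a minimal sketch (the state/action sizes, learning rate, and discount factor here are illustrative assumptions; the repository's own node defines its actual laser-scan discretization and action set):

```python
import numpy as np

# Hypothetical sizes: the real node derives states from discretized laser
# scans and actions from a small set of velocity commands.
N_STATES, N_ACTIONS = 64, 3
ALPHA, GAMMA = 0.1, 0.9  # assumed learning rate and discount factor

q_table = np.zeros((N_STATES, N_ACTIONS))

def q_update(state, action, reward, next_state):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + GAMMA * np.max(q_table[next_state])
    q_table[state, action] += ALPHA * (td_target - q_table[state, action])
```

Repeating this update over many episodes is what fills in the final Q-table that the demo later reads.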
To run the code, make sure the turtlebot3 package and all of its dependencies are installed. Follow the TurtleBot3 installation instructions, select 'foxy', and run the appropriate commands.
- From the home directory, clone the repository:

  ```bash
  git clone https://github.com/rosuman/Q-Learning.git
  cd Q-Learning
  ```
- Build the package and set up the environment:

  ```bash
  colcon build --symlink-install
  . install/local_setup.bash
  export TURTLEBOT3_MODEL=burger
  ```
- Launch the training environment in Gazebo:

  ```bash
  ros2 launch turtlebot3_gazebo training.launch.py
  ```
- Run the q_learning node. Open a new terminal and type:

  ```bash
  . install/local_setup.bash
  export TURTLEBOT3_MODEL=burger
  ros2 run turtlebot3_gazebo q_learning
  ```
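During training, a node like this typically balances exploration and exploitation with an epsilon-greedy policy. A minimal sketch, assuming a fixed exploration rate (the actual node's strategy and constants may differ):

```python
import random

EPSILON = 0.1  # assumed exploration rate

def choose_action(q_row, epsilon=EPSILON):
    """Epsilon-greedy selection over one row of the Q-table:
    explore with probability epsilon, otherwise take the best-known action."""
    if random.random() < epsilon:
        return random.randrange(len(q_row))
    return max(range(len(q_row)), key=lambda a: q_row[a])
```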
- After training, quit the q_learning node and the Gazebo training environment, then launch the demo environment and run the run_demo node.

  From the first terminal:

  ```bash
  ros2 launch turtlebot3_gazebo demo.launch.py
  ```

  From the second terminal:

  ```bash
  ros2 run turtlebot3_gazebo run_demo
  ```
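The demo's hybrid behavior can be sketched as follows: use the learned Q-table greedily when an obstacle is near, and otherwise steer toward the goal. The action set, thresholds, and controller gain below are illustrative assumptions, not the repository's exact values:

```python
import math

def hybrid_velocity(q_table, state, obstacle_near, robot_yaw, robot_xy, goal_xy,
                    actions=((0.15, 0.0), (0.15, 1.5), (0.15, -1.5))):
    """Return a (linear, angular) velocity command.

    obstacle_near -> greedy action from the trained Q-table (avoidance mode);
    otherwise     -> proportional go-to-goal heading controller.
    The action tuples (forward, turn left, turn right) are hypothetical.
    """
    if obstacle_near:
        best = max(range(len(actions)), key=lambda a: q_table[state][a])
        return actions[best]
    # Go-to-goal: turn toward the goal with an assumed gain of 1.0,
    # wrapping the heading error into [-pi, pi].
    heading = math.atan2(goal_xy[1] - robot_xy[1], goal_xy[0] - robot_xy[0])
    err = math.atan2(math.sin(heading - robot_yaw), math.cos(heading - robot_yaw))
    return (0.15, 1.0 * err)
```

Switching between the two modes based on laser-scan distance is what lets the demo reach the goal while still using the trained avoidance policy.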