Architecture Overview for XAMYAB
The Xamyab robot has three main components: Perception (sense), Brain (think), and Path Planning & Control (act). The general workflow is that the robot senses the environment with its sensors (cameras, LIDAR, etc.), processes the data to make decisions, and finally performs some actions. This overview explains how each part works for our Xamyab robot.
For Xamyab, the depth camera (e.g., a Kinect) collects RGB, depth, point-cloud, and infrared data (How to work with depth cameras). Different camera types have their own libraries and ROS packages that we need to install in order to integrate them with ROS. Our GenericPerception Python class can be easily extended to obtain and manipulate the raw camera data. We recommend that each project extend (inherit from) the GenericPerception class for its own image-processing purposes (object recognition, machine learning, etc.). The processed data will then be used by the Brain.
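The inheritance pattern described above can be sketched as follows. Note that this is a conceptual illustration, not the real API: the actual GenericPerception class lives in the Xamyab codebase and wraps the ROS camera topics, so the base class, method names, and frame format below are placeholder stand-ins.

```python
# Conceptual sketch of extending a perception base class for a chess project.
# GenericPerception's real API lives in the Xamyab codebase; the base class,
# method names, and frame format below are placeholders for illustration.

class GenericPerception:
    """Stand-in for the real GenericPerception class (which would normally
    subscribe to the camera's ROS topics and expose the raw frames)."""
    def get_rgb_frame(self):
        # Placeholder: the real class would return the latest camera image.
        return [[0] * 8 for _ in range(8)]

class ChessPerception(GenericPerception):
    """Project-specific subclass: turns raw frames into board state."""
    def detect_board(self):
        frame = self.get_rgb_frame()
        # Real code would run image processing (e.g. OpenCV corner and piece
        # detection) here; we just report the frame dimensions as the "board".
        return {"rows": len(frame), "cols": len(frame[0])}

perception = ChessPerception()
print(perception.detect_board())  # {'rows': 8, 'cols': 8}
```

The subclass only overrides the processing step; acquiring the raw data stays in the base class, which is what makes the pattern reusable across projects.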
Example: we will use a chess game as a running example throughout this document. First, for Perception, the raw data collected from the camera is processed to recognize the chessboard and all the chess pieces on it (turning raw data into useful information for the Brain).
The Brain uses the processed data to solve a problem or make decisions, depending on the task. It then issues commands, based on those decisions, to move the robot. In short, the Brain is the main agent connecting Perception and Path Planning & Control.
Note that because the Brain "knows" the position of the camera relative to the robot's arms, it can transform data from the camera's coordinate system into that of the robotic arms (read about tf transformation).
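The idea behind the tf transformation can be shown with a small sketch: a fixed 4x4 homogeneous transform maps a point from the camera frame into the arm's base frame. In ROS this lookup is done by tf/tf2; the code below is plain Python, and the camera mounting pose and numbers are made-up example values.

```python
# Hedged sketch of the tf idea: a 4x4 homogeneous transform maps a point
# from the camera frame into the robot-arm base frame. In ROS this is done
# with tf/tf2; here it is plain Python with made-up example geometry.

def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Example: camera assumed mounted 1.0 m above the arm base, looking straight
# down, so the camera's z-axis points toward the table.
T_base_camera = [
    [1,  0,  0, 0.0],
    [0, -1,  0, 0.0],
    [0,  0, -1, 1.0],
    [0,  0,  0, 1.0],
]

point_in_camera = (0.2, 0.1, 0.9)  # a chess piece as seen by the camera
point_in_base = mat_vec(T_base_camera, point_in_camera)
print(tuple(round(c, 3) for c in point_in_base))  # (0.2, -0.1, 0.1)
```

A piece 0.9 m below the camera ends up 0.1 m above the base plane, which is the coordinate the arm actually needs for grasping.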
Example: after receiving processed data from Perception (i.e., where the chess pieces are on the board), the Brain, acting as a chess engine, finds the best next move. It then turns this decision into an action for Path Planning & Control to execute.
After receiving motion instructions from the Brain, Path Planning & Control sends them to MoveIt!, which plans a possible path for moving the arms. MoveIt! ensures the robot will not collide with itself or with the physical environment around it (as sensed by the camera). The GenericRobot class makes it easy to control the arms and grippers of the Xamyab robot, using MoveIt! as the lower-level control system.
Note that MoveIt! can work with both the simulated robot and the real one.
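The plan-then-execute handoff can be sketched as below. This is a simplified stand-in, not the real API: the actual system uses GenericRobot on top of MoveIt!, while the classes and method names here are invented for illustration only.

```python
# Conceptual sketch of the Brain -> Path Planning & Control handoff. The
# real Xamyab code uses GenericRobot on top of MoveIt!; these classes are
# simplified stand-ins that show the plan-then-execute flow.

class FakePlanner:
    """Stand-in for MoveIt!: plans a trajectory between two poses."""
    def plan(self, start, goal):
        # MoveIt! would sample a collision-free joint trajectory; we just
        # interpolate linearly between the two poses for illustration.
        steps = 5
        return [tuple(s + (g - s) * t / steps for s, g in zip(start, goal))
                for t in range(steps + 1)]

class FakeRobot:
    """Stand-in for GenericRobot: executes a planned trajectory."""
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)
    def execute(self, trajectory):
        for waypoint in trajectory:
            self.pose = waypoint  # real code would command the arm here

planner, robot = FakePlanner(), FakeRobot()
goal = (0.4, 0.2, 0.1)  # e.g. hover above the chess piece to pick up
path = planner.plan(robot.pose, goal)
robot.execute(path)
print(robot.pose)  # the arm ends at the goal pose
```

The key design point is the separation of concerns: the Brain only specifies *where* to go, the planner decides *how* to get there safely, and the robot class handles the actual execution, which is also why the same code runs against both the simulated and the real robot.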
If you need help, come find Merwan Yeditha [email protected] or Audrey Lee [email protected]!