
ROS 2 Branch - Checkerboard Finder + Hand Eye Calibration #149

Closed
peterdavidfagan opened this issue Apr 27, 2023 · 23 comments

peterdavidfagan commented Apr 27, 2023

Hi @mikeferguson,

I am looking to perform hand-eye calibration of a realsense depth camera in my scene (containing a Franka Emika Panda) using this package. I am trying to estimate the transform between my robot base panda_link0 and the frame camera_depth_optical_frame. I have mounted a checkerboard on my robot eef and provided an initial guess for the checkerboard frame transform as part of my configuration files.

So far capturing samples works great for me on ROS 2.

However, when I run the optimization step I hit an error related to building a chain model:

Failed to build a chain model from panda_link0 to camera_depth_optical_frame, check the link names

The link names appear to be correct based on the models I am loading. My understanding is that I have the transform camera_depth_optical_frame -> checkerboard from the camera, the transforms panda_link0 -> panda_link8 from the arm, and an initial estimate of panda_link8 -> checkerboard. Based on my understanding, this should be sufficient to get a reasonable calibration given enough samples of data.
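For intuition, the loop closure the optimizer can exploit here is sketchable with homogeneous transforms. The numbers below are made up purely for illustration (identity rotations, placeholder translations), not measurements from this setup:

```python
import numpy as np

def make_tf(t):
    """Homogeneous transform with identity rotation and translation t
    (rotation omitted to keep the illustration short)."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Illustrative values only -- not measurements from this setup.
T_base_eef = make_tf([0.4, 0.0, 0.5])    # panda_link0 -> panda_link8 (forward kinematics)
T_eef_board = make_tf([0.01, 0.0, 0.1])  # panda_link8 -> checkerboard (initial guess)
T_cam_board = make_tf([0.0, 0.0, 0.8])   # camera_depth_optical_frame -> checkerboard (detected)

# Closing the loop gives the unknown base -> camera transform:
T_base_cam = T_base_eef @ T_eef_board @ np.linalg.inv(T_cam_board)

# Sanity check: base -> camera -> board must agree with base -> eef -> board.
assert np.allclose(T_base_cam @ T_cam_board, T_base_eef @ T_eef_board)
```

Each captured sample constrains this loop, which is why enough samples should pin down the base-to-camera transform.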

My configuration files (calibration config, then capture config) are as follows:

robot_calibration:
  ros__parameters:
    verbose: true
    base_link: panda_link0
    calibration_steps:
    - single_calibration_step
    single_calibration_step:
      models:
      - arm
      - camera
      arm:
        type: chain3d
        frame: panda_link8
      camera:
        type: camera3d
        frame: camera_depth_optical_frame
        topic: /camera/depth/color/points
      free_frames:
      - camera_depth_optical_joint
      - checkerboard
      camera_depth_optical_joint:
        x: true
        y: true
        z: true
        roll: true
        pitch: true
        yaw: true
      checkerboard:
        x: true
        y: true
        z: true
        roll: true
        pitch: true
        yaw: true
      free_frames_initial_values:
      - checkerboard
      checkerboard_initial_values:
        x: 0.01
        y: 0.0
        z: 0.1
        roll: 0.0
        pitch: 0.0
        yaw: 0.0
      error_blocks:
      - hand_eye
      hand_eye:
        type: chain3d_to_chain3d
        model_a: camera
        model_b: arm

robot_calibration:
  ros__parameters:
    chains:
    - arm
    arm:
      topic: /panda_arm_controller/follow_joint_trajectory
      joints:
      - panda_joint1
      - panda_joint2
      - panda_joint3
      - panda_joint4
      - panda_joint5
      - panda_joint6
      - panda_joint7
    features:
    - checkerboard_finder
    checkerboard_finder:
      type: robot_calibration::CheckerboardFinder
      topic: /camera/depth/color/points
      camera_sensor_name: camera
      chain_sensor_name: arm

I am happy to write up/contribute to the README or a tutorial once I get this working.


peterdavidfagan commented Apr 27, 2023

Parameter Override Issues

Should I override the following parameters somewhere?

node->declare_parameter<std::string>(name + ".camera_info_topic", "/head_camera/depth/camera_info");

node->declare_parameter<std::string>(name + ".camera_driver", "/head_camera/driver");

topic_name = node->declare_parameter<std::string>(name + ".topic", name + "/points");

I have observed warnings such as the following due to these hardcoded defaults:

[WARN] [1682607053.921713444] [robot_calibration]: Unable to get parameters from /head_camera/driver
[WARN] [1682607056.349302559] [checkerboard_finder]: CameraInfo receive timed out.

[ERROR] [1682681227.784274946] [checkerboard_finder]: Failed to get cloud
[ERROR] [1682681227.784307656] [checkerboard_finder]: No point cloud data


These issues are resolved when I change the default to a value that is appropriate for my setup.

Checkerboard size
Is the checkerboard size documented somewhere outside of the code? If not, I'm happy to add this documentation.
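For reference, the board dimensions turn out to be finder parameters; a sketch of the relevant keys under the checkerboard_finder namespace (the values shown are for a board with 9x6 inner corners and 24.5 mm squares, taken from my working config later in this thread, not verified defaults):

```yaml
checkerboard_finder:
  type: robot_calibration::CheckerboardFinder
  points_x: 9       # inner corners per row
  points_y: 6       # inner corners per column
  size: 0.0245      # square edge length, in meters
```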

peterdavidfagan changed the title from "Failure to build chain model" to "ROS 2 - Potential Errors in Checkerboard Finder" on Apr 27, 2023
mikeferguson (Owner) commented:

Yes - you want to set "camera_info_topic" in the same spot you set "topic" (under "checkerboard_finder" namespace)

The parameters for any of the individual finders really aren't documented outside of the code (other than example configs - which probably should have ALL the parameters added to them) - tutorial/readme updates greatly appreciated.


peterdavidfagan commented Apr 28, 2023

Thanks @mikeferguson for the response on this.

Checkerboard Finder
I now have this working; it prints messages indicating it has found the checkerboard. I only need to check whether the offsets from the driver message are required for my camera.

Yes - you want to set "camera_info_topic" in the same spot you set "topic" (under "checkerboard_finder" namespace)

I have just tried this and it didn't work for me. Do you mean setting it in the calibrate.yaml file under the camera model? It does work if I change the default value for the parameter in the source code. The same holds for other topics like the point cloud topic: in my setup they are not being overridden by the YAML config values (for the CheckerboardFinder). I can open a PR to suggest changes once I get this working. Any feedback you can provide is appreciated.

Optimization
This still doesn't work in my setup. I get an error when a chain model is built from the robot base panda_link0 to my camera frame camera_depth_optical_frame; this is exactly the transform I don't know and am currently trying to estimate.

[INFO] [1682676836.596812300] [robot_calibration]: Press [Enter] to capture a sample... (or type 'done' and [Enter] to finish capture)
done
[INFO] [1682676910.994090283] [robot_calibration]: Done capturing samples
[INFO] [1682676911.002630574] [robot_calibration]: Adding free frame: camera_depth_optical_frame
[INFO] [1682676911.002821455] [robot_calibration]: Adding free frame: checkerboard
[INFO] [1682676911.002988195] [robot_calibration]: Adding initial values for: checkerboard
[INFO] [1682676911.003183856] [robot_calibration]: Adding model: arm
[INFO] [1682676911.003309636] [robot_calibration]: Adding model: camera
[INFO] [1682676911.003442847] [robot_calibration]: Adding chain3d_to_chain3d: hand_eye
[INFO] [1682676911.003663518] [robot_calibration]: Creating chain 'arm' from panda_link0 to panda_link8
[INFO] [1682676911.003773228] [robot_calibration]: Creating camera3d 'camera' in frame camera_depth_optical_frame
terminate called after throwing an instance of 'std::runtime_error'
  what():  Failed to build a chain model from panda_link0 to camera_depth_optical_frame, check the link names
[ros2run]: Aborted

This package is awesome. One question I have as I delve into the code some more: is it assumed that the camera link and robot base link are initially part of the same kinematic tree? For a robot platform with an integrated camera this will be the case, but in my setup the camera is mounted on a tripod and I don't have an accurate transform from the robot base link to the camera base link. As a result I cannot construct a chain, and hence I run into the above error. I could publish a dummy transform to the link and calibrate its child joint as a free frame; I think this will work. What are your thoughts @mikeferguson, did I overlook something?

I am performing Hand Eye calibration on the following setup.

peterdavidfagan changed the title from "ROS 2 - Potential Errors in Checkerboard Finder" to "ROS 2 Branch - Checkerboard Finder + Hand Eye Calibration" on Apr 28, 2023

peterdavidfagan commented Apr 28, 2023

I am still getting the following error:

[INFO] [1682690470.877863702] [capture_manager]: Capturing features from checkerboard_finder
[INFO] [1682690471.013099349] [checkerboard_finder]: Found the checkboard
[INFO] [1682690471.013451190] [robot_calibration]: Captured pose 23
[INFO] [1682690471.013486750] [robot_calibration]: Press [Enter] to capture a sample... (or type 'done' and [Enter] to finish capture)
done
[INFO] [1682690481.870074886] [robot_calibration]: Done capturing samples
[INFO] [1682690481.881276624] [robot_calibration]: Adding free frame: camera_depth_joint
[INFO] [1682690481.881535404] [robot_calibration]: Adding free frame: checkerboard
[INFO] [1682690481.881774975] [robot_calibration]: Adding initial values for: checkerboard
[INFO] [1682690481.882040975] [robot_calibration]: Adding model: arm
[INFO] [1682690481.882176955] [robot_calibration]: Adding model: camera
[INFO] [1682690481.882354976] [robot_calibration]: Adding chain3d_to_chain3d: hand_eye
[INFO] [1682690481.882728986] [robot_calibration]: Creating chain 'arm' from panda_link0 to panda_link8
[INFO] [1682690481.882836496] [robot_calibration]: Creating camera3d 'camera' in frame camera_depth_optical_frame
terminate called after throwing an instance of 'std::runtime_error'
  what():  Failed to build a chain model from panda_link0 to camera_depth_optical_frame, check the link names
[ros2run]: Aborted

I added a dummy static transform so the camera is now connected to the base of the robot. Here is tf output from rviz that demonstrates this:

[screenshot: rviz view of the TF tree with the camera connected to the robot base]

Any guidance would be appreciated, I am going to review my config for a typo and begin creating a sequence of prespecified poses before continuing to debug this.

mikeferguson (Owner) commented:

I have just tried this and it didn't work for me, you mean set it in the calibrate.yaml file under the camera model? It does work if I change the default value for the parameter in the source code.

This would actually go in the "capture.yaml" - there are two steps here, capture and then calibration. During capture, we are using the various finders, during calibration we are using the various models to reproject the data we captured. Once we get to the calibration step, we don't care about topics anymore (the data is already captured in the form of calibration_data messages).

So your capture.yaml would look like:

robot_calibration:
  ros__parameters:
    chains:
    - arm
    arm:
      topic: /panda_arm_controller/follow_joint_trajectory
      joints:
      - panda_joint1
      - panda_joint2
      - panda_joint3
      - panda_joint4
      - panda_joint5
      - panda_joint6
      - panda_joint7
    features:
    - checkerboard_finder
    checkerboard_finder:
      type: robot_calibration::CheckerboardFinder
      topic: /camera/depth/color/points
      camera_info_topic: TOPIC_NAME_GOES_HERE
      camera_sensor_name: camera
      chain_sensor_name: arm

peterdavidfagan (Author) commented:

So your capture.yaml would look like:

Thank you, silly mistake on my part.


mikeferguson commented Apr 28, 2023

One question I had as I am delving into the code some more, is it assumed that the camera link and robot base link are initially part of the same kinematic tree. For a robot platform with integrated camera this will be the case but for a setup like the one I have the camera is mounted on a tripod and I don't have an accurate transform from the robot base link to the camera base link. As a result I cannot construct a chain and hence I run into the above error. I could publish a dummy transform to the link and calibrate its child joint as a free frame, I think this will work.

The output of calibration is an updated URDF - so anything we want to modify has to be in the URDF (also, internally, we use KDL, not TF, so any part of the chain has to be in the URDF). Are you publishing this dummy frame with the TF static_transform_publisher? If so, you need to add the camera to your URDF and then things should hopefully run!
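As a sketch, a fixed joint like the following could attach a tripod-mounted camera to the URDF. The frame names and the hand-measured origin here are placeholders for this particular setup, not values from the package:

```xml
<!-- Rough hand-measured mount for the tripod camera. Listing this joint
     as a free frame lets the calibration step refine these numbers. -->
<link name="camera_link"/>
<joint name="camera_joint" type="fixed">
  <parent link="panda_link0"/>
  <child link="camera_link"/>
  <origin xyz="1.6 0.0 0.6" rpy="0.0 0.0 3.142"/>
</joint>
```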


peterdavidfagan commented Apr 28, 2023

The output of calibration is an updated URDF - so anything we want to modify has to be in the URDF (also, internally, we use KDL, not TF, so any part of the chain has to be in the URDF). Are you publishing this dummy frame with the TF static_transform_publisher? If so, you need to add the camera to your URDF and then things should hopefully run!

This makes sense, I was just starting to look into the use of KDL::Tree further.

Thanks for your help, I'll be sure to close this issue once I update my setup and have it running. I'll also look to contribute to the README.md. I'm happy to make other contributions in the future now that I am getting more familiar with this codebase.

mikeferguson (Owner) commented:

One additional comment on capture vs. calibration - just for completeness - that "topic" part in your calibration.yaml is unused (topic isn't a parameter of the camera3d model)


peterdavidfagan commented Apr 28, 2023

One additional comment on capture vs. calibration - just for completeness - that "topic" part in your calibration.yaml is unused (topic isn't a parameter of the camera3d model)

This is good to know. I am also going to look into how parameters are handled in the code so that I can improve my understanding of the codebase.

Thank you.


peterdavidfagan commented Apr 28, 2023

The changes you suggested fixed the chain-construction issue I was facing. In my case I approximate the transform between the camera base and the robot base with real-world measurements and add these as the origin of the realsense in my URDF.

My script for collecting joint positions is ready so I can generate a large number of feasible configurations to use for the automatic calibration utilities.

When I run calibrate with 20 manually collected samples, the Ceres solver fails due to NaNs in the residuals. I will start exploring this side of things this evening; hopefully I will have a calibrated camera frame by today/tomorrow.

Looking forward to also trying to calibrate robot kinematics.


peterdavidfagan commented Apr 28, 2023

Ceres Solver Report: Iterations: 27, Initial cost: 2.170917e+03, Final cost: 9.228077e+01, Termination: CONVERGENCE
Parameter Offsets:
realsense_camera_link_joint_x: 0
realsense_camera_link_joint_y: 0
realsense_camera_link_joint_z: 0
realsense_camera_link_joint_a: 0
realsense_camera_link_joint_b: 0
realsense_camera_link_joint_c: 0
checkerboard_x: 0.85359
checkerboard_y: -0.598554
checkerboard_z: -1.36472
checkerboard_a: -0.197625
checkerboard_b: -2.39345
checkerboard_c: 0.821487

Converged, but to very inaccurate results; making progress nonetheless. Thanks for your help so far. I'm going to clean my workspace and code and revisit this tomorrow. I'll try to post a PR with further documentation this week or early next week should I get accurate results.

mikeferguson (Owner) commented:

With only 20 samples, and 2 completely free frames, that probably won't converge - a couple possible ways to improve:

  • More samples.
  • With the checkerboard attached to your hand, the plane the checkerboard lies in should be fairly well known (usually the gripper centers the checkerboard well on the hand, for instance), so you should be able to lock one of the XYZ coordinates as not a free parameter, and hopefully even cut the angular estimation down to a single rotation. For instance, in the fetch_ros configuration (for ROS 1), we had:

    - name: checkerboard
      x: true
      y: false
      z: true
      roll: false
      pitch: true
      yaw: false

  • Add an outrageous_error block. If you have a somewhat decent estimate (+/-10cm?) of the camera location, you can use this to avoid the camera being moved to a crazy location - you can see an example of this in the UBR calibration config - although for your use case the camera angle is probably not well known, so you would want to change rotation_scale to 0.0.

One really big caveat with roll/pitch/yaw to remember: you can have 0, 1, or 3 free parameters - but never 2 (because internally it is specified as angle-axis and so we can limit the rotation to a single roll/pitch/yaw - but can't properly specify, for example, roll+pitch but no yaw movement)
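Putting that advice together, an outrageous error block for a tripod-mounted camera might look like the sketch below (the keys follow the outrageous block format used elsewhere in this thread; the scale values are illustrative, with rotation_scale zeroed because the camera angle is poorly known):

```yaml
restrict_camera:
  type: outrageous
  param: camera_joint        # the free frame to constrain
  joint_scale: 0.0
  position_scale: 0.1        # penalize implausibly large position offsets
  rotation_scale: 0.0        # camera angle poorly known, leave rotation unconstrained
```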

peterdavidfagan (Author) commented:

if you have a somewhat decent estimate (+/-10cm?) of the camera location

As in for the initial values of the free frame or the accuracy of the value defined in the URDF?

mikeferguson (Owner) commented:

As in for the initial values of the free frame or the accuracy of the value defined in the URDF?

The combination of the two (the free_frame initial values simply add/subtract to the URDF defined values - at runtime, they are indistinguishable when the transforms are calculated by KDL)


peterdavidfagan commented May 2, 2023

I updated my setup based on the above feedback by clamping the board with the gripper I have. This allowed me to update the set of free parameters in a similar fashion to the UBR-1 example you gave. Despite this, I have still not been able to get the solver to converge to a correct result. I have provided further details below; it would be appreciated if you could take a quick glance in case you spot something that is awry.

Robot Workspace
[attachments: workspace viewed from behind the camera, workspace side view, data collection video, rviz TF view]

Config Files

robot_calibration:
  ros__parameters:
    chains:
    - arm
    arm:
      topic: /panda_arm_controller/follow_joint_trajectory
      joints:
      - panda_joint1
      - panda_joint2
      - panda_joint3
      - panda_joint4
      - panda_joint5
      - panda_joint6
      - panda_joint7
    features:
    - checkerboard_finder
    checkerboard_finder:
      type: robot_calibration::CheckerboardFinder
      topic: /camera/depth/color/points
      camera_info_topic: /camera/depth/camera_info
      points_x: 9
      points_y: 6
      size: 0.0245
      camera_sensor_name: camera
      chain_sensor_name: arm

robot_calibration:
  ros__parameters:
    verbose: true
    base_link: panda_link0
    calibration_steps:
    - single_calibration_step
    single_calibration_step:
      models:
      - arm
      - camera
      arm:
        type: chain3d
        frame: panda_link8
      camera:
        type: camera3d
        frame: camera_color_optical_frame
        topic: /camera/depth/color/points
      free_frames:
      - camera_joint
      - checkerboard
      camera_joint:
        x: true
        y: true
        z: true
        roll: true
        pitch: true
        yaw: true
      checkerboard:
        x: true
        y: false
        z: true
        roll: false
        pitch: true
        yaw: false
      free_frames_initial_values:
      - camera_joint
      - checkerboard
      camera_joint_initial_values:
        x: 1.6
        y: 0.0
        z: 0.6
        roll: 0.0
        pitch: 0.0
        yaw: 3.142
      checkerboard_initial_values:
        x: 0.0
        y: 0.0
        z: 0.25
        roll: 0.0
        pitch: 0.0
        yaw: 1.571
      error_blocks:
      - hand_eye
        - restrict_camera
        - restrict_checkerboard
      hand_eye:
        type: chain3d_to_chain3d
        model_a: camera
        model_b: arm
      restrict_camera:
        type: outrageous
        param: camera_joint
        joint_scale: 0.0
        position_scale: 0.1
        rotation_scale: 0.1
      restrict_checkerboard:
        type: outrageous
        param: checkerboard
        joint_scale: 0.0
        position_scale: 0.1
        rotation_scale: 0.1

Solver Output
Number of samples: 100

[INFO] [1683050550.329326949] [robot_calibration]: Done capturing samples
[INFO] [1683050550.342549279] [robot_calibration]: Adding free frame: camera_joint
[INFO] [1683050550.342707870] [robot_calibration]: Adding free frame: checkerboard
[INFO] [1683050550.342845931] [robot_calibration]: Adding initial values for: camera_joint
[INFO] [1683050550.342986041] [robot_calibration]: Adding initial values for: checkerboard
[INFO] [1683050550.343117712] [robot_calibration]: Adding model: arm
[INFO] [1683050550.343205862] [robot_calibration]: Adding model: camera
[INFO] [1683050550.343306273] [robot_calibration]: Adding : hand_eye - restrict_camera - restrict_checkerboard
[INFO] [1683050550.344011867] [robot_calibration]: Creating chain 'arm' from panda_link0 to panda_link8
[INFO] [1683050550.344116977] [robot_calibration]: Creating camera3d 'camera' in frame camera_color_optical_frame

Solver output:
Ceres Solver Report: Iterations: -2, Initial cost: 0.000000e+00, Final cost: 0.000000e+00, Termination: CONVERGENCE
Parameter Offsets:
camera_joint_x: 1.6
camera_joint_y: 0
camera_joint_z: 0.6
camera_joint_a: -0
camera_joint_b: 0
camera_joint_c: 3.142
checkerboard_x: 0
checkerboard_z: 0.25
checkerboard_b: 0

[INFO] [1683050550.345622815] [robot_calibration]: Done calibrating

Next Steps

  • I am going to start looking deeper into the Ceres solver API tomorrow, as thus far I have only skimmed it. It seems others have encountered output similar to what I am seeing here.
  • Triple-check the URDF file.

mikeferguson (Owner) commented:

This line looks suspect to me:

[1683050550.343306273] [robot_calibration]: Adding : hand_eye - restrict_camera - restrict_checkerboard

That should say "Adding chain3d_to_chain3d: hand_eye" and then have two more lines - but it looks like your YAML is misformatted: restrict_camera/restrict_checkerboard are indented one level too deep, so they get tacked onto the name of the first entry (which then doesn't match anything) and you have NO error blocks...

For reference - this error message pops up here: https://github.com/mikeferguson/robot_calibration/blob/ros2/robot_calibration/src/optimization/params.cpp#L101

It looks like we should have an error for several issues that are creeping up here:

  • If error_blocks ends up being empty, we should definitely throw an error (rather than try to run the optimization).
  • If the error_block type doesn't match any of our error blocks (in the for loop noted above) we should print an error.
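For reference, the error_blocks section from the config above with consistent list indentation would be:

```yaml
error_blocks:
- hand_eye
- restrict_camera
- restrict_checkerboard
```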

mikeferguson (Owner) commented:

I created #150, which adds the better error messages described above.


peterdavidfagan commented May 3, 2023

That should say "Adding chain3d_to_chain3d: hand_eye" and then have two more lines - but it looks like your YAML is misformatted and the restrict_camera/checkerboard are indented too many lines and getting tacked onto the name (which then doesn't exist) so you have NO error blocks...

@mikeferguson thank you, that was likely the issue. For small sample sets I now seem to be getting ballpark-correct results when I ensure my config files are correctly formatted.

When running on a large batch of data I encounter NaNs; I need to identify why this is the case, starting with the cost function evaluation.

I am going to start documenting my setup so others can also perform hand-eye calibration. I was considering opening a PR that adds an examples folder to the root of this repository, with a hand-eye example and a README.md outlining my current configuration. If I get time I will add a simulated setup. Does this sound like a reasonable contribution?

Also, thanks for your responsiveness to my questions thus far. I still want to refine my setup to ensure I am getting accurately calibrated results, but will start documenting in the meantime as well.


AaronLPS commented Feb 6, 2024

I have a question concerning the definition of a checkerboard frame within a 3D context, specifically when configuring it in a URDF file. How should this frame be accurately defined? Is it standard practice to set the origin at the top left inner corner of the checkerboard, with the X-axis aligned along the rows and the Y-axis aligned along the columns?


peterdavidfagan commented Feb 6, 2024

Is it standard practice to set the origin at the top left inner corner of the checkerboard, with the X-axis aligned along the rows and the Y-axis aligned along the columns?

Hey dude, I think it depends on the software library and the conventions it sets. It has been a while since I've looked at this software, but from what I recall it leverages OpenCV only for detecting corners.

From what I remember, OpenCV has the Z-axis perpendicular to the target, pointing away from the target surface (i.e. towards the camera detecting the target). I cannot recall the axis alignment along the board; the following tutorial has an example of plotting frames, but I don't think it explicitly calls out OpenCV's axis alignment conventions.

It might be possible to glean this information from the source code or the OpenCV docs. Off the top of my head I don't 100% recall the convention, but the linked tutorial aligns with your suggestion for the axis convention.
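The convention question can be made concrete with the object-point grid conventionally passed alongside detected corners in OpenCV calibration tutorials: the first inner corner is the board-frame origin, X advances along a row, Y along a column, and Z is zero on the board surface. This is a sketch of the usual tutorial convention (using the 9x6 board from earlier in the thread), not something re-verified against this package:

```python
import numpy as np

points_x, points_y = 9, 6      # inner corners per row / column
square = 0.0245                # square size in meters

# Object points in the checkerboard frame: corner (i, j) sits at
# (i * square, j * square, 0), i.e. Z = 0 on the board surface.
objp = np.zeros((points_x * points_y, 3))
objp[:, :2] = np.mgrid[0:points_x, 0:points_y].T.reshape(-1, 2) * square

print(objp[0])   # origin corner
print(objp[1])   # next corner along the row (X axis)
```

Whatever ordering the corner detector emits, the pose recovered by solvePnP is defined relative to this grid, so the grid is what pins down the board frame.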


mikeferguson commented Feb 6, 2024

Assuming you have an asymmetric checkerboard, then the side connecting the black corners is your "left" side, and the intersections are ordered such that the lower left is the first intersection. There is a big comment in chain3d that shows the ordering of what gets kicked out (although I'm not sure that the X/Y axis labeling matches OpenCV)

 *  Based on the 6x5 checkboard, the ordering of the oberservation points
 *  produced by the OpenCV chessboard finder is always the same:
 *  <PRE>
 *     ____________________________
 *    |                            |
 *    |  ###   ###   ###   ###     |
 *    |  ###   ###   ###   ###     |
 *    |  ###   ###   ###  LL##     |    11  First Observation Point
 *    |     ###   ###   ##LL  ###  |    11
 *    |     ###   ###   ###   ###  |
 *    |     ###   ###   ###   ###  |    22  Second Observation Point
 *    |  ###   ###   ###   ###     |    22
 *    |  ###   ###   ###   ###     |
 *    |  ##22  ###   ###   ###     |    LL  Last (20th) Obervation Point
 *    |    22##   ###   ###   ###  |    LL
 *    |     ###   ###   ###   ###  |
 *    |    11##   ###   ###   ###  |
 *    |  ##11  ###   ###   ###     |
 *    |  ###   ###   ###   ###     |   gripper_link
 *    |  ###   ###   ###   ###     |      X-axis
 *    |                            |     ^
 *    |           _____            |     |     gripper_link
 *    |          |     |           |     |  ----> Y-axis
 *    |          |     |
 *    |          |  0<----- gripper_link frame origin
 *    |__________|     |___________
 *               |     |
 *            ___|_____|___
 *           |             |
 *           |   Gripper   |
 *           |             |
 *  </PRE>

(also note, in reviewing this, I found that the coordinate frames in the comment were out of date with what we used internally - updated in #160)
