How to match pixel coordinates from /image_raw to gazebo world and perform pick-and-place with UR5 robotic arm?

asked 2021-10-07 00:22:28 -0500 by kh

Dear Gazebo community,

Currently, I'm working on a project to detect an object and perform pick-and-place with a UR5 arm and Robotiq gripper in ROS Melodic. For a basic pick-and-place demonstration, I managed to grasp the object using the grasp plugin and the MoveIt API in a Python script. However, I now want to grasp an object detected by a 2D camera in the Gazebo simulation. My problems are:

  1. How do I map the pixel coordinates of the detected object from the 2D camera's /image_raw topic to the object's position in the Gazebo world (see the back-projection sketch after this list)?
  2. How do I send the resulting x and y coordinates to the UR5 end-effector and perform the pick-and-place?
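For question 1, this is roughly what I imagine: back-project the pixel through the camera model and intersect the ray with the table plane, then transform into the world frame with tf. The topic names, the "world" frame id and the camera-to-table distance below are just assumptions from my setup, so please correct me if this is the wrong approach:

    #!/usr/bin/env python
    # Rough sketch: back-projecting a pixel from /image_raw into the world frame.
    import rospy
    import tf2_ros
    import tf2_geometry_msgs
    from image_geometry import PinholeCameraModel
    from sensor_msgs.msg import CameraInfo
    from geometry_msgs.msg import PointStamped

    rospy.init_node("pixel_to_world")

    # Camera intrinsics from the camera_info topic published alongside /image_raw
    cam_info = rospy.wait_for_message("/camera/camera_info", CameraInfo)
    model = PinholeCameraModel()
    model.fromCameraInfo(cam_info)

    # (u, v) would come from the OpenCV detection (e.g. the bounding-box centre)
    u, v = 320, 240
    ray = model.projectPixelTo3dRay((u, v))   # unit ray in the camera optical frame

    # A single 2D camera gives no depth, so I scale the ray using the known
    # distance from the camera to the table surface (assumed 1.0 m along optical z).
    depth = 1.0
    scale = depth / ray[2]

    pt = PointStamped()
    pt.header.frame_id = model.tfFrame()
    pt.header.stamp = rospy.Time(0)
    pt.point.x = ray[0] * scale
    pt.point.y = ray[1] * scale
    pt.point.z = ray[2] * scale

    # Transform the point from the camera optical frame into the world frame
    tf_buffer = tf2_ros.Buffer()
    tf2_ros.TransformListener(tf_buffer)
    rospy.sleep(1.0)  # give the buffer time to fill
    pt_world = tf_buffer.transform(pt, "world", rospy.Duration(1.0))
    print(pt_world)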

I'm writing my motion planning with the MoveIt API in a Python script; a rough sketch of my current approach follows below. Any suggestions will be appreciated.
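This is roughly the shape of my MoveIt code for moving the end-effector to a target pose. The group name "manipulator", the placeholder pose values and the downward-facing orientation are assumptions from my setup, not a finished solution:

    #!/usr/bin/env python
    # Minimal sketch of planning a motion with the MoveIt Python API once a
    # target (x, y, z) in the planning frame is known.
    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("ur5_pick_demo")

    group = moveit_commander.MoveGroupCommander("manipulator")

    # Pre-grasp pose above the detected object (placeholder values; in the real
    # script x and y would come from the camera back-projection above)
    target = Pose()
    target.position.x = 0.4
    target.position.y = 0.1
    target.position.z = 0.3
    target.orientation.x = 0.0
    target.orientation.y = 1.0   # example orientation: gripper pointing down
    target.orientation.z = 0.0
    target.orientation.w = 0.0

    group.set_pose_target(target)
    group.go(wait=True)
    group.stop()
    group.clear_pose_targets()

    # After reaching the pre-grasp pose I close the Robotiq gripper via its
    # command interface and lift the object with another pose target.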

I have attached a screenshot of the object detected with OpenCV.

