
Simulated kinect rotation around X [bug?]

asked 2013-02-04 01:24:12 -0500

ugo

updated 2013-02-06 05:19:45 -0500

Hi,

As advised on answers.ros.org, I'm posting my question here.

In our robot, the Kinect can be mounted on the side of the arm, as shown in the screenshot below. When running the simulation in Fuerte, I found this weird behaviour: as you can see in the image, the point cloud does not match the robot model (a partial image of the hand/arm appears at the bottom left of the screenshot, where it should overlap the robot model).

Rotate Kinect

As soon as I rotate the Kinect around its X axis (so that it is horizontal, as in the second screenshot), the point cloud and robot model are aligned properly.

Horizontal Kinect

The Kinect xacro and .dae are the ones from the TurtleBot. I'm simply attaching them with a rotation:

<joint name="base_camera_joint" type="fixed">
  <origin  xyz="0.01216 0.1713 0.433"
       rpy="-${M_PI/2} ${M_PI/4} -${M_PI/12}" /> 
  <!-- This -pi/2 in origin rpy is the offending parameter -->

  <parent link="shadowarm_trunk"/>
  <child link="camera_link" />
</joint>

The code can be seen on github.

Any help is greatly appreciated!


1 Answer


answered 2013-02-22 03:19:58 -0500

ugo

An answer from David Butterworth solved my problem. Thanks, David!


I found that if you modify the visual/collision mesh so it is aligned with your joint origin, then it's okay. Don't apply extra translations/rotations to the mesh from within your URDF, because that is broken. The orientation of the sensor data should come from the Gazebo macro, but it should contain only translations, plus two joint rotations for the extra pair of camera frames.

My Gazebo macro is based on the PR2 one, but with the visual/collision meshes fixed and re-scaled as above. The end result is that any translation/rotation is done only at the headmountkinect joint, which orients everything else, and you can successfully pitch the sensor up-down or mount it vertically.
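The layout David describes can be sketched in xacro. This is an illustrative sketch, not the exact PR2/TurtleBot macro: the internal joint names and the 0.018 m translation are assumptions, while the optical-frame rotation is the standard ROS camera convention (z-forward optical frame relative to an x-forward body frame).

```xml
<!-- All orientation lives in the single mount joint; the sensor
     macro's internal joints carry translations only, plus the two
     fixed rotations for the optical frames. -->
<joint name="base_camera_joint" type="fixed">
  <!-- the whole sensor pose, including the former -pi/2 roll -->
  <origin xyz="0.01216 0.1713 0.433"
          rpy="-${M_PI/2} ${M_PI/4} -${M_PI/12}"/>
  <parent link="shadowarm_trunk"/>
  <child link="camera_link"/>
</joint>

<!-- inside the sensor macro: translation-only joints
     (offset value illustrative) -->
<joint name="camera_depth_joint" type="fixed">
  <origin xyz="0 0.018 0" rpy="0 0 0"/>
  <parent link="camera_link"/>
  <child link="camera_depth_frame"/>
</joint>

<!-- ...and one of the two rotated optical frames
     (x-forward body frame to z-forward optical frame) -->
<joint name="camera_depth_optical_joint" type="fixed">
  <origin xyz="0 0 0" rpy="${-M_PI/2} 0 ${-M_PI/2}"/>
  <parent link="camera_depth_frame"/>
  <child link="camera_depth_optical_frame"/>
</joint>
```

With this structure, re-orienting the sensor means editing only the rpy of `base_camera_joint`; the meshes and optical frames follow automatically.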

I'm guessing that in your situation the PointCloud data is actually in the correct place, but because of the bug with the meshes, the visual model is in the wrong place.

