What does the optical_frame of a camera actually do, for ROS tf transformations?

asked 2020-05-16 05:26:14 -0500 by Shrutheesh R Iyer

updated 2020-05-16 05:27:40 -0500

Hello, I have an Intel Realsense that is spawned in my simulation environment using a Gazebo camera plugin. Among many published topics, it also broadcasts the tf frames camera_depth_frame and camera_depth_optical_frame.

Visualized in RViz, this is my current setup (image: Initial location). To get a better understanding, here is the same setup from a different view (image: Second view).

Now, when I query the pose transformation between this camera_depth_optical_frame and the world, this is my result:

  translation:
    x: 0.318701649258
    y: 0.475529280403
    z: -0.0471259472185
  rotation:
    x: 0.499374723815
    y: -0.498611691812
    z: 0.500621423536
    w: 0.501387531058

Looking at the translation values, it does seem fine.
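To make sense of the rotation part, I converted the quaternion (rounded to 0.5, -0.5, 0.5, 0.5) into a rotation matrix with a quick sanity check in plain Python (no ROS needed; quaternion in ROS x, y, z, w order):

```python
# Quick sanity check: convert the reported quaternion (rounded) to a
# rotation matrix. Quaternion is in ROS (x, y, z, w) order.
def quat_to_matrix(x, y, z, w):
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]

R = quat_to_matrix(0.5, -0.5, 0.5, 0.5)
for row in R:
    print([round(v, 3) for v in row])
# prints [0.0, -1.0, 0.0]
#        [0.0, 0.0, -1.0]
#        [1.0, 0.0, 0.0]
```

Every entry is 0 or ±1, i.e. the optical frame's axes are a pure permutation (with sign flips) of the reference frame's axes, which is what I would expect from the z-forward, x-right, y-down optical-frame convention in REP 103.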

However, when I rotate the entire camera about the y axis so that it looks roughly 1.2 radians downwards (image: Rotated camera), and then query the same transformation, this is my result:

  translation:
    x: 0.318701218896
    y: 0.217455112995
    z: 0.400558588204
  rotation:
    x: 0.69572986174
    y: -0.694144038039
    z: 0.130472495454
    w: 0.130770569581

Looking at the translation values, this seems very bizarre, since the camera has not actually translated to any other location at all (other than a very tiny change due to shaking).
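The orientation part does change consistently, though. As a check, I computed the relative rotation between the two reported quaternions in plain Python (values copied from the outputs above, ROS x, y, z, w order); its angle comes out to roughly the 1.2 radians I rotated by:

```python
import math

# Relative rotation between the two queried orientations.
def qmul(a, b):
    # Hamilton product, ROS (x, y, z, w) ordering
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw*bx + bw*ax + ay*bz - az*by,
            aw*by + bw*ay + az*bx - ax*bz,
            aw*bz + bw*az + ax*by - ay*bx,
            aw*bw - ax*bx - ay*by - az*bz)

def qconj(q):
    # Conjugate = inverse for a unit quaternion
    x, y, z, w = q
    return (-x, -y, -z, w)

q_before = (0.499374723815, -0.498611691812, 0.500621423536, 0.501387531058)
q_after  = (0.69572986174, -0.694144038039, 0.130472495454, 0.130770569581)

q_rel = qmul(q_after, qconj(q_before))
angle = 2.0 * math.atan2(math.sqrt(sum(c*c for c in q_rel[:3])), q_rel[3])
print(round(angle, 2))  # ~1.2 rad, matching the pitch applied to the camera
```

So the rotation reported for the optical frame is consistent with the physical motion; it is only the translation that puzzles me.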

This makes me wonder: what actually is this depth_optical_frame?

Note that there is no glitch in my transformation matrices: if I use camera_depth_frame rather than camera_depth_optical_frame, the translation values come out the same (as they should).
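For context, in the camera descriptions I have seen (e.g. typical RealSense URDFs), an _optical_frame is attached to its parent camera frame by a fixed rotation of rpy = (-pi/2, 0, -pi/2), which maps the x-forward/z-up body axes onto the z-forward, x-right, y-down optical convention of REP 103. A small pure-Python sketch composing that fixed rotation (assuming fixed-axis rpy order, i.e. Rz(yaw) · Ry(pitch) · Rx(roll)):

```python
import math

def qmul(a, b):
    # Hamilton product, ROS (x, y, z, w) ordering
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw*bx + bw*ax + ay*bz - az*by,
            aw*by + bw*ay + az*bx - ax*bz,
            aw*bz + bw*az + ax*by - ay*bx,
            aw*bw - ax*bx - ay*by - az*bz)

half = -math.pi / 4  # half-angle of -90 degrees
q_roll = (math.sin(half), 0.0, 0.0, math.cos(half))  # -90 deg about x
q_yaw  = (0.0, 0.0, math.sin(half), math.cos(half))  # -90 deg about z
# rpy(-pi/2, 0, -pi/2) with zero pitch composes as Rz(yaw) * Rx(roll)
q_body_to_optical = qmul(q_yaw, q_roll)
print(tuple(round(c, 3) for c in q_body_to_optical))
# -> (-0.5, 0.5, -0.5, 0.5): up to sensor noise, the same rotation as the
#    quaternion in my first query (q and -q represent the same rotation)
```

This matches the orientation I observed before rotating the camera, so the two frames seem to differ only by this fixed rotation.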

I am not sure whether this question belongs on ROS Answers or here; I asked here because the optical_frame is published by a Gazebo plugin.

