Gazebo Camera Processing

asked 2016-06-01 02:50:27 -0600 by dvh

updated 2016-06-02 11:17:41 -0600 by chapulina

Issues using libgazebo_ros_openni_kinect (note: I'm using the Gazebo that comes with ROS Indigo):

1) This plugin checks the mathematical relationship "fl = width / (2 * tan(hfov/2))". However, when I provide:

    <width>1920</width>
    <hfov>60.0699746625743</hfov>
    <focalLength>1660.426379</focalLength>

which satisfies this mathematical relationship, the plugin reports that it is wrong with the following message:

    "The <focal_length>[1660.426379] you have provided for camera_ [openni_camera_camera] is inconsistent with specified image_width [1920] and HFOV [1.047000].   Please double check to see that focal_length = width_ / (2.0 * tan(HFOV/2.0)), the explected focal_lengtth value is [1663.148138], please update your camera_ model description accordingly."

2) This plugin is inconsistent in how it processes the distortion coefficients. I pass to the plugin:

    <distortionK1>-0.016941</distortionK1>
    <distortionK2>0.117304</distortionK2>
    <distortionT1>0.003003</distortionT1>
    <distortionT2>0.004669</distortionT2>
    <distortionK3>-0.233843</distortionK3>

and the published camera_info messages have:

    distortion_model: plumb_bob
    D: [-0.016941, 0.117304, -0.233843, 0.003003, 0.004669]

which is supposed to be in k1, k2, t1, t2, k3 order, yet the published array has k3 in the third position (k1, k2, k3, t1, t2).
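
For now the published array can be reordered back into the plumb_bob convention by hand (a minimal sketch of the workaround, assuming the published order really is k1, k2, k3, t1, t2):

    # D as published by the plugin (apparently k1, k2, k3, t1, t2)
    d_published = [-0.016941, 0.117304, -0.233843, 0.003003, 0.004669]

    # Reorder into the plumb_bob convention: [k1, k2, t1, t2, k3]
    k1, k2, k3, t1, t2 = d_published
    d_plumb_bob = [k1, k2, t1, t2, k3]
    print(d_plumb_bob)  # [-0.016941, 0.117304, 0.003003, 0.004669, -0.233843]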

3) This plugin uses "focal_length", which represents fx, but it has no fy, and it ignores the distortion parameters. Robots primarily use cameras to determine where things are. Since we can know "exactly" where everything is in Gazebo, if this plugin did process fy and the distortion parameters, then we could isolate errors in determining object locations to our algorithms, which would be a great benefit in assessing those algorithms. I use ROS Industrial intrinsic calibration in Gazebo to get the intrinsic camera_info values. Since I know "exactly" where the camera and the calibration target are in Gazebo, calibration algorithms (as one example) can be tested and compared in an idealized environment, whereas in the real world there are always measurement errors. So having Gazebo process the standard camera model (including all the camera parameters) would be a big win for the robotics world.
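
For reference, this is the standard plumb_bob pinhole projection I would like the plugin to simulate, with independent fx/fy and all five distortion coefficients (a minimal Python sketch; the cx/cy values below are just assumed to be the image center, and fy is set equal to fx only for the example):

    def project_plumb_bob(X, Y, Z, fx, fy, cx, cy, k1, k2, t1, t2, k3):
        """Project a camera-frame 3D point to pixels with the plumb_bob model."""
        # Normalized image coordinates
        x, y = X / Z, Y / Z
        r2 = x * x + y * y
        # Radial distortion factor
        radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        # Tangential distortion
        x_d = x * radial + 2.0 * t1 * x * y + t2 * (r2 + 2.0 * x * x)
        y_d = y * radial + t1 * (r2 + 2.0 * y * y) + 2.0 * t2 * x * y
        # Pixel coordinates; note fx and fy are independent
        return fx * x_d + cx, fy * y_d + cy

    u, v = project_plumb_bob(0.1, 0.05, 2.0,
                             fx=1660.426379, fy=1660.426379,
                             cx=960.0, cy=540.0,
                             k1=-0.016941, k2=0.117304,
                             t1=0.003003, t2=0.004669, k3=-0.233843)
    print(u, v)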
