
DRC Laser / Stereo Disparity between Collision and Visual Elements

asked 2013-02-05 15:13:38 -0500 by klowrey

It is fantastic to have DRCSim 2.0 and Gazebo 1.4, thanks! Obviously a huge part of the DRC competition is reconciling data from the stereo and laser sensors, but the two produce differing results. Robots can only interact with objects through their collision boundaries; if the robot can't collide with something, it can't affect it.

The disparity is that /multisensesl/camera/points2 produces point clouds based on the models' visual elements, while /multisensesl/laser/scan produces range data based on their collision elements.

Visual data is not useful if it doesn't lead to a means of interaction.

Link to Laser data of collision boundaries of DRC_vehicle
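To illustrate the mismatch, here is a minimal SDF sketch of a link whose visual and collision geometry differ, which is exactly the situation where the two sensors disagree. The mesh path and box dimensions are made up for illustration and are not taken from the actual DRC vehicle model:

    <link name="chassis">
      <!-- Detailed mesh rendered by the simulated stereo camera (visual geometry). -->
      <visual name="chassis_visual">
        <geometry>
          <mesh>
            <uri>model://example_vehicle/meshes/chassis.dae</uri>
          </mesh>
        </geometry>
      </visual>
      <!-- Simplified box used by the physics engine and the CPU ray sensor
           (collision geometry); the laser scan follows this shape, not the mesh. -->
      <collision name="chassis_collision">
        <geometry>
          <box>
            <size>3.0 1.6 1.2</size>
          </box>
        </geometry>
      </collision>
    </link>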


1 Answer


answered 2013-02-05 18:49:51 -0500 by nkoenig

I believe this pull-request will resolve this issue: https://bitbucket.org/osrf/gazebo/pul...

It will enable the use of a render-based (GPU) ray sensor, which generates ranges from the visual geometry rather than the collision geometry, so the laser output will match what the stereo camera sees.
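As a rough sketch of how such a sensor might be declared in a model's SDF once the change lands; the sensor type name ("gpu_ray") and the values below are illustrative and not taken from the MultiSense-SL model, and availability depends on the Gazebo version:

    <sensor name="head_laser" type="gpu_ray">
      <update_rate>40</update_rate>
      <ray>
        <scan>
          <horizontal>
            <!-- Horizontal field of view of the lidar (example values). -->
            <samples>1081</samples>
            <resolution>1</resolution>
            <min_angle>-2.356194</min_angle>
            <max_angle>2.356194</max_angle>
          </horizontal>
        </scan>
        <range>
          <min>0.10</min>
          <max>30.0</max>
          <resolution>0.01</resolution>
        </range>
      </ray>
    </sensor>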


Comments

Partially ... the GPU laser sensor is capped at 90 degrees, since I think it's intended for sensors with a 2D image sensor (e.g. TOF sensors, the Kinect, or other depth cameras). But you could use the GPU ray sensor to build a lidar sensor; see my suggestion in my comment here: https://bitbucket.org/osrf/gazebo/issue/309/use-of-visual-model-instead-of-collision

ThomasK (2013-02-05 20:17:03 -0500)

The GPU laser sensor can render a full 360 degrees. At least the test I've seen has shown that it can.

nkoenig (2013-02-05 21:09:43 -0500)

Ah, it's the vertical FOV that is restricted to 90 degrees; it was already late yesterday :) ... all good then, I guess. Great.

ThomasK (2013-02-05 21:37:14 -0500)
