DRC Laser / Stereo Disparity between Collision and Visual Elements
It is fantastic to have DRCSIM 2.0 and Gazebo 1.4, thanks! A huge part of the DRC competition is rectifying stereo data against laser data, but the two sensors produce differing results. A robot can only interact with an object through its collision geometry; if it can't collide with something, it can't affect it.
The disparity is that the /multisensesl/camera/points2 topic produces point clouds derived from the models' visual elements, while the /multisensesl/laser/scan topic produces range data derived from their collision boundaries. When a model's visual and collision geometry don't match, the two sensors disagree about where the object is.
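For what it's worth, one way to quantify this mismatch is a nearest-neighbor comparison between the two clouds, assuming both have already been converted to N×3 arrays in a common frame (the conversion from the ROS messages is not shown). This is just an illustrative sketch with synthetic data standing in for the stereo and laser clouds, not drcsim code:

```python
import numpy as np

def cloud_disparity(visual_pts, collision_pts):
    """Mean distance from each visual-cloud point to its nearest
    collision-cloud point. Assumes both are (N, 3) arrays in the
    same frame; a large value indicates visual/collision mismatch."""
    # Pairwise distances via broadcasting: (N_vis, N_col)
    diffs = visual_pts[:, None, :] - collision_pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return dists.min(axis=1).mean()

# Synthetic stand-in: a "collision" cloud offset 5 cm along x
# from the "visual" cloud, mimicking mismatched model geometry.
rng = np.random.default_rng(0)
visual = rng.uniform(-1.0, 1.0, size=(500, 3))
collision = visual + np.array([0.05, 0.0, 0.0])

print(f"mean disparity: {cloud_disparity(visual, collision):.3f} m")
```

On this synthetic input the reported disparity sits near the injected 5 cm offset; on real data it would expose how far the stereo-derived surface sits from the laser-derived one.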
Visual data is not useful if it doesn't correspond to geometry the robot can actually interact with.