The following link shows some captured point cloud data visualized in RViz, following the DRCSIM tutorials. The robot is in the golf cart, set to drive in a circle, with a box placed nearby to give it something to "see".
http://www.youtube.com/watch?v=lM2UZ-6tRaQ
In the video, the multisense output produces a huge amount of spurious point cloud data. My guess is that the approximately synced cameras are to blame: whenever the multisense head moves, stereo_image_proc
finds too many false correspondence points between the unsynchronized left and right images. I am hoping to do visual odometry and point cloud manipulation as soon as possible, but this makes things a bit difficult.
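To check that hypothesis, one option is a small node that reports the timestamp skew between the left and right images. This is only a sketch: the topic names below are my guesses based on the DRCSIM multisense_sl model, so adjust them to whatever `rostopic list` actually shows.

```python
#!/usr/bin/env python
# Sketch: measure the header-stamp skew between the left and right
# camera images. Topic names are assumptions -- verify with `rostopic list`.
import rospy
import message_filters
from sensor_msgs.msg import Image

def callback(left, right):
    # Difference between the two image header stamps, in milliseconds.
    skew_ms = (left.header.stamp - right.header.stamp).to_sec() * 1000.0
    rospy.loginfo("left/right stamp skew: %.3f ms", skew_ms)

if __name__ == "__main__":
    rospy.init_node("stereo_skew_check")
    left_sub = message_filters.Subscriber(
        "/multisense_sl/camera/left/image_raw", Image)
    right_sub = message_filters.Subscriber(
        "/multisense_sl/camera/right/image_raw", Image)
    # ApproximateTimeSynchronizer pairs the closest stamps within `slop`
    # seconds; consistently large skews would support the sync theory,
    # since a moving head plus stale frames yields bogus disparities.
    sync = message_filters.ApproximateTimeSynchronizer(
        [left_sub, right_sub], queue_size=10, slop=0.1)
    sync.registerCallback(callback)
    rospy.spin()
```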
Is this the general experience with the multisense head so far?