Poor Stereo Output
The following link shows some captured point cloud data visualized in RViz, following the DRCSIM tutorials. The robot is in the golf cart, set to drive in a circle, with a box placed nearby for it to "see".
http://www.youtube.com/watch?v=lM2UZ-6tRaQ
In the video, the multisense output produces a huge amount of spurious point cloud data. My guess is that the cause is the approximately synced cameras: whenever the multisense head moves at all, stereo_image_proc picks up too many false correspondences between the unsynchronized left and right images. I am hoping to do visual odometry and point cloud manipulation ASAP, but this makes things difficult.
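A quick way to sanity-check the sync theory is to watch the header stamps on the two image topics. Here is a rough rospy sketch of what I mean; the topic names are my assumption of the DRCSIM multisense topics and may need adjusting:

#!/usr/bin/env python
# Rough diagnostic sketch: log the timestamp skew between the left and
# right multisense images. Topic names are assumed from the DRCSIM setup
# and may differ on your system.
import rospy
from sensor_msgs.msg import Image

last_stamp = {"left": None, "right": None}

def make_callback(side):
    def callback(msg):
        last_stamp[side] = msg.header.stamp
        if last_stamp["left"] is not None and last_stamp["right"] is not None:
            skew = abs((last_stamp["left"] - last_stamp["right"]).to_sec())
            rospy.loginfo("left/right stamp skew: %.4f s", skew)
    return callback

if __name__ == "__main__":
    rospy.init_node("stereo_sync_check")
    rospy.Subscriber("/multisense_sl/camera/left/image_raw", Image, make_callback("left"))
    rospy.Subscriber("/multisense_sl/camera/right/image_raw", Image, make_callback("right"))
    rospy.spin()

If the skew stays well above zero while the head is moving, that would line up with the spurious-correspondence theory.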
Is this the general experience with the multisense head so far?
Yes, this has been our experience as well. That, and the fact that stereo processing is a CPU hog. Would we be better off with a simpler simulation of stereo that used the depth buffer to generate point clouds from within Gazebo, rather than trying to do separate stereo processing?
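For reference, a depth-buffer approach would skip disparity matching entirely: each pixel's depth is just back-projected through the pinhole camera model. A minimal sketch of that math, assuming a depth image in meters and intrinsics (fx, fy, cx, cy) taken from the CameraInfo message:

# Minimal sketch: back-project a depth buffer to a point cloud with the
# pinhole camera model. This is roughly what a simulated depth camera
# would do in place of stereo matching.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert an HxW depth image (meters) to an Nx3 array of points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.dstack((x, y, z)).reshape(-1, 3)
    # Drop invalid returns (NaN/inf) and zero-depth pixels.
    valid = np.isfinite(points).all(axis=1) & (points[:, 2] > 0)
    return points[valid]

If I recall correctly, the gazebo_ros depth camera plugins can already publish a PointCloud2 along these lines directly from the GPU depth buffer, which would sidestep both the synchronization issue and the stereo CPU cost.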