In the video, the MultiSense output produces a huge amount of spurious point cloud data. My guess is that the cameras are only approximately synchronized: whenever the MultiSense head moves at all, stereo_image_proc finds too many false correspondences between frames that were not captured at the same instant. I am hoping to do visual odometry and point cloud manipulation as soon as possible, but this makes things a bit difficult.
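Until the synchronization issue is sorted out, one possible stopgap (not from the original post, just an assumption about what might help) is to filter the most obvious outliers from the cloud before doing anything downstream. A minimal NumPy sketch of statistical outlier removal, assuming the cloud is an N x 3 array:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest
    neighbours exceeds mean + std_ratio * std over the cloud."""
    # Brute-force pairwise distances; fine for small clouds,
    # use a KD-tree (e.g. scipy.spatial.cKDTree) for large ones.
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    # Mean distance to the k nearest neighbours, excluding the point itself.
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thresh]

# Synthetic example: a dense surface patch plus a few far-away spurious returns.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.05, (200, 3)),   # "real" surface points
                   rng.uniform(5.0, 10.0, (10, 3))])  # spurious points
filtered = remove_outliers(cloud)
print(len(cloud), "->", len(filtered))
```

This is the same idea as PCL's StatisticalOutlierRemoval filter; it will not fix the underlying sync problem, but it can thin out the spurious points enough to experiment with odometry in the meantime.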