Getting Started with Kinect and ROS
Hi all,
I want to set up a Gazebo world and a robot with a Kinect on it, so that I can identify objects in that world. My plan is to grab the Kinect RGB images and process them, but I'm a little confused about how to simulate the Kinect in Gazebo and get the image data into ROS. Does anyone have a quick-start guide, tutorials, or even just some tips? I know OpenNI and PCL are normally used, but I don't know exactly how to use them together with ROS and Gazebo. I searched the ROS forum, but related questions were usually closed as Gazebo questions, so I thought this might be the best place to discuss it.
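In case it helps the discussion: one common way to do this is to attach a depth-camera sensor to your robot's URDF and load the `libgazebo_ros_openni_kinect.so` plugin from `gazebo_plugins`, which publishes the simulated RGB image, depth image, and point cloud as ROS topics. Below is a rough sketch of what that looks like; the link name `camera_link`, the plugin name, and the topic names are assumptions you would adapt to your own robot model, and the exact plugin parameters may differ between Gazebo/ROS versions:

```xml
<!-- Sketch only: assumes your URDF already has a link named "camera_link". -->
<gazebo reference="camera_link">
  <sensor type="depth" name="kinect">
    <update_rate>20.0</update_rate>
    <camera>
      <horizontal_fov>1.047</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.05</near>
        <far>8.0</far>
      </clip>
    </camera>
    <!-- The ROS plugin that bridges the simulated sensor to ROS topics. -->
    <plugin name="kinect_controller" filename="libgazebo_ros_openni_kinect.so">
      <cameraName>camera</cameraName>
      <imageTopicName>/camera/rgb/image_raw</imageTopicName>
      <depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
      <pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
      <frameName>camera_link</frameName>
    </plugin>
  </sensor>
</gazebo>
```

Once this is in place, you should be able to check the topics with `rostopic list` and view the RGB stream with `rqt_image_view`; from there you can subscribe to the image topic (e.g. with `cv_bridge` for OpenCV processing) or feed the point cloud topic into PCL-based nodes.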
I'm also looking for a similar solution; have you found any clues yet?