How exactly does the gazebo kinect work?
I am very new to robotics and sensor simulation, which is why I am currently trying to understand how the Kinect in Gazebo works when combined with the `libgazebo_ros_openni_kinect` plugin. How exactly does it generate its point cloud output? Does it simulate laser emission, or is the point cloud rendered from the camera's field of view and the objects currently in its sight?
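For reference, the kind of setup I am asking about looks roughly like this: a depth sensor in the model's SDF with the plugin attached (a sketch based on the Gazebo depth-camera tutorials; topic and frame names here are just example values, not anything authoritative):

```xml
<sensor name="camera" type="depth">
  <update_rate>20</update_rate>
  <camera>
    <horizontal_fov>1.047</horizontal_fov>
    <image>
      <width>640</width>
      <height>480</height>
      <format>R8G8B8</format>
    </image>
    <clip>
      <near>0.05</near>
      <far>3</far>
    </clip>
  </camera>
  <!-- The plugin I am asking about: it publishes the depth image
       and a point cloud to ROS topics. -->
  <plugin name="camera_plugin" filename="libgazebo_ros_openni_kinect.so">
    <alwaysOn>true</alwaysOn>
    <updateRate>0.0</updateRate>
    <cameraName>camera_ir</cameraName>
    <imageTopicName>/camera/depth/image_raw</imageTopicName>
    <pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
    <frameName>camera_link</frameName>
    <pointCloudCutoff>0.5</pointCloudCutoff>
  </plugin>
</sensor>
```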
Asked by kosjaat on 2019-08-11 11:20:40 UTC