How exactly does the simulated Kinect in Gazebo work?
I am very new to robotics and sensor simulation, which is why I am currently trying to understand how the Kinect in Gazebo works when combined with the libgazebo_ros_openni_kinect plugin. How exactly does it generate its point cloud output? Does it simulate emitted laser/IR beams like the real sensor, or is the point cloud rendered purely from the camera's field of view and the objects currently in its sight?
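For context, here is roughly how I understand such a sensor is attached in a model's SDF (a sketch based on common tutorials; the sensor name, topic names, and frame name are placeholders, not a complete working model):

```xml
<!-- Depth camera sensor with the openni_kinect plugin (illustrative sketch) -->
<sensor name="camera" type="depth">
  <update_rate>20</update_rate>
  <camera>
    <horizontal_fov>1.047</horizontal_fov>
    <image>
      <width>640</width>
      <height>480</height>
      <format>R8G8B8</format>
    </image>
    <clip>
      <near>0.05</near>
      <far>3</far>
    </clip>
  </camera>
  <plugin name="kinect_controller" filename="libgazebo_ros_openni_kinect.so">
    <cameraName>camera</cameraName>
    <imageTopicName>/camera/color/image_raw</imageTopicName>
    <depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
    <pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
    <frameName>camera_link</frameName>
    <pointCloudCutoff>0.4</pointCloudCutoff>
  </plugin>
</sensor>
```

What I am unsure about is what happens behind this configuration, i.e. where the depth values that end up in the point cloud actually come from.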