How to get an h*w point cloud from a laser scan, not just a planar one?
I am trying to convert a laser scan into a point cloud so I can use its Cartesian coordinates for computer vision purposes. But when I do the conversion, the width of the resulting point cloud is only 1. Before this I was using a Kinect sensor, where neither the height nor the width was 1, so I am confused about how to use the Hokuyo/Multisense sensor. I am using Gazebo to get the lidar scan data. To convert the laser scan into PointCloud2 data I am using this program:
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <laser_geometry/laser_geometry.h>
#include <sensor_msgs/LaserScan.h>
#include <sensor_msgs/PointCloud2.h>

class My_Filter {
public:
    My_Filter();
    void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan);

private:
    ros::NodeHandle node_;
    laser_geometry::LaserProjection projector_;
    tf::TransformListener tfListener_;
    ros::Publisher point_cloud_publisher_;
    ros::Subscriber scan_sub_;
};

My_Filter::My_Filter() {
    scan_sub_ = node_.subscribe<sensor_msgs::LaserScan>("/multisense/lidar_scan", 100,
                                                        &My_Filter::scanCallback, this);
    point_cloud_publisher_ = node_.advertise<sensor_msgs::PointCloud2>("/my_cloud", 100, false);
    tfListener_.setExtrapolationLimit(ros::Duration(0.1));
}

void My_Filter::scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan) {
    // Project one planar scan into a PointCloud2, transformed into the "/head" frame.
    sensor_msgs::PointCloud2 cloud;
    projector_.transformLaserScanToPointCloud("/head", *scan, cloud, tfListener_);
    point_cloud_publisher_.publish(cloud);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "my_filter");
    My_Filter filter;
    ros::spin();
    return 0;
}
When I run this node, the height of the point cloud is fine but the width is only 1. And when I visualize the LaserScan, PointCloud, or PointCloud2 in RViz, even while the lidar is rotating I only see 2D points, as shown in the screenshot below. Why is this happening, and what should I do to convert it into a 3D point cloud? Here all the points lie in a single plane.
I need the point data to be like the Kinect sensor's, which has a height and a width, and where the points are in Cartesian form.
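For reference, this is roughly the kind of per-point Cartesian access I am after. A minimal sketch using sensor_msgs::PointCloud2ConstIterator, subscribing to the /my_cloud topic published above (the node name and the rest of the scaffolding here are just placeholders):

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud2_iterator.h>

void cloudCallback(const sensor_msgs::PointCloud2::ConstPtr& cloud)
{
    // An organized cloud (like the Kinect's) has height > 1; a flat list has height == 1.
    ROS_INFO("cloud: width=%u height=%u (organized: %s)",
             cloud->width, cloud->height, cloud->height > 1 ? "yes" : "no");

    // Iterate over the x/y/z fields regardless of whether the cloud is organized.
    sensor_msgs::PointCloud2ConstIterator<float> it_x(*cloud, "x");
    sensor_msgs::PointCloud2ConstIterator<float> it_y(*cloud, "y");
    sensor_msgs::PointCloud2ConstIterator<float> it_z(*cloud, "z");
    for (; it_x != it_x.end(); ++it_x, ++it_y, ++it_z)
    {
        // *it_x, *it_y, *it_z are the Cartesian coordinates of one point.
    }
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "cloud_reader");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/my_cloud", 1, cloudCallback);
    ros::spin();
    return 0;
}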
Asked by hari1234 on 2017-07-29 08:05:22 UTC
Comments
As I remember, the Multisense uses a Hokuyo, which produces planar laser scans, so the output in RViz will be rotating point clouds on a plane. It's the expected output (see the sketch after the tutorial links for one way to accumulate those scans into a 3D cloud).
A couple of Gazebo tutorials that show the expected output of the Multisense lidar_scan in RViz:
1) http://gazebosim.org/tutorials?tut=drcsim_visualization&cat=
2) http://gazebosim.org/tutorials?tut=drcsim_multisense&cat=
Asked by iche033 on 2017-08-01 10:58:30 UTC
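If the goal is a full 3D cloud instead of one plane per scan, one common approach is to accumulate the rotating planar scans over time with the laser_assembler package. A minimal client sketch follows; it assumes a laser_scan_assembler node is already running, remapped to /multisense/lidar_scan, with its fixed_frame parameter set to a frame that does not spin with the laser. The output topic name, the request period, and the node name below are arbitrary choices.

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <laser_assembler/AssembleScans2.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "scan_assembler_client");
    ros::NodeHandle nh;

    // Service advertised by the laser_scan_assembler node.
    ros::service::waitForService("assemble_scans2");
    ros::ServiceClient client =
        nh.serviceClient<laser_assembler::AssembleScans2>("assemble_scans2");
    ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud2>("assembled_cloud", 1);

    ros::Time last = ros::Time::now();
    ros::Rate rate(0.25);  // request an assembled cloud every ~4 s; tune to the spin rate
    while (ros::ok())
    {
        rate.sleep();
        laser_assembler::AssembleScans2 srv;
        srv.request.begin = last;            // all scans received since the previous request
        srv.request.end   = ros::Time::now();
        if (client.call(srv))
            pub.publish(srv.response.cloud); // one 3D PointCloud2 built from many planar scans
        else
            ROS_WARN("assemble_scans2 service call failed");
        last = srv.request.end;
    }
    return 0;
}

Note that the assembled cloud is still unorganized (height 1, width equal to the number of points): a spinning lidar has no natural h*w image grid like the Kinect's depth camera, so the points are consumed as a flat list of Cartesian coordinates.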
Before this I was using a Kinect sensor, so how can I use this data like before? At that time I had the height and width of the point cloud, and each (x, y) index represented a point in the point cloud. How do I do this with the Hokuyo sensor?
Asked by hari1234 on 2017-08-02 08:09:59 UTC