Improving laser scanner simulation: multi echo, material-dependent sensor noise, intensity, etc.

asked 2018-07-25 07:17:24 -0600

wentz

updated 2018-08-27 11:42:47 -0600

I want to improve the simulation of ray sensors / GPU ray sensors. The main topics will be:

  • multi echo
  • material-dependent sensor effects (like scan point shifting on reflective materials)
  • intensity
  • effects caused by particles (like snow, fog, ..)

The idea is to create a lookup table that stores information such as reflectivity, absorption, and transmission for each material. Every model material needs to store the material information as a string (how could I do that?). In the ray sensor I would read this string (by getting the model from the ray's collision?) and look up the info in the table. Then, based on this info, I would add individual noise.
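A minimal sketch of such a lookup table and material-dependent noise, in Python. This is purely illustrative: the material names, property values, and the noise model are assumptions, not anything from the Gazebo API.

```python
import random

# Hypothetical lookup table: material name -> optical properties.
# The values are illustrative, not measured.
MATERIALS = {
    "concrete": {"reflectivity": 0.55, "absorption": 0.40, "transmission": 0.05},
    "mirror":   {"reflectivity": 0.95, "absorption": 0.05, "transmission": 0.00},
    "glass":    {"reflectivity": 0.08, "absorption": 0.02, "transmission": 0.90},
}

def noisy_range(true_range, material, rng):
    """Return a range reading perturbed by material-dependent noise.

    Assumed model: the noise stddev grows as reflectivity drops
    (weaker return), and highly transmissive materials may be
    missed entirely (the beam passes through, e.g. glass).
    """
    props = MATERIALS[material]
    if rng.random() < props["transmission"]:
        return float("inf")  # no return from this surface
    sigma = 0.01 + 0.05 * (1.0 - props["reflectivity"])
    return true_range + rng.gauss(0.0, sigma)
```

In a real plugin the lookup and noise step would sit in the sensor update callback, after the material string has been resolved from the ray's collision.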

Does this make sense? Or is there a better way? Does anybody know if someone has already done something in this direction? I would appreciate any info and hints.


Comments

I also have this question: http://answers.gazebosim.org/question/20200/how-does-ogre-raytracing-for-gpu_laser-work/ where I ask for help with the rendering interface and Ogre.

wentz ( 2018-08-27 11:43:52 -0600 )

Hi Alex, have you made any recent progress on this? I also use the GPU ray sensor and would like a texture-dependent intensity output. Scattering effects based on the mesh surface angle relative to the simulated ray would also be very nice to achieve. Greetings. PS: Some years ago there was a similar question ( http://answers.gazebosim.org/question/5810/why-gpuraysensor-do-not-support-intensity-readings/ ), but Nate's answer to it isn't available anymore...

jared ( 2018-10-30 06:24:08 -0600 )

Hi Jared, I gave up trying it with the GPU laser, because Ogre is, let's say, not as user-friendly as I had hoped (and I don't want to spend too much time on it). So I fumbled around and came up with a somewhat odd combination of gpu_laser and a ray sensor. What I'm doing now is:

0. Create a YAML file with the materials.
1. Get the laser scan from gpu_laser.
2. For every hit within max_range, cast a ray (with the same start point and direction vector).
2.1. When I hit an object and am still under max_range, cast a new ray from behind (or within) the object.

wentz gravatar imagewentz ( 2018-11-02 05:28:21 -0600 )edit

2.2. Then get the material of the object and the corresponding info from my YAML file.
3. Compute stuff :P

wentz gravatar imagewentz ( 2018-11-02 05:30:27 -0600 )edit
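The recast loop described in the comments above could be sketched roughly like this in Python. Here `cast_ray` is only a stand-in for Gazebo's physics ray shape; the scene representation (a list of hit distances and material names along one beam) and all names are illustrative assumptions.

```python
MAX_RANGE = 30.0
EPSILON = 0.01  # restart distance just behind the last hit

def cast_ray(objects, start_dist):
    """Return (hit_distance, material) of the first object past start_dist,
    or None if nothing is hit. `objects` is a list of (distance, material)."""
    hits = [(d, m) for d, m in objects if d > start_dist]
    return min(hits) if hits else None

def multi_echo(objects, material_table):
    """Collect every echo along one beam, annotated with material info
    looked up from the YAML-style table (step 2.2 above)."""
    echoes, start = [], 0.0
    while start < MAX_RANGE:
        hit = cast_ray(objects, start)
        if hit is None or hit[0] > MAX_RANGE:
            break
        dist, mat = hit
        echoes.append((dist, material_table.get(mat, {})))
        start = dist + EPSILON  # step 2.1: recast from behind the object
    return echoes
```

The epsilon offset is what lets the loop "see through" the first surface and produce multiple echoes per beam, at the cost of one extra CPU ray cast per hit.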

When I'm finished I will post a detailed answer about what I've done.

wentz gravatar imagewentz ( 2018-11-02 05:31:33 -0600 )edit

Thank you for your answer, that sounds interesting. So you added the object parameters like reflectivity, absorption, transmission, etc. to the YAML file? And in case your GPU raycast hits something, you cast the standard CPU ray again for that beam? Otherwise you couldn't extract any material information, I guess. So the objects/particles you use in your simulation are still collision objects? Did you use the collide bitmasks so that your robot can still pass through those objects?

jared ( 2018-11-02 11:17:27 -0600 )
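For reference, collide bitmasks are set per collision in SDF, so particle-like objects can stay visible to rays while the robot drives through them if the masks don't overlap. A minimal fragment (the mask values are illustrative):

```xml
<collision name="particle_collision">
  <geometry>
    <sphere><radius>0.02</radius></sphere>
  </geometry>
  <surface>
    <contact>
      <!-- 0x02 here vs. e.g. 0x01 on the robot: masks don't
           overlap, so no contact is generated between them -->
      <collide_bitmask>0x02</collide_bitmask>
    </contact>
  </surface>
</collision>
```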