Robotics StackExchange | Archived questions

How does ogre raytracing for gpu_laser work?

I want to implement a new GPU laser scanner with some improvements to generate more realistic laser data. To achieve this I need information about the object with which each ray collides. I know this might be easier with physics::Ray, but the GPU ray doesn't affect the real-time factor that much (since I have a good graphics card). I have already read the source code, and I think the place to start is the rendering::GpuLaser class, which implements the interface to Ogre.

Can someone give me some hints about that topic? Thanks!

Edit:

I studied the interface a bit more and came to the conclusion that Ogre doesn't use ray tracing but a different technique. It looks to me like it creates a special kind of texture for rendering the depth data. The actual data are stored as RGB in a float array (R is range, G is ???, B is intensity).
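To make the layout concrete, here is a minimal sketch of how such a packed buffer could be unpacked. The three-floats-per-beam layout and the channel meanings (R = range, B = intensity, G unknown) are my reading of the GpuLaser source, not a documented API, and the function name is my own:

```cpp
#include <cstddef>
#include <vector>

// One reading per beam, unpacked from the packed float buffer.
struct LaserReading {
  float range;
  float intensity;
};

// Unpack a GpuLaser-style float buffer where each beam occupies three
// consecutive floats (R = range, G = unknown, B = intensity).
// NOTE: this layout is an assumption based on reading the
// gazebo::rendering::GpuLaser source, not a documented contract.
std::vector<LaserReading> UnpackLaserBuffer(const float *buffer,
                                            std::size_t beamCount) {
  std::vector<LaserReading> readings;
  readings.reserve(beamCount);
  for (std::size_t i = 0; i < beamCount; ++i) {
    readings.push_back({buffer[3 * i + 0],    // R channel: range
                        buffer[3 * i + 2]});  // B channel: intensity
  }
  return readings;
}
```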

Since creating a new interface to Ogre isn't easily done (without a good understanding of Ogre itself), I'm now working on a workaround: a sensor plugin that inherits from GpuRayPlugin and checks the laser range values against the models/links to find the closest one. This is certainly not the best way. If someone has better knowledge, I'm always open to hints.
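The matching step of that workaround can be sketched in isolation: project each beam to its endpoint and test which model's bounding box contains it. This is pure geometry with no Gazebo API; the struct and function names are mine, and the boxes stand in for what a plugin might obtain from the physics models:

```cpp
#include <string>
#include <vector>

struct Vec3 { double x, y, z; };

// Axis-aligned bounding box of a model, as a plugin might assemble
// from the models in the world (names and layout are illustrative).
struct ModelBox {
  std::string name;
  Vec3 min, max;
};

// Return the name of the first model whose (margin-expanded) box
// contains the beam endpoint, or "" if none matches. 'margin' absorbs
// sensor noise and discretization error.
std::string HitModel(const Vec3 &origin, const Vec3 &dir, double range,
                     const std::vector<ModelBox> &models,
                     double margin = 0.05) {
  const Vec3 end{origin.x + dir.x * range,
                 origin.y + dir.y * range,
                 origin.z + dir.z * range};
  for (const auto &m : models) {
    if (end.x >= m.min.x - margin && end.x <= m.max.x + margin &&
        end.y >= m.min.y - margin && end.y <= m.max.y + margin &&
        end.z >= m.min.z - margin && end.z <= m.max.z + margin)
      return m.name;
  }
  return "";
}
```

This only tells you which model the returned range most plausibly belongs to; it can misattribute hits on thin or overlapping geometry, which is part of why it is not the best way.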

Greetings, Alex

Asked by wentz on 2018-08-08 03:47:15 UTC

Comments

I also have this question: http://answers.gazebosim.org/question/20104/improving-laser-scanner-simulation-multi-echo-material-dependent-sensor-noise-intensity-etc/ where I explain what I would like to do.

Asked by wentz on 2018-08-27 11:44:38 UTC

Answers