
Revision history

initial version

How are textures rendered?

I am interested in how textures are rendered and used in Gazebo. For example, I use the following SDF to render a textured heightmap:

<model name="heightmap">
  <link name="height">
    <collision name="collision">
      <geometry>
        <heightmap>
          <size>380 380 201</size>
          <pos>0 0 0</pos>
        </heightmap>
      </geometry>
    </collision>
    <visual name="visual_abcedf">
      <geometry>
        <heightmap>
          <size>380 380 211</size>
          <pos>0 0 0</pos>
        </heightmap>
      </geometry>
    </visual>
  </link>
</model>

I then run a drone carrying a camera that looks straight down at the ground from a fixed altitude. The captured image differs from the original .jpg texture (texture_trimmed.jpg) in the following ways:

  1. It is significantly darker.
  2. There is also a small amount of blurring. I realize I am taking a picture of a picture, but the texture is about 4k x 4k and the camera image is 2048x350, and I have matched the camera's field of view so that it covers a little more than half of the texture, so the pixel resolutions should approximately match. Is interpolation used?
  3. There seems to be a small amount of distortion. This raises the question: does Gazebo treat the texture as an orthographic projection or a perspective view? I would like the texture to be laid down on the heightmap as an orthographic image, and the camera to then take a perspective view of the textured heightmap, as a real camera above terrain would. Is this what Gazebo models?
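For what it's worth, the resolution-matching claim in point 2 can be sanity-checked numerically. The numbers below are assumptions standing in for my actual values: a 4096-pixel texture edge over the 380 m heightmap, a footprint covering 55% of that extent, and a 90-degree horizontal FOV:

```python
import math

# Texture resolution: assumed 4096 px spanning the 380 m heightmap edge.
texture_px = 4096
heightmap_m = 380.0
texture_res = texture_px / heightmap_m   # texture pixels per metre

# Camera resolution: 2048 px wide, covering "a little more than half"
# of the heightmap (55% assumed here).
camera_px = 2048
footprint_m = 0.55 * heightmap_m
camera_res = camera_px / footprint_m     # camera pixels per metre

print(f"texture: {texture_res:.2f} px/m, camera: {camera_res:.2f} px/m")

# Altitude that produces that footprint for a given horizontal FOV
# (90 degrees is an assumed example value).
hfov_deg = 90.0
altitude = (footprint_m / 2) / math.tan(math.radians(hfov_deg) / 2)
print(f"altitude for {hfov_deg:.0f} deg HFOV: {altitude:.1f} m")
```

With these numbers the two resolutions come out within about 10% of each other (roughly 10.8 vs 9.8 px/m), which is why I would not expect this much blurring from resampling alone.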

I would really like to understand the answers to these questions, but the first is the most important at this time.
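On the darkness question, one thing I plan to test is the world's scene lighting, since ambient light scales the rendered brightness of textures. A minimal fragment, assuming it sits inside the <world> element of the world SDF (the values are just an example):

<scene>
  <ambient>1.0 1.0 1.0 1.0</ambient>
  <background>0.7 0.7 0.7 1.0</background>
  <shadows>false</shadows>
</scene>

If full ambient light still produces a darker image than the source .jpg, I would take that as evidence the darkening comes from the material or gamma handling rather than the lighting.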