How are textures rendered?
I am interested in how textures are rendered and used in Gazebo. For example, I use the following SDF to render a textured heightmap:
<model name="heightmap">
  <static>true</static>
  <link name="height">
    <collision name="collision">
      <geometry>
        <heightmap>
          <uri>file://media/materials/textures/wasatch_calls_fort_canyon/mtn_height.jpg</uri>
          <size>380 380 201</size>
          <pos>0 0 0</pos>
        </heightmap>
      </geometry>
    </collision>
    <visual name="visual_abcedf">
      <geometry>
        <heightmap>
          <use_terrain_paging>false</use_terrain_paging>
          <texture>
            <diffuse>file://media/materials/textures/wasatch_calls_fort_canyon/texture_trimmed.png</diffuse>
            <normal>file://media/materials/textures/flat_normal.png</normal>
            <size>380</size>
          </texture>
          <uri>file://media/materials/textures/wasatch_calls_fort_canyon/mtn_height.jpg</uri>
          <size>380 380 211</size>
          <pos>0 0 0</pos>
        </heightmap>
      </geometry>
    </visual>
  </link>
</model>
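As an aside on the heightmap source image: the Gazebo heightmap tutorials state that the height image should be square with a side of 2^n + 1 pixels (129, 257, 513, ...), and images that don't match get resampled; a lossy JPEG can also add compression noise to the terrain. A quick sanity check for the side length (a hypothetical helper, not part of any Gazebo API):

```python
def valid_heightmap_side(px: int) -> bool:
    """True if px == 2**n + 1 for some n >= 0 (e.g. 129, 257, 513).

    For px >= 2, px - 1 must be a power of two, which holds exactly
    when (px - 1) & (px - 2) == 0.
    """
    return px >= 2 and ((px - 1) & (px - 2)) == 0


# 513x513 is a valid heightmap size; 380x380 would be resampled.
print(valid_heightmap_side(513))  # True
print(valid_heightmap_side(380))  # False
```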
I then run a drone containing a camera looking down at the ground from a set altitude. The captured image differs from the original texture image (texture_trimmed.png) in the following ways:
- It is significantly darker.
- There is also a small amount of blurring. I realize I am taking a picture of a picture, but the texture is about 4096x4096 while the camera image is 2048x350, and I have matched the camera's field of view so that it covers a little more than half of the texture, so the pixel resolutions should approximately match. Is interpolation being used?
- There seems to be a small amount of distortion. This raises the question: does Gazebo assume the texture is an orthographic projection, or a projective view? I would like the textured heightmap to be created as an orthographic image laid over the heightmap. The camera then takes a projective view of the textured heightmap, as a real camera above terrain would. Is this what Gazebo models?
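To make the resolution comparison in the second point concrete, here is a rough ground-sample-distance check for a nadir camera over flat terrain. The altitude and horizontal FOV below are placeholders, not values from the SDF; the 4096-pixel texture width over a 380 m terrain comes from the description above:

```python
import math

def ground_footprint(hfov_rad: float, altitude_m: float) -> float:
    """Width (m) of the ground strip seen by a straight-down camera
    over flat terrain, from the horizontal field of view."""
    return 2.0 * altitude_m * math.tan(hfov_rad / 2.0)

def metres_per_pixel(footprint_m: float, pixels: int) -> float:
    return footprint_m / pixels

# Texture resolution on the ground: 380 m terrain, ~4096 px texture.
texture_mpp = 380.0 / 4096.0

# Hypothetical camera: 60 deg horizontal FOV at 180 m altitude, 2048 px wide.
footprint = ground_footprint(math.radians(60.0), altitude_m=180.0)
camera_mpp = metres_per_pixel(footprint, 2048)

print(f"texture: {texture_mpp:.3f} m/px, camera: {camera_mpp:.3f} m/px")
```

If the camera's metres-per-pixel figure is coarser than the texture's, the renderer has to minify the texture, and some filtering (interpolation/mipmapping) of the kind asked about above is expected even before any projection effects.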
I would really like to understand the answers to these questions, but the first is the most important at this time.
Thanks.