Robotics StackExchange | Archived questions

Utilizing GPU to speed up ROS/Gazebo

I am running a ROS-Kinetic/Gazebo simulation of the PR2 robot on a remote server in a Docker container. It runs headless, so it doesn't spawn the GUI, but I was wondering if there's some way of utilizing the NVIDIA CUDA GPUs on the server to speed up the simulation. When I run the "nvidia-smi" command it doesn't show my process as one of the processes using the GPU, so I guess it doesn't use the GPU by default. Any other recommendations on how to speed up the simulation are also welcome. Thanks!
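For context, here is a minimal sketch of what I assume is needed just to make the GPUs visible inside the container at all (assuming Docker 19.03+ with the NVIDIA Container Toolkit; the image name and launch command below are placeholders):

    # Expose the server's GPUs to the container (Docker >= 19.03 with the
    # NVIDIA Container Toolkit; older setups use --runtime=nvidia instead).
    # "my-kinetic-pr2-image" is a placeholder image name.
    docker run --gpus all \
        -e NVIDIA_DRIVER_CAPABILITIES=compute,graphics,utility \
        my-kinetic-pr2-image \
        roslaunch gazebo_ros empty_world.launch gui:=false

    # Check that the driver is reachable from inside the running container:
    docker exec <container-id> nvidia-smi

My understanding is that even with the GPU exposed, it mainly helps rendering and GPU-based sensor plugins; Gazebo's default ODE physics runs on the CPU, which may be why nothing shows up in nvidia-smi.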

Asked by benwex93 on 2019-05-21 09:29:29 UTC

Comments

I have the same problem. When I run the "nvidia-smi" command, I can see that about 500MB of GPU memory is used, but no process is shown using the GPU. If I run a simulation with more models, the frame rate drops from 60 to 30, but the GPU usage stays the same.
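For reference, a sketch of how per-process GPU usage can be sampled more explicitly than the default nvidia-smi table (run on the host if possible; from inside a container the process list may be empty because of PID-namespace isolation):

    # Sample per-process GPU utilization (type "C" = compute, "G" = graphics).
    nvidia-smi pmon -c 1

    # Device-level utilization over a few samples, to see whether the GPU
    # (rather than the CPU) is the bottleneck when the frame rate drops.
    nvidia-smi dmon -c 5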

Asked by Joe on 2021-02-06 20:00:14 UTC

Answers