Utilizing GPU to speed up ROS/Gazebo

asked 2019-05-21 09:29:29 -0500

benwex93

I am running a ROS Kinetic/Gazebo simulation of the PR2 robot on a remote server in a Docker container. It runs headless, so it doesn't spawn the GUI, but I was wondering if there is some way to use the server's NVIDIA CUDA GPUs to speed up the simulation. When I run the "nvidia-smi" command, it doesn't show my process as one of the processes using the GPU, so I assume the simulation doesn't use the GPU by default. Any other recommendations for how to speed up the simulation are also welcome. Thanks!
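
One thing worth ruling out first is whether the container was started with GPU access at all; if it wasn't, nothing inside it can use the GPU. A minimal sketch, assuming the host has the NVIDIA container runtime installed (the image name below is a placeholder, not from the question):

    # Docker 19.03+ with nvidia-container-toolkit: expose all host GPUs
    docker run --gpus all -it my_ros_kinetic_image bash

    # Older nvidia-docker2 setups use the runtime flag instead
    docker run --runtime=nvidia -it my_ros_kinetic_image bash

    # Inside the container, this should list the GPU if pass-through worked
    nvidia-smi

If nvidia-smi fails or shows no devices inside the container, the simulation cannot be using the GPU regardless of how Gazebo is configured.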


Comments

I have the same problem. When I run the "nvidia-smi" command, I can see that about 500 MB of GPU memory is used, but no process is shown as using the GPU. If I run a simulation with more models, the frame rate drops from 60 to 30, but the GPU usage stays the same.

Joe ( 2021-02-06 19:00:14 -0500 )
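
Regarding the observation in the comment above: a quick way to see which processes are actually doing GPU work is to query nvidia-smi per process while the simulation is running. A small sketch using standard nvidia-smi options:

    # Refresh overall utilization once per second while the simulation runs
    watch -n 1 nvidia-smi

    # List only compute (CUDA) processes and their memory footprint;
    # a headless gzserver appears here only if it runs CUDA work
    nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv

If the simulation process never shows up in the compute list, the memory reported by nvidia-smi is likely being used by other graphics work on the server rather than by Gazebo itself.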