DRCSIM: how shall we jointly agree on simulation parameters?

I realize there is still a lot of uncertainty (will we use Bullet for the VRC or not? how much will performance improve in the ODE version with better load sharing among threads and a shared memory interface?), but I would like to get agreement that we will maximize the number of ODE/Bullet solver iterations (iters) on the VRC cloud-based simulator. This will improve simulation robustness, the simulated IMU measurements, and the simulated force measurements.

For example, on my Xeon-based machine (dual Intel(R) Xeon(R) CPU E5-2687W 0 @ 3.10GHz) with Quadro 6000 and Tesla K20 GPUs (this machine is a slightly newer version of the cloud machine we plan to use), I was able to get 125 iterations (vs. 40 in the distributed version) with a real time factor in the range 1.00-1.05 (it varies slowly; I disabled the real time limitation to improve performance) with the following parameter values:

DRCSIM 2.0.1 as is, except for these physics settings:

    <update_rate>0</update_rate>
    <ode>
      <solver>
        ...
        <iters>125</iters>
      </solver>
    </ode>

Turning real time control back on with an update_rate of 1000 gives a real time factor of 0.95.
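For anyone reproducing these numbers, the real time factor is just simulated time divided by wall-clock time, which makes the per-step compute budget easy to see. A quick back-of-envelope check (plain Python; dt = 0.001 s is my assumption for the default step size here):

```python
# Back-of-envelope check of the numbers above.
# Assumptions: physics step dt = 0.001 s, update_rate = 1000 Hz.
dt = 0.001          # physics step size (seconds)
update_rate = 1000  # physics updates per wall-clock second

# Simulated seconds advanced per wall-clock second:
rtf = dt * update_rate
print(rtf)  # 1.0 -> exactly real time

# Wall-clock budget available for each physics step:
step_budget_ms = 1000.0 / update_rate
print(step_budget_ms)  # 1.0 ms per step, shared by all solver iterations

# Rough per-iteration budget at iters = 125:
iters = 125
print(step_budget_ms / iters * 1000.0)  # 8.0 microseconds per solver iteration
```

So a measured real time factor of 0.95 just means each step is averaging a bit over its 1 ms budget.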

I was not able to decrease dt to 0.0005 and still get a real time factor near 1.0, but perhaps further improvements will make that possible. In that case, we will need to hunt for the best tradeoff of iters and dt as a group.
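The iters/dt tradeoff above can be sketched with a simple cost model. This is only an assumption on my part: that solver cost per simulated second scales roughly as iters / dt, and that iters = 125 at dt = 0.001 is what the machine can sustain in real time.

```python
# Sketch of the iters/dt tradeoff under a fixed compute budget.
# Assumed cost model: solver work per simulated second ~ iters / dt,
# calibrated so that iters = 125 at dt = 0.001 runs in real time.
budget = 125 / 0.001  # affordable solver iterations per simulated second

def max_iters(dt):
    """Largest iteration count that stays within the same budget at step size dt."""
    return int(budget * dt)

print(max_iters(0.001))   # 125 (the baseline above)
print(max_iters(0.0005))  # 62: halving dt roughly halves the affordable iters
```

Under this model, halving dt buys solver accuracy per step back only by giving up iterations, which is why the tradeoff needs to be settled jointly rather than per team.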

It would be really good to nail down the exact dt and iters we will use well in advance of the VRC, say by the end of March, after we have some experience with Bullet and the shared memory interface.

I am cross-posting this to the DARPA forum as well.

Thanks, Chris
