Robotics StackExchange | Archived questions

Best practice for robot control

I would like to design some control algorithm for a mobile robot in ROS and simulate it with Gazebo.

The goal is to develop such structure of the project to be able to switch the control mechanism from simulation to the real robot. So far I have following project structure:

robot_description
   ├ urdf
   | └ robot.urdf.xacro
   └ meshes
     └ robot_mesh.dae

robot_gazebo
   ├ launch
   | └ robot_world.launch
   ├ plugins
   | └ some_world_plugin
   |   └  SomeWorldPlugin.cc
   └ worlds
     └ robot.world

robot_control
   ├ config
   | └ robot_control.yaml
   └ launch
     └ robot_control.launch

robot_teleop
   ├ resource
   | └ gui_controller.ui
   ├ scripts
   | └ run_gui_controller.py
   └ src
     └ gui_teleop_plugin
       ├ __init__.py
       └ gui_teleop_plugin.py

I omit the CMakeLists.txt and package.xml files in every package in the above listing for simplicity.

The robot_description package contains the .urdf model of the robot with all the links and joints defined. The .urdf file also includes the Gazebo plugin `libgazebo_ros_control.so` to handle the `<transmission>` tags in the URDF model and load the appropriate controller manager.

The robot_gazebo package contains the .world file with the simulation environment defined and a launch file that launches the world, spawns the robot from the robot_description package and runs the controllers from the robot_control package. It also contains a Gazebo world plugin that is loaded in the .world file and publishes messages to a custom ROS topic `/robot/is_in_area` (a std_msgs::Bool carrying information about the position of the robot: is the robot inside some specified area? True or False).
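For reference, that topic can be checked with a minimal rospy subscriber (topic name and message type as described above):

```
#!/usr/bin/env python
# Minimal check that the world plugin is publishing.
import rospy
from std_msgs.msg import Bool

def area_cb(msg):
    rospy.loginfo("robot is %s the area", "inside" if msg.data else "outside")

if __name__ == '__main__':
    rospy.init_node('is_in_area_echo')
    rospy.Subscriber('/robot/is_in_area', Bool, area_cb)
    rospy.spin()
```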

The robot_control package contains the definitions of the joint controllers with their PID gains, and a .launch file that creates the topics for joint states and control commands: `/robot/joint_states`, `/robot/joint1_controller/command`, etc.

The robot_teleop package contains a Qt plugin for rqt. The GUI structure (widget) is defined in the `gui_controller.ui` file. The plugin, its signals and the connections between buttons and functions are defined in the `gui_teleop_plugin.py` file. A button press calls a function which publishes to the `/robot/joint1_controller/command` topic. The script for running the plugin is `run_gui_controller.py`.
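A button callback boils down to a single publish call. Here is a stripped-down sketch of such a plugin class (the `forward_button` object name and the Float64 command type are placeholders, not necessarily what the actual .ui file defines):

```
import os

import rospkg
import rospy
from python_qt_binding import loadUi
from python_qt_binding.QtWidgets import QWidget
from qt_gui.plugin import Plugin
from std_msgs.msg import Float64


class GuiTeleopPlugin(Plugin):
    def __init__(self, context):
        super(GuiTeleopPlugin, self).__init__(context)
        self.setObjectName('GuiTeleopPlugin')

        # Load the widget layout from the .ui file in the resource folder.
        self._widget = QWidget()
        ui_file = os.path.join(rospkg.RosPack().get_path('robot_teleop'),
                               'resource', 'gui_controller.ui')
        loadUi(ui_file, self._widget)
        context.add_widget(self._widget)

        # Publisher for the joint command topic created by robot_control.
        self._cmd_pub = rospy.Publisher('/robot/joint1_controller/command',
                                        Float64, queue_size=1)

        # 'forward_button' is a placeholder object name from the .ui file.
        self._widget.forward_button.clicked.connect(self._on_forward)

    def _on_forward(self):
        self._cmd_pub.publish(Float64(1.0))
```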

This way I have a robot in my Gazebo world simulation that can be controlled via GUI buttons on the screen, and a topic that carries information about whether the robot is inside or outside a specific area (SomeWorldPlugin.cc). It works and makes sense to me this way. If there is a best practice that contradicts my structure, please tell me.

Now I want to make an automatic controller that moves the robot and makes decisions based on the state of the world/robot. I was thinking I would create a new package, robot_auto or similar, containing a Gazebo model plugin. The model plugin would move the robot by publishing to the `/robot/joint1_controller/command` topic, just as the GUI does, and it would read `/robot/is_in_area` and change the robot's direction when the robot is out of the area. The desired behavior is a kind of bouncing off the area borders (roughly sketched below).
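Regardless of whether it ends up in a Gazebo model plugin or a standalone node, the logic I have in mind is roughly this rospy sketch (topic names as above; the command value and loop rate are placeholders):

```
#!/usr/bin/env python
# Sketch of the "bounce off the area borders" behavior.
import rospy
from std_msgs.msg import Bool, Float64


class AutoController(object):
    def __init__(self):
        self._direction = 1.0
        self._in_area = True
        self._cmd_pub = rospy.Publisher('/robot/joint1_controller/command',
                                        Float64, queue_size=1)
        rospy.Subscriber('/robot/is_in_area', Bool, self._area_cb)

    def _area_cb(self, msg):
        # Reverse direction on the transition from inside to outside the area.
        if self._in_area and not msg.data:
            self._direction *= -1.0
        self._in_area = msg.data

    def run(self):
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            # 0.5 is an arbitrary command magnitude for illustration.
            self._cmd_pub.publish(Float64(self._direction * 0.5))
            rate.sleep()


if __name__ == '__main__':
    rospy.init_node('robot_auto_controller')
    AutoController().run()
```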

Now I'm not sure if that is the best idea. After all, I want the automatic control to be pluggable into the real-world robot without Gazebo. What is the best practice for such a goal?

Asked by kumpakri on 2018-10-03 06:02:04 UTC

Comments

Answers

If you want to train your automatic control algorithm, the most important code should be in robot_teleop.

Both Gazebo and the real-world robot need a joint_state_controller and some command controllers.

I also think you need to write a sensor model to extract data from Gazebo and a hardware package to connect to your real robot.

e.g.: https://github.com/itfanr/elfin_robot

Answered by itfanr on 2018-10-08 19:53:52 UTC

Comments