
Implementing a neighbours detection Sensor

asked 2017-11-02 06:29:35 -0600

elcymon

I am trying to implement a sensor to enable a mobile robot to identify neighbours within a limited range (say 5 metres). Currently, I am doing this within a ModelPlugin.

The steps taken are:

1. Get a list of models in the world.
2. Iterate through the models and compute their distance from the robot in question.
3. If the distance is within the specified sensing range, accept the model as a neighbour.

A code snippet for this is:

physics::Model_V models = this->world->GetModels();
std::string detections = "";
this->neighbours = 0;
math::Vector3 my_p = this->model->GetWorldPose().pos;
std::string my_name = this->model->GetName();
for (auto m : models)
{
    std::string m_name = m->GetName();
    // Detect all neighbouring robots, their distances and how many there are,
    // skipping this robot itself
    if (m_name.find("m_4wrobot") != std::string::npos && m_name != my_name)
    {
        math::Vector3 m_p = m->GetWorldPose().pos;
        double dist = my_p.Distance(m_p);
        if (dist <= 5)
        {
            detections += m_name + ":" + std::to_string(dist) + " ";
            this->neighbours += 1;
        }
    }
}

I would like to know whether this is the most efficient way of doing this. The problem is that I also want to use this method to detect other objects in the world, and this can lead to hundreds (>500) of objects (robots and targets) in the world.


1 Answer


answered 2017-11-03 20:22:01 -0600

Carlos Agüero

This is similar to what we're doing in the LogicalCameraSensor. The linear complexity is not great, but it shouldn't be terribly slow with a few hundred models.

One thing to try is to use the logical camera sensor directly with a 360-degree horizontal field of view. You should also tune the near and far parameters to set your detection range. The complexity won't change, but at least you could reuse most of the code.
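As a sketch, such a sensor might be declared in SDF roughly like this. The element names follow the SDF `logical_camera` specification, but the values are illustrative, and it is worth verifying in your Gazebo version whether a full 2π `horizontal_fov` behaves as a true 360-degree sweep rather than a very wide frustum:

```xml
<sensor name="neighbour_detector" type="logical_camera">
  <logical_camera>
    <near>0.05</near>              <!-- minimum detection distance, metres -->
    <far>5.0</far>                 <!-- the 5 m neighbour range -->
    <horizontal_fov>6.283</horizontal_fov>  <!-- ~2*pi radians -->
    <aspect_ratio>1.0</aspect_ratio>
  </logical_camera>
  <update_rate>10</update_rate>
  <visualize>false</visualize>
</sensor>
```

The sensor then publishes the names and poses of models inside the frustum, so the plugin only has to filter the reported names instead of iterating over every model in the world.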



If all the objects are static (i.e. they don't move around during the simulation), and they are all spawned during simulator initialization (i.e. not spawned ad hoc during the simulation run), then organizing the objects in a k-d tree will improve nearest-neighbour search from linear to roughly O(log n). I am doing this for my "forest world", where I am simulating the effect of each tree canopy on GPS integrity as a function of distance to a tree. In my world, all my trees are spawned at initialization.

Galto2000 ( 2017-11-11 09:27:41 -0600 )

Another approach for increasing the search performance, if your simulation machine has a good NVIDIA GPU, is to implement a massively parallel KNN search in CUDA. Here is one example: , but there are plenty more if you search for NN search on GPUs.

Galto2000 ( 2017-11-11 09:37:18 -0600 )

@Galto2000: My objects are not static. @carlos: adjusting the near, far and fov parameters can be really hard to get right. Do you have a sample configuration that will detect all objects within a limited distance around the camera pose?

elcymon ( 2017-11-29 04:51:06 -0600 )