Robotics StackExchange | Archived questions

Implementing a neighbours detection Sensor

I am trying to implement a sensor to enable a mobile robot to identify neighbours within a limited range (say, 5 metres). Currently, I am doing this within a ModelPlugin.

The steps taken are:

1. Get a list of models in the world.
2. Iterate through the models and compute each one's distance from the robot in question.
3. If the distance is within the specified sensing range, accept the robot as a neighbour.

The code snippet for this is:

physics::Model_V models = this->world->GetModels();
std::string detections = "";
this->neighbours = 0;
math::Vector3 my_p = this->model.GetWorldPose().pos;
std::string my_name = this->model.GetName();
for (auto m : models)
{
    std::string m_name = m->GetName();
    // Only consider other robots (name prefix "m_4wrobot"), skipping this model itself.
    if (m_name.find("m_4wrobot") != std::string::npos && m_name.compare(my_name) != 0)
    {
        // Record each neighbouring robot, its distance, and the running count.
        math::Vector3 m_p = m->GetWorldPose().pos;
        double dist = my_p.Distance(m_p);
        if (dist <= 5)
        {
            detections += m_name + ":" + std::to_string(dist) + " ";
            this->neighbours += 1;
        }
    }
}

I would like to know whether this is the most efficient way of doing this. The problem is that I also want to use this method to detect other objects in the world, which could mean hundreds (>500) of objects (robots and targets) in the world.

Asked by elcymon on 2017-11-02 06:29:35 UTC

Comments

Answers

This is similar to what we're doing in the LogicalCameraSensor. The linear complexity is not great, but it shouldn't be terribly slow with a few hundred models.

One thing to try is to use the LogicalCamera sensor directly with a 360-degree horizontal field of view. You should then also play with the near and far parameters to set your "field of view". The complexity won't change, but at least you could reuse most of the code.
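A logical camera is configured in SDF; a minimal fragment might look like the following (the values are illustrative placeholders, and since the logical camera uses a frustum, you would want to verify whether a very wide horizontal_fov actually gives full 360-degree coverage in your Gazebo version):

```xml
<sensor name="neighbour_detector" type="logical_camera">
  <logical_camera>
    <near>0.2</near>            <!-- minimum detection distance, metres -->
    <far>5.0</far>              <!-- maximum detection distance: the 5 m sensing range -->
    <horizontal_fov>6.283</horizontal_fov>  <!-- radians; intended as a full circle -->
    <aspect_ratio>1.0</aspect_ratio>
  </logical_camera>
  <always_on>true</always_on>
  <update_rate>10</update_rate>
  <visualize>true</visualize>
</sensor>
```

The sensor then reports the names and poses of models inside the frustum, which replaces the manual distance loop in the plugin.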

Asked by Carlos Agüero on 2017-11-03 20:22:01 UTC

Comments

If all the objects are static (i.e. they don't move around during the simulation), and they are all spawned during simulator initialization (i.e. not spawned ad hoc during the simulation run), then organizing the objects in a k-d tree will improve nearest-neighbour search from O(n) to about O(log n). I am doing this for my "forest world", where I am simulating the effect of each tree canopy on GPS integrity as a function of distance to a tree. In my world, all my trees are spawned at initialization.

Asked by Galto2000 on 2017-11-11 10:27:41 UTC

Another approach for increasing search performance, if your simulation machine has a good NVIDIA GPU, is to implement a massively parallel k-NN search in CUDA. Here is one example: http://vincentfpgarcia.github.io/kNN-CUDA/ , but there are plenty more if you search for nearest-neighbour search on GPUs.

Asked by Galto2000 on 2017-11-11 10:37:18 UTC

@Galto2000: My objects are not static.

@Carlos: adjusting the near, far, and fov parameters can be really hard to get right. Do you have a sample configuration that will detect all objects within a limited distance around the camera pose?

Asked by elcymon on 2017-11-29 05:51:06 UTC