Obstacle Avoidance

Autonomously avoid walking into objects.

By the end of this tutorial you will be able to:

  1. Extend your applications to utilise different types of sensory information
  2. Give your robots a basic level of autonomy by closing the sensing-acting control loop
  3. Stop your robots from walking into obstacles detected with a distance sensor

When we observe complex behaviours it’s tempting to think that the mechanisms that gave rise to those behaviours must be equally complex. But often, complexity emerges from simple underlying mechanisms.

This point was made quite beautifully by Valentino Braitenberg in a short book called Vehicles. The book presents a series of thought experiments in which we are encouraged to imagine the behaviour that would be displayed by a simple robot (vehicle) if we connected its sensors and motors together in different configurations and with wires that have different properties.

For example, a vehicle with a light sensor on its left that drives a wheel on its left, and a sensor on its right that drives a wheel on its right, will turn away from light sources. A second vehicle with its wires swapped over will instead turn towards lights. For the engineers of those two vehicles it would be straightforward to explain the behaviour; “when the light is on the left of the first vehicle the left sensor reading is stronger than the right sensor reading, so the left wheel spins faster than the right, and the vehicle turns to the right, away from the light”.
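
To make the wiring concrete, here is a toy sketch of how the two wiring schemes map a pair of light readings onto a pair of wheel speeds. It is not part of this tutorial's application, and the function name and readings are purely illustrative:

def braitenberg_wheel_speeds(left_light, right_light, crossed=False):
    # Uncrossed wiring: each sensor drives the wheel on its own side, so the
    # brighter side spins its wheel faster and the vehicle turns away from the light.
    # Crossed wiring: each sensor drives the opposite wheel, so the vehicle
    # turns towards the light instead.
    if crossed:
        return right_light, left_light  # (left wheel speed, right wheel speed)
    return left_light, right_light

# Light source on the vehicle's left: the left sensor reads stronger than the right.
print(braitenberg_wheel_speeds(0.8, 0.2))                # left wheel faster -> turns right, away from the light
print(braitenberg_wheel_speeds(0.8, 0.2, crossed=True))  # right wheel faster -> turns left, towards the light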

However, when we don’t know how a system was constructed we tend to ascribe more intelligence to it when we describe its behaviour; the first vehicle “hates the light” or “wants to escape”, whereas the second “loves the light” or “wants to get warm”. Consider how you might otherwise have described the behaviour of a moth circling a flame.

Braitenberg was a neuroscientist, and he was thinking about the challenge of understanding how the circuitry of our brains shapes our behaviours. His perspective yields two important take-home messages for us. First, by engineering very simple control systems and adding complexity step by step, we can gain insights into how complex behaviours arise that might be more difficult to obtain by carrying out traditional experiments. Second, behaviour results from the interaction between an agent’s control system, its body, and its environment. Consider how much more complex a moth’s orbit might appear with a damaged wing, or if we lit a second flame.

So it’s time to start introducing low-level autonomy. In this tutorial we’ll remove ourselves from the (sensing, decision-making, acting) control loop by allowing our robots to make some decisions for themselves. We’ll start by engineering an extremely simple decision-making system that requires no input from the user, and by observing the complexity of the behaviour that it generates when interacting with its environment.

About the Application

The purpose of the application you will develop here is to generate autonomous locomotion without walking into obstacles. The application connects to a robot, identifies its forward-facing distance sensor, and then runs a control loop that translates each distance reading into a locomotion command.
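
At a glance, main.py does three things (a simplified outline, not the literal code from the repository):

myrobot = ...                          # 1. connect to the robot via the SDK
front_sonar = ...                      # 2. identify the forward-facing ultrasound sensor
while True:                            # 3. the closed sensing-acting loop
    distance = ...                     #    sense: read the front sonar
    motor_command = ...                #    decide: map the distance to velocities
    myrobot.motor.set(motor_command)   #    act: send the motor command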

Before we get stuck in:

If you are just browsing to get a sense of what's possible, take a look at the code online.

Running the Application

Navigate to the Control/ObstacleAvoidance/Python folder in the SDK Tutorials repository:

cd SDK-Tutorials/Control/ObstacleAvoidance/Python

Execute the example program:

python main.py

Investigation

Once the application is running you will see the robot's camera feed, and the robot will immediately begin to move forward. As it approaches an obstacle or wall it will begin to turn to the left. When it gets very close to an obstacle it will reverse. The resulting behaviour should be that your robot moves around the environment without colliding with walls or obstacles.

Let's take a closer look at the key parts of the code:

After connecting, we need to identify the distance sensor that is furthest forward on the robot's body. The first step is to sample from the exteroception channel to obtain all candidate distance sensors:

# Keep sampling until a valid exteroception sample is returned
ext_sample, err = myrobot.exteroception.get(True)
while not err.Success:
    ext_sample, err = myrobot.exteroception.get(True)
    time.sleep(0.1)

front_sonar = identify_front_sonar(ext_sample.Range)

The identify_front_sonar function then compares the positions of the ultrasound sensors to find the one that is mounted furthest forward:

def identify_front_sonar(range_sensors):
    # Keep only the ultrasound (sonar) range sensors
    sonars = []
    for sensor in range_sensors:
        if sensor.OperationType == bow_data.Range.OperationTypeEnum.Ultrasound:
            sonars.append(sensor)

    # Return the name of the sonar with the largest X position,
    # i.e. the one mounted furthest forward on the robot's body
    front_sonar_x_pos = -100.0
    front_sonar_name = None
    for sonar in sonars:
        if sonar.Transform.Position.X > front_sonar_x_pos:
            front_sonar_x_pos = sonar.Transform.Position.X
            front_sonar_name = sonar.Source

    return front_sonar_name
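
Note that this relies on the convention that the X axis of the robot's body frame points forwards, so the ultrasound sensor with the largest X coordinate is the one mounted closest to the front of the robot.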

Then inside the main control loop we sample the exteroception channel again and pick out the front sensor so we can get its distance reading:

ext_sample, err = myrobot.exteroception.get(True)
if not err.Success or ext_sample is None:
    continue

# Find the reading that came from the front sonar identified earlier
sonar = None
for range_sensor in ext_sample.Range:
    if range_sensor.Source == front_sonar:
        sonar = range_sensor
        break
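
If no reading from the front sonar is present in the sample then sonar will still be None at this point, so it is worth checking for that before relying on it in the decision logic below.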

Next we construct a motor message that translates the distance reading into an appropriate locomotion action:

if sonar.Data == -1:
    print("Invalid Sonar Data: ", sonar.Data, " meters")
    motor_command.Locomotion.RotationalVelocity.Z = 0.5     # no usable reading, turn on the spot

elif sonar.Data == 0:
    print("No obstruction in range: ", sonar.Data, " meters")
    motor_command.Locomotion.TranslationalVelocity.X = 0.2  # nothing ahead, drive forward

elif sonar.Min + 0.5 < sonar.Data < sonar.Min + 1.5:
    print("Obstruction approaching sensor minimum: ", sonar.Data, " meters")
    motor_command.Locomotion.RotationalVelocity.Z = 0.5     # obstacle getting close, turn away

elif sonar.Data < sonar.Min + 0.5:
    print("Obstruction too close to maneuver, reverse: ", sonar.Data, " meters")
    motor_command.Locomotion.TranslationalVelocity.X = -0.2  # too close to turn, back away

else:
    print("Obstruction detected at safe range: ", sonar.Data, " meters")
    motor_command.Locomotion.TranslationalVelocity.X = 0.2  # obstacle still far away, keep driving forward
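
As an illustration (the numbers here are made up), if the sensor's minimum range sonar.Min were 0.3 m, then a reading of 1.0 m falls inside the 0.8–1.8 m band and makes the robot turn, a reading of 0.6 m makes it reverse, and a reading of 2.5 m leaves it driving forward.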

Finally we send the motor message to close the loop between sensing and action:

myrobot.motor.set(motor_command)
