
Step 2: Sensing & Acting

Welcome to this tutorial series on robotics powered by the BOW SDK. This is Step 2 - Locomotion.

Recommended Robots

The recommended robots for this tutorial are:

  • a quadruped like the DEEP Robotics - Lite 3
  • a humanoid like the SoftBank Robotics - Nao

Prerequisites

The key libraries are:

  • OpenCV - a library of programming functions mainly for real-time computer vision

Before trying these tutorials, make sure you have followed the instructions from the dependencies step to set up the development environment for your chosen programming language.

These tutorials also assume you have installed the BOW Hub, available for download from https://bow.software, and that you have subscribed with a Standard Subscription (or above) or are using the 30-day free trial, which is required to simulate robots.

The Control Loop

Let's restate the control loop (a minimal sketch in code follows the list):

  1. Sense the Environment
  2. Using the sensations from the Environment, decide on the next actions to perform that achieve your current goal
  3. Carry out the planned actions within the environment
  4. Repeat from 1
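As pseudocode, the loop looks like this. The function names here are placeholders for the real BOW calls introduced below, not the SDK API:

# Placeholder sense-decide-act loop; sense(), decide() and act() stand in
# for the real BOW calls used later in this tutorial.
def control_loop(robot):
    while True:
        sensations = sense(robot)      # 1. sample the robot's sensors
        actions = decide(sensations)   # 2. choose actions that serve the goal
        act(robot, actions)            # 3. execute the actions on the robot
        # 4. loop back to sensing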

In the previous step we learned how to sample the robot's visual modality. In this step, you will play the part of the decision maker, controlling the robot's movement based on the video stream received from the robot.

Locomotion

In this step we will tackle locomotion, or the act of moving around within the robot's world.

To achieve this, the program will do the following (rough code sketches follow the Sense and Act lists):

Sense

  • Connect to a robot by calling QuickConnect
  • Get all the images sampled by the robot by calling GetModality("vision")
  • Place the data for the returned images into OpenCV
  • Visualise the OpenCV images
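Put together in Python, the sensing side looks roughly like the sketch below. Treat the binding names (bow.quick_connect, robot.get_modality, the .image field) as assumptions to be checked against bow_tutorial_2.py, not a definitive API reference:

import cv2
import bow_client as bow  # assumed module name for the BOW Python SDK

# Assumed signature -- the tutorial source shows the real call.
robot, error = bow.quick_connect(app_name="Tutorial 2")

while True:
    # Sense: fetch the latest images from every camera on the robot.
    images, err = robot.get_modality("vision")  # assumed signature
    for i, img in enumerate(images):
        cv2.imshow(f"Camera {i}", img.image)  # .image assumed to hold a numpy array
    if cv2.waitKey(1) & 0xFF == 27:  # Esc closes the windows
        break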

Decide

  • User decides where they want to move

Act

  • User presses keys on the keyboard based on a decision made
  • Read keypress events from the keyboard
  • Send locomotion decision to robot by calling SetModality("motor")
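Continuing from the sketch above, the acting side might look roughly like this. The MotorSample message and its locomotion fields are assumptions modelled on the generic SetModality("motor") call, so check the tutorial source for the real structure:

import bow_utils  # assumed helper module providing the message types

key = cv2.waitKey(1) & 0xFF  # read a keypress from the OpenCV window

command = bow_utils.MotorSample()  # assumed message type for the motor modality
if key == ord('w'):
    command.locomotion.translational_velocity.x = 1.0   # forward
elif key == ord('s'):
    command.locomotion.translational_velocity.x = -1.0  # backward
elif key == ord('a'):
    command.locomotion.rotational_velocity.z = 1.0      # turn left
elif key == ord('d'):
    command.locomotion.rotational_velocity.z = -1.0     # turn right

robot.set_modality("motor", command)  # assumed Python spelling of SetModality("motor")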

Go ahead and open a simulated robot of your choice. For this tutorial we recommend one of the robots listed above.

You can still try connecting to one of the industrial robots, if only to check the SDK's behaviour when the vision modality is not available. However, if you do want a video stream from the robot, select a robot that has a vision modality.

Running the Tutorial

To run the tutorial, navigate to the Step_2_Sensing_Acting folder within the SDK Tutorials repository that you cloned earlier.

cd SDK-Tutorials/Step_2_Sensing_Acting

Then navigate to the folder for the language that you would like to use:

cd Python

Finally, execute the example program according to your language's conventions. For Python this would be:

python bow_tutorial_2.py

Each language folder contains a README that explains in more detail how to run the program in that language's environment. Make sure to check out the GitHub repository here.

Interacting with the Tutorial

Once the tutorial is running, you will be able to see the robot's vision modality (as set up in Step 1) and drive the robot around the simulated environment using the following keyboard controls.

Keyboard Controls

Key | Action        | Description
W   | Move Forward  | Moves the robot forward
A   | Turn Left     | Turns the robot to the left
S   | Move Backward | Moves the robot backward
D   | Turn Right    | Turns the robot to the right
E   | Strafe Right  | Moves the robot to the right without turning
Q   | Strafe Left   | Moves the robot to the left without turning
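In code, the table above reduces to a lookup from key to velocity components, along these lines (the signs and axis conventions are assumptions; the tutorial source fixes the real ones):

# Hypothetical key -> (forward, strafe, turn) velocity mapping.
KEY_BINDINGS = {
    'w': ( 1.0,  0.0,  0.0),  # move forward
    's': (-1.0,  0.0,  0.0),  # move backward
    'a': ( 0.0,  0.0,  1.0),  # turn left
    'd': ( 0.0,  0.0, -1.0),  # turn right
    'q': ( 0.0,  1.0,  0.0),  # strafe left
    'e': ( 0.0, -1.0,  0.0),  # strafe right
}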

Stopping

To stop the tutorial program, press Ctrl + C in the running terminal.
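If you want your own program to shut down cleanly on Ctrl + C, you can catch KeyboardInterrupt, continuing from the sketches above. Note that robot.disconnect() is an assumed cleanup call; check the tutorial source for the real one:

try:
    control_loop(robot)  # the sense-decide-act loop from earlier
except KeyboardInterrupt:
    # Ctrl + C lands here: close the OpenCV windows and the robot connection.
    cv2.destroyAllWindows()
    robot.disconnect()  # assumed cleanup call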
