Step 1: Sensing
Welcome to this tutorial series on robotics powered by the BOW SDK. This is Step 1 - Sensing.
The recommended robots for this tutorial are:
- a quadruped such as the DEEP Robotics Lite 3
- a humanoid such as the SoftBank Robotics NAO
Prerequisites
The key libraries are:
- OpenCV - a library of programming functions mainly for real-time computer vision
Before trying these tutorials, make sure you have followed the instructions from the dependencies step to set up the development environment for your chosen programming language.
These tutorials also assume you have installed the BOW Hub, available for download from https://bow.software, and that you have subscribed with a Standard Subscription (or above) or are using the 30-day free trial, which is required to simulate robots.
How to use BOW
At BOW we encourage you to think about robotics problems in a very human way, and to solve them in a human way too. Every real-world task can be solved with the same three-part control loop:
1. Sense the environment
2. Using the sensations from the environment, decide on the next actions to perform that achieve your current goal
3. Carry out the planned actions within the environment
4. Repeat from step 1
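The loop above can be sketched as a plain Python skeleton. Note that `sense()`, `decide()` and `act()` here are illustrative placeholders standing in for steps 1-3, not BOW SDK calls:

```python
# Minimal sketch of the sense-decide-act control loop.
# sense(), decide() and act() are placeholders, not BOW SDK functions.

def sense(world):
    """Step 1: sample the environment (here, just read a value)."""
    return world["distance_to_goal"]

def decide(observation):
    """Step 2: choose the next action that achieves the current goal."""
    return "advance" if observation > 0 else "stop"

def act(world, action):
    """Step 3: carry out the chosen action in the environment."""
    if action == "advance":
        world["distance_to_goal"] -= 1

def control_loop(world):
    """Step 4: repeat from step 1 until the goal is reached."""
    while True:
        observation = sense(world)
        action = decide(observation)
        if action == "stop":
            return world
        act(world, action)

world = {"distance_to_goal": 3}
control_loop(world)
print(world["distance_to_goal"])  # 0 once the goal is reached
```

The BOW SDK handles steps 1 and 3 for you; the `decide()` step is where your own logic or AI models plug in.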
The BOW SDK was specifically designed to greatly simplify the programming for steps 1 and 3, and to make the code for these steps universal across different robots, languages and operating systems.
Crucially we leave all the decision-making up to you or your AI models!
Visual Sensing
The first step in any robotics journey is understanding your environment. This tutorial demonstrates how to perceive the visual environment around the robot by sampling the vision modality.
To achieve this we will:
- Connect to a robot by calling QuickConnect
- Get all the images sampled by the robot by calling GetModality("vision")
- Place the data for the returned images into OpenCV
- Visualise the OpenCV images
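The third step, placing the image data into OpenCV, amounts to reinterpreting a flat byte buffer as a height x width x channels NumPy array, which OpenCV functions accept directly. The helper below is a hedged sketch: the field names and layout of the samples returned by GetModality("vision") are assumptions, so consult the BOW SDK reference for the exact structure.

```python
import numpy as np

# Hedged sketch: turn a raw image buffer into an array that OpenCV
# functions (cv2.imshow, cv2.cvtColor, ...) can consume. The buffer
# layout (row-major, uint8, interleaved channels) is an assumption.

def to_opencv_image(raw_bytes, width, height, channels=3):
    """Reinterpret a flat byte buffer as a (height, width, channels) image."""
    flat = np.frombuffer(raw_bytes, dtype=np.uint8)
    return flat.reshape((height, width, channels))

# Example with a dummy 4x2 RGB frame (2 * 4 * 3 = 24 bytes of zeros):
frame = to_opencv_image(bytes(24), width=4, height=2)
print(frame.shape)  # (2, 4, 3)
```

Once the data is in this form, `cv2.imshow` can display it for the visualisation step.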
Go ahead and open a simulated robot of your choice. Check the top of this tutorial for our recommended robots!
You can still try connecting to one of the industrial robots, if only to check the SDK's behaviour when the vision modality is not available. However, if you want a video stream from the robot, select a robot that has a vision modality.
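Whichever robot you pick, it is good practice to check that the vision modality is actually present before using its images. The sketch below only illustrates that pattern: `get_vision_samples()` and the robot dictionaries are stand-ins, not the BOW SDK API, whose real calls return an error value you should inspect.

```python
# Hedged sketch of the "check before use" pattern for a modality.
# get_vision_samples() is a stand-in for the SDK's GetModality("vision");
# the (samples, error) return shape here is an assumption for illustration.

def get_vision_samples(robot):
    """Return (samples, error): error is set if the robot has no camera."""
    if "vision" not in robot["modalities"]:
        return None, "vision modality not available"
    return robot["frames"], None

camera_robot = {"modalities": ["vision"], "frames": ["frame0"]}
arm_robot = {"modalities": ["motor"], "frames": []}

samples, err = get_vision_samples(arm_robot)
if err:
    print("No video stream:", err)
```

An industrial arm without cameras would take the error branch; a quadruped or humanoid with a camera would return its frames.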
Running the Tutorial
To run the tutorial, navigate to the Step_1_Vision folder within the SDK Tutorials repository that you cloned earlier.
Then navigate to the language directory you would like to use and run the example program according to your operating system and language standards. Each language folder contains a README that explains how to run the program in that specific language's environment, so make sure to check out the GitHub repository.
Interacting with the Tutorial
Once the tutorial is running, you will see the robot's vision modality output to the screen as a CV2 window. This provides a live view of the simulated scene, which you can verify by interacting with the robot inside Webots and monitoring the output.
Stopping
To cancel the tutorial program's execution, press Ctrl + C in the running terminal.
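In Python, Ctrl + C raises a KeyboardInterrupt, so a tutorial-style main loop can catch it and shut down cleanly. The `cleanup` callback below is a placeholder for whatever teardown your program needs (closing the robot connection, destroying OpenCV windows, ...), not a specific BOW SDK call.

```python
# Hedged sketch: catching Ctrl + C for a clean shutdown.
# loop_body and cleanup are caller-supplied placeholders.

def run(loop_body, cleanup):
    """Run loop_body repeatedly until the user presses Ctrl + C."""
    try:
        while True:
            loop_body()
    except KeyboardInterrupt:
        # Reached when Ctrl + C is pressed in the running terminal.
        cleanup()
        return "stopped cleanly"
```

This keeps resources from being left open when the tutorial is interrupted mid-loop.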