Deeper Insights
See the world from your robot's perspective.
By the end of this tutorial you will be able to:
- Use the BOW Insight application to explore a robot’s structure and capabilities
- Monitor the data flowing through SDK messages in real time
- See your robot reach for targets that you can drag around using a mouse
Robotics is all about understanding the difference between what should have happened as a consequence of an action taken, and what actually did happen.
These discrepancies are central to perceiving the world. If a command to lift an arm does not result in a change in joint angle at the shoulder, this may be evidence that the arm is restrained. If a command to bend the knee does not result in a change in camera image, this may be evidence that the foot is not on the floor. If a command to ask a question is not followed by a reply, this may be evidence that a robot is alone. Reducing these kinds of discrepancies is also central to controlling action, in the way that turning to reduce the distance of the brightest star from the centre of your vision is a pretty good method for heading North.
When developing applications for robots it is always important to consider the world from their perspectives, and to consider the raw information that a robot has available to it for improving its perceptions and actions. Using the BOW SDK, this information is all contained in the message structures that are exchanged between your robot and your application. And while those message structures have been carefully designed to be flexible, generic, efficient, and intuitive to construct in code, the data they contain can be a lot to keep track of at once, especially in real-time, while a robot is moving. To help interpret the data more easily, and to help you explore the movement capabilities of your robots more directly, we developed a graphical tool called BOW Insight.
The tool is called Insight because it provides a window into the current state of your robot’s sensors, a view of its three-dimensional structure and current pose, a perspective on the world around the robot that is aligned with its (inverse kinematics) coordinate system, and an interface for defining movement targets in that space. It also provides a visual way to explore the structure of the SDK message types.
The 3D robot view is updated using forward kinematics calculations and the current joint settings, obtained from a constant stream of proprioception messages from the BOW SDK. Importantly, Insight is not a simulator, because it faithfully represents the sensor data about what actually happened instead of anticipating what might have happened with reference to physics calculations. Think of Insight as a reflection of the ground truth, against which those all-important discrepancies may be revealed.
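To make that concrete, the sketch below polls the same proprioception stream that Insight renders and prints the reported joint angles. It assumes the quick_connect helper and get_modality accessor seen in the BOW Python examples, and the RawJoints field name is a guess; check your SDK version for the exact schema.

```python
import bow_client as bow
import bow_utils

# Connect and open only the proprioception channel. quick_connect and
# get_modality follow the BOW Python examples; treat names as assumptions.
log = bow_utils.create_logger("insight-sketch", True)
robot, error = bow.quick_connect(pylog=log, modalities=["proprioception"])
if not error.Success:
    raise RuntimeError(f"Connection failed: {error.Description}")

# One sample from the stream Insight uses to pose its 3D model.
sample, err = robot.get_modality("proprioception", True)
if err.Success and sample is not None:
    for joint in sample.RawJoints:  # assumed field name for the joint list
        print(joint.Name, joint.Position)

robot.disconnect()
bow.close_client_interface()
```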
In this tutorial you’ll be guided through how to launch BOW Insight, and some of the ways in which its features can assist you with robotics development.
Take a look at the following to see what we're aiming for:
About the Application
Here we will be using the BOW Insight application to explore a robot’s structure and capabilities. The structure of the application is as follows:
Before we get stuck in:
If you are just browsing to get a sense of what's possible, take a look at the code online.
Running the Application
Launch BOW Insight from the BOW Hub.
Investigation
The key components of the graphical interface are as follows:
- A button for establishing a connection to any of the currently available BOW-supported robots.
- A set of tools for visualising sensory information, populated by the data returned from the get modality calls for the robot's different modalities (mirrored in the code sketch after this list).
- A set of tools for issuing commands to the robot, such as speech and locomotion.
- A central 3D graphics area.
- An interface for setting IK objectives in real time, which we'll examine in more detail shortly.
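Under the hood, each of these panels corresponds to a modality that an application opens when it connects. The sketch below mirrors Insight's setup in code; the modality names and the quick_connect helper follow the BOW Python examples, but treat them as assumptions for your SDK version.

```python
import bow_client as bow
import bow_utils

log = bow_utils.create_logger("insight-panels", True)

# Open the channels behind Insight's panels: proprioception for the 3D
# pose view, vision for the camera pane, motor and speech for the
# command tools. Modality names are assumptions; check the SDK docs.
robot, error = bow.quick_connect(
    pylog=log,
    modalities=["proprioception", "vision", "motor", "speech"],
)
print("Connected:", error.Success)
```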
First, choose one of your robots from the dropdown menu. We can then use the mouse to inspect the model and change the 3D view, and use the display area to visualise the model however we like, including its collision capsule objects.
On the right we have a live stream of the robot's cameras, which we can select and switch between.
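In code, those frames arrive through the vision modality. A minimal sketch, assuming get_modality returns one image per camera as in the BOW Python examples:

```python
import bow_client as bow
import bow_utils

log = bow_utils.create_logger("camera-sketch", True)
robot, error = bow.quick_connect(pylog=log, modalities=["vision"])

# Fetch the latest frames; with multiple cameras the SDK returns one
# image per camera, which is what Insight's selector switches between.
images, err = robot.get_modality("vision", True)
if err.Success and images:
    print(f"{len(images)} camera stream(s) available")

robot.disconnect()
```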
Underneath, the messages area allows us to inspect in detail the messages that the BOW SDK is providing to Insight, showing, for example, the parameters and current position of each joint.
Beneath that we have a speech area where we can send text for our robot to vocalise in the real world.
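The equivalent in code is a message on the speech modality. A hedged sketch; passing a plain string to set_modality follows the BOW tutorials, but verify the payload type for your SDK version:

```python
import bow_client as bow
import bow_utils

log = bow_utils.create_logger("speech-sketch", True)
robot, error = bow.quick_connect(pylog=log, modalities=["speech"])

# Ask the robot to vocalise a phrase, as Insight's speech panel does.
# The plain-string payload is an assumption; it may be a wrapper type.
if error.Success:
    robot.set_modality("speech", "Hello from Insight!")
    robot.disconnect()
```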
Switching over to the kinematics tab on the left, we have a list of the robot's effectors, as well as the target positions and angles that we are setting for each to achieve.
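Dragging a handle in Insight amounts to publishing an IK objective for that effector over the motor modality. The sketch below shows the general shape of such a command; the MotorSample construction and every objective field name here are illustrative assumptions, not the SDK's confirmed schema.

```python
import bow_client as bow
import bow_utils

log = bow_utils.create_logger("ik-sketch", True)
robot, error = bow.quick_connect(pylog=log, modalities=["motor"])

# Build a motor message carrying one IK objective: move a named effector
# to a target position in the robot's IK coordinate frame. Every field
# below (Objectives, TargetEffector, PoseTarget) is an assumption.
sample = bow_utils.MotorSample()
objective = sample.Objectives.add()
objective.TargetEffector = "left_hand"
objective.PoseTarget.Position.X = 0.3  # assumed units of metres
objective.PoseTarget.Position.Y = 0.1
objective.PoseTarget.Position.Z = 0.2
robot.set_modality("motor", sample)
robot.disconnect()
```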
We can switch to a different robot simply by unticking the connect button, selecting the new robot's name from the menu, and pressing the connect button again.
Again, we can configure each of the areas however we want to inspect the robot's structure, its sensors, and the messages received; and by moving the handles for a given effector we can issue motor commands.
Finally, we can trigger any pre-defined action patterns, such as a locomotion routine for this robot.
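From code, a routine like this is typically driven by streaming velocity targets over the motor modality. A final hedged sketch; the Locomotion velocity fields follow the BOW tutorials but may differ in your SDK version:

```python
import time
import bow_client as bow
import bow_utils

log = bow_utils.create_logger("walk-sketch", True)
robot, error = bow.quick_connect(pylog=log, modalities=["motor"])

# Stream a forward walking velocity for two seconds, then stop.
# Locomotion.TranslationalVelocity is assumed from the BOW tutorials.
sample = bow_utils.MotorSample()
sample.Locomotion.TranslationalVelocity.X = 0.2  # assumed m/s, forward
stop_at = time.time() + 2.0
while time.time() < stop_at:
    robot.set_modality("motor", sample)
    time.sleep(0.05)

sample.Locomotion.TranslationalVelocity.X = 0.0
robot.set_modality("motor", sample)
robot.disconnect()
```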