Modalities

This page outlines the modalities available through the BOW SDK, how they should be used and where they exist within the BOW data message standard.

What is a modality?

To simplify the process of controlling robots, we break their capabilities down into easy-to-understand channels based on human equivalents. For data coming from the robot, these channels equate to human senses; for data being sent to the robot, they equate to the ways we interact with the environment, whether through action or speech. Each of these channels is known as a modality.

Sensory Modalities

Sensory modalities represent all the data collected by the sensors on the robot and, just as in humans, this data is broken down into individual senses. The table below shows all the available BOW modalities, the human equivalent for each, and a breakdown of the types of sensors covered by each modality.

| Modality Name | Human Sense Equivalent | Robot Sensors |
| --- | --- | --- |
| Audition | Hearing | Microphones |
| Vision | Sight | Cameras, RGB-D sensors |
| Tactile | Touch | Haptics, touch sensors |
| Proprioception | Awareness of the body's own position | Joint angles, joint velocities, joint torques, joint names, joint min and max, IK objective status, effector positions |
| Interoception | Awareness of one's own internal state, e.g. hunger, thirst, temperature | Battery voltage, connection status, fault codes, accelerometers, gyroscopes, tilt sensors |
| Exteroception | Awareness of the surrounding environment, e.g. temperature, pressure, light levels | Sonar, lidar, thermometers, pressure sensors, light sensors, cliff sensors, GPS, compass |
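
As an illustration of how a sensory modality is consumed, the sketch below polls the Vision channel for image samples. The module name `bow_client` and the helpers `quick_connect`, `open_modality`, `get_modality`, `close_modality` and `disconnect` are assumptions made for this example, not confirmed BOW SDK calls; consult the SDK reference for the exact API and message types.

```python
# Illustrative sketch only: the module name and the quick_connect /
# open_modality / get_modality helpers are assumptions, not confirmed
# BOW SDK calls. Check the SDK reference for the exact API.
import bow_client as bow  # assumed Python client module name

robot = bow.quick_connect(app_name="sense-demo")  # hypothetical connect helper
robot.open_modality("vision")                     # subscribe to the Vision channel

try:
    while True:
        # Each call returns the latest samples published on the channel,
        # e.g. one image per camera for the Vision modality.
        frames = robot.get_modality("vision")
        if frames:
            print(f"received {len(frames)} image sample(s)")
except KeyboardInterrupt:
    robot.close_modality("vision")
    robot.disconnect()
```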

Control Modalities

Control modalities represent all the data that can be transmitted to the robot to make it act upon its environment, whether through movement, audio or some other mechanism. These mechanisms are broken down into individual channels, detailed in the table below.

| Modality Name | Human Equivalent | Robot Output |
| --- | --- | --- |
| Motor | Physical actions | Motor movements, both joints and wheels |
| Speech | Talking | Text output (terminal), text-to-speech audio output |
| Voice | Talking | Direct audio streaming to speakers |
| Emotion | Emotion | Expressiveness in the form of sounds, lights and predetermined motion patterns |
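
The same pattern applies in reverse for control modalities: the sketch below sends a joint command on the Motor channel and a text utterance on the Speech channel. Again, the function names and the shapes of the command messages are assumptions used for illustration; the real message types are defined by the BOW data message standard.

```python
# Illustrative sketch only: function names and message shapes are
# assumptions, not confirmed BOW SDK calls or message types.
import time
import bow_client as bow  # assumed Python client module name

robot = bow.quick_connect(app_name="act-demo")  # hypothetical connect helper

# Motor: a joint-space command (hypothetical dict-shaped message).
robot.open_modality("motor")
robot.set_modality("motor", {"head_yaw": 0.3, "head_pitch": -0.1})

# Speech: plain text, rendered by the robot driver as text-to-speech.
robot.open_modality("speech")
robot.set_modality("speech", "Hello from the BOW SDK")

time.sleep(1.0)  # give the commands time to be acted on
robot.disconnect()
```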

Robot Specific Modality

Although we strive for a 100% robot-agnostic system, there are rare cases where a robot has a unique attribute that either cannot be represented by the existing set of modalities or where doing so would be overly complicated or unnecessary, for example a robot-specific sensor or a set of joints that are purely visual. For these rare cases we provide the flexibility of custom, robot-specific two-way data channels, but we aim to fit all robot functionality within the existing modalities wherever possible.
