Inverse Kinematics
Compute joint angles automatically to reach targets.
By the end of this tutorial you will be able to:
- Move a robot's effectors to positions/orientations specified in external (world) coordinates
- Define robot movements in terms of objectives for the in-built inverse kinematics solver
- Coordinate movements of multiple effectors while avoiding self-collisions
Figuring out where the parts of a robot will end up after setting its joints in a certain way involves a fairly large number of calculations (forward kinematics), but the mathematics involved is routine (trigonometry and matrix multiplications). By comparison, the inverse problem of figuring out how to set the joints so that a body part ends up at a particular location turns out to be much more difficult than it might sound.
We can demonstrate the difficulty of inverse kinematics with a simple exercise. Lift your shoulder, bend your elbow, flex your wrist, and extend your finger. Notice that if you repeat the same set of lifts, bends, flexes, and extensions, your fingertip will end up in the same place every time. Next, place your fingertip on an object in front of you, and keep moving your shoulder, elbow, and wrist while maintaining contact. Notice that every joint setting you try is a valid solution to the task of getting your finger to the location of the object.
More formally, every joint setting determines exactly one fingertip position, but a single fingertip position can be reached by many different joint settings: the mapping from joint settings to fingertip positions is many-to-one, and the mapping from fingertip positions to joint settings is one-to-many. A one-to-many mapping may seem like a good thing (there are lots of ways to solve the same problem), but it’s not. When a problem has a potentially infinite number of solutions, it can become impossible to state formally which particular solution we want.
For a robot with a simple kinematic tree, like a robot arm in a factory, it is often possible to set the mathematics up with extra constraints that define in advance which solution we want. But for robots with more complex kinematic trees, such as humanoids, no such closed-form solution exists, and the problem has to be solved numerically.
Inverse kinematics is critical to being able to develop robotics applications intuitively. If we want a robot to pick up an object in front of it, we simply want to tell it (or have it estimate) where that object is and have it work out the correct joint settings automatically, rather than specifying those joint settings ourselves. Another way to think of it is that inverse kinematics aligns all of the joints to a single coordinate system, rather than a separate one for each joint, providing a common reference frame in which to tell robots how we want them to move.
The BOW SDK incorporates a general-purpose solution to the inverse kinematics problem that works for robots with arbitrarily complex kinematic trees, which can be used to accurately position an arbitrary number of body parts simultaneously. This, we believe, is a game-changer. In this tutorial we’ll be exploring how the inverse kinematics (IK) solver that is built into the BOW SDK works, and how it can be used for advanced robotic control, simply by tweaking the motor messages a little.
Take a look at the following to see what we're aiming for:
About the Application
When running, the application will send a sequence of positions, each defined in world coordinates, for one of your robot's effectors to follow, demonstrating how to invoke the inverse kinematics solver. The structure of the application is as follows:
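As a rough sketch of that structure (not the exact tutorial code: the waypoint values, effector name, and timing below are illustrative placeholders), the main loop simply steps through a list of world-coordinate targets and hands each one to the SendObjective function examined later in this tutorial:

```python
import time

# Illustrative (x, y, z) waypoints in world coordinates; the real application
# uses its own sequence of positions.
WAYPOINTS = [
    (0.3,  0.2, 0.8),
    (0.3,  0.0, 0.9),
    (0.3, -0.2, 0.8),
]

def run():
    effector = "left_hand"  # example effector name; use one that exists on your robot
    while True:
        for coordinate in WAYPOINTS:
            # SendObjective is the function examined in the Investigation section;
            # it takes the effector name and a world-coordinate target.
            SendObjective(effector, coordinate)
            time.sleep(2.0)  # give the solver and the robot time to reach each target
```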
Before we get stuck in:
If you are just browsing to get a sense of what's possible, take a look at the code online.
Running the Application
Navigate to the Control/InverseKinematics/Python folder in the SDK Tutorials repository:
Execute the example program:
Investigation
Let's take a closer look at the key parts of the code:
The key part of the code example is in the SendObjective function, which is called inside the main sampling loop:
This function takes as arguments the name of the effector for which an inverse kinematics objective is to be set, and the location (x, y, z in world coordinates) to which that effector should be moved. We start by creating a motor sample and requesting high accuracy from the solver (note that the solver is very efficient, so the speed-accuracy trade-off will be negligible for most applications):
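A minimal sketch of that step is below. The import name bow_utils and the message and field names (MotorSample, IKSettings, the high-accuracy preset) are assumptions based on this description; the code in the tutorials repository is the authoritative version.

```python
import bow_utils  # BOW SDK message definitions (import name assumed)

# Create an empty motor sample that will carry the IK objective.
motor_sample = bow_utils.MotorSample()

# Request a high-accuracy solve; the exact settings field and enum names
# may differ in the SDK, so treat these as placeholders.
motor_sample.IKSettings.Preset = bow_utils.IKOptimiser.HIGH_ACCURACY
```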
Next we create an objective, which defines a condition for the inverse kinematics solver to satisfy. The following specifies a general-purpose objective for the user-defined effector:
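Continuing the sketch, the objective might be built along these lines; effector_name is the parameter passed into SendObjective, and the field and enum names are again assumptions rather than the SDK's exact definitions.

```python
# Name the effector we want to move and declare that the target will be
# supplied as a 4x4 transform (field/enum names assumed).
objective = bow_utils.ObjectiveCommand()
objective.TargetEffector = effector_name
objective.PoseTarget.TargetType = bow_utils.PoseTarget.TargetTypeEnum.TRANSFORM
```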
Note that the target type is a TRANSFORM, which corresponds to a 4 by 4 transform matrix holding information about 3D positions and rotations.
The inverse kinematics solver works with two main types of objective. A position objective is a requirement to minimise the distance between the effector and a supplied target location. An orientation objective is a requirement to minimise the angle between the effector and a supplied target vector.
For this example we will focus on position only; i.e., we don't care which direction the effector is pointing so long as it ends up in the right place. The following sets the weight (values should be between 0 and 1) for the position objective to 1 and the weight for the orientation objective to 0. This means that any orientation information contained in the transform we supply will effectively be ignored:
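Continuing the sketch (field names assumed), that looks something like:

```python
# Care only about where the effector ends up, not which way it points:
# full weight on position, zero weight on orientation.
objective.PoseTarget.LocalObjectiveWeights.Position = 1.0
objective.PoseTarget.LocalObjectiveWeights.Orientation = 0.0
```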
Now we can update the position components of the target transform with the supplied coordinate:
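In the sketch, that amounts to writing the x, y, z values into the translation part of the target transform (the rotation part is left untouched because its weight is zero); the field names are assumed.

```python
# coordinate is the (x, y, z) world-coordinate target passed into SendObjective.
objective.PoseTarget.Transform.Position.X = coordinate[0]
objective.PoseTarget.Transform.Position.Y = coordinate[1]
objective.PoseTarget.Transform.Position.Z = coordinate[2]
```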
The inverse kinematics solver can simultaneously solve for a large number of effector/objective combinations, but we'll just add one to the list for this example:
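Continuing the sketch (the Objectives field name is assumed):

```python
# Attach our single objective to the motor sample. Further objectives for
# other effectors could be appended here and solved simultaneously.
motor_sample.Objectives.append(objective)
```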
Finally, we send the motor message using motor.set:
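In the Python SDK this is roughly the following, assuming robot is the connected robot handle created when the application starts up:

```python
# Send the completed motor sample; the built-in IK solver runs on receipt
# and drives the joints toward the requested target.
robot.motor.set(motor_sample)
```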
Identify the section of the code that updates the target position for the robot's effector to reach, and modify it to create a different trajectory.
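For example, one possible modification (purely illustrative, with made-up centre, radius, and step count) is to replace the original sequence of positions with points on a circle in the Y-Z plane:

```python
import math

# A circular trajectory of radius 0.15 m in the Y-Z plane, centred at an
# arbitrary example point in front of the robot.
CENTRE = (0.35, 0.0, 0.9)  # (x, y, z) in world coordinates
RADIUS = 0.15
STEPS = 24

circle_waypoints = [
    (
        CENTRE[0],
        CENTRE[1] + RADIUS * math.cos(2 * math.pi * i / STEPS),
        CENTRE[2] + RADIUS * math.sin(2 * math.pi * i / STEPS),
    )
    for i in range(STEPS)
]

# Feed these waypoints to SendObjective in place of the original sequence.
```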