LeRobot is a project that offers open-source robot arm designs and corresponding model training pipelines. Recently I followed their guide to build my own robot arms. One thing that interested me is how they calibrate the robot arms automatically.
To collect robot arm training data, there are two arms: the leader arm and the follower arm. A human controls the leader arm and the follower arm moves accordingly; this is called teleoperation. For this to work, we need to calibrate the robot arms so that the two arms output the same positions when they are in the same physical configuration.
From LeRobot’s documentation, to calibrate the arms, one has to place each arm in 1. the zero position, 2. the rotated position, and 3. the rest position. This procedure must be done on both the leader arm and the follower arm. However, it was not intuitive to me why we need to configure the robot in this way. In this blog post, I try to think through the calibration procedure in depth. Here are the screenshots from the documentation:
Why does it work?
To think about calibration, we need to first define our calibration goals.
In LeRobot, the goal is to map each motor's raw position (the same units used for the motor's goal position) to degrees in [-180deg, +180deg]. I think a range of 180 degrees is an intuitive way to represent joint values, and it aligns with DH parameters. Notably, the raw position ranges over roughly [-2^31, +2^31] because the motor is a continuous joint. This is quite unlike the revolute joints taught in basic robotics lectures. However, after calibration, we can treat each joint of the robot arm as a normal revolute joint.
We should take a step back and think about what 180deg should be mapped from. By observing the motor, we find that rotating it by 360 degrees clockwise increases the raw position by 4096. Therefore, we simply define position 0 → 0deg and position 2048 → 180deg. Let us call [-2048, +2048] the intermediate position range.
Our goal now is to first map the raw position to this intermediate position; converting the intermediate position to degrees is then straightforward. For the first step, we need to consider the homing offset and the drive direction.
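As a sanity check, the second step is just a linear scaling. Here is a minimal sketch (not the LeRobot code), assuming the 4096-steps-per-turn resolution described above:
RESOLUTION = 4096  # raw steps per full 360-degree turn

def intermediate_to_degrees(pos: float) -> float:
    # 2048 steps correspond to a half turn, i.e. 180 degrees.
    return pos / (RESOLUTION // 2) * 180

print(intermediate_to_degrees(2048))   # 180.0
print(intermediate_to_degrees(-1024))  # -90.0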
Homing offset
First of all, if the two arms are in the same physical configuration, their intermediate positions should be roughly the same. To this end, we move both arms to the same physical configuration, for example the zero configuration. To make things easier, we define that at the zero configuration, the intermediate position should be 0. Therefore, the negative of the raw position read at that moment is exactly the offset we need, called the homing offset (homing_offset).
In the implementation, they do not use the actual raw position to compute the homing_offset. Instead, they round it to the closest multiple of 1024 (90deg) first, which I guess is because the motor raw positions are generally physically aligned with multiples of 1024 when a joint is at 0, 90, 180, or 270 degrees. This way, we absorb the small physical error in the zero configuration we set by hand.
Moreover, the homing_offset calculated here does not yet take the drive direction into account.
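Here is a sketch of the rounding idea (the real helper is compute_nearest_rounded_position; its exact implementation may differ). The example raw positions are taken from the follower's start_pos shown later:
import numpy as np

def nearest_rounded_position(raw_pos: np.ndarray, resolution: int = 4096) -> np.ndarray:
    # Round each raw position to the nearest multiple of a quarter turn (1024 steps = 90 degrees).
    quarter_turn = resolution // 4
    return (np.round(raw_pos / quarter_turn) * quarter_turn).astype(np.int64)

zero_pos = np.array([2013, 3027, 3011])                  # hand-placed zero configuration, slightly off
homing_offset = 0 - nearest_rounded_position(zero_pos)   # target intermediate position at zero is 0
print(homing_offset)                                     # [-2048 -3072 -3072]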
Drive direction
Secondly, we need to make sure that rotation happens in the same direction for both arms. The directions could differ if the two arms are not assembled the same way. Here, we physically rotate each joint by 90 degrees, so that the two arms again end up in the same physical configuration. Note that we have to do this for every joint, or otherwise that joint's direction cannot be determined. Since we rotate each joint by +90 degrees, we expect its raw position to increase by 1024. However, for some motors it may instead decrease by 1024, indicating that the motor's raw position runs opposite to our physical rotation.
How do we handle this? The key is to multiply the raw position by -1, so that a raw-position delta of -1024 becomes +1024, which is exactly what we need. We then calculate the homing offset from the flipped values.
Let us look at an example. Suppose for an inverted joint, the raw position at the zero configuration is 2048. After rotating it by +90 degrees, the raw position is 1024 (it decreases by 1024 because the joint is inverted). We multiply the raw positions by -1 (flipping the numbers), so 2048 becomes -2048 and 1024 becomes -1024. Our goal now is to add an offset so that -2048 becomes 0 and -1024 becomes 1024. The homing offset here is 2048: adding it, -2048 becomes 0 and -1024 becomes 1024. Now we have what we want: the zero configuration's raw position 2048 maps to 0, and the +90 configuration's raw position 1024 maps to 1024.
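The same example in a few lines of plain Python (not the LeRobot code), just to verify the arithmetic:
raw_zero, raw_rotated = 2048, 1024                 # raw positions at 0 deg and at +90 deg
drive_mode = 1 if raw_rotated < raw_zero else 0    # position decreased, so this joint is inverted
flip = -1 if drive_mode else 1
homing_offset = 1024 - flip * raw_rotated          # target intermediate position at +90 deg is 1024
print(flip * raw_zero + homing_offset)             # 0    -> 0 deg
print(flip * raw_rotated + homing_offset)          # 1024 -> 90 deg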
In code
From dynamixel_calibration.py, we see that:
# This gives all 0, actually.
zero_target_pos = convert_degrees_to_steps(ZERO_POSITION_DEGREE, arm.motor_models)
# Read the motor raw position.
zero_pos = arm.read("Present_Position")
# Round the raw position to the nearest multiple of 1024 (a quarter turn).
zero_nearest_pos = compute_nearest_rounded_position(zero_pos, arm.motor_models)
# Obtaining the homing offset (which is -nearest_pos)
homing_offset = zero_target_pos - zero_nearest_pos
With drive direction detection, we have:
# This gives 2048.
rotated_target_pos = convert_degrees_to_steps(ROTATED_POSITION_DEGREE, arm.motor_models)
rotated_pos = arm.read("Present_Position")
# If the position becomes smaller, it indicates inverted mode.
# We expect the raw position to increase as we rotate by +90 degrees,
# so if the position after rotation < the position at the zero configuration, drive_mode is 1.
drive_mode = (rotated_pos < zero_pos).astype(np.int32)
# We calculate the homing_offset again, this time taking the drive direction into account.
rotated_drived_pos = apply_drive_mode(rotated_pos, drive_mode)
rotated_nearest_pos = compute_nearest_rounded_position(rotated_drived_pos, arm.motor_models)
homing_offset = rotated_target_pos - rotated_nearest_pos
Please note that we do not actually use the homing_offset calculated at the zero configuration. We use the homing_offset computed with the drive direction taken into account.
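For reference, apply_drive_mode conceptually just flips the sign for inverted motors. A minimal sketch of that idea (the actual LeRobot implementation may differ in its details):
import numpy as np

def apply_drive_mode(position: np.ndarray, drive_mode: np.ndarray) -> np.ndarray:
    # drive_mode is 1 for inverted motors and 0 otherwise; map it to a sign of -1 or +1.
    sign = np.where(drive_mode == 1, -1, 1)
    return position * sign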
The values are saved in two calibration files:
Follower
{
"homing_offset": [2048, 3072, 3072, 2048, 1024, -2048],
"drive_mode": [1, 1, 1, 1, 0, 0],
"start_pos": [2013, 3027, 3011, 2067, -1120, 2016],
"end_pos": [-1044, -1993, -1957, -1092, 29, 3171],
"calib_mode": ["DEGREE", "DEGREE", "DEGREE", "DEGREE", "DEGREE", "DEGREE"],
"motor_names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
Leader
{
"homing_offset": [2048, 2048, 3072, 2048, -3072, -2048],
"drive_mode": [1, 1, 1, 1, 0, 0],
"start_pos": [2146, 2066, 2994, 2107, 3064, 1996],
"end_pos": [-973, -993, -1961, -988, 4065, 3083],
"calib_mode": ["DEGREE", "DEGREE", "DEGREE", "DEGREE", "DEGREE", "DEGREE"],
"motor_names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
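For completeness, here is a small snippet to load and inspect one of these files; the filename is hypothetical, so point it at wherever your calibration JSON is stored:
import json

with open("follower_main.json") as f:   # hypothetical path to the saved calibration file
    calib = json.load(f)

for name, offset, mode in zip(calib["motor_names"], calib["homing_offset"], calib["drive_mode"]):
    print(f"{name}: homing_offset={offset}, drive_mode={mode}")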
Furthermore, let us see how these values are used. Interestingly, start_pos and end_pos are not really used in this calibration procedure (though they may be useful for debugging).
In dynamixel.py, there is an apply_calibration function that accepts values as the raw positions.
LOWER_BOUND_DEGREE = -270
UPPER_BOUND_DEGREE = 270

if CalibrationMode[calib_mode] == CalibrationMode.DEGREE:
    drive_mode = self.calibration["drive_mode"][calib_idx]
    homing_offset = self.calibration["homing_offset"][calib_idx]
    _, model = self.motors[name]
    resolution = self.model_resolution[model]

    # Update direction of rotation of the motor to match between leader and follower.
    # In fact, the motor of the leader for a given joint can be assembled in an
    # opposite direction in term of rotation than the motor of the follower on the same joint.
    if drive_mode:
        values[i] *= -1

    # Convert from range [-2**31, 2**31] to
    # nominal range [-resolution//2, resolution//2] (e.g. [-2048, 2048])
    values[i] += homing_offset

    # Convert from range [-resolution//2, resolution//2] to
    # universal float32 centered degree range [-180, 180]
    # (e.g. 2048 / (4096 // 2) * 180 = 180)
    values[i] = values[i] / (resolution // 2) * HALF_TURN_DEGREE

    if (values[i] < LOWER_BOUND_DEGREE) or (values[i] > UPPER_BOUND_DEGREE):
        raise JointOutOfRangeError(
            f"Wrong motor position range detected for {name}. "
            f"Expected to be in nominal range of [-{HALF_TURN_DEGREE}, {HALF_TURN_DEGREE}] degrees (a full rotation), "
            f"with a maximum range of [{LOWER_BOUND_DEGREE}, {UPPER_BOUND_DEGREE}] degrees to account for joints that can rotate a bit more, "
            f"but present value is {values[i]} degree. "
            "This might be due to a cable connection issue creating an artificial 360 degrees jump in motor values. "
            "You need to recalibrate by running: `python lerobot/scripts/control_robot.py calibrate`"
        )
Here, we see that if drive_mode equals 1, the motor is in inverted mode, so, following the derivation above, we multiply the raw position by -1. Adding the homing_offset, we obtain the intermediate position. Finally, we convert the intermediate position to degrees.
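Plugging in the numbers from the earlier inverted-joint example, as a standalone sketch rather than the actual class method:
HALF_TURN_DEGREE = 180
resolution = 4096

raw, drive_mode, homing_offset = 2048, 1, 2048
value = -raw if drive_mode else raw               # flip the direction for an inverted motor
value += homing_offset                            # intermediate position: 0
degrees = value / (resolution // 2) * HALF_TURN_DEGREE
print(degrees)                                    # 0.0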
There is also error handling: if the final value is below -270deg or above +270deg, it indicates the joint rotated beyond what we expect, which helps catch abnormal joint motion. For example, the fifth joint can physically rotate past 270 degrees, but doing so stretches the cable.
Therefore, the final rest configuration is not actually used in the computation. I think it is just for leaving the robot arm in a safe pose.
Results
When we place both arms in the zero configuration, the degrees we obtain are indeed close to 0 (read with the snippet below):
Leader [-5.2734375 3.7792969 7.470703 2.0214844 -9.316406 -2.9882812]
Follower [ 7.734375 3.4277344 2.3730469 2.4609375 6.1523438 -2.3730469]
import tqdm

seconds = 30
frequency = 200
for _ in tqdm.tqdm(range(seconds * frequency)):
    # Read the current positions (converted to degrees after calibration) from both arms.
    leader_pos = robot.leader_arms["main"].read("Present_Position")
    follower_pos = robot.follower_arms["main"].read("Present_Position")
    print("Leader", leader_pos)
    print("Follower", follower_pos)
This calibration has a few important implications:
- The robot arms can be teleoperated and used to collect data and train models.
- Converting the raw positions to degrees in [-180, +180] lets us treat the continuous joint as a normal revolute joint. This further helps us use common inverse kinematics libraries, which often assume revolute or prismatic joints.