IMU as Controller Input (Tilt-Controlled 3D)
Time estimate: ~45 minutes
Prerequisites: BMI160 SPI Kernel Driver, SDL2 Rotating Cube
Learning Objectives
By the end of this tutorial you will be able to:
- Read IMU sensor data from sysfs and convert it to roll/pitch angles
- Implement a complementary filter for sensor fusion (gyroscope + accelerometer)
- Pipe real-time sensor data into an SDL2 application
- Measure sensor-to-display latency in the complete pipeline
Sensor-to-Display Pipeline and Sysfs Driver Interface
An IMU-controlled display creates a complete sensor-to-pixel pipeline: the kernel driver reads the BMI160 over SPI, exposes data via sysfs IIO files, a userspace process applies sensor fusion (complementary filter), and the result drives a GPU-rendered scene. Each stage adds latency — SPI transfer, sysfs file I/O, filter computation, OpenGL rendering, and VSync wait. Measuring and minimising this total pipeline delay is essential for responsive physical-input systems like game controllers, drone HUDs, and vehicle instrument clusters. The sysfs IIO interface (/sys/bus/iio/devices/) is the kernel's standard way to expose sensor channels; each axis becomes a file containing a raw integer that userspace multiplies by a scale factor to get physical units.
See also: Device Tree and Drivers reference | Real-Time Graphics reference
Introduction
The BMI160 IMU on your Pi provides 3-axis accelerometer and 3-axis gyroscope data. In this tutorial, you will map the physical tilt of the Pi to the rotation of the 3D cube from the previous tutorial. Tilting the Pi forward rotates the cube on the X axis; tilting left rotates it on the Y axis.
This creates a simple but complete sensor-to-display pipeline — the same pattern used in game controllers, drone HUDs, and vehicle instrument clusters. The main engineering question is latency: how long between a physical tilt and the cube moving on screen?
1. Verify IMU Access
Which BMI160 Driver Are You Using?
This tutorial reads sensor data from the IIO sysfs interface (/sys/bus/iio/devices/), which is provided by the mainline kernel BMI160 driver.
If you completed the BMI160 SPI Driver tutorial, you have the custom teaching driver loaded, which exposes /sys/class/bmi160/ instead. You have two options:
- Switch to mainline IIO (recommended for this tutorial): unload the custom module (sudo rmmod bmi160_spi) and load the mainline one (sudo modprobe bmi160_spi). Note: the stock Raspberry Pi OS kernel does not include the BMI160 IIO module — you must build it first. See the IIO Buffered Capture tutorial for instructions.
- Adapt the paths below to read from /sys/class/bmi160/bmi160/ — replace in_accel_x_raw with accel_x, in_anglvel_x_raw with gyro_x, etc.
For more on the differences: Custom Driver vs IIO | IIO Subsystem reference
Concept: The BMI160 kernel driver exposes accelerometer and gyroscope data through the sysfs IIO (Industrial I/O) interface.
Find the BMI160 device and check available channels:
cat /sys/bus/iio/devices/iio\:device0/name
# Should show: bmi160
# Read raw accelerometer values
cat /sys/bus/iio/devices/iio\:device0/in_accel_x_raw
cat /sys/bus/iio/devices/iio\:device0/in_accel_y_raw
cat /sys/bus/iio/devices/iio\:device0/in_accel_z_raw
# Read raw gyroscope values
cat /sys/bus/iio/devices/iio\:device0/in_anglvel_x_raw
cat /sys/bus/iio/devices/iio\:device0/in_anglvel_y_raw
cat /sys/bus/iio/devices/iio\:device0/in_anglvel_z_raw
The IIO Subsystem: How Sensor Data Reaches Sysfs
The Industrial I/O (IIO) subsystem (drivers/iio/) is the kernel's standard framework for sensors — accelerometers, gyroscopes, ADCs, pressure sensors, etc. When the BMI160 driver (drivers/iio/imu/bmi160/) probes the SPI device, it:
- Reads the chip ID register over SPI to confirm it's a BMI160
- Configures measurement ranges — accelerometer (±2/4/8/16 g) and gyroscope (±125/250/500/1000/2000 °/s)
- Registers IIO channels — each axis becomes a sysfs file under /sys/bus/iio/devices/iio:deviceN/
Converting raw values to physical units:
The scale files contain the conversion factor:
cat /sys/bus/iio/devices/iio:device0/in_accel_scale # e.g., 0.000598
cat /sys/bus/iio/devices/iio:device0/in_anglvel_scale # e.g., 0.001065
For example, if in_accel_x_raw = 16384 and in_accel_scale = 0.000598, then acceleration = 16384 × 0.000598 = 9.8 m/s² (1g). The scale depends on the configured range — a ±2g range has higher resolution (smaller scale) than ±16g.
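The conversion above is simple enough to capture in a couple of lines. A minimal sketch (the raw_to_units helper is invented here; the sysfs reads are the same paths used elsewhere in this tutorial):

```python
def raw_to_units(raw, scale):
    """IIO convention: physical value = raw count * scale factor."""
    return raw * scale

# The worked example from the text: 16384 counts at the ±2g scale factor
accel_x = raw_to_units(16384, 0.000598)   # ≈ 9.8 m/s², i.e. 1 g

# On the Pi you would read both values from sysfs, e.g.:
# IIO = "/sys/bus/iio/devices/iio:device0"
# raw = int(open(f"{IIO}/in_accel_x_raw").read())
# scale = float(open(f"{IIO}/in_accel_scale").read())
# accel_x = raw_to_units(raw, scale)
```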
Sysfs vs buffered mode: Reading individual sysfs files (as we do here) triggers a single SPI transaction per read. For higher-performance applications, IIO supports buffered mode with a hardware FIFO and DMA — the sensor fills a kernel ring buffer at a configured sample rate, and userspace reads batches of samples from /dev/iio:deviceN. The SPI DMA Optimization tutorial explores this path.
For custom images: Enable CONFIG_BMI160, CONFIG_BMI160_SPI, and CONFIG_IIO in your kernel config. The device tree node for the BMI160 on your SPI bus must have compatible = "bosch,bmi160" and the correct reg (chip select), spi-max-frequency, and optionally interrupt-parent/interrupts for data-ready signaling.
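As a sketch only, a device tree overlay fragment matching that description might look like the following — the &spi0 label, chip-select 0, and the GPIO pin for the interrupt are assumptions to adjust for your wiring (10 MHz is the BMI160's maximum SPI clock):

```dts
&spi0 {
    status = "okay";

    imu@0 {
        compatible = "bosch,bmi160";
        reg = <0>;                      /* chip select 0 */
        spi-max-frequency = <10000000>; /* BMI160 max SPI clock */

        /* Optional data-ready interrupt (pin number is an example): */
        interrupt-parent = <&gpio>;
        interrupts = <25 IRQ_TYPE_EDGE_RISING>;
    };
};
```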
Stuck?
- No iio:device found — verify the BMI160 driver is loaded: lsmod | grep bmi160. See BMI160 SPI Driver tutorial.
- Permission denied — read sysfs as root or adjust permissions with a udev rule.
Checkpoint
You can read raw accelerometer and gyroscope values that change when you tilt the Pi.
2. Sensor Fusion: Complementary Filter
Concept: The accelerometer gives absolute tilt angles (from gravity) but is noisy. The gyroscope gives smooth rotation rates but drifts over time. A complementary filter blends both: trust the gyroscope for fast movements, correct with the accelerometer for drift.
Info
For the mathematical derivation of the complementary filter — why \(\alpha = 0.98\), what the crossover frequency is, and how it relates to the Kalman filter — see Real-Time Graphics reference § Signal Processing for Sensor Fusion.
Python Prototype
Test the sensor fusion before integrating with SDL2:
python3 - <<'PY'
import time, math

IIO = "/sys/bus/iio/devices/iio:device0"
ACCEL_SCALE = float(open(f"{IIO}/in_accel_scale").read())
GYRO_SCALE = float(open(f"{IIO}/in_anglvel_scale").read())

def read_imu():
    ax = int(open(f"{IIO}/in_accel_x_raw").read()) * ACCEL_SCALE
    ay = int(open(f"{IIO}/in_accel_y_raw").read()) * ACCEL_SCALE
    az = int(open(f"{IIO}/in_accel_z_raw").read()) * ACCEL_SCALE
    gx = int(open(f"{IIO}/in_anglvel_x_raw").read()) * GYRO_SCALE
    gy = int(open(f"{IIO}/in_anglvel_y_raw").read()) * GYRO_SCALE
    return ax, ay, az, gx, gy

# Complementary filter
ALPHA = 0.98  # Trust gyro 98%, accel 2%
roll, pitch = 0.0, 0.0
t_prev = time.monotonic()

for _ in range(200):
    ax, ay, az, gx, gy = read_imu()
    t_now = time.monotonic()
    dt = t_now - t_prev
    t_prev = t_now
    # Accelerometer angles (from gravity vector)
    accel_roll = math.atan2(ay, az) * 180 / math.pi
    accel_pitch = math.atan2(-ax, math.sqrt(ay*ay + az*az)) * 180 / math.pi
    # Complementary filter: gyro integration + accel correction
    roll = ALPHA * (roll + gx * dt * 180 / math.pi) + (1 - ALPHA) * accel_roll
    pitch = ALPHA * (pitch + gy * dt * 180 / math.pi) + (1 - ALPHA) * accel_pitch
    print(f"Roll: {roll:+7.1f}°  Pitch: {pitch:+7.1f}°  dt: {dt*1000:.1f}ms")
    time.sleep(0.02)
PY
Tilt the Pi and verify the angles change smoothly. ALPHA = 0.98 means the filter trusts the gyroscope for 98% of the estimate and uses the accelerometer only for slow drift correction.
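One way to read \(\alpha = 0.98\) quantitatively (this is the standard complementary filter time constant, stated here as a sketch): at the script's 50 Hz loop rate (\(\Delta t = 0.02\) s),

```latex
\tau = \frac{\alpha}{1-\alpha}\,\Delta t = \frac{0.98}{0.02}\times 0.02\,\mathrm{s} \approx 0.98\,\mathrm{s}
```

so the gyroscope dominates the estimate over timescales shorter than about a second, while accelerometer corrections remove drift over longer ones. Note that \(\tau\) depends on the loop rate, so changing time.sleep() changes the effective filter behavior unless \(\alpha\) is adjusted to match.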
Checkpoint
The Python script prints roll and pitch angles that respond smoothly to tilting, without excessive noise or drift.
3. IMU-Controlled Cube
Concept: A separate sensor-reader process writes angles to a shared file (or pipe). The SDL2 cube reads them each frame. This decouples the sensor rate from the render rate.
Sensor Writer (Python)
Create a Python script that continuously writes the latest angles to a file:
cat > ~/sdl2-cube/imu_reader.py << 'EOF'
#!/usr/bin/env python3
"""Read BMI160 and write roll/pitch to a shared file for SDL2 apps."""
import time, math, struct

IIO = "/sys/bus/iio/devices/iio:device0"
OUTPUT = "/tmp/imu_angles"
ALPHA = 0.98

accel_scale = float(open(f"{IIO}/in_accel_scale").read())
gyro_scale = float(open(f"{IIO}/in_anglvel_scale").read())

roll, pitch = 0.0, 0.0
t_prev = time.monotonic()

while True:
    ax = int(open(f"{IIO}/in_accel_x_raw").read()) * accel_scale
    ay = int(open(f"{IIO}/in_accel_y_raw").read()) * accel_scale
    az = int(open(f"{IIO}/in_accel_z_raw").read()) * accel_scale
    gx = int(open(f"{IIO}/in_anglvel_x_raw").read()) * gyro_scale
    gy = int(open(f"{IIO}/in_anglvel_y_raw").read()) * gyro_scale
    t_now = time.monotonic()
    dt = t_now - t_prev
    t_prev = t_now
    accel_roll = math.atan2(ay, az) * 180 / math.pi
    accel_pitch = math.atan2(-ax, math.sqrt(ay*ay + az*az)) * 180 / math.pi
    roll = ALPHA * (roll + gx * dt * 180 / math.pi) + (1 - ALPHA) * accel_roll
    pitch = ALPHA * (pitch + gy * dt * 180 / math.pi) + (1 - ALPHA) * accel_pitch
    # Write binary: three doubles (roll, pitch, timestamp)
    with open(OUTPUT, "wb") as f:
        f.write(struct.pack("ddd", roll, pitch, t_now))
    time.sleep(0.005)  # ~200 Hz
EOF
chmod +x ~/sdl2-cube/imu_reader.py
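One subtlety worth knowing: open(OUTPUT, "wb") truncates the file before writing, so a reader that races the writer can occasionally see an empty or partial file (which is why the C code below checks the fread return value). A write-then-rename pattern makes each update atomic. This is a sketch, not part of the tutorial's required code; the .tmp suffix is an arbitrary choice:

```python
import os, struct

def write_angles_atomic(path, roll, pitch, t_now):
    """Write the record to a temp file, then rename it into place.
    rename()/os.replace() is atomic on POSIX filesystems, so a reader
    always sees either the old or the new complete record."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(struct.pack("ddd", roll, pitch, t_now))
    os.replace(tmp, path)  # atomic within one filesystem

write_angles_atomic("/tmp/imu_angles", 12.5, -3.0, 0.0)
```

Both files must live on the same filesystem for the rename to be atomic, which /tmp satisfies here.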
Modified Cube (C)
Modify the cube's main.c to read angles from /tmp/imu_angles instead of auto-rotating. Replace the angle calculation in the render loop:
/* Replace the auto-rotation angle calculation with: */
float roll_deg = 0.0f, pitch_deg = 0.0f;
{
    FILE *f = fopen("/tmp/imu_angles", "rb");
    if (f) {
        double vals[3];
        if (fread(vals, sizeof(double), 3, f) == 3) {
            roll_deg = (float)vals[0];
            pitch_deg = (float)vals[1];
        }
        fclose(f);
    }
}
float rx = pitch_deg * (3.14159f / 180.0f);
float ry = roll_deg * (3.14159f / 180.0f);
Then replace the rotation matrix construction:
/* Replace: mat4_rotate_y(Ry, angle); mat4_rotate_x(Rx, angle * 0.7f); */
mat4_rotate_y(Ry, ry);
mat4_rotate_x(Rx, rx);
Run Both Together
In one SSH session, start the sensor reader:
python3 ~/sdl2-cube/imu_reader.py
In another session (or the same one, backgrounding the reader with &), run the cube binary you built in the previous tutorial.
Tilt the Pi — the cube should follow your hand movement.
Checkpoint
The cube rotates in response to physical tilting of the Pi. The response should feel smooth with <100 ms perceived latency.
4. Measure Sensor-to-Display Latency
Concept: The total latency from physical tilt to visible cube rotation is the sum of: sensor sampling time + filter processing + file read + GPU rendering + VSync wait.
Timestamp Method
The IMU reader writes a timestamp with each angle. The cube can read this timestamp and compare it to the current time to measure the pipeline delay:
/* After reading vals[2] (the timestamp): */
double sensor_time = vals[2];
/* Python's time.monotonic() is CLOCK_MONOTONIC. Read the same clock here
 * rather than SDL_GetPerformanceCounter(), which on Linux may be backed by
 * CLOCK_MONOTONIC_RAW — a slightly different time base. */
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);
double now = ts.tv_sec + ts.tv_nsec / 1e9;
double latency_ms = (now - sensor_time) * 1000.0;
Expected Latency Budget
| Stage | Typical Latency |
|---|---|
| Sensor sampling (200 Hz) | 5 ms |
| Complementary filter | <0.1 ms |
| File write + read | 0.5-2 ms |
| GPU render | 1-3 ms |
| VSync wait (worst case) | 0-16.7 ms |
| Total (typical) | 10-25 ms |
Fill In Your Measurements
| Metric | Value |
|---|---|
| Sensor read rate (Hz) | _ |
| Average pipeline latency (ms) | _ |
| Maximum pipeline latency (ms) | _ |
| Perceived responsiveness | _ |
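To fill in the sensor read rate row, a minimal sketch (the measure_rate helper is invented here; on the Pi, pass it a function that performs one sysfs read):

```python
import time

def measure_rate(read_fn, n=200):
    """Time n calls to read_fn and return the achieved rate in Hz."""
    t0 = time.monotonic()
    for _ in range(n):
        read_fn()
    dt = time.monotonic() - t0
    return n / dt

# On the Pi, something like:
# rate = measure_rate(lambda: open(
#     "/sys/bus/iio/devices/iio:device0/in_accel_x_raw").read())
# print(f"Sysfs accel read rate: {rate:.0f} Hz")
```

Note this measures only the sysfs read itself; the imu_reader.py loop rate is additionally capped by its time.sleep(0.005).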
Checkpoint
You have measured the sensor-to-display latency and can identify which stage contributes the most delay.
What Just Happened?
You built a complete physical-input-to-visual-output pipeline:
Physical tilt
→ BMI160 measures acceleration + angular velocity
→ Kernel driver reads SPI, exposes via sysfs
→ Python reader applies complementary filter
→ Writes angles to shared file (200 Hz)
→ C app reads angles each frame (60 Hz)
→ OpenGL ES renders rotated cube
→ SDL2 presents via DRM page flip at VSync
→ Display shows updated cube
This is the same architecture used in drone attitude displays, vehicle head-up displays, and motion-controlled games. The latency budget analysis tells you where to optimize if the response feels sluggish.
Challenges
Challenge 1: Direct SPI Read in C
Replace the Python reader + shared file with direct SPI reads in the C application (using spidev). This eliminates the file I/O and Python overhead. How much does latency improve?
Challenge 2: Shared Memory Instead of File
Replace /tmp/imu_angles with a POSIX shared memory segment (shm_open + mmap). This avoids filesystem overhead and provides true zero-copy communication between the sensor process and the renderer.
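A possible starting point for the writer side of Challenge 2, using Python's multiprocessing.shared_memory wrapper around POSIX shm (available since Python 3.8; the segment name imu_angles is an arbitrary choice):

```python
import struct
from multiprocessing import shared_memory

REC = struct.calcsize("ddd")  # roll, pitch, timestamp

# Create the named segment, or attach if a previous run left it behind
try:
    shm = shared_memory.SharedMemory(name="imu_angles", create=True, size=REC)
except FileExistsError:
    shm = shared_memory.SharedMemory(name="imu_angles")

def publish(roll, pitch, t_now):
    """Overwrite the record in place — no file open/close per sample."""
    shm.buf[:REC] = struct.pack("ddd", roll, pitch, t_now)

publish(12.5, -3.0, 0.0)
# The C side would shm_open("/imu_angles", O_RDONLY, 0) and mmap() it.
```

Note that a plain 24-byte copy is not atomic, so the reader can in principle see a torn record; a sequence counter or double buffer is the usual guard.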
Challenge 3: Gyro-Only Fast Path
For fast movements, skip the accelerometer entirely and use only the gyroscope (pure integration). This eliminates the accelerometer read time. Observe how quickly drift accumulates without the accelerometer correction — when does the cube start rotating on its own?
Deliverable
- [ ] IMU-controlled cube responding to physical tilt
- [ ] Sensor fusion (complementary filter) producing smooth angles
- [ ] Latency measurement table filled in
- [ ] Brief note: which pipeline stage adds the most latency
Course Overview | Previous: ← Touch Paint | Next: 1D Ball Balancing →