Level Display: Python Prototype
Time estimate: ~30 minutes
Prerequisites: BMI160 SPI Driver
Learning Objectives
By the end of this tutorial you will be able to:
- Build a rapid prototype of IMU visualization in Python
- Understand the sensor-to-display pipeline and its latency sources
- Observe screen tearing with raw framebuffer rendering
- Log frame timing data for later analysis
The Sensor-to-Pixel Pipeline
A real-time display application reads sensor data and turns it into visible pixels. This pipeline runs in a continuous loop, and each stage adds latency:
- Read IMU (~1 ms) -- SPI/I2C transfer fetches raw accelerometer values
- Filter noise (~0.1 ms) -- raw accelerometer data is noisy (vibration, electrical interference), so a low-pass filter or complementary filter smooths the signal at the cost of a small delay
- Compute angle (~0.01 ms) -- convert filtered acceleration into roll/pitch using atan2
- Render frame (~2-5 ms) -- draw the artificial horizon into an image buffer
- Write to display (~1-16 ms) -- push pixels to the screen via framebuffer or DRM/KMS
The total pipeline latency determines how responsive the display feels. Writing directly to /dev/fb0 (the legacy framebuffer) causes screen tearing because the display controller reads the buffer at its own 60 Hz rate -- if your write happens mid-scanout, the top shows the old frame and the bottom shows the new one. The next tutorial fixes this using DRM/KMS with hardware page flipping synchronized to the vertical blanking interval.
For a deeper treatment of VSync, double buffering, and the DRM/KMS display pipeline, see the Real-Time Graphics reference.
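The per-stage numbers above are rough estimates and vary by platform. A small helper like the following (a sketch with placeholder lambdas standing in for the real stage functions) lets you measure your own latency budget once the pipeline is assembled:

```python
import time

def timed(label, fn, *args):
    """Run one pipeline stage and print its latency in milliseconds."""
    t0 = time.monotonic_ns()
    result = fn(*args)
    dt_ms = (time.monotonic_ns() - t0) / 1e6
    print(f"{label:>8}: {dt_ms:6.2f} ms")
    return result

# Placeholder stages -- substitute read_accel, the filter, the renderer,
# and the framebuffer write from the sections below.
raw = timed("read", lambda: (0, 0, 16384))
angles = timed("compute", lambda a: (0.0, 0.0), raw)
```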
1. Read BMI160 via spidev
The BMI160 accelerometer data lives at register 0x12 (6 bytes: accel X, Y, Z as little-endian int16). SPI reads require setting the MSB of the register address to indicate a read operation.
Open /dev/spidev0.0 and perform a raw SPI transfer:
```python
import spidev
import struct
import math

spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 1_000_000
spi.mode = 0b00

def read_accel():
    """Read accelerometer X, Y, Z from BMI160 via SPI."""
    # 0x80 | 0x12 = read bit | accel data register
    tx = [0x80 | 0x12] + [0x00] * 7  # 1 addr + 1 dummy + 6 data bytes
    rx = spi.xfer2(tx)
    # rx[0] echoes the address byte and rx[1] is a dummy; data starts at rx[2]
    raw = rx[2:]  # 6 bytes: ax_l, ax_h, ay_l, ay_h, az_l, az_h
    ax, ay, az = struct.unpack('<3h', bytes(raw))
    return ax, ay, az
```
Convert the raw accelerometer values to roll and pitch angles:
```python
def accel_to_angles(ax, ay, az):
    """Convert accelerometer readings to roll and pitch in degrees."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return math.degrees(roll), math.degrees(pitch)
```
Test the sensor:
```python
ax, ay, az = read_accel()
roll, pitch = accel_to_angles(ax, ay, az)
print(f"Accel: ({ax}, {ay}, {az}) Roll: {roll:.1f} Pitch: {pitch:.1f}")
```
Checkpoint
You should see non-zero accelerometer values that change when you tilt the sensor board. With the board flat on the table, Z should read close to +16384 (1g at default range) and roll/pitch should be near 0.
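You can also sanity-check the angle math without hardware by feeding synthetic readings (1g = 16384 LSB at the default range). The function below is copied from the listing above so the snippet runs standalone:

```python
import math

def accel_to_angles(ax, ay, az):
    """Convert accelerometer readings to roll and pitch in degrees."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return math.degrees(roll), math.degrees(pitch)

print(accel_to_angles(0, 0, 16384))  # flat on the table -> (0.0, 0.0)
r, p = accel_to_angles(0, 11585, 11585)  # gravity split evenly between Y and Z
print(f"roll={r:.1f} pitch={p:.1f}")  # roll=45.0 pitch=0.0
```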
Stuck?
- Verify the SPI device exists: `ls /dev/spidev0.*`
- Check the BMI160 chip ID: read register `0x00` -- it should return `0xD1`
- If all zeros, check wiring and ensure the BMI160 kernel driver module is not loaded (it would claim the device)
2. Render with OpenCV
Create an artificial horizon display: blue sky on top, brown ground on bottom, and a white horizon line that rotates with the roll angle.
```python
import cv2
import numpy as np

WIDTH, HEIGHT = 640, 480

def render_horizon(roll_deg, pitch_deg):
    """Render an artificial horizon indicator."""
    img = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
    # Sky (top half) -- BGR format
    img[:HEIGHT // 2, :] = (255, 100, 50)  # blue sky
    # Ground (bottom half)
    img[HEIGHT // 2:, :] = (15, 70, 120)  # brown ground
    # Horizon line: rotated by roll, shifted by pitch
    cx, cy = WIDTH // 2, HEIGHT // 2
    pitch_offset = int(pitch_deg * 3)  # scale pitch to pixels
    cy += pitch_offset
    length = WIDTH
    angle_rad = math.radians(roll_deg)
    dx = int(length * math.cos(angle_rad) / 2)
    dy = int(length * math.sin(angle_rad) / 2)
    pt1 = (cx - dx, cy + dy)
    pt2 = (cx + dx, cy - dy)
    cv2.line(img, pt1, pt2, (255, 255, 255), 3)
    return img
```
Tip
OpenCV uses BGR color order. The sky color (255, 100, 50) appears as a pleasant blue, while (15, 70, 120) gives an earthy brown.
3. Write to Framebuffer
The Linux framebuffer device /dev/fb0 expects raw pixel data. Most displays on Raspberry Pi use RGB565 format (16 bits per pixel). Convert the OpenCV BGR image and write it out:
```python
def bgr_to_rgb565(img):
    """Convert BGR888 image to RGB565 format for framebuffer."""
    b = (img[:, :, 0] >> 3).astype(np.uint16)
    g = (img[:, :, 1] >> 2).astype(np.uint16)
    r = (img[:, :, 2] >> 3).astype(np.uint16)
    rgb565 = (r << 11) | (g << 5) | b
    return rgb565.tobytes()

# Open framebuffer
fb = open('/dev/fb0', 'wb')

def write_to_fb(img):
    """Write image to framebuffer."""
    data = bgr_to_rgb565(img)
    fb.seek(0)
    fb.write(data)
    fb.flush()
```
Tip
If your display uses a different pixel format, check the framebuffer mode (for example with `fbset`). Look for `rgba 8/8/8/0` (32-bit) vs `rgba 5/6/5/0` (16-bit RGB565).
Checkpoint
Write a test frame and verify it appears on screen:
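On the Pi this is just `write_to_fb(render_horizon(0.0, 0.0))`. The standalone sketch below re-declares the conversion helper and builds a level frame directly, checking that it converts to the byte count a 640x480 RGB565 framebuffer expects (the actual `/dev/fb0` write is left as a comment so the snippet runs anywhere):

```python
import numpy as np

WIDTH, HEIGHT = 640, 480

def bgr_to_rgb565(img):
    """Convert BGR888 image to RGB565 format for framebuffer."""
    b = (img[:, :, 0] >> 3).astype(np.uint16)
    g = (img[:, :, 1] >> 2).astype(np.uint16)
    r = (img[:, :, 2] >> 3).astype(np.uint16)
    return ((r << 11) | (g << 5) | b).tobytes()

# A level "horizon": sky over ground, split at mid-screen
img = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
img[:HEIGHT // 2] = (255, 100, 50)  # sky (BGR)
img[HEIGHT // 2:] = (15, 70, 120)   # ground (BGR)

data = bgr_to_rgb565(img)
print(len(data))  # 614400 bytes = 640 * 480 * 2
# On the Pi: open('/dev/fb0', 'wb').write(data)
```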
You should see a horizontal blue/brown split with a white line in the middle.

4. Add Low-Pass Filter
Raw accelerometer data is noisy. A simple exponential moving average (first-order low-pass filter) smooths the signal:
```python
# Filter state
filtered_roll = 0.0
filtered_pitch = 0.0
ALPHA = 0.1  # smoothing factor: lower = smoother but more lag

def update_filter(raw_roll, raw_pitch):
    global filtered_roll, filtered_pitch
    filtered_roll = (1 - ALPHA) * filtered_roll + ALPHA * raw_roll
    filtered_pitch = (1 - ALPHA) * filtered_pitch + ALPHA * raw_pitch
    return filtered_roll, filtered_pitch
```
The filter equation filtered = 0.9 * filtered + 0.1 * raw means each new sample contributes only 10% to the output. This eliminates high-frequency noise (vibration, electrical interference) at the cost of introducing a small delay in the response. The cutoff frequency depends on the sample rate and the alpha value.
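As a rough guide, a first-order IIR filter like this approximates an RC low-pass with -3 dB cutoff fc = alpha / ((1 - alpha) * 2 * pi * dt), where dt is the sample interval. A quick calculation, assuming a 100 Hz loop rate:

```python
import math

alpha = 0.1
dt = 0.01  # assumed 100 Hz sample rate; use your measured loop time
fc = alpha / ((1 - alpha) * 2 * math.pi * dt)
print(f"cutoff ~ {fc:.2f} Hz")  # ~1.77 Hz
```

Vibration and electrical noise well above this frequency are strongly attenuated, while hand-tilt motion (well under 1 Hz) passes through.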
Tip
Try different alpha values:
- `ALPHA = 0.05` -- very smooth, noticeable lag
- `ALPHA = 0.1` -- good balance for this application
- `ALPHA = 0.5` -- responsive but noisy
- `ALPHA = 1.0` -- no filtering (raw signal)
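The trade-off shows up clearly in a step response. This standalone sketch (no hardware needed) counts how many samples the filter takes to reach 95% of a sudden unit change in the input:

```python
def settle_steps(alpha, target=0.95):
    """Samples until the EMA output reaches `target` after a unit step."""
    y, n = 0.0, 0
    while y < target:
        y = (1 - alpha) * y + alpha * 1.0
        n += 1
    return n

for alpha in (0.05, 0.1, 0.5, 1.0):
    print(f"alpha={alpha}: settles in {settle_steps(alpha)} samples")
```

At a 100 Hz loop rate, each sample is 10 ms, so the settle counts translate directly into visible lag on the display.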
5. Run and Observe
Assemble the complete main loop:
```python
import time

try:
    while True:
        t_start = time.monotonic_ns()
        # Read sensor
        ax, ay, az = read_accel()
        roll, pitch = accel_to_angles(ax, ay, az)
        # Filter
        roll_f, pitch_f = update_filter(roll, pitch)
        # Render
        img = render_horizon(roll_f, pitch_f)
        # Display
        write_to_fb(img)
        t_end = time.monotonic_ns()
        dt_ms = (t_end - t_start) / 1e6
        print(f"\rdt={dt_ms:.1f}ms roll={roll_f:.1f} pitch={pitch_f:.1f}",
              end="", flush=True)
except KeyboardInterrupt:
    fb.close()
    spi.close()
    print("\nDone.")
```
Run the script, then observe:
- The horizon should respond to tilting the sensor board
- You will likely see visible tearing — horizontal lines where the old and new frame overlap
- Frame timing printed on the console shows the render loop speed
Checkpoint
The horizon rotates smoothly when you tilt the board. You should be able to spot screen tearing, especially during fast rotations.
Stuck?
- If the display is blank, check that no desktop environment is running: `sudo systemctl stop lightdm`
- If colors look wrong, try swapping the R and B channels in the RGB565 bit packing
- If response is sluggish, increase `ALPHA` or check that the SPI speed is at least 1 MHz
6. Log Frame Timing
Add CSV logging to record frame-by-frame timing data for later analysis:
```python
import csv

csv_file = open('frame_timing.csv', 'w', newline='')
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['timestamp_ns', 'dt_ms', 'roll_deg', 'pitch_deg'])

try:
    while True:
        t_start = time.monotonic_ns()
        ax, ay, az = read_accel()
        roll, pitch = accel_to_angles(ax, ay, az)
        roll_f, pitch_f = update_filter(roll, pitch)
        img = render_horizon(roll_f, pitch_f)
        write_to_fb(img)
        t_end = time.monotonic_ns()
        dt_ms = (t_end - t_start) / 1e6
        csv_writer.writerow([t_start, f"{dt_ms:.3f}", f"{roll_f:.2f}", f"{pitch_f:.2f}"])
except KeyboardInterrupt:
    csv_file.close()
    fb.close()
    spi.close()
    print("\nLogged to frame_timing.csv")
```
The complete source is available at src/embedded-linux/apps/level-display/level_display.py.
Let the script run for at least a few seconds to collect 100+ frames, then stop with Ctrl+C.
Verify the log:
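A few lines of Python will summarize the log (the snippet generates demo rows when `frame_timing.csv` is missing, an assumption made so it also runs off-target):

```python
import csv
import os
import statistics

LOG = 'frame_timing.csv'
if not os.path.exists(LOG):
    # Demo data standing in for a real run: 120 frames at a nominal 10 ms
    with open(LOG, 'w', newline='') as f:
        w = csv.writer(f)
        w.writerow(['timestamp_ns', 'dt_ms', 'roll_deg', 'pitch_deg'])
        for i in range(120):
            w.writerow([i * 10_000_000, '10.000', '0.00', '0.00'])

with open(LOG) as f:
    rows = list(csv.DictReader(f))
dts = [float(r['dt_ms']) for r in rows]
print(f"{len(dts)} frames, mean dt {statistics.mean(dts):.2f} ms, "
      f"max {max(dts):.2f} ms")
```

The frame count should be 100 or more, and the mean dt tells you the achieved loop rate (dt in ms maps to 1000/dt frames per second).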
What Just Happened?
You built a complete sensor-to-display pipeline in Python: SPI reads from the BMI160 accelerometer, angle computation, filtering, rendering with OpenCV, and raw framebuffer output.
The display tears because fbdev has no VSync. When your code writes pixels to /dev/fb0, those bytes go directly to the framebuffer memory. The display controller reads that same memory at its own refresh rate (typically 60 Hz). If your write happens mid-scanout, the top of the screen shows the old frame and the bottom shows the new one -- that is tearing.
The next tutorial fixes this by using DRM/KMS, which provides hardware page flipping synchronized to the vertical blanking interval.
Challenges
Challenge 1: Bubble Level Overlay
Add a "bubble level" to the display: draw a circle (the housing) and a small filled circle (the bubble) whose position depends on roll and pitch. Use cv2.circle() for both. The bubble should sit at the center when the board is level.
Challenge 2: Numerical Readout
Add text overlays showing the current roll and pitch values in degrees. Use cv2.putText() to render the numbers in the top-left corner of the frame.
Deliverable
- [ ] Running Python prototype that displays an artificial horizon responding to board tilt
- [ ] CSV log file (`frame_timing.csv`) with at least 100 frames of timing data
- [ ] Screen tearing effect observed and described in your lab notes