SPI TFT Display (3.5" HAT)

Time estimate: ~45 minutes
Prerequisites: SSH Login, Framebuffer Basics

Learning Objectives

By the end of this tutorial you will be able to:

  • Connect and configure a 3.5" SPI TFT display HAT on a Raspberry Pi
  • Diagnose and fix a partial-display problem caused by a mismatched init sequence
  • Explain why SPI displays use CPU-driven rendering instead of GPU scan-out
  • Draw to the SPI framebuffer using PIL and fbi
  • Calibrate resistive touch input via the XPT2046 controller
  • Measure SPI display performance and compare it to HDMI

SPI Displays: CPU-Driven Rendering and Bandwidth Limits

SPI-connected TFT displays cannot use the GPU scan-out path available to HDMI and DSI panels. Instead, a kernel driver (fbtft) allocates a framebuffer in RAM, and the CPU must transfer every pixel over the SPI bus to the panel's controller IC (e.g., ILI9486). At a typical SPI clock of 32 MHz with 16-bit colour, the theoretical maximum throughput is ~4 MB/s — enough for about 13 full frames per second at 320×480. This CPU-driven architecture means the processor is busy during every frame transfer, leaving fewer cycles for application logic. The trade-off is cost and simplicity: SPI displays are $5-15, need only a few GPIO pins, and require no GPU support.
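The arithmetic behind these numbers is worth checking once yourself — a quick back-of-envelope sketch (pure Python, no hardware needed):

```python
# Theoretical frame-rate ceiling for an SPI display (back-of-envelope).
SPI_HZ = 32_000_000          # SPI clock from the overlay (speed=32000000)
BITS_PER_PIXEL = 16          # RGB565
WIDTH, HEIGHT = 320, 480

bytes_per_sec = SPI_HZ / 8                            # raw bus throughput
frame_bytes = WIDTH * HEIGHT * BITS_PER_PIXEL // 8    # bytes per full frame
max_fps = bytes_per_sec / frame_bytes

print(f"Frame size:  {frame_bytes} bytes")            # 307200 bytes
print(f"Bus ceiling: {bytes_per_sec / 1e6:.1f} MB/s") # 4.0 MB/s
print(f"Max FPS:     {max_fps:.1f}")                  # ~13, before any overhead
```

Real throughput is lower still: command bytes, inter-transfer gaps, and CPU time to fill the buffer all eat into the ceiling.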

See also: Graphics Stack reference


Introduction

Most embedded Linux graphics tutorials assume HDMI output, where the GPU scans pixels out of a DRM buffer directly to the panel. But many real products use small SPI-connected TFT displays — they are cheap, compact, and need only a few GPIO pins. The trade-off is fundamental: SPI displays cannot use the GPU scan-out path. Every pixel must be pushed through the SPI bus by the CPU.

This tutorial connects a Waveshare 3.5" 320×480 ILI9486 HAT with XPT2046 resistive touch. You will see the display appear as a framebuffer device (/dev/fb0 or /dev/fb1 depending on your configuration), draw to it, measure performance, and understand why SPI bandwidth limits the achievable frame rate.


1. Physical Connection

Concept: The HAT plugs directly onto the 40-pin GPIO header, using SPI0 for both the LCD and touch controller.

Power off the Pi before connecting the HAT. The display uses the following pins:

Function           Signal      Pi Pin  GPIO
LCD data out       SPI0 MOSI   19      GPIO10
LCD clock          SPI0 SCLK   23      GPIO11
LCD chip select    SPI0 CE0    24      GPIO8
LCD data/command   DC          18      GPIO24
LCD reset          RST         22      GPIO25
Backlight          BL          12      GPIO18
Touch data in      SPI0 MISO   21      GPIO9
Touch chip select  SPI0 CE1    26      GPIO7
Touch interrupt    IRQ         11      GPIO17
Warning

Align pin 1 of the HAT with pin 1 of the GPIO header (marked on the Pi board). Inserting the HAT offset by even one pin can damage both the display and the Pi.

Power on the Pi after the HAT is seated firmly.

Checkpoint

The HAT is physically connected. The backlight may or may not turn on at this stage — the kernel driver controls it.


2. Enable SPI and Load the Overlay

Concept: The Pi needs SPI enabled and a device tree overlay that tells the kernel which controller IC the display uses, what SPI speed to run, and which GPIOs handle DC/RST/BL.

Enable SPI

sudo raspi-config nonint do_spi 0

Or use sudo raspi-config → Interface Options → SPI → Enable.

What raspi-config Actually Does

This command adds dtparam=spi=on to /boot/firmware/config.txt. The Pi's base device tree has an SPI controller node (spi0: spi@7e204000) with status = "disabled". The dtparam=spi=on parameter changes that status to "okay", which tells the kernel to probe the spi-bcm2835 driver at boot. You can verify this after reboot:

# Check that the SPI controller is enabled in the live device tree
cat /proc/device-tree/soc/spi@7e204000/status
# Should print: okay

# The kernel loaded the driver:
lsmod | grep spi_bcm2835

On a custom image (Buildroot/Yocto) without raspi-config, you either set dtparam=spi=on in config.txt directly, or enable the SPI node in your device tree source.

Add the Display Overlay

Edit the boot configuration:

sudo nano /boot/firmware/config.txt

Add at the end (for a Waveshare 3.5" with ILI9486 controller):

dtoverlay=fbtft,spi0-0,ili9486,width=320,height=480,reset_pin=25,dc_pin=24,led_pin=18,speed=32000000,rotate=0,bgr=1

Breaking down the parameters:

Parameter             Value    Meaning
spi0-0                —        Use SPI bus 0, chip select 0
ili9486               —        Controller IC type (selects the fb_ili9486 driver)
width=320,height=480  pixels   Native panel resolution (portrait)
reset_pin=25          GPIO25   Hardware reset line
dc_pin=24             GPIO24   Data/Command select line
led_pin=18            GPIO18   Backlight control
speed=32000000        32 MHz   SPI clock speed
rotate=0              degrees  Portrait orientation (native)
bgr=1                 —        Swap red and blue (display expects BGR order)
Want Landscape Instead? How to Rotate

You can change rotate=0 to rotate=90 (or 180, 270) in config.txt for landscape orientation. When you do, three things need to change:

What Portrait (rotate=0) Landscape (rotate=90)
Overlay rotate=0 rotate=90
Framebuffer size 320×480 480×320 (kernel swaps automatically)
Touch mapping (Python) SWAP_XY=False SWAP_XY=True, INVERT_X=True
Touch mapping (libinput) "1 0 0 0 1 0" (identity) "0 1 0 -1 0 1" (90° rotation)
App resolution Scripts auto-detect from sysfs — no code change needed

The display controller's MADCTL register (Memory Access Control, command 0x36) handles rotation in hardware — it changes the scan direction of the display RAM, so the SPI data rate and pixel format stay the same.
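As an illustration of how MADCTL composes those scan-direction flags — the bit positions here (MY=0x80, MX=0x40, MV=0x20, BGR=0x08) are the usual layout for the ILI9xxx family, but the exact per-rotation values vary between panel variants, so treat this as a sketch:

```python
# MADCTL (command 0x36) flag bits — typical ILI9xxx layout (sketch).
MY, MX, MV, BGR = 0x80, 0x40, 0x20, 0x08   # mirror Y, mirror X, swap axes, BGR order

def madctl(swap_xy=False, mirror_x=False, mirror_y=False, bgr=True):
    """Compose the MADCTL byte from individual scan-direction flags."""
    value = 0
    if swap_xy:  value |= MV
    if mirror_x: value |= MX
    if mirror_y: value |= MY
    if bgr:      value |= BGR
    return value

# rotate=90 is typically "swap axes, then mirror one of them":
print(hex(madctl(swap_xy=True, mirror_y=True)))   # 0xa8
```

Because rotation is just one register write, rotating costs nothing at runtime — the SPI data rate and pixel format are unchanged.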

Warning

Older tutorials and guides use dtoverlay=waveshare35a — this overlay no longer exists on current Raspberry Pi OS images. The generic fbtft overlay replaces all vendor-specific overlays. If you see waveshare35a in old documentation, use the fbtft line above instead.

Inside the Overlay: What the Device Tree Describes

The fbtft overlay creates a device tree node under the SPI controller. The kernel's SPI core sees compatible = "ilitek,ili9486" and loads the matching fbtft driver module (fb_ili9486). The driver reads the GPIO pins, SPI speed, and rotation from the device tree to configure itself.

You can inspect what parameters the overlay accepts:

dtoverlay -h fbtft

The overlay supports many display controllers (ILI9341, ST7789, SSD1306, etc.) and named presets (piscreen, pitft, tinylcd35). Use the controller name directly when you know the IC, or a preset name if your display is listed.

Common Display Configurations

Different displays need different parameters. Check the controller IC printed on the board or in the product listing:

Display Controller Overlay Line
Waveshare 3.5" (A) ILI9486 fbtft,spi0-0,ili9486,width=320,height=480,reset_pin=25,dc_pin=24,led_pin=18,speed=32000000,bgr=1
PiScreen 3.5" ILI9486 fbtft,spi0-0,piscreen (preset)
Adafruit PiTFT 2.8" ILI9341 fbtft,spi0-0,ili9341,width=240,height=320,reset_pin=25,dc_pin=24,led_pin=18,speed=32000000
Waveshare 1.3" OLED SH1106 fbtft,spi0-0,sh1106,width=128,height=64,reset_pin=25,dc_pin=24,speed=8000000
Generic ST7789 1.5" ST7789V fbtft,spi0-0,st7789v,width=240,height=240,reset_pin=25,dc_pin=24,speed=40000000

All use the fbtft framework — they differ in the initialization sequence sent to the controller IC at probe time.

Reboot and First Test

sudo reboot

After reboot, run a quick test:

# Check the framebuffer appeared
ls -la /dev/fb*

# Fill with noise — you should see colored pixels
sudo sh -c 'cat /dev/urandom > /dev/fb0'    # Ctrl+C to stop
Checkpoint

After reboot, the backlight should be on. The noise test should show colored pixels on the display. If the display fills completely with noise — great, your overlay works perfectly. Continue to Section 3.

When the Generic Overlay Doesn't Quite Work

With some displays (especially the Waveshare 3.5" A model), you may notice that the noise test only fills part of the screen — perhaps 50–80% of the width and height, with a blank area remaining. Or when you run the drawing scripts in Section 4, the image appears cropped or shifted.

Why this happens: The generic fb_ili9486 driver in the kernel uses a default initialization sequence that configures the ILI9486 controller's internal registers (column/row address window, pixel format, gamma curves, etc.). Different ILI9486 panel variants need slightly different init sequences. When the init sequence doesn't fully match the panel, the controller may not set the correct address window for the full display area — it thinks the active area is smaller than it actually is.

This is a common problem in embedded display work: the controller IC is the same, but the panel hardware varies between manufacturers. The kernel's generic driver cannot cover every variant.

What's Actually Happening Inside the Controller

The ILI9486 has an internal GRAM (Graphics RAM) that stores the pixel data for the LCD. The init sequence configures:

  • Column Address Set (command 0x2A) — start and end X coordinates
  • Row Address Set (command 0x2B) — start and end Y coordinates
  • Pixel Format (command 0x3A) — 16-bit or 18-bit color
  • Memory Access Control (command 0x36) — rotation, RGB/BGR order, scan direction
  • Gamma curves, power control, timing — panel-specific electrical parameters

If any of these are wrong, the display still works — but part of it may be inaccessible, colors may be off, or the image may be mirrored.
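To make the address window concrete, here is a sketch of the byte encoding for commands 0x2A/0x2B (the ILI9486 datasheet specifies 16-bit big-endian start/end coordinates as parameters). Actually transmitting them requires the DC-line handling that the fbtft driver does for you — this only shows the payload:

```python
# Encode the address-window commands the init sequence must get right.
# If the end coordinates are too small, part of the panel is unreachable —
# exactly the "partial display" symptom described above.
import struct

CASET, RASET = 0x2A, 0x2B   # Column / Row Address Set

def address_window(x0, y0, x1, y1):
    """Return (command, parameter-bytes) pairs for a full window set."""
    return [
        (CASET, struct.pack(">HH", x0, x1)),  # start/end column, big-endian u16
        (RASET, struct.pack(">HH", y0, y1)),  # start/end row
    ]

# Full 320x480 panel: columns 0..319, rows 0..479
for cmd, params in address_window(0, 0, 319, 479):
    print(f"cmd 0x{cmd:02X}  params {params.hex()}")
```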

Fix: Use the Vendor-Specific Overlay

Display manufacturers provide their own overlay files with the correct init sequence for their specific panel. For the Waveshare 3.5" A:

# Download the Waveshare overlay from the manufacturer's GitHub
sudo wget -O /boot/firmware/overlays/waveshare35a.dtbo \
  "https://github.com/waveshare/LCD-show/raw/master/waveshare35a-overlay.dtb"

Now edit config.txt to use the vendor overlay instead:

sudo nano /boot/firmware/config.txt

Replace the fbtft overlay line with:

dtoverlay=waveshare35a,speed=32000000,rotate=0

Reboot and test again:

sudo reboot

# After reboot — noise should now fill the entire screen
sudo sh -c 'cat /dev/urandom > /dev/fb0'
Checkpoint

The noise test now fills the entire display. The boot console text may also appear on the SPI screen. Continue to Section 3.

Why the Vendor Overlay Works

The Waveshare overlay is a compiled device tree blob (.dtb/.dtbo) that includes the exact init sequence for their panel variant — the right column/row address range, the right gamma curves, and the right power timing. The kernel's fbtft core sends this byte sequence to the ILI9486 over SPI during probe, fully configuring the controller for the specific panel.

This is a general pattern in embedded Linux: generic kernel drivers work for common cases, but vendor-specific configuration is often needed for full functionality. In production, you would include the correct overlay in your Buildroot or Yocto image.

Other Displays

If you have a different display brand, check the manufacturer's GitHub for their overlay file. The pattern is the same: download the .dtbo, place it in /boot/firmware/overlays/, and reference it in config.txt. If no vendor overlay exists, you can write a custom init sequence — see the fbtft documentation in drivers/staging/fbtft/ in the kernel source.

Stuck?
  • Backlight stays off — check that the HAT is seated correctly and that the overlay name matches your HAT
  • No overlay match — run ls /boot/firmware/overlays/*.dtbo | grep -i ili to list available display overlays
  • SPI not enabled — verify with ls /dev/spidev* (should show spidev0.0 and spidev0.1)
  • Verify overlay was loaded — sudo vcdbg log msg 2>&1 | grep dtoverlay shows which overlays the firmware applied
  • Partial display with generic overlay — this is the init sequence mismatch described above; use the vendor overlay

3. Verify the Display

Concept: The fbtft kernel framework creates a framebuffer device for the SPI display. It may appear as /dev/fb0 (if no HDMI framebuffer exists, e.g., when KMS is active) or /dev/fb1 (alongside an HDMI framebuffer). Check which device corresponds to your SPI display.

The fbtft Driver Architecture

fbtft is a kernel framework in the staging tree (drivers/staging/fbtft/) designed specifically for SPI/I2C-connected small displays. When userspace writes pixels to the framebuffer device, the fbtft core:

  1. Detects which framebuffer region changed ("dirty region")
  2. Sends a column/row address set command to the controller IC (telling it which pixels are coming)
  3. Sends the pixel data over SPI using DMA if available
  4. The controller IC writes the pixels into its internal display RAM, which drives the LCD

Each controller IC (ILI9486, ILI9341, ST7789, etc.) has a different initialization sequence and command set, implemented as a separate fb_xxxxx.c module. The overlay's compatible string selects which module loads.
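The dirty-region idea from step 1 can be sketched in a few lines — the kernel tracks modified pages internally rather than diffing frames, so treat this as the concept, not the implementation:

```python
# Concept sketch: find the smallest row span that changed between two
# frames, so only those rows need to cross the SPI bus.
def dirty_rows(old: bytes, new: bytes, stride: int):
    """Return (first_changed_row, last_changed_row), or None if identical."""
    rows = len(new) // stride
    changed = [y for y in range(rows)
               if old[y*stride:(y+1)*stride] != new[y*stride:(y+1)*stride]]
    if not changed:
        return None
    return changed[0], changed[-1]

stride = 320 * 2                       # 320 pixels, 2 bytes each (RGB565)
old = bytes(stride * 480)              # all-black frame
new = bytearray(old)
new[100*stride:102*stride] = b'\xff' * (2 * stride)   # modify rows 100-101
print(dirty_rows(old, bytes(new), stride))   # (100, 101)
```

This is why small updates (a clock digit, a status icon) are fast on fbtft while full-screen animation is not.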

Why fbtft is in staging: The modern replacement is panel-mipi-dbi in the DRM subsystem (drivers/gpu/drm/tiny/), which exposes SPI panels as DRM devices instead of fbdev. If your kernel supports it, you get DRM features (atomic commits, proper mode setting) even on SPI displays — though still CPU-driven. For now, fbtft is simpler and better supported on Raspberry Pi OS.

Inspect the driver module:

modinfo fb_ili9486   # fbtft driver for the ILI9486
modinfo fbtft        # fbtft core framework

Check the Framebuffer Device

ls -la /dev/fb*
You see Meaning
/dev/fb0 + /dev/fb1 HDMI (fb0) + SPI (fb1) — use /dev/fb1 for SPI
/dev/fb0 only SPI display is fb0 (HDMI uses DRM-only path) — use /dev/fb0

To confirm which is the SPI display:

cat /sys/class/graphics/fb0/name    # e.g., "fb_ili9486" = SPI display

Note the device path — you will use it below. We'll call it FB_DEV (replace with /dev/fb0 or /dev/fb1 as appropriate).

Query Display Parameters

fbset -fb /dev/fb0    # or /dev/fb1

Expected output:

mode "320x480"
    geometry 320 480 320 480 16
    ...
    rgba 5/11,6/5,5/0,0/0

This confirms: 320×480 pixels, 16 bpp (RGB565).
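The rgba 5/11,6/5,5/0 line fully determines the pixel packing: red is 5 bits at offset 11, green 6 bits at offset 5, blue 5 bits at offset 0. A quick sketch of packing and unpacking against that layout:

```python
# Pack/unpack RGB565 exactly as fbset's "rgba 5/11,6/5,5/0" describes.
def pack_rgb565(r, g, b):
    """Pack 8-bit channels into RGB565 (keep the top 5/6/5 bits)."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(v):
    """Expand RGB565 back to approximate 8-bit channels."""
    return ((v >> 11) << 3, ((v >> 5) & 0x3F) << 2, (v & 0x1F) << 3)

assert pack_rgb565(255, 255, 255) == 0xFFFF
assert pack_rgb565(255, 0, 0) == 0xF800      # red occupies the top 5 bits
print(unpack_rgb565(pack_rgb565(200, 100, 50)))   # (200, 100, 48) — blue loses low bits
```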

Check Kernel Messages

dmesg | grep -i fbtft

Look for the driver probing successfully:

fbtft: module is from the staging directory, the quality is unknown, you have been warned.
fb_ili9486 spi0.0: fbtft_property_value: width = 320
fb_ili9486 spi0.0: fbtft_property_value: height = 480
Checkpoint

The SPI framebuffer device exists and fbset reports 320×480 at 16 bpp.


4. Draw to the SPI Display

Concept: Drawing to the SPI framebuffer works the same way as HDMI — write pixel data in the correct format. The fbtft driver handles SPI transfer in the background.

Quick Noise Test

sudo sh -c 'cat /dev/urandom > /dev/fb0'    # use fb1 if SPI is on fb1

You should see random colored pixels on the SPI display. Press Ctrl+C to stop.

Install Dependencies

The drawing scripts use PIL for rendering and evdev for touch input (Section 5+). The dashboard (Section 9) needs a monospace font:

sudo apt-get install -y python3-pil python3-evdev fonts-dejavu-core

Draw with PIL (RGB565)

SPI displays use RGB565 (16 bits per pixel) to minimize bandwidth. The fbtft driver stores pixels in little-endian byte order in the framebuffer and byte-swaps internally when transmitting over SPI. So pixel packing uses "<H" (same as HDMI framebuffers).

Create the script:

cat > spi_draw.py << 'EOF'
#!/usr/bin/env python3
"""Draw a status UI on the SPI display.

Reads resolution and stride from sysfs automatically.
Detects which /dev/fb* is the SPI display.
"""
import struct, sys

# ── Find the SPI framebuffer device ──────────────────────
import glob, os

fb_dev = None
for fb in sorted(glob.glob("/sys/class/graphics/fb*")):
    name_file = os.path.join(fb, "name")
    if os.path.exists(name_file):
        with open(name_file) as f:
            name = f.read().strip()
        if "ili" in name.lower() or "fbtft" in name.lower() or "st7" in name.lower():
            fb_dev = "/dev/" + os.path.basename(fb)
            print(f"Found SPI display: {fb_dev} ({name})")
            break

if not fb_dev:
    print("No SPI framebuffer found. Check dmesg | grep fbtft")
    sys.exit(1)

# ── Read framebuffer parameters from sysfs ───────────────
fb_name = os.path.basename(fb_dev)
def read_sysfs(name):
    with open(f"/sys/class/graphics/{fb_name}/{name}") as f:
        return f.read().strip()

width, height = [int(x) for x in read_sysfs("virtual_size").split(",")]
bpp = int(read_sysfs("bits_per_pixel"))
stride = int(read_sysfs("stride"))

print(f"Resolution: {width}x{height}, {bpp} bpp, stride={stride}")

# ── Draw with PIL ────────────────────────────────────────
from PIL import Image, ImageDraw

img = Image.new("RGB", (width, height), (0, 0, 0))
draw = ImageDraw.Draw(img)

# Status UI
draw.rectangle([10, 10, width-10, height-10], outline=(0, 200, 255), width=2)
draw.text((30, 30), "SPI DISPLAY", fill=(110, 110, 120))
draw.text((30, 80), f"ILI9486 @ SPI", fill=(40, 110, 50))
draw.text((30, 130), f"{width}x{height} RGB565", fill=(40, 110, 50))

# Colored bars at bottom
bar_h = 40
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]
bar_w = width // len(colors)
for i, color in enumerate(colors):
    draw.rectangle([i * bar_w, height - bar_h, (i+1) * bar_w, height], fill=color)

# ── Convert to RGB565 little-endian and write ────────────
# fbtft stores little-endian in the framebuffer, byte-swaps on SPI transmit
raw = bytearray(stride * height)

for y in range(height):
    for x in range(width):
        r, g, b = img.getpixel((x, y))
        rgb565 = ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)
        offset = y * stride + x * 2
        struct.pack_into("<H", raw, offset, rgb565)

with open(fb_dev, "wb") as fb:
    fb.write(raw)

print(f"Wrote {len(raw)} bytes to {fb_dev}")
EOF

Run it:

sudo python3 spi_draw.py

You should see a black screen with a cyan border, dim gray and green text, and four colored bars at the bottom.

Display with fbi

You can also display any image:

sudo fbi -T 1 -d /dev/fb0 --noverbose fb_test.png    # use fb1 if needed

fbi handles the pixel format conversion automatically.

Checkpoint

You can see your drawn content on the SPI display — status UI with correct colors and no smearing.

Stuck?
  • Colors swapped (red↔blue) — toggle bgr=1 in the overlay line in config.txt and reboot, or swap the R and B values in the packing formula.
  • Image shifted or smeared — the script reads stride from sysfs, which should handle padding. If still wrong, check cat /sys/class/graphics/fb0/stride.
  • Bus error with mmap — fbtft does not support mmap on all kernel versions. Use fb.write() instead (as this script does). The Framebuffer Basics scripts use mmap which works on HDMI but may fail on SPI.
  • Permission denied — use sudo or add your user to the video group: sudo usermod -aG video $USER

5. Touch Input (XPT2046)

Concept: The XPT2046 is a resistive touch controller on SPI0 CE1. The kernel's ads7846 driver reads raw ADC values over SPI and reports them through the Linux input subsystem as /dev/input/eventN.

How the Touch Driver Works

The XPT2046 (compatible with TI ADS7846) is a 12-bit ADC with multiplexed inputs for the X and Y resistive layers. When a finger presses the panel, the two resistive layers make contact. The driver:

  1. Interrupt fires — GPIO17 (pendown-gpio) goes low when the panel is touched
  2. SPI read sequence — the driver sends commands 0xD0 (read X) and 0x90 (read Y) over SPI, reads 12-bit ADC values back
  3. Reports to input subsystem — calls input_report_abs(ABS_X, raw_x) and input_report_abs(ABS_Y, raw_y)
  4. Userspace receives events — /dev/input/eventN delivers EV_ABS events to evtest, SDL2, or any application

The ti,x-plate-ohms property in the device tree sets the resistance of the X plate, used to calculate touch pressure. The spi-max-frequency for the touch controller is lower (2 MHz) than the display (32 MHz) because the ADC needs settling time between conversions.
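To see what ti,x-plate-ohms is actually for: the driver's pressure estimate is (approximately) the touch resistance, computed from the X position and the two pressure ADC channels (Z1, Z2). A sketch of the formula — the 400 Ω plate resistance is a made-up example value, and the driver's integer arithmetic differs slightly:

```python
# Approximate ads7846 pressure model (sketch):
#   Rt = x_plate_ohms * (x / 4096) * (z2 / z1 - 1)
# Lower Rt means a firmer press; x_plate_ohms=400 is a hypothetical value.
def touch_resistance(x, z1, z2, x_plate_ohms=400):
    if z1 == 0:
        return None            # no valid pressure sample
    return x_plate_ohms * (x / 4096) * (z2 / z1 - 1)

firm = touch_resistance(2000, z1=900, z2=1200)   # strong layer contact
light = touch_resistance(2000, z1=300, z2=1200)  # weak layer contact
print(f"firm: {firm:.0f} ohm, light: {light:.0f} ohm")  # light press → higher Rt
```

Drivers and applications typically threshold Rt to reject glancing touches and noise.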

Check which input device the driver created:

cat /proc/bus/input/devices | grep -A4 "ADS7846"

Verify Touch Device

sudo apt-get install -y evtest
sudo evtest

Look for a device named ADS7846 Touchscreen or similar. Select it and touch the screen — you should see ABS_X and ABS_Y events.

The raw values are ADC counts (0–4095), not screen coordinates. Our Python scripts in Section 8 handle the mapping from ADC values to pixels directly in code (Task A). If you later use the touch panel with a desktop environment (X11/Wayland), you would need a LIBINPUT_CALIBRATION_MATRIX udev rule instead — see the Arch Wiki: libinput calibration for details.

Checkpoint

evtest shows coordinate events when you touch the screen, and the coordinates roughly correspond to where you touch.


6. Performance Measurement

Concept: SPI bandwidth is the bottleneck. A full frame at 480×320×2 bytes = 307,200 bytes. At 32 MHz SPI clock (4 MB/s effective), one full frame takes ~77 ms — about 13 FPS maximum.

Frame Timing Script

cat > spi_benchmark.py << 'EOF'
#!/usr/bin/env python3
"""Measure SPI display frame rate.

Writes solid color frames and measures the time per frame.
"""
import struct, time, glob, os, sys

# ── Find SPI framebuffer ─────────────────────────────────
fb_dev = None
for fb in sorted(glob.glob("/sys/class/graphics/fb*")):
    name_file = os.path.join(fb, "name")
    if os.path.exists(name_file):
        with open(name_file) as f:
            name = f.read().strip().lower()
        if "ili" in name or "fbtft" in name or "st7" in name:
            fb_dev = "/dev/" + os.path.basename(fb)
            break

if not fb_dev:
    print("No SPI framebuffer found")
    sys.exit(1)

fb_name = os.path.basename(fb_dev)
def read_sysfs(name):
    with open(f"/sys/class/graphics/{fb_name}/{name}") as f:
        return f.read().strip()

width, height = [int(x) for x in read_sysfs("virtual_size").split(",")]
stride = int(read_sysfs("stride"))
fb_size = stride * height

print(f"Display: {width}x{height}, stride={stride}, frame={fb_size} bytes")
print(f"Device: {fb_dev}")

# ── Pre-generate solid color frames ──────────────────────
FRAMES = 30
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
frames = []
for r, g, b in colors:
    rgb565 = ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)
    pixel = struct.pack("<H", rgb565)   # little-endian for fbtft framebuffer
    row = pixel * width + b'\x00' * (stride - width * 2)
    frames.append(row * height)

# ── Measure frame times ──────────────────────────────────
print(f"\nWriting {FRAMES} frames...")
times = []
with open(fb_dev, "wb") as fb:
    for i in range(FRAMES):
        t0 = time.monotonic()
        fb.seek(0)
        fb.write(frames[i % len(frames)])
        fb.flush()
        t1 = time.monotonic()
        times.append(t1 - t0)

avg_ms = sum(times) / len(times) * 1000
fps = 1000 / avg_ms
throughput = fb_size * 8 / (avg_ms / 1000) / 1e6
print(f"\nAverage frame time: {avg_ms:.1f} ms")
print(f"Effective FPS: {fps:.1f}")
print(f"Min: {min(times)*1000:.1f} ms  Max: {max(times)*1000:.1f} ms")
print(f"Throughput: {throughput:.1f} Mbit/s")
EOF
sudo python3 spi_benchmark.py

Fill In Your Measurements

Metric SPI Display (measured) HDMI Display (from Framebuffer Basics)
Resolution _ × _ _ × _
Frame size (bytes) _ _
Average frame time (ms) _ _
Effective FPS _ _
CPU usage during draw (%) _ _
SPI throughput (Mbit/s) _ N/A

To measure CPU usage, run htop in a second SSH session while the benchmark runs.
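If you prefer a single number over watching htop, you can sample /proc/stat before and after the benchmark and compute overall CPU busy time from the deltas. A sketch — here a sleep stands in for the workload:

```python
# Compute overall CPU busy % from /proc/stat deltas (Linux only).
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait
    return sum(fields), idle

total0, idle0 = cpu_times()
time.sleep(1)                             # ← run the benchmark here instead
total1, idle1 = cpu_times()

busy_pct = 100 * (1 - (idle1 - idle0) / (total1 - total0))
print(f"CPU busy: {busy_pct:.1f}%")
```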

Checkpoint

Your measurement table is filled in. The SPI display should show ~10-15 FPS for full-frame updates, significantly slower than HDMI.


7. Run Existing Apps on the SPI Display

Concept: Many framebuffer-based tools accept a device path parameter, letting you redirect output from HDMI to the SPI display.

First, identify your SPI framebuffer device number:

# Find which fb* is the SPI display
for fb in /sys/class/graphics/fb*; do
    echo "$(basename $fb): $(cat $fb/name 2>/dev/null)"
done

Use the device number (N) from the output in the commands below.

Redirect fbi

sudo fbi -T 1 -d /dev/fbN --noverbose your_image.png    # replace N with your SPI fb number

Redirect Console

Map the Linux text console to the SPI display:

sudo con2fbmap 1 N    # replace N with your SPI fb number

To revert (map console back to HDMI):

sudo con2fbmap 1 0

Resolution Differences

Apps designed for 800×480 (HDMI) will need adjustment for 480×320 (SPI). Either:

  • Resize images before display: img = img.resize((480, 320))
  • Redesign layouts with larger text and fewer elements
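Note that a plain img.resize((480, 320)) stretches anything that is not already 3:2. A sketch of aspect-preserving, letterboxed scaling with PIL (fit_to_display is a hypothetical helper, not part of any script above):

```python
# Letterboxed scaling: fit an image on the SPI display without stretching.
from PIL import Image

def fit_to_display(img, disp_w=480, disp_h=320):
    """Scale to fit, centered on black, preserving aspect ratio."""
    scale = min(disp_w / img.width, disp_h / img.height)
    new_size = (int(img.width * scale), int(img.height * scale))
    scaled = img.resize(new_size)
    out = Image.new("RGB", (disp_w, disp_h), (0, 0, 0))
    out.paste(scaled, ((disp_w - new_size[0]) // 2, (disp_h - new_size[1]) // 2))
    return out

src = Image.new("RGB", (800, 480), (0, 128, 255))   # HDMI-sized test image
print(fit_to_display(src).size)   # (480, 320)
```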

What Just Happened?

You connected a display that uses a fundamentally different rendering path than HDMI:

SPI:  App → CPU render → RAM → fbtft driver → SPI bus → Panel controller → LCD
HDMI: App → GPU render → DRM buffer → Display controller → HDMI encoder → Monitor

The key difference: SPI displays cannot use GPU scan-out. The GPU on the Pi can render into a RAM buffer, but it cannot push those pixels over the SPI bus — that requires the CPU and the fbtft kernel driver. This means:

  • No hardware VSync or page flipping on SPI displays
  • Every pixel consumes CPU time and SPI bandwidth
  • Frame rate is limited by SPI clock speed, not GPU capability
  • If you also have HDMI connected, both displays work independently (HDMI via DRM/GPU, SPI via CPU)

This is why SPI displays are used for status panels and simple UIs, not video playback or complex animations.


Good to Know: Text Rendering on RGB565

If you draw text on the SPI display and it looks dotty, fuzzy, or has colored fringes around characters — this is not a bug in your code. It is a fundamental limitation of the RGB565 pixel format.

Why It Happens

RGB565 stores each pixel in 16 bits with unequal channel precision:

Channel Bits Levels Step size (in 8-bit scale)
Red 5 32 8
Green 6 64 4
Blue 5 32 8

When PIL renders text with a TrueType font, it anti-aliases the edges — each character boundary is a smooth gradient from the text color to the background. For example, white text on black produces edge pixels like:

8-bit:   0 → 30 → 70 → 120 → 180 → 255   (smooth gradient)
RGB565:  0 → 24 → 64 → 120 → 176 → 248   (jumps of 24–72!)

Adjacent pixels that were smoothly blended in 8-bit now snap to different quantization levels, creating visible banding. Because R/B (step 8) and G (step 4) quantize differently, "gray" edge pixels can shift toward green or magenta — producing colored fringing.
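You can reproduce exactly these gradient numbers by quantizing an 8-bit ramp to RGB565 channel precision:

```python
# Quantize an 8-bit anti-aliased edge to RGB565 channel precision.
def quant5(v): return (v >> 3) << 3   # red/blue: keep top 5 bits
def quant6(v): return (v >> 2) << 2   # green: keep top 6 bits

edge = [0, 30, 70, 120, 180, 255]     # smooth 8-bit anti-aliased ramp
red = [quant5(v) for v in edge]
green = [quant6(v) for v in edge]
print("red/blue:", red)               # [0, 24, 64, 120, 176, 248]
print("green:   ", green)             # [0, 28, 68, 120, 180, 252]
# Where red/blue and green land on different levels, a "gray" pixel tints:
print("channel mismatch:", [g - r for r, g in zip(red, green)])
```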

The Perception Threshold

Human brightness perception is logarithmic (Weber-Fechner law): a step of 8 at brightness 30 is barely visible, but the same step of 8 at brightness 180 is obvious. In practice:

Fill brightness Anti-aliasing quality Why
Below ~120 Clean, smooth edges Quantization steps are small relative to perceived brightness
120–160 Visible dots/banding Steps become noticeable, colored fringes appear
Above 160 Obvious artifacts Large perceived jumps between quantization levels

Practical Rules

  1. Keep text color components below ~120 for clean rendering: (110, 110, 120) instead of (200, 200, 200)
  2. Use a brightness hierarchy — labels dimmer than values, but all below the threshold
  3. Bars and filled shapes are fine at any brightness — solid colors don't have anti-aliased edges
  4. Avoid the ° symbol and other complex glyphs — their thin strokes anti-alias heavily
Tip

Test it yourself — add this to any SPI display script:

for brightness in [80, 120, 160, 200, 240]:
    c = (brightness, brightness, brightness)
    draw.text((4, brightness // 3), f"Test {brightness}", fill=c, font=font)
You will see the text quality degrade as brightness increases.

This effect does not exist on HDMI displays (RGB888, 8 bits per channel = 256 levels, step size 1) or on OLED panels that use 16-bit with dithering.


8. Build a Drawing App

Concept: Combine touch input (Section 5) with framebuffer drawing (Section 4) into an interactive application. This is the core pattern for any touch-driven embedded UI: read input events, update a pixel buffer, write it to the display.

The Problem

You need to:

  1. Find the touch input device automatically (not hardcode /dev/input/event0)
  2. Read raw ADC coordinates (0–4095) from the touch controller
  3. Map them to screen pixel coordinates (0–479, 0–319)
  4. Draw at the touched position into a pixel buffer
  5. Write the changed region to the SPI framebuffer

The tricky parts are the coordinate mapping (the axes may be swapped or inverted) and performance (full-frame redraws are too slow for responsive drawing).

Starter Code

Why evdev Instead of Reading /dev/input Directly?

You could read raw bytes from /dev/input/eventN and parse the input_event struct yourself (16 bytes on 32-bit systems, 24 on 64-bit: timestamp, type, code, value). The evdev library does exactly this but also handles device discovery, capability queries, and event decoding — saving you from parsing binary structs manually.
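For illustration, here is roughly what that manual parsing looks like — parse_event is a hypothetical helper, and the struct format "llHHi" uses the native long, so it matches whichever architecture it runs on (24 bytes on 64-bit, 16 on 32-bit):

```python
# Decode a raw Linux input_event struct by hand (what evdev does for you).
import struct

EVENT_FMT = "llHHi"                    # sec, usec, type, code, value (native long)
EVENT_SIZE = struct.calcsize(EVENT_FMT)

def parse_event(raw: bytes):
    sec, usec, etype, code, value = struct.unpack(EVENT_FMT, raw)
    return {"time": sec + usec / 1e6, "type": etype, "code": code, "value": value}

# Fake an EV_ABS / ABS_X event (type=3, code=0) the way the kernel would emit it:
raw = struct.pack(EVENT_FMT, 1700000000, 250000, 3, 0, 2048)
print(EVENT_SIZE, parse_event(raw))
```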

This is a working drawing app. Read through it, then run it and test. The comments mark places where you will need to adjust values for your specific display.

cat > touch_draw.py << 'PYEOF'
#!/usr/bin/env python3
"""Touch drawing app for SPI display.

Draws colored dots where you touch the screen.
Press Ctrl+C to exit.
"""
import struct, glob, os, sys, time

# ── Find SPI framebuffer ─────────────────────────────────
fb_dev = None
for fb in sorted(glob.glob("/sys/class/graphics/fb*")):
    name_file = os.path.join(fb, "name")
    if os.path.exists(name_file):
        with open(name_file) as f:
            name = f.read().strip()
        if "ili" in name.lower() or "fbtft" in name.lower() or "st7" in name.lower():
            fb_dev = "/dev/" + os.path.basename(fb)
            break

if not fb_dev:
    print("No SPI framebuffer found")
    sys.exit(1)

fb_name = os.path.basename(fb_dev)
def read_sysfs(attr):
    with open(f"/sys/class/graphics/{fb_name}/{attr}") as f:
        return f.read().strip()

WIDTH, HEIGHT = [int(x) for x in read_sysfs("virtual_size").split(",")]
STRIDE = int(read_sysfs("stride"))
print(f"Display: {fb_dev} ({WIDTH}x{HEIGHT}, stride={STRIDE})")

# ── Find touch input device ──────────────────────────────
import evdev

touch_dev = None
for path in evdev.list_devices():
    dev = evdev.InputDevice(path)
    if "ADS7846" in dev.name or "Touch" in dev.name:
        touch_dev = dev
        break

if not touch_dev:
    print("No touch device found. Check: sudo evtest")
    sys.exit(1)

# Read the ADC range from the device capabilities
caps = touch_dev.capabilities(absinfo=True)
abs_info = {code: info for code, info in caps.get(evdev.ecodes.EV_ABS, [])}
x_info = abs_info[evdev.ecodes.ABS_X]
y_info = abs_info[evdev.ecodes.ABS_Y]
print(f"Touch: {touch_dev.name} ({touch_dev.path})")
print(f"  X range: {x_info.min}..{x_info.max}")
print(f"  Y range: {y_info.min}..{y_info.max}")

# ── Coordinate mapping ───────────────────────────────────
# The ADC axes may not match the screen axes. You need to figure
# out which ADC axis maps to which screen axis, and whether it
# is inverted. Start with this default and adjust:
#
#   SWAP_XY  = False   # set True if X touch moves the dot vertically
#   INVERT_X = False   # set True if the dot moves opposite to your finger (horizontal)
#   INVERT_Y = False   # set True if the dot moves opposite to your finger (vertical)

SWAP_XY  = False    # no swap needed for portrait (rotate=0)
INVERT_X = False    # ← adjust these after testing!
INVERT_Y = False

def map_touch(raw_x, raw_y):
    """Map raw ADC values to screen pixel coordinates."""
    # Normalize to 0.0–1.0
    nx = (raw_x - x_info.min) / (x_info.max - x_info.min)
    ny = (raw_y - y_info.min) / (y_info.max - y_info.min)

    if SWAP_XY:
        nx, ny = ny, nx
    if INVERT_X:
        nx = 1.0 - nx
    if INVERT_Y:
        ny = 1.0 - ny

    # Scale to pixel coordinates
    px = int(nx * (WIDTH - 1))
    py = int(ny * (HEIGHT - 1))
    return max(0, min(WIDTH-1, px)), max(0, min(HEIGHT-1, py))

# ── Drawing state ─────────────────────────────────────────
# Color palette — cycle with multi-tap in the same spot
COLORS = [
    (255, 255, 255),  # white
    (255,   0,   0),  # red
    (  0, 255,   0),  # green
    (  0, 100, 255),  # blue
    (255, 255,   0),  # yellow
]
color_idx = 0
BRUSH_SIZE = 3   # radius in pixels

def rgb565(r, g, b):
    """Pack RGB888 to RGB565 for fbtft framebuffer."""
    return struct.pack("<H", ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3))

# Start with a black canvas
canvas = bytearray(STRIDE * HEIGHT)

# Draw a thin border so you can see the screen bounds
border_pixel = rgb565(40, 40, 40)
for x in range(WIDTH):
    off_top = x * 2
    off_bot = (HEIGHT - 1) * STRIDE + x * 2
    canvas[off_top:off_top+2] = border_pixel
    canvas[off_bot:off_bot+2] = border_pixel
for y in range(HEIGHT):
    off_left = y * STRIDE
    off_right = y * STRIDE + (WIDTH - 1) * 2
    canvas[off_left:off_left+2] = border_pixel
    canvas[off_right:off_right+2] = border_pixel

# Write initial canvas
with open(fb_dev, "wb") as fb:
    fb.write(canvas)

# ── Helper: draw a filled circle into the canvas ─────────
def draw_dot(cx, cy, radius, color_rgb):
    """Draw a filled circle and return the bounding box (y_min, y_max)."""
    r, g, b = color_rgb
    pixel = rgb565(r, g, b)
    y_min = max(0, cy - radius)
    y_max = min(HEIGHT - 1, cy + radius)
    for dy in range(-radius, radius + 1):
        py = cy + dy
        if py < 0 or py >= HEIGHT:
            continue
        # Horizontal span for this row of the circle
        dx_max = int((radius**2 - dy**2) ** 0.5)
        x_start = max(0, cx - dx_max)
        x_end = min(WIDTH - 1, cx + dx_max)
        for px in range(x_start, x_end + 1):
            off = py * STRIDE + px * 2
            canvas[off:off+2] = pixel
    return y_min, y_max

# ── Helper: write only changed rows to framebuffer ───────
def flush_rows(fb_file, y_min, y_max):
    """Write only the dirty rows — much faster than a full frame."""
    offset = y_min * STRIDE
    length = (y_max - y_min + 1) * STRIDE
    fb_file.seek(offset)
    fb_file.write(canvas[offset:offset + length])

# ── Main event loop ──────────────────────────────────────
print(f"\nDrawing with: {COLORS[color_idx]} (brush size {BRUSH_SIZE})")
print("Touch the screen to draw. Ctrl+C to quit.")

raw_x, raw_y = 0, 0
touching = False

fb_file = open(fb_dev, "wb", buffering=0)  # unbuffered: each partial write reaches the panel immediately
try:
    for event in touch_dev.read_loop():
        if event.type == evdev.ecodes.EV_ABS:
            if event.code == evdev.ecodes.ABS_X:
                raw_x = event.value
            elif event.code == evdev.ecodes.ABS_Y:
                raw_y = event.value

        elif event.type == evdev.ecodes.EV_KEY:
            # BTN_TOUCH: 1 = finger down, 0 = finger up
            if event.code == evdev.ecodes.BTN_TOUCH:
                touching = (event.value == 1)
                if not touching:
                    # Finger lifted — next touch could change color
                    pass

        elif event.type == evdev.ecodes.EV_SYN:
            # SYN_REPORT marks a complete event packet
            if touching:
                px, py = map_touch(raw_x, raw_y)
                y_min, y_max = draw_dot(px, py, BRUSH_SIZE, COLORS[color_idx])
                flush_rows(fb_file, y_min, y_max)

except KeyboardInterrupt:
    print("\nDone.")
finally:
    fb_file.close()
PYEOF

Run it:

sudo python3 touch_draw.py

Touch the screen — you should see white dots appearing where you touch. Press Ctrl+C to stop.

Checkpoint

Dots appear on the screen when you touch it. The position roughly tracks your finger (it may be offset or mirrored; that's expected, and you will fix it next).

Task A: Fix the Coordinate Mapping

The starter code sets SWAP_XY = False, INVERT_X = False, INVERT_Y = False. These defaults may or may not be correct for your display orientation.

How to calibrate:

  1. Run the app and touch the top-left corner of the screen
  2. Observe where the dot appears
  3. Adjust the three flags based on what you see:
| Symptom | Fix |
| --- | --- |
| Dot appears in the wrong corner entirely | Toggle SWAP_XY |
| Dot moves left when you move right | Toggle INVERT_X |
| Dot moves up when you move down | Toggle INVERT_Y |
| Dot tracks correctly but is offset from your finger | Normal for resistive touch without precise calibration |
Tip

Touch all four corners systematically. The border drawn by the app helps you see where the screen edges are. Once the four corners map correctly, the middle will be close enough.

Task B: Add Color Switching

The COLORS palette is defined but there is no way to switch colors while drawing. Add a feature to cycle through colors. Some ideas:

  • Corner tap: If the touch position is in the top-right corner (e.g., px > WIDTH - 40 and py < 40), increment color_idx instead of drawing
  • Double tap: Detect two taps within 300 ms and switch color
  • Color bar: Draw a row of colored squares at the top of the screen; tapping one selects that color
Tip

The simplest approach: add a check inside the EV_SYN handler, before draw_dot(). If the touch is in a "button" region, change color_idx and print the new color. Otherwise, draw.
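One possible shape for the corner-tap idea, as a sketch: the names handle_tap and BUTTON_SIZE are ours, not part of the starter script. Call it inside the EV_SYN branch before draw_dot() and reassign color_idx from the result.

```python
BUTTON_SIZE = 40  # tap region size in pixels (our assumption)

def handle_tap(px, py, width, colors, color_idx):
    """Return (new_color_idx, should_draw). A tap in the top-right
    corner cycles the color instead of drawing."""
    if px > width - BUTTON_SIZE and py < BUTTON_SIZE:
        new_idx = (color_idx + 1) % len(colors)
        print(f"Color -> {colors[new_idx]}")
        return new_idx, False  # consumed by the button: do not draw
    return color_idx, True     # normal touch: draw as usual
```

In the event loop this becomes `color_idx, should_draw = handle_tap(px, py, WIDTH, COLORS, color_idx)`, drawing only when should_draw is True.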

Task C: Add a Clear Button

Add a way to clear the canvas back to black. For example, reserve the top-left corner as a "clear" button:

# Inside the EV_SYN handler, before draw_dot():
if px < 40 and py < 40:
    # Clear — reset canvas to black
    canvas[:] = bytes(len(canvas))
    fb_file.seek(0)
    fb_file.write(canvas)
    continue

Draw a small indicator (e.g., a red square) in that corner so users know where to tap.

Task D: Draw Lines Instead of Dots

The current app draws individual dots — if you move your finger quickly, you get separate circles with gaps between them. Real drawing apps draw lines between consecutive touch positions.

Hint: Store the previous touch position. When a new SYN_REPORT arrives while touching is True, draw a line from (prev_x, prev_y) to (px, py) using Bresenham's line algorithm:

Tip

Bresenham's algorithm in Python:

def draw_line(x0, y0, x1, y1, color_rgb):
    """Draw a line using Bresenham's algorithm. Returns (y_min, y_max)."""
    dx = abs(x1 - x0)
    dy = abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    y_min, y_max = min(y0, y1), max(y0, y1)

    while True:
        # Draw a dot at each point along the line
        draw_dot(x0, y0, BRUSH_SIZE, color_rgb)
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy

    # Clamp so flush_rows never seeks before row 0 or past the last row
    return max(0, y_min - BRUSH_SIZE), min(HEIGHT - 1, y_max + BRUSH_SIZE)

Call this instead of draw_dot() when you have a previous position.

Task E: Measure Drawing Latency

How responsive is the drawing? Add timing measurements:

  1. Record time.monotonic() when a SYN_REPORT arrives
  2. Record it again after flush_rows() completes
  3. Print the latency every 100 events

Questions to answer:

  • What is the average touch-to-pixel latency?
  • How does BRUSH_SIZE affect it? (Try size 1 vs 10 vs 20)
  • What is the bottleneck — the Python processing or the SPI write?
Tip

The SPI write time for N rows is approximately N × STRIDE × 8 / 32_000_000 seconds. Compare your measured latency to this theoretical SPI time to see how much is Python overhead vs hardware.
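One way to structure steps 1–3, as a sketch (LatencyMeter is our name): call start() when the SYN_REPORT arrives and stop() right after flush_rows() returns.

```python
import time

class LatencyMeter:
    """Collect touch-to-pixel latency samples; print an average periodically."""
    def __init__(self, report_every=100):
        self.samples = []
        self.report_every = report_every

    def start(self):
        self._t0 = time.monotonic()

    def stop(self):
        self.samples.append(time.monotonic() - self._t0)
        if len(self.samples) % self.report_every == 0:
            recent = self.samples[-self.report_every:]
            print(f"avg touch-to-pixel latency: "
                  f"{1000 * sum(recent) / len(recent):.1f} ms")
```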


9. Build a System Dashboard

Concept: This is what small SPI screens are actually used for in real products — a live status display showing system health. No desktop environment, just a loop that reads system data, renders it with PIL, and writes to the framebuffer. You will build this step by step.

Detach the Console from the SPI Display

The kernel maps a text console (tty) to each framebuffer. If you see a blinking cursor on the SPI screen, detach it first:

# Check what framebuffers exist
cat /proc/fb

# Headless (SPI only, no HDMI) — unbind framebuffer console entirely
sudo sh -c 'echo 0 > /sys/class/vtconsole/vtcon1/bind'

# If both HDMI and SPI — map console to HDMI (fb0), freeing SPI
sudo con2fbmap 1 0

To make this permanent, add fbcon=map:0 to /boot/firmware/cmdline.txt (same line, space-separated).

Step 1: Read System Data from /proc

Before drawing anything, explore the data sources. Run these on the Pi and study the output:

# CPU times per core — idle vs busy
cat /proc/stat | head -6

# Memory info — look for MemTotal, MemAvailable
cat /proc/meminfo | head -5

# CPU temperature in millidegrees
cat /sys/class/thermal/thermal_zone0/temp

# Uptime in seconds
cat /proc/uptime

# IP address
hostname -I
Note

/proc/stat gives cumulative CPU tick counts since boot, not percentages. To compute usage you need two samples: usage = 1 - (idle₂ - idle₁) / (total₂ - total₁). This is the same approach htop uses.
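To see the formula in action, here is a minimal two-sample reading of the aggregate `cpu` line only (Task G later asks for per-core values, so this deliberately stops short of that):

```python
import time

def read_aggregate():
    """Return (idle_ticks, total_ticks) from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait count as not busy
    return idle, sum(fields)

idle1, total1 = read_aggregate()
time.sleep(0.5)
idle2, total2 = read_aggregate()
usage = 100.0 * (1 - (idle2 - idle1) / (total2 - total1))
print(f"overall CPU usage over 0.5 s: {usage:.1f}%")
```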

Step 2: Skeleton — One Static Frame

Start with a script that reads the data sources above and renders a single static frame to the display. This is your foundation — get it working before adding the update loop.

cat > spi_dashboard.py << 'PYEOF'
#!/usr/bin/env python3
"""System dashboard for SPI display — build it step by step."""
import struct, glob, os, sys, time, subprocess
from PIL import Image, ImageDraw, ImageFont

# ── Find SPI framebuffer (reuse from Section 4) ──────────
fb_dev = None
for fb in sorted(glob.glob("/sys/class/graphics/fb*")):
    name_file = os.path.join(fb, "name")
    if os.path.exists(name_file):
        with open(name_file) as f:
            name = f.read().strip()
        if "ili" in name.lower() or "fbtft" in name.lower() or "st7" in name.lower():
            fb_dev = "/dev/" + os.path.basename(fb)
            break

if not fb_dev:
    print("No SPI framebuffer found.")
    sys.exit(1)

fb_name = os.path.basename(fb_dev)
def read_sysfs(attr):
    with open(f"/sys/class/graphics/{fb_name}/{attr}") as f:
        return f.read().strip()

WIDTH, HEIGHT = [int(x) for x in read_sysfs("virtual_size").split(",")]
STRIDE = int(read_sysfs("stride"))

# ── Font ─────────────────────────────────────────────────
# Pillow >= 10.1 has a scalable built-in font; older versions need a TTF file.
try:
    font = ImageFont.load_default(size=20)
except TypeError:
    for p in ["/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf",
              "/usr/share/fonts/truetype/freefont/FreeSansBold.ttf"]:
        if os.path.exists(p):
            font = ImageFont.truetype(p, 20)
            break
    else:
        font = ImageFont.load_default()

# ── Read one data point ──────────────────────────────────
def read_temp():
    try:
        with open("/sys/class/thermal/thermal_zone0/temp") as f:
            return int(f.read().strip()) / 1000
    except (FileNotFoundError, ValueError):
        return 0.0

# ── Render a single frame ────────────────────────────────
img = Image.new("RGB", (WIDTH, HEIGHT), (0, 0, 0))
draw = ImageDraw.Draw(img)

temp = read_temp()
draw.text((4, 4), f"CPU Temp: {temp:.1f} C", fill=(110, 110, 120), font=font)
draw.text((4, 24), f"Display:  {WIDTH}x{HEIGHT}", fill=(80, 80, 90), font=font)

# ── Convert to RGB565 and write to framebuffer ───────────
def image_to_fb(img):
    raw = bytearray(STRIDE * HEIGHT)
    pixels = img.load()
    for y in range(HEIGHT):
        for x in range(WIDTH):
            r, g, b = pixels[x, y]
            struct.pack_into("<H", raw, y * STRIDE + x * 2,
                             ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3))
    return raw

with open(fb_dev, "wb") as fb:
    fb.write(image_to_fb(img))

print(f"Wrote single frame to {fb_dev} ({WIDTH}x{HEIGHT})")
PYEOF
sudo python3 spi_dashboard.py
Checkpoint

You see the temperature and display resolution on the screen. It is a single static frame — the script exits immediately after writing.
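The nested image_to_fb loop is easy to read but slow: roughly 150,000 Python iterations per frame. If numpy is installed (an extra dependency this tutorial does not otherwise use), the conversion can be vectorized. A sketch, assuming STRIDE == WIDTH * 2 (no row padding):

```python
import numpy as np
from PIL import Image

def image_to_rgb565(img):
    """Vectorized RGB888 -> RGB565 (little-endian), assuming no row padding."""
    a = np.asarray(img.convert("RGB"), dtype=np.uint16)  # shape (H, W, 3)
    r, g, b = a[..., 0], a[..., 1], a[..., 2]
    packed = ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)
    return packed.astype("<u2").tobytes()
```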

Task F: Add a Refresh Loop

Convert the single-frame script into a 1-second update loop. Each iteration should:

  1. Read the temperature
  2. Read uptime from /proc/uptime and format it as HH:MM (or Xd HH:MM if uptime > 24 hours)
  3. Render a new PIL image
  4. Convert to RGB565 and write to the framebuffer
Tip
try:
    with open(fb_dev, "wb") as fb:
        while True:
            # ... read data, render, write ...
            time.sleep(1)
except KeyboardInterrupt:
    print("Stopped.")

Verify it works: the uptime value should increment every second.
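A possible formatter for step 2's uptime string (format_uptime is our name):

```python
def format_uptime(seconds):
    """Format seconds as HH:MM, or 'Xd HH:MM' once uptime exceeds 24 h."""
    minutes = int(seconds) // 60
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    if days:
        return f"{days}d {hours:02d}:{minutes:02d}"
    return f"{hours:02d}:{minutes:02d}"

# The first field of /proc/uptime is the uptime in seconds:
# uptime_s = float(open("/proc/uptime").read().split()[0])
```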

Task G: Read CPU Usage from /proc/stat

This is the trickiest data source. /proc/stat gives cumulative tick counts, not percentages. You need two consecutive reads to compute the delta.

Each cpuN line in /proc/stat has columns: user nice system idle iowait irq softirq steal. The CPU is busy when it is doing anything other than idle + iowait.

Algorithm:

  1. Read /proc/stat, parse the cpu0, cpu1, ... lines (skip the aggregate cpu line — note the space)
  2. For each core, compute idle = fields[3] + fields[4] and total = sum(all fields)
  3. Store these values. On the next read, compute: usage% = 100 × (1 - Δidle / Δtotal)
  4. Return the first sample as 0% (you have no delta yet)
Tip

Use a global variable to store the previous sample:

_prev_cpu = None

def read_cpu():
    global _prev_cpu
    with open("/proc/stat") as f:
        lines = [l for l in f if l.startswith("cpu") and l[3] != " "]
    # ... parse, compute deltas, return list of percentages ...

Add the CPU percentage next to the temperature. Even just a single number like "CPU0: 12%" is fine for now.

Task H: Draw Progress Bars

Text-only dashboards are hard to read at a glance. Add a draw_bar() function that draws a horizontal filled rectangle representing a percentage.

Specification:

  • draw_bar(draw, x, y, width, height, percentage) — draws a background rectangle and a filled portion
  • The fill width is width × percentage / 100
  • Use color coding: green below 60%, amber 60–85%, red above 85%
  • PIL has draw.rectangle([x0, y0, x1, y1], fill=(r, g, b))
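One way to meet that specification, as a sketch (the thresholds and the idea of a dim background rectangle are from the bullets above; the exact RGB values are our choice):

```python
from PIL import Image, ImageDraw

def bar_color(pct):
    """Green below 60%, amber 60-85%, red above 85%."""
    if pct < 60:
        return (0, 180, 0)
    if pct <= 85:
        return (220, 160, 0)
    return (220, 40, 40)

def draw_bar(draw, x, y, width, height, pct):
    """Draw a dim background rectangle plus a filled portion for pct."""
    draw.rectangle([x, y, x + width - 1, y + height - 1], fill=(40, 40, 45))
    fill_w = int(width * pct / 100)
    if fill_w > 0:
        draw.rectangle([x, y, x + fill_w - 1, y + height - 1],
                       fill=bar_color(pct))
```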

Draw a bar for each CPU core and one for memory. To read memory:

def read_memory():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            parts = line.split()
            info[parts[0].rstrip(":")] = int(parts[1])
    total = info["MemTotal"] / 1024   # KB → MB
    avail = info.get("MemAvailable", info.get("MemFree", 0)) / 1024
    return total - avail, total

Task I: Partial Updates

Right now you write the full frame (320 × 480 × 2 = 307 KB) every second. Most of the screen doesn't change between frames — only the bars and values.

Optimization: Compare the new frame buffer with the previous one row by row. Only write the rows that actually changed.

Tip
prev_raw = None
# ... in the loop:
raw = image_to_fb(img)
if prev_raw is not None:
    # Find the first and last row that differ
    y_min, y_max = None, None
    for y in range(HEIGHT):
        off = y * STRIDE
        if raw[off:off + STRIDE] != prev_raw[off:off + STRIDE]:
            if y_min is None: y_min = y
            y_max = y
    if y_min is not None:
        fb.seek(y_min * STRIDE)
        fb.write(raw[y_min * STRIDE : (y_max + 1) * STRIDE])
else:
    fb.seek(0)
    fb.write(raw)
prev_raw = raw

Measure the difference: print how many rows you write per update vs the full frame height. You should see significant reduction.

Task J: Add More Data and Polish the Layout

Add the remaining data sources:

  • Disk usage: os.statvfs("/") gives f_blocks, f_bavail, f_frsize — compute used/total in GB
  • IP address: subprocess.check_output(["hostname", "-I"]) — refresh this every 10 seconds, not every 1 second
  • Separating lines: draw.line([(x0, y), (x1, y)], fill=(60, 60, 65)) between sections
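The disk-usage bullet can be sketched like this (read_disk is our name; f_bavail counts blocks available to unprivileged users, so "used" here is slightly larger than df's figure):

```python
import os

def read_disk(path="/"):
    """Return (used_gb, total_gb) for the filesystem containing path."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize / 1e9
    free = st.f_bavail * st.f_frsize / 1e9  # space available to non-root
    return total - free, total
```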

Design the layout for 320×480 (portrait). You have plenty of vertical space. Consider:

  • A header with a title and uptime
  • A section per metric: label, value text, and progress bar
  • Dim colors for labels, brighter for values, saturated for bars
  • Consistent margins
Design Tips
  • Text brightness below ~120 — see the RGB565 text rendering section for why
  • Use 2–3 font sizes (e.g., 14 for labels, 16 for values, 20 for the title)
  • Bars and filled shapes can be brighter — they have no anti-aliased edges
  • Separate slow-changing data (IP, disk) from fast-changing data (CPU, temp)
Checkpoint

Your dashboard shows at least: CPU usage per core (with bars), memory, temperature, and IP address. It updates every second and only writes changed rows. The display should be readable from arm's length.

Task K: Create a systemd Service

In a real product the dashboard starts at boot and restarts on crash. Create a systemd unit file:

  1. Write /etc/systemd/system/spi-dashboard.service with:
       • ExecStartPre to detach the console (echo 0 > /sys/class/vtconsole/vtcon1/bind)
       • ExecStart pointing to your script
       • Restart=on-failure with RestartSec=5
  2. Enable it: sudo systemctl enable --now spi-dashboard.service
  3. Verify: reboot the Pi and confirm the dashboard appears without manual intervention
  4. Check logs: journalctl -u spi-dashboard.service -f
Tip
[Unit]
Description=SPI Display Dashboard
After=multi-user.target

[Service]
Type=simple
ExecStartPre=/bin/sh -c 'echo 0 > /sys/class/vtconsole/vtcon1/bind'
ExecStart=/usr/bin/python3 /home/pi/spi_dashboard.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

To disable: sudo systemctl disable --now spi-dashboard.service


Challenges

Challenge 1: Partial Updates

Instead of redrawing the entire 307 KB frame, update only the changed region. Modify the PIL script to write only the rows that contain updated text. Measure the speedup — how much faster is a 50-row partial update compared to a full frame?

Challenge 2: Port Single-App UI

Adapt the Single-App Fullscreen UI tutorial to work on the SPI display at 480×320. Resize the layout elements and change the framebuffer target to the SPI device you identified in Section 3.

Challenge 3: Bandwidth Calculation

Calculate the theoretical maximum FPS for your SPI display:

  • Frame size: 480 × 320 × 2 = 307,200 bytes = 2,457,600 bits
  • SPI clock: 32 MHz = 32,000,000 bits/sec
  • Max FPS = 32,000,000 / 2,457,600 = ?

Compare with your measured FPS. Why is the measured value lower than theoretical? (Hint: SPI protocol overhead, kernel scheduling, Python overhead.)
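If you want to sanity-check your arithmetic (the result also appears in the summary table):

```python
# Theoretical SPI frame-rate ceiling: frame bits divided by SPI bit rate.
frame_bits = 480 * 320 * 2 * 8  # 307,200 bytes per frame, in bits
spi_hz = 32_000_000             # 32 MHz SPI clock = 32,000,000 bits/sec
max_fps = spi_hz / frame_bits
print(f"theoretical ceiling: {max_fps:.1f} FPS")  # ~13 FPS
```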

Solution Code

Reference solutions for all tasks and challenges are available in the course repository: src/embedded-linux/solutions/spi-display/ — contains:

  • spi_dashboard.py — complete dashboard with all Tasks F–K (CPU bars, memory, temp, disk, IP, partial updates, color-coded bars)
  • touch_draw_complete.py — full drawing app with all tasks A–E (color switching, clear button, Bresenham lines, latency measurement)
  • touch_draw_lines.py — minimal solution for Task D only (line drawing)
  • challenge1_partial.py — partial update benchmark with timing comparison
  • challenge3_bandwidth.py — theoretical vs measured bandwidth analysis

Try to solve each task yourself first — the learning happens in the debugging.


Summary: SPI Displays at a Glance

| Parameter | Value |
| --- | --- |
| Interface | SPI (4-wire: MOSI, SCLK, CS, D/C) |
| Typical resolution | 320×240 – 480×320 |
| Color depth | RGB565 (16-bit, 65K colors) |
| Max SPI clock | 32–80 MHz (typically 32 MHz on Pi) |
| Max FPS (theoretical) | ~13 FPS at 320×480 @ 32 MHz |
| Measured FPS | 8–10 FPS (protocol + kernel overhead) |
| Touch-to-pixel latency | 5–15 ms (depends on region size) |
| CPU involvement | 100% — every pixel goes through the CPU |
| GPU acceleration | None — no DRM/KMS, no hardware compositing |
| Kernel driver | fbtft (legacy framebuffer, /dev/fbN) |

When to Use SPI Displays

| Use case | SPI display | Why |
| --- | --- | --- |
| Status panel / dashboard | Yes | Low refresh rate is fine, simple wiring |
| Industrial HMI (buttons, gauges) | Yes | Touch input + simple UI, no GPU needed |
| Server rack monitor | Yes | Headless, shows IP/load/temp |
| Sensor readout (lab equipment) | Yes | Real-time numbers, 1–10 FPS is enough |
| Video playback | No | Max ~10 FPS, no hardware decode path |
| Desktop environment | No | Too slow, no GPU compositing |
| Multi-touch gestures | No | Resistive touch = single point only |
| High-resolution UI | No | 320×480 max, RGB565 color banding |

Comparison with Other Display Interfaces

| | SPI | DSI | HDMI |
| --- | --- | --- | --- |
| Wiring | 5–6 GPIO pins | 15-pin ribbon cable | Standard cable |
| Max resolution | 480×320 | 800×480 – 1280×800 | Up to 4K |
| Refresh rate | ~10 FPS | 60 FPS | 60+ FPS |
| GPU acceleration | No | Yes (DRM/KMS) | Yes (DRM/KMS) |
| Touch | Separate SPI (resistive) | Often built-in (capacitive) | External USB |
| CPU load | High (every pixel) | Near zero (DMA scan-out) | Near zero |
| Power | ~20 mA backlight | ~100–300 mA | Display-dependent |
| Cost | ~$5–15 | ~$15–40 | ~$50+ |
| Best for | Status panels | Embedded UIs | Development/desktop |
Note

The next tutorial (DSI Display) covers DSI panels, which give you GPU-accelerated rendering at 60 FPS and capacitive multi-touch on the same Pi.


Deliverable

  • [ ] SPI display showing drawn content (PIL or fbi)
  • [ ] Touch input verified with evtest
  • [ ] Performance measurement table filled in
  • [ ] Drawing app with correct coordinate mapping (Task A) and at least one extension (B–E)
  • [ ] System dashboard with live CPU bars and at least 3 data sources (Tasks F–J)
  • [ ] systemd service that starts the dashboard at boot (Task K)
  • [ ] Brief note: one sentence explaining why SPI displays cannot use GPU scan-out

Course Overview | Next: DSI Display →