Display Applications: OpenCV + Input Events
Time estimate: ~30 minutes
Prerequisites: Framebuffer Basics, Single-App Fullscreen UI
Learning Objectives
By the end of this tutorial you will be able to:
- Render graphics to the framebuffer using OpenCV
- Handle keyboard input events using the evdev library
- Combine display rendering and input handling into an interactive application
Direct Framebuffer Rendering Without a Window Manager
Embedded products like kiosks, ATMs, and industrial HMIs typically run a single fullscreen application — no window manager, no compositor, no desktop environment. The application renders frames (using OpenCV, PIL, or any image library) and writes pixel data directly to /dev/fb0 or uses DRM/KMS page flips. Input events arrive from /dev/input/eventN via the kernel's evdev subsystem, bypassing X11/Wayland entirely. This architecture boots faster, uses less RAM, and eliminates an entire class of display-server bugs. The trade-off is that you handle all rendering and input yourself — there are no widgets, no layout managers, and no window stacking.
See also: Graphics Stack reference
Introduction
The previous tutorials showed how to display static or sensor-driven UIs on the framebuffer. This tutorial goes further: you will build an interactive display application that responds to input events — all without X11 or Wayland.
This pattern is common in embedded products like kiosks, vending machines, and industrial HMIs where a single fullscreen application handles both display and input.
1. Install Dependencies
- python3-opencv — image processing and rendering
- python3-evdev — reading input devices (keyboard, buttons)
- python3-numpy — array operations for framebuffer manipulation
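The install command itself isn't shown above; on a Debian- or Ubuntu-based image (an assumption — package names differ on other distributions) it would look like:

```shell
# Assumes a Debian/Ubuntu-based system; adjust package names for your distro
sudo apt-get update
sudo apt-get install -y python3-opencv python3-evdev python3-numpy
```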
2. Render to Framebuffer with OpenCV
OpenCV can create images in memory and write them directly to the framebuffer device.
python3 - <<'PY'
import cv2
import numpy as np

# Create a 1920x1080 BGR image (adjust to your display resolution)
W, H = 1920, 1080
img = np.zeros((H, W, 3), dtype=np.uint8)

# Draw some graphics
cv2.rectangle(img, (20, 20), (W - 20, 120), (255, 200, 0), 2)
cv2.putText(img, "OpenCV on Framebuffer", (40, 80),
            cv2.FONT_HERSHEY_SIMPLEX, 1.2, (255, 255, 255), 2)
cv2.circle(img, (W // 2, H // 2 + 60), 80, (0, 255, 0), 3)

# Convert BGR to RGB565 for the framebuffer
def bgr_to_rgb565(img):
    b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    rgb565 = ((r >> 3).astype(np.uint16) << 11) | \
             ((g >> 2).astype(np.uint16) << 5) | \
             (b >> 3).astype(np.uint16)
    return rgb565.tobytes()

with open("/dev/fb0", "wb") as fb:
    fb.write(bgr_to_rgb565(img))
print("Image written to framebuffer")
PY
Checkpoint
You should see a blue rectangle with "OpenCV on Framebuffer" text and a green circle on your display.
Stuck?
If you see garbled output, check your display's pixel format with fbset -i. The code above assumes RGB565 (16-bit). For 32-bit displays, write the BGR image directly without conversion.
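You can also pick the right conversion automatically by reading the bit depth from sysfs. A sketch assuming the standard fbdev sysfs layout (`/sys/class/graphics/fb0/bits_per_pixel`) and the common BGRX byte order for 32-bit framebuffers:

```python
import numpy as np

def read_fb_bpp(fb="fb0"):
    # Standard fbdev sysfs attribute; typically 16 or 32
    with open(f"/sys/class/graphics/{fb}/bits_per_pixel") as f:
        return int(f.read())

def bgr_to_fb_bytes(img, bpp):
    """Convert an OpenCV BGR image to raw framebuffer bytes for the given depth."""
    b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    if bpp == 16:  # RGB565: 5 bits red, 6 bits green, 5 bits blue
        px = ((r >> 3).astype(np.uint16) << 11) | \
             ((g >> 2).astype(np.uint16) << 5) | \
             (b >> 3).astype(np.uint16)
        return px.tobytes()
    if bpp == 32:  # most 32-bit framebuffers expect B, G, R plus a padding byte
        h, w, _ = img.shape
        out = np.zeros((h, w, 4), dtype=np.uint8)
        out[:, :, :3] = img  # fourth byte stays 0 (padding/alpha)
        return out.tobytes()
    raise ValueError(f"unhandled bit depth: {bpp}")
```

With these helpers the write becomes `fb.write(bgr_to_fb_bytes(img, read_fb_bpp()))` regardless of panel depth.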
3. Read Input Events with evdev
The evdev library reads input devices directly from /dev/input/ without needing X11.
First, find your input device by listing everything evdev can see:

python3 -c 'import evdev; [print(p, evdev.InputDevice(p).name) for p in evdev.list_devices()]'
Tip
If the Python command returns no devices, check the raw kernel view:

cat /proc/bus/input/devices

Look for a Handlers= line containing eventX — that is your /dev/input/eventX path. You may need sudo to read input devices.
Then read key events:
python3 - <<'PY'
import evdev

# Use the first available input device (adjust path if needed)
devices = [evdev.InputDevice(p) for p in evdev.list_devices()]
if not devices:
    print("No input devices found")
    raise SystemExit(1)

dev = devices[0]
print(f"Reading from: {dev.name}")
print("Press keys (Ctrl+C to stop)...")
for event in dev.read_loop():
    if event.type == evdev.ecodes.EV_KEY:
        key = evdev.categorize(event)
        if key.keystate == key.key_down:
            print(f"Key pressed: {key.keycode}")
PY
Stuck?
If you get a permission error, run with sudo or add your user to the input group: sudo usermod -aG input $USER
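For reference, evdev is a thin wrapper: the kernel delivers fixed-size struct input_event records through /dev/input/eventX. A stdlib-only sketch of the decoding — the "llHHi" layout is an assumption that holds on 64-bit systems, where each record is 24 bytes:

```python
import struct

# struct input_event: timeval (sec, usec), then type, code, value
EVENT_FORMAT = "llHHi"  # assumes 64-bit time fields
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

def parse_event(data):
    """Decode one raw input_event record into a dict."""
    sec, usec, etype, code, value = struct.unpack(EVENT_FORMAT, data)
    return {"time": sec + usec / 1e6, "type": etype, "code": code, "value": value}

# Reading without python3-evdev (hardware-dependent, shown for illustration):
# with open("/dev/input/event0", "rb") as f:
#     while True:
#         ev = parse_event(f.read(EVENT_SIZE))
#         if ev["type"] == 1 and ev["value"] == 1:  # EV_KEY, key down
#             print("key code:", ev["code"])
```

The evdev library handles this unpacking (plus keycode names and device capabilities) for you, which is why the tutorial uses it.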
4. Interactive Display Application
Combine rendering and input into a single application. This example draws a movable cursor on screen:
python3 - <<'PY'
import cv2
import numpy as np
import evdev

W, H = 1920, 1080
cursor_x, cursor_y = W // 2, H // 2
STEP = 20

# Find a device that reports key events
devices = [evdev.InputDevice(p) for p in evdev.list_devices()]
kbd = None
for d in devices:
    caps = d.capabilities(verbose=True)
    if any("EV_KEY" in str(c) for c in caps):
        kbd = d
        break
if not kbd:
    print("No keyboard found")
    raise SystemExit(1)

print(f"Using: {kbd.name}")
print("Arrow keys to move, Q to quit")

def render(cx, cy):
    img = np.zeros((H, W, 3), dtype=np.uint8)
    cv2.putText(img, "Interactive Display", (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    cv2.putText(img, f"Position: ({cx}, {cy})", (20, 80),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (180, 180, 180), 1)
    cv2.circle(img, (cx, cy), 15, (0, 0, 255), -1)
    cv2.circle(img, (cx, cy), 16, (255, 255, 255), 1)
    # Convert to RGB565
    b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    rgb565 = ((r >> 3).astype(np.uint16) << 11) | \
             ((g >> 2).astype(np.uint16) << 5) | \
             (b >> 3).astype(np.uint16)
    with open("/dev/fb0", "wb") as fb:
        fb.write(rgb565.tobytes())

render(cursor_x, cursor_y)
try:
    for event in kbd.read_loop():
        if event.type == evdev.ecodes.EV_KEY and event.value == 1:
            if event.code == evdev.ecodes.KEY_UP:
                cursor_y = max(0, cursor_y - STEP)
            elif event.code == evdev.ecodes.KEY_DOWN:
                cursor_y = min(H - 1, cursor_y + STEP)
            elif event.code == evdev.ecodes.KEY_LEFT:
                cursor_x = max(0, cursor_x - STEP)
            elif event.code == evdev.ecodes.KEY_RIGHT:
                cursor_x = min(W - 1, cursor_x + STEP)
            elif event.code == evdev.ecodes.KEY_Q:
                break
            render(cursor_x, cursor_y)
except KeyboardInterrupt:
    pass
print("Done")
PY
Checkpoint
You should see a red cursor on screen that moves with arrow keys.
What Just Happened?
You built an interactive embedded UI without any window manager:
- OpenCV handled rendering (text, shapes, image processing)
- evdev read hardware input events directly from the kernel
- Framebuffer provided the display output path
This is the same architecture used in ATMs, point-of-sale terminals, and industrial control panels. The key insight is that Linux provides all the building blocks — you just skip the desktop layers you don't need.
Challenges
Challenge 1: Live Camera Feed
Modify the application to capture frames from the camera (cv2.VideoCapture) and display them on the framebuffer. Use a key press to take a snapshot.
Challenge 2: System Monitor
Build a display app that shows CPU temperature, memory usage, and disk space. Update every second. Use color thresholds (green → yellow → red) for temperature warnings.
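As a starting point for Challenge 2, the metrics can be read from /proc and /sys with the standard library. A minimal sketch — parse_meminfo and temp_color are hypothetical helpers, and the thermal zone path varies by board:

```python
def parse_meminfo(text):
    """Extract MemTotal/MemAvailable (in kB) from /proc/meminfo content."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("MemTotal", "MemAvailable"):
            fields[key] = int(rest.split()[0])  # value is in kB
    return fields

def mem_used_percent():
    with open("/proc/meminfo") as f:
        m = parse_meminfo(f.read())
    return 100 * (1 - m["MemAvailable"] / m["MemTotal"])

def cpu_temp_c(zone="/sys/class/thermal/thermal_zone0/temp"):
    # Reported in millidegrees Celsius; the zone path varies by board
    with open(zone) as f:
        return int(f.read()) / 1000

def temp_color(t):
    # BGR thresholds: green below 60 C, yellow to 75 C, red above
    if t < 60:
        return (0, 255, 0)
    if t < 75:
        return (0, 255, 255)
    return (0, 0, 255)
```

Feed temp_color's BGR tuple straight into cv2.putText, then redraw once per second in a loop.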
Deliverable
- Screenshot or photo of the interactive display application running
- Source code of your application