Linux Input Subsystem
Goal: Understand how Linux handles keyboards, mice, touchscreens, joysticks, and other input devices — from hardware interrupt to userspace event — so you can read input, create virtual devices, and debug input problems on embedded targets.
Related Tutorials
For hands-on practice, see: Input Events (raw evdev reading, multi-touch, gesture detection) | Display Apps (keyboard input with evdev) | SPI Display (resistive touch) | SDL2 Touch Paint (touch responsiveness) | DSI Display (capacitive touch) | Doom on Pi (uinput virtual devices, touch overlay)
Every keyboard press, mouse movement, and finger touch passes through the same kernel subsystem before reaching your application. Understanding this subsystem lets you read any input device, create virtual devices that inject synthetic events, measure input latency, and debug "my touch doesn't work" problems. The same framework handles a USB gaming keyboard and a capacitive touchscreen — the difference is which event types and codes the device reports.
1. Architecture Overview
The Linux input subsystem connects hardware drivers to userspace applications through a layered pipeline:
graph TD
subgraph "Hardware"
HW1[USB Keyboard]
HW2[Capacitive Touch<br>FT5x06 / GT911]
HW3[Resistive Touch<br>XPT2046]
HW4[USB Mouse]
end
subgraph "Kernel: Input Drivers"
D1[HID driver]
D2[I2C touch driver]
D3[SPI touch driver<br>ads7846]
D4[HID driver]
end
subgraph "Kernel: Input Core"
IC[input_dev<br>registration]
EH[Event Handlers]
EVD[evdev handler<br>/dev/input/eventN]
end
subgraph "Userspace"
LIB[libevdev / python-evdev]
LI[libinput]
SDL[SDL2 event loop]
APP[Application]
end
HW1 --> D1 --> IC
HW2 --> D2 --> IC
HW3 --> D3 --> IC
HW4 --> D4 --> IC
IC --> EH --> EVD
EVD --> LIB --> APP
EVD --> LI --> SDL --> APP
The pipeline in detail
| Stage | What happens | Latency contribution |
|---|---|---|
| Hardware sampling | Touch controller or keyboard scans for input | 1–10 ms (depends on scan rate) |
| Bus transfer | USB/I2C/SPI transaction delivers data to kernel | 0.1–2 ms |
| Kernel driver | Parses hardware data, calls input_report_*() | <0.1 ms |
| Input core | Routes event to registered handlers | <0.01 ms |
| evdev handler | Copies event to per-client ring buffer | <0.01 ms |
| Userspace read | Application reads from /dev/input/eventN | 0–1 ms (depends on polling) |
| Total | Hardware to application | 2–15 ms typical |
2. The input_event Struct
Every input event — key press, touch coordinate, mouse movement — is delivered as the same 24-byte struct (on 64-bit systems):
struct input_event {
    struct timeval time;  /* timestamp (seconds + microseconds) */
    __u16 type;           /* event type: EV_KEY, EV_ABS, etc. */
    __u16 code;           /* which key/axis/button */
    __s32 value;          /* key state, coordinate, or delta */
};
On 32-bit ARM (Raspberry Pi OS 32-bit), this is 16 bytes. On 64-bit, struct timeval is larger — use struct input_event from <linux/input.h>, never hardcode the size.
Reading raw events from C
#include <linux/input.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
int fd = open("/dev/input/event0", O_RDONLY);
struct input_event ev;
while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
    printf("type=%d code=%d value=%d\n", ev.type, ev.code, ev.value);
}
Reading with Python evdev
import evdev
dev = evdev.InputDevice("/dev/input/event0")
for event in dev.read_loop():
    print(evdev.categorize(event))
The evdev library wraps the same read() call and decodes event types and codes into human-readable names.
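To see exactly what evdev wraps, you can decode the raw bytes yourself with Python's struct module. This is a sketch, not a portable implementation: the "llHHi" format string assumes the 64-bit Linux layout (two native longs for the timestamp, then type, code, value); on 32-bit ARM the longs shrink to 4 bytes, which is why the struct is 16 bytes there. The decode_event helper name is made up for illustration.

```python
import struct

# 64-bit layout: two signed longs (tv_sec, tv_usec), two u16, one s32
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)  # 24 on 64-bit systems

def decode_event(buf):
    """Decode one raw input_event buffer into a dict."""
    sec, usec, etype, code, value = struct.unpack(EVENT_FORMAT, buf)
    return {"time": sec + usec / 1e6, "type": etype, "code": code, "value": value}

# Synthetic EV_KEY press of KEY_ENTER (type=1, code=28, value=1)
raw = struct.pack(EVENT_FORMAT, 1700000000, 500000, 1, 28, 1)
print(decode_event(raw))
```

In a real reader you would feed decode_event successive EVENT_SIZE chunks read from /dev/input/eventN.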
3. Event Types
Each event has a type field that identifies what category of input it represents:
| Type | Constant | Purpose | Example |
|---|---|---|---|
| 0x00 | EV_SYN | Synchronization — marks end of an event packet | Separates multi-axis touch reports |
| 0x01 | EV_KEY | Key/button press or release | Keyboard keys, mouse buttons, BTN_TOUCH |
| 0x02 | EV_REL | Relative axis movement | Mouse X/Y delta, scroll wheel |
| 0x03 | EV_ABS | Absolute axis position | Touch X/Y, joystick, pressure |
| 0x04 | EV_MSC | Miscellaneous | Scan codes, timestamps |
| 0x11 | EV_LED | LED control | Caps Lock LED, Num Lock LED |
| 0x15 | EV_FF | Force feedback | Rumble motors in game controllers |
EV_KEY — Keys and Buttons
The value field indicates the state:
| Value | Meaning |
|---|---|
| 0 | Released |
| 1 | Pressed |
| 2 | Auto-repeat (held down) |
Common codes:
| Code | Constant | Device |
|---|---|---|
| 1 | KEY_ESC | Keyboard |
| 28 | KEY_ENTER | Keyboard |
| 103 | KEY_UP | Keyboard |
| 272 | BTN_LEFT | Mouse |
| 330 | BTN_TOUCH | Touchscreen |
EV_ABS — Absolute Axes
Used by touchscreens, joysticks, and tablets. The value field is the absolute position. Each axis has a defined range reported in device capabilities.
| Code | Constant | Typical use |
|---|---|---|
| 0x00 | ABS_X | Single-touch X coordinate |
| 0x01 | ABS_Y | Single-touch Y coordinate |
| 0x18 | ABS_PRESSURE | Touch pressure |
| 0x2f | ABS_MT_SLOT | Multi-touch finger slot index |
| 0x35 | ABS_MT_POSITION_X | Multi-touch X coordinate |
| 0x36 | ABS_MT_POSITION_Y | Multi-touch Y coordinate |
| 0x39 | ABS_MT_TRACKING_ID | Multi-touch finger ID (–1 = lift) |
EV_REL — Relative Axes
Used by mice. The value field is the delta (change since last event).
| Code | Constant | Typical use |
|---|---|---|
| 0x00 | REL_X | Mouse horizontal movement |
| 0x01 | REL_Y | Mouse vertical movement |
| 0x08 | REL_WHEEL | Scroll wheel |
EV_SYN — Synchronization
SYN_REPORT (code 0) marks the boundary between complete event packets. A touchscreen reports X, Y, and pressure as separate events — the SYN_REPORT at the end tells the reader that all axes for this sample have been delivered:
EV_ABS ABS_MT_POSITION_X 542
EV_ABS ABS_MT_POSITION_Y 318
EV_KEY BTN_TOUCH 1
EV_SYN SYN_REPORT 0 ← process this packet now
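A reader can buffer events until SYN_REPORT arrives and then process the packet as a unit. A minimal sketch using raw type/code numbers from the tables above (the packetize helper is made up for illustration):

```python
# Group (type, code, value) tuples into packets delimited by SYN_REPORT.
EV_SYN, SYN_REPORT = 0x00, 0

def packetize(events):
    """Return a list of packets, one per SYN_REPORT frame."""
    packet, packets = [], []
    for etype, code, value in events:
        if etype == EV_SYN and code == SYN_REPORT:
            packets.append(packet)  # frame complete: hand it off
            packet = []
        else:
            packet.append((etype, code, value))
    return packets

# The touch report from the text: X, Y, BTN_TOUCH, then SYN_REPORT
stream = [(3, 0x35, 542), (3, 0x36, 318), (1, 330, 1), (0, 0, 0)]
print(packetize(stream))  # [[(3, 53, 542), (3, 54, 318), (1, 330, 1)]]
```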
4. Device Capabilities and Discovery
Each input device declares what event types and codes it supports. This is how userspace distinguishes a keyboard from a touchscreen.
/proc/bus/input/devices
I: Bus=0018 Vendor=0000 Product=0000 Version=0000
N: Name="Goodix Capacitive TouchScreen"
P: Phys=input/ts
S: Sysfs=/devices/platform/.../i2c-1/1-005d/input/input2
U: Uniq=
H: Handlers=event2
B: PROP=2
B: EV=b
B: KEY=400 0 0 0 0 0
B: ABS=2658000 3
The B: (bitmap) lines encode capabilities. The H: line tells you which /dev/input/eventN file to open.
Querying with evtest
Running sudo evtest with no arguments lists all input devices with their capabilities and lets you pick one; sudo evtest /dev/input/eventN reads that device's events in real time. This is the most useful input debugging tool.
Querying with Python
import evdev
for path in evdev.list_devices():
    dev = evdev.InputDevice(path)
    caps = dev.capabilities(verbose=True)
    print(f"{dev.path}: {dev.name}")
    for ev_type, codes in caps.items():
        print(f" {ev_type}: {codes}")
Identifying device type from capabilities
| Device type | Must have | Typical extras |
|---|---|---|
| Keyboard | EV_KEY with KEY_A..KEY_Z | EV_LED, EV_REP |
| Mouse | EV_REL with REL_X, REL_Y | EV_KEY with BTN_LEFT |
| Touchscreen | EV_ABS with ABS_X/ABS_Y or ABS_MT_* | EV_KEY with BTN_TOUCH, INPUT_PROP_DIRECT |
| Joystick | EV_ABS with ABS_X/ABS_Y | EV_KEY with BTN_A..BTN_Z |
The INPUT_PROP_DIRECT property distinguishes touchscreens (direct mapping — finger coordinates match screen pixels) from touchpads (indirect — finger movement is relative).
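The table's rules can be turned into a rough heuristic over a plain {ev_type: [codes]} dict, the shape python-evdev's capabilities() returns. This is an illustrative sketch, not a standard API; the constants mirror <linux/input-event-codes.h>:

```python
# Rough device classification from a capability dict {ev_type: [codes]}.
EV_KEY, EV_REL, EV_ABS = 0x01, 0x02, 0x03
KEY_A, BTN_TOUCH = 30, 330
REL_X, REL_Y = 0x00, 0x01
ABS_X, ABS_Y, ABS_MT_POSITION_X = 0x00, 0x01, 0x35

def classify(caps):
    keys = set(caps.get(EV_KEY, []))
    rels = set(caps.get(EV_REL, []))
    abss = set(caps.get(EV_ABS, []))
    # MT axes, or single-touch axes plus BTN_TOUCH, indicate a touchscreen
    if ABS_MT_POSITION_X in abss or ({ABS_X, ABS_Y} <= abss and BTN_TOUCH in keys):
        return "touchscreen"
    if {REL_X, REL_Y} <= rels:
        return "mouse"
    if {ABS_X, ABS_Y} <= abss:
        return "joystick"
    if KEY_A in keys:
        return "keyboard"
    return "unknown"

print(classify({EV_ABS: [0x35, 0x36], EV_KEY: [BTN_TOUCH]}))  # touchscreen
print(classify({EV_REL: [0, 1], EV_KEY: [272]}))              # mouse
```

A production classifier would also check INPUT_PROP_DIRECT to separate touchscreens from touchpads.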
5. Multi-Touch Protocol
Modern capacitive touchscreens support multiple simultaneous fingers. Linux uses the MT protocol type B (slotted), where each concurrent finger gets a slot:
EV_ABS ABS_MT_SLOT 0 ← finger 0
EV_ABS ABS_MT_TRACKING_ID 45 ← unique ID for this contact
EV_ABS ABS_MT_POSITION_X 320
EV_ABS ABS_MT_POSITION_Y 240
EV_ABS ABS_MT_SLOT 1 ← finger 1
EV_ABS ABS_MT_TRACKING_ID 46
EV_ABS ABS_MT_POSITION_X 500
EV_ABS ABS_MT_POSITION_Y 400
EV_SYN SYN_REPORT 0
When a finger lifts, its slot's tracking ID becomes –1:
EV_ABS ABS_MT_SLOT 0
EV_ABS ABS_MT_TRACKING_ID -1 ← finger in slot 0 lifted
EV_SYN SYN_REPORT 0
Slot tracking in userspace
Track position per slot and respond to tracking ID changes:
from evdev import InputDevice, ecodes

dev = InputDevice("/dev/input/event2")  # touchscreen path; adjust for your device
slot = 0
positions = {}  # slot → (x, y)

for event in dev.read_loop():
    if event.type == ecodes.EV_ABS:
        if event.code == ecodes.ABS_MT_SLOT:
            slot = event.value
        elif event.code == ecodes.ABS_MT_TRACKING_ID:
            if event.value == -1:
                positions.pop(slot, None)  # finger lifted
        elif event.code == ecodes.ABS_MT_POSITION_X:
            positions.setdefault(slot, [0, 0])[0] = event.value
        elif event.code == ecodes.ABS_MT_POSITION_Y:
            positions.setdefault(slot, [0, 0])[1] = event.value
    elif event.type == ecodes.EV_SYN:
        process(positions)  # all slots updated for this frame
This is exactly the pattern used in the Doom touch overlay for mapping margin touches to keyboard events.
6. Coordinate Mapping and Calibration
Touch devices report raw coordinates in their hardware range (e.g., 0–4095 for a 12-bit ADC). You must map these to screen pixels.
Reading the hardware range
from evdev import InputDevice, ecodes

dev = InputDevice("/dev/input/event2")  # touchscreen path; adjust for your device
abs_info = dev.capabilities(absinfo=True)[ecodes.EV_ABS]
for code, info in abs_info:
    if code == ecodes.ABS_MT_POSITION_X:
        print(f"X range: {info.min}..{info.max}")  # e.g., 0..4095
Linear mapping
screen_x = (raw_x - x_min) / (x_max - x_min) * screen_width
screen_y = (raw_y - y_min) / (y_max - y_min) * screen_height
Axis swapping and inversion
Some touch panels report axes rotated or mirrored relative to the display:
| Symptom | Fix |
|---|---|
| Touch X moves display Y | Swap X and Y |
| Touch at left → pixel at right | Invert X: screen_x = screen_width - screen_x |
| Touch at top → pixel at bottom | Invert Y: screen_y = screen_height - screen_y |
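The linear mapping and the fixes above can be combined into one helper. A sketch with made-up default ranges; real values come from the device's absinfo:

```python
def touch_to_screen(raw_x, raw_y, x_min=0, x_max=4095, y_min=0, y_max=4095,
                    screen_w=800, screen_h=480,
                    swap_xy=False, invert_x=False, invert_y=False):
    """Map raw touch coordinates to screen pixels, with optional swap/invert."""
    if swap_xy:
        raw_x, raw_y = raw_y, raw_x
    sx = (raw_x - x_min) / (x_max - x_min) * screen_w
    sy = (raw_y - y_min) / (y_max - y_min) * screen_h
    if invert_x:
        sx = screen_w - sx
    if invert_y:
        sy = screen_h - sy
    return int(sx), int(sy)

print(touch_to_screen(2048, 2048))           # (400, 240): roughly screen centre
print(touch_to_screen(0, 0, invert_x=True))  # (800, 0): left edge maps to right
```

Determine the swap/invert flags empirically with evtest: touch a known corner and compare the raw values against the expected pixel.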
Device Tree calibration
For DRM/KMS + libinput setups, calibration is specified in the device tree or via a libinput calibration matrix (the LIBINPUT_CALIBRATION_MATRIX udev property).
For direct evdev access (as in the tutorials), handle calibration in your code.
7. Virtual Input Devices (uinput)
The /dev/uinput interface lets userspace programs create virtual input devices — synthetic keyboards, mice, or touchscreens that inject events into the input subsystem. Other programs (including SDL2) cannot distinguish these events from real hardware.
Why uinput?
| Use case | Example |
|---|---|
| Touch-to-keyboard mapping | Doom touch overlay — touch in screen margins → arrow keys |
| IMU-to-keyboard mapping | Doom IMU controller — tilt → arrow keys |
| Automated testing | Inject key sequences for UI testing without a physical keyboard |
| Accessibility | Alternative input methods (eye tracking, sip-and-puff) → standard events |
| Remote control | Network-received commands → local input events |
Creating a virtual keyboard (Python)
from evdev import UInput, ecodes
# Declare which keys this virtual device supports
ui = UInput(
    {ecodes.EV_KEY: [ecodes.KEY_UP, ecodes.KEY_DOWN, ecodes.KEY_SPACE]},
    name="my-virtual-keyboard",
)
print(f"Created: {ui.device.path}")
# Press and release a key
ui.write(ecodes.EV_KEY, ecodes.KEY_SPACE, 1) # press
ui.syn() # send SYN_REPORT
ui.write(ecodes.EV_KEY, ecodes.KEY_SPACE, 0) # release
ui.syn()
ui.close()
Creating a virtual keyboard (C)
#include <linux/uinput.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
/* Enable EV_KEY and specific keys */
ioctl(fd, UI_SET_EVBIT, EV_KEY);
ioctl(fd, UI_SET_KEYBIT, KEY_UP);
ioctl(fd, UI_SET_KEYBIT, KEY_SPACE);
/* Create the device */
struct uinput_setup setup = {0};
strncpy(setup.name, "my-virtual-keyboard", UINPUT_MAX_NAME_SIZE);
setup.id.bustype = BUS_USB;
ioctl(fd, UI_DEV_SETUP, &setup);
ioctl(fd, UI_DEV_CREATE);
sleep(1); /* wait for device node to appear */
/* Inject a key press */
struct input_event ev = {0};
ev.type = EV_KEY;
ev.code = KEY_SPACE;
ev.value = 1;
write(fd, &ev, sizeof(ev));
/* Send SYN_REPORT */
ev.type = EV_SYN;
ev.code = SYN_REPORT;
ev.value = 0;
write(fd, &ev, sizeof(ev));
Permission requirements
/dev/uinput requires root or membership in the input group (on many distributions the node is root-only by default). For a quick test, run sudo usermod -aG input $USER and re-login. For persistent permissions, create a udev rule, for example /etc/udev/rules.d/99-uinput.rules containing KERNEL=="uinput", GROUP="input", MODE="0660".
8. Exclusive Access (Grab)
A process can grab a device for exclusive access via the EVIOCGRAB ioctl (dev.grab() in python-evdev) — other processes (including SDL2) will not receive events from it.
This is useful when an overlay process needs to intercept all events from a device and selectively re-inject some of them via uinput. Without grabbing, both the overlay and the application would see the same raw events.
Warning
Grabbing a keyboard makes it invisible to other processes. If your overlay crashes while holding a grab, you lose keyboard input until you kill the process (via SSH or serial).
9. Userspace Access Methods Compared
| Method | Language | Level | When to use |
|---|---|---|---|
| Raw read() on /dev/input/eventN | C | Lowest | Kernel module testing, minimal-dependency embedded apps |
| libevdev | C | Low | C applications that need capability queries + event reading |
| python-evdev | Python | Low | Prototyping, overlay scripts, debugging tools |
| libinput | C | Mid | Desktop-grade input handling with calibration, gestures, palm rejection |
| SDL2 events | C/Python | High | Games and interactive apps — abstracts keyboard, mouse, touch, gamepad |
SDL2's input handling
SDL2 opens /dev/input/eventN devices internally and maps them to SDL events:
| Linux event | SDL2 event | Coordinates |
|---|---|---|
| EV_KEY with keyboard codes | SDL_KEYDOWN / SDL_KEYUP | Key symbol |
| EV_KEY with BTN_LEFT | SDL_MOUSEBUTTONDOWN | Pixel position |
| EV_REL | SDL_MOUSEMOTION | Pixel delta |
| EV_ABS (single-touch) | SDL_MOUSEMOTION | Pixel position |
| EV_ABS (multi-touch MT) | SDL_FINGERDOWN / SDL_FINGERMOTION | Normalized 0.0–1.0 |
SDL2 automatically discovers input devices at startup. Virtual devices created via uinput appear as normal devices — SDL2 picks them up without configuration. This is why the Doom touch overlay works: the overlay creates a virtual keyboard, and SDL2 reads from it alongside the real keyboard.
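Since SDL2 delivers finger coordinates normalized to 0.0–1.0, applications that draw in pixels convert them per event. A trivial sketch (the function name and window size are assumptions):

```python
# Convert SDL2's normalized finger coordinates (0.0–1.0) to window pixels,
# clamping so that an exact 1.0 at the edge stays inside the window.
def finger_to_pixels(nx, ny, win_w=800, win_h=480):
    x = min(win_w - 1, int(nx * win_w))
    y = min(win_h - 1, int(ny * win_h))
    return x, y

print(finger_to_pixels(0.5, 0.5))  # (400, 240)
print(finger_to_pixels(1.0, 1.0))  # (799, 479)
```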
10. Debugging Input Problems
Essential tools
# List all input devices with capabilities
sudo evtest
# Watch events from a specific device
sudo evtest /dev/input/event2
# List devices (quick overview)
cat /proc/bus/input/devices
# Check permissions
ls -la /dev/input/event*
# Check which process has a device grabbed
sudo lsof /dev/input/event2
Common problems
| Symptom | Cause | Fix |
|---|---|---|
| No /dev/input/eventN for your device | Driver not loaded or device not detected | Check dmesg, load driver module |
| Permission denied on /dev/input/eventN | Not root and not in input group | sudo usermod -aG input $USER + re-login |
| Touch coordinates are wrong | Axis swap/inversion, or wrong calibration | Use evtest to check raw values, adjust mapping |
| SDL2 ignores your virtual keyboard | Device created after SDL2 init | Create uinput device before launching the application |
| Double events (real + virtual) | Overlay injects events but does not grab the original device | Use dev.grab() if needed |
| Touch works in evtest but not in app | App expects mouse events, device reports touch | Check if app handles SDL_FINGER* or only SDL_MOUSE* |
Latency measurement
Measure input-to-application latency by comparing the kernel timestamp in the event with the current time:
import time, evdev
dev = evdev.InputDevice("/dev/input/event0")
for event in dev.read_loop():
    if event.type != evdev.ecodes.EV_SYN:
        kernel_time = event.timestamp()
        now = time.time()
        latency_ms = (now - kernel_time) * 1000
        print(f"Latency: {latency_ms:.2f} ms")
11. Kernel-Side Overview
For driver developers — the kernel API that hardware drivers use to report input events.
Registering an input device
#include <linux/input.h>
struct input_dev *idev = input_allocate_device();
idev->name = "My Touch Controller";
idev->phys = "spi0.0/input0";
/* Declare capabilities */
set_bit(EV_ABS, idev->evbit);
set_bit(EV_KEY, idev->evbit);
set_bit(BTN_TOUCH, idev->keybit);
input_set_abs_params(idev, ABS_X, 0, 4095, 0, 0);
input_set_abs_params(idev, ABS_Y, 0, 4095, 0, 0);
input_register_device(idev);
Reporting events
/* In interrupt handler or polling function: */
input_report_abs(idev, ABS_X, x_value);
input_report_abs(idev, ABS_Y, y_value);
input_report_key(idev, BTN_TOUCH, 1);
input_sync(idev); /* sends SYN_REPORT */
input_sync() is the kernel equivalent of SYN_REPORT — it tells the input core that all axes for this sample have been reported and the packet is complete.
Device tree binding
Touch controller drivers are typically bound via device tree. Example for the FT5x06 on I2C:
&i2c1 {
    touchscreen@38 {
        compatible = "edt,edt-ft5406";
        reg = <0x38>;
        interrupt-parent = <&gpio>;
        interrupts = <4 IRQ_TYPE_EDGE_FALLING>;
        touchscreen-size-x = <800>;
        touchscreen-size-y = <480>;
    };
};
The touchscreen-size-x/y properties tell the driver what coordinate range to report — the input core uses this for the ABS_X/ABS_Y max values that userspace reads from capabilities.