# Testing Embedded Systems

Testing is critical in embedded systems, where bugs can mean hardware damage, safety hazards, or expensive recalls. This guide covers testing strategies from your robot up to professional automotive and aerospace systems.
## Why Testing Matters in Embedded Systems

A desktop software bug:

```
User sees error → Restart app → Annoying
```

An embedded software bug:

```
Robot crashes into wall → Annoying
Car brake fails         → Injury/death
Aircraft control fails  → Catastrophic
```

The difference: embedded systems interact with the physical world, where software bugs become physical consequences.
## Testing Pyramid for Embedded

```
      ┌─────────────────┐
      │   System Test   │ ← Real hardware, real world
      │  (HIL, Vehicle) │   Expensive, slow, essential
    ┌─┴─────────────────┴─┐
    │  Integration Test   │ ← Components working together
    │ (Module + Hardware) │   Find interface bugs
  ┌─┴─────────────────────┴─┐
  │        Unit Test        │ ← Individual functions
  │     (Software only)     │   Fast, cheap, many
  └─────────────────────────┘
```

Professional embedded: all three levels, automated, traced to requirements.
Your robot: focus on unit tests plus manual system tests.
## Unit Testing Your Robot Code

### What to Test
| Component | What to Test | Example |
|---|---|---|
| PID controller | Output for given inputs | `pid.update(error=10)` returns expected value |
| State machine | Transitions, actions | IDLE + button → RUNNING |
| Sensor processing | Position calculation | 4 sensors → correct position |
| Motor mapping | PWM calculation | `speed=100` → correct duty cycle |
| Filters | Smoothing behavior | Noisy input → smooth output |
### Simple Unit Test Framework

```python
# test_framework.py - Minimal test framework for MicroPython

class TestRunner:
    def __init__(self):
        self.passed = 0
        self.failed = 0
        self.errors = []

    def assert_equal(self, actual, expected, message=""):
        if actual == expected:
            self.passed += 1
            print(f"  ✓ {message}")
        else:
            self.failed += 1
            self.errors.append(f"{message}: expected {expected}, got {actual}")
            print(f"  ✗ {message}: expected {expected}, got {actual}")

    def assert_close(self, actual, expected, tolerance=0.01, message=""):
        if abs(actual - expected) <= tolerance:
            self.passed += 1
            print(f"  ✓ {message}")
        else:
            self.failed += 1
            self.errors.append(f"{message}: expected {expected}±{tolerance}, got {actual}")
            print(f"  ✗ {message}")

    def assert_true(self, condition, message=""):
        self.assert_equal(condition, True, message)

    def run_test(self, name, test_func):
        print(f"\n{name}:")
        try:
            test_func(self)
        except Exception as e:
            self.failed += 1
            self.errors.append(f"{name}: Exception {e}")
            print(f"  ✗ Exception: {e}")

    def summary(self):
        total = self.passed + self.failed
        print(f"\n{'=' * 40}")
        print(f"Results: {self.passed}/{total} passed")
        if self.errors:
            print("Failures:")
            for e in self.errors:
                print(f"  - {e}")
        return self.failed == 0
```
### Testing a PID Controller

```python
# test_pid.py
from test_framework import TestRunner

# The code under test
class PIDController:
    def __init__(self, kp, ki, kd, dt):
        self.kp = kp
        self.ki = ki
        self.kd = kd
        self.dt = dt
        self.integral = 0
        self.prev_error = 0

    def update(self, error):
        p = self.kp * error
        self.integral += error * self.dt
        i = self.ki * self.integral
        d = self.kd * (error - self.prev_error) / self.dt
        self.prev_error = error
        return p + i + d

    def reset(self):
        self.integral = 0
        self.prev_error = 0

# Tests
def test_proportional_only(t):
    """P-only controller should output Kp * error"""
    pid = PIDController(kp=2.0, ki=0, kd=0, dt=0.01)
    output = pid.update(error=10)
    t.assert_equal(output, 20.0, "P output = Kp * error")

def test_integral_accumulates(t):
    """Integral should accumulate over multiple calls"""
    pid = PIDController(kp=0, ki=1.0, kd=0, dt=0.1)
    pid.update(error=10)           # integral = 10 * 0.1 = 1.0
    pid.update(error=10)           # integral = 1.0 + 1.0 = 2.0
    output = pid.update(error=10)  # integral = 3.0
    t.assert_close(output, 3.0, tolerance=0.01, message="Integral accumulates")

def test_derivative_responds_to_change(t):
    """Derivative should respond to error change"""
    pid = PIDController(kp=0, ki=0, kd=1.0, dt=0.1)
    pid.update(error=0)
    output = pid.update(error=10)  # d = 1.0 * (10 - 0) / 0.1 = 100
    t.assert_close(output, 100.0, tolerance=0.01, message="Derivative responds")

def test_reset_clears_state(t):
    """Reset should clear integral and previous error"""
    pid = PIDController(kp=1.0, ki=1.0, kd=1.0, dt=0.1)
    pid.update(error=10)
    pid.update(error=20)
    pid.reset()
    # After reset, should behave like a fresh controller
    t.assert_equal(pid.integral, 0, "Integral cleared")
    t.assert_equal(pid.prev_error, 0, "Previous error cleared")

def test_zero_error_zero_output(t):
    """Zero error should produce zero output (for a fresh controller)"""
    pid = PIDController(kp=1.0, ki=1.0, kd=1.0, dt=0.1)
    output = pid.update(error=0)
    t.assert_equal(output, 0, "Zero error → zero output")

# Run tests
if __name__ == "__main__":
    runner = TestRunner()
    runner.run_test("P-only output", test_proportional_only)
    runner.run_test("Integral accumulation", test_integral_accumulates)
    runner.run_test("Derivative response", test_derivative_responds_to_change)
    runner.run_test("Reset clears state", test_reset_clears_state)
    runner.run_test("Zero error handling", test_zero_error_zero_output)
    runner.summary()
```
### Testing a State Machine

```python
# test_state_machine.py
from test_framework import TestRunner

# State machine under test
class State:
    IDLE = 0
    RUNNING = 1
    ERROR = 2

class Event:
    START = "start"
    STOP = "stop"
    FAULT = "fault"
    RESET = "reset"

class StateMachine:
    def __init__(self):
        self.state = State.IDLE
        self.entry_actions = []  # Track actions for testing

    def process(self, event):
        if self.state == State.IDLE:
            if event == Event.START:
                self.state = State.RUNNING
                self.entry_actions.append("motor_on")
                return True
        elif self.state == State.RUNNING:
            if event == Event.STOP:
                self.state = State.IDLE
                self.entry_actions.append("motor_off")
                return True
            elif event == Event.FAULT:
                self.state = State.ERROR
                self.entry_actions.append("alarm_on")
                return True
        elif self.state == State.ERROR:
            if event == Event.RESET:
                self.state = State.IDLE
                self.entry_actions.append("alarm_off")
                return True
        return False  # No transition

# Tests
def test_initial_state(t):
    """State machine should start in IDLE"""
    sm = StateMachine()
    t.assert_equal(sm.state, State.IDLE, "Initial state is IDLE")

def test_idle_to_running(t):
    """START event should transition IDLE → RUNNING"""
    sm = StateMachine()
    result = sm.process(Event.START)
    t.assert_true(result, "Transition occurred")
    t.assert_equal(sm.state, State.RUNNING, "State is RUNNING")
    t.assert_true("motor_on" in sm.entry_actions, "Motor turned on")

def test_running_to_idle(t):
    """STOP event should transition RUNNING → IDLE"""
    sm = StateMachine()
    sm.process(Event.START)  # Get to RUNNING
    result = sm.process(Event.STOP)
    t.assert_true(result, "Transition occurred")
    t.assert_equal(sm.state, State.IDLE, "State is IDLE")

def test_invalid_transition_ignored(t):
    """Invalid events should not change state"""
    sm = StateMachine()
    result = sm.process(Event.STOP)  # Can't STOP from IDLE
    t.assert_true(not result, "No transition")
    t.assert_equal(sm.state, State.IDLE, "State unchanged")

def test_error_recovery(t):
    """RESET should recover from ERROR state"""
    sm = StateMachine()
    sm.process(Event.START)
    sm.process(Event.FAULT)
    t.assert_equal(sm.state, State.ERROR, "In ERROR state")
    sm.process(Event.RESET)
    t.assert_equal(sm.state, State.IDLE, "Recovered to IDLE")

def test_full_sequence(t):
    """Test a complete operation sequence"""
    sm = StateMachine()
    # Normal operation
    sm.process(Event.START)
    t.assert_equal(sm.state, State.RUNNING, "Started")
    sm.process(Event.STOP)
    t.assert_equal(sm.state, State.IDLE, "Stopped")
    # Error and recovery
    sm.process(Event.START)
    sm.process(Event.FAULT)
    t.assert_equal(sm.state, State.ERROR, "Faulted")
    sm.process(Event.RESET)
    t.assert_equal(sm.state, State.IDLE, "Reset")

# Run tests
if __name__ == "__main__":
    runner = TestRunner()
    runner.run_test("Initial state", test_initial_state)
    runner.run_test("IDLE → RUNNING", test_idle_to_running)
    runner.run_test("RUNNING → IDLE", test_running_to_idle)
    runner.run_test("Invalid transition", test_invalid_transition_ignored)
    runner.run_test("Error recovery", test_error_recovery)
    runner.run_test("Full sequence", test_full_sequence)
    runner.summary()
```
### Testing Sensor Processing

```python
# test_sensors.py
from test_framework import TestRunner

def calculate_line_position(sensors):
    """Calculate weighted position from 4 sensors (0=black, 1=white)"""
    weights = [-1.5, -0.5, 0.5, 1.5]
    on_line = [s == 0 for s in sensors]
    if not any(on_line):
        return None  # Line lost
    total = sum(w for w, on in zip(weights, on_line) if on)
    count = sum(on_line)
    return total / count

# Tests
def test_centered(t):
    """Middle sensors on line → position 0"""
    pos = calculate_line_position([1, 0, 0, 1])
    t.assert_close(pos, 0.0, tolerance=0.01, message="Centered")

def test_left_of_center(t):
    """Left sensors on line → negative position"""
    pos = calculate_line_position([0, 0, 1, 1])
    t.assert_true(pos < 0, "Position is negative (left)")

def test_right_of_center(t):
    """Right sensors on line → positive position"""
    pos = calculate_line_position([1, 1, 0, 0])
    t.assert_true(pos > 0, "Position is positive (right)")

def test_far_left(t):
    """Only leftmost sensor → minimum position"""
    pos = calculate_line_position([0, 1, 1, 1])
    t.assert_equal(pos, -1.5, "Far left position")

def test_far_right(t):
    """Only rightmost sensor → maximum position"""
    pos = calculate_line_position([1, 1, 1, 0])
    t.assert_equal(pos, 1.5, "Far right position")

def test_line_lost(t):
    """All sensors white → None (line lost)"""
    pos = calculate_line_position([1, 1, 1, 1])
    t.assert_equal(pos, None, "Line lost returns None")

def test_all_on_line(t):
    """All sensors black → centered (wide line or intersection)"""
    pos = calculate_line_position([0, 0, 0, 0])
    t.assert_close(pos, 0.0, tolerance=0.01, message="All black → centered")

# Run tests
if __name__ == "__main__":
    runner = TestRunner()
    runner.run_test("Centered position", test_centered)
    runner.run_test("Left of center", test_left_of_center)
    runner.run_test("Right of center", test_right_of_center)
    runner.run_test("Far left", test_far_left)
    runner.run_test("Far right", test_far_right)
    runner.run_test("Line lost", test_line_lost)
    runner.run_test("All on line", test_all_on_line)
    runner.summary()
```
## Integration Testing

Integration tests verify that components work together correctly.

### Example: Control Loop Integration Test

```python
# test_control_integration.py
from test_framework import TestRunner

class MockSensors:
    """Simulates sensors for testing"""
    def __init__(self):
        self.position_sequence = []
        self.index = 0

    def set_positions(self, positions):
        self.position_sequence = positions
        self.index = 0

    def get_position(self):
        if self.index < len(self.position_sequence):
            pos = self.position_sequence[self.index]
            self.index += 1
            return pos
        return 0

class MockMotors:
    """Records motor commands for verification"""
    def __init__(self):
        self.commands = []

    def set_speeds(self, left, right):
        self.commands.append((left, right))

    def last_command(self):
        return self.commands[-1] if self.commands else (0, 0)

class LineFollowController:
    """The actual controller to test"""
    def __init__(self, sensors, motors, kp=50, base_speed=100):
        self.sensors = sensors
        self.motors = motors
        self.kp = kp
        self.base_speed = base_speed

    def update(self):
        pos = self.sensors.get_position()
        if pos is None:
            self.motors.set_speeds(0, 0)
            return False
        # Negative position = line to the left: slow the left motor and
        # speed up the right one so the robot steers back toward the line
        correction = self.kp * pos
        left = int(self.base_speed + correction)
        right = int(self.base_speed - correction)
        self.motors.set_speeds(left, right)
        return True

# Integration tests
def test_straight_line(t):
    """Centered position → equal motor speeds"""
    sensors = MockSensors()
    motors = MockMotors()
    controller = LineFollowController(sensors, motors)
    sensors.set_positions([0, 0, 0])  # Centered
    for _ in range(3):
        controller.update()
    left, right = motors.last_command()
    t.assert_equal(left, right, "Equal speeds when centered")

def test_correction_left(t):
    """Line to the left → turn left (right motor faster)"""
    sensors = MockSensors()
    motors = MockMotors()
    controller = LineFollowController(sensors, motors)
    sensors.set_positions([-1.0])  # Line to the left
    controller.update()
    left, right = motors.last_command()
    t.assert_true(right > left, "Right motor faster to turn left")

def test_correction_right(t):
    """Line to the right → turn right (left motor faster)"""
    sensors = MockSensors()
    motors = MockMotors()
    controller = LineFollowController(sensors, motors)
    sensors.set_positions([1.0])  # Line to the right
    controller.update()
    left, right = motors.last_command()
    t.assert_true(left > right, "Left motor faster to turn right")

def test_line_lost_stops(t):
    """Line lost → motors stop"""
    sensors = MockSensors()
    motors = MockMotors()
    controller = LineFollowController(sensors, motors)
    sensors.set_positions([None])  # Line lost
    result = controller.update()
    t.assert_true(not result, "Returns False when line lost")
    left, right = motors.last_command()
    t.assert_equal((left, right), (0, 0), "Motors stopped")

def test_tracking_sequence(t):
    """Robot should follow position changes"""
    sensors = MockSensors()
    motors = MockMotors()
    controller = LineFollowController(sensors, motors, kp=50, base_speed=100)
    # Simulate: left, center, right
    sensors.set_positions([-0.5, 0, 0.5])
    controller.update()
    left1, right1 = motors.last_command()
    controller.update()
    left2, right2 = motors.last_command()
    controller.update()
    left3, right3 = motors.last_command()
    # Verify correction direction
    t.assert_true(right1 > left1, "First: turn left")
    t.assert_equal(left2, right2, "Second: straight")
    t.assert_true(left3 > right3, "Third: turn right")

# Run tests
if __name__ == "__main__":
    runner = TestRunner()
    runner.run_test("Straight line", test_straight_line)
    runner.run_test("Correction left", test_correction_left)
    runner.run_test("Correction right", test_correction_right)
    runner.run_test("Line lost stops", test_line_lost_stops)
    runner.run_test("Tracking sequence", test_tracking_sequence)
    runner.summary()
```
## System Testing

System tests verify the complete system with real hardware.

### Manual System Test Checklist

- [ ] Power-On Test
  - [ ] Robot powers on without errors
  - [ ] OLED shows startup message
  - [ ] LEDs indicate ready state
- [ ] Sensor Test
  - [ ] Line sensors detect black/white correctly
  - [ ] Ultrasonic measures distance accurately
  - [ ] IMU reports reasonable values
- [ ] Motor Test
  - [ ] Both motors respond to commands
  - [ ] Forward/backward works
  - [ ] Left/right turn works
  - [ ] Stop command works
- [ ] Control Test
  - [ ] Robot follows straight line
  - [ ] Robot handles gentle curves
  - [ ] Robot handles sharp turns
  - [ ] Robot stops at obstacle
- [ ] Edge Cases
  - [ ] Robot recovers when line lost
  - [ ] Robot handles line crossing
  - [ ] Robot handles dead end
  - [ ] Low battery behavior
- [ ] Endurance Test
  - [ ] Run for 10 minutes continuously
  - [ ] No overheating
  - [ ] No memory leaks (if applicable)
### Automated Hardware Test

```python
# test_hardware.py - Run on the actual robot
import time
from machine import Pin, I2C

def test_leds():
    """Test addressable LEDs"""
    from neopixel import NeoPixel
    led = NeoPixel(Pin(6), 8)
    print("LED test: Red...")
    for i in range(8):
        led[i] = (255, 0, 0)
    led.write()
    time.sleep(0.5)
    print("LED test: Green...")
    for i in range(8):
        led[i] = (0, 255, 0)
    led.write()
    time.sleep(0.5)
    print("LED test: Blue...")
    for i in range(8):
        led[i] = (0, 0, 255)
    led.write()
    time.sleep(0.5)
    # Turn all LEDs off again
    for i in range(8):
        led[i] = (0, 0, 0)
    led.write()
    return True

def test_i2c_devices():
    """Scan for expected I2C devices"""
    i2c = I2C(1, sda=Pin(14), scl=Pin(15), freq=400000)
    devices = i2c.scan()
    print(f"I2C devices found: {[hex(d) for d in devices]}")
    expected = [0x3C, 0x68]  # OLED, IMU
    found_all = all(d in devices for d in expected)
    if not found_all:
        print(f"Missing devices! Expected: {[hex(d) for d in expected]}")
    return found_all

def test_line_sensors():
    """Test that line sensors respond to the surface"""
    sensors = [Pin(p, Pin.IN) for p in [2, 3, 4, 5]]
    print("Place robot on WHITE surface, press Enter...")
    input()
    white_readings = [s.value() for s in sensors]
    print(f"White surface: {white_readings}")
    print("Place robot on BLACK line, press Enter...")
    input()
    black_readings = [s.value() for s in sensors]
    print(f"Black line: {black_readings}")
    # Verify the sensors show different values on the two surfaces
    if white_readings == black_readings:
        print("ERROR: Sensors show same reading for both surfaces!")
        return False
    return True

def test_motors():
    """Test motor movement"""
    from picobot import Motors
    motors = Motors()
    print("Testing left motor forward...")
    motors.set_speeds(100, 0)
    time.sleep(0.5)
    print("Testing right motor forward...")
    motors.set_speeds(0, 100)
    time.sleep(0.5)
    print("Testing both motors forward...")
    motors.set_speeds(100, 100)
    time.sleep(0.5)
    print("Testing both motors backward...")
    motors.set_speeds(-100, -100)
    time.sleep(0.5)
    motors.stop()
    print("Motors stopped")
    return True

def run_hardware_tests():
    """Run all hardware tests"""
    print("=" * 40)
    print("HARDWARE TEST SUITE")
    print("=" * 40)
    tests = [
        ("LEDs", test_leds),
        ("I2C Devices", test_i2c_devices),
        ("Line Sensors", test_line_sensors),
        ("Motors", test_motors),
    ]
    results = []
    for name, test_func in tests:
        print(f"\n--- {name} ---")
        try:
            passed = test_func()
            results.append((name, passed))
            print(f"Result: {'PASS' if passed else 'FAIL'}")
        except Exception as e:
            results.append((name, False))
            print(f"Result: ERROR - {e}")
    print("\n" + "=" * 40)
    print("SUMMARY")
    print("=" * 40)
    for name, passed in results:
        print(f"  {name}: {'✓ PASS' if passed else '✗ FAIL'}")
    total_passed = sum(1 for _, p in results if p)
    print(f"\nTotal: {total_passed}/{len(results)} passed")

if __name__ == "__main__":
    run_hardware_tests()
```
## Professional Context: Industrial & Automotive Testing

Your manual testing works for a robot. Professional systems use rigorous, automated, and certified testing processes. Here's how they compare:

### Testing Comparison
| Aspect | Your Robot | Industrial | Automotive | Aerospace |
|---|---|---|---|---|
| Unit tests | Optional, manual | Required, automated | Required, traced | Required, certified |
| Integration | Manual | Automated, CI/CD | HIL simulation | HIL + formal |
| System test | Manual on bench | Factory acceptance | Vehicle testing | Flight test + sim |
| Coverage | Unknown | Statement coverage | MC/DC (ASIL-D) | MC/DC (DAL-A) |
| Traceability | None | To requirements | Full bidirectional | Full + independent |
| Automation | None | Partial | Extensive | Extensive |
| Documentation | Comments | Test reports | Full evidence | Certification package |
### Test Coverage Requirements

```
Your robot:
  "Run it, seems to work" ✓

Industrial (IEC 61508, SIL 2):
  Statement coverage: 100%
  Branch coverage:    ≥90%

Automotive (ISO 26262, ASIL D):
  Statement coverage: 100%
  Branch coverage:    100%
  MC/DC coverage:     100%
  (MC/DC = Modified Condition/Decision Coverage)

Aerospace (DO-178C, DAL A):
  Statement coverage: 100%
  Decision coverage:  100%
  MC/DC coverage:     100%
  + Structural coverage analysis
  + Dead code analysis
  + Independence of testing
```
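To build intuition for what branch coverage measures, you can count branch executions by hand in a toy example; real projects use tools like coverage.py (Python) or gcov (C) instead. A purely illustrative sketch, where `clamp_speed` and the branch labels are invented for this example:

```python
# Toy branch-coverage tracker: record which branches the tests actually
# executed. Real tooling (coverage.py, gcov) does this automatically;
# clamp_speed and the branch labels here are invented for illustration.

hits = set()

def clamp_speed(speed, limit=100):
    if speed > limit:
        hits.add("over")
        return limit
    if speed < -limit:
        hits.add("under")
        return -limit
    hits.add("in_range")
    return speed

# A test suite that forgets the negative case...
assert clamp_speed(50) == 50
assert clamp_speed(150) == 100

branches = {"over", "under", "in_range"}
print(f"Branch coverage: {len(hits)}/{len(branches)}, missed: {branches - hits}")

# The report reveals the "under" branch was never exercised: add that test
assert clamp_speed(-150) == -100
assert branches - hits == set()
```

The point of the exercise: a test suite can pass 100% of its assertions while leaving whole branches (here, the reverse-speed clamp) completely unexecuted.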
MC/DC example: each condition in a decision must be shown to independently affect the outcome.

```python
if (A and B) or C:
    action()
```

Minimum test vectors:

```
A=T, B=T, C=F → True    (baseline)
A=T, B=F, C=F → False   (paired with row 1: B independently matters)
A=F, B=T, C=F → False   (paired with row 1: A independently matters)
A=F, B=F, C=T → True    (paired with row 5: C independently matters)
A=F, B=F, C=F → False
```
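These vectors can be machine-checked: for each condition, search for a pair of vectors that differ only in that condition and flip the outcome. A small self-contained sketch, not tied to any certification tooling:

```python
# Machine-check the MC/DC vectors for `(A and B) or C`.
# MC/DC criterion: for each condition there must exist a pair of test
# vectors that differ ONLY in that condition and produce different outcomes.

from itertools import product

def decision(a, b, c):
    return (a and b) or c

# The five vectors from the example above: (A, B, C, expected outcome)
vectors = [
    (True,  True,  False, True),
    (True,  False, False, False),
    (False, True,  False, False),
    (False, False, True,  True),
    (False, False, False, False),
]

# 1. Every vector's expected outcome matches the actual decision
for a, b, c, expected in vectors:
    assert decision(a, b, c) == expected

# 2. Each condition is shown to independently affect the outcome
def shows_independence(index):
    for v1, v2 in product(vectors, repeat=2):
        others_equal = all(v1[i] == v2[i] for i in range(3) if i != index)
        if others_equal and v1[index] != v2[index] and v1[3] != v2[3]:
            return True
    return False

for name, idx in [("A", 0), ("B", 1), ("C", 2)]:
    assert shows_independence(idx), f"{name} not shown to matter independently"
print("5 vectors valid; A, B, C each shown to matter independently")
```

Dropping any one of the five vectors makes at least one `shows_independence` check fail, which is exactly why MC/DC needs more vectors than plain branch coverage.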
### Hardware-in-the-Loop (HIL) Testing

Professional systems test software against simulated hardware.

Your robot testing:

```
Write code → Flash to Pico → Put on track → Watch it
```

HIL testing:

```
   Real ECU                           HIL Simulator
  ┌──────────┐                    ┌─────────────────────────┐
  │          │   sensor signals   │ Simulated environment:  │
  │ Software │◄───────────────────│  • Vehicle dynamics     │
  │  Under   │                    │  • Sensor models        │
  │   Test   │  actuator outputs  │  • Fault injection      │
  │          │───────────────────►│  • Scenario playback    │
  └──────────┘                    └─────────────────────────┘
```

Benefits:
- Test dangerous scenarios safely
- Reproducible (same test, same result)
- Automated (run thousands of tests overnight)
- Fault injection (test error handling)
- No physical wear on the vehicle
### Automotive V-Model

The V-model shows how each testing level corresponds to a development phase:

```
Requirements ──────────────────────────────────► System Test
     │                                                ▲
     │         Trace requirements to tests            │
     ▼                                                │
System Design ────────────────────────────► Integration Test
     │                                                ▲
     ▼                                                │
Component Design ─────────────────────────► Component Test
     │                                                ▲
     ▼                                                │
Implementation ───────────────────────────────► Unit Test
     │                                                ▲
     └──────────────────── Code ──────────────────────┘
```

Left side: development (design).
Right side: verification (testing).
Each test level verifies the corresponding design level.
### Test Automation Infrastructure

Your robot:

```
Run tests manually when you remember
```

Professional (CI/CD pipeline):

```
Developer
    │  git push
    ▼
┌────────┐
│   CI   │──► Build ──► Static Analysis ──► Unit Tests
│ Server │                                      │
└────────┘                                      ▼
                                       Integration Tests
                                                │
                                                ▼
                                            HIL Tests
                                           (overnight)
                                                │
                                                ▼
                                        Pass/Fail report
```

Automotive extras:
- MISRA C static analysis (code quality)
- Code coverage measurement
- Requirements traceability checks
- Test case versioning
### Fault Injection Testing

Safety systems must handle failures gracefully.

Your robot:

```
Sensor fails → Robot crashes → "Oops"
```

Safety-system fault injection scenarios:

Sensor faults:
- Signal stuck high/low
- Signal out of range
- Signal noisy/erratic
- Sensor disconnected

Communication faults:
- CAN message lost
- CAN message corrupted
- CAN bus-off

Power faults:
- Voltage drop (cold crank)
- Voltage spike (load dump)
- Power cycle during operation

For each injected fault, the system must detect it, react safely, and log/report it.
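The same idea scales down to the robot: feed the control loop deliberately bad readings and check that it reaches a safe state. A minimal sketch; `SafeController`, its thresholds, and the fault values are illustrative, not part of any robot library:

```python
# Robot-scale fault injection: hand the controller deliberately bad sensor
# readings and verify it commands a safe stop. SafeController is an
# illustrative sketch, not part of the robot library.

class SafeController:
    """Line follower that stops on missing, impossible, or frozen readings."""
    def __init__(self, kp=50, base_speed=100, max_position=1.5, stuck_limit=10):
        self.kp = kp
        self.base_speed = base_speed
        self.max_position = max_position
        self.stuck_limit = stuck_limit
        self.last = None
        self.repeats = 0

    def update(self, pos):
        # Fault: sensor disconnected (no reading) or value out of physical range
        if pos is None or abs(pos) > self.max_position:
            return (0, 0)
        # Fault: reading frozen at the same value for too many cycles
        if pos == self.last:
            self.repeats += 1
            if self.repeats >= self.stuck_limit:
                return (0, 0)
        else:
            self.repeats = 0
        self.last = pos
        correction = self.kp * pos
        return (int(self.base_speed + correction),
                int(self.base_speed - correction))

# Inject each fault class and verify the safe reaction
ctrl = SafeController()
assert ctrl.update(0.2) != (0, 0)      # Healthy reading: keep driving
assert ctrl.update(None) == (0, 0)     # Disconnected → stop
assert ctrl.update(99.0) == (0, 0)     # Out of range → stop
stuck = [ctrl.update(0.5) for _ in range(20)]
assert stuck[-1] == (0, 0)             # Frozen reading → eventually stop
print("All injected faults produced a safe stop")
```

Noisy or erratic signals would need a filter plus a plausibility check on the rate of change; the pattern stays the same: inject the fault in a mock, assert the safe reaction in a test.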
### What the Industry Uses
| Tool | Use | Cost |
|---|---|---|
| pytest | Python unit testing | Free |
| Unity | C unit testing (embedded) | Free |
| VectorCAST | Embedded test automation | $$$$ |
| Tessy | Unit testing, coverage | $$$ |
| dSPACE HIL | Hardware-in-the-loop | $$$$ |
| NI TestStand | Test management | $$$ |
| Jenkins | CI/CD automation | Free |
| LDRA | Code analysis, coverage | $$$$ |
| Polyspace | Static analysis | $$$$ |
| Cantata | Unit testing | $$$ |
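As a bridge from the table to practice: if you develop on the desktop first, the earlier PID tests port almost directly to pytest, which discovers `test_*` functions and runs bare `assert` statements with no framework import in the test file. A sketch, with the `PIDController` copied from the unit-testing section so the file stands alone:

```python
# test_pid_pytest.py - the PID unit tests rewritten in pytest style.
# Run on the desktop with: pytest test_pid_pytest.py
# pytest collects test_* functions and reports each failing bare assert.

class PIDController:
    """Copied from the unit-testing section so this file stands alone."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0
        self.prev_error = 0

    def update(self, error):
        p = self.kp * error
        self.integral += error * self.dt
        i = self.ki * self.integral
        d = self.kd * (error - self.prev_error) / self.dt
        self.prev_error = error
        return p + i + d

def test_proportional_only():
    pid = PIDController(kp=2.0, ki=0, kd=0, dt=0.01)
    assert pid.update(error=10) == 20.0

def test_integral_accumulates():
    pid = PIDController(kp=0, ki=1.0, kd=0, dt=0.1)
    pid.update(error=10)
    pid.update(error=10)
    assert abs(pid.update(error=10) - 3.0) < 0.01

def test_derivative_responds():
    pid = PIDController(kp=0, ki=0, kd=1.0, dt=0.1)
    pid.update(error=0)
    assert abs(pid.update(error=10) - 100.0) < 0.01
```

On the desktop this buys you fixtures, parametrization, and coverage plugins; the MicroPython-side TestRunner is still what runs on the board itself.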
## Lessons for Your Robot
| Professional Practice | Apply to Your Robot | How |
|---|---|---|
| Unit tests | Test functions in isolation | Use simple test framework |
| Mocking | Fake hardware for testing | MockSensors, MockMotors classes |
| Automated tests | Run tests before deploying | `python test_all.py` before flash |
| Coverage awareness | Know what's tested | List tested vs untested functions |
| Regression testing | Rerun tests after changes | Keep test files, run them |
| Edge cases | Test unusual situations | Line lost, obstacles, low battery |
| Documentation | Record what was tested | Comments, test output logs |
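The "automated tests" row can be one small script that runs every test it can find before you flash. A sketch that assumes the test modules shown earlier sit alongside it (the module names are examples):

```python
# test_all.py - run every test_* function found in a set of test modules,
# using the TestRunner from test_framework.py. The module names in main()
# are examples; point it at whatever test files you actually have.

def run_module(runner, module):
    """Discover and run all test_* functions in a module."""
    for name in sorted(dir(module)):
        if name.startswith("test_"):
            runner.run_test(name, getattr(module, name))

def main():
    from test_framework import TestRunner
    import test_pid
    import test_state_machine
    import test_sensors
    import test_control_integration

    runner = TestRunner()
    for module in (test_pid, test_state_machine,
                   test_sensors, test_control_integration):
        run_module(runner, module)
    # Non-zero exit code lets a wrapper script refuse to flash on failure
    raise SystemExit(0 if runner.summary() else 1)
```

Add an `if __name__ == "__main__": main()` guard and make running it a habit before every flash; rerunning the whole suite after each change is what turns these files into regression tests.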
## Hardware Limits Principle

### What Software Testing Can and Cannot Verify

Testing CAN verify:
- Logic correctness → unit tests prove algorithms work
- Integration → components communicate correctly
- Behavior → system responds as expected to inputs
- Edge cases → error handling works

Testing CANNOT verify:
- Exhaustive correctness → can't test infinite input combinations
- Real-world conditions → bench test ≠ actual environment
- Hardware reliability → software tests can't catch hardware defects
- Timing under load → unit tests don't catch race conditions
- Long-term stability → memory leaks may take hours to appear

The lesson: testing increases confidence but doesn't guarantee correctness. Professional systems combine testing with static analysis, formal verification, and design reviews. For your robot, good tests catch most bugs; for a brake system, multiple independent verification methods are required.
## Quick Reference: Test Checklist

### Before Every Code Change
- [ ] Run unit tests
- [ ] Check for syntax errors
- [ ] Review changed code
### Before Deployment
- [ ] All unit tests pass
- [ ] Integration tests pass
- [ ] Basic hardware test on robot
- [ ] Test primary use case manually
### Before Demo/Competition
- [ ] Full system test
- [ ] Edge case testing
- [ ] Battery level check
- [ ] Backup plan if something fails
## Further Reading
- Test Driven Development for Embedded C - James Grenning
- pytest Documentation - Python testing framework
- Unity Test Framework - C embedded testing
- ISO 26262 Overview - Automotive functional safety
- DO-178C Overview - Aerospace software certification