Seeing the Line
Time: 135 min
Learning Objectives
By the end of this lab you will be able to:
- Read infrared reflectance sensors and interpret their output
- Calculate a position error from multiple sensor readings
- Experience why digital bang-bang steering fails and threshold steering still jerks
- Discover why proportional correction outperforms both approaches
- Implement a proportional controller for line following from scratch
- Tune the proportional gain by feel and understand its effect on behavior
- Recognize the trade-off between responsiveness and stability
You will build on the bang-bang experience from Make It Move, discover that proportional correction outperforms threshold-based approaches, and build a complete feedback controller one piece at a time.
Lab Setup
Connect your Pico via USB, then from the picobot/ directory:
Verify in REPL: from picobot import Robot; Robot() → "Robot ready!"
First time? See the full setup guide.
From Bang-Bang to Better
In Make It Move you built a bang-bang line follower that kept the robot on a straight line -- but the motion was jerky. The robot lurched left, right, left, right, because the correction was always full strength regardless of how far off the line it was. The sensors gave you binary left/right -- can we do better?
The bang-bang approach had a fundamental limitation: it only knew which side the line was on, not how far off center. Every error -- tiny or huge -- got the same full-strength correction. To follow curves smoothly, we need something smarter.
Part 1: What Does the Robot See? (~20 min)
Your robot has 4 infrared sensors on its belly. You already read them as raw GPIO pins in GPIO & Sensors. Now let's use them for real.
How the Sensors Work
Each sensor is an optocoupler (specifically a TCRT5000) containing:
- Infrared LED -- constantly shines invisible light downward
- Phototransistor -- detects how much light bounces back
White surfaces reflect most of the infrared light. Black surfaces absorb it. The sensor converts this into a digital signal:
- White surface (reflective): sensor outputs 1
- Black surface (absorbing): sensor outputs 0

Tip
Your phone camera can see infrared light! Point your phone at the robot's belly while it's powered on -- you'll see the IR LEDs glowing purple. This is a quick way to verify the sensors are working.
Why does the comparator invert the signal?
The phototransistor is "active low" — light reflects → it conducts → voltage drops. But this signal goes through a voltage comparator (LM324) before reaching the MCU. The comparator inverts it: high reflection (white) → GPIO reads 1, low reflection (black) → GPIO reads 0. This matches intuition and is what Pin.value() returns.
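To make the double inversion concrete, here is a throwaway plain-Python model of the signal chain. It's illustrative only: `gpio_reading` is a made-up helper, not robot code.

```python
# Model of the TCRT5000 -> LM324 comparator -> GPIO chain
# (illustrative only; gpio_reading is a made-up helper).
def gpio_reading(surface):
    """surface: "white" (reflective) or "black" (absorbing)."""
    reflects = (surface == "white")     # white bounces the IR back
    # Phototransistor is active-low: reflection -> conducts -> voltage drops
    phototransistor_high = not reflects
    # The comparator inverts again before the signal reaches the GPIO pin
    return 0 if phototransistor_high else 1

print(gpio_reading("white"))  # 1
print(gpio_reading("black"))  # 0
```

Two inversions cancel, which is why the GPIO value matches intuition: 1 for white, 0 for black.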
TCRT5000 Deep Dive
| Parameter | Value | Why It Matters |
|---|---|---|
| Optimal distance | 2.5 mm | Robot chassis height is designed for this |
| Operating range | 0.2--15 mm | Too close = saturates, too far = no signal |
| LED wavelength | 950 nm | Invisible to humans, visible to phone cameras |
The sensor output passes through a comparator with hysteresis before reaching the Pico's GPIO pins, cleaning up any noise.
Datasheet: TCRT5000 (Vishay)
Read Raw Values
The four sensors are arranged in a row under the robot's belly (see the robot schematic (PDF) for the full circuit):
| Sensor | GPIO | Position |
|---|---|---|
| X1 | GP2 | Far left |
| X2 | GP3 | Left-center |
| X3 | GP4 | Right-center |
| X4 | GP5 | Far right |
from machine import Pin
import time
# Set up the 4 line sensor pins — just digital inputs
line_pins = [
Pin(2, Pin.IN), # X1 — far left
Pin(3, Pin.IN), # X2 — left-center
Pin(4, Pin.IN), # X3 — right-center
Pin(5, Pin.IN), # X4 — far right
]
def read_line():
"""Read all 4 line sensors. Returns list of 0/1 values."""
return [p.value() for p in line_pins]
print("Move the robot by hand over the track.")
print("0 = black (absorbing), 1 = white (reflective)")
print()
while True:
s = read_line()
# Visual: █ for black (0), ░ for white (1)
pattern = "".join("█" if v == 0 else "░" for v in s)
print(f"Sensors: {s} |{pattern}|")
time.sleep(0.1)
Pick the robot up and slide it by hand over the black tape. Watch what happens.
| Robot Position | Expected Reading | Pattern |
|---|---|---|
| Centered on line | [1, 0, 0, 1] | ░██░ |
| Line under left sensors | [0, 0, 1, 1] | ██░░ |
| Line under right sensors | [1, 1, 0, 0] | ░░██ |
| Completely off the line | [1, 1, 1, 1] | ░░░░ |
| On a wide junction | [0, 0, 0, 0] | ████ |
The pattern column shows █ for black (sensor triggered) and ░ for white.
Checkpoint -- Sensors Responding
Each sensor should toggle as you slide the robot over the tape. If a sensor never changes, check that it's not blocked or damaged. Try reading individual pins in the REPL: Pin(2, Pin.IN).value()
Question
You have four binary values. That's 16 possible combinations. But not all combinations make physical sense -- can you have [0, 1, 1, 0] (outer sensors detect, inner sensors don't)? Under what conditions might that happen?
Try It: Digital Bang-Bang
You can read the sensors. Before we do any math, let's try the most intuitive thing: steer with simple if/else logic on the raw values. No weighted averages, no formulas — just "which side sees the line?"
from machine import Pin
from picobot import Robot
import time
# Line sensor pins (same as before)
line_pins = [Pin(2, Pin.IN), Pin(3, Pin.IN), Pin(4, Pin.IN), Pin(5, Pin.IN)]
def read_line():
return [p.value() for p in line_pins]
robot = Robot() # For motor control
SPEED = 90
print("Digital bang-bang -- the simplest possible line follower")
print("Place robot on line -- starting in 3 seconds...")
for countdown in range(3, 0, -1):
robot.set_leds((255, 255, 0))
time.sleep(0.5)
robot.leds_off()
time.sleep(0.5)
robot.set_leds((0, 255, 0))
try:
while True:
s = read_line()
# s = [X1, X2, X3, X4]
# 0 = black (line), 1 = white (no line)
left_sees_line = (s[0] == 0 or s[1] == 0)
right_sees_line = (s[2] == 0 or s[3] == 0)
if left_sees_line and not right_sees_line:
# Line is on the left -- turn left
robot.set_motors(-SPEED, SPEED)
elif right_sees_line and not left_sees_line:
# Line is on the right -- turn right
robot.set_motors(SPEED, -SPEED)
elif left_sees_line and right_sees_line:
# Centered -- drive straight
robot.set_motors(SPEED, SPEED)
else:
# Lost the line -- stop
robot.stop()
time.sleep(0.02)
except KeyboardInterrupt:
robot.stop()
Run this on the track and watch carefully.
Question
What does the robot's movement look like? Is it smooth? Does it follow curves well? What happens at the boundary between "line is left" and "line is right"?
It follows the line — sort of. But it snaps between three fixed states: hard left, hard right, or straight. There is nothing in between. On a gentle curve, the robot alternates rapidly between "full left turn" and "straight ahead," producing a visible zigzag.
This is bang-bang control — the same approach your home thermostat uses (heater fully on or fully off). For a thermostat, the oscillation between 20°C and 22°C is acceptable. For a robot on a track, the constant jerking wastes energy and slows you down.
Question
Count how many times the robot switches direction in 5 seconds on a straight section. Now imagine a solution where the number of direction changes is close to zero on a straight. What would need to be different?
The fundamental problem: the robot only knows "line is left" or "line is right." It has no idea how far left or right. A tiny drift gets the same full-strength correction as a massive one. To do better, we need a single number that tells us not just which side the line is on, but how far off center it is.
From Four Numbers to One: The Error Value
Four separate binary readings can only tell you which side the line is on. What we really want is a single number that says: "The line is this far to the left or right."
No library function needed — let's build it ourselves. The idea: assign each sensor a weight based on its physical position. Sensors on the left get negative weights, sensors on the right get positive weights. When a sensor sees the line (reads 0 = black), its weight "votes" for that direction.
from picobot import Robot
import time
robot = Robot()
# Weights match physical position: left is negative, right is positive
# Far-left pulls hard left, inner sensors pull gently
W = [-2, -0.5, 0.5, 2]
print("Slide robot over the line by hand. Watch how the error changes.")
print("Negative = line is left, Positive = line is right")
print()
while True:
s = read_line() # [X1, X2, X3, X4] — 0=black, 1=white
# Which sensors see the line?
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) == 0:
msg = "LINE LOST — no sensor sees black"
else:
# Average the weights of sensors that see the line
error = sum(active) / len(active)
# Visual bar: 20 chars wide, | marks the error position
bar_pos = int((error + 2) * 5) # Map -2..+2 to 0..20
bar_pos = max(0, min(20, bar_pos))
bar = " " * bar_pos + "|" + " " * (20 - bar_pos)
msg = f"Raw: {s} Error: {error:+5.1f} [{bar}]"
# Pad to fixed width so \r clears the previous line cleanly
print(f"{msg:<60s}", end="\r")
time.sleep(0.1)
Move the robot by hand again and watch the error value change. Pay attention to which raw values produce which error:
| Robot Position | Raw values | Active weights | Error | Meaning |
|---|---|---|---|---|
| Line far left | [0, 1, 1, 1] | [-2] | -2.0 | Strong left correction needed |
| Line left-center | [0, 0, 1, 1] | [-2, -0.5] | -1.25 | Moderate left |
| Centered | [1, 0, 0, 1] | [-0.5, 0.5] | 0.0 | Perfect — no correction |
| Line right-center | [1, 1, 0, 0] | [0.5, 2] | 1.25 | Moderate right |
| Line far right | [1, 1, 1, 0] | [2] | 2.0 | Strong right correction needed |
| Line lost | [1, 1, 1, 1] | [] | — | No sensor sees black |
"Line Lost" Happens on Curves Too — Try It!
Slide the robot slowly from far-right toward center. You'll see something like:
[1, 1, 1, 0] → far right sensor sees line (error: +2.0)
[1, 1, 1, 1] → NO sensor sees line — "LINE LOST"!
[1, 1, 0, 1] → inner-right sensor picks it up (error: +0.5)
The line didn't disappear — it passed through the gap between the far sensor (X4) and the inner sensor (X3). The line is narrower than the spacing between those sensors, so there's a blind zone where no sensor can see it. This also happens between X1 and X2 on the left side.
This means "line lost" ([1,1,1,1]) doesn't always mean the robot went off the track. On curves and transitions, it can happen routinely. Any line-following code must handle this — you can't just stop when no sensor sees the line. We'll deal with this using state memory later in the tutorial.
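As a small preview of that state-memory idea, here is a hedged sketch. `handle_reading` and `last_error` are hypothetical names, not the picobot API; the full treatment comes later in the tutorial.

```python
# Sketch of the state-memory idea (hypothetical helper, not the
# picobot API -- the full version comes later in the tutorial).
def handle_reading(active_weights, last_error):
    """Return (error_to_act_on, new_last_error), reusing the last
    valid error while the line sits in a blind zone."""
    if active_weights:
        error = sum(active_weights) / len(active_weights)
        return error, error
    return last_error, last_error  # blind zone: keep last correction

# Sliding from far right through the blind zone toward center:
last = 0.0
for active in ([2], [], [0.5]):   # X4 sees line, gap, then X3
    error, last = handle_reading(active, last)
    print(error)   # 2.0, 2.0, 0.5 -- steering continues through the gap
```

Instead of stopping in the gap, the robot keeps correcting in the last known direction until a sensor picks the line up again.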
What We Have and What We Don't
Take a moment to think about the constraints of this setup:
What we have: 4 digital sensors that output 0 (black) or 1 (white). That's 4 bits of information — at most 16 possible states.
What we don't have:
- Analog readings — we can't tell how dark the surface is, only black or white. This means the error jumps in steps (a staircase), not a smooth curve.
- Speed feedback — we have no encoders on the motors, so we don't know how fast the robot is actually moving. We command a PWM value and hope.
- More sensors — with 8 or 16 sensors across the width, the error would be much finer-grained. We have to work with 4.
The software challenge for this tutorial: given these limitations, how do we make the best possible line follower using software alone? Every improvement from here on is a software solution to a hardware constraint.
Why Averaging Matters
Why divide by the count of active sensors? Without it, when 3 sensors see the line the error is 3x bigger than when 1 sensor sees it — the error would depend on the line's width, not its position. Averaging keeps the error in a consistent range regardless of how many sensors are triggered.
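A quick desk check of this point, using the same weights as the code above (the helper names are ours, for illustration):

```python
# Why the division matters: sum vs. average of active weights.
W = [-2, -0.5, 0.5, 2]

def error_sum(s):
    """Sum of active weights: grows with line width."""
    return sum(W[i] for i in range(4) if s[i] == 0)

def error_avg(s):
    """Average of active weights: depends only on position."""
    active = [W[i] for i in range(4) if s[i] == 0]
    return sum(active) / len(active)

narrow = [0, 1, 1, 1]   # thin line under X1 only
wide   = [0, 0, 1, 1]   # wider line under X1 and X2

print(error_sum(narrow), error_sum(wide))   # -2 -2.5 : width leaks in
print(error_avg(narrow), error_avg(wide))   # -2.0 -1.25 : position only
```

With the raw sum, covering one more sensor changes the magnitude even though the line hasn't moved sideways much; the average tracks the centroid of the active sensors instead.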
The Staircase Problem
The error can only take a handful of discrete values (~5-7 distinct steps). Between sensor transitions, the error stays constant — the robot is "blind" in those gaps. You'll see this clearly when you log the data later. We can't fix it in hardware (we'd need analog or more sensors), but we can work around it in software — you'll see how later in this tutorial.
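You can enumerate the staircase without the robot. This sketch lists the physically plausible sensor patterns (a contiguous run of black sensors, plus the wide junction) and collects the distinct error values they produce:

```python
# Enumerate the staircase: plausible patterns are a contiguous
# run of black (0) sensors, plus the wide junction.
W = [-2, -0.5, 0.5, 2]
patterns = [
    [0, 1, 1, 1], [0, 0, 1, 1], [1, 0, 1, 1], [1, 0, 0, 1],
    [1, 1, 0, 1], [1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 0, 0],
]

errors = set()
for s in patterns:
    active = [W[i] for i in range(4) if s[i] == 0]
    errors.add(sum(active) / len(active))

print(sorted(errors))
# -> [-2.0, -1.25, -0.5, 0.0, 0.5, 1.25, 2.0] : just 7 steps
```

Seven values total: that is the entire resolution of the position measurement, no matter how cleverly the controller uses it.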
Stuck?
- All sensors read 1 (white): Robot is too high above the surface (>15mm). Check chassis height.
- Values don't change when moving over tape: Check that you're using GP2--GP5.
- Error always None / constant "LINE LOST": The line is not visible to any sensor. Use matte black tape (≥19mm wide) on a matte white surface.
- Erratic readings: Glossy surfaces reflect IR unpredictably -- use matte materials.
Part 2: From Seeing to Steering (~25 min)
You now have a weighted error that tells you how far off center the line is. The bang-bang approach only knew which side. Surely feeding this richer information into the same if/else logic will help?
First Attempt: Threshold Steering
from machine import Pin
from picobot import Robot
import time
line_pins = [Pin(2, Pin.IN), Pin(3, Pin.IN), Pin(4, Pin.IN), Pin(5, Pin.IN)]
def read_line():
return [p.value() for p in line_pins]
robot = Robot()
SPEED = 90
W = [-2, -0.5, 0.5, 2]
print("Threshold steering -- watch what happens")
print("Place robot on line -- starting in 3 seconds...")
for countdown in range(3, 0, -1):
robot.set_leds((255, 255, 0))
time.sleep(0.5)
robot.leds_off()
time.sleep(0.5)
robot.set_leds((0, 255, 0))
direction_changes = 0
last_side = 0 # Last non-zero direction
try:
while True:
s = read_line()
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) == 0:
robot.stop()
time.sleep(0.1)
continue
error = sum(active) / len(active)
if error > 0:
robot.set_motors(SPEED, -SPEED)
elif error < 0:
robot.set_motors(-SPEED, SPEED)
else:
robot.set_motors(SPEED, SPEED)
# Count how often the line crosses from one side to the other
if error > 0 and last_side < 0:
direction_changes += 1
elif error < 0 and last_side > 0:
direction_changes += 1
if error != 0:
last_side = 1 if error > 0 else -1
print(f"Error: {error:+5.1f} Changes: {direction_changes} ", end="\r")
time.sleep(0.02)
except KeyboardInterrupt:
robot.stop()
Run this. Put the robot on a straight section of the track.
Question
What does the robot look like? How does it feel compared to how a smoothly-moving robot should behave?
It works... sort of. The robot lurches left, then right, then left again. It follows the line, but the motion is jerky. It's either correcting at full strength or not at all -- there's no in-between.
The direction_changes counter tells you how bad it is. On a straight line, a well-behaved robot should barely change direction. Yours is probably racking up dozens of changes per second.
Question
Compare this to the digital bang-bang you tried in Part 1. Is it actually better? Both have the same core problem — what is it?
The Key Insight
Both attempts so far suffer from the same flaw: they treat all errors the same. The digital bang-bang in Part 1 didn't even know how far off the line it was. The threshold steering knows the error value, but throws that information away — when the line is barely to the right (error = +0.25), the robot makes the same correction as when the line is far to the right (error = +2.0). That's like slamming the steering wheel for every minor drift.
Question
What if the size of the correction were proportional to the size of the error? A small error gets a small correction. A big error gets a big correction. How would you write that?
Second Attempt: Proportional Correction
Think about this: you have an error number that ranges from about -2.0 to +2.0. You need a correction number to apply to the motors. What if you just... multiplied?
from machine import Pin
from picobot import Robot
import time
line_pins = [Pin(2, Pin.IN), Pin(3, Pin.IN), Pin(4, Pin.IN), Pin(5, Pin.IN)]
def read_line():
return [p.value() for p in line_pins]
robot = Robot()
SPEED = 90
W = [-2, -0.5, 0.5, 2]
print("Proportional correction -- fill in the multiplier")
time.sleep(1)
try:
while True:
s = read_line()
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) == 0:
robot.stop()
time.sleep(0.1)
continue
error = sum(active) / len(active)
# THE KEY LINE: How much should we multiply the error by?
# Try a number. Start with something like 20.
correction = ??? * error # <-- FILL THIS IN
left_speed = SPEED + correction
right_speed = SPEED - correction
print(f"error: {error:+.1f} correction: {correction:+.0f} "
f"L: {left_speed:.0f} R: {right_speed:.0f}", end="\r")
robot.set_motors(int(left_speed), int(right_speed))
time.sleep(0.02)
except KeyboardInterrupt:
robot.stop()
Replace ??? with a number. Any number. Start with something -- maybe 20? -- and see what happens.
Be Honest: How "Proportional" Is This Really?
Look at the print output while the robot runs. Watch the L: and R: values carefully.
With SPEED = 90 and the motor dead zone at ~75, any correction larger than ~15 pushes one motor below the dead zone — it stops completely. That's not "turn gently left," that's "one wheel stops, the other drives." And with Kp = 20 and error = 2.0, the correction is 40 — one motor gets 130, the other gets 50 (stalled in the dead zone).
So for any significant error, the robot is effectively tank turning — one wheel drives, one wheel stalls or reverses. The "proportional" part only works for small errors near center. On curves, it's really a finer-grained bang-bang with dead-zone effects.
This is a fundamental constraint of this hardware: the narrow usable PWM range (~75-255) leaves very little room for proportional blending between "both wheels forward" and "one wheel stopped."
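You can desk-check this arithmetic, using the values stated above (SPEED = 90, Kp = 20, dead zone at ~75 PWM) and the seven error steps from Part 1:

```python
# Desk check of the dead-zone arithmetic, with the values from the text.
SPEED, KP, DEAD_ZONE = 90, 20, 75
ERROR_STEPS = [-2.0, -1.25, -0.5, 0.0, 0.5, 1.25, 2.0]

for error in ERROR_STEPS:
    correction = KP * error
    left, right = SPEED + correction, SPEED - correction
    stalled = min(left, right) < DEAD_ZONE
    flag = "  <- one motor in dead zone" if stalled else ""
    print(f"error {error:+5.2f}: L={left:6.1f} R={right:6.1f}{flag}")
```

Only the three steps with |error| ≤ 0.5 keep both motors above the dead zone; the four outer steps stall one wheel, exactly the "proportional only near center" behavior described above.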
Finding the Right Number

Now try different numbers. Don't calculate anything. Just try, observe, adjust.
Try a small number (e.g., 10):
Put the robot on the track. What happens on curves? It probably drifts off. The corrections are too gentle — one motor drops to ~80, the other goes to ~100. That's barely a turn.
Try a large number (e.g., 50):
What happens now? Watch the motor values — on any curve, one motor goes well below the dead zone (stopped!) while the other is at full. The robot lurches, tank-turns, lurches. Not smooth at all.
Try something in between (e.g., 20-30):
This is the compromise. On straights (small errors), the correction is gentle and both motors stay above the dead zone. On curves, one motor still hits the dead zone — but the response is at least somewhat graded instead of all-or-nothing.
Question
Write down three numbers you tried and what happened. Which one felt best?
| Multiplier | Behavior |
|---|---|
| ______ | ____ |
| ______ | ____ |
| ______ | ____ |
The Reveal
That multiplier you just found by feel? It has a name: Kp, the proportional gain. The formula:

correction = Kp × error

is called proportional control (P-control) in control theory. But let's be precise about what you actually built.
In a textbook P-controller, the error is a continuous value — it changes smoothly as the system moves. Your error comes from 4 digital sensors that are either 0 or 1. The weighted average produces only ~5-7 distinct values. So your Kp × error is really Kp × one of a handful of fixed numbers — closer to a lookup table than true proportional control. On top of that, the motors have a dead zone (~75 PWM), the PWM-to-speed relationship is nonlinear, and the sensor response depends on surface reflectivity and height. None of this is "linear" in the way the equation implies.
So why use the equation at all? Because it's a useful approximation. Even with all these nonlinearities, correction = Kp × error produces noticeably smoother behavior than bang-bang. It's not perfect P-control — it's P-control applied to a messy real-world system with coarse sensors. Working with that gap — where the clean math meets nonlinear hardware, limited sensors, and real-time constraints — is one aspect of embedded systems engineering. There are many others (hardware interfaces, communication protocols, reliability, power management, real-time scheduling), but this exercise gives you a taste of what it means to make theory work on real hardware.
The same equation — with better sensors that provide smoother error signals — appears in:
- Cruise control in cars (error = desired speed - actual speed)
- Thermostats (error = set temperature - room temperature)
- Drone altitude hold (error = desired altitude - actual altitude)
Real systems add layers on top of PID: gain scheduling, feedforward, anti-windup, cascaded loops, state estimation, and model-based control. A car's cruise control isn't just Kp * (target - speed) — it accounts for slope, gear, engine dynamics, and traction. But the feedback loop structure — measure, compute error, correct, repeat — is the same one you just built.
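To see the same structure outside robotics, here is a toy cruise-control loop. All constants are invented for illustration; this is a first-order sketch, not automotive code.

```python
# Toy cruise control with the same measure-compute-act structure.
# All constants invented; a first-order sketch, not real automotive code.
target = 25.0   # desired speed, m/s
speed = 20.0    # current speed, m/s
kp = 0.5        # throttle gain per m/s of error
dt = 0.1        # timestep, s

for _ in range(100):
    error = target - speed            # measure
    throttle = kp * error             # compute: P-control
    accel = throttle - 0.05 * speed   # act, minus a crude drag term
    speed += accel * dt               # the physical effect

print(round(speed, 2))  # settles near 22.7, below the 25.0 target
```

Notice that it settles below the target: against a constant drag, pure P-control leaves a steady-state error. That gap is one reason real controllers add an integral term on top of P.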
The Embedded Perspective: What's Running This Loop?
Your control loop runs at ~50 Hz (20ms per iteration). That 20ms is split between:
- Reading 4 GPIO pins + computing weighted error (~20 us)
- Kp * error: one multiplication (~10 us in Python)
- set_motors(): configures PWM hardware (~50 us)
- time.sleep(0.02): the remaining ~19.9 ms (idle!)
The CPU does almost nothing. The PWM hardware generates the motor signals continuously. The GPIO reads are instant. Python just makes a decision every 20ms and updates two registers.
What would happen if you added oled.show() to the loop? (~10ms blocking I2C transfer)
Each iteration would stretch from ~20ms to ~30ms, cutting the loop rate from ~50 Hz to ~33 Hz -- and during every 10ms transfer the robot keeps driving on stale motor commands.
Background: Why Loops Need Good Timing
A control loop has inherent delay: sense → compute → act → physical effect. If \(K_p\) is too high, the correction overshoots before the sensor detects the change, causing oscillation. Faster loops reduce this delay and allow higher \(K_p\) before instability. As a rule of thumb, poll sensors at least 5× faster than the signal changes.
Log What Your Robot Actually Does
You tuned Kp by feel — "it looks smooth" or "it wobbles." But what does the robot actually see? Let's log the data and look at it.
from machine import Pin
from picobot import Robot
import time
line_pins = [Pin(2, Pin.IN), Pin(3, Pin.IN), Pin(4, Pin.IN), Pin(5, Pin.IN)]
def read_line():
return [p.value() for p in line_pins]
robot = Robot()
SPEED = 90
KP = 30 # Your best value from tuning
W = [-2, -0.5, 0.5, 2]
# Open CSV file on the Pico
f = open("line_log.csv", "w")
f.write("time_ms,s0,s1,s2,s3,error,correction,left_pwm,right_pwm\n")
print(f"Logging line following: Kp={KP}")
print("Place robot on line -- starting in 3 seconds...")
time.sleep(3)
start = time.ticks_ms()
try:
for _ in range(250): # ~5 seconds at 20ms
s = read_line()
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) > 0:
error = sum(active) / len(active)
correction = KP * error
left_pwm = int(SPEED + correction)
right_pwm = int(SPEED - correction)
robot.set_motors(left_pwm, right_pwm)
else:
error = 0
correction = 0
left_pwm = 0
right_pwm = 0
robot.stop()
elapsed = time.ticks_diff(time.ticks_ms(), start)
f.write(f"{elapsed},{s[0]},{s[1]},{s[2]},{s[3]},"
f"{error:.2f},{correction:.1f},{left_pwm},{right_pwm}\n")
time.sleep(0.02)
finally:
robot.stop()
f.close()
The data is saved on the Pico as line_log.csv. Download and plot on your computer (first time? see the Host Plotting Setup Guide):
Save the following as plot_line.py and run it: python plot_line.py
import csv
import matplotlib.pyplot as plt
# Load data
data = {"time_ms": [], "error": [], "correction": [], "left_pwm": [], "right_pwm": []}
with open("line_log.csv") as f:
reader = csv.DictReader(f)
for row in reader:
for key in data:
data[key].append(float(row[key]))
time_s = [t / 1000 for t in data["time_ms"]]
fig, axes = plt.subplots(3, 1, figsize=(10, 7), sharex=True)
# Error: the staircase
axes[0].plot(time_s, data["error"], ".-", markersize=3)
axes[0].set_ylabel("Error")
axes[0].set_title("What the robot sees (notice the discrete steps)")
axes[0].axhline(0, color="gray", linewidth=0.5)
axes[0].grid(True, alpha=0.3)
# Correction
axes[1].plot(time_s, data["correction"], ".-", markersize=3, color="orange")
axes[1].set_ylabel("Correction")
axes[1].set_title("Correction values (how many distinct values?)")
axes[1].axhline(0, color="gray", linewidth=0.5)
axes[1].grid(True, alpha=0.3)
# Motor PWMs
axes[2].plot(time_s, data["left_pwm"], label="Left PWM")
axes[2].plot(time_s, data["right_pwm"], label="Right PWM")
axes[2].set_ylabel("PWM")
axes[2].set_xlabel("Time (s)")
axes[2].set_title("What the motors get")
axes[2].legend()
axes[2].grid(True, alpha=0.3)
plt.tight_layout()
plt.savefig("line_follow_plot.png", dpi=150)
plt.show()
No matplotlib?
You can also open the CSV in Google Sheets, Excel, or LibreOffice Calc to plot and analyze the data — select the columns you want and insert a line chart. See the Host Plotting Guide for setup and alternatives.
Study your plot and answer:
- How many distinct error values do you see? Count them. With 4 digital sensors, you should see roughly 5-7 steps — not a smooth curve.
- How many distinct correction values appear? It's the same count, just multiplied by Kp.
- Where does the error spike? Probably on curves. How quickly does the controller bring it back?
- Are the motor PWMs ever below ~75? If so, that motor is in the dead zone — not actually spinning. This wastes control authority.
The Staircase Problem — Now You Can See It
Your plot makes the fundamental limitation visible: with 4 digital sensors, the error is a staircase. Between steps, the error doesn't change at all — the robot is "flying blind." On curves, it can drift significantly within one step before the sensors notice.
Hardware solutions: more sensors (8 or 16), or analog sensors (smooth output). Software solutions: remember the rate of change of error (→ D-term), or slow down on curves (→ adaptive speed). These come later.
Tune with Data, Not Feelings
Now run the logger with 2-3 different Kp values on the same track section. In the logging code above, change the KP value and the filename for each run:
KP = 10 # First run
f = open("kp_10.csv", "w")
KP = 25 # Second run
f = open("kp_25.csv", "w")
KP = 40 # Third run
f = open("kp_40.csv", "w")
Download all three: mpremote cp :kp_10.csv . && mpremote cp :kp_25.csv . && mpremote cp :kp_40.csv .
Plot all runs on the same chart:
import csv
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 4))
for kp in [10, 25, 40]:
data = {"time_ms": [], "error": []}
with open(f"kp_{kp}.csv") as f:
reader = csv.DictReader(f)
for row in reader:
data["time_ms"].append(float(row["time_ms"]))
data["error"].append(float(row["error"]))
time_s = [t / 1000 for t in data["time_ms"]]
ax.plot(time_s, data["error"], label=f"Kp = {kp}", alpha=0.8)
ax.axhline(0, color="gray", linewidth=0.5)
ax.set_xlabel("Time (s)")
ax.set_ylabel("Error")
ax.set_title("Kp Comparison — same track, different gains")
ax.legend()
ax.grid(True, alpha=0.3)
plt.tight_layout()
plt.savefig("kp_comparison.png", dpi=150)
plt.show()
What to look for:
| Kp | What the plot shows |
|---|---|
| Too low | Error stays large for long stretches — corrections too gentle |
| Good | Error returns to zero quickly, stays small on straights |
| Too high | Error oscillates rapidly around zero — overcorrecting |
Now you're tuning like an engineer: change a parameter, measure the result, compare the data. Not "it looks smoother" but "the average |error| dropped from 0.8 to 0.3 and zero-crossings decreased from 40 to 12."
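If you want those numbers computed for you, here is a host-side sketch. It assumes the kp_*.csv files from the runs above are in the current directory; `tracking_metrics` is a hypothetical helper name.

```python
# Compute mean |error| and zero-crossing count per logged run.
import csv
import os

def tracking_metrics(path):
    """Mean |error| and sign-change count from one log file."""
    errors = []
    with open(path) as f:
        for row in csv.DictReader(f):
            errors.append(float(row["error"]))
    mean_abs = sum(abs(e) for e in errors) / len(errors)
    # Count sign flips between consecutive samples, ignoring exact zeros
    crossings = sum(
        1 for a, b in zip(errors, errors[1:])
        if a != 0 and b != 0 and (a > 0) != (b > 0)
    )
    return mean_abs, crossings

for kp in [10, 25, 40]:
    path = f"kp_{kp}.csv"
    if os.path.exists(path):  # skip runs you haven't logged yet
        mean_abs, crossings = tracking_metrics(path)
        print(f"Kp={kp}: mean |error|={mean_abs:.2f}, crossings={crossings}")
```

Lower mean |error| means tighter tracking; a spike in crossings at high Kp is the oscillation showing up as a number.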
Part 3: Understanding What You Built (~20 min)
Now that you've found something that works and seen the data, let's understand why it works -- and exactly how it breaks.
The Control Loop
What your code does, on every iteration:
1. MEASURE: error = where the line is (from sensors)
2. COMPUTE: correction = Kp × error
3. ACT: left_motor = speed + correction
right_motor = speed - correction
4. REPEAT
This is a feedback control loop. The robot constantly asks "Where am I?" and "How do I get back?" The loop runs tens of times per second, making tiny adjustments each time.
When the line drifts right (positive error):
- correction is positive
- Left motor speeds up (speed + correction), right motor slows down (speed - correction)
- Robot steers right, back toward the line

When the line drifts left (negative error):
- correction is negative
- Left motor slows down, right motor speeds up
- Robot steers left, back toward the line

When centered (error near zero):
- correction is near zero
- Both motors run at roughly the same speed
- Robot drives straight
The beauty is that the response is proportional: a small drift gets a gentle nudge, a large drift gets an aggressive correction. No thresholds, no if-else chains. One formula handles everything.
Break It on Purpose
Look at your Kp comparison plot from the logging section. Now let's understand why each Kp value behaves the way it does.
Kp too low (around 15):
- Your plot shows error staying large for long stretches — the robot wanders off the line on curves
- Few zero-crossings — it's too sluggish to even oscillate
- The corrections are so timid that the robot can't keep up
Kp too high (around 50+):
- Your plot shows error oscillating rapidly around zero — constant overshooting
- Many zero-crossings — each correction overshoots, triggering a correction the other way
- Remember: with only ~5 distinct error values, high Kp just makes bigger jumps between the same steps
Kp in the sweet spot (around 25-40):
- Error returns to zero quickly on curves, stays small on straights
- Moderate zero-crossings — responsive but not jittery
The Goldilocks Insight
- Kp too low = sluggish, drifts off
- Kp too high = jittery, oscillates
- Kp just right = smooth, responsive
There is an optimal range, not a single perfect number. Finding that range is what control engineers call tuning.
Why Does High Kp Oscillate?
This is the most important idea in this entire lab. It comes down to delay.
There is always a gap between when the robot senses an error and when the correction takes effect:
- The sensor reads the position (takes time)
- The code computes the correction (takes time)
- The motor changes speed (takes time)
- The robot physically moves (takes time -- inertia!)
With a moderate Kp, the correction is proportional and the robot converges smoothly toward the line. But with a high Kp, the robot commands a massive correction for even a small error. By the time that correction takes physical effect, the robot has already passed the line. Now the error is on the other side, triggering an equally massive correction in the opposite direction. Each overshoot triggers a bigger overshoot.

The Oscillation Spiral
- Robot drifts right: error = +0.5
- High Kp correction yanks it left
- Robot overshoots to error = -0.7 (further than original!)
- Correction yanks it right
- Overshoots to error = +0.9
- Each swing is bigger than the last
This is instability. The system's delay combined with high gain turns negative feedback (corrective) into positive feedback (destructive).
Stability and Phase Margin
In formal control theory, the point where a system becomes unstable is analyzed using Bode plots and phase margin. The closed-loop system oscillates when the total phase delay around the loop reaches 180 degrees and the gain is still above 1 at that frequency. Negative feedback becomes positive feedback.
You experienced this intuitively -- no math required to feel the robot wobble. But the math explains precisely when and why it happens.
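You can reproduce the spiral in a toy simulation. The constants are invented (a sketch of delay plus gain, not a model of the real robot): the controller acts on a measurement that is a few steps old.

```python
# Toy model of delay + gain instability (invented constants,
# not the real robot): corrections act on a stale measurement.
def simulate(kp, delay_steps=3, steps=60):
    """Peak |offset| while P-correcting a delayed measurement."""
    pos = 1.0                        # start off the line
    history = [pos] * delay_steps    # the sensing/actuation pipeline
    peak = 0.0
    for _ in range(steps):
        measured = history.pop(0)    # controller sees old data
        pos -= kp * measured * 0.1   # correction nudges robot back
        history.append(pos)
        peak = max(peak, abs(pos))
    return peak

print(simulate(kp=1.0))   # modest gain: converges, peak stays below 1
print(simulate(kp=8.0))   # high gain: each swing overshoots the last
```

With the same delay, the low gain converges while the high gain diverges: the delay is what turns "more correction" into "more overshoot".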
Measure the Loop
We talked about the 20ms loop budget earlier. How fast does the loop actually run? Measure it.
```python
from machine import Pin
from picobot import Robot
import time

line_pins = [Pin(2, Pin.IN), Pin(3, Pin.IN), Pin(4, Pin.IN), Pin(5, Pin.IN)]

def read_line():
    return [p.value() for p in line_pins]

robot = Robot()
SPEED = 90
KP = 30
W = [-2, -0.5, 0.5, 2]

print("Measuring control loop timing...")
print("Place robot on line -- starting in 3 seconds...")
for countdown in range(3, 0, -1):
    robot.set_leds((255, 255, 0))
    time.sleep(0.5)
    robot.leds_off()
    time.sleep(0.5)
robot.set_leds((0, 255, 0))

# Measure how fast the control loop actually runs
loop_times = []
for _ in range(200):
    t0 = time.ticks_us()
    s = read_line()
    active = [W[i] for i in range(4) if s[i] == 0]
    if len(active) > 0:
        error = sum(active) / len(active)
        correction = int(KP * error)
        robot.set_motors(SPEED + correction, SPEED - correction)
    loop_time = time.ticks_diff(time.ticks_us(), t0)
    loop_times.append(loop_time)
    time.sleep_ms(10)

robot.stop()
avg = sum(loop_times) // len(loop_times)
print(f"Average loop time: {avg} us ({1_000_000 // avg} Hz)")
print(f"Max: {max(loop_times)} us, Min: {min(loop_times)} us")
```
Question
The loop_times list measures only the sense-compute-act portion, not the sleep. How much of each 10ms cycle is actual work vs idle time?
Now try adding robot.read_distance() inside the loop (ultrasonic sensor, ~25ms blocking). Or oled.show() (~10ms I2C transfer). What happens to the loop timing? Does the robot's tracking quality change?
Part 4: Make It Yours (~15 min)
You've experimented with individual pieces. Now write your own complete control loop from scratch. No copy-paste -- write every line yourself, because you understand every line now.
Skeleton Code
Fill in the blanks. Every ??? is something you've already figured out.
```python
from machine import Pin
from picobot import Robot
import time

line_pins = [Pin(2, Pin.IN), Pin(3, Pin.IN), Pin(4, Pin.IN), Pin(5, Pin.IN)]

def read_line():
    return [p.value() for p in line_pins]

robot = Robot()

# --- TUNING PARAMETERS ---
SPEED = ???  # Base forward speed (try 85-100)
KP = ???     # Your best Kp from Part 3
W = [-2, -0.5, 0.5, 2]  # Sensor position weights

# --- STATE ---
last_known_direction = 0  # Remember which way we last saw the line

print("Your line follower -- Ctrl+C to stop")
print("Place robot on line -- starting in 3 seconds...")
for countdown in range(3, 0, -1):
    robot.set_leds((255, 255, 0))
    time.sleep(0.5)
    robot.leds_off()
    time.sleep(0.5)
robot.set_leds((0, 255, 0))

try:
    while True:
        s = read_line()
        active = [W[i] for i in range(4) if s[i] == 0]
        if len(active) > 0:
            error = sum(active) / len(active)
        else:
            error = None

        # --- CASE 1: Line visible ---
        if error is not None:
            # Remember which side the line was on
            if error > 0:
                last_known_direction = 1   # Line was to the right
            elif error < 0:
                last_known_direction = -1  # Line was to the left
            # P-control
            correction = ???   # Kp * error
            left_speed = ???   # speed + correction
            right_speed = ???  # speed - correction
            robot.set_motors(int(left_speed), int(right_speed))
            robot.leds.show_error(error)

        # --- CASE 2: Line lost (no sensor sees black) ---
        elif error is None:
            # Spin toward where we last saw the line
            if last_known_direction > 0:
                robot.set_motors(???, ???)  # Spin right
            else:
                robot.set_motors(???, ???)  # Spin left
            robot.set_leds((255, 0, 0))  # Red = lost

        # --- CASE 3: Junction ---
        # (All sensors see black -- crossroads or T-junction)
        # For now: just drive through it
        # Later: you could stop, beep, or choose a direction

        time.sleep(0.02)  # ~50 Hz control loop
except KeyboardInterrupt:
    robot.stop()
    robot.leds_off()
```
Question
Why do we save last_known_direction? What would happen if the robot just stopped when it lost the line?
Checkpoint -- Your Loop Works
Your robot should follow the line smoothly, recover when it briefly loses the line, and handle gentle curves. If it drifts off on curves, increase Kp. If it wobbles on straights, decrease Kp.
The Sensor Gap Problem
Run your line follower on a curve and log the raw sensor values. You'll see something like this:
```
Sensors: [1, 1, 1, 0] |░░░█|  ← line on far right
Sensors: [1, 1, 1, 0] |░░░█|
Sensors: [1, 1, 1, 0] |░░░█|
Sensors: [1, 1, 1, 1] |░░░░|  ← line between sensors — GONE!
Sensors: [1, 1, 1, 1] |░░░░|  ← still gone
Sensors: [1, 1, 1, 1] |░░░░|  ← 3 readings with no line
Sensors: [1, 1, 0, 1] |░░█░|  ← reappears on next sensor
Sensors: [1, 0, 0, 1] |░██░|  ← back to center
```
The line didn't disappear — it passed through the gap between X4 and X3. The line is narrower than the spacing between the outer and inner sensors, so there are moments where no sensor sees it. Without previous state, the robot thinks "line lost" and either stops or spins wildly.
This is why last_known_direction matters. Look at the data — the last valid reading was on X4 (far right, error = +2.0). When the sensors all read [1,1,1,1], the line is still to the right. It just hasn't reached X3 yet. Keeping the last known error lets the robot continue correcting in the right direction through the gap.
```python
# This is why the skeleton code saves direction:
if error is not None:
    last_error = error  # Remember the actual error value
    last_known_direction = 1 if error > 0 else -1
else:
    error = last_error  # Use last known error through the gap
```
Saving the Error Value vs Just the Direction
The skeleton code saves last_known_direction as +1 or -1. But saving the actual last error is better — it preserves how far off the line was, not just which side. If the last reading was error = +2.0 (far right), the robot should correct harder than if it was error = +0.25 (barely off center). This is the difference between "turn right" and "turn right hard."
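A sketch of that upgrade in plain Python (the helper name `bridge_gaps` is made up for illustration): hold the full numeric error through the blind readings instead of collapsing it to ±1.

```python
def bridge_gaps(readings):
    """Replace None (no sensor sees the line) with the last valid error,
    so the controller keeps correcting at full strength through the gap."""
    last_error = 0.0
    out = []
    for e in readings:
        if e is not None:
            last_error = e
        out.append(last_error)
    return out

# Far right (+2.0), then three blind readings while the line crosses the gap
print(bridge_gaps([2.0, 2.0, None, None, None, 0.5]))
# → [2.0, 2.0, 2.0, 2.0, 2.0, 0.5]
```

During the gap the controller still sees +2.0 and keeps turning hard right, instead of dropping to a generic "turn right" of unknown strength.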
This Is a Sensor Geometry Problem
The gaps happen because 4 sensors can't cover the full width continuously. More sensors would reduce the gaps, and analog sensors would eliminate them entirely. But with the hardware you have, state memory is the software solution — store the previous reading and use it to bridge the blind spots. This same pattern appears in any system with intermittent sensor coverage: GPS dropouts, WiFi signal gaps, barcode readers between bars.
Track Feature Detection
Now that you understand sensor gaps on curves, consider a harder problem: junctions and turns. A real track isn't just curves and straights — it has T-junctions, crossroads, dead ends, and stop markers. Your 4 sensors can detect these, but not from a single reading. You need patterns over time: what did the sensors see before, during, and after the event?
What the Sensors See at Track Features
```
Feature            What the sensors see over time
─────────────────  ──────────────────────────────────────────────
Straight line      ... [1,0,0,1] [1,0,0,1] [1,0,0,1] ...
                   Stable center pattern

Curve (left)       ... [1,0,0,1] [0,0,0,1] [0,0,1,1] ...
                   Line drifts left, error grows negative

T-junction         ... [1,0,0,1] [0,0,0,0] [1,0,0,1] ...
   ───┬───         All black briefly, then line continues:
      │            line ahead + branch to one side
                   (forward + right)

Crossroads         ... [1,0,0,1] [0,0,0,0] [1,0,0,1] ...
   ───┼───         Same as T! Can't tell from sensors alone
      │            without driving into the branch

Left turn only     ... [1,0,0,1] [0,0,0,0] [1,1,1,1] ...
   ───┘            All black, then all white = dead end ahead;
                   must turn left (or stop)

Right turn only    ... [1,0,0,1] [0,0,0,0] [1,1,1,1] ...
   └───            Same pattern as above!
                   Need to check: is there line to left? right?

Line ends          ... [1,0,0,1] [1,0,0,1] [1,1,1,1] ...
   ───             Line simply disappears = stop marker or end
```
The key lesson: a single all-black reading [0,0,0,0] doesn't tell you what kind of junction it is. You have to drive past it and see what's still there.
Detecting and Classifying Junctions
```python
from machine import Pin
from picobot import Robot
import time

line_pins = [Pin(2, Pin.IN), Pin(3, Pin.IN), Pin(4, Pin.IN), Pin(5, Pin.IN)]

def read_line():
    return [p.value() for p in line_pins]

robot = Robot()
W = [-2, -0.5, 0.5, 2]
KP = 30
SPEED = 90

# --- State tracking ---
on_junction = False
junction_count = 0
all_black_start = 0    # When did we first see all-black?
MIN_JUNCTION_MS = 50   # Ignore brief all-black (noise/wide line)

print("Line following with junction detection")
print("Ctrl+C to stop")
time.sleep(3)

try:
    while True:
        s = read_line()
        now = time.ticks_ms()
        all_black = all(v == 0 for v in s)
        all_white = all(v == 1 for v in s)

        if all_black and not on_junction:
            # Entering a junction — start timing
            all_black_start = now
            on_junction = True
        elif on_junction and not all_black:
            # Leaving the junction — classify what we see now
            duration = time.ticks_diff(now, all_black_start)
            if duration > MIN_JUNCTION_MS:
                junction_count += 1
                if all_white:
                    # No line ahead — this is a turn or dead end
                    print(f"Junction {junction_count}: DEAD END / TURN")
                    robot.stop()
                    robot.set_leds((255, 0, 0))
                    # TODO: check left/right by turning and reading
                    # For now, just stop
                    time.sleep(1)
                else:
                    # Line continues ahead — crossroads or T
                    print(f"Junction {junction_count}: CROSS / T-JUNCTION")
                    robot.set_leds((0, 0, 255))
                    # Strategy: drive straight through
                    # Later: decide to turn based on mission
            on_junction = False

        # Normal line following
        active = [W[i] for i in range(4) if s[i] == 0]
        if len(active) > 0:
            error = sum(active) / len(active)
            correction = KP * error
            robot.set_motors(int(SPEED + correction), int(SPEED - correction))
        elif not on_junction:
            # Lost line (not at junction) — stop
            robot.stop()

        time.sleep(0.02)
except KeyboardInterrupt:
    robot.stop()
```
Classifying Turn Direction
When you hit a dead end (all black → all white), you need to figure out which way to turn. One approach: spin slowly and check which side has a line:
```python
def find_turn_direction():
    """After a dead end, spin to find where the line went."""
    # Check right first
    robot.set_motors(SPEED, -SPEED)
    for _ in range(20):  # ~400ms
        s = read_line()
        if s[2] == 0 or s[3] == 0:  # Right sensors see line
            return "right"
        time.sleep(0.02)
    # Not right — try left (spin back past center)
    robot.set_motors(-SPEED, SPEED)
    for _ in range(40):  # ~800ms (double to get past center)
        s = read_line()
        if s[0] == 0 or s[1] == 0:  # Left sensors see line
            return "left"
        time.sleep(0.02)
    robot.stop()
    return "none"  # No line found — true dead end
```
Counting Junctions for Navigation
Once you can detect junctions, you can navigate a known track:
```python
# Mission: follow line, turn right at 2nd junction, stop at 3rd
TURN_AT = 2
STOP_AT = 3

# In the junction handler:
if junction_count == TURN_AT:
    print("Turning right!")
    # Drive past junction, then turn
    robot.set_motors(SPEED, SPEED)
    time.sleep(0.3)
    robot.set_motors(SPEED, -SPEED)
    time.sleep(0.5)  # Approximate 90° — refine with testing
elif junction_count == STOP_AT:
    print("Mission complete!")
    robot.stop()
    robot.set_leds((0, 255, 0))
```
What This Teaches (Embedded, Not Control Theory)
Junction detection is not a control problem — it's a pattern recognition and state management problem:
- Debouncing: The `MIN_JUNCTION_MS` threshold filters out noise, just like button debouncing
- State tracking: The `on_junction` flag remembers "I'm currently on a junction" across loop iterations
- Timing: Using `ticks_ms()` to measure event duration — same skill as the timing tutorial
- Mission logic: Junction counting turns sensor data into navigation decisions
These are all embedded systems patterns — they show up in any system that must interpret sensor events over time, from barcode readers to industrial conveyors.
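Because these are software patterns, you can test them off-robot. Here's a pure-Python sketch (the sensor stream below is hand-written for illustration, and the debounce timing is omitted for brevity) that replays recorded readings through the same enter/leave classification logic:

```python
def classify_stream(readings):
    """Replay 4-sensor readings and classify each all-black event by what
    follows it: line still there -> cross/T, all white -> dead end/turn."""
    events = []
    on_junction = False
    for s in readings:
        all_black = all(v == 0 for v in s)
        all_white = all(v == 1 for v in s)
        if all_black:
            on_junction = True
        elif on_junction:
            events.append("DEAD END/TURN" if all_white else "CROSS/T")
            on_junction = False
    return events

stream = [
    [1, 0, 0, 1], [0, 0, 0, 0], [1, 0, 0, 1],  # line continues after black
    [1, 0, 0, 1], [0, 0, 0, 0], [1, 1, 1, 1],  # nothing after black
]
print(classify_stream(stream))  # → ['CROSS/T', 'DEAD END/TURN']
```

Replaying logged data like this is a cheap way to debug classification logic before putting the robot back on the track.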
Speed and Kp Interact
Try running your loop at different SPEED and KP combinations:
| SPEED | KP | Behavior |
|---|---|---|
| 50 | 30 | Slow, stable |
| 100 | 30 | Fast — same Kp. Still stable? |
| 100 | 50 | Fast + higher Kp. Better on curves? |
At higher speed, the robot covers more distance between sensor readings. By the time you detect a drift, you've drifted further. The same Kp that worked at low speed becomes too sluggish at high speed.
Question
If you doubled the robot's speed, would you need to exactly double Kp? More? Less? Try it and see.
Stuck?
- Robot drives but ignores the line: Check the sign of your correction. `SPEED + correction` should go to the left motor.
- Robot oscillates wildly: Your Kp is too high for your speed. Lower it.
- Robot loses the line on every curve: Your Kp is too low, or your speed is too high. Try speed=60, kp=35 as a starting point.
- `NameError: name 'robot' is not defined`: Make sure `from picobot import Robot` and `robot = Robot()` are at the top.
Part 5: Where You Are and Where This Goes (~15 min)
Let's be honest about what you built and what it isn't.
What This Is
This is a learning exercise. You built a simple line follower with 4 digital sensors and a proportional controller. It works — it follows a line, handles gentle curves, recovers when it drifts. But it's far from a competition robot or an industrial system. The point was never to build the best line follower. The point was to learn the process:
- Read sensors — turn physical signals into numbers
- Compute error — fuse multiple readings into one meaningful value
- Close the loop — `correction = Kp × error`
- Tune — try values, log data, plot, compare
- Hit the limits — understand why it breaks, not just that it breaks
This process is the same whether you're following a line, balancing a drone, or controlling a chemical reactor. The sensor changes, the actuator changes, the formula stays the same.
What's Honestly Limited
Your plots from the logging section show the problems clearly:
| What you saw | Why | What field addresses it |
|---|---|---|
| Error is a staircase, not smooth | 4 digital sensors → ~5 distinct values | Signal processing — more sensors, analog sensing, interpolation |
| Speed changes with battery level | No speed feedback, open-loop motors | Control systems — closed-loop speed control with encoders |
| Robot drifts to one side | Motor mismatch, no way to detect it | Feedback control — differential encoder matching |
| High Kp oscillates, low Kp drifts | Fundamental gain-delay trade-off | Control theory — PID, stability analysis, Bode plots |
| Can't go fast on curves | No way to know "curve coming" | Path planning — lookahead, curvature estimation |
| Tuning is manual trial-and-error | No system model | System identification — measure plant response, compute optimal gains |
None of these are failures of your code. They're the natural boundaries of what 4 digital sensors + open-loop motors + P-control can do. Each boundary points to a field of engineering that exists to push past it.
The Embedded Perspective
This tutorial used control theory (P-control, correction = Kp * error) as a tool, but the focus was on the embedded context: reading GPIO pins, dealing with coarse digital sensors, working within PWM dead zones, measuring loop timing, and logging data on a constrained device.
If you explored the SW improvement section, you also saw how time-domain processing (the D-term using transition timing, EMA smoothing, integral accumulation) extracts information that the raw sensor values alone don't provide. That's not just control theory — it's embedded signal processing: making the most of limited hardware through software.
The formal control theory behind all of this — why PID works, when it doesn't, stability analysis, tuning methods — is a separate field worth studying if you're interested. But this course is about embedded systems: making real hardware do useful things under real constraints. The control loop is one tool among many (communication protocols, real-time scheduling, hardware interfaces, power management).
Where to Go Deeper
If this interested you, here's where each thread leads:
"I want better control" — The Encoder-Based Control Track adds wheel encoders, giving you actual speed measurement. The final module layers speed control under your line follower — inner loop handles speed (encoders), outer loop handles steering (line sensors). That's cascaded control.
→ Advanced 13–18: Encoder-Based Control Track
"I want to understand the math" — Why does high Kp oscillate? What determines the exact threshold? Bode plots, phase margin, root locus — the formal tools that predict stability from the math, without trial and error.
"I want better sensors" — Analog line sensors give smooth error curves instead of staircases. ToF lidar gives precise distance. IMU gives heading. Combining them is sensor fusion — getting more information than any single sensor provides.
→ Advanced 01: Sensor Theory, Advanced 04: Sensor Fusion
"I want to use data properly" — System identification, regression, model-based tuning. Instead of trying Kp values by hand, measure the system's response and compute the optimal gain.
→ Advanced 07: Data-Driven Methods
"I want faster code" — Your Python loop runs at 50 Hz. C runs at 10 kHz+. What does the abstraction cost? When does it matter? How do you trace from set_motors() down to the hardware register?
→ Advanced 08: Abstraction Layers, Advanced 02: Real-Time Concepts
What You Take With You
Regardless of where you go from here, you practiced the embedded systems workflow:
- Read hardware directly — `Pin.value()`, not a library black box
- Compute in software — weighted average, error, correction
- Drive actuators — PWM registers, dead zones, timing
- Log and measure — CSV on device, download, plot on PC
- Tune from data — not from feelings
And you wrote it all yourself:
```python
from machine import Pin

line_pins = [Pin(2, Pin.IN), Pin(3, Pin.IN), Pin(4, Pin.IN), Pin(5, Pin.IN)]
W = [-2, -0.5, 0.5, 2]

def read_line():
    return [p.value() for p in line_pins]

def compute_line_error():
    s = read_line()
    active = [W[i] for i in range(4) if s[i] == 0]
    if len(active) == 0:
        return None
    return sum(active) / len(active)
```
Every line is yours, from Pin.value() to Kp * error. That's the starting point — not the destination.
What's Next?
In Precise Turns, you'll use a gyroscope — a completely different sensor measuring angular rotation — to turn exact angles. The control structure is identical:
- Read the sensor (gyroscope instead of IR)
- Calculate the error (angle remaining instead of line offset)
- Apply proportional correction (same `Kp * error`)
- Repeat until done
Same process, different sensor. That's the point.
Improving in Software Only (~15 min, optional)
Your line follower works, but the staircase error limits how smooth it can be. Before reaching for extra hardware (encoders, more sensors), there are several software-only techniques that can improve performance with the same 4 digital sensors. These are all real techniques used in embedded systems when you can't change the hardware.

Technique 1: Window Average (Creating Values Between the Steps)
The raw error only produces ~5 discrete values. But think about what happens over time as the robot transitions between two sensor states — say from error = 0 (centered) to error = -1.25 (drifting left). The readings look like:
```
Reading:  0    0    0   -1.25  -1.25  -1.25  -1.25  -1.25
          ──────────────┼────────────────────────────────
          still centered│ sensor X1 picks up the line
```
If you average the last N readings, you get intermediate values that the raw sensor can't produce:
```
Window of 6: [0, 0, 0, -1.25, -1.25, -1.25]          → average = -0.625
Next step:   [0, 0, -1.25, -1.25, -1.25, -1.25]      → average = -0.83
Next:        [0, -1.25, -1.25, -1.25, -1.25, -1.25]  → average = -1.04
```
The time proportion spent at each level encodes where the line is between the sensor gaps. A window average captures this — turning ~5 discrete steps into a smoother signal:
```python
WINDOW = 6
error_history = [0] * WINDOW

# In the control loop:
s = read_line()
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) > 0:
    raw_error = sum(active) / len(active)
else:
    raw_error = error_history[-1]  # Hold last value through gaps

# Shift window and add new reading
error_history.pop(0)
error_history.append(raw_error)

# Average the window
error = sum(error_history) / WINDOW
correction = KP * error
```
| Window size | Effect |
|---|---|
| 1 | No smoothing — raw staircase |
| 4-6 | Good balance — smooths transitions, still responsive |
| 10+ | Very smooth but sluggish — robot reacts too late on curves |
Why This Actually Adds Information
Unlike simple noise filtering, the window average here does something useful: it converts temporal information into position resolution. When the robot is exactly between two sensor positions, the error flips rapidly between two values. The proportion of time at each value tells you where between the sensors the line is. A window average measures that proportion. You still don't know the robot's speed, but you get finer position resolution from the same 4 sensors.
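You can check this arithmetic directly in plain Python, no robot needed (the helper name `window_averages` is made up for this demo); the output matches the worked example above:

```python
def window_averages(readings, window=6):
    """Slide a fixed-size window over the raw error stream and average it."""
    buf = [0.0] * window
    out = []
    for r in readings:
        buf.pop(0)       # drop the oldest reading
        buf.append(r)    # add the newest
        out.append(sum(buf) / window)
    return out

# Raw staircase: centered, then sensor X1 picks up the line at -1.25
raw = [0, 0, 0, -1.25, -1.25, -1.25, -1.25, -1.25]
print([round(a, 3) for a in window_averages(raw)])
# → [0.0, 0.0, 0.0, -0.208, -0.417, -0.625, -0.833, -1.042]
```

The raw signal only ever takes the values 0 and -1.25, yet the averaged output passes through -0.208, -0.417, -0.625 and so on: intermediate positions the sensors alone cannot report.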
Alternative: EMA (Exponential Moving Average)
A window average needs a buffer. If memory is tight, an EMA achieves similar smoothing with just one variable:
```python
alpha = 0.3  # 0.0 = no update, 1.0 = no smoothing
smoothed_error = 0

# In the control loop:
smoothed_error = alpha * raw_error + (1 - alpha) * smoothed_error
correction = KP * smoothed_error
```
Lower alpha = more smoothing (like a longer window). The EMA is simpler but the window average is easier to reason about — you can literally print the buffer and see what's in it.
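To compare the two, feed the EMA the same step input as the window-average example (plain-Python sketch, helper name invented for the demo):

```python
def ema(readings, alpha=0.3):
    """Exponential moving average: one state variable instead of a buffer."""
    smoothed = 0.0
    out = []
    for r in readings:
        smoothed = alpha * r + (1 - alpha) * smoothed
        out.append(smoothed)
    return out

# Same step input as the window-average example
print([round(v, 3) for v in ema([0, 0, 0, -1.25, -1.25, -1.25, -1.25, -1.25])])
# Eases toward -1.25 instead of jumping: -0.375 on the first step, then
# each later value closes 30% of the remaining distance
```

Like the window average, the EMA produces intermediate values between the staircase steps, but old readings fade out gradually instead of dropping off a cliff at the end of the buffer.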
Technique 2: Last-Known Error (Memory)
When no sensor sees the line (error is None), don't stop — use the last valid error to keep correcting:
```python
last_error = 0

# In the control loop:
s = read_line()
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) > 0:
    error = sum(active) / len(active)
    last_error = error
else:
    error = last_error  # Keep correcting in the last known direction
correction = KP * error
```
This is a form of state memory — the simplest kind of prediction. It works because the line doesn't teleport: if it was to the left 20 ms ago, it's probably still to the left.
Technique 3: Time-Based Derivative (The D-Term)
Here's an insight that goes beyond textbook P-control: even though the error only takes ~5 discrete values, the time between transitions carries real information.
Think about it: the line moves under the sensors as the robot drives. If the error changes from 0 to -1.25, that means the line shifted from center to left. But how fast it shifted matters:
- Error changed in 40 ms → the robot is turning fast into a curve, or driving fast
- Error changed in 200 ms → the robot is drifting slowly
You can't measure the robot's absolute speed without encoders. But you can measure the rate of change of the line position relative to the sensors — and that's a proxy for how fast the robot is turning or how sharp the curve is.
```python
import time

prev_error = 0
prev_time = time.ticks_us()
KD = 0.005  # Start very small — d_error_dt spikes at sensor transitions

# In the control loop:
s = read_line()
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) > 0:
    error = sum(active) / len(active)
else:
    error = prev_error
now = time.ticks_us()
dt = time.ticks_diff(now, prev_time)  # Microseconds since last reading
if dt > 0:
    d_error_dt = (error - prev_error) * 1_000_000 / dt  # Error change per second
else:
    d_error_dt = 0
correction = KP * error + KD * d_error_dt
prev_error = error
prev_time = now
```
What the D-term does in practice:
- Entering a curve (error changing fast) → D-term adds extra correction in the same direction → responds faster than P alone
- Error is stable (on a straight, or steady on a curve) → D-term is ~0 → doesn't interfere
- Exiting a curve (error returning toward 0) → D-term opposes the P-term → brakes the correction before you overshoot
Warning
The D-term amplifies sudden changes. With digital sensors, every sensor transition is a step change that produces a spike in d_error_dt. Two ways to handle this:
- Apply EMA smoothing to the error before computing the derivative
- Keep KD very small and increase gradually
If the robot starts twitching at sensor transitions, reduce KD.
Is This a Real Derivative?
In control theory, the D-term is \(K_d \frac{de}{dt}\) — the derivative of a continuous error signal. Here, d_error_dt is a discrete approximation from a signal with only ~5 values. Most of the time the derivative is zero (error hasn't changed), and occasionally it's a large spike (sensor transition). It's crude — but the timing of those spikes genuinely carries information about speed and curvature. This is worth more than it might seem.
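A plain-Python check of that claim, using the same per-second formula as the loop above: an identical step observed over different intervals yields very different rates (helper name invented for the demo).

```python
def rate_of_change(prev_error, error, dt_us):
    """Error change per second, from a step observed over dt_us microseconds."""
    return (error - prev_error) * 1_000_000 / dt_us if dt_us > 0 else 0.0

# Identical 0 -> -1.25 step, observed over different intervals
print(rate_of_change(0.0, -1.25, 40_000))   # → -31.25  (fast: sharp curve or high speed)
print(rate_of_change(0.0, -1.25, 200_000))  # → -6.25   (slow drift)
```

The step itself is the same ~5-value staircase either way; the timestamp is what distinguishes a sharp curve from a slow drift.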
Technique 4: Integral Term (The I-Term)
The P-term corrects the current error. The D-term reacts to how fast the error changes. But what if the robot has a persistent small drift — for example, one motor is slightly stronger, so the robot always ends up slightly left of center?
P-control won't fix this: if the steady-state error is small (say 0.25), the correction KP * 0.25 might not be enough to overcome the motor mismatch. The error never reaches zero.
The I-term accumulates error over time. A small but persistent error builds up until the integral is large enough to correct it:
```python
integral = 0
KI = 0.5           # Start small
MAX_INTEGRAL = 50  # Anti-windup limit

# In the control loop:
error = ...
dt_sec = dt / 1_000_000  # Convert µs to seconds
integral += error * dt_sec
integral = max(-MAX_INTEGRAL, min(MAX_INTEGRAL, integral))  # Clamp
correction = KP * error + KI * integral + KD * d_error_dt
```
Integral Windup
Without the clamp, the integral grows unbounded when the robot is off the line (error is large for a long time). When the robot finally finds the line, the accumulated integral causes a massive overcorrection the other way. The MAX_INTEGRAL clamp prevents this.
When Does I Help?
With 4 digital sensors on this robot, the I-term is often unnecessary — the error resolution is too coarse to have a meaningful "small persistent offset." The I-term matters more when you have smooth error signals (analog sensors or encoders). But it's worth understanding the concept: P handles the present, I handles the past, D handles the future (prediction from rate of change).
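The steady-state argument is easy to demonstrate with a toy plant in plain Python (the plant model, gains, and function name are invented for illustration; the constant bias stands in for a mismatched motor):

```python
def steady_state_error(kp, ki, bias=1.0, dt=0.01, steps=3000):
    """Toy first-order plant with a constant disturbance: de/dt = bias - u.
    P alone settles where kp*error balances the bias; I accumulates until
    the error itself is driven to zero."""
    error, integral = 0.0, 0.0
    for _ in range(steps):
        integral += error * dt
        u = kp * error + ki * integral
        error += (bias - u) * dt
    return error

print(f"P only: residual error = {steady_state_error(kp=2.0, ki=0.0):.3f}")  # → 0.500 (= bias/Kp)
print(f"P + I : residual error = {steady_state_error(kp=2.0, ki=1.0):.3f}")  # → 0.000
```

With P alone the loop settles exactly at `bias / Kp`, never at zero; adding the integral term removes the offset entirely, at the cost of the windup risk described above.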
Technique 5: Adaptive Speed (Slow for Curves)
Use the error magnitude as a proxy for curvature — large |error| means the robot is on a curve:
```python
MAX_SPEED = 100
MIN_SPEED = 60

# In the control loop:
speed = MAX_SPEED - abs(error) * 20  # Slow down proportional to error
speed = max(MIN_SPEED, speed)        # Don't go below minimum
correction = KP * error
robot.set_motors(int(speed + correction), int(speed - correction))
```
This isn't really smoothing — it's admitting that you can't follow curves as fast as straights with limited sensors. By slowing down, you give the sensor more time to detect transitions, effectively increasing your spatial resolution.
Technique 6: Gain Scheduling
Use a gentle Kp when the error is small (straight line, fine corrections) and an aggressive Kp when the error is large (curve, need to recover fast):
```python
# In the control loop:
if abs(error) < 0.5:
    kp = 20  # Gentle on straights
else:
    kp = 45  # Aggressive on curves
correction = kp * error
```
This is called gain scheduling — using different controller parameters for different operating regions. It acknowledges that a single Kp can't be optimal for both straights and curves.
Combine Them
These techniques aren't mutually exclusive. A practical line follower might use all of them:
```python
import time

alpha = 0.3
smoothed_error = 0
prev_error = 0
prev_time = time.ticks_us()
last_valid_error = 0
integral = 0

KP_LOW = 20
KP_HIGH = 45
KD = 0.003
KI = 0.3
MAX_INTEGRAL = 50
MAX_SPEED = 100
MIN_SPEED = 60

while True:
    s = read_line()
    active = [W[i] for i in range(4) if s[i] == 0]
    if len(active) > 0:
        raw_error = sum(active) / len(active)
        last_valid_error = raw_error
    else:
        raw_error = last_valid_error  # Memory

    # Timing
    now = time.ticks_us()
    dt_us = time.ticks_diff(now, prev_time)
    dt_sec = dt_us / 1_000_000
    prev_time = now

    # Smoothing
    smoothed_error = alpha * raw_error + (1 - alpha) * smoothed_error

    # D-term: rate of change (time-based)
    if dt_us > 0:
        d_error_dt = (smoothed_error - prev_error) / dt_sec
    else:
        d_error_dt = 0

    # I-term: accumulated error
    integral += smoothed_error * dt_sec
    integral = max(-MAX_INTEGRAL, min(MAX_INTEGRAL, integral))

    # Gain scheduling + adaptive speed
    kp = KP_HIGH if abs(smoothed_error) > 0.5 else KP_LOW
    speed = max(MIN_SPEED, MAX_SPEED - abs(smoothed_error) * 20)

    # PID + adaptive speed
    correction = kp * smoothed_error + KI * integral + KD * d_error_dt
    robot.set_motors(int(speed + correction), int(speed - correction))

    prev_error = smoothed_error
    time.sleep(0.02)
```
What This Can and Can't Do
The position resolution is still ~5 values — no software can change that. But the time dimension adds real information. The D-term doesn't just see "error changed by 0.75" — it sees "error changed by 0.75 in 30 ms", which tells you something about relative speed and curvature that the P-term alone can't know. The I-term detects persistent bias that individual readings are too coarse to reveal.
Together, these techniques extract significantly more from the same 4 digital sensors. They won't match what encoders or analog sensors could provide — the spatial resolution is fundamentally limited. But they demonstrate how time-domain processing compensates for limited sensor resolution, which is a technique used across embedded systems, signal processing, and control engineering.
Log and Compare
Try the combined approach and compare it to your basic P-control using the CSV logging from earlier. The plot should show smoother correction values and fewer abrupt transitions, even though the raw sensor data is the same staircase.
Challenges (Optional)
Challenge: Variable Speed
Slow down for sharp curves, speed up on straights. When |error| is large, the robot is on a curve. When it's small, the robot is on a straight.
Challenge: Smooth Start
The robot jerks when starting because the motors go from 0 to full speed instantly. Ramp up the base speed gradually over the first second.
Hint: Increase SPEED by 1 each loop iteration until it reaches the target.
Challenge: Stop at Junction
Make the robot follow the line until it reaches a junction (all sensors black), then stop and beep.
Hint: Check all(v == 0 for v in s) — all four sensors see black.
Challenge: Recovery Spin
When the line is lost, the robot currently spins in one direction. Can you make it spin, and if it doesn't find the line within 1 second, try spinning the other way?
Challenge: Adaptive Kp
Use a small Kp when the error is small (smooth cruising) and a larger Kp when the error is large (aggressive recovery). This is called gain scheduling.
Challenge: Compute Statistics from Your Log
Using your CSV data from the logging section, compute these metrics in Python:
```python
import csv

with open("line_log.csv") as f:
    reader = csv.DictReader(f)
    errors = [float(row["error"]) for row in reader]

avg_error = sum(abs(e) for e in errors) / len(errors)
crossings = sum(1 for i in range(1, len(errors))
                if errors[i] * errors[i-1] < 0)
distinct = len(set(f"{e:.2f}" for e in errors))

print(f"Average |error|: {avg_error:.3f}")
print(f"Zero-crossings: {crossings} ({crossings / 5:.1f}/sec)")
print(f"Distinct error values: {distinct}")
```
Compare across your Kp runs. Which Kp has the lowest average |error|? How does the zero-crossing rate change?
Competition: Fastest Lap
Who can complete the oval track fastest with P-control?
Rules
- Robot must complete one full lap of the oval track
- Must use P-control (not bang-bang)
- Must stay on the line the entire lap (no manual intervention)
- Place a piece of tape across the line as a "lap marker" — detect it (all 4 sensors black) to time the lap
- Log your data and show your error plot
Code Template
```python
from machine import Pin
from picobot import Robot
import time

line_pins = [Pin(2, Pin.IN), Pin(3, Pin.IN), Pin(4, Pin.IN), Pin(5, Pin.IN)]

def read_line():
    return [p.value() for p in line_pins]

robot = Robot()
SPEED = ???  # Your speed
KP = ???     # Your Kp
W = [-2, -0.5, 0.5, 2]

f = open("lap.csv", "w")
f.write("time_ms,error,correction,left_pwm,right_pwm\n")

# Wait for start
for countdown in range(3, 0, -1):
    robot.set_leds((255, 255, 0))
    time.sleep(0.5)
    robot.leds_off()
    time.sleep(0.5)
robot.set_leds((0, 255, 0))

start = time.ticks_ms()
try:
    while True:
        s = read_line()
        elapsed = time.ticks_diff(time.ticks_ms(), start)
        # Detect lap marker (all sensors black) — skip first 2 seconds
        if all(v == 0 for v in s) and elapsed > 2000:
            print(f"LAP TIME: {elapsed} ms!")
            break
        active = [W[i] for i in range(4) if s[i] == 0]
        if len(active) > 0:
            error = sum(active) / len(active)
            correction = int(KP * error)
            left_pwm = SPEED + correction
            right_pwm = SPEED - correction
            robot.set_motors(int(left_pwm), int(right_pwm))
            f.write(f"{elapsed},{error:.2f},{correction},{left_pwm},{right_pwm}\n")
        time.sleep_ms(10)
finally:
    robot.stop()
    f.close()
```
Download the lap data: `mpremote cp :lap.csv .`
Optimization Ideas
- Higher SPEED needs higher KP — but too high oscillates
- Reduce speed on curves and raise it on straights (adaptive speed), or raise Kp only when the error is large (adaptive Kp)
- The error plot shows where you're losing time — wide oscillations = slow section
- Compare your correction rate to your motor lab Task 7 bang-bang data
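The adaptive-speed idea from the list above can be sketched as a pure function: slow down when the error magnitude is large (a curve), run at full speed when centered. The `slowdown` factor here is a guess you would tune, not a lab-specified value:

```python
def adaptive_speed(base_speed, error, slowdown=25):
    """Reduce base speed proportionally to |error| (larger error = sharper curve)."""
    return int(base_speed - slowdown * abs(error))
```

In the control loop you would replace the constant `SPEED` with `adaptive_speed(SPEED, error)` before computing the left/right PWM values.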
Leaderboard
| Rank | Name | Lap Time (ms) | Speed | Kp | Strategy |
|---|---|---|---|---|---|
| 1 | |||||
| 2 | |||||
| 3 |
Debug Challenge: Spot the Embedded Bugs
Bug 1: "Robot follows the line but wobbles constantly"
SPEED = 90
KP = 80 # High Kp for fast response!
W = [-2, -0.5, 0.5, 2]
while True:
s = read_line()
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) > 0:
error = sum(active) / len(active)
correction = int(KP * error)
robot.set_motors(SPEED + correction, SPEED - correction)
time.sleep(0.02)
Answer
Kp = 80 is way too high. At error = 2.0: correction = 160. Left motor = 240 (near max), right motor = -80 (full reverse!). The robot slams hard, overshoots, slams the other way. With only ~5 discrete error values, high Kp just makes a louder bang-bang. Fix: use Kp = 20-35.
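Besides lowering Kp, one mitigation worth knowing (a sketch, not the lab's required fix) is to clamp the wheel commands so a large correction can never push a motor past full power or into reverse:

```python
def clamped_pwm(speed, correction, lo=0, hi=255):
    """Compute left/right PWM, clamped so neither wheel reverses or saturates."""
    left = max(lo, min(hi, speed + correction))
    right = max(lo, min(hi, speed - correction))
    return left, right
```

With the buggy values above, `clamped_pwm(90, 160)` keeps the right wheel at 0 instead of driving it at -70, which softens the slam. Clamping treats the symptom, though; the real fix is still a sane Kp.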
Bug 2: "Robot loses the line on curves and never recovers"
W = [-2, -0.5, 0.5, 2]
while True:
s = read_line()
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) > 0:
error = sum(active) / len(active)
correction = int(25 * error)
robot.set_motors(80 + correction, 80 - correction)
else:
robot.set_motors(80, 80) # Lost line — drive straight
time.sleep(0.02)
Answer
When no sensor sees the line, driving straight is wrong — the robot drifts further off. It should spin toward the last known direction of the line. Fix: save last_known_direction and spin that way when the line is lost. This is a pure software solution to the "I only have 4 sensors" problem — memory compensates for limited sensing.
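Here is a sketch of that memory-based fix as one testable control step. The function name, the ±70 spin speed, and the `last_dir` convention (+1 = line was last seen to the right) are our choices, not lab-specified values:

```python
W = [-2, -0.5, 0.5, 2]  # sensor position weights, as in the lab

def step(sensors, last_dir):
    """One control iteration: return (left_pwm, right_pwm, last_dir)."""
    active = [W[i] for i in range(4) if sensors[i] == 0]
    if active:
        error = sum(active) / len(active)
        if error != 0:
            last_dir = 1 if error > 0 else -1  # remember which side the line is on
        correction = int(25 * error)
        return 80 + correction, 80 - correction, last_dir
    # Line lost: spin toward the last known direction instead of driving straight
    return 70 * last_dir, -70 * last_dir, last_dir
```

In the main loop you would carry `last_dir` across iterations: `left, right, last_dir = step(read_line(), last_dir)`.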
Bug 3: "I added OLED display and now the robot drives badly"
W = [-2, -0.5, 0.5, 2]
while True:
s = read_line()
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) > 0:
error = sum(active) / len(active)
correction = int(25 * error)
robot.set_motors(80 + correction, 80 - correction)
oled.fill(0)
oled.text(f"E:{error:.1f}", 0, 0)
oled.show() # Display current error
time.sleep(0.02)
Answer
oled.show() blocks for ~10 ms (I2C transfer). Combined with time.sleep(0.02), each loop takes ~30 ms, so the loop runs at only ~33 Hz. That is too slow for sharp curves. Fix: update the OLED every 10th iteration, or remove it from the control loop entirely.
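The throttling fix is just a counter check. This simulation (a sketch; the OLED calls are replaced by a counter so it runs anywhere) shows that over 100 control iterations the expensive display update fires only 10 times, while the motors still update every pass:

```python
def run_loop(iterations, every=10):
    """Simulate the control loop; count how often the display would update."""
    display_updates = 0
    for i in range(iterations):
        # ... read sensors, compute correction, set motors (every pass) ...
        if i % every == 0:
            display_updates += 1  # oled.fill / oled.text / oled.show go here
    return display_updates
```

On the robot, the same `if i % 10 == 0:` guard wraps the three OLED lines, so the ~10 ms I2C cost is paid once per 10 loops instead of every loop.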
Bug 4: "Mysterious crash after 12 days"
start = time.ticks_ms()
while True:
elapsed = time.ticks_ms() - start # Calculate elapsed time
# ... control loop ...
Answer
ticks_ms() wraps after ~12.4 days (2^30 ms). Direct subtraction gives wrong results after wrap. Fix: always use time.ticks_diff(time.ticks_ms(), start) which handles wraparound correctly. You learned this in the timing lab!
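To see why ticks_diff is immune to the wrap, here is a pure-Python model of its documented semantics (MicroPython's tick counter wraps modulo 2**30, and ticks_diff returns a signed modular difference). This is an illustration, not MicroPython's actual implementation:

```python
TICKS_PERIOD = 2 ** 30  # MicroPython ticks wrap at this value

def ticks_diff(new, old):
    """Signed modular difference in the range [-2**29, 2**29)."""
    return ((new - old + TICKS_PERIOD // 2) % TICKS_PERIOD) - TICKS_PERIOD // 2

start = TICKS_PERIOD - 100   # 100 ms before the counter wraps
now = 50                     # 50 ms after the wrap
# Plain subtraction (now - start) gives a huge negative number;
# ticks_diff(now, start) gives the true elapsed time: 150 ms.
```

This is why the lap-timer template uses `time.ticks_diff(time.ticks_ms(), start)` rather than raw subtraction.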
Recap
Bang-bang control snaps between fixed states -- the same problem you saw in the motor lab. Threshold steering (if/else on the error value) is still jerky because it throws away the error magnitude. Proportional control (correction = Kp * error) finally uses the error's size to produce smooth, proportional corrections. The gain must be tuned -- too low and the robot drifts, too high and it oscillates. Meanwhile, the hardware does the heavy lifting: PWM counters run the motors at 1 kHz while Python only updates a register every 20ms. That one formula is the foundation of feedback control used across all of engineering.
Key Code Reference
from machine import Pin
from picobot import Robot
# --- Line sensor setup ---
line_pins = [Pin(2, Pin.IN), Pin(3, Pin.IN), Pin(4, Pin.IN), Pin(5, Pin.IN)]
def read_line():
return [p.value() for p in line_pins]
robot = Robot() # For motor control
# --- Sensor reading ---
s = read_line() # [0,0,1,1] — 0=black, 1=white
# --- Error calculation (you built this!) ---
W = [-2, -0.5, 0.5, 2] # Sensor position weights
active = [W[i] for i in range(4) if s[i] == 0]
if len(active) > 0:
error = sum(active) / len(active) # Weighted average: -2.0 to +2.0
else:
error = None # Line lost
# --- Conditions from raw values ---
all(v == 0 for v in s) # Junction: all sensors see black
all(v == 1 for v in s) # Line lost: no sensor sees black
# --- Motor control ---
robot.set_motors(left, right) # -255 to 255
robot.stop()
# --- P-control loop core ---
correction = KP * error
robot.set_motors(int(SPEED + correction), int(SPEED - correction))