A debugging tool I built for the maze-solving robots at my university. It was inspired by a talk I had seen a year earlier.
Problem
In my first year studying computer science, we had to program maze-solving robots. The end goal of the module was to write a program for the robot that would:
- Traverse and map a (previously unseen) 4x4 maze.
- Find its way to the start point of the maze.
- Go to the finish point via the shortest path (and in the fastest time).
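The "shortest path" requirement is the classic grid search problem. The write-up doesn't say which algorithm we used, but on a small known map a breadth-first search is the textbook approach; here is a minimal sketch, assuming the maze is a 4x4 grid of cells and a hypothetical `walls` mapping that records which neighbouring cells are blocked:

```python
from collections import deque

# Hypothetical maze representation: walls[cell] is the set of neighbouring
# cells blocked by a wall. The real robot would build this map while
# traversing the maze.
def shortest_path(walls, start, goal, size=4):
    """BFS over a size x size grid; returns the cell sequence start..goal."""
    queue = deque([start])
    parent = {start: None}  # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk parent links back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in parent
                    and nxt not in walls.get(cell, set())):
                parent[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable

# With no walls, the corner-to-corner route visits 7 cells (6 moves).
print(len(shortest_path({}, (0, 0), (3, 3))))  # 7
```

BFS guarantees the fewest moves; the "fastest time" part would additionally depend on turn costs, which this sketch ignores.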
While working on it, some issues repeatedly came up:
- Sensor unreliability. The robot carried multiple sensors: infrared distance sensors, ultrasound distance sensors, bumper switches and travel distance sensors on each wheel. Each type of sensor had different characteristics and errors, so writing robust programs meant understanding the sensors' limitations and working around them.
- Low-resolution runtime information. The robot program ran like any other: you uploaded it to the robot and executed it via the command line. While the robot interacted with the environment, you could log messages to the console. If the robot behaved unexpectedly during a run, you had to browse through the logs and reconstruct the robot's state in your head.
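One common way to "work around" noisy distance sensors, for example, is a sliding median filter, which suppresses the occasional spike an infrared sensor produces without lagging the way a plain average does. This is an illustrative sketch, not the filtering the robot actually used:

```python
import statistics

def median_filter(readings, window=3):
    """Replace each reading with the median of its surrounding window."""
    half = window // 2
    out = []
    for i in range(len(readings)):
        # Window is clipped at the ends of the sequence.
        neighbourhood = readings[max(0, i - half):i + half + 1]
        out.append(statistics.median(neighbourhood))
    return out

# One spurious 55 cm spike among readings that hover around 10 cm:
noisy = [10.1, 10.2, 55.0, 10.0, 9.9]
print(median_filter(noisy))  # the spike is gone
```

A bumper switch needs the opposite treatment (debouncing rather than smoothing), which is part of why each sensor type had to be handled differently.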
What it does
The debugger collects regular state updates from the robot and visualises them. It can run live, showing the current state of the robot, and it also stores previous runs, allowing you to inspect them.
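The core of such a debugger is the shape of one state update: a timestamped snapshot of everything the robot senses, stored so a run can be replayed later. The fields below are assumptions for illustration; the actual schema the tool used may differ:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical snapshot of the robot's state at one instant.
@dataclass
class RobotState:
    timestamp: float
    position: tuple      # (x, y) cell in the 4x4 maze
    heading: str         # 'N', 'E', 'S' or 'W'
    ir_distance_cm: float
    bumper_pressed: bool

def record(run_log, state):
    """Append one JSON-serialised snapshot; the log is a replayable run."""
    run_log.append(json.dumps(asdict(state)))

run = []
record(run, RobotState(time.time(), (0, 0), 'N', 12.5, False))
record(run, RobotState(time.time(), (0, 1), 'N', 8.0, False))
print(json.loads(run[-1])["position"])  # [0, 1]
```

Storing snapshots rather than free-form log lines is what makes both the live view and after-the-fact inspection possible: the visualiser just renders whichever snapshot the user has selected.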
Inspiration
This project was inspired by Bret Victor’s Seeing Spaces talk. Bret mentions that in order to understand the behaviour of a robot, we need to “get inside the robot’s head and see what it’s seeing, and see what it’s thinking”. This tool only focuses on “what the robot sees” part, it’s interesting to think how behaviour could be visualised.