Computer Vision

Ever since the release of Kinect, I have been fascinated by the possibilities of computer vision. I always seize the opportunity to create with it, as long as it does not invade anyone's privacy. I have had the privilege of working on many projects, which I like to call "happy surveillance." Although these systems can see people, I never use that information in a harmful way. My goal is always to create delightful experiences, whether it be playing with computer-generated snow, dancing with robots, or walking through water.


These image descriptions were written by Claude. I provided my own code, project proposals, and notes as input so the descriptions could explain what you're actually seeing.

True Sight vision system
Gesture tracking tests

The True Sight vision system, a custom computer vision pipeline for tracking and spatial awareness. The interface shows real-time object detection, centroid tracking, and contour analysis. This kind of system powered several projects, from interactive installations that respond to people in a room to robotic navigation.
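The centroid tracking mentioned above is simple at its core: each frame, detect object centroids (in practice from contour analysis), then match them to existing tracks by nearest distance so an object keeps the same ID as it moves. A minimal sketch of that matching step in pure Python (the class name, distance threshold, and coordinates are illustrative, not from the actual True Sight code):

```python
import math
from itertools import count

class CentroidTracker:
    """Match per-frame detections to persistent IDs by nearest centroid."""

    def __init__(self, max_distance=50.0):
        self.max_distance = max_distance  # ignore matches farther than this
        self.tracks = {}                  # track id -> (x, y)
        self._ids = count()

    def update(self, centroids):
        """centroids: list of (x, y) detected this frame. Returns id -> (x, y)."""
        unmatched = list(centroids)
        new_tracks = {}
        # Greedily match each existing track to its nearest new centroid.
        for tid, pos in self.tracks.items():
            if not unmatched:
                break
            best = min(unmatched, key=lambda c: math.dist(pos, c))
            if math.dist(pos, best) <= self.max_distance:
                new_tracks[tid] = best
                unmatched.remove(best)
        # Any leftover centroids become new tracks.
        for c in unmatched:
            new_tracks[next(self._ids)] = c
        self.tracks = new_tracks
        return new_tracks

tracker = CentroidTracker()
tracker.update([(10, 10), (100, 100)])       # two people enter: IDs 0 and 1
tracker.update([(14, 12), (98, 105)])        # same people, slightly moved, IDs kept
```

A production pipeline would add track aging (so a person briefly occluded is not immediately forgotten) and a proper assignment algorithm, but the nearest-neighbor version captures the idea.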

Gesture and pose estimation tests in the lab. The colored skeleton overlay tracks body position in real-time, useful for everything from interactive art installations to making robots that can see and respond to people. This falls under what's best described as "happy surveillance": the system can see you, but only to create something delightful.

Perspective calibration work
Assembly of dual thermal and RGB cameras
Testing computer vision system on location
Hand tracking and gesture recognition software

Fixing perspective calibration on cameras that were already installed for an interactive water fountain. Visitors could walk through the fountain, and the vision system needed to track exactly where they were to control the water jets. The screen shows two views: the raw camera feed of a board with green circular markers, and the processed output with each marker identified and labeled with coordinates. OpenCV handles the math, but getting calibration right on cameras you can't easily move is its own challenge.
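The math OpenCV handles here (`cv2.getPerspectiveTransform`) reduces to solving a small linear system: four marker correspondences between camera pixels and real-world floor coordinates pin down a 3x3 homography. A sketch with numpy, where the pixel and floor coordinates are made-up example values, not the fountain's actual calibration data:

```python
import numpy as np

def perspective_transform(src, dst):
    """Solve for the 3x3 homography H mapping src points to dst points.

    src, dst: four (x, y) pairs each. Same math as
    cv2.getPerspectiveTransform: from the projective mapping
    u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), build an 8x8 linear
    system in h0..h7 (h8 fixed to 1) and solve it.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Map one pixel coordinate through the homography."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical calibration: marker pixels as seen by a skewed camera,
# mapped to a flat 400 x 300 cm floor plan.
pixels = [(102, 85), (511, 74), (530, 389), (98, 402)]
floor = [(0, 0), (400, 0), (400, 300), (0, 300)]
H = perspective_transform(pixels, floor)
```

Once `H` is known, every tracked centroid in camera space can be warped into floor coordinates to decide which water jets to fire; the hard part on installed cameras is collecting accurate correspondences, not the solve itself.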

An array of prototype devices lined up on a workbench, each equipped with microcontrollers, ribbon cables, and sensor boards. These are the building blocks for computer vision systems: the hardware that captures visual data before any software processing begins. The pegboard wall behind is stocked with tools and components for quick iteration.

Carrie and Bruno huddled around a laptop on a concrete floor, testing a computer vision system in the actual space where it will run. Calibrating vision systems in situ is essential because lighting, reflections, and distances all affect how the camera sees the world. What works perfectly in the lab can fall apart in the real environment.

Two software windows running side by side. On the left, a hand tracking system running at 27.70 FPS, showing an orange skeleton overlay mapping the joints and segments of a hand in real time. On the right, a reference image of a swimmer underwater. The goal was classifying hand positions in water for a sports startup. Tracking hands is hard enough in open air; doing it underwater, where refraction and bubbles mess with everything, is a different problem entirely.
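An FPS readout like the 27.70 in that window is usually a smoothed average of per-frame times rather than a raw instantaneous value, which would jitter too much to read. A simple version using an exponential moving average (pure Python; timestamps are passed in explicitly here for testability, whereas a live system would call `time.perf_counter()` each frame):

```python
class FpsCounter:
    """Smoothed frames-per-second estimate from frame timestamps."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing  # weight kept on the previous estimate
        self._last = None
        self.fps = 0.0

    def tick(self, now):
        """Record a frame rendered at time `now` (seconds); return smoothed FPS."""
        if self._last is not None:
            dt = now - self._last
            if dt > 0:
                instant = 1.0 / dt
                # Blend the new sample into the running estimate.
                self.fps = (self.smoothing * self.fps
                            + (1 - self.smoothing) * instant
                            if self.fps else instant)
        self._last = now
        return self.fps

counter = FpsCounter()
for i in range(10):
    counter.tick(i / 30.0)   # simulate a steady 30 FPS stream
```

The smoothing factor trades responsiveness for stability: 0.9 settles quickly enough to notice when underwater refraction tanks the frame rate, without flickering on every slow frame.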