Automating Mixed-Reality Capture

Defining the interaction model for machine perception and automated environment scanning on Quest.

Role Principal Product Designer
Context Meta Quest Platform
Outcome Meta Spatial Scanner SDK

01 // The Challenge

Designing for Imperfect Eyes

Mixed Reality (MR) depends on the headset understanding the physics of the room. Early Quest devices relied on users manually "drawing" their walls—a high-friction process that resulted in poor data.

The business goal was "Automated Scanning," but the technology wasn't ready. The computer vision (SLAM) teams were struggling with reliability. We needed a design strategy that could handle the ambiguity of the real world: open floor plans, messy rooms, and occlusion.

Key Requirements for MR Capture
FIG 01. DEFINING THE PHYSICS. Before designing UI, I aligned the engineering teams on the required output: watertight geometry for physics, semantic labels, and granular meshes for occlusion.
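The three required outputs above can be sketched as a simple data contract. This is illustrative only; the field names and the physics-readiness check are my own, not the shipped Spatial Scanner SDK's:

```python
from dataclasses import dataclass, field


@dataclass
class SceneCapture:
    """Hypothetical output contract for a completed room capture."""

    # Watertight room geometry: a closed set of vertices the physics
    # engine can collide against.
    watertight_room_mesh: list
    # Semantic labels per element, e.g. {"wall_0": "WALL", "obj_3": "COUCH"}.
    semantic_labels: dict = field(default_factory=dict)
    # Granular per-object meshes used for visual occlusion.
    occlusion_meshes: dict = field(default_factory=dict)

    def is_physics_ready(self) -> bool:
        # A capture is usable for physics once the room mesh can
        # enclose a volume (a tetrahedron needs at least 4 vertices).
        return len(self.watertight_room_mesh) >= 4
```

The key design point the diagram encodes is that these are three separate deliverables with separate consumers (physics, semantics, rendering), so they can ship on independent timelines.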

02 // The System Model

The Hybrid Capture Model

The Strategy

"We couldn't rely on full automation because the tech wasn't mature. I defined a 'Human-in-the-Loop' model: The machine suggests the geometry, and the human confirms or corrects it."

I broke the capture process down into four distinct primitives, allowing us to mix and match manual vs. automated steps depending on the hardware generation:

  1. Room Layout (walls, ceiling, floor)
  2. Architectural Features (windows, doors)
  3. Object Volumes (couches and tables, for collision)
  4. Object Meshes (high-fidelity visuals)
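One way to express the mix-and-match idea: treat each primitive as a record tagged with a capture mode, so each hardware generation assembles its own pipeline. A minimal sketch, assuming hypothetical generation cutoffs (these are not the actual Quest roadmap):

```python
from enum import Enum


class Mode(Enum):
    AUTOMATED = "automated"
    MANUAL = "manual"


# The four capture primitives, coarsest to finest.
PRIMITIVES = [
    "room_layout",
    "architectural_features",
    "object_volumes",
    "object_meshes",
]


def capture_plan(hw_gen: int) -> dict:
    """Assign manual vs. automated capture per primitive for a given
    hardware generation. Cutoffs are illustrative assumptions."""
    plan = {}
    for p in PRIMITIVES:
        if p == "room_layout":
            # Coarse layout is the first thing automation can handle.
            plan[p] = Mode.AUTOMATED if hw_gen >= 3 else Mode.MANUAL
        elif p == "object_meshes":
            # High-fidelity meshing stays in R&D the longest.
            plan[p] = Mode.AUTOMATED if hw_gen >= 4 else Mode.MANUAL
        else:
            plan[p] = Mode.MANUAL
    return plan
```

The design choice this encodes: automation is introduced per layer rather than all at once, so the user-facing flow stays identical while individual steps quietly become automatic.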
FIG 02. THE COMPONENT MODEL. Breaking the environment down into layers allowed us to ship "Room Layout" first while "Object Detection" was still in R&D.

Designing for Failure

The most critical part of this work was not the "Happy Path," but the "Correction Path." What happens when the headset misses a wall? What happens if it thinks a couch is a table?

I designed a comprehensive fallback logic that allowed users to seamlessly take over when the algorithms failed.

FIG 03. THE FALLBACK LOGIC. A detailed flow mapping how the system degrades gracefully from "Auto-Detection" to "Manual Correction" without breaking the user's flow.
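The graceful-degradation flow can be sketched as a small state machine: every failure event routes to manual correction rather than a dead end, and manual correction always rejoins the main flow. State and event names here are my own shorthand, not the SDK's:

```python
from enum import Enum, auto


class CaptureState(Enum):
    AUTO_DETECTING = auto()      # algorithms scanning the room
    SUGGESTION_SHOWN = auto()    # machine suggests geometry
    MANUAL_CORRECTION = auto()   # human takes over
    CONFIRMED = auto()           # geometry accepted

# Every failure path ("timeout", "reject") degrades to manual
# correction; manual correction always rejoins via "done".
TRANSITIONS = {
    (CaptureState.AUTO_DETECTING, "detected"): CaptureState.SUGGESTION_SHOWN,
    (CaptureState.AUTO_DETECTING, "timeout"): CaptureState.MANUAL_CORRECTION,
    (CaptureState.SUGGESTION_SHOWN, "accept"): CaptureState.CONFIRMED,
    (CaptureState.SUGGESTION_SHOWN, "reject"): CaptureState.MANUAL_CORRECTION,
    (CaptureState.MANUAL_CORRECTION, "done"): CaptureState.CONFIRMED,
}


def step(state: CaptureState, event: str) -> CaptureState:
    """Advance the capture flow; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Note that there is no transition back to a failure state: once the user starts correcting, the system never re-runs detection over their work uninvited.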

03 // The Impact

From Experiment to SDK

This work moved environment capture from a research demo to a shipping platform capability.

FIG 04. PRODUCTION UI. The final shipping experience on Quest, utilizing the "Machine Suggests, Human Confirms" interaction pattern.