
Zillow Tech Hub

Zillow’s SkyTour is pushing the tech boundaries of real estate visualization


Zillow’s latest innovation, SkyTour, introduces an immersive way for home shoppers to explore the exterior of a property. The technology is powered by Gaussian splatting, a 3D-rendering technique gaining traction in computer vision and graphics research. But SkyTour isn’t just a product feature — it’s a demonstration of what’s possible when product, research and engineering teams collaborate across the full technology stack to enhance the home-buying and -selling experiences.

Launched exclusively on Zillow Showcase℠ listings, SkyTour allows users to interactively view a home and its surroundings from any angle, using high-quality drone footage transformed into a seamless 3D experience. The result is a dynamic, real-time visual interface that feels more like piloting a drone than browsing a listing.

But under the hood, SkyTour is a complex system that brings together advanced drone-based photography, AI-driven rendering, distributed systems and cross-functional tooling — all deployed at the consumer level.

We spoke with Steve Anderson, product manager at Zillow, about the architecture, technical challenges and collaborative innovations behind SkyTour — and asked why Zillow is uniquely positioned to build and ship this kind of technology.

Q&A with Steve Anderson on SkyTour

What is SkyTour and how does it work?

Steve Anderson: SkyTour is an interactive exterior-viewing experience for Zillow Showcase listings. It starts with drone footage, but instead of presenting that footage as a fixed video, we reconstruct the scene as a navigable 3D model using a technique called Gaussian splatting. The user controls the viewpoint, which means they can explore the property and its surroundings from any angle — zoom out to get a feel for the lot, or dive down to street level.

From a technical standpoint, it’s a combination of structure from motion (SfM) to create a dense point cloud; gradient-descent-based optimization to place the splats; and splat-based rendering to generate a smooth, photorealistic representation in real time. What makes it compelling is not just the data fidelity, but how effortlessly it runs in a browser. That performance is where Gaussian splatting gives us a huge experience win.
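The gradient-descent optimization Steve describes can be illustrated with a deliberately tiny analogue: fitting a 1D signal with a set of Gaussian "splats" whose amplitudes are adjusted to minimize a reconstruction loss. This is a toy sketch of the idea only, not Zillow's production pipeline, which optimizes millions of 3D splats against real photographs.

```python
import numpy as np

# Toy 1D analogue of splat optimization (illustrative only):
# represent a signal as a sum of Gaussian "splats" and fit their
# amplitudes by gradient descent on a mean-squared reconstruction loss.

def render(x, mu, sigma, amp):
    # Each splat contributes a Gaussian bump; the "image" is their sum.
    basis = np.exp(-((x[:, None] - mu[None, :]) ** 2) / (2 * sigma ** 2))
    return basis @ amp, basis

x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x) + 0.5          # signal to reconstruct

mu = np.linspace(0.0, 1.0, 12)                # fixed splat centers
sigma = 0.06                                  # shared splat width
amp = np.zeros(len(mu))                       # amplitudes to optimize

init_loss = np.mean((render(x, mu, sigma, amp)[0] - target) ** 2)

lr = 0.5
for step in range(500):
    pred, basis = render(x, mu, sigma, amp)
    residual = pred - target
    grad = 2.0 * basis.T @ residual / len(x)  # d(MSE)/d(amp)
    amp -= lr * grad

loss = np.mean((render(x, mu, sigma, amp)[0] - target) ** 2)
print(f"MSE: {init_loss:.4f} -> {loss:.4f}")
```

The real technique optimizes far more per splat (position, covariance, color, opacity) and renders in 3D, but the core loop is the same shape: render, compare against the input frames, and nudge splat parameters down the gradient of the error.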

Where did the idea come from?

Steve: Our scientist Will Hutchcroft read the original research paper on Gaussian splatting and built a prototype using footage of his own house. When I tried it, I was blown away. It gave me a sense of space that no other digital model had — like I’d physically been there. That was the moment we knew we had to productize this.

Shortly after that, we tested it on exterior drone footage, and the results were even more promising. Outdoor scenes are ideal because there’s more light, more features for SfM and fewer motion-blur issues. It solved a real customer need, and it was a technically feasible entry point. That’s why exteriors came first.

What makes Gaussian splatting so innovative for this feature?

Steve: Traditional 3D modeling approaches — think mesh-based geometry or voxel grids — can be computationally expensive to render and can struggle to achieve photorealism. Gaussian splatting is different. Instead of polygons, it uses ellipsoidal primitives, or “splats,” to represent surfaces. Each splat has its own position, color, orientation and opacity, and the system optimizes their placement by minimizing a cost function tied to fidelity across input frames.
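The per-splat parameters Steve lists, and the way overlapping splats combine along a view ray, can be sketched as follows. The field names are illustrative assumptions, not Zillow's actual data model; the compositing rule is the standard front-to-back alpha blend used in splat-based rendering, C = Σᵢ cᵢ·αᵢ·Πⱼ₍ⱼ₎₍₎₍j<i₎(1 − αⱼ).

```python
from dataclasses import dataclass

# Hypothetical per-splat record: each ellipsoidal primitive carries its
# own position, scale, orientation, color and opacity, as described above.
@dataclass
class Splat:
    position: tuple  # (x, y, z) center of the ellipsoid
    scale: tuple     # ellipsoid radii along its principal axes
    rotation: tuple  # orientation quaternion (w, x, y, z)
    color: tuple     # RGB in [0, 1]
    opacity: float   # alpha in [0, 1]

def composite(splats_front_to_back):
    """Blend splats along a ray, front to back:
    each splat contributes color * opacity * remaining transmittance."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for s in splats_front_to_back:
        weight = s.opacity * transmittance
        for k in range(3):
            color[k] += s.color[k] * weight
        transmittance *= (1.0 - s.opacity)
    return tuple(color), transmittance

near = Splat((0, 0, 1), (1, 1, 1), (1, 0, 0, 0), (1.0, 0.0, 0.0), 0.5)
far = Splat((0, 0, 2), (1, 1, 1), (1, 0, 0, 0), (0.0, 0.0, 1.0), 1.0)
pixel, remaining = composite([near, far])
print(pixel)  # half-strength red in front, blue filling the rest
```

In production the splats are first projected and depth-sorted per view, but the blend itself is this simple, which is a big part of why the technique renders so fast on GPUs.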

What’s remarkable is how well it captures parallax and spatial coherence with relatively low overhead. Our models often include millions of splats, and thanks to GPU-accelerated pipelines and browser-friendly shaders, the result is smooth motion and high frame rates even on consumer devices.

What technical challenges did you face getting this into production?

Steve: One major challenge was building a scalable processing pipeline. Gaussian splatting models require a lot of tuning — from point cloud reconstruction to splat optimization. We had to optimize performance without compromising visual quality. That meant using GPU-backed render farms.

Another challenge was quality control. Sometimes the models fail — due to poor capture conditions like heavily forested lots. To address this, we built a lightweight annotation system where humans can review and approve models. We also developed internal tools for photographers to validate footage before submission, helping us catch problems earlier in the pipeline. Another challenge is training: This is a completely different way to capture a space, and we had to teach our photographers a new way of thinking.

Why is Zillow uniquely positioned to build this?

Steve: Zillow is in a rare position because we control the entire pipeline — from capture to deployment and rendering. We have in-house photographers and drone pilots trained to collect footage in a way that supports high-fidelity modeling. We have applied scientists who built the core Gaussian splatting implementation and iteratively improved it for production. Our back-end and platform engineers built tooling to automate processing and associate models with listings. We have amazing designers who came up with an intuitive UX, and we’ve got human-in-the-loop systems that provide quality control.

All of these pieces are tightly integrated. A change in design may trigger a change in how we capture footage. A model improvement might require a back-end processing update. That end-to-end ownership enables us to experiment, scale and ship quickly — which would be hard to do without all these pieces under one virtual roof.

We built SkyTour to show what’s possible when you combine cutting-edge research with strong product infrastructure and cross-team collaboration. Honestly, the ability to work across so many disciplines — photography, design, computer vision, real-time rendering, systems engineering, UX, even logistics — was a game changer. You don’t often get to be part of a project that touches hardware, software and science all at once.

It’s a showcase of what we can achieve at Zillow when we’re empowered to think big and ship fast. And this is just the beginning.

Why should consumers be excited about SkyTour?

Steve: We’ve built immersive interior tools like 3D Home tours and interactive floor plans before, but buyers also care deeply about the exterior — how the home sits on the lot, what the yard looks like, how far away the neighbors are. Before, they were limited to static images or drone videos curated by someone else.

With SkyTour, buyers control the experience — that sense of agency is powerful.

Because the models are photorealistic and smoothly rendered, the cognitive load is low — users intuitively understand the space. That kind of spatial immersion, especially delivered through a browser, is rare in real estate or retail experiences. It creates an emotional connection that static media just can’t match. When people try SkyTour, their reaction is visceral — “I feel like I’ve been there.” That’s incredibly satisfying from both a product and technology standpoint.

Zillow is where innovation meets impact

Zillow has long been a tech innovator — not just in real estate, but across the broader tech world. From launching the Zestimate nearly two decades ago to being an early adopter of advanced AI today, Zillow consistently embraces emerging technologies to personalize and simplify the home shopping experience for millions. Its early adoption of Gaussian splatting reflects a continued commitment to meaningful innovation and to shaping the future of real estate technology.

Interested in building the future of spatial experiences at Zillow? Visit Zillow Careers to explore open roles in machine learning, computer vision and engineering.
