Blog

Envisioning the Future

Sometimes in the business of venture capital, we have the privilege of seeing the future.

Six years ago, on my very first day here at NEA, Lytro’s founder Ren Ng bounded into our conference room with a remarkable camera his team had built on the principles of his groundbreaking, award-winning Stanford dissertation on light field imaging. It was a computational camera that collected far more data than a conventional one, effectively turning optics into software (more on this in a moment). Shoot first, focus later. Stunning.

It’s natural to wonder (and wonder we did) whether refocusing after the fact was compelling enough to move the market. But something much more fundamental was going on: the very act of selecting focus in a traditional camera throws valuable information away. Effectively, a 2D plane (the focal plane) is selected somewhere out in front of the camera and projected onto a beautiful multi-megapixel sensor. Objects away from that focal plane, however, are progressively, and irretrievably, blurred. The world is of course 3D, so the lovingly selected slice is just that: a slice. (And if a small aperture is selected so that everything is in focus, the depth information has been discarded instead: the whole 3D world has been collapsed onto that single 2D slice. It’s an unwinnable set of tradeoffs in today’s digital cameras.)
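To make the tradeoff concrete, here is the standard thin-lens blur calculation showing how objects away from the chosen focal plane blur, and how a smaller aperture trades that blur for lost depth cues. This is a generic optics sketch, not Lytro’s design; the function name and parameters are mine:

```python
def blur_diameter(f, N, s_focus, s_obj):
    """Diameter of the blur circle (circle of confusion) for a thin lens.

    f: focal length, N: f-number (aperture diameter = f / N),
    s_focus: distance to the selected focal plane,
    s_obj: distance to the object; all lengths in the same units.
    """
    aperture = f / N
    return aperture * f * abs(s_obj - s_focus) / (s_obj * (s_focus - f))

# An object on the focal plane is perfectly sharp; off the plane it
# blurs, and stopping down (larger N) shrinks the blur for everything,
# collapsing the scene toward a single all-in-focus 2D slice.
print(blur_diameter(0.05, 2.0, 2.0, 2.0))  # 0.0: on the focal plane
print(blur_diameter(0.05, 2.0, 2.0, 4.0))  # blurred at f/2
print(blur_diameter(0.05, 8.0, 2.0, 4.0))  # much less blur at f/8
```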

Light field cameras effectively record all of the 2D slices with a single exposure. This is not just somewhat magical; if really possible, it’s a revolution. Why? Because such a camera, coupled with enough computation, could reconstruct the (visible parts of the) 3D objects that made up the scene you were shooting. This is much more than a refocusing parlor trick; it’s a capture-the-world-around-us instrument.
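For the curious, the basic refocusing computation in Ng’s dissertation amounts to a shift-and-add over the camera’s sub-aperture views. Here is my own heavily simplified sketch (integer shifts only; real pipelines interpolate sub-pixel):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing sketch for a 4D light field.

    lightfield: array L[u, v, y, x] of sub-aperture images, one per
    aperture position (u, v). alpha: relative depth of the synthetic
    focal plane (alpha = 1 keeps the originally captured plane).
    Each view is shifted in proportion to its offset from the aperture
    center, then all views are averaged.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((1 - 1 / alpha) * (u - (U - 1) / 2)))
            dx = int(round((1 - 1 / alpha) * (v - (V - 1) / 2)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

With alpha = 1 the shifts vanish and the views simply average; varying alpha slides the synthetic focal plane through the scene after the shutter has already fired.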

As long as we are dreaming, why stop at 3D? We live in spacetime, which is 4D. What about light field video? What if we could understand all of the objects in a scene and watch them as they, or the camera, moved? We’d be able to do a bunch of cool things. Yes, of course, we could decide after the fact where to pull focus. But we could also do much, much more, including removing objects in the background and replacing them with something else. Want to simulate a stereoscopic pair (for a 3D movie)? Done. Want to include computer-generated (or other live-shot) objects in your 4D light field master, and then edit the scene as if they had been in the original live shot all along? Wow.

At this point you should be imagining enormous amounts of data and gigantic amounts of computation. Doing the math six years ago produced scary large numbers on both counts; it just wasn’t possible at the time. Still, the potential was undeniable.

Now I was really excited! Here’s this revolutionary new camera that benefits from --- actually NEEDS --- Moore’s Law. A 100-megapixel sensor in a conventional camera is extreme; for a light field camera it’s table stakes. Give me a gigapixel one, please. And the computation you could apply is effectively unbounded. Yummy!

This fit NEA perfectly: long-term differentiation, fundamentally disruptive, industry-creating. So, we invested.

As with most audacious goals, the path was neither linear nor predictable. This was especially true for Lytro. While video was clearly the future, only still cameras were practical at the time. With all of the advances in smartphone cameras it was tricky to find that compelling set of features for a consumer offering, and the technology needed more cooking to check all of the professional boxes.

The company persisted and continued to build an extraordinarily deep understanding of not just how to build light field cameras, but, perhaps more importantly, how to build the computational photography software that consumes their output. The business was good, but not great.

Late in 2014, CEO Jason Rosenthal made a courageous decision: build video cameras that were untouchable in the fundamentals of today’s image quality metrics AND brought to bear the full power of light field imaging. This was the Mother of all Pivots: to go from a $1K consumer-facing still camera company to a $100K+ professional systems company at the heart of top production value in cinema and live-action VR.

I’ve seen many big, hairy engineering programs over my career, but the magnitude of the task in front of the team was singular. Every single discipline, from mechanical, to optical, to real-time computation, to software, is at the leading edge. As in bleeding. What do you think of 755 megapixels at a few hundred frames per second? Do the math: 300 GB/sec of RAW data.
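A quick sanity check on that number. Assuming 300 frames per second (the post only says “a few hundred”) and roughly 10 bits per raw sample, both of which are my assumptions:

```python
# Back-of-the-envelope raw data rate for a 755-megapixel light field sensor.
# Assumed: 300 fps ("a few hundred") and ~10 bits per raw sample.
pixels = 755e6
fps = 300
bits_per_sample = 10

rate_gb_s = pixels * fps * bits_per_sample / 8 / 1e9
print(f"{rate_gb_s:.0f} GB/s")  # on the order of the ~300 GB/s quoted
```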

It’s borderline insane. And having seen the movie before, I was almost certain that something was going to go sideways.

But it didn’t.

The team has been positively heroic. There are no words that can capture the sheer magnitude of their persistence and hard work, not just in building the cameras themselves, but in all of the downstream software and post-production creativity and sweat. You can see the fruits of their work in collaboration with (Academy Award winners) Robert Stromberg and David Stump in their short film, “Life”. You can see their views of Lytro Cinema, along with some demonstrations of how the system works, here.

We’ll look back and see that this was the moment that we left flatland and started capturing the world in all four dimensions.

Yes, sometimes we literally get to see the future. Thank you, Team Lytro.