Quest for Reality
Issue: Edition 2 2020

We have increasingly been hearing about light field technology as it pertains to simulating the "ultimate" visual experience. But what exactly is light field technology, and why should we care about it?

In the purest sense, a light field is "just" a mathematical function to describe light flowing in all directions through all points in a volume. When applied to cameras, where a conventional image is a captured representation of light intensity, a light field image captures both the intensity and the direction of that light. When applied to display devices, the combined function of intensities and directions means the captured subject can be visualized from any perspective.
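To make that difference concrete, here is a minimal Python sketch contrasting the data a conventional image holds with the data a two-plane (u, v, s, t) light field holds. The resolutions and the 9 x 9 angular grid are illustrative assumptions, not a description of any particular camera.

    import numpy as np

    H, W = 480, 640    # spatial samples (s, t) - assumed resolution
    U, V = 9, 9        # angular samples (u, v) - assumed 9 x 9 grid of views

    # Conventional image: intensity only, one value per pixel per channel.
    image = np.zeros((H, W, 3), dtype=np.float32)

    # Light field: intensity for every sampled direction through every pixel,
    # i.e., one sub-image per angular sample (u, v).
    light_field = np.zeros((U, V, H, W, 3), dtype=np.float32)

    # A conventional photograph corresponds to a single angular slice; the rest
    # is the directional information a normal sensor discards at capture time.
    central_view = light_field[U // 2, V // 2]

    print(light_field.nbytes // image.nbytes)   # 81x the raw data for 9 x 9 views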

Here, Dan Ring, head of research at Foundry, provides a primer on this technology in a Q&A with Computer Graphics World Chief Editor Karen Moltenbrey.

Dan Ring

Why is light field technology important?

A conventional lens-and-sensor system collapses all of that dense light field data down into a two-dimensional image. This loses information and effectively locks in final choices at capture time (about things like focus, parallax, and even camera position) that cannot be manipulated later in post-production or re-created for display to viewers.

But if the richer data could be captured and displayed, it would provide a viewer with a natural sense of parallax from head movements, and also allow the viewer's eyes to converge and focus naturally at different depths within the scene.

Deferring these photographic decisions until after capture allows greater freedom when visualizing.

The viewing experience would then be much closer to 'being there,' in a way that stereo 3D displays have only ever approximated. In addition to the freedom of viewing position and the ability to focus at different depths, the added visual richness afforded to the viewer comes from view-dependent surfaces. For example, metallic or specular surfaces will appear far more 'real.'
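As a concrete illustration of deferring the focus decision, here is a minimal shift-and-add refocusing sketch: each sub-aperture view is shifted in proportion to its angular offset from the central view and all views are averaged, so the focal plane is chosen in software after capture. The array layout, the alpha parameter, and the use of np.roll as a stand-in for a proper sub-pixel shift are simplifying assumptions for the example, not a production implementation.

    import numpy as np

    def refocus(light_field, alpha):
        # light_field: shape (U, V, H, W), one sub-aperture image per angular
        # sample. alpha controls which depth ends up in focus: views are shifted
        # by alpha times their angular offset, then averaged, so scene points at
        # the matching depth align and stay sharp while everything else blurs.
        U, V = light_field.shape[:2]
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros(light_field.shape[2:], dtype=np.float64)
        for u in range(U):
            for v in range(V):
                dy = int(round(alpha * (u - cu)))
                dx = int(round(alpha * (v - cv)))
                out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (U * V)

    # Two different focus choices from the same captured data (random placeholder).
    lf = np.random.rand(5, 5, 64, 64)
    near = refocus(lf, alpha=1.5)
    far = refocus(lf, alpha=-0.5)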

Which segments of the industry will benefit most from light field tech, and in what way?

Firstly, light field tech could benefit live-action on-set capture, where the additional data could be widely exploited in post-production, particularly in visual effects where it could allow for easier or higher-quality integration of elements.

Secondly, if in the future display devices were to become available at scale, post-production could pass the data through to the final display to create an unequaled viewing experience. Well before this became practical or affordable for home use, it could be a compelling part of premium cinema, in a more persistent way than stereo 3D has managed. Light field displays could also have an impact if they were used instead of LED panels in virtual production environments, as they would enable natural parallax, lighting, and focus.

What about any segments outside of our industry?

Light fields already have uses in industrial applications, where the extra data can be used for detailed inspection (http://raytrix.de/inspection/, for example). Visualization of products in the industrial and automotive design space will particularly benefit from light fields, where the value of more physically correct light transport is important.

Light field technology is not new, is it?

The mathematical and physical concept of a light field dates back at least to the work of Michael Faraday in 1846 and has been continuously expanded on since then. The practical applications in photography are also not new and have been explored experimentally for at least 100 years.

It seems that suddenly we are hearing more about light field tech - why is that?

Practical digital light field capture is a complex engineering problem as well as a complex computational problem. The explosion of available compute, storage, and networking over the last 20 years has brought the field to the cusp of being viable.

Visualizing light fields had been a problem up until around 2012, when techniques for compressive sensing of light fields on the GPU were developed and used by MIT, MPI, and Nvidia, for example. This enabled more content to be delivered to multi-layer display devices at higher spatial and depth resolutions, and started putting light fields in front of people's eyes and driving interest in the technology.
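For a rough sense of how a multi-layer display can reproduce a light field, the flatland sketch below optimizes two stacked attenuation layers so that their product along each ray approximates a target light field. The geometry, the plain gradient-descent update, and the random placeholder target are illustrative assumptions, not the exact formulation of the published systems mentioned above.

    import numpy as np

    A, X = 5, 128                    # angular samples, spatial pixels (flatland)
    target = np.random.rand(A, X)    # target light field L[u, x] in [0, 1]

    back = np.full(X, 0.5)           # back-layer transmittance
    front = np.full(X, 0.5)          # front-layer transmittance
    shifts = np.arange(A) - A // 2   # a ray at angle u crosses the front layer
                                     # offset by shifts[u] pixels

    def render(back, front):
        # Light along ray (u, x) is attenuated by both layers it passes through.
        return np.stack([back * np.roll(front, s) for s in shifts])

    lr = 0.05
    for _ in range(500):
        err = render(back, front) - target
        grad_back = sum(err[u] * np.roll(front, shifts[u]) for u in range(A))
        grad_front = sum(np.roll(err[u] * back, -shifts[u]) for u in range(A))
        back = np.clip(back - lr * grad_back, 0.0, 1.0)
        front = np.clip(front - lr * grad_front, 0.0, 1.0)

    print("mean reconstruction error:", np.abs(render(back, front) - target).mean())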

How important are light fields to achieving a realistic visualization experience?

Simply put, if you could fully capture the complete light field of a space and fully display it back later to a viewer, it would be visually indistinguishable from reality. In practice, of course, there will be limits on resolution and dynamic range that make it an imperfect experience, but in principle, it still has the potential to deliver a visual experience that can't ever be achieved by two-dimensional images.

A key part of a 'realistic visualization experience' is obviously the actual experience. Light field display technology offers a different way of reproducing a captured or approximated light field than, say, a VR headset.

With the current volumetric display devices, the visualization is delivered through a physical display in the same world as the viewer - they aren't being asked to leave their current world.

A light field array built by the Saarland Informatics Campus.

What are some of the specific obstacles to achieving light fields, and why are they important?

The volume of data is orders of magnitude greater than conventional imagery, and the weight and size of capture equipment are significant compared to current camera technology. Without compelling reasons to support the technology (benefits on set, benefits in post-production, or benefits to a viewer), it is highly unlikely that adoption will happen any time soon. The movement of such large amounts of data is impractical, and the energy requirements (and heat output) of prototype displays are very large.
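A quick back-of-envelope calculation illustrates the gap. The numbers below (10-bit RGB, 4K at 24 fps, a 9 x 9 grid of views, no compression) are assumptions chosen purely for illustration, not figures quoted by anyone:

    # Assumed, uncompressed figures - purely illustrative.
    bits_per_pixel = 30              # 10-bit RGB
    width, height, fps = 3840, 2160, 24
    views = 9 * 9                    # assumed angular resolution

    conventional = width * height * bits_per_pixel * fps   # bits per second
    light_field = conventional * views

    print(f"conventional: {conventional / 8 / 1e9:.1f} GB/s")   # ~0.7 GB/s
    print(f"light field:  {light_field / 8 / 1e9:.1f} GB/s")    # ~60 GB/s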

Assuming the above problems of storage, computation, and transport can be solved, production pipelines for working with light fields also need to exist, from capture to post-production and delivery. On the post-production side, deep images - which are an image-based, camera-view-dependent projected volume approximation - have been used for VFX and animation for a while now. Even though their data rates are significantly lower than those of light fields, they are still substantial enough to make working with deep images challenging.
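As a rough illustration of the deep-image idea, the sketch below keeps several depth-ordered samples per pixel instead of one flattened value, so elements can still be recombined against each other in post. The data layout is a simplified stand-in for the concept, not the OpenEXR deep-image format or any vendor's API.

    from dataclasses import dataclass

    @dataclass
    class DeepSample:
        depth: float    # distance along the pixel's ray
        color: tuple    # (r, g, b), premultiplied by alpha
        alpha: float    # coverage/opacity of this sample

    def flatten(samples):
        # Composite a pixel's samples front to back into one conventional value.
        r = g = b = a = 0.0
        for s in sorted(samples, key=lambda s: s.depth):
            r += (1.0 - a) * s.color[0]
            g += (1.0 - a) * s.color[1]
            b += (1.0 - a) * s.color[2]
            a += (1.0 - a) * s.alpha
        return (r, g, b, a)

    # One pixel seeing a semi-transparent element in front of an opaque one;
    # keeping both samples lets either element be replaced later.
    pixel = [DeepSample(12.0, (0.0, 0.0, 0.4), 0.5),
             DeepSample(30.0, (0.3, 0.0, 0.0), 1.0)]
    print(flatten(pixel))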

Again, the investment required and the timescales to produce are huge.

How does one go about solving these issues?

As with all technology advances, we see a lot of dogged persistence by innovators in capture technology and in display technologies. Once one of these finds a niche that is commercially viable, there's a chance that niche can expand. We may not need to solve capture to post-production to viewer all in one go.

When we finally do overcome these issues, what will the result look like?

I'd start from the answer to the question above, 'How important are light fields to achieving a realistic visualization experience?' If we have that, then there are some practical enterprise applications we can think about first, before the tech could one day evolve to home use:

  • Large displays used for on-set backdrops, replacing LED walls with environments that have the correct perspective from multiple viewpoints and allow natural focus pulls into synthetic environments.
  • Large displays used for future cinema, to maintain a premium viewing experience for big-screen entertainment.
  • Large displays used for live events.

Why do light fields have the potential to eliminate viewing gear in VR?

Digital CAVE environments aren't new, but they could be greatly improved with light field display panels, removing the need for headgear while greatly improving the experience. This isn't likely to be viable in a home environment, but we think the switch from headsets to an immersive wall display could be very compelling, adding a sense of immersion without necessarily needing a full wrap-around experience.

In either case, the ambition would be to move away from stereo display technology, which is a very poor approximation of the human visual system.

Is there any other information about light fields you would like to highlight?

Foundry has been developing light field research as part of SAUCE, a collaborative EU Research and Innovation project between Universitat Pompeu Fabra, Foundry, DNeg, Brno University of Technology, Filmakademie Baden-Württemberg Animationsinstitut, Saarland University, Trinity College Dublin, and Disney Research to create a step change in allowing creative-industry companies to reuse existing digital assets for future productions.

The goal of SAUCE is to produce, pilot, and demonstrate a set of professional tools and techniques that reduce the production costs of enhanced digital content - in particular, targeting the creative industries by increasing the potential for repurposing and reusing content, as well as providing significantly improved technologies for digital content production and management.

For more insights into light fields and related topics, visit Foundry's Insights Hub at https://bit.ly/insightslightfields.