X-cellence
Issue: April-May-June 2021

Storytelling has existed since the dawn of time, evolving over the millennia - from drawings on cave walls, to oral accounts passed down from one generation to the next, to lengthy narratives on paper and, later, in digital form. However, ILMxLAB, Lucasfilm's immersive entertainment studio, is taking storytelling to another level, pioneering new interactive virtual-reality and mixed-reality experiences that transform storytelling into what the studio calls storyliving.

Industrial Light & Magic
ILMxLAB has been creating rich virtual worlds, like those shown here in the VR experience Vader Immortal.

Founded in June 2015, ILMxLAB is located in San Francisco, at the Lucasfilm headquarters, where it creates compelling, living worlds that people can visit and actually become part of. These worlds contain rich sound and visual detail, and the experience evolves based on choices the visitor makes.

With the goal of pushing new boundaries in this burgeoning medium, ILMxLAB has created several commercial projects, including at-home experiences for the Oculus Quest, a location-based arcade-style experience, and four location-based entertainment (LBE) experiences - upping the ante with each new project. It also has created a number of experimental projects.

Here, CGW Chief Editor Karen Moltenbrey speaks with ILMxLAB Visual Effects Supervisor Tim Alexander about the studio, what it does, the technology used, and how the group continues to innovate in this space.

Please describe ILMxLAB.

ILMxLAB is about finding new forms of storytelling in the real-time space. Over the past five years, we've been mostly exploring in the virtual-reality space, but we're also very much into augmented reality, mixed reality, all of the different realities. If you look at the projects we've done, they've actually been much more about story, a nod to our background from both Lucasfilm and ILM, where we see ourselves as storytellers and filmmakers. So, we're really interested in exploring this new medium from a storytelling standpoint.

How many people work at ILMxLAB, and what are their backgrounds?

There are about 80 right now, and they come from various backgrounds. We truly understand that real time is definitely different from doing visual effects work, so there's a definite need for people (producers and artists) with a lot of experience in games and real time. Then we also have others who come from visual effects, like me.

We try to cross-train people in real time so that they can be useful in both the VFX realm and in the real-time realm. We're finding that there are certain jobs that really blend well or can make the jump over to real time pretty easily. So, for certain shows, we actually staff people from the visual effects side to do real-time work.

What was the nexus for forming ILMxLAB?

It was the advent of VR and real-time graphics being viable on higher-end PCs; we're not talking about $200,000 computers anymore. Lucasfilm formed the Advanced Development Group first, which spent years exploring real-time graphics. They didn't have to put out a product or anything immediately; it was about R&D and seeing where they could take these technologies. Although, in Rogue One (2016), there's a handful of shots of K-2SO that were rendered in real time, which came out of that group.

What's the big-picture goal?

Our big picture is to come up with the best immersive storytelling that we possibly can. We are storytellers, and we're trying to make experiences for people and get them really immersed in our worlds, while giving them some decision-making capability once they're there. We use the word 'storyliving' now, as storytelling is kind of one-way. Our goal is to look at each of these new technologies and discover how to do storytelling and storyliving in them.

How has ILMxLAB revolutionized storytelling, or storyliving as you say?

I'm not sure exactly how to answer that, other than to kind of say we've been given the opportunity to actually figure out how to do this in VR, as opposed to having to be monetarily successful. It's given us a number of years to really explore the VR space. And I think our success can be measured in the awards, accolades, and recognition that we've received, including an Academy Award (for CARNE y ARENA) for a new form of storytelling. The last time that was given out was for Toy Story in the mid-'90s.

It's hard to say how storytelling was revolutionized because we were kind of explorers, if you will. I wouldn't say we've totally revolutionized it yet, but I think we're leading the way, and with each experience, we're exploring the medium and trying to figure out the best ways to do storytelling. I just don't think it's solved yet.

Industrial Light & Magic
ILMxLAB VFX Supervisor Tim Alexander.

What factors have enabled ILMxLAB to evolve?

Probably one of the biggest trends has been the maturation of the VR headset and the move toward the mobile market. If you look at our projects, we've trended toward wherever the hardware has been going, because we've started seeing people gravitate to our at-home personal experiences. Our Vader Immortal series and Tales from the Galaxy's Edge are on the Quest, but we also have releases on PlayStation 4, the Rift S, and SteamVR, which support a number of headsets. The most accessible is the Quest, but it's also the least powerful in terms of rendering.

The group has created location-based and at-home experiences. Any others?

We basically operate in four work streams at ILMxLAB. One of those work streams is home and daily lives, so that would be the Oculus-type class - something you could do at home and download. Then we have another work stream, innovation experiments, which is pretty much straight-up R&D - things that don't necessarily get productized right away. Right now those would be, for example, experiments in AR, since AR is not that mature yet, but we do believe in it. We think it's coming. LBE is another one. Examples of those would be our VOID experiences - things that you have to go to a place to do.

Then the last work stream that we have is next-gen film and streaming. We actually see a huge market in that area - with COVID, there's been a huge uptick in people streaming. We think the next gen of that includes interactivity - not just choose-your-own-adventure, like Bandersnatch, but true, real-time graphics that you're able to interact with, still in a very cinematic way, at home, streaming in front of a TV.

How has your technology evolved?

It's evolved quite a bit. Again, we've been primarily looking at VR projects. The hardware for that has changed dramatically; one of the biggest changes in that area has been inside-out tracking, like on the Rift S and on the Quest. It really frees you up from having to instrument a space and lets you navigate much larger areas. That's been sort of a big thing for us. Obviously computers get more powerful every year, so we can do more with the graphics.

One of our key tenets is high fidelity, which comes from Lucasfilm and ILM; everything we want to do is at the highest quality. So we always push for the most extreme and best-looking graphics we can. And every year we get to do more and more. Unreal Engine keeps getting better, and graphics cards keep getting better. The technology just keeps moving, and we keep wanting more.

Also, Oculus (Quest and Quest 2) has been a big platform for us. We've released the Vader Immortal series and now Tales from the Galaxy's Edge on that platform. I don't think anybody could have predicted that you'd have a mobile device doing VR at 90 frames a second at this point. It's a pretty great piece of hardware.

Describe the pipeline.

Engineering-wise, for the interactivity, we primarily use Unreal Engine, which is where we do all of our scripting, blueprinting, and so forth, and combine everything. The pipeline in terms of building characters, environments, and so forth is moderately similar to what we do in visual effects. There are some different tips and tricks you would use so the models become real-time ready, but the process is fairly similar [to a typical linear film] in that we have people who specialize in creatures, do rigging and cloth-sim setups, and that type of thing. We have animators who animate, and they still use [Autodesk] Maya for that. That's our general tool for doing animation.

We still do a lot of mocap, too, just like we do in visual effects. That mocap comes in through Maya and then is put into Unreal Engine.

Probably the biggest piece of pipeline that we've generated is getting from the asset-creation pipeline into Unreal, whereas before it was getting assets into our other renderers, like [Pixar's] RenderMan or whatever we were using for VFX. A lot of the pipeline glue that we had to create was to get our assets into Unreal in an efficient way, and also in a way that lets us update them. We use [Epic's] revisioning system and Perforce. So the way we store the assets, check them in, and that kind of thing is very standard, but the pipeline glue is proprietary.

For example, when we do look development on an asset, we're able to use that same look development both on a VFX asset and on real-time assets. That's proprietary stuff, so our texture artists can create something and have it actually work on both sides - in an Unreal and a VFX pipeline.
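
ILMxLAB's pipeline glue is proprietary, but as a rough illustration of the kind of step such glue automates, the hypothetical Python sketch below uses Unreal's publicly documented editor-scripting API to bring an FBX exported from Maya into an Unreal project. The file path and content folder are placeholders, and the snippet only runs inside the Unreal Editor's Python environment; it is not ILMxLAB's tooling.

# Hypothetical example of asset-pipeline "glue": importing a Maya FBX export
# into an Unreal project with the editor's built-in Python API.
# Paths below are placeholders; run inside the Unreal Editor's Python console.
import unreal

def import_fbx(fbx_path, content_folder):
    # Describe the import: which file, where it lands in the Content Browser,
    # and that it should run unattended (no import dialog).
    task = unreal.AssetImportTask()
    task.filename = fbx_path
    task.destination_path = content_folder
    task.automated = True          # suppress the interactive import dialog
    task.replace_existing = True   # allow re-import when the asset is updated
    task.save = True               # save the resulting .uasset to disk

    # Hand the task to Unreal's asset tools, which perform the actual import.
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
    return task.imported_object_paths

# Example usage with placeholder paths:
print(import_fbx("D:/exports/droid_body.fbx", "/Game/Characters/Droid"))

Real glue would also handle material and texture hookup and check the results into version control (the interview mentions Perforce); those steps are omitted here.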

Industrial Light & Magic
Some projects are experimental, others commercial.

What's the biggest difference when building assets for VR versus VFX?

The assets are built dramatically differently. Some visual effects use geometry for detail, which can't really be done in real time yet. So, if you actually look at the geometry, they're very different; therefore, when you go to texture, you're also doing some things quite differently - for instance, we use normal maps for real time versus displacement maps for VFX.

So, the actual assets are different from a technical standpoint, but when they're finished, they look pretty similar. Of course, the fidelity of the VFX off-line renders is still arguably higher. But if you put a lot of effort into a singular asset, you could make a real-time asset look just as good as a visual effects asset. You just have to be able to run that at scale.

There are a lot of technical [hurdles] in how much you can push through the real-time engine and at what fidelity. You have to balance all of that. Whereas for visual effects, you just make everything at the highest fidelity; it just takes longer to render. In real time, you only have so long to render, and if you have a lot of content in there, then you have to downgrade the fidelity to get it to run. There are different balancing tricks, too. Artists have to learn how to model for real time; they have to understand budgets and so forth.
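
To put the "budgets" Alexander mentions in concrete terms: at a 90-frames-per-second target, the renderer has roughly 11 milliseconds to produce each frame. The sketch below is a hypothetical back-of-the-envelope budget check, not an ILMxLAB tool, and the per-category costs are made-up illustrative numbers.

# Hypothetical frame-budget check for a real-time VR scene.
# The cost figures are illustrative placeholders, not measured data.
TARGET_FPS = 90                        # common VR refresh rate
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS  # ~11.1 ms available per frame

estimated_costs_ms = {
    "opaque environment geometry": 4.5,
    "characters (skinning + materials)": 3.0,
    "transparency and particles": 1.5,
    "post-processing": 1.0,
    "game logic and UI": 1.0,
}

total_ms = sum(estimated_costs_ms.values())
print(f"Budget at {TARGET_FPS} fps: {FRAME_BUDGET_MS:.1f} ms")
print(f"Estimated cost:        {total_ms:.1f} ms")

if total_ms > FRAME_BUDGET_MS:
    # Over budget: reduce fidelity somewhere (fewer triangles, cheaper
    # materials, lower-resolution textures) until the frame fits.
    print(f"Over budget by {total_ms - FRAME_BUDGET_MS:.1f} ms")
else:
    print(f"Headroom: {FRAME_BUDGET_MS - total_ms:.1f} ms")

A location-based setup with dedicated PCs has more rendering headroom per frame than a mobile headset like the Quest, which is why the same content often has to be scaled back to run there.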

How much content is created for an experience?

Each chapter in the Vader Immortal series had about 40 minutes of content. It's more story-driven and linear, so it has more of a fixed time frame. The new one we just did, Tales from the Galaxy's Edge, is more exploratory, with quests - somewhat like an MMO, if you will - and the content keeps running and changing as you go. So the time can be dramatically different for each person. I've heard of some people getting through in a few hours; I've also heard of others taking longer.

There's a lot of R&D involved?

Internally, it's our R&D groups. There's the ADG, the Advanced Development Group, and our engineers at ILMxLAB within that. When we begin a project, we go into it asking ourselves, 'What's the idea, what do we want to do?' Then we'll figure out how to do that.

As a result, we do heavy R&D into rendering techniques for every single project. For example, for Tales from the Galaxy's Edge, which came out on the Oculus Quest platform in late 2020, there was quite a bit of work done from a rendering aspect to get it to look as good as it does on the Quest 2. That's a mobile piece of hardware, and if you look at the graphics, we think they look better than mobile, and that's due to a lot of rendering engineers actually getting in there and making some base changes to the rendering code so we can use different tricks to make the content look better.

In terms of hardware, going back to CARNE y ARENA, it was very hard to navigate VR for more than, say, 15 feet, and we needed to go 50 feet. That made us do a lot of work hardware-wise to use motion-capture systems to capture a person's location and feed it to the VR headset, tricking the headset into thinking it was tracking itself. So, we look at a project and see what the creative is, then we work backward to figure out how to do that, rather than coming at it from the hardware standpoint and saying, 'This is what we have now.'

What other technology has helped advance these projects?

Right now, for animation, we rely heavily on motion capture and facial capture to make the most believable characters we possibly can. To make the story worlds look great, we rely heavily on the GPU hardware, on Unreal, and on good headsets (those that truly run at 90 frames per second or faster).

Do you approach an LBE experience differently from an at-home one?

Yes, very, very differently. We consider location-based premium because we tend to use custom hardware, meaning it's potentially more powerful, and we can control the environment almost 100 percent. So if you were to go to a location and do The VOID experience, for example, we would have mapped that out very precisely. And you would get a much better immersive, higher-end experience because we control everything in that space - where walls are, where doors are, how we track you, what computer is being used for rendering, all those kinds of things.

That technology isn't quite there yet, so there are many variables with in-home - the hardware, the environment, the lighting…. Until those technologies come around, the in-home experience, while still great and totally immersive, is not quite what we consider premium, like going to a location.

Industrial Light & Magic
The image above is from the LBE experience Avengers: Damage Control.


ILMxLAB Projects

Commercial (starting with most recent)
  • Star Wars: Tales from the Galaxy’s Edge
    Platform: Oculus Quest (at-home experience)
  • Vader Immortal: A Star Wars VR Series
    Platform: Oculus Quest + Rift, PlayStation VR (at-home experience)
  • Vader Immortal – Lightsaber Dojo: A Star Wars VR Experience
    Platform: Location-based arcade-style experience
  • Avengers: Damage Control
    Platform: Location-based entertainment
  • Ralph Breaks VR
    Platform: Location-based entertainment
  • Star Wars: Secrets of the Empire
    Platform: Location-based entertainment
  • CARNE y ARENA
    Platform: Location-based entertainment
    (Oscar-winning project directed by Alejandro G. Iñárritu)
Experiments (starting with most recent)
  • Sith Jet Trooper Experience
    Platform: Location-based interactive experience utilizing real-time performance
  • Star Wars: Project Porg
    Platform: Magic Leap headset
  • LiveCGX London Fashion Week project
    Platform: Real-time, mocap-driven augmented fashion presentation at London Fashion Week
  • Trials on Tatooine: A Star Wars VR Experiment
    Platform: Steam VR
  • Holo-Cinema
    Platform: Immersive installation
  • And numerous other experimental projects pushing the boundaries of technological and narrative innovation

What's next?

We've been working in VR for over five years now, and we have a fairly good handle on how to make a good experience there. We are really looking at augmented reality. We think AR is coming; there are a number of companies working on headsets, and we hope to see something in the next couple of years. The biggest breakthrough in AR is going to be a good headset. Right now, people are holding up their phones to do an experience. That's fine for mapping and heads-up display-type stuff, but it's not great for an immersive experience.

We are still really interested in location-based - people going to places for the experience. You don't need a big group, but you could go somewhere and see content, or the adventures could be staged across the city. Going somewhere makes the experience more unique for the person. For that to happen, now we're talking about low-latency streaming and edge technologies for rendering. The big word everybody always uses is '5G,' but we're talking about high-bandwidth, low-latency transmission. We think that's another big linchpin in all of this.

For the work stream we call 'home and daily lives' to work well - to be compelling and immersive - especially for augmented reality, we need to understand a person's environment and be able to change the content to match that environment. Object recognizers, mapping indoor spaces - those are things people have started doing, but no one has shown a massively compelling version of it yet. But I do think it's coming, and that is another technology we really need - to be able to look at an environment, segment it, and understand each object so we can mesh our content to it.

What are the goals for the next five years?

As I said, AR is a big thing for us. Also, within those four work streams is something we call multi-layered experiences, or singular story worlds. We are interested in creating a unified story world that has different entry points from a number of different platforms. So, we could have a Star Wars story world going on, and somebody at home in a VR headset is interacting with that story world in a certain way. Then there might be someone on the street in AR interacting with that same story world, but because they're in AR, that interaction needs to be different from the one the person at home has in VR.

Karen Moltenbrey is the chief editor of CGW.