The Next Big Step in AR from Nexus Studios
Karen Moltenbrey
April 10, 2020

Not long ago, Nexus Studios unveiled Gilda, a proprietary suite of augmented reality tools and techniques. Fueled by the advent of 5G, Gilda addresses a critical challenge of augmented reality by creating “AR-ready” locations: physical spaces that have been mapped so that multiple digital realities can be experienced on top of them. Gilda delivers believable, hyper-realistic experiences, as demonstrated by its launch with Samsung’s epic AR activation at AT&T Stadium, home of the Dallas Cowboys.

Gilda is pioneering the use of new Visual Positioning System technologies such as Scape and 6D.AI to accurately locate users within the real world. The platform’s innovation lies in scanning the environment and using photogrammetry to build an accurate digital twin of it. In-engine tools then allow Gilda to bring engaging content and experiences to life in that real-world location.
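
To picture how those pieces fit together, here is a minimal, hypothetical sketch (the query_vps function, the Anchor class, and the numbers are illustrative assumptions, not Gilda's API): a visual positioning query returns the device's pose within the pre-mapped digital twin, and content authored against that map is transformed into device coordinates for rendering.

```python
# Minimal sketch of location-based AR anchoring, assuming a VPS service that
# returns the device pose in a pre-mapped "digital twin" frame. All names and
# values here are illustrative; this is not Nexus/Gilda code.
import numpy as np

def query_vps(camera_image) -> np.ndarray:
    """Stub for a Visual Positioning System query (Scape / 6D.AI style).
    Returns T_map_device: the 4x4 pose of the device in the map frame."""
    T = np.eye(4)
    T[:3, 3] = [12.0, 0.0, -3.5]   # pretend pose, a few metres from the map origin
    return T

class Anchor:
    """A piece of content authored against the digital twin (map frame)."""
    def __init__(self, name: str, T_map_content: np.ndarray):
        self.name = name
        self.T_map_content = T_map_content

def content_pose_in_device_frame(T_map_device: np.ndarray, anchor: Anchor) -> np.ndarray:
    """T_device_content = inv(T_map_device) @ T_map_content."""
    return np.linalg.inv(T_map_device) @ anchor.T_map_content

if __name__ == "__main__":
    statue_pose = np.eye(4)
    statue_pose[:3, 3] = [15.0, 0.0, -3.5]        # authored against the map origin
    anchor = Anchor("hologram_statue", statue_pose)

    T_map_device = query_vps(camera_image=None)    # one VPS fix per localisation
    T_device_content = content_pose_in_device_frame(T_map_device, anchor)
    print(anchor.name, "sits at", T_device_content[:3, 3], "in device coordinates")
    # -> roughly [3, 0, 0]: the statue appears three metres ahead of the device
```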

“Imagine you could use any environment as a blank canvas for a compelling and interactive digital experience,” Chris O’Reilly, co-founder, Nexus Studios, said. “With Gilda, you can do just that. Gilda enables users to experience the next level of augmented reality through captivating storytelling, which includes hyper-realistic characters and environments. Whether it be utilized in gaming, sports, theme parks or museums, the potential for ‘Enhanced Location’ activations is now endless, with Gilda as the key to unlocking the future of location-based augmented reality.”

Gilda is the only platform that combines all of these elements to create “Enhanced Locations,” a term coined by Nexus Studios to describe the essence of a true AR experience.

Here, Liam Walsh, creative technology director at Nexus Studios, discusses the technology with CGW.

Gilda is Nexus’ latest AR innovation. Can you break down what it is for our audience of pro-VFX artists?

Gilda is a combination of many things that all come together to help us tell better stories in real-world spaces. It comprises a suite of processes, tools, technologies, and practices, and encompasses our domain knowledge for making an augmented-reality experience that is explicitly and intrinsically tied to a specific location. The technologies themselves are varied and include machine learning and computer vision, visual positioning systems, compression algorithms for volumetric video, lighting estimation – even figuring out the weather and sun position to create better shadows. 
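
Sun position is the most self-contained of those pieces to illustrate, since it follows directly from location and time of day. The sketch below uses a standard simplified solar-position approximation (ignoring the equation of time and atmospheric refraction); it is an illustrative stand-in rather than Gilda's actual implementation, showing the kind of value a lighting system can feed into a virtual directional light so that virtual shadows roughly agree with real ones.

```python
# Rough solar elevation/azimuth from latitude, longitude and UTC time.
# Simplified model (no equation of time, no refraction); illustrative only.
import math
from datetime import datetime, timezone

def sun_direction(lat_deg: float, lon_deg: float, when_utc: datetime):
    """Return (elevation_deg, azimuth_deg measured clockwise from north)."""
    day = when_utc.timetuple().tm_yday
    hour = when_utc.hour + when_utc.minute / 60.0

    # Solar declination: simple cosine model of the Earth's tilt over the year.
    decl = math.radians(-23.44 * math.cos(math.radians(360.0 / 365.0 * (day + 10))))
    # Hour angle: zero at local solar noon, positive in the afternoon.
    solar_time = hour + lon_deg / 15.0
    hour_angle = math.radians(15.0 * (solar_time - 12.0))

    lat = math.radians(lat_deg)
    elevation = math.asin(math.sin(lat) * math.sin(decl) +
                          math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    azimuth = math.acos((math.sin(decl) - math.sin(elevation) * math.sin(lat)) /
                        (math.cos(elevation) * math.cos(lat)))
    if hour_angle > 0:   # afternoon: the sun has moved west of due south
        azimuth = 2 * math.pi - azimuth
    return math.degrees(elevation), math.degrees(azimuth)

# Example: AT&T Stadium in Arlington, TX (~32.75 N, 97.09 W) on 2020-04-10 at 21:00 UTC.
elev, az = sun_direction(32.75, -97.09, datetime(2020, 4, 10, 21, 0, tzinfo=timezone.utc))
print(f"sun elevation {elev:.1f} deg, azimuth {az:.1f} deg")
# Those two angles are enough to orient a directional light and its shadow caster.
```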

“Enhanced Locations” is another concept Nexus has pioneered. How do “Enhanced Locations” and Gilda work together?

We can use some of the tools from Gilda to enhance a location, turning it into a three-dimensional canvas that we can tell stories upon, or we can use an enhanced location somebody else has created.

Literally billions of dollars are being spent creating this 1:1 map of the world – it's ‘On Exactitude in Science,’ made real. Once we have an understanding of location and place, we use the other tools from Gilda to better ground our content in reality and better understand the real-time environmental factors that are happening.

How do graphics and CG animation inform the creation of an AR experience?

Graphics and CG animation inform everything we do. Any AR experience is essentially an illusion, and a brittle one at that. The craft and sensibilities involved in building cohesive worlds for our traditional CG stories have helped us make experiences that are aesthetically pleasing and consistent within themselves (not necessarily consistent with reality or photorealistic, but believable and alive).

We're always bringing the artificial to life, whether in AR, VR, or traditional animation – AR definitely brings its own challenges, but our longstanding commitment to creating an illusion of life definitely informs the creation of these experiences. 

People have a fixed idea of AR in their head that is perhaps not representative of where the technology is today. What are the types of AR you guys are working on? How is it better than what we’ve seen in the past?

There have been so many different views about what AR is or will be – many come from science fiction; from Terminator to Iron Man, there's been a pervasive idea of AR as an overlay of contextual information over the top of reality. In movies, this is often portrayed as flashing text, numbers, and graphs. There probably are times when this would be helpful, but in practice this makes for an uncomfortable and noisy experience, since it's not how our brains process information, and it's not how our actual vision system works. 

There is, of course, a place for this; in productivity and enterprise systems, it makes sense to overlay data situationally and contextually. The biggest gap between where this technology (head-mounted displays) needs to be and what is technically feasible right now is the field of view and the near and far clipping planes. The narrow field of view that’s possible right now makes it feel as though you're wearing a cone on your head.
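
To put rough numbers on that (the 40-degree figure below is an assumption standing in for current see-through headsets, not a spec quoted by Walsh), a quick calculation shows how little of a life-sized virtual character a narrow display window actually covers:

```python
# Back-of-envelope: how much of a life-sized virtual character a narrow HMD
# field of view shows, versus the naked eye. All figures are assumptions.
import math

def angular_span_deg(size_m: float, distance_m: float) -> float:
    """Full angle subtended by an object of the given size at the given distance."""
    return math.degrees(2 * math.atan((size_m / 2) / distance_m))

character_height_m = 1.8
distance_m = 1.5
hmd_fov_deg = 40.0      # assumed display field of view
human_fov_deg = 120.0   # rough binocular field of view

subtended = angular_span_deg(character_height_m, distance_m)
print(f"A {character_height_m} m character at {distance_m} m spans ~{subtended:.0f} deg")
print(f"Seen through a {hmd_fov_deg:.0f} deg display: ~{min(1.0, hmd_fov_deg / subtended):.0%} of it fits")
print(f"Seen with the naked eye (~{human_fov_deg:.0f} deg): ~{min(1.0, human_fov_deg / subtended):.0%} fits")
```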

The types of AR experiences we are working on right now are an evolution of what we've always done: We're trying to use AR to make meaningful experiences. We're currently combining what people would consider traditional AR with visual positioning systems, computer vision, and machine learning. Yet even our older projects that used 'last-generation' AR technologies (image markers), like President Obama’s ‘1600’, the New Yorker's 'Innovator's Issue,' or the Gruffalo Spotter, strived to make a meaningful connection between 'reality' and the virtual content overlaid on top. This is something that's better than what people have seen previously, and it moves past the novelty of AR as just 3D content dropped onto some surface in the real world with no sense of context. We're getting better and better at understanding that context and making connections between the two. When people can experience that, it's really powerful.

Where do you see the most potential for it? Pure entertainment, advertising?

It's difficult to say. If you were to judge potential purely on where the most money is being spent, it would be as a productivity enhancer or utility. If you judged it based on where people are spending their time, it would be entertainment and self-expression. Once you can alter or augment reality, there are so many potential use cases that it's difficult to pinpoint which has the most potential.

I believe that advertising will have to be entertaining to survive the next few years, regardless of platform, but AR will definitely be an important vector in the next generation of advertising experiences. AR has so much potential to make the world around us appear more magical and interesting, without detracting from or replacing the actual wonder of reality itself (which VR does). Shared experiences will be hugely important and will come about naturally, as AR lends itself to that more easily than VR does.

Is 5G necessary for these next-level AR experiences, or are those without 5G out of luck, so to speak?

5G isn't necessary, but it definitely doesn't hurt. It already allows us to do things we couldn't do before and will become more and more of a requirement as time goes on. 

It's easy to dismiss 5G as an incremental change rather than something revolutionary, just as it was with 3G and 4G; it's only with hindsight that we can look back and see the transformations they afforded. You could browse your social media feed via 2G, but scrolling through Instagram would not be the same experience without those images and videos loading quickly. It's still too early in the life cycle of AR + 5G to really know what the new experiences will actually be. Mobile edge computing, remote processing, and volumetric video are all now feasible while streaming content, and that means we'll be able to create entirely new types of content.
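
As a rough illustration of why streamed volumetric video leans on 5G-class bandwidth, here is a back-of-envelope calculation; every figure in it is an assumed, illustrative value rather than a measurement of any particular capture rig, codec, or network.

```python
# Back-of-envelope bandwidth for streaming a volumetric-video performer.
# All figures below are illustrative assumptions, not measured values.
points_per_frame = 300_000      # assumed point-cloud density for one captured performer
bytes_per_point = 15            # xyz as three float32s plus rgb, roughly
frames_per_second = 30
compression_ratio = 10          # assumed for a modern geometry/volumetric codec

raw_mbps = points_per_frame * bytes_per_point * frames_per_second * 8 / 1e6
compressed_mbps = raw_mbps / compression_ratio

print(f"raw stream:        ~{raw_mbps:,.0f} Mbit/s")
print(f"compressed stream: ~{compressed_mbps:,.0f} Mbit/s")
# Roughly 1,080 Mbit/s raw and ~108 Mbit/s compressed: awkward on a typical 4G
# connection, comfortable within 5G headroom, before any edge-rendered content.
```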

What do you see as AR’s greatest strength, weakness, opportunity, and threat going forward?

AR's greatest strength for me as a storytelling medium is that it affords an incredibly strong sense of presence. Stories exist in the user's real world, contextually adapted to that specific person in that specific location, and that is an incredible weapon in any storyteller’s arsenal.

For us creators of digital art, it introduces the concept of scarcity into our work for the first time ever. Previously, any digital media could be infinitely distributed, but even real-time stories could only ever be truly bespoke when they had a high level of interactivity (which can actually be a barrier to storytelling if overused or misused). The specificity and physicality of AR lets digital art be a truly ‘one-off’ experience in a destination, and there's a real sense of perceived value that comes along with that.

AR's greatest weakness right now is the form factor: whether via the phone or the current head-mounted displays, the field of view is limited, it's not fully immersive, and it's not comfortable for long periods of time. I think the kinaesthetic weaknesses that come with the current technology are far greater than the aesthetic ones.

The greatest threat to AR is sadly the same threat we'll face with any new media platform going forward; there are real ethical and privacy concerns that will come with people using their camera as a primary interface. The user's location, who they are, who they're with, and the objects surrounding them are all valid data inputs for a tailored AR experience. However, misusing this data could quickly erode trust and turn people off the whole concept of turning their phone camera on before we've even established the vocabularies and grammars of this new medium. 

What AR project of yours best represents how computer graphics are being deployed for good AR storytelling?

We have some great AR projects coming down the pipeline that are still in the works. At Nexus, we always strive to make each project better than the last. There's a location-based project launching soon where we're literally bringing the past to life in a spectacular way, and it's a great example of AR for storytelling. There's something about bringing historical and mythological stories to life in the place where they actually happened that is really powerful. Stay tuned! 

Last year, we brought together a plethora of computer graphics wizardry and storytelling skills to bring the story of the Big Bang to life in AR, in partnership with CERN and Google Arts and Culture; we even had Tilda Swinton narrating. We had to tell a complicated story accurately and seamlessly, one that starts from holding nothing in the palm of your hand, to holding millions of particles, to then walking around a plausible approximation of the known universe. The computer graphics and computer vision skills deployed in the service of that story really stretched us as a studio, and we're very proud of the final product.