TED Talk: Digital Domain Delivers Realtime Digital Human
May 24, 2019

FARNHAM, UK — Digital human. Virtual character. Virtual assistant. VTuber. These terms are fast reshaping how we think about interaction in the digital age. Communicating with digital replicas of ourselves, with loved ones from afar, or with virtual assistants will soon become mainstream in our personal and professional lives. However, a key requirement for achieving hyper-realism is capturing the infinite intricacies of facial and body movement – the non-verbal cues that can make or break believability and engagement between us and ‘them’.
At this year’s TED conference, Digital Domain’s Head of Software R&D, Doug Roble, who leads the company’s internal Digital Human Group, took on the challenge of developing digital humans that move, interact and respond as humanly as possible – not only physically, but emotionally. In other words, exactly like us. To achieve this mammoth task, the team brought onboard our real-time production solution, IKINEMA LiveAction, to enable accurate transferral of Roble’s natural body movements onto his digital self, DigiDoug, and the character Elbor, with the results demonstrated live in front of a TED audience.

Roble and the Digital Human Group framed their goal as a question: “Do you suppose we could create a photo-realistic human, like we’re doing for film, but where you’re seeing the actual emotions and the details of the person who’s controlling the digital human in real time? In fact, that’s our goal: if you were having a conversation with DigiDoug, one-on-one, is it real enough so that you could tell whether or not I was lying to you?”

Using LiveAction, the team achieved natural body behaviour on their characters thanks to the advanced whole-body solver at its core – essential technology for delivering production-quality results in real time. During the demonstration, LiveAction accurately retargeted Roble’s captured body movements directly onto his digital avatars, syncing his performance with his computer-generated counterparts so they moved in unison in real time. On the fly, LiveAction automatically corrected foot penetration and sliding so that DigiDoug and Elbor remained ‘anchored’ to the stage floor rather than passing through it, and eliminated limb occlusions and jitter in the data stream to deliver a seamless performance. Roble was kitted out in an Xsens motion capture suit and Manus VR gloves for body and finger tracking respectively, with DI4D capturing facial expressions, whilst the Vision and Graphics Lab used machine learning to drive the facial performance, all rendered inside Unreal Engine. Roble successfully demonstrated high-fidelity results whilst exploring future uses of ‘digital humans that look just like us’: live events and concerts, digital celebrities for real-time movies, and communication in VR, among others.
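
For a sense of how the foot anchoring described above can work in principle, here is a minimal sketch in Python. It does not represent IKINEMA’s actual API or solver – every name, threshold and the simple filtering below are hypothetical assumptions for illustration – but it shows the general pattern: retarget captured joint rotations onto a mapped skeleton while smoothing out stream jitter, clamp any foot penetration below the floor, and latch a planted foot in place so it cannot slide.

```python
# Illustrative sketch only: LiveAction's real interfaces are not public here,
# so every name and threshold below is a hypothetical assumption.
import numpy as np

FLOOR_Y = 0.0        # assumed flat stage-floor height, metres
PLANT_SPEED = 0.05   # below this foot speed (m/s) the foot counts as planted
SMOOTH = 0.6         # blend factor for per-frame jitter smoothing

def nlerp(q_prev, q_new, t):
    """Normalised quaternion lerp: cheap smoothing for small frame deltas."""
    if np.dot(q_prev, q_new) < 0.0:           # take the shorter arc
        q_new = -q_new
    q = (1.0 - t) * q_prev + t * q_new
    return q / np.linalg.norm(q)

def retarget_frame(captured, bone_map, prev_pose):
    """Copy each captured joint rotation onto its mapped target bone,
    blending against the previous frame to damp data-stream jitter."""
    out = {}
    for src_bone, dst_bone in bone_map.items():
        q = np.asarray(captured[src_bone], dtype=float)
        out[dst_bone] = nlerp(prev_pose.get(dst_bone, q), q, SMOOTH)
    return out

def anchor_foot(pos, vel, lock_state):
    """Keep a foot on the stage: clamp penetration below the floor and
    freeze its horizontal position while planted, so it cannot slide."""
    x, y, z = pos
    y = max(y, FLOOR_Y)                       # fix floor penetration
    if np.linalg.norm(vel) < PLANT_SPEED:     # foot is planted
        if lock_state["locked_at"] is None:
            lock_state["locked_at"] = (x, z)  # latch the plant position
        x, z = lock_state["locked_at"]        # hold it: no sliding
    else:
        lock_state["locked_at"] = None        # foot lifted, release the lock
    return np.array([x, y, z])

# Example per-frame call for one foot (lock_state persists across frames):
state = {"locked_at": None}
corrected = anchor_foot(np.array([0.10, -0.02, 0.30]),   # slightly below floor
                        np.array([0.00, 0.00, 0.01]),    # nearly stationary
                        state)
# -> [0.10, 0.0, 0.30]: pushed back above the floor and latched in place
```

In a full pipeline, corrected foot targets like these would then feed a whole-body IK solve so the rest of the skeleton adjusts naturally – the role the article attributes to LiveAction’s solver.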

Although creating the ‘higher’ digital human poses its own complexities, IKINEMA’s vision is one of simplicity. “Fictional characters have always played a role in our everyday lives – take for example Mickey Mouse, Spiderman, Darth Vader, even figures from Greek mythology or recognised masterpieces such as the Mona Lisa. However, through the aid of rapid technological advancement, we find society is now at an intersection where virtual personas are interactive and, soon, intelligent and AI-driven, co-existing with and accompanying our daily real-world activities,” said Alexandre Pechev, IKINEMA CEO. “We are increasingly getting used to communicating with voice-only digital assistants like Siri, Cortana and Alexa, and soon those will have a body and, importantly, body language to fit the persona. At IKINEMA we constantly push the boundaries of character animation, aiming for the highest detail of believability and realism. By combining our core technologies with our next-generation developments in machine learning, the eventual reality is to have every form of virtual persona co-existing eternally among us.”

Deeply embedded in the development of the digital characters we see today, IKINEMA’s cutting-edge technology has been adopted globally by teams creating projects for VFX, games, live broadcast, virtual YouTubing, VR training and simulation, high-end immersive VR experiences and theatre, advertising, virtual pop stars and live shows. The flexibility of IKINEMA’s animation software means any style of 3D character, whether human, fantasy or creature, can move and traverse believably, and with full immersion, within its environment and virtual world. IKINEMA continues to bring characters to life for high-end real-time productions: Mr. Peanut delivering instant social content during the Super Bowl; blockbusters such as Blade Runner 2049 and Doctor Strange II; interactive simulations for Lockheed Martin’s CHIL VR lab; game characters in Kingdom Hearts III and the location-based VR experience Star Wars: Secrets of the Empire; real-time production sets for The Future Group and Stiller Studios; and AR World Cup heroes interacting on set, televised to the Brazilian nation by TV Globo during the 2018 FIFA World Cup.

As these pioneers collectively forge ahead to evolve next-generation digital humans, we sit at the bleeding edge of believability – and the point at which the human brain can no longer tell fake from real may arrive sooner than anyone ever imagined.