Creating lifelike CG humans is an extremely difficult endeavor. Over the years, many have tried, carrying the ball forward but failing to reach the end goal of crossing what has become known as the Uncanny Valley.
In 2001, Square Pictures spent countless hours and dollars creating a cast of digital humans for the movie Final Fantasy: The Spirits Within. Its work was state of the art, but many criticized the results, calling the CG actors “creepy.” In other words, close but not close enough. Let’s also not forget the digital steps forward by Robert Zemeckis, with his pioneering advancements in performance capture, used on films such as The Polar Express (2004), Beowulf (2007), and A Christmas Carol (2009).
Digital Domain took some giant leaps forward by re-creating an aged Brad Pitt in The Curious Case of Benjamin Button. In fact, there have been many attempts before and after these projects. And more recently, studios including Digital Domain have been able to break new ground by generating holograms, resurrecting deceased actors and musicians, including Tupac Shakur and Michael Jackson.
These are just a few examples of the extraordinary work being done with digital humans in entertainment alone.
In an attempt to cross the Uncanny Valley, a number of industry leaders have formed the Digital Human League (DHL) to study and share their knowledge of digital humans. The goal of the endeavor – called the Wikihuman Project – is to open source the findings so that the community can share and learn from the information.
Here, Chaos Group’s Christopher Nichols, who started DHL, discusses this collaborative project with CGW Chief Editor Karen Moltenbrey.
When did the Digital Human League form?
Officially, in August 2014.
How did the idea originate?
The creation of believable digital humans is something artists have been paying attention to for a long time. But, traditionally, the challenges and problems in this space have only been tackled by individuals (people or companies), with little knowledge shared across the board. We saw an opportunity to form a group of people who shared this interest and were pursuing the same goal, but from different disciplines and perspectives.
What is the group trying to achieve?
The Digital Human League was formed with both an artistic and scientific motivation in mind. Together, as a group, DHL will embark on a large-scale project called Wikihuman. The goal of the project is to study, understand, challenge, and, most importantly, share knowledge of digital humans.
Are the members artists or do they represent developers?
Several members of DHL are dedicated to the science of acquiring complex and detailed data of humans for use in CG. The league also includes a number of high-level artists who have tackled digital humans and know some of the pitfalls involved. We have software developers who continue to create the tools that have driven and inspired artists to advance CG characters. Finally, some of our members are dedicated to researching the challenges of digital humans and how people act and react to them.
The Wikihuman project will provide a central location where DHL will share data, as well as the process by which that data can be used, so there is a benchmark for understanding the balance between the art and science of representing believable, computer-generated humans.
What would you personally like to see achieved?
By studying digital humans and how they succeed or fail, we will gain a better understanding of our own feelings as to what works and what doesn’t. While there is an interest in overcoming the Uncanny Valley, I also have an interest in why the Uncanny Valley exists at all. The more we understand about what makes a digital human work, the better understanding we’ll have of ourselves.
Who founded the group?
There were 14 of us to start. It was a wide-ranging group that included independent artists, scientists from ICT, university researchers from the UK and Australia, visual effects professionals, and software developers.
The original 14 are: me, Mathieu Aerni, Nick Gaul, Mike Seymour, Paul Debevec, Steve Preeg, Graham Fyffe, Dan Roarty, Jay Busch, Vlado Koylazov, Rusko Ruskov, Lukáš Hajka, Angela Tinwell, and Stephen Parker.
Have others since joined DHL?
Yes, six others have joined since we made our initial announcement: Danny Young and Michael McCarthy, both working on hair; Oleg Alexander and Koki Nagano, both from ICT; Jason Huang, who is working with the ICT team and is also from Tesla; and Luc Begins, who works on digital humans.
What are the criteria for joining?
We always want to know what new members are bringing to the table, whether that’s experience, insight, past projects, time, or something else. We want to know their interests in this project so we can help them make advances in their own work. Anyone who is interested in joining can contact us at email@example.com. Meanwhile, all our work and data will be available on Wikihuman.org. We welcome feedback.
When we say “digital humans,” what are we referring to?
We are focusing on faces at the moment, but we may continue to explore other parts in the future.
Do you do research collaboratively or individually?
We are very collaborative. Updates, ideas, and materials are generally shared in an online repository. We also keep a discussion board going that everyone adds to.
What are you doing to achieve your mission?
I’ll give you one example. Over the years, USC’s Institute for Creative Technologies (ICT) has used its patented Light Stage equipment (from LightStage, LLC) to conduct high-resolution scans of an actress called Emily. The team’s first face scan resulted in Digital Emily seven years ago.
Through new advancements and team input, the data for Digital Emily 2 has just been released. This is our first major milestone in the Wikihuman project.
The latest round of data is remarkably more detailed than what they were able to produce before, giving artists even more information to grapple with when it comes to subsurface scattering, single scatter maps, microgeometry displacement, and so on for a face. Right now, it can be viewed using [Autodesk’s] Maya 2015 and [Chaos Group’s] V-Ray 3.0 for Maya. We also provide an Alembic file and an OSL shader.
Since this research is ongoing, we’ll release our findings in stages, so the community has plenty of time to interact, comment, and contribute. As the data develops, so do our action plans, which helps this whole process build on what came before.
What are some of the technical challenges you face?
Getting a good, solid dataset and defining what is correct and what isn’t are both very important. The problem with the Uncanny Valley is that we generally know when something is wrong but don’t know what that something is. If we can scientifically determine which parts of the equation are right, we can reduce the number of variables we have to change.
Do you have a timeline in mind for achieving various goals?
We set goals, but we have an understanding that this will always be a work in progress.
What are some of the big technical achievements that have gotten us to this point in time in regard to digital humans?
The history of digital humans is really about making the most of what you have. A new technology will emerge, and artists will push it to its limits. Improvements are made until a new tool is released, and then we start seeing even more advancements.
For instance, Final Fantasy: The Spirits Within was possible because a certain type of modeling became available to artists at the time that let them attempt something others couldn’t do before. Today, scanning technology at places like ICT is allowing us to add more detail than ever before to digital models, specifically down to 10 microns.
Advances in motion-capture systems, facial animation, subtle animation such as sticky lips, blood flow design, and the dynamics of hair have also played a big role in helping artists advance their craft. There are a lot of artists who have now been studying and applying knowledge around digital humans for a long time, too. Digital humans created from stills today, for instance, are miles ahead of where they were just a few years ago.
What have been some of the sticking points to date?
If a trained visual effects artist sees a bad VFX shot, he or she will immediately know what’s wrong with it and why it’s bad. The general population can see the same shot, and most times they’ll think it looks fine. For digital humans, however, everyone knows that something is wrong, but no one knows what it is. That’s the challenge.
What is currently missing in order for us to cross the Uncanny Valley?
The truth is, no one really knows, and that’s part of what we are trying to find out.
The Uncanny Valley has become such a powerful way to describe the negative reaction people feel when something is “almost human,” but not human enough. When I first told people I wanted to form a group to tackle the issues around digital humans, many reacted with a sense of repulsion. That is how strong the emotional response is, and why people would rather avoid the subject completely than face the challenges.
One of the main issues with creating a digital human is that, because of the strong emotional response based on the Uncanny Valley effect, people often act and react emotionally instead of analytically. Those actions and reactions are often extreme, which makes it very hard to hit the mark.
Which areas (face, lips, eyes) present the biggest hurdles?
We honestly don’t know yet. If I look at a digital human shot that wasn’t successful, I will often see something different in terms of what’s wrong than another artist would. Part of this process will be determining how important individual features are and what makes them important. Is it more detail that’s needed? Where? Why? These questions lie at the core of Wikihuman and what we want to achieve.
What is your vision for digital humans in the future?
I’m a lot more interested in learning what makes someone look human and how our brains compute that than in bringing back dead actors. Mike Seymour and Mark Sagar are also doing fascinating work in which they are using digital humans to build better interfaces for human-to-computer interactions. The way this subject advances artificial intelligence is something I gravitate toward, as well.
Has the surge in holographic characters impacted the interest in digital human technology?
I think people are more interested in the idea of bringing a dead person back to life than the technology itself.
It should also be noted that DHL is actively discussing the ethics around digital humans. Is it right to bring a person back to life or to impersonate a person? The latter presents a particularly thorny problem in that, if technology advances to a certain point, you could make someone say or appear to do something they never did. As we advance the art, it’s going to be important to think about these things and act responsibly.