Nvidia's Top Model
Volume 30, Issue 10 (October 2007)


A technology test results in a near-photoreal replica of model/actress Adrianne Curry
By Karen Moltenbrey
 
Nvidia has made a habit of turning heads at industry trade shows with its short animations. First, it did so with the fetching fairy Dawn, a semi-realistic real-time character created by the graphics card vendor to highlight the use of programmable shaders and vertex processing on its GeForce FX line. Four years ago, Dawn began sharing the spotlight with her pixie pal Dusk, also created by the Nvidia development team to draw attention to the same programmable features used for Dawn (customized skin, hair, and wing shaders), this time with the addition of real-time shadow effects on the GeForce FX 5900 Ultra. Following on the wings of Dawn and Dusk were other characters, including Nalu the mermaid from the ocean and Luna from the sci-fi realm.

In a bold move this year, Nvidia’s internal development team decided to up the ante in terms of real-time realism, abandoning its fantasy characters for an actual person—actress/model Adrianne Curry (America’s Next Top Model). “The goal was to create a hyper-realistic model based on a real person. In this case, ‘hyper-real’ meant photoreal, with some artistic license,” says Mark Daly, senior director for content development at Nvidia.

Like her Nvidia CG predecessors, the CG Adrianne Curry was created using standard tools: Autodesk's Maya, Pixologic's ZBrush, and Adobe's Photoshop. The 3D model is also rendered in real time, in this case to illustrate the programmable shaders in Nvidia's GeForce 8 series of graphics cards.
 
Virtual Modeling
How did Adrianne get this unique modeling assignment? Curry, a favorite of the current generation, runs one of the most popular blogs and online promotions in the industry. Her husband, Christopher Knight of The Brady Bunch fame, holds an interest in computers and CG and is a longtime friend of a marketing exec at Nvidia. When approached for this “job,” the affable Curry agreed.
Curry’s demanding schedule, however, forced the Nvidia development group to approach the project slightly differently than originally planned. Because of their limited access to Curry, the artists had to embark on the modeling and texturing work using photographs and other media sources as their main references. “For a few months, that’s all we had to work with,” Daly notes. Eventually, motion capture (face and body) was done by House of Moves (HOM); Gentle Giant assisted with the head scans.

Prior to the scans, the artists used Maya to create the geometry by hand for Curry’s head and body, then tweaked the model based on the scan data, which served as another reference source. As Daly points out, the scan provided the shape of the head, but not the topology information.
 


“When you build a model, you need to check the orientation of the triangles. There are nine envelope lines that surround the eyes and mouth, so when you do facial expressions and you have to deform the mesh, the lines deform around the dimples and smile lines,” Daly explains. “The topology lines need to match up with the smile lines around the eyes.” This must be done manually, and “is no trivial thing to do,” he adds.

Later, the group textured the model, which took a good deal of time. In fact, a large portion of the engineering effort went toward the creation of a new real-time skin shader. “A light-skinned female is hard to do; it tends to look plastic,” says Daly. “Often people remark on the realism of the scruffy-looking males with scars and bling on their faces that appear in video games. You can see the pock marks, stubble, and imperfections in the face, but they tend to have a darker complexion [to help with the shading issues].”

In the end, Curry’s skin shader alone contained 1500 instructions for each pixel of the model’s face. It takes into account the way light passes through skin: for her fair complexion, the light penetrates the outer layer of skin, passes through to the inner layers, bounces around, and exits back out, giving the skin a slight translucency.

“When the light is behind her ear, you can see the light pass through,” says Daly, “an effect achieved with subsurface scattering.” Approximately 10 passes were required to achieve the right look.

The more the group played with the lights, the more it discovered that it is not enough to simply take white light and let it pass through the skin. “You need different parameters, masks for the red, green, blue channels of light as they pass through the skin because each of them has different subsurface scattering,” Daly adds. “Subsurface scattering was the key technique for the skin shaders and the hair.”
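
To make Daly’s point concrete, here is a minimal sketch of one common approximation: wrap lighting whose falloff differs per color channel, plus a thickness-based transmittance term. It is illustrative only, not Nvidia’s shader (which ran roughly 1500 instructions per pixel over multiple passes); the SCATTER_DIST values and the skin_diffuse helper are invented for the example.

```python
import numpy as np

# Hypothetical per-channel scattering distances: red light travels
# farthest through skin, blue the least. Values are illustrative only.
SCATTER_DIST = np.array([0.8, 0.35, 0.2])  # R, G, B (arbitrary units)

def skin_diffuse(n_dot_l, thickness):
    """Toy per-channel skin response: wrap lighting plus transmittance.

    n_dot_l: dot product of the surface normal and light direction
    thickness: approximate distance the light travels through the skin
    Returns an (R, G, B) array of relative intensities.
    """
    # Wrap the diffuse falloff per channel so light "bleeds" farther
    # past the shadow terminator for red than for green or blue.
    diffuse = np.clip((n_dot_l + SCATTER_DIST) / (1.0 + SCATTER_DIST), 0.0, 1.0)
    # Exponential per-channel attenuation approximates light that
    # enters the skin, scatters inside, and exits on the far side.
    transmit = np.exp(-thickness / SCATTER_DIST)
    return diffuse + transmit

# A thin, backlit region such as an ear: the red channel dominates,
# which is the warm glow Daly describes.
print(skin_diffuse(-0.3, 0.4))
```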

The group employed a related, though distinct, subsurface scattering technique for the hair. Human hair is somewhat transparent and rough rather than a smooth tube, and that is what gives it highlights. To get a nice specular look coming off the shafts and angles of the hair, the team modeled the strands as jagged-edged tubes so they would refract and reflect light. The overall hairstyle was sculpted in Maya, and though most of the hair is pulled tightly into a bun, conveying its volume still required individual hair strands.
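
Daly does not name the exact highlight model, but tangent-based strand specular terms in the Kajiya-Kay family are a standard way to get that glint off strands modeled as thin tubes. The sketch below, with invented names and constants, shows the core idea: the highlight peaks when the light-view half vector is perpendicular to the strand’s tangent.

```python
import numpy as np

def strand_specular(tangent, light_dir, view_dir, exponent=80.0):
    """Tangent-based strand highlight in the Kajiya-Kay family.

    Returns a scalar specular intensity; higher exponents give a
    tighter, shinier band of light across the hair.
    """
    t, l, v = (x / np.linalg.norm(x) for x in (tangent, light_dir, view_dir))
    h = l + v                      # half vector between light and view
    h /= np.linalg.norm(h)
    dot_th = float(np.dot(t, h))
    # The highlight is strongest when the half vector is perpendicular
    # to the strand, i.e., when sin(T, H) approaches 1.
    sin_th = np.sqrt(max(0.0, 1.0 - dot_th * dot_th))
    return sin_th ** exponent

# Strand running along x, lit and viewed from roughly the same side.
print(strand_specular(np.array([1.0, 0.0, 0.0]),
                      np.array([0.0, 1.0, 0.0]),
                      np.array([0.0, 1.0, 0.2])))
```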

In all, the Nvidia development group generated 7000 hairs from 600 guide hairs defined in Maya. For the flyaway hairs, the team applied real-time physics, so when the virtual Adrianne moves and spins, the strands near her bangs move dynamically.
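
As a rough sketch of how render hairs are typically derived from guide hairs (the function and jitter scheme below are invented for illustration, not Nvidia’s pipeline), each final strand can be a weighted blend of nearby guide curves plus a small random offset:

```python
import numpy as np

def interpolate_strand(guides, weights, jitter=0.01, rng=None):
    """Build one render hair by blending nearby guide curves.

    guides: (k, n, 3) array of k guide curves with n points each
    weights: (k,) non-negative weights that sum to 1
    """
    rng = rng or np.random.default_rng()
    # Weighted blend of the guide curves -> one (n, 3) strand.
    strand = np.tensordot(weights, guides, axes=1)
    # A small random offset keeps interpolated hairs from stacking
    # exactly on top of one another.
    return strand + rng.normal(scale=jitter, size=strand.shape)

# Three guide curves of five points each, blended 60/30/10; repeated
# thousands of times with varying weights, this fills out a hairstyle.
guides = np.random.rand(3, 5, 3)
hair = interpolate_strand(guides, np.array([0.6, 0.3, 0.1]))
```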

All the assets were exported through a custom Maya exporter to Nvidia’s real-time engine for rendering.

“Re-creating the human face is very difficult, not just because it is hard to get right, but because everyone is an expert on what it needs to look like. You see it all your life and can pick up every nuance that’s not correct,” explains Daly. “So, we had to get everything right: the skin tones with the lighting, the shape of her face, the deforming of her cheeks when she smiles.”
 
The Way She Moves
Complicating the rendering process is the fact that the CG Curry is not a static model. The animation is skeletal-driven, with mocap data at the core. In order to acquire the motion-capture information, Curry had to wear a spandex mocap suit with markers—quite a contrast to the couture she usually wears in photo shoots. “I am used to camera shoots with just one camera. This was very different, with cameras all around me,” says Curry. “It was a long, grueling day; they kept asking me to do this and do that. But it was fun, and the end result looks cool.”

During the session, Curry performed a variety of movements—sitting on a box, speaking lines, performing provocative facial expressions. She even did her catwalk stride and poses. Unfortunately, though, the Vicon setup was restricted to a 10x10-foot area, which didn’t allow for much of a runway gait. “The catwalk was an afterthought, and we liked it so much that we used it,” says Daly.

The group encountered the usual issues that pop up in skeletal-driven animation, with vertex weighting around the shoulders, elbows, and so forth. To stop the joints from pinching in the skinned animation, the group did a fair amount of work with corrective blendshapes, employing sculpt deformers to define what the folded elbow should look like and using those shapes as targets for the vertices.

“It wasn’t simple vertex weighting, though,” says Daly. “For the shoulders, we used a sculpt deformer and kept the volume in the shoulders so they wouldn’t collapse as she bent her arms.”
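
A minimal sketch of the pose-driven corrective idea, assuming a single sculpted fix that fades in with the bend angle (the names and the linear ramp are invented; the team’s actual setup used Maya sculpt deformers and blendshape targets):

```python
import numpy as np

def apply_corrective(skinned_verts, corrective_delta, bend_angle,
                     full_at=np.pi / 2):
    """Fade a sculpted corrective shape in as a joint bends.

    skinned_verts: (n, 3) vertices after ordinary smooth skinning
    corrective_delta: (n, 3) sculpted offsets at the fully bent pose
    bend_angle: current joint bend, in radians
    """
    # The weight ramps from 0 at rest to 1 at the pose where the fix
    # was sculpted, so the correction appears only as the pinch does.
    w = np.clip(bend_angle / full_at, 0.0, 1.0)
    return skinned_verts + w * corrective_delta
```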

The facial animation was a combination of mocap and keyframing. During the mocap session, 25 markers were placed on Curry’s face, with only two or three placed around her mouth. In the end, this proved to be too few. “Her entire personality is her mouth,” says Daly, noting that more markers would have captured this type of subtlety.

Later, the team exported the head and body models into Softimage’s XSI for animation. Initially, the group looked to Softimage’s Face Robot as the final animation driver for the face movement, but changed course due to the lack of fidelity in the data. 

In the end, the development group used Face Robot for blendshape targets—in essence, moving the face into a particular position, saving that position, and pulling the information back into Maya, where a blendshape was defined based on that position. Here, the team did some manual tweaking around the mouth region, creating a blendshape definition and interpolating between the blendshapes.
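
Conceptually, that last step is a linear blendshape mix, as in the sketch below (names invented; not the team’s actual Maya network). Each saved Face Robot pose becomes a target, and the animation interpolates the weights between them.

```python
import numpy as np

def blend_face(neutral, targets, weights):
    """Linear blendshape mix over a shared vertex layout.

    neutral: (n, 3) neutral-pose vertices
    targets: dict mapping shape name to (n, 3) target-pose vertices
    weights: dict mapping shape name to a weight, typically in [0, 1]
    """
    face = neutral.copy()
    for name, w in weights.items():
        # Each active shape contributes its offset from neutral,
        # scaled by its weight; shapes combine additively.
        face += w * (targets[name] - neutral)
    return face

# e.g., 60 percent smile blended with 30 percent of a sculpted
# mouth-corner fix: blend_face(neutral, targets,
#                              {"smile": 0.6, "mouth_fix": 0.3})
```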

Integrating Curry’s dimples presented yet another challenge. According to Daly, her smile extends ear to ear, and her dimples appear in sets: three on one side and two on the other. “That required a lot of work along the topology of her face,” says Daly, but more was needed. “Unfortunately, when our model smiles, she gets a nice crease around her mouth but not the three sets of dimples she has in real life.”

“When re-creating a real person, you can get the geometry perfectly correct, but if you do not capture the personality and the facial expressions, you fail,” Daly says. “We captured a lot of Adrianne’s personality through her facial expressions, but we could have done better. I give us a B+. We didn’t nail it. There are some nuances that you can’t always pinpoint but you know something is not quite right.”
 


The final demo, featuring a swimsuit-clad CG Curry, is 45 seconds long. Several Nvidia employees were dedicated to the project during an eight-month period, with two people working on it at any given time. Although Nvidia originally planned for a talking model, the lip sync was scrapped due to time constraints.

“The biggest challenge was the real-time factor. That is how we push the boundary at Nvidia in terms of re-creating a real person who is recognizable to so many,” Daly says. “[Our model] does have a little CG look to it and is not exactly photoreal, but we are pleased with the outcome, and it lays the groundwork for the next person.”
 
CG Me
What does the real Adrianne Curry think of her digital self? “I am very pleased with the results. My expectations were of a Grand Theft Auto-like character,” she says. “Sometimes, though, I get creeped out, especially when she looks at me. It’s Twilight Zone-ish.”

Curry agrees that no one will be mistaking the CG model for live video. Nevertheless, she is impressed with the level of detail that’s present—like the shoulder blades moving under the skin in the animation.

“The whole gamer generation is my kind of people; I am an ex-gamer. When this was first introduced, I made the comment that now the geeky gamers can have a crush on a virtual person who is real and not some Final Fantasy chick that never existed.”

Joking aside, Curry is quick to recognize the technological milestones involved in this project. “It was great to be part of something so important and technically innovative,” she says.

While the CG Adrianne is considered groundbreaking today, Daly knows that she, too, will age, just like the virtual Nvidia models that came before her. “Dawn and Dusk were milestones. Now we look back and think, well, it was good,” he says. “I am sure that in a few years we will look back on this and think the same thing.”
 

Karen Moltenbrey is the chief editor of Computer Graphics World.