Messiah:animate 3.0 from pmG Worldwide has been redesigned as a standalone animation tool, replacing its former incarnation as a LightWave plug-in. In addition to LightWave, it now runs in conjunction with Discreet's 3ds max, Alias|Wavefront's Maya, and Maxon Computer's Cinema 4D.
"No one does everything the best. We're willing to admit that," says co-president Lyle Milton, explaining why pmG decided to make messiah:animate work with many different animation packages. Since artists and animators have favorite programs, and favorite features within programs, he says, pmG designed messiah:animate so that it could be used both as a standalone and as a sort of conduit between applications.
Other new features and enhancements include a 20 to 200 percent speed increase in most aspects of the program, a new nonlinear animation editor called messiah:compose, faster bone deformation and cloth dynamics, animatable effects parameters, and interactive creation of muscles. Also included is messiah:script, pmG's new embedded scripting language, which is largely based on C.
Messiah:animate runs on Windows 95/98/ME/NT/2000/XP systems. A Mac OS X version is in development. The introductory price is $595. —Jenny Donelan
pmG Worldwide; www.projectmessiah.com
Version 4.1 of e-on software's 3D natural scenery program, Vue d'Esprit, adds a number of import/export features that make the product more compatible with NewTek's LightWave, Maxon Computer's Cinema 4D, Curious Labs' Poser, and Caligari's trueSpace. Other improvements include better import of low-resolution pictures into the terrain editor, enhanced rendering of skies, and better resolution when rendering enlarged materials.
E-on has also released Mover 4, the latest version of the animation, effects, and rendering add-on package for Vue d'Esprit. Version 4 comes with support for Poser 4 animations, so that Poser characters can be easily imported into Vue d'Esprit scenes. A set of Poser models is included as well. Other new animation effects include vibration, spin, and twinkle. Network rendering has been added too.
Ozone 2.0, released at the same time as Vue d'Esprit 4.1 and Mover 4, is e-on's atmospheric effects plug-in for LightWave. Version 2.0 features faster rendering, new algorithms that model the behavior of the earth's atmosphere, better cloud and sunset animations, and a set of more than 100 predefined atmospheres, including bright daytime, bad weather, and sunset. Vue d'Esprit costs $199; Mover 4 is $99; and Ozone, $249. —JD
e-on software; www.e-onsoftware.com
Version 5 of Poser, Curious Labs' popular character animation tool, comes with features including dynamic cloth, strand-based hair, and photo-based facial mapping. Additional new features made possible by Curious Labs' FireFly renderer, a hybrid micro-polygon subdivision-surface and raytracing engine, include rendertime smoothing of facets, displacement mapping, 3D motion blur, depth of field, procedural textures, and luminosity, refraction, and reflection.
Although Poser has long been popular with hobbyists, the product has also found a place in artists' and animators' studios because of its numerous character tools. Rounding out those tools are several other new features, including the company's FacePutty for interactive sculpting, 3D human figures with fully articulated hands and feet, collision detection, and a MorphPutty tool.
Poser 5 is available now for Windows-based operating systems. A Macintosh version will be announced later this year. The cost of the program is $349. —JD
Curious Labs; www.curiouslabs.com
The VectorStyle plug-in from Eovia Corp. offers 3D vector rendering for Carrara Studio 2, Eovia's modeling and rendering program. The ability to convert Carrara scenes to scalable, vector-based animations should provide users with a streamlined and seamless way to publish their content to the Web.
VectorStyle uses Electric Rain's proprietary RaVix II vector technology to render 3D scenes while retaining the original values for color, lighting, camera views, and animations. VectorStyle comes with a variety of vector output options, including three levels of outlines, five levels of cartoon shading, two levels of gradient shading, and shadows and specular highlights.
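Multi-level cartoon shading of the kind VectorStyle offers is typically achieved by quantizing the diffuse lighting term into a small number of flat bands. The sketch below illustrates that general idea only; it is not RaVix II's actual method, and the function name and banding formula are assumptions:

```python
import numpy as np

def cel_shade(normal, light_dir, levels=5):
    """Quantize the Lambertian diffuse intensity into `levels` flat bands,
    the basic idea behind multi-level cartoon shading."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    l = np.asarray(light_dir, float)
    l /= np.linalg.norm(l)
    diffuse = max(np.dot(n, l), 0.0)                  # standard Lambert term
    band = np.floor(diffuse * levels) / (levels - 1)  # snap to nearest band
    return min(band, 1.0)
```

With `levels=5`, every surface point renders at one of five flat intensities, which is what gives vector cartoon output its characteristic posterized look.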
VectorStyle is available for Windows and Mac OS 8.1, 9.1, and X platforms. The plug-in costs $129. —JD
Though a variety of technologies exist for viewing 3D data "in the round" without glasses, many are still in the development stage. One 3D display that is already shipping commercially is Actuality Systems' Perspecta, which looks like a crystal ball within which 3D objects appear to float. Users can circle the display to view the objects from any angle. The Perspecta illuminates 100 million volume pixels, or voxels, and allows animated 3D imagery to be controlled from a standard workstation. No stereoscopic goggles or special headsets are required.
Potential applications for Perspecta include game development, surgical planning, air-traffic control, and information display in museums and other educational settings.
Pricing for the Perspecta spatial 3D platform starts at $40,000 per station and includes the display and the Perspecta operating software. A Perspecta SDK is available free. —JD
Many consider space to be the final frontier: an untamed, unsettled realm holding the promise of adventure. In the futuristic feature film The Adventures of Pluto Nash, the first city on the moon is untamed indeed, but it is far from unsettled. In fact, the metropolis, nestled in a canyon and encased in glass, makes New York City look like a backwater town in Maine.
Building the film's complex, megacity of the future fell to a team of digital matte artists, computer animators, and visual effects specialists at Riot (Santa Monica, CA), who created a series of urban landscapes filled with trains, buses, holographic signage, office towers, and throngs of lunar residents.
"Our challenge was to create paintings that suggested all the complexity and life of a real city," says lead matte painter Rocco Gioffre. "It had to look alive with a lived-in feel, and it needed to suggest scale. Yet, it had to feel claustrophobic and have the character of a self-contained colony."
|Pluto Nash contains dramatic flyovers of the virtual cityscape that reveal expanses of the city that would have been impossible to create without digital effects.
The size and scope of the city are suggested largely through a series of richly detailed matte paintings created in Softimage|3D and composited with Discreet's inferno. In one shot, the camera travels over a four-square-block section of the city, looking down at buildings, roadways, subway tunnels, and pulsating neon signs.
"We added numerous layers of details—far more than viewers will ever notice—and we melded them into a natural environment," notes animator Hans Payer. For instance, the team added puddles, steam, and overall grit, giving the cityscape a realistic look.
Many of these matte paintings had to be integrated with live-action elements. One such shot, which was set inside a bus station as two actors entered the terminal, was particularly challenging because of the intricate camera tracking that was required. The scene begins as the camera focuses on the actors' knees, then pans up to reveal their faces and the domed ceiling of the terminal behind them as it makes a 90-degree turn. While the actors and the platform on which they are walking are real, everything else in the background, including the domed structure and a bus that passes by, is digital.
"Tracking the matte painting to the live-action camera move was difficult, since the camera covers a lot of ground," says compositor Kelly Bumbarger. The team tracked the motion-control information with Science-D-Vision's 3D-Equalizer, while additional 2D tracking was done in inferno.
As Bumbarger recalls, the on-set markers often became covered or were lost because the camera panned beyond their locations. Also, the practical set was so wide that three blue screens were needed to mask the area behind it, and variations in the screen surfaces complicated the process of pulling matte elements. "We had a 574-frame sequence in which virtually every element had to be rotoscoped by hand using inferno," he says.
"Modeling an entire city, most of which is seen close up, was daunting," says Payer. "It's a massive illusion made up of hundreds of buildings, roads, and people. However, we couldn't make it too complicated, or it would have been impossible to work with. The secret was placing the detail where the eye goes." —Karen Moltenbrey
Softimage|3D, Softimage; www.softimage.com
Realm Productions (Santa Monica, CA) helped pave the way for an invasion of Earth by pint-size creatures bent not on world domination, but on getting a hamburger and fries. The hungry aliens appeared in a 30-second TV spot for Burger King in which a mom orders 67 Big Kids Meals: "Two for my kids and 65 for their alien friends," she says. The clerk, playing along with what he thinks is a joke, asks the kids about their friends' green skin. "Actually, they're kinda pink," replies a girl as the camera pans down to reveal a row of little pink aliens.
The Realm team modeled, textured, and animated the creatures using NewTek's LightWave. For compositing, the group used Eyeon Software's Digital Fusion. Steele VFX in Santa Monica used Quantel's Henry for the postproduction editing. —KM
LightWave, NewTek; www.newtek.com
Would you know what action to take if you were faced with a life-or-death scenario? For instance, would you know the correct way to land if you fell from a rooftop, or how to tuck and roll if you leaped from a moving vehicle?
Every Wednesday evening, the cable TV show Worst-Case Scenario tackles many of the most extreme situations and demonstrates how best to survive them as stuntpersons perform the incredible feats. However, illustrating certain techniques with live actors is too dangerous. Instead, the action is frozen, and the live actor is replaced by a 3D model. This enables the director to illustrate "the correct 'textbook' move from different angles, which is impossible to achieve with regular cameras while keeping the technique within the hazardous environment," says Leslie Allen, creative director at Cinergy Creative (Studio City, CA).
Working within a tight cable budget, Cinergy produces the 3D effects using mainly Discreet's 3ds max and Character Studio software running on an Intel Xeon-based PC. The computer-generated imagery is then composited into a reconstructed environment using Adobe Systems' After Effects, which is also used for the motion tracking. "We replace the actual backgrounds with ones that have been stitched together using images taken from the original source tape and enhanced in [Adobe's] Photoshop," explains Allen.
When the action is frozen, the group replaces the video frame with the digital background. Next, the artists match the virtual character to the real actor's position in each frame. Then, they rotoscope the person out of the frame using Photoshop.
In one episode, a stuntwoman demonstrates the lifesaving technique for jumping from a 120-foot cliff into a body of water (see image sequence above). After she leaps, a digital stunt double takes over. For this scene, the artists also used After Effects to color-correct the shot and to stabilize the original footage. —KM
After Effects, Adobe Systems; www.adobe.com
|In this scene, the 3D camera freely spins around the CG model, zooming in on her body as the narrator advises the ideal positioning for her arms and legs.
It all started when someone hung a photograph of the World Trade Center attack in an empty storefront in downtown Manhattan during the dark days of last September. Crowds gathered to see the photo, and people began posting their own pictures of September 11-related events. Soon, the walls were covered and lines began to form with onlookers and those wanting to post their own images. Born from this impromptu image gallery was Here Is New York (www.hereisnewyork.org), a not-for-profit charity organization.
The original intention of the postings was to sell some of the photographs for charity. In time, the storefront became a full-fledged gallery, and a Web site was created to display the images as well as serve as a memorial and permanent record of that tragic day.
Through Viewpoint's ZoomView technology, the organization is able to enhance the online images, showing the greatest detail at the smallest file sizes possible. "It was an easy, two-minute process to convert about 100GB of images to the ZoomView format, which was done through Adobe's Photoshop," says Aaron Traub, Internet director. So far, the group has converted about 5000 images.
"The images really show the human experience," Traub says. "When people ask others what it was like to live through that day in Manhattan, they sometimes just reference an image online. It shows what people didn't see that day—the details, the faces of the people. For some, it's cathartic.
"We've received an influx of unsolicited e-mails from users of the Web site, some saying they've even found themselves in photographs," Traub continues. He notes that the Federal Emergency Management Agency has used the online gallery to help re-create events from the day, while New York firefighters and others have e-mailed the organization to say the gallery has helped them emotionally connect with what happened that day. People have even left messages saying they spotted someone they knew in a photo while searching the site for a lost friend, he adds. —KM
ZoomView, Viewpoint; www.viewpoint.com
|Through ZoomView, Internet users can see these highly detailed photos.
A new approach to lighting the live performance of an actor and compositing him or her into a real or virtual set has been developed by a team led by Paul Debevec at the University of Southern California's Institute for Creative Technologies. The process, designed to precisely replicate the illumination of an actor in virtually any desired environment, entails having the person perform inside a so-called Light Stage, a large sphere containing 156 inward-pointing, red-green-blue light-emitting diodes, each of which can be set to a desired color and intensity. A digital infrared matting system is then used to composite the actor into the background plate. The photos show a model inside the Light Stage (left), illuminated with an accurate simulation of the lighting in San Francisco's Grace Cathedral (bottom, left), and after she was composited into the background with the matting system (below). Movie clips made using Light Stage can be seen at www.debevec.org/LS3. —PL
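Setting each LED to replicate an environment's illumination can be pictured as sampling an environment map of that environment in each LED's direction. The numpy sketch below is purely illustrative; it is not Debevec's implementation, and the lat-long mapping, nearest-neighbor lookup, and function name are all assumptions:

```python
import numpy as np

def led_colors(env_map, led_dirs):
    """For each inward-pointing LED, look up the radiance arriving from
    that LED's direction in an equirectangular (lat-long) environment map.

    env_map:  H x W x 3 float array, an HDR lat-long environment map
    led_dirs: N x 3 unit vectors, directions of the sphere's LEDs
    Returns an N x 3 array of RGB drive values (unnormalized radiance).
    """
    h, w, _ = env_map.shape
    x, y, z = led_dirs[:, 0], led_dirs[:, 1], led_dirs[:, 2]
    # Convert direction vectors to lat-long texture coordinates.
    u = (np.arctan2(x, -z) / (2 * np.pi) + 0.5) * (w - 1)
    v = (np.arccos(np.clip(y, -1, 1)) / np.pi) * (h - 1)
    # Nearest-neighbor sample; a real system would filter over the LED's
    # solid angle rather than take a single texel.
    return env_map[v.astype(int), u.astype(int)]
```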
A new interactive display device developed by researchers at the MIT Media Lab could redefine window shopping as we know it. The new technology enables knocks or taps to be characterized by type and localized to within 2 to 4 square cm across a 4-square-meter glass surface. The device contains an array of acoustic pickups that detect where and how people tap on the window. Interactive projected graphics are then mapped onto the glass according to input from users. The research team, led by Joseph Paradiso, tested the concept by installing the Interactive Window device on the main display window of an American Greetings store near Rockefeller Center in Manhattan (above), allowing passersby to interact with information about the store's products simply by knocking. —PL
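Localizing a tap from an array of acoustic pickups is typically a time-difference-of-arrival problem: the tap reaches each pickup at a slightly different time, and those differences constrain its position. The brute-force sketch below illustrates the principle only; it is not the MIT team's algorithm, and the sensor geometry, pane size, and wave speed are all assumptions:

```python
import numpy as np

SPEED = 1500.0  # assumed in-plane wave speed in the glass, m/s (illustrative)

def locate_tap(sensors, arrival_times, extent=2.0, step=0.01):
    """Grid-search the pane for the point whose predicted arrival-time
    differences best match the measured ones.

    sensors:       K x 2 array of pickup positions (meters)
    arrival_times: K measured arrival times (seconds, arbitrary offset)
    Returns an (x, y) estimate on a `step`-spaced grid over a square pane.
    """
    xs = np.arange(0.0, extent, step)
    grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1)  # G x G x 2
    # Predicted travel time from every grid point to every sensor.
    dist = np.linalg.norm(grid[:, :, None, :] - sensors[None, None], axis=-1)
    t = dist / SPEED
    # The tap's absolute time is unknown, so compare centered time sets:
    # only the differences between sensors matter.
    resid = (t - t.mean(axis=-1, keepdims=True)) \
            - (arrival_times - arrival_times.mean())
    err = (resid ** 2).sum(axis=-1)
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return grid[i, j]
```

A centimeter-scale grid step matches the 2-to-4 cm resolution the article reports, though a production system would use a closed-form or iterative solver rather than an exhaustive search.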
The Nvidia Quadro4 900XGL graphics card edged out 3Dlabs' Wildcat III 6110 in the latest SPEC/GPC Lightscape benchmark tests. Both cards in the top two positions were paired with a 2.8GHz IBM M Pro workstation. For information about the Lightscape benchmarks, visit the SPEC/GPC Web site at www.spec.org/gpc.
Microsoft (Redmond, WA) has acquired game developer Rare (Warwickshire, UK) for $375 million. Rare will develop game titles for Microsoft's Xbox... Game developer Factor 5 (San Rafael, CA) has released the DivX for GameCube SDK, which allows developers to use DivX-formatted video cinematics in GameCube titles.
Siggraph 2003 Call for Participation
The Siggraph 2003 Conference Committee is now accepting submissions for papers, panels, courses, and more for the next annual Siggraph conference and exhibition. The event will take place in San Diego, California, from July 27 to 31. For submission information, visit www.siggraph.org/s2003/cfp/index.html.