VFX Require High Speed and Capacity
Volume 29, Issue 9 (September 2006)


Storage in the Studio - Advanced storage systems and networks enhance work flows at digital content creation studios

VFX require high speed and capacity 

  • Pixar puts the pedal to the metal 
  • The Orphanage accelerates work flow 
  • DNA meets the need for speed 
  • Tweak relies on RAID

Storage technologies meet studio needs

Facilities can benefit from relatively new technologies and network interfaces

  • Shadowtree Studios gets Mac relief

Produced in conjunction with INFOSTOR magazine

VFX Require High Speed and Capacity
Studios such as Pixar, The Orphanage, DNA, and Tweak Films leverage a variety of new storage technologies to meet the high-end requirements of recent feature films and animation projects
By Barbara Robertson

All animation studios rely on superhighways to transport the huge amounts of data needed to fill 24 frames per second (fps) in a 90-minute feature film. But Disney/Pixar’s animated blockbuster Cars pushed that studio’s system to the limit. At Pixar, a 3000-CPU renderfarm comprising 64-bit Intel Nocona-based “pizza boxes” reads data in, runs the algorithms, and generates the new data—that is, the rendered frames.
The data comes from a model farm that’s typically 3TB to 4TB in size. “That’s our most valuable data,” says John Kirkman, Pixar’s director of systems infrastructure. “It’s where we store the hand-built models, the shaders, the textures created by the technical directors, and the animation data.” In other words, the model farm is where the characters live, and in Cars, all the characters are vehicles—race cars, transport trucks, family sedans, sports cars, and even tractors.
 
So, when Lightning McQueen, the star of Cars, appears in a scene, the renderfarm needs to read his data, and herein lies the potential traffic jam. As McQueen screeches around a curve during the Piston Cup Championship race in the beginning of the film, stadium lights strobe off his flashy red paint job. The camera follows his battle to the finish line while thousands of cars in the crowd cheer and lightbulbs pop; thousands of points of light bounce off fenders and hoods. “If McQueen is in most of the frames, you have to read his data across all 3000 CPUs,” says Kirkman. “The challenge is providing data for 3000 CPUs all trying to go after the same piece of data.”
 
For previous films, Pixar relied on Network Appliance’s NetCache systems, which worked well. But to reproduce the reflections bouncing off the chrome, glass, and steel bodies of Cars’ stars, Pixar used a raytracing method of rendering that simulates the paths of light rays hitting an object from various sources and angles to reproduce the effect of real light in a scene. “With Cars, because we were doing raytracing, the number of reads needed to calculate a frame increased dramatically,” Kirkman explains. “When you’re tracing rays of light, sometimes you’re reading data that’s not in the frame. You’re reading light hitting a mailbox a mile down the road before it hits McQueen’s fender.”
 
The data needed for the renderfarm to do its work at any point in time is usually between 100GB and 200GB. With the previous technology, Pixar was limited to 1.5GB of internal memory. But, by switching to Ibrix’s Fusion parallel file system software, Pixar could pull more data out of RAM. “We’re very sensitive to having to wait for data,” Kirkman says. “We would much prefer to get data out of memory than off a disk drive.”
 
Pixar installed a 12-node Ibrix cluster. Eight servers fed the renderfarm, all talking to the same SAN storage device to get data, and four servers maintained the metadata for the file system. Each of the eight “heads” had 32GB of memory. That meant the working set of data could fit in RAM.
 
“We got a huge multiplier from being able to serve data out of RAM,” says Kirkman. “We expect 100 percent utilization of our CPUs at all times. If we’re waiting on I/O, we see the difference between the machine time and the wall clock. If something we think should take one hour to render takes three hours, we know we’re wasting time waiting for I/O. Before [we installed] the Ibrix system, the wall clock time was six to 10 times what we expected because we didn’t have enough memory. With Ibrix, we reduced that down to 15 percent. We want to complete reads in less than half a millisecond, and we were achieving that as long as we could get data out of RAM.”
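The arithmetic behind those figures is straightforward. The sketch below uses only the numbers quoted above; the simple model is illustrative, not Pixar’s own accounting.

```python
# Back-of-the-envelope check of the figures quoted above. The numbers come
# from the article; the model itself is illustrative, not Pixar's accounting.

working_set_gb = 200          # upper end of the 100GB-200GB working set
heads = 8                     # Ibrix servers feeding the renderfarm
ram_per_head_gb = 32          # memory per server

aggregate_cache_gb = heads * ram_per_head_gb          # 256GB
print(f"Aggregate cache: {aggregate_cache_gb}GB "
      f"(working set fits: {working_set_gb <= aggregate_cache_gb})")

# Wall-clock inflation from I/O waits: a render that needs one hour of CPU
# time took six to ten hours before the upgrade, and roughly 15% extra after.
cpu_hours = 1.0
before_wall = cpu_hours * 8          # midpoint of the 6x-10x range
after_wall = cpu_hours * 1.15        # ~15% overhead
print(f"Wall clock before: ~{before_wall:.0f}h, after: ~{after_wall:.2f}h")
```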
 
For Pixar’s next film, Ratatouille, scheduled for release in June 2007, the studio is installing a second Ibrix system, this one a 16-node cluster. “We’re recycling parts of the Cars system, but we’re going to end up with two [clusters] that we rotate between our current and next films,” says Kirkman. The Ratatouille system has eight servers feeding data to the renderfarm, four serving users, and four managing metadata.
 
“The thing we find most attractive is that Ibrix is a software solution,” Kirkman says. “With traditional NAS, you have to buy a big box. If you’re only interested in adding memory, you’re paying for other stuff you don’t need. With Ibrix, we can scale everything independently, whether we want more CPUs, memory, networks, or spindles.”
 

Behind the sophisticated effects in Cars, produced by Disney/Pixar, are a 3000-processor renderfarm and Ibrix’s Fusion parallel file system.
Image © 2006 Disney/Pixar.
 
In addition to the parallel file system (also called a segmented file system), Ibrix’s Fusion software includes a logical volume manager and high-availability features. The software allows users to build file systems that can scale up to 16 petabytes of capacity in a single namespace. Fusion runs independently of specific hardware and network platforms, and supports the CIFS and NFS protocols. Ibrix claims aggregate performance of as much as 1TB/sec.
 
The Orphanage Accelerates Work Flow
Superman might fly faster than a speeding bullet, but when The Orphanage needed to aim a bullet right at the Man of Steel’s baby blues, the studio’s need for speed sent it looking for a new storage solution.
 
“Our artists and our renderfarm machines were starved for file system I/O,” says Dan McNamara, vice president of technology at the San Francisco-based visual effects studio. In addition to the bullet shot, The Orphanage handled a bank job and wild car chase. “The complex scenes for Superman Returns really tested [the system]. We had lots of complex elements—lots of pieces that had to be woven together.”
 
At the same time that The Orphanage artists were leaping over tall data requirements for Warner Bros.’ Superman Returns, a second effects film—the South Korean monster movie The Host—had its own set of fiendish requirements. The Orphanage created the film’s Han River mutant, a 45-foot-long digital creature that looks like a cross between a T. rex and a fish. The film, which received rave reviews at the Cannes Film Festival and broke box-office records in South Korea, made its North American debut at the Toronto Film Festival this month.
 
“It was intense,” says McNamara of the visual effects work. “We had complex scenes with people firing weapons at the CG creature, and the shots were really long. We wanted to make sure that the large files the artists required loaded as fast as possible.”
 
Now the studio’s 11.5TB of data sits behind a BlueArc Titan 2000 series storage system. The Titan system’s open SAN back-end views the studio’s existing SAN storage as a shared network resource. McNamara says the studio is getting 340MB/sec to 360MB/sec throughput. “We haven’t had to add storage; we just move the data faster to the [artists].”
 
The Orphanage had planned to evaluate several systems, and BlueArc’s Titan was the first one they tried. “I wish I could tell you we evaluated several systems and here’s all our raw numbers,” says McNamara, “but the Titan exceeded our expectations. It supports CIFS natively [as well as NFS], so that was fine. It met our needs.”
 
The studio hasn’t regretted the decision. “When you push some storage systems, you hit a cliff and fall off,” says McNamara. “With this system, you don’t have issues when you really push it.”
 
As The Orphanage’s needs grow, McNamara expects the studio will purchase a second Titan storage server and evaluate the new clustering software BlueArc is developing. “Our biggest [concern] is giving the artists interactivity,” says McNamara. “We don’t want them to wait for scenes to load. This business is about creativity, and we want to make sure our artists are happy.”
 

The Orphanage’s work on Superman Returns involved more than 11.5TB of SAN-based data behind a BlueArc Titan storage system.
Image courtesy Warner Bros. Pictures and The Orphanage.
 
BlueArc’s software that runs on the Titan storage servers includes a file system with a cluster namespace for a unified directory structure and global access to data for CIFS and/or NFS clients. The object-based file system supports up to 512TB of data in a single pool. The disk array can be configured with high-performance Fibre Channel and/or low-cost, high-capacity Serial ATA (SATA) disk drives to create a tiered storage architecture. BlueArc claims performance of up to 10Gb/sec.
 
DNA Meets the Need for Speed
When DNA Productions moved from creating the episodic animated TV show The Adventures of Jimmy Neutron to the full-length animated feature film The Ant Bully, everything changed. In Warner Bros.’ The Ant Bully, a boy takes out his frustration on some ants (see “Faces in the Crowd,” pg. 24). The ants fight back by shrinking the boy to their size and teaching him the ways of the ants. Ultimately, the boy helps save the colony. Creating a few CG characters for a full-length feature, plus backgrounds and props, is difficult enough, but creating an entire colony of characters would tax most server/storage systems.
 

Rendering for The Ant Bully feature film required a 1400-processor renderfarm and a 42-node clustered storage system and software from Isilon Systems.
Image courtesy Warner Bros. Pictures.
 
“We changed our whole infrastructure,” says Rich Himeise, director of network operations at DNA. “We had to upgrade everything to go from the TV show to a movie.” That upgrade included converting from a Windows-based system to a Linux-based system, buying a 1400-processor renderfarm, and installing a new 42-node Isilon Systems clustered storage system that provides 80TB of raw storage capacity.
 
“We run our entire production on the Isilon IQ systems,” says Himeise. “We render to the system, and our assets live on the system. The entire movie lives on the system.” With each frame of the animated film requiring from 1MB to 10MB of data—some even more—throughput and load balancing were critical. Isilon’s OneFS clustered, distributed file system spread the load across the 42 nodes. “The clients, the renderfarm, and the artist workstations all mount across that cluster,” Himeise explains. “Mounting the clients across the cluster increases the throughput to the system.”
 
Each node has a processor, 4GB of memory, and a Gigabit Ethernet connection. That, in effect, gave the studio a 42-processor computer with 168GB of memory and a 42Gb/sec connection to the file system. “You can think of the cluster as one big, robust machine,” says Himeise, “with 42 Gigabit [Ethernet] pipes into the cluster.”
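Himeise’s “one big, robust machine” description comes straight from that per-node arithmetic. A quick sketch using the figures quoted above; the derived frame rate is only a rough ceiling, not a number from the article.

```python
# Aggregate view of the 42-node Isilon cluster, using the per-node figures
# quoted above. The derived frames-per-second figure is only a rough ceiling.

nodes = 42
mem_per_node_gb = 4
gige_per_node_gbps = 1        # one Gigabit Ethernet front-end link per node

total_mem_gb = nodes * mem_per_node_gb            # 168GB
aggregate_gbps = nodes * gige_per_node_gbps       # 42Gb/sec into the cluster

frame_mb = 10                                     # heavy frames: ~10MB each
frames_per_sec = (aggregate_gbps * 1000 / 8) / frame_mb   # ~525 frames/sec
print(f"{total_mem_gb}GB memory, {aggregate_gbps}Gb/sec aggregate, "
      f"~{frames_per_sec:.0f} x {frame_mb}MB frames/sec at full line rate")
```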
 
The processors handle the file transactions like a typical server, moving data to hard drives. At the back-end of the cluster, InfiniBand switches tie all the nodes together. “Node number one knows what’s on the hard disk of node 42 and what’s in the memory cache of 42, and 42 knows what’s in node one and everywhere else,” Himeise says. “It’s the glue that ties the cluster together.”
 
Each node handled between 25 and 30 clients. Because DNA used Isilon IQ’s SmartConnect feature, the number of nodes assigned to artists and to the renderfarm changed as needed. “We could dedicate 30 nodes to our renderfarm when it became busy and the remaining 12 nodes to the artists, and then, if the artists complained, we could give them more nodes,” says Himeise. “It was easy; it just took a couple of clicks.”
 
The power failed in DNA’s building twice during the production of The Ant Bully. “One time, we could shut down safely,” says Himeise, “but the other time the system went down hard. With a cluster this size, it was a nerve-racking experience, but the system came back up with no problem. We didn’t lose a file.”
 
Even with the new system in place, the studio began running low on capacity by the end of production. “[Isilon] delivered 15TB of storage to get us through the final months, and when we were done, we shipped them back,” says Himeise.
 
Isilon’s IQ series of storage arrays uses Gigabit Ethernet for front-end connections and can be configured with either Gigabit Ethernet or InfiniBand connections for intra-cluster communications. The storage nodes (IQ 1920, 3000, 4800, and 6000) are available in a variety of models, enabling users to meet capacity/performance requirements. The OneFS distributed file system creates a single, shared global namespace and supports NFS and CIFS. Isilon’s SyncIQ replication software distributes data between clusters.
 
Small Shop, Big Jobs
Cutting-edge technology developed by two-time technical Academy Award winner Jim Hourihan helps Tweak Films, a San Francisco-based visual effects studio that Hourihan recently co-founded, compete with larger, well-established studios.
 

Tweak Films, which has done work on movies such as The Day After Tomorrow, uses Apple’s Xserve RAID array for most of its storage needs.
Image courtesy Fox and Tweak Films.
 
Some small effects studios survive on scraps thrown to them by the major studios—easy wire-removal shots, paint “fix-its,” and so forth. Not Tweak. This studio gets the hard shots: water simulations, rigid-body simulations, fire, and smoke. For example, Tweak created a tidal wave that surged through the streets of New York in The Day After Tomorrow, which won the Visual Effects Society’s award for Best Single Visual Effect of the Year. The studio also crashed military tanks on an aircraft carrier deck for a sequence in XXX: State of the Union, helped destroy Barad-Dur in The Lord of the Rings: The Return of the King, and, more recently, worked on water-simulation shots for Superman Returns and Monster House.
 
Thus, even though the studio is small, the shots are big. Simulating nature takes huge amounts of data and processing power. And that means the small studio needs smart storage solutions. “When you’re dealing with images, the data adds up fast,” says Mike Root, a compositing supervisor and software engineer at Tweak Films, “but when you’re a small shop, you can’t buy massive network bandwidth.” A standard film frame, he explains, is about 12MB; the 5464x4096-resolution IMAX films, rendered with 10 bits per pixel rather than 8 bits, require approximately 100MB per frame.
 
For centralized storage, Tweak uses an Apple Xserve RAID server with 5TB of capacity. “We also have a hard drive on each render node and on the desktop machines,” Root says. “In some of the work we did for The Day After Tomorrow, the textures and geometry added up to gigabytes. If we had 50 machines trying to suck from one server all at once, we would have had a giant bottleneck. So we sync render data to all our render nodes.”
 
Each of the Linux-based render nodes has a processor, 4GB of memory, and an 80GB or 160GB hard drive. The goal is to have each render node access its local drive rather than access data on the server. “Say we have a 5GB dataset of texture maps for rendering New York City that all the rendering nodes need to access,” explains Root. “Rather than having all 50 machines try to suck that data all at once, each machine gets its own copy.” That speeds the rendering process. It also means the data that rendering nodes access isn’t “precious”; it’s only a copy.
 
Root uses Rsync, an open source utility, to manage the file transfers. “Rsync checks on the server and local drives of the render nodes,” he says. “During the process of rendering, it picks up local information off the local drives. If anything has changed, it copies and moves only the changed part.”
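A minimal sketch of that pre-render sync step is shown below. The rsync flags (-a for archive mode, --delete to drop stale files) are standard, but the share path, local scratch path, and node host names are hypothetical, not Tweak Films’ actual configuration.

```python
#!/usr/bin/env python3
"""Mirror shared render data to each render node's local drive before a job.

A sketch of the rsync-based approach Root describes; paths and host names
are hypothetical, not Tweak Films' actual setup.
"""
import subprocess

SHARE = "/server/shows/textures/"        # hypothetical central share
LOCAL = "/local/render_data/textures/"   # hypothetical per-node scratch path
RENDER_NODES = [f"render{n:02d}" for n in range(1, 51)]   # render01..render50

def sync_node(host: str) -> None:
    # rsync's delta-transfer algorithm copies only the portions that changed;
    # --delete removes local files that no longer exist on the server.
    subprocess.run(["rsync", "-a", "--delete", SHARE, f"{host}:{LOCAL}"],
                   check=True)

if __name__ == "__main__":
    for host in RENDER_NODES:
        sync_node(host)
```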
 
For distributing the render jobs, the studio uses Condor, a queuing system developed for academic and scientific computing at the University of Wisconsin-Madison. “It gives us fine-grain controls for selecting which machines to run on,” Root says. “When we have a big job to render, we turn all the desktop machines into render machines as well.”
 
Eventually, Tweak Films plans to move to a SAN, hooking multiple servers to more Xserve RAID arrays. “Then, our render nodes and desktop machines would all talk to the servers,” Root explains. “We’ll still have the same philosophy: Rather than having all of our machines talk to one server, we’d have one server group for render nodes, another for our desktops, and so forth, and all those servers would talk to the same data storage on RAID with extremely high bandwidth. We’ll still be as efficient as we can.”
 
Apple’s Xserve RAID arrays can include up to 14 Ultra ATA disk drives and Fibre Channel external connections, for a total capacity of up to 7TB. Pricing is typically less than $2 per gigabyte.
 

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.
 

 
Meeting Studio Needs
Facilities can benefit from new technologies and network interfaces
By Mark Brownstein
 
Today, all aspects of filmmaking and distribution are changing. In fact, much of the ongoing “film” production uses film at only two stages—shooting the original footage and creating the final reels. Also, original footage can be captured with digital cameras and then processed, edited, and delivered digitally.
 
For movies still using film (and most do), dailies are scanned in at 2K (2048x1556 pixels) or 4K (4096x3112 pixels) resolution. One second of 4K images (24 frames) can require more than 1.2GB of storage capacity. And for special effects or image overlays, the working files can be much larger. “Creating a movie involves far more digitized frames than are used in the final release,” says Bob Eicholz, vice president of corporate development at EFilm, a digital intermediate (DI) studio in Hollywood. “At 4K, a two-hour movie might require more than 40TB.”
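Those capacity figures follow directly from the frame sizes. The worked check below assumes 10-bit RGB scans stored with DPX-style 32-bit-per-pixel packing, a common choice at the time; the article itself does not specify the scan format.

```python
# Worked check of the "more than 1.2GB per second at 4K" figure, assuming
# 10-bit RGB scans with DPX-style 32-bit-per-pixel packing (an assumption;
# the scan format is not specified in the article).

width, height = 4096, 3112     # 4K scan resolution cited above
bytes_per_pixel = 4            # 10-bit RGB packed into a 32-bit word
fps = 24

frame_mb = width * height * bytes_per_pixel / 1e6     # ~51MB per frame
second_gb = frame_mb * fps / 1000                     # ~1.22GB per second
print(f"~{frame_mb:.0f}MB per frame, ~{second_gb:.2f}GB per second of 4K footage")
```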
 
A completed film, created at 4K, typically requires 9TB to 10TB, whereas a film finished at 2K (a format in decline) requires only 2.5TB to 3TB, Eicholz notes. The result is huge storage capacity requirements at studios of all sizes.
 
Storage Challenges
The new digital studio must create an increasing number of large files faster than ever, and one basic issue becomes where to store those files. The answer depends on how the file is to be used. For files coming from a scanner, there’s little need for extremely fast storage devices on the write path. However, there may be a significant need to access these files rapidly so they can be reviewed, processed, or viewed in real time. This often requires a “tiered” storage hierarchy that includes relatively low-cost, low-speed storage for frames that are being held but not worked on, and a transfer to more expensive, high-speed storage systems for performance-intensive work. It may also involve moving the required frames onto direct-attached storage for fastest access.
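In outline, such a tiering policy is simple: frames that have not been touched recently stay on the cheap tier, and frames in active use are promoted to fast storage. The toy sketch below illustrates the idea; the mount points, the 24-hour threshold, and the file-move “migration” are all hypothetical, and a real facility would rely on its storage vendor’s data-mover or HSM tools.

```python
"""Toy sketch of the tiered-storage policy described above. All paths,
thresholds, and the shutil-based file moves are illustrative only."""
import os
import shutil
import time

SLOW_TIER = "/mnt/nearline/frames"   # hypothetical low-cost, high-capacity tier
FAST_TIER = "/mnt/fast/frames"       # hypothetical high-performance tier
ACTIVE_WINDOW = 24 * 3600            # promote frames read in the last 24 hours

def promote_active_frames() -> None:
    now = time.time()
    for name in os.listdir(SLOW_TIER):
        src = os.path.join(SLOW_TIER, name)
        # atime records the last read; recently read frames move to fast storage
        if os.path.isfile(src) and now - os.path.getatime(src) < ACTIVE_WINDOW:
            shutil.move(src, os.path.join(FAST_TIER, name))

if __name__ == "__main__":
    promote_active_frames()
```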
 
Because rendering and scanning are relatively slow processes, the need for a fast pipe to move the files to storage is relatively low. However, reads from the disks and transferring the frames to editors and others working on the film are considerably more demanding. Storage of the rendered frames does not have to be on expensive storage devices, and often a NAS or SAN, or both, is used.
 
Providing fast access to files is another story. Pacific Title & Art Studio in Hollywood upgraded its storage infrastructure to 4Gb/sec Fibre Channel in July, says CTO Andy Tran. “We added a 4G Brocade 48000 Fibre Channel switch and an S2A 8500 storage device from DataDirect Networks (DDN),” he says, “with the goal of maintaining several 2K streams playing simultaneously.” Pacific Title is using DDN’s S2A 8500 storage system primarily for DI and real-time playback, and uses disk arrays from LSI Logic for secondary storage. According to DDN, the S2A 8500 storage servers can deliver up to 3GB/sec, while the DataDirect storage arrays support Fibre Channel and/or Serial ATA (SATA) disk drives.
 
A variety of storage vendors now support 4G Fibre Channel host connections. For example, iQstor’s iQ2880 disk array provides four 4G Fibre Channel ports on the front-end and four 4Gb/sec Fibre Channel loops on the back-end. The iQ2880 allows users to mix Fibre Channel and SATA drives in the same system. With 500GB SATA drives, total capacity is about 120TB.
 
Furthermore, digital assets usually must be accessible to a variety of file systems on different operating systems. To this end, SGI’s CXFS 64-bit shared file system supports virtually all operating systems; its journaling makes all storage appear local and available to users, and file recovery can be done in seconds. The system is also scalable to millions of terabytes.
 
Coraid also supports many operating systems, albeit in a different way. The company has developed a protocol it calls “ATA over Ethernet,” or AoE. “AoE is similar to iSCSI; it’s basically direct storage over Ethernet without TCP overhead,” says Glenn Neufeld, who used AoE when he was the computer graphics and digital supervisor on the animated film Hoodwinked.
 
New Interconnects
Today, 4G Fibre Channel is the primary interface for production storage at most studios. Most Fibre Channel disk array vendors support 4G front-end connections, and 4G host bus adapters (HBAs) are available from vendors such as ATTO Technology, Emulex, LSI Logic, and QLogic. ATTO, which specializes in the entertainment market, has been shipping 4G Fibre Channel HBAs for about a year. Sherri Robinson Lloyd, ATTO’s director of markets, reports that there is a rapid shift to 4Gb/sec, particularly in the DCC market; during the past three months, ATTO’s HBA sales were about 83 percent 4Gb/sec and only 17 percent 2Gb/sec. “The digital content creation market has been enabled by 4Gb/sec because it gives studios the bandwidth to run high-definition video and audio,” she says, “and most have moved to HD and are moving to 4K.”
 
Although the front-end may be Fibre Channel, the disk drives can be Fibre Channel, SATA, SATA-II, SCSI, or the newer Serial Attached SCSI (SAS). SAS is the successor to the parallel SCSI interface and an alternative to Fibre Channel disk drives. SAS shares connectors with SATA. “SATA has medium performance but a low cost. With a SAS backplane, a company can run high-performance SAS drives and/or high-capacity SATA drives in the same environment,” says Tim Piper, director of marketing at Xyratex.
 
Although SATA disk drives don’t have the performance or reliability of Fibre Channel drives, they’re now available in capacities up to 750GB and are inexpensive relative to Fibre Channel or SAS drives. “SAS will generally replace SCSI and erode Fibre Channel’s market share,” says Michael Ehman, CEO of Cutting Edge. Recently, Cutting Edge introduced storage systems that use the emerging InfiniBand interconnect.
 
“A big advantage of InfiniBand is price,” says Laurent Guittard, product manager for infrastructure at Autodesk, which is using InfiniBand internally. “The price per port is advantageous compared to 10Gb/sec Ethernet. The throughput is also high compared to 10GbE. And the latency of InfiniBand is much lower than Ethernet.”
 
Storage system vendors such as DDN and Isilon Systems offer InfiniBand connections, which are increasingly becoming a viable choice for DCC studios. In the case of Isilon, InfiniBand can be used to cluster the company’s storage nodes. “InfiniBand allows us to network 16 CPUs in a single room with very high bandwidth,” says EFilm’s Eicholz. “With networked InfiniBand systems, we can do complicated color manipulations, hit play, and it plays. InfiniBand allows artists to be more creative, to do more ‘what-ifs,’ and not worry about waiting for the computer to do the work.”
 
Some storage vendors provide a variety of interface choices, whether it’s Fibre Channel, SAS, or SATA for disk drives, or Ethernet, Fibre Channel, or InfiniBand for external connections. “We have InfiniBand on our Infinite Storage 4500 line,” says Louise Ledeen, segment manager (media, global marketing) at SGI. “We support 10GbE, too, but on storage devices we primarily offer 2G and 4G Fibre Channel or InfiniBand. The whole idea is to offer users a choice.”
 

Mark Brownstein is a Los Angeles-area writer specializing in storage and technology. He can be reached at mark@brownstein.com.
 
Sidecars Expand Power Macs
Although Apple is becoming one of the leading storage vendors, many Mac users need storage expansion options that aren’t available from Apple. Recognizing this need, Applied Micro Circuits Corp. (AMCC) this month began shipments of the 3ware Sidecar, a high-speed external disk subsystem that can store up to 2TB of content.
 
Stephen Burich, owner of Shadowtree Studios and Maya Productions (San Jose, CA), uses the Sidecar disk array, attached to a Power Mac G5 system, for primary storage of audio and video files, as well as for backing up those files. Before installing the Sidecar storage subsystem, Burich used the Power Mac’s internal disk drives along with external, stand-alone disk drives.
 
“I had to back up to either DVD or a separate PC with a tape drive,” Burich says. “I had drives all over the place. It was a mess.” Also, some of Burich’s video files exceeded 25GB, “making it difficult to back up to tape,” he says.
 
As his primary storage device now, Burich uses the Sidecar array for recording and editing, and has the device set up in a RAID-5 configuration. (Sidecar also supports RAID 0, 1, and 10.) Compatible with the Mac OS X operating system and PCI Express host bus, the Sidecar storage array includes four 500GB, 3Gb/sec Serial ATA (SATA) II disk drives and a 4X multi-lane connector cable. It costs $1299.
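The usable capacity in that configuration follows standard RAID-5 arithmetic, with one drive’s worth of space given over to parity; the quick check below is illustrative, not an AMCC specification.

```python
# Usable capacity of the Sidecar in the RAID-5 configuration described above:
# four 500GB drives, with one drive's worth of capacity used for parity.
# Standard RAID-5 arithmetic, not an AMCC specification.

drives = 4
drive_gb = 500

raw_gb = drives * drive_gb                  # 2000GB: the "2TB" headline figure
usable_raid5_gb = (drives - 1) * drive_gb   # 1500GB available to the Power Mac
print(f"Raw: {raw_gb}GB, usable in RAID-5: {usable_raid5_gb}GB")
```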
 
Burich cites the primary benefits as high capacity (2TB) in a single device, as well as high performance. “I’ve recorded up to 16 24-bit tracks, and I haven’t exceeded what it can do,” says Burich.
 
 
AMCC—which acquired storage controller manufacturer 3ware last year—claims performance of more than 200MB/sec with RAID-5 read operations and more than 150MB/sec with RAID-5 write operations on the Sidecar arrays. This is due to a four-port SATA-II RAID controller, which resides in the Power Mac, and AMCC’s non-blocking switched fabric architecture, dubbed StorSwitch.
 
The 3Gb/sec (approximately 300MB/sec) performance of SATA II exceeds the performance of the FireWire (800Mb/sec) and Hi-Speed USB (480Mb/sec) interfaces. In addition, using a hardware-based RAID controller frees up the Power Mac CPU, as opposed to software-based RAID approaches that tie up the host CPU and memory.

—Dave Simpson is the chief editor of InfoStor magazine.