Storage Essentials
By Christine Bunish
Issue: Volume 37, Issue 5 (Sep/Oct 2014)

VFX and animation studios require robust, reliable, and fast storage solutions - systems they can depend on to get the job done while artists work their magic with little or no thought to what's happening in the background. The marketplace offers more storage options than ever before, and studios are choosing from a wide range of hardware and software to meet their needs today - and tomorrow.

ToonBox

Toronto 3D animation studio ToonBox Entertainment (www.toonboxent.com) has seen explosive growth since it opened six years ago with 10 employees. With the studio's animated feature The Nut Job, released earlier this year, and The Nut Job 2 and Spark features now in production, ToonBox is preparing its staff of approximately 200 for the company's second move to larger quarters.

Storage needs have grown quickly, too. ToonBox's original 150TB BlueArc system ran into performance problems during the production of The Nut Job, which demanded greater efficiency in the workflow. BlueArc was being absorbed into Hitachi Data Systems (HDS) at the time, and Greg Whynott, manager of systems and IT at ToonBox, weighed adding more spindles to the existing system against migrating to another storage solution, such as Isilon or NetApp.


In just six years, ToonBox has grown, with one feature film, The Nut Job, released and two others in production, and has called on Avere to help with its growing needs.

Then another possible solution presented itself. "Our desktop vendor is Dell; they went into partnership with Nexenta, which has a hardware-agnostic, software-defined storage solution. I'm a longtime advocate of open-source solutions; our purse strings were pretty tight at the time, and Dell approached us with a great deal, so we went with them," Whynott explains.

Whynott admits that with ToonBox working on the final stages of The Nut Job, the high-pressure environment put a lot of demand on the new system. "Everything eventually worked out, but there had been some shipping problems with parts [needed] to deliver the performance we required," says Whynott. "The render jobs on the HPC [high-performance computing cluster] took a long time, artists were frustrated with slow loading of scene files and assets, and review stations struggled to maintain frame rates. Many times we had well over 5,000 simultaneous requests to the storage server in a very short period of time."

Whynott had used Avere Systems clusters before and discovered that the company had several demo units nearby. He shut down ToonBox's file server at lunchtime and quickly installed a pair of Avere FXT Series Edge filers to speed workflow performance and enable fast, cost-effective scaling. "An hour or two later, people were coming to my door saying, 'Whatever you did, everything's beautiful now,'" he recalls.

With its workload diminished, the file server gained "lots of breathing room." Desktop users continued to hit file server storage directly, while the HPC accessed the storage servers via the Avere cluster. This freed up resources on the storage system to quickly service interactive desktop needs. And The Nut Job was a wrap.

Whynott believes ToonBox has found the sweet spot by separating the storage solutions into two autonomous components. "If we need more capacity, we add disks to the Dell storage chassis and expand the file system live without downtime. It's commodity hardware, which takes $400 disks rather than disks that cost up to $2,000 [each] from one of the proprietary vendors or [another] $100,000 storage node every time we need either more storage or performance. With our current solution, if we need more performance, we add another Avere."
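
To put Whynott's numbers in perspective, here is a rough, back-of-the-envelope comparison in Python. It is a minimal sketch: the 100TB expansion target and 4TB drive size are illustrative assumptions, not ToonBox figures; only the per-disk prices come from the quote above.

    # Rough cost of adding ~100TB of raw capacity. Drive size and
    # counts are illustrative assumptions; only the per-disk prices
    # ($400 commodity vs. ~$2,000 proprietary) come from the article.

    TARGET_TB = 100   # assumed expansion target
    DRIVE_TB = 4      # assumed drive size

    drives_needed = TARGET_TB // DRIVE_TB     # 25 drives

    commodity_cost = drives_needed * 400      # $10,000
    proprietary_cost = drives_needed * 2_000  # $50,000

    print(f"{drives_needed} x {DRIVE_TB}TB drives")
    print(f"Commodity disks:    ${commodity_cost:,}")
    print(f"Proprietary disks:  ${proprietary_cost:,}")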

More recently, ToonBox switched from Windows to Linux for Spark, and with a second feature also in-house (The Nut Job 2) and a possible TV series based on The Nut Job, Whynott is hoping to augment the studio's storage with a third, and perhaps a fourth, Avere Edge filer. "I expect to add over 200 more clients to the system; we'll be working on multiple productions going forward, so I'd like to see a couple more Averes as part of the solution," he says.

ToonBox is launching a collaboration with Redrover Co., Ltd. in Korea, which will need to install centralized storage of its own. Whynott will be involved in determining the storage solution, and Avere will likely play a role. "We'll have a big need for file synchronization between our studios, and I suspect the Avere units could play a role there."

Milk

London's Milk (www.milk-vfx.com), a boutique VFX studio for high-end television and features, launched in June 2013 with Pixit Media's PixStor software-based storage solution running on Dell hardware.

Head of Systems Dave Goodbourn, who was brought on board to plan and design Milk's infrastructure, had used PixStor at two other facilities - uFX in London and Belgium, and Spov - and had been impressed. Kudos were also forthcoming from users at other VFX houses. "All the recommendations led us to our decision," says Milk CEO/Executive Producer Will Cohen. "It was a no-brainer," adds Goodbourn.

Cohen says PixStor "represents a very modern, very now solution, instead of hard-disk storage. The generation of companies that came before us looked for brand names and flashing lights in boxes. They were reassured by the industrial look of the kit. But PixStor is reliable, easy to use, quickly expandable, and its support is second to none."

The present configuration of PixStor's centralized nodes offers about 150TB across three storage pools. The system was expanded twice in the company's first year, from approximately 80TB to 96TB, and then again to its current size. "We may expand it again by another 100 or 150TB," Goodbourn says. "In theory, PixStor is infinitely expandable. We'll max out on power and cooling before we reach capacity." Milk mixes enterprise-class and off-the-shelf disks in the system.

Cohen notes that Milk's animators, modelers, artists, compositors, and online editors don't think about the storage system, "which is a good sign. Many probably don't know what it is - a handful of people maybe looked in the cupboard. They're just happy that our pipeline runs quickly and efficiently."

In just its first year, Milk has kept storage running at capacity with VFX for Doctor Who, Sherlock, 24: Live Another Day, Hercules, the upcoming Dracula Untold, and the new mini-series Jonathan Strange & Mr. Norrell. "A really good test of PixStor was David Attenborough's Natural History Museum Alive 3D, for broadcast and IMAX, which was pretty much our first job," says Cohen. "It was stereo 4K with lots of furry creatures. So being able to handle that was a good introduction to the system."

Goodbourn says UK-based Pixit Media is constantly upgrading PixStor with new features. This fall, Milk will beta-test a flash acceleration cache designed to increase I/O performance. "It's claimed to increase performance up to six times," he reports.

Milk hasn't seen the benefits of the new Active File Manager (AFM) feature yet but will if the company expands to multiple locations. "Main storage would be kept here and a small storage unit installed in the other building, with AFM caching the two together," Goodbourn explains. "The off-site location would feel as if it were on our network - it's very streamlined."
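
Conceptually, that arrangement works like a read-through cache between sites: reads are served from the local unit when possible and pulled from main storage on a miss. The Python sketch below illustrates the general idea only; it is not PixStor's implementation (real AFM operates at the file-system layer), and the mount paths are hypothetical.

    import shutil
    from pathlib import Path

    # Conceptual site-to-site read-through cache, in the spirit of
    # what Goodbourn describes. Paths are hypothetical placeholders.
    MAIN_STORAGE = Path("/mnt/main_site")    # primary storage at the main building
    LOCAL_CACHE = Path("/mnt/remote_cache")  # small unit at the off-site location

    def read_file(relative_path: str) -> bytes:
        """Serve from the local cache; fetch from main storage on a miss."""
        cached = LOCAL_CACHE / relative_path
        if not cached.exists():
            source = MAIN_STORAGE / relative_path
            cached.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(source, cached)  # populate the cache on first access
        return cached.read_bytes()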

Goodbourn talks to Pixit Media two or three times a month and finds the company "very forward-thinking. A lot of things we were talking about have already been implemented."

Synaptic

Burbank, California's Synaptic Studios (www.synapticvfx.com) runs a 72TB Supermicro Windows server with a 10Gb fiber connection as its central server for compositing and rendering visual effects. Color-grading suites have their own high-speed direct-attached storage.


Synaptic anticipates an expansion of its storage solution (now Supermicro) as its work amps up for Sleepy Hollow.

VFX Executive Producer Stephen Pugh says the Supermicro system "performs well" and "didn't bog down" when at least 20 artists worked on VFX shots for the first season of the hit Fox series Sleepy Hollow. "Some artists have even commented how much nicer this server is than the one they used at their last place," notes Pugh. 

A typical workflow at Synaptic begins with plates delivered on FireWire drives or via Aspera or other file-transfer software; these are ingested into the server and conformed to the proper directory structure in the database. Shotgun records are created and work is assigned to artists, who use JPG proxies for general review and 3D shots, while final composites are completed with DPX frames off the server.
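
Shotgun exposes a Python API (shotgun_api3), so the record-creation step of that workflow might look roughly like the sketch below. The site URL, credentials, and IDs are hypothetical placeholders, and the field choices are illustrative; this is not Synaptic's actual pipeline code.

    import shotgun_api3  # Shotgun's official Python API

    # Hypothetical credentials; a real ingest script would also conform
    # the plates into the proper directory structure before this step.
    sg = shotgun_api3.Shotgun(
        "https://example.shotgunstudio.com",  # hypothetical site URL
        script_name="ingest",                 # hypothetical script user
        api_key="XXXX",
    )

    def register_shot(project_id: int, shot_code: str, artist_id: int):
        """Create a Shot record and assign an artist to a comp task."""
        shot = sg.create("Shot", {
            "project": {"type": "Project", "id": project_id},
            "code": shot_code,
        })
        sg.create("Task", {
            "project": {"type": "Project", "id": project_id},
            "entity": {"type": "Shot", "id": shot["id"]},
            "content": "comp",
            "task_assignees": [{"type": "HumanUser", "id": artist_id}],
        })
        return shot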

Pugh says the Supermicro server was a very cost-effective choice for Synaptic, a studio that was just starting out. "It takes enterprise-class disks for a cost savings, and it's easy to maintain ourselves."

Future expansion could mean adding another disk array to the system or using larger capacity drives. But Pugh admits that if circumstances allowed, he'd check out Isilon or HDS storage. "They have a reputation for reliability and have proven themselves," he says. "I like that they offer an extra layer of product support behind them - that's not critical, but it's nice."

Pugh is not sure what storage will look like in the long term, though. "People are talking about the cloud, but I'm not entirely sold on it from a security and dependability point of view," he says. "We'll see if I can be convinced."

The Mill

With facilities in London, New York, Chicago, and Los Angeles, The Mill (www.themill.com) is known for award-winning moving image, design, and digital projects for the advertising, games, and music industries. The company strives to standardize its storage for visual effects across all locations, and the flagship London facility selected BlueArc storage solutions about eight years ago.

"I wasn't there for the decision but heard that BlueArc outperformed the competition," says Mattias Andersson, systems manager at The Mill in LA. "Since then, it has served us very well." After BlueArc was acquired by Hitachi Data Systems, the configuration at The Mill's LA studio was replaced by a Hitachi NAS 3090-G2, powered by BlueArc. It has approximately 50tb of fast disk storage and 70tb of slow, or nearline, disk storage. Similar systems are now in place at all The Mill's locations.


The Mill in LA looks to a Hitachi NAS system powered by BlueArc to handle its large workloads.

Andersson says The Mill in LA uses the Hitachi NAS for computer graphics, compositing, project files, and renders. All the setup files for Autodesk Flame are also saved to the Hitachi storage.

The Mill in LA's storage solution needs to offer resilience, network performance, ease of management, and the ability to migrate data across fast systems. "We need to be able to work through failures; the Hitachi NAS has dual heads, so it's still fully functional if one goes down," Andersson notes. "Network performance is great, too: Each head has two 10Gb Ethernet connections."

The company also uses a Rohde & Schwarz DVS SpycerBox SAN for FilmLight Baselight color grading; network backup is done to LTO-6 tape.

Recent projects using the Hitachi NAS include the Halo 5: Guardians multiplayer beta teaser trailer for E3. The Mill teamed with 343 Industries and Microsoft on the action-packed teaser, which features detailed character animation, clear lighting, and strong rim-lit silhouettes.

Also, Mill+, the concept, design, and animation arm of The Mill, worked with a team of designers and VFX artists at The Mill in LA and London to contribute cinematics to Activision's Call of Duty: Ghosts. The highly stylized look is a hybrid of live action and CG, with contrasting sharp, shard-like shapes for the enemy, and light and smoke elements for the underdog ghosts.

Andersson is now demoing the Avere FXT Edge Filer 3800 to boost performance. "It sits in front of the Hitachi hardware and acts as a caching server for higher throughput and less stress on the NAS," he explains.

Andersson expects the company to continue with the Hitachi NAS but says it's probably time to step up its capacity. "I wanted a system that you can easily add more storage to," he says. "And the Hitachi NAS is it. I can buy another 20TB and quickly grow the system as projects get bigger and demand increases."

MPC

With eight facilities around the world (London, Vancouver, Montreal, New York, Los Angeles, Mexico City, Bangalore, and Amsterdam), the Moving Picture Company (www.moving-picture.com) is striving to "create a global, integrated studio" with similar VFX storage solutions in every facility. "But the reality is that not every studio was built at the same time, so they have slightly different generations of storage technology," says Nick Cannon, director of technology and operations for MPC Film. "But broadly, there's a similar storage architecture for all of them."

MPC Film is a longtime EMC Isilon user. "We tend to need very large renderfarms. We need to cope with quite extensive workloads, and not all storage can do that," he notes. "We've used Isilon since 2005 or 2006 because we could easily add more capacity while the system was running, with minimal impact on production. Other storage systems do that now, but Isilon was one of the first, and we've been very happy with it."


MPC requires storage that can handle its large workloads across television and film. A longtime Isilon user, the studio began using Avere accelerators to scale performance.

For low-performance, high-capacity mass storage with a low cost per terabyte, MPC Film has opted for Supermicro or Dell systems. Long-term archiving is done to LTO (Linear Tape-Open) tape.

Recently, MPC Film began adopting accelerators from Avere Systems for its Isilon storage so the company can scale up performance and capacity independently. "When you buy Isilon, if you need more capacity, you can add another node," Cannon explains. "Sometimes you just need performance. Avere caches are put in front of the Isilon to buffer it for very heavy renders. In some locations, like London, our renderfarms are remote from the main location, so there's some latency. But Avere accelerators speed up things quite dramatically. Any new facility with a large renderfarm will have Isilon and Avere moving forward."
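
The latency point is easy to quantify with back-of-the-envelope arithmetic: per-operation round trips dominate when a render opens thousands of files. In the Python sketch below, the round-trip times and operation count are illustrative assumptions, not MPC measurements.

    # Why a nearby cache helps a remote renderfarm: per-operation
    # round-trip latency adds up. All numbers are assumptions.
    rtt_remote_s = 0.010   # 10 ms round trip to distant storage
    rtt_cached_s = 0.0005  # 0.5 ms to a local caching tier
    file_ops = 100_000     # opens/stats in one render job

    print(f"Remote storage: {rtt_remote_s * file_ops:,.0f} s of waiting")  # ~1,000 s
    print(f"Local cache:    {rtt_cached_s * file_ops:,.0f} s of waiting")  # ~50 s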

That's the case with MPC Montreal, which launched nearly a year ago. "The Montreal studio represents our latest generation of storage technology, with about 750TB of Isilon storage and Avere, with backup on Dell disk arrays. The renderfarm is on the premises," Cannon points out.

MPC Montreal was the lead VFX studio on X-Men: Days of Future Past. Its storage solutions came into play for building CG assets, animation, rendering, lighting, and compositing.

"There's no magic storage configuration that works every time," Cannon notes. "Every shot in every film has a different storage and I/O profile, but we can change the mix of Isilon and Avere to fit the challenges of the day, week, or month."

For example, MPC Montreal initially tapped Isilon for its scaling capacity and Avere for performance, so all the workstation and render traffic went through the Avere cluster. But some shots for the X-Men feature had very large datasets of 5TB to 10TB. In those instances, "the render would flush the cache in Avere and cause performance problems for the whole studio," Cannon recalls. "So now, certain large datasets go straight to the Isilon cluster."
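
The failure mode Cannon describes is classic cache thrash: streaming a dataset larger than the cache evicts everyone else's working set. The minimal LRU simulation below makes the effect visible; the sizes are arbitrary illustrations, not Avere internals.

    from collections import OrderedDict

    # Minimal LRU cache: one oversized sequential read evicts the
    # studio-wide working set. Sizes are arbitrary illustrations.
    CACHE_SLOTS = 100
    cache = OrderedDict()

    def access(block):
        if block in cache:
            cache.move_to_end(block)  # refresh recency on a hit
        else:
            if len(cache) >= CACHE_SLOTS:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = True

    # Warm the cache with the studio's shared working set...
    for i in range(CACHE_SLOTS):
        access(("shared", i))

    # ...then stream a dataset ten times larger, as a huge render would.
    for i in range(10 * CACHE_SLOTS):
        access(("big_render", i))

    shared_left = sum(1 for key in cache if key[0] == "shared")
    print(f"Shared blocks still cached: {shared_left}")  # 0 - working set flushed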

While Cannon is happy with the storage solutions in use, he continues to evaluate the landscape. He adds, "We're at an interesting point in storage technology where we're looking at object storage and how to do global file systems. Requirements are always evolving, and new companies are coming along that could potentially offer advances in cost, performance, or features."

Christine Bunish is a veteran writer and editor for the film and video industry. She can be reached at cbunish@gmail.com.