Second Life
Volume 31, Issue 6 (June 2008)

The workstation market is flying under the radar no longer. Dismissed by some as an inconsequential remnant of technology past, today’s workstation bears little resemblance to the machine that forged the concept of high-performance, graphical computing on the desktop back in the 1980s. Although the physical makeup of the workstation has changed dramatically, demand for it hasn’t, and the market is now experiencing a very healthy second life.

It’s true: The traditional proprietary workstation of the past—machines built on homegrown RISC/Unix platforms—has all but disappeared, accounting for a minuscule 1 percent of systems shipped in 2007. That share will only decline further, following HP’s decision to drop its venerable PA-RISC line at the close of last year (see “Workstations Unplugged,” March 2008). Today, only Sun and IBM are selling traditional workstations—UltraSPARC/Solaris and POWER/AIX, respectively—and, at this point, neither vendor appears to be pushing its line for anything but strategic reasons: to serve legacy applications or to spur software development for its more profitable server lines.

As the traditional workstation declined, a new breed stepped in to fill the gap, and then some. Now dominating the market, today’s PC-derived workstation—built upon core silicon components from vendors like AMD, Intel, and Nvidia—is thriving, eclipsing its predecessor by a large margin, in numbers if not mind share.

Jon Peddie Research (JPR) closely tracks the market as part of its “Workstation Report” series and finds that the workstation market is neither simply scaling with the ebb and flow of the economy nor merely keeping pace with the broader PC market. From 2005 through 2007, the market showed anything but moderation, with year-over-year unit increases of 22.1, 21.2, and 20.5 percent, respectively, exceeding the growth seen in the overall PC market.
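Compounded, those three annual rates imply that 2007 unit shipments ran nearly 80 percent above the 2004 level, a quick calculation worth spelling out:

\[
  1.221 \times 1.212 \times 1.205 \;\approx\; 1.78
\]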

A Vibrant Spawning Ground

So no, the workstation market isn’t dying, and it’s not just chugging along with the broader client computing markets, either. Rather, it has developed its own momentum, buoyed by a growing pace of homegrown innovation rather than relying solely on technology trickling over from the PC industry.

Yes, today’s workstations leverage the economy of scale of x86 CPUs from Intel and AMD, along with 3D GPUs from Nvidia and AMD. They don’t necessarily use exactly the same components as their corporate and consumer counterparts, but they are derived in large part from the same base technology. Given the incredible economy of scale the PC industry commands, doing so makes sense in virtually all of today’s client-based markets, not just workstations. Witness the transformation of Apple’s Mac lines, all of which are now built from the same core components as PC-based desktops, workstations, and servers.

But today’s workstation is about more than just a simple repackaging of PC components. It continues to seed highly visible technology trends whose influence extends far beyond its own borders, such as multi-GPU architectures and GPU computing.

Nvidia’s SLI and AMD’s CrossFire multi-GPU solutions grabbed the spotlight as innovations to speed gameplay. But scalable hardware architectures for 3D graphics didn’t emanate from the gaming community. Pick up a few SIGGRAPH proceedings from the ’80s and ’90s, and you’ll realize that while the feasibility of scalable graphics has dramatically improved (thanks to the economics of silicon manufacturing), the underlying technology really hasn’t changed much.

Workstation vendors, such as Dell, have to keep in step with technology trends, such as the use of multi-GPU architectures.
 
And GPU computing is no longer just a curious niche of interest to only a handful of academics with access to a workstation and a little extra time on their hands; it has been the launching point for a fundamental wave of change in emerging system architectures. No longer does the conventional paradigm apply, in which the subservient GPU is limited to 3D and 2D rendering, while the CPU handles everything else. The line of processing responsibility between the GPU and the CPU is blurring, and vendors on both sides of that fence have taken notice.

For evidence, just take a look at AMD’s Fusion and Intel’s Larrabee programs. Both look to combine the best of the conventional x86 CPU with the floating-point-intensive, stream-processing prowess of the GPU. Nvidia hasn’t yet stuck its toe in x86 waters, but it is placing big bets on a heterogeneous computing future as well. Of course, none of these industry leaders are making such substantial investments simply to chase today’s relatively small opportunities in research and academia. Rather than simply an end in itself, GPU computing marks the beginning of a fundamental shift in computer architecture.

The Multi-core Software Gap

Despite the welcome and surprisingly strong upswing in growth, the workstation market is a mature one. And with that maturity comes a growing dependence on the replacement cycle to sustain volume. If customers judge the latest features and performance levels as offering a cost-effective boost in productivity, they’ll replace equipment more often, thereby shrinking the cycle and raising volume. But if they see little benefit in the industry’s latest round of new products, they’ll be more likely to hang on to their old systems and software longer, thus extending the cycle and lowering volume.

And therein lies the risk for the workstation industry, as it makes its way into the new age of multi-core computing architectures. Raising the bar with every new hardware generation—enough to entice customers to throw out the old and replace with the new—has never been easy, but with the advent of multi-core, it’s gotten that much harder. Once largely responsible and accountable for raising performance from one generation to the next, hardware vendors now find themselves increasingly dependent on the software industry to realize substantial performance gains.

According to figures from JPR, workstations are experiencing resurgent growth.
 
That’s a new, uncomfortable position for hardware vendors such as Intel. While it toyed with modest forms of Simultaneous Multi-Threading (SMT)—for example, the Pentium 4’s Hyper-Threading—for the most part, the company could focus on single-threaded architectures, designing in more elaborate superscalar features and dialing up the gigahertz. Compilers needed to stay in sync with architectural improvements, but largely, the hardware provider alone determined how much of a jump in performance the user would see.

And for a while, things were good. If Intel did its job correctly, last year’s x86 binaries ran on this year’s beefed-up processor, everything got substantially faster, and buyers could justify forking over the dollars to upgrade to the new platforms.

But the Pentium 4 marked the beginning of the end for that paradigm. Clock rates began hitting a wall, and achieving even small boosts in frequency meant significantly more complicated designs and dramatically more watts. Power consumption was spiraling out of control, beyond what could be effectively cooled and beyond what made sense to pay for in electricity.

So, en masse, the computer industry revisited the problem and arrived at a new answer: multi-core architectures. Rather than try to double the clock rate of a single, ever-larger CPU core, vendors integrated twice the number of cores running at roughly the same frequency. That brought power back under control while still delivering twice the theoretical performance of the previous generation.
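The trade-off follows from a textbook, first-order model of CMOS dynamic power (an illustrative approximation, not vendor-specific data): power scales linearly with frequency but with the square of voltage, and pushing frequency higher generally requires more voltage, too.

\[
  P_{\mathrm{dyn}} \;\approx\; \alpha\, C\, V^{2} f,
  \qquad
  V \propto f \;\Rightarrow\; P_{\mathrm{dyn}} \propto f^{3}
\]

Under that rough model, doubling the clock can cost on the order of eight times the dynamic power, while doubling the cores at the same voltage and frequency costs roughly twice the power for roughly twice the peak throughput.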

But with every engineering decision comes a trade-off. And in the case of the migration to multi-core, that trade-off meant hardware vendors now had to share control over performance and productivity much more evenly with application developers. In the multi-core era, raising performance depends far more on the software developer, who must explicitly draw out parallelism and implement multi-threaded code efficient enough to keep more cores busy more of the time.
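To make the shift concrete, here is a minimal sketch, written in C++ with OpenMP, of the kind of explicit parallelization now being asked of application developers. The workload and variable names are hypothetical, chosen purely for illustration; real engineering and content-creation codes are, of course, far messier.

// Hypothetical workload: apply a floating-point operation across a large array.
// Compile with an OpenMP-enabled compiler, e.g., g++ -fopenmp example.cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    const int n = 10000000;
    std::vector<double> samples(n, 1.0);
    std::vector<double> results(n);

    // Serial version (three of a quad core's four cores would sit idle):
    //   for (int i = 0; i < n; ++i)
    //       results[i] = std::sqrt(samples[i]) * 2.0;

    // Explicitly parallel version: OpenMP splits the iterations across all
    // available cores. Only the work inside this loop scales with core count;
    // any code left outside it stays serial and caps the overall speedup.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        results[i] = std::sqrt(samples[i]) * 2.0;

    std::printf("first result: %f\n", results[0]);
    return 0;
}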

So today’s celebrated next-generation processor with twice the number of cores as last year’s model might deliver a significant boost running the end user’s key application, or it might not. It now depends a great deal on what code is running. Installations that rely heavily on carrying unmodified legacy code forward from platform to platform may be disappointed in the performance boost a new quad core delivers over its dual-core predecessor, for example.
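Amdahl’s law makes that dependence concrete. As a rough illustration, assume (hypothetically) that 60 percent of an application’s run time parallelizes perfectly while the rest stays serial:

\[
  S(n) \;=\; \frac{1}{(1-p) + p/n},
  \qquad
  p = 0.6:\quad S(2) \approx 1.43,\quad S(4) \approx 1.82
\]

Moving from dual core to quad core in that case buys only about 27 percent more performance, nowhere near the doubling the core count suggests, and the returns shrink further as cores multiply.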

Legacy x86 code may still perform better on a newer processor, but that gain likely comes not from the presence of more cores, but from other system enhancements, such as faster memory or I/O access, or from clever design tricks. For example, AMD and Intel—both keenly aware of the need to keep improving single-thread performance—have introduced power-averaging techniques that allow a system running single-threaded code to shut off one core and use the saved power to crank up the frequency of the core still working. So older single-threaded code will get some boost out of a new platform, but very possibly not enough for a manager debating whether to invest serious dollars (out of a tight IT budget) to upgrade staff workstations.
 
Slow Going
So how is the ISV community doing at keeping up the pace of multi-threading with the hardware industry’s pace of multiplying cores? While the situation, of course, varies dramatically by application, the answer is generally—and unanimously—not good enough. ISVs aren’t getting the extended, near-linear scaling they want; instead, they too often see performance trailing off after only two or three cores, with diminishing returns beyond that (sometimes drastically so).

But there is hope. The industry understands how critical it is to improve programming on more massively parallel platforms, and companies are allocating more money, staff, and PR to address it. Alert workstation vendors, like HP, are working with the ISV community to help promote and stimulate more effective multi-core programming.

Unlike its competitors, Boxx Technologies has stayed solely focused on the professional market, attracting power users with its PC-derived solutions.
 
Vendors such as RapidMind have seen the need and are developing the tools to better map code to the more massively parallel architectures of the future. Hardware leaders AMD, Nvidia, HP, IBM, Intel, and Sun are establishing research sites like the new Pervasive Parallelism Lab at Stanford, chartered to pursue new, more effective models for the future’s massively parallel architectures. And the vendors with the most at stake in this battle—Intel and Microsoft—are kick-starting research and development of new tools and new approaches to multi-core programming, with the two companies already putting up more than $100 million in funding.
 
Don’t Forget the GPU
Exploring beyond their traditional boundaries, GPU vendors such as Nvidia and AMD are stepping up to help in the battle to scale system performance (though Intel, of course, isn’t particularly enthusiastic about that proposition). With the transition to unified arrays of massively parallel, programmable engines, GPUs are moving well beyond simply rendering triangles and are instead tackling stream-intensive, floating-point-heavy, general-purpose compute problems beyond graphics.
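The programming model behind that shift is data-parallel: express the work as a small kernel applied independently to every element of a large stream, and let the hardware decide how many elements run at once. A rough, CPU-only sketch of the pattern follows (hypothetical names, plain C++); GPU toolkits such as those for Nvidia’s Tesla and AMD’s FireStream express the same idea through their own APIs.

#include <cstdio>
#include <vector>

// The "kernel": a small floating-point function applied to one element of a
// large stream, with no dependence on any other element -- the property that
// lets the hardware run as many copies in parallel as it has processors.
inline float saxpy_element(float a, float x, float y)
{
    return a * x + y;
}

int main()
{
    const int n = 1 << 20;                        // a million-element stream
    std::vector<float> x(n, 1.0f), y(n, 2.0f), out(n);
    const float a = 0.5f;

    // On a CPU, the kernel runs inside an ordinary loop. A GPU computing
    // toolkit instead launches the same kernel once per element, spread
    // across the chip's unified array of programmable stream processors.
    for (int i = 0; i < n; ++i)
        out[i] = saxpy_element(a, x[i], y[i]);

    std::printf("out[0] = %f\n", out[0]);
    return 0;
}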

While a processor vendor might be shooting to see the next-generation quad core deliver 50 percent more performance than last generation’s dual core, GPU computing applications running on Nvidia’s Tesla and AMD’s FireStream hardware are delivering some eye-popping numbers for well-suited applications. In work performed with the University of Illinois (and presented at the Hot Chips conference in 2007), for example, Nvidia claims a wide range of speedups for its targeted applications—anywhere from 1.5X to more than 400X, depending on the application.

An increase in system throughput is an increase, whether it comes via the CPU, the GPU, or anyplace else. And the promise of better throughput will always spur more upgrades and entice new buyers.
 
User Multi-tasking
Beyond relying exclusively on improving application multi-threading, the workstation industry fortunately has another avenue to pursue in its goal to translate more cores into better end-user productivity: the actual end user.
 
The thinking is this: If a task for one application can scale performance efficiently by taking advantage of two cores, then a user kicking off two such tasks in parallel ought to effectively consume four cores. And the best part is that it simply takes more explicit advantage of what we’ve all been doing already, consciously or not: partitioning a project into distinct tasks, sorting out which can execute in parallel and which must be handled sequentially, and then adapting our workflow to match.
 
Resourceful engineers and artists quickly develop their own techniques for overlapping iterations and batching up jobs to run in parallel. Take the illustration from HP, Intel, and component car manufacturer Factory Five depicting an iterative workflow for CAD: render, review, test, analyze, adjust, and repeat (see graphic, this page). Tweak the design and then kick off a detailed rendering and an FEA run, while at the same time visually reviewing the modified assembly.
 
Similarly, digital content creators naturally overlap tasks in the pipeline: adjust a model, render a scene, tweak the animation, render a rough sequence, review, re-render, and so forth. Whatever the space, resourceful professionals have always adapted their own workflow to a parallel process whenever they can. It’s just that now with multi-core architectures, those available compute cycles will be more plentiful and better suited to handle discrete tasks.
 
CAD professionals multitask and adapt their workflow to a parallel process whenever possible. This depiction from Factory Five is a prime example of such a process.

So just as the ISV needs to raise the tempo of application multi-threading, users will need to pick up the pace in workflow multi-tasking. Fortunately, users are already getting some help to do just that.

Workstation vendors have their suppliers—of processor platforms, graphics cards, and displays—to thank for helping users juggle more and more tasks in parallel. Dual dual-link DVI interfaces have trickled down the add-in card lines from AMD and Nvidia, and recent platforms from both Intel and AMD now let workstation vendors populate two of those cards in a single system. Throw in dramatically lower prices on high-resolution LCDs, and it has become both easy and inexpensive to deploy two, three, or even four high-resolution displays on the desktop.

More screen real estate lets us manage more tasks at the same time, keeping more hardware resources busy and getting more work done in the process. And, ultimately, that’s what it all boils down to: delivering a meaningful boost in productivity. Multi-core workstations can get more done in less time than their predecessors, but ensuring that they do will mean an increased emphasis not just on getting the application to do more in parallel, but on giving the user the tools to do the same.

Influx of New Buyers
As the workstation market matures, growth has become less about expanding the total available market and more about penetrating more of the market already there. The professional population engaged in CAD, digital content creation, electronic design automation (EDA), medical, and other spaces will likely grow, but not at a pace that, on its own, will satisfy workstation vendors. Instead, growing the number of workstation users will have to come from drawing in more of the existing pool of professionals.

Arguably, the single largest pocket of untapped users well suited to workstations is the Autodesk AutoCAD community. Large in number, AutoCAD users have not traditionally been buyers of professional-class hardware, be it workstations or professional-brand graphics hardware. But that’s about to change, and in the very near future. A recent move by Autodesk to raise the minimum system requirements for AutoCAD 2008 (when running on Vista or doing 3D modeling) should spur major growth in the market for workstations and professional graphics.

As the prices for professional-class hardware have fallen, Autodesk, for one, has taken notice. CAD independent software vendors have consistently reported fewer support issues (per user) for tested, certified professional cards than for their consumer-focused counterparts. Now, with certified cards available at prices accessible to all, Autodesk has taken action, raising the base-level hardware requirements for its popular CAD program. No longer is just any card or motherboard graphics solution with a DirectX or OpenGL driver blessed with support.

Now, for installations running 3D on AutoCAD 2007 (or running any package on Vista), the company’s minimum system requirements explicitly specify “workstation-class graphics cards with 128 MB or greater.” Autodesk’s motivation and intent are clear from the company’s product literature: “Autodesk-certified graphics hardware is better suited for the 3D display features of AutoCAD 2007 and AutoCAD 2008. Non-certified graphics hardware may not support these new features or may cause problems during use.”
 

As Factory Five illustrates, more screen workspace lets users juggle more tasks simultaneously, keeping more cores busy.
 
AutoCAD users moving to certified hardware have two choices: upgrade with a workstation-caliber graphics card (like Nvidia’s Quadro FX or AMD’s FireGL), or choose that card when ordering their next computer. With certified cards available almost exclusively in branded workstations, Autodesk’s move to raise system requirements will have a considerable impact on future workstation sales.

How big will that impact be? Today, JPR estimates a total AutoCAD installed base of about 7.4 million users, of which just shy of 60 percent run some full-fledged version (with the remainder using AutoCAD LT). With three million workstations shipped in 2007, even a relatively small migration of AutoCAD users will mean a big percentage rise in unit volume over the next few years.
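A back-of-the-envelope calculation shows why, using a purely hypothetical 10 percent migration rate chosen for illustration rather than taken from any forecast:

\[
  7.4\,\mathrm{M} \times 0.6 \;\approx\; 4.4\,\mathrm{M}\ \text{full AutoCAD seats},
  \qquad
  4.4\,\mathrm{M} \times 0.10 \;\approx\; 440{,}000\ \text{potential workstation buyers}
\]

Even spread over a few years, several hundred thousand incremental buyers would be a meaningful addition to a market shipping roughly three million units annually.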

But we don’t expect that portion to be small. It’s unlikely we’ll see an overwhelming majority, but we expect far more than a trickle. And that influx of new buyers bodes well for a strong future, despite a shaky and uncertain short-term economic outlook that will see some tightening of belts. JPR expects the workstation market to significantly outpace the broader, PC-based client computing markets and, by 2011, to approach six million units per year, nearly double the shipments of 2007.

High Demand
Those who may have written off the workstation as a dead-end platform technology have missed the big picture. The real story isn’t the fall of yesterday’s proprietary systems; it’s the growth of the new breed. While technologies, architectures, and business models may change, the demands of the workstation professional—higher reliability, application-tuned performance, and application-specific features—have not.

Today’s workstations continue to fit the bill, and the industry’s more PC-like business model has only improved the situation, enabling not only higher performance levels, but dramatically lower prices as well. And lower prices mean that more professional-caliber hardware is accessible to the professionals who need it.

As always, the industry faces the incessant challenge of delivering substantially better productivity with each new hardware generation. But new opportunities will emerge as well, and the resilience and resourcefulness of technology suppliers and users themselves should continue to position the workstation as a vital tool for the most demanding client applications. 

Alex Herrera is a senior analyst with Jon Peddie Research and author of the “JPR Workstation Report” series. Now in its fourth year, JPR’s “Workstation Report” has established itself as the essential reference guide for navigating the markets and technologies for today’s workstations and professional graphics solutions. Based in Tiburon, CA, JPR (www.jonpeddie.com) provides consulting, research, and specialized services for a range of digital media-related technologies and markets.