Guest Editorial
Volume 30, Issue 7 (July 2007)


There’s nothing quite like big anniversaries to make one feel old. I just ran across the expression “age rage,” and I think I know all too well what that means. So when I was contacted by the editor of this magazine and reminded that I had founded it nearly 30 years ago, I took the news with mixed emotions.

Thirty years is a long time by nearly any standard. It’s almost unheard of in magazine publishing, and even more so in the technology magazine business, so it’s a remarkable testimonial to the staying power of CGW—and to the enduring value of computer graphics technology, which continues to awe as it increasingly adds value to our lives and our livelihoods.

It’s hard not to reflect back a bit, though I know that one person’s trip down Memory Lane might be his audience’s interminable journey down Boring Boulevard. But bear with me; after all, this is about computer graphics, so how can it be boring?

The Attraction

I wasn’t the only graduate student in physics in the early 1970s who was faced with the fact that there were no jobs for physicists then. It was a time when the confluence of a national recession, the waning days of the Vietnam War, and a glut of post-Sputnik science graduates made for a rather bleak job outlook. Like many, I took refuge in a research institute programming early computers, since computer science as an academic discipline was in its earliest stages, and we science and math types had to step into the breach.

I found computers and programming boring and tedious, though, and could hardly bear to come to work in the morning. That is, until I discovered a little subset called computer graphics, which was little more than a concept then, a subject of speculative interest outside of the high-powered research labs to which it was confined. But it was the one thing that looked really interesting about computers, and it seemed like something worth digging into and committing to—unlike the mechanistic number crunching and report generating that accounted for almost all of computing back then, which often seemed like something we could teach monkeys to do.

Computer graphics conjured up a rich palette of visionary ideas, of a seemingly infinite range of possibilities, grand concepts with the promise of big payoffs. It was hard not to imagine all the data then being digitized for the first time having coordinates assigned to it so it could be displayed on a screen or plotted out, illuminating the underlying relationships within it. There were so few people working in the area that everyone knew one another, and there were only a handful of companies developing products. A few key developments triggered the revolution, most notably the emergence of low-cost RAM, which made it possible to rapidly refresh a screen full of pixels and create dynamic images in color. Useful systems began to find their way into everyday industrial use, and pioneering graphics terminals quickly evolved into graphics workstations.

Starting a magazine at a time when there seemed to be no more than a dozen vendors and a few hundred actual users now seems wildly irresponsible, and at the time it was certainly a risk-laden proposition. But it was a seductively exciting time as well. And we were all, of course, younger then! I will spare readers the gory details, the adventures and misadventures of the magazine start-up, but thanks to industry sage Carl Machover, it has been chronicled in the SIGGRAPH annals and can be read at http://www.siggraph.org/publications/newsletter/v32n4/columns/machover.html

By 1981, after several years of horrendous struggle, it was clear that computer graphics was becoming a major industry in its own right, with more than 100 manufacturers, a dazzling SIGGRAPH show that was now blossoming into a major commercial event and not just a gathering of gifted programmers, and a market that was then estimated to be worth more than $2 billion. For a sense of perspective, that was the same amount that Americans spent on going to the movies that year—an interesting benchmark.

Propelling much of this early development was computer-aided design, or CAD, and its fully integrated industrial version, CAD/CAM. The productivity gains associated with CAD systems were immediately obvious and compelling, prompting large-scale investment in these tools and technologies.

In the July 1985 issue was Robert Abel’s Sexy Robot, which marked two TV firsts: an image that seems to have been made from reflective metal and one with human-like motion.

The CG Revolution
 
In June 1981, I shared the podium with then-Congressman Al Gore at a congressional hearing on productivity and technology, in which computer graphics and CAD were the star players. It was a clear indicator that computer graphics was no fluke or transient phenomenon. Al Gore may not have invented the Internet, but he scored important points by putting his political weight behind a very new technology at the time. When it was my turn to speak, I remember commenting, off the cuff, that “In a few years, wherever you find computers, you’ll find computer graphics,” and went on to predict that CG would soon be so commonplace that it would hardly be worth writing about, essentially predicting the demise of my own magazine.

Thankfully, I was dead wrong on the latter, because CG has continued to relentlessly carve out new frontiers and find new ways of maintaining the excitement of a fresh and dynamic technology. But on the former, I was pleased to be right, and much sooner than I had expected. Within a year, PCs were being mass-marketed, and by the time Apple rolled out its first offering, the term Graphical User Interface (GUI) was a common bit of everyday jargon. It was not only commonplace, but it was the feature that really set the Macintosh apart from the PC and launched a movement that saw computer graphics become the motive force behind an enormous games market and, ultimately, the CGI-driven movies that we now take for granted.

There are few phenomena that can match the breathtaking speed at which mini-revolutions spread under the impetus of computer graphics. One such revolution, for which I had a front-row seat, was in magazine publishing. Until somewhere around the late 1980s, magazines were produced from typewritten copy that was then typeset on a special compositing machine. Production artists literally cut strips of copy and art, ran them through a hot wax machine, and pasted them onto boards. This endearingly quaint process, with all its specialized equipment, simply vanished over the course of a handful of years, a time compression that may be unmatched in modern industrial history. The only vestiges that remain are the terms “cut” and “paste,” which have carried over in their new digital guise.

Featured in the July 1982 issue was this computer simulation of tree growth, using color to distinguish live and dead tree trunks.
 
That same issue also featured this image from Blade Runner, accomplished using model shots and painted backgrounds.
 
In a similar fashion, we watched the transformation of architecture and industrial design. Drafting tables and blueprints gave way overnight to high-resolution displays and CAD workstations. One by one, our creative disciplines have adopted a radical new tool set. It is hard to think of another factor that has driven such sweeping change as computer graphics. Imagine where the personal computer would be without computer graphics—a world in which our screens were composed only of monochrome text, running Version 568, say, of MS-DOS. What a bleak vision this suggests!

Now it is impossible to imagine a world without computer graphics. Even without a 30-year historical perspective, one has to appreciate how far the technology has come and how it shows no sign of slowing its furious pace of change. One only has to look at how live graphics and video have been ported to tiny mobile phones in the last few years.

Thinking retrospectively, it is sorely tempting to invoke the names of the people and the companies that were instrumental in getting us from there to here, but I know I would overlook some and would always regret the oversight. I will say that it was an enormous honor to have been part of both the early days and a kind of “golden age” of computer graphics as it morphed into more or less its current state. It was a great privilege to know the intellectual giants who single-handedly advanced the technology, the courageous entrepreneurs who risked so much to create useful products, and the companies (many of which no longer exist) whose names on glowing banners at a SIGGRAPH conference were a beacon to so many of us.

I used to conclude speeches all through those heady days of the 1980s with the observation that I couldn’t imagine a better place to be than in the thick of the milieu of CG technologies and applications. Today I would have to say that this could be a thoroughly contemporary sentiment as well as the musings of an old-timer. Computer graphics—what a wonderful place to be!

 
Randall Stickrod, a magazine publisher, media executive, and technology executive, is best known as the founder and publisher/editor of Computer Graphics World magazine, which helped launch the vibrant computer graphics industry. He went on to help found Wired magazine, and was involved in the launch of many others, most notably Dwell.