Volume 23, Issue 10 (October 2000)

Graphics on the Internet
Part 1: A Brief History

Exploring the past, present, and future of computer graphics and Web development, this special series, "Graphics on the Internet," is divided into three parts:

Part 1: A Brief History (October)
Part 2: 3D on the Web (November)
Part 3: Internet 2 (December)



Computer graphics practitioners usually know a great deal about the history of their field-from its beginnings in 1963 with Ivan Sutherland's debut of Sketchpad to today's photorealistic simulations. They're also familiar with the development of the Internet-from its origins in 1969 as a Department of Defense experiment in creating an attack-resistant network to today's omnipresent World Wide Web. However, most computer graphics professionals aren't aware of how these two histories have intertwined, often intimately, over the past three decades.

Here we will highlight some key facets of this little-known relationship. This historical account, never before documented, is neither neat nor linear. Rather, it's messy and episodic-and thus it's a true reflection of the Internet. It's also a story that is important to capture now-while the people who made this history are still here to tell it-for by showing us how graphics on the Internet has evolved, it will help us understand where this brave new confluence of technologies is headed.

The Internet's first incarnation, known as ARPANet (Advanced Research Projects Agency Network), was devised by the Department of Defense as a research project to see if a distributed, packet-based architecture could function reliably-even if parts of it were destroyed in some future war. But the computer graphics researchers who had access to the network in the first decade of the Internet-1969 to 1979-weren't interested in warfare. They wanted to experiment with this new type of network to see how it could be used to advance graphics technology.

What "advance" meant wasn't too specific-the network was new and no one really knew what it could do. So the researchers' first goal was simply to find out what was possible. Before they could do that, however, they first had to figure out how to share-in some interactive and preferably real-time manner-the computing and graphical display devices that existed at the sites where ARPANet's computers were hosted.
The first online graphics, like this familiar image, were created with simple ASCII characters. (Image courtesy of James D. Murray, co-author, The Encyclopedia of Graphics File Formats)

In the first decade of the ARPANet, these researchers faced three major problems in sharing equipment across the network. The first was that the network was small and the computing resources were scarce-in fact, from 1969 to 1974 there were fewer than 50 host computers on ARPANet.

The second problem was that there were a limited number of sites with a graphics orientation, and the resources were typically unique. "If MIT had an Evans and Sutherland display gadget, there might be one or two others like it in the world and none on the network," says Jim Michener, who was working at MIT in the early 1970s and was among the first ARPANet graphics researchers.

The third problem was that no way existed to exchange data and software between these graphics devices. "All the different [hardware] manufacturers had device-specific software designed for their products. They were interested in locking people into their systems," Michener says. Thus, equipment at point A couldn't transmit data or programming to equipment at point B.

At least two simultaneous attempts were made to overcome these challenges. It is typical of the time and of the Internet that neither research team knew about the other.
Created in 1966, prior to the development of ARPANet, "Nude-Study in Perception," by artist Kenneth Knowlton, is representative of some of the more complex ASCII images that proliferated on the Internet after the invention of email in 1972. (Image courtesy Kenneth Knowlton.)

One group involved researchers at Harvard and MIT, led by Harvard computer scientist Danny Cohen. He was simultaneously researching the problem of sharing ARPANet computer resources and exploring what could be done with them by developing real-time flight simulations. The simulations-which were for demonstration purposes only-involved PDP computers at MIT and Harvard and an Evans and Sutherland LDS-1 display system at MIT. All input data (pilot commands, instrument readings, and the like) were created at Harvard on its PDP and shipped via ARPANet to MIT, where that PDP executed the physics calculations and the LDS-1 computed the simulation images. These images were then shipped back across ARPANet to Harvard for the pilot to view.

The significance of this simulation, which ran for the first time in 1971, was that it was "the first real-time application over a packet network," says Cohen. "It also demonstrated how geographically distributed computers could cooperate over a network to accomplish tasks through real-time resource sharing that none of them could have performed alone."
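
For a concrete picture of that round trip, here is a minimal schematic sketch in Python. It is not the original code-which ran on PDP hardware and the LDS-1-and every function name below is a hypothetical placeholder; only the division of labor between the two sites follows the description above.

    # Schematic of the 1971 Harvard-MIT distributed flight simulator.
    # Each "network hop" noted below was an ARPANet transmission in the
    # original; here they are ordinary function calls, for illustration.

    def read_pilot_input():
        # Harvard PDP: collect stick, throttle, and instrument settings.
        return {"pitch": 0.0, "roll": 0.0, "throttle": 0.5}

    def run_physics(inputs):
        # MIT PDP: advance the flight dynamics one time step.
        return {"altitude": 1000.0 + 10.0 * inputs["throttle"]}

    def render_frame(state):
        # MIT's LDS-1: turn simulation state into a displayable frame.
        return "frame(alt=%.1f)" % state["altitude"]

    for step in range(3):                # the real-time loop
        inputs = read_pilot_input()      # Harvard -> ARPANet -> MIT
        state = run_physics(inputs)      # computed at MIT
        frame = render_frame(state)      # drawn at MIT
        print(frame)                     # MIT -> ARPANet -> Harvard pilot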

Cornell student Eric Haines employed Usenet, an electronic "bulletin board" developed in the 1970s, to post his raytracing benchmark models, which included this "sphereflake" containing 7381 spheres. Illustrating the effectiveness of using the Internet to share graphics software and models, a group at AT&T downloaded the code and used it to make models for their raytracer in time for the Siggraph '87 conference-one week after the posting. (Image courtesy Eric Haines.)

The simulation is a prime example both of the episodic nature of Internet research and of work that had no immediate result yet turned out to be crucial years later. Indeed, once the concept had been demonstrated, Cohen moved on to non-graphics research. But the demonstration-which required that his team devise a means for real-time data transmission-influenced the development of a real-time Internet data transmission protocol in the 1980s. Without that protocol, what we know today as streaming (or play-as-you-download) video and audio would not be possible.

The other approach to overcoming the challenges of sharing resources entailed a direct effort to set standards and write a protocol that could run on any graphics device on the ARPANet. This work, which began in 1969, was conducted by the Network Graphics Group (NGG), a contingent of computer graphics researchers headed by Michener.

The NGG worked diligently to create a Network Graphics Protocol (NGP) that would translate commands and data between graphics hardware devices in different locations. The goal was total device independence such that a researcher at any location could perform graphics research using the equipment at any other network location. After performing experiments-some successful, some not-the NGG realized it was facing a huge obstacle.

In 1987, Graphics Interchange Format (GIF), one of the most popular file formats on the Internet, was created by CompuServe to enable subscribers to view graphics files as they were downloading. These images of the old Freedom Space Station show how an image could be interlaced, or broken into rows and downloaded in stages, with each pass revealing progressively more of the image. (Images courtesy NASA.)

"We had an 'n-times-n' problem," says Michener. "We had 'n' different hardware-specific application programs and 'n' kinds of hardware. We'd need 'n-squared' converter programs." The researchers realized that "n" was going to keep increasing with each new program, each new device, and each new upgrade. Despite this obstacle, a "Level 0," or baseline, NGP was written and presented in 1974 at a meeting at the National Bureau of Standards.

At the meeting, the NGG discovered that others in the mainstream graphics community were grappling with the same problem of device independence. "There was a convergence of people who hadn't been so aware of one another," Michener says. So the NGG was absorbed into the Graphics Standards Planning Committee (GSPC), which five years later published the Core (ACM Siggraph's 3D Core Graphics System) specifications, aimed at device independence.

Apart from GSPC's work in developing the Core specs, from 1974 to the mid-1980s researchers essentially lost interest in the idea of "doing" computer graphics across the ARPANet. Instead, they turned to expanding the capabilities of their own in-house facilities. When computer graphics resurfaced online in the '80s, the goals of the participants were dramatically different from those of their predecessors.

Rather than trying to use the Internet to advance computer graphics, researchers in the 1980s focused on using the network to share ideas and distribute software, models, and even the occasional image. This new ability to communicate instantly would break down the barriers surrounding academic and other researchers, who had always been limited to conferences and journals as their sole means of exchanging information.

The new capabilities were based on foundations laid in the 1970s: the development of email in 1972, file transfer protocol (FTP) in 1973, and Usenet (known as newsgroups or netnews) in 1979. But the developments of the 1980s would go beyond simply realizing the achievements of the previous decade. This era would also usher in the development of the first standard file formats for the device-independent exchange of images.

In the mid 1980s, researchers at the University of Utah developed the Utah Raster Toolkit (URT) to store images such as these in a single, standard format. URT was one of the first sets of programs used on ARPANet for exchanging images. (Images courtesy John Peterson and the University of Utah Computer Science Department.)

The first of the 1970s technologies to bear fruit in the 1980s was email. Its efficiency in communicating information is now obvious. But in the '80s, exchanging email was not easy. "Email addresses had to be explicit," says Eric Haines, who was studying computer graphics at Cornell in the late '80s. "Mine was ...!hpfcla!hgpfcrs!eye!erich, and it was your job to figure out how you could get to hpfcla. Still, through just email and netnews, many ideas were exchanged and developed rapidly."

FTP was the second technology from the '70s that came into its own in the '80s. FTP sites were computer facilities that served as large, archival storage sites for software and data. They were rare, but where someone had access to one, the simple placement of software or data on the site could draw attention to the developer. This was due largely to the fact that material stored on an FTP site could stay there indefinitely. For example, when an extremely popular public-domain raytracing package called "MTV" was posted on an FTP site by the program's creator, Mark VandeWettering, a graduate student at the University of Oregon, the site quickly became a mecca for raytracing information.

The third 1970s "sharing" technology was Usenet. This system-which appeared to users much like a collection of electronic bulletin boards-involved a store-and-forward method for posting messages. Users could access specific newsgroups, read the messages, and post replies. Though the "life span" of any given message was a few days at most, Usenet proved to be a valuable source of graphics information. The first graphics newsgroup, net.graphics, was formed early in the '80s. By the end of the decade, it had become comp.graphics, and it was a nexus for graphics discussions. It was, according to Haines, "how many people learned much of what they know about computer graphics."

In 1994, writer and educator Mark Pesce introduced the notion of virtual reality to the Web community, which immediately called for the development of a language to create virtual worlds online. What soon followed was VRML-Virtual Reality Markup Language-which Pesce used to create a 3D rotating model of Earth that is still being fed today by real-time satellite data. (Image courtesy of Mark Pesce)

In addition to providing "bulletin board" environments where users could post messages to each other, Usenet also had newsgroups dedicated to the posting of source code for programs, models, and images in plain-text format. The posts had to be short because most users could not access long messages. "So people would post code in a series of news postings to a source code group," Haines says. "You'd get them all and 'glue' them back together."
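
In miniature, the mechanics looked something like the Python sketch below. It is only a schematic of the idea-real posters typically used shell archives and similar tools rather than anything this bare:

    # Multi-part source postings in miniature: split a file into chunks
    # small enough to post, then concatenate them in order at the other
    # end.

    def split_posting(text, size):
        return [text[i:i + size] for i in range(0, len(text), size)]

    def glue_postings(parts):
        return "".join(parts)

    source = "int main() { return 0; }\n" * 40  # stand-in for a long program
    parts = split_posting(source, 200)          # several separate postings
    assert glue_postings(parts) == source       # reassembled perfectly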

Haines's own Standard Procedural Database (SPD)-a compilation of model programs designed for testing the speed of raytracing algorithms-is an example not only of how Usenet could be used to post source code, but also of how fast Usenet worked. Haines placed SPD on a Usenet source code group one week before Siggraph '87. AT&T's Pixel Machine group picked it up and used it to make models for its raytracer in time for the conference. Afterward, Haines's models quickly made their way from Usenet-where postings lasted only a few days-to an FTP site. The models are still available on the Web today.

The Usenet source code groups would also see the occasional posting of actual image code. But because the image code was typically huge-often much longer than program code for creating an image-and because the network's backbone speed at the time was 56kbps, such occurrences were rare. However, that it was possible to post image code at all was due to the emergence of graphics file formats designed for sharing images.

These new file formats, unlike the developments cited thus far, actually began in the '80s. Even though the Internet would benefit from their development, that wasn't the reason the file formats were devised. "The Net was not a significant motivator to solving the graphics file-format problem," says John Peterson, a computer science graduate student at the University of Utah during this period. "The main issue was the chaos taking place right within your own lab or office. It seemed like every new application, window system, and piece of hardware introduced yet another file format, and just getting the simplest tasks done-like displaying an image on your screen-required a format conversion."

This chaos led to the development of formats such as the Utah Raster Toolkit (URT), one of the earliest file formats/graphics toolkits used on the ARPANet for exchanging images. Peterson, who co-authored URT, says, "As the Net began to take off in the academic community in the mid to late '80s, many of these tools were already in place to facilitate sharing images."

In the 1990s, VRML was used to create a variety of online virtual worlds, including this early example of a house, which can be seen by the viewer from above (left), from the front while walking to the front door (middle), and from the inside (right). (Images courtesy Sandy Ressler, About.com Guide to Web3D.)

Ironically, the 1980s file format that would ultimately be one of the most popular on the Internet was not developed for ARPANet or even for the graphics community as a whole. Called Graphics Interchange Format (GIF), it was created in 1987 by CompuServe to enable its subscribers to view graphics files as they were downloading.
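
The interlacing shown in the space station images above follows a fixed four-pass row order that is easy to compute. Here is a small Python sketch of that ordering; the start rows and strides come from the GIF89a specification:

    # The four-pass GIF interlace order: each pass fills in more rows, so
    # a partial download already shows a coarse version of the picture.

    def gif_interlace_order(height):
        passes = [(0, 8), (4, 8), (2, 4), (1, 2)]  # (first row, stride)
        order = []
        for first, stride in passes:
            order.extend(range(first, height, stride))
        return order

    print(gif_interlace_order(16))
    # [0, 8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15]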

By 1989, ARPANet had been taken over by the National Science Foundation (NSF). Renamed NSFNet, the network had considerable influence in broadening communication between computer graphics researchers and practitioners. But without the final significant event of the decade, it is likely that even today the Internet would still be the province of academia and government. That event was a proposal by Tim Berners-Lee to CERN (the European Organization for Nuclear Research) in Geneva for something called the World Wide Web, a concept that would come to dominate computer graphics and everything else on the Internet in the network's third decade.

Initially, the World Wide Web did not make much of an impression on the NSFNet community. One reason was that few Web sites were being created. For Berners-Lee's concept to work, multiple sites had to be "hyperlinked" so that they could be instantly (or at least quickly) connected. Hyperlinking was accomplished by embedding the means to access remote sites within the text of a page using hypertext markup language (HTML)-the source code for the Web. All a user had to do was click on the link to be transferred across the network to the specified site.

HTML allowed users to make connections between Internet sites without knowing how to find them-a huge change from the days when users had to specify the entire path to any place they wanted to go on the Net. Users didn't have to know HTML, either. Web browsers-software that read the HTML source code and converted it into readable text-took care of all that. But there weren't many Web sites at first and, besides, they-and the first browsers-were text-based, and therefore were neither graphical nor exciting.
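
The mechanism is simple enough to sketch. In the Python fragment below, the page content is invented for illustration, but the href attributes it contains are exactly what an early browser extracted in order to follow a link:

    # The essence of hyperlinking: the href targets embedded in a page's
    # HTML are all a browser needs in order to jump to another site.

    import re

    page = ('<p>Visit the <a href="http://info.cern.ch/">first Web site</a> '
            'or <a href="gallery.html">our image gallery</a>.</p>')

    links = re.findall(r'href="([^"]+)"', page)
    print(links)  # ['http://info.cern.ch/', 'gallery.html']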

In the last half of the 1990s, the research orientation of graphics on the Internet gave way to business applications. For example, this e-commerce-oriented 3D visualization was created with Shout Interactive's 3D Web technology to show customers how all the elements of an Ascend Communications network are related. (Image courtesy Shout Interactive.)

But in 1992, all that would change when news of the Web caught the attention of University of Illinois at Urbana-Champaign graduate students Marc Andreessen and Eric Bina. Dissatisfied with the text-based Web browsers that existed at the time, they decided to build a free, easy-to-use, graphical browser based on X-Windows. The browser, named Mosaic, was released in January 1993. In March of that year, Web traffic was 0.1 percent of all NSFNet traffic. By September, it had reached 1 percent-the first of many order-of-magnitude leaps. In fact, by the end of the year, Web traffic was increasing at an incredible annual rate of nearly 350,000 percent. Mosaic was the "killer application" for the network, and nothing before or since has matched its impact.

Although text and graphics were seamlessly mixed in Mosaic, Bina was initially reluctant to include graphics support. "I was concerned about the misuse of images," Bina says. "Marc thought that people would only use small, iconic images. I was afraid that they would use big ones that would waste a lot of bandwidth." Bina wanted to limit the size of supported graphics, but Andreessen convinced him otherwise.

Once Mosaic's popularity exploded, Bina's nightmares came true. "First, people were loading up full-page images. But that wasn't the worst of it. They were also printing out PostScript documents, scanning them in as images, and then sending them out. That was over a million bytes for a text document." When Bina confronted Andreessen with this horror, Andreessen simply laughed and said he'd known it would happen all along, but that graphics were necessary anyway. With Mosaic and a faster network, graphics had become a reality for everyone with Net access.

Other commercial applications of Web graphics from the last half of the 1990s included the first online 3D fashion show for Macy's and Excite@home. Created with Shout Interactive software, the site provided user interactivity and featured sophisticated graphics, such as real-time skeletal deformation, anti-aliasing, and 360-degree panoramic backgrounds. (Image courtesy Shout Interactive.)

Though the Web overshadowed every network event-graphical and otherwise-during the '90s, several other online graphical events were occurring. One notable event was the arrival of a new online graphics format: JPEG, named for the Joint Photographic Experts Group, the standards committee that defined it. The format-really a family of algorithms-owed much of its spread across the network to the Independent JPEG Group (IJG), a body of volunteers not affiliated with any formal standards-setting body, which released a free implementation in 1991. JPEG offered a more efficient alternative to GIF. Unlike GIF, which is a straight bitmapped format that puts pixels of specific colors in specific places and has limited compression capabilities, JPEG uses its complex algorithms to let a user "tune" images and balance quality against file size. The greater the compression, the lower the image quality, and vice versa. JPEG became just as popular as GIF for exchanging images across the network, and the two are still the most popular formats for static graphic images.
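
That tuning knob is still exposed by everyday tools. The Python sketch below, which assumes the third-party Pillow imaging library is installed, saves the same generated test image at three quality settings and prints the resulting file sizes:

    # JPEG's tunable trade-off in practice: saving the same image at
    # different quality settings yields very different file sizes.

    import io
    from PIL import Image

    img = Image.radial_gradient("L").convert("RGB")  # built-in test image

    for quality in (10, 50, 95):
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=quality)
        print("quality=%d: %d bytes" % (quality, buf.tell()))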

A second major event occurred in 1994, when the First International Conference on the Web was held at CERN in Geneva. At the conference, writer and educator Mark Pesce introduced the notion of virtual reality (VR)-computer graphics "worlds" that resided in "cyberspace"-to the nascent Web community. Attendees were fascinated, and they called for the development of a language similar to HTML to describe virtual reality so that these "worlds" could be created and browsed online-just as Web users could browse text and static graphics from site to site. Pesce, who had been working on the concept, agreed, and soon thereafter Virtual Reality Markup (or Modeling) Language (VRML) was born.
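
A complete VRML world could be only a few lines of plain text. As an illustration (the scene itself is invented), the Python sketch below writes a minimal VRML 2.0 file-a single red sphere-of the kind a VRML browser of the era could load:

    # A complete VRML world in a few lines: a single red sphere.

    world = (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        "  appearance Appearance {\n"
        "    material Material { diffuseColor 1 0 0 }\n"
        "  }\n"
        "  geometry Sphere { radius 1 }\n"
        "}\n"
    )

    with open("red_sphere.wrl", "w") as f:
        f.write(world)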

After the 1994 Web Conference, Pesce and other VRML devotees began to promote the concept in a somewhat evangelistic fashion. It worked. From 1994 through 1996, VRML experienced a golden age, as companies sprang up hoping to make a mint in the exciting new world of 3D on the Web. However, the expected opportunities didn't emerge, and most of the 3D companies folded or changed business or ownership.

After the ensuing disarray, VRML was condemned as a failure and seemed to disappear. But that was an illusion. In 1996, a new body arose to promote VRML-the VRML Consortium. Two years later-with Pesce's blessing-the body became the Web 3D Consortium, and its approach changed from evangelism to a quiet, business-oriented approach, which continues today. The consortium also changed the original VRML concept from one of isolated, virtual worlds to one of omnipresent 3D. Neil Trevett, the current Consortium president, says, "[We] want 3D capability to be fundamentally woven into the fabric of the Web. We don't want it to be a plug-in. On a PC, 3D is a second-class data application. In Windows, 3D is in a little window. I want true 3D elements embedded in and floating behind my 2D work pages. 3D must become as easy and as pervasive as 2D."

The late 1990s witnessed the advent of "immersive" virtual meeting places, including Advopolis, a 24-hour European business-to-business Web site designed with Blaxxun Interactive software for the legal, financial, and accounting communities. Company representatives and clients, all of whom are represented by avatars, meet in an auditorium or virtual conference room, review files, and view streaming videos via RealNetworks's RealPlayer. (Image courtesy Blaxxun Interactive.)

The Web 3D Consortium is not the only evidence of VRML's survival. Pesce points out that the entire 3D part of the latest MPEG-4 (Moving Picture Experts Group) video specification-used by everyone doing video whether they are online or off-is really VRML. Pesce also emphasizes that this segment of MPEG is the only ISO-approved standard for 3D. Under the new name "X3D," VRML has also been folded into the effort around eXtensible Markup Language (XML)-a more general relative of HTML that lets applications interact with data and other content on Web pages. VRML is thus alive and well-it has simply become the basis of a general effort to bring 3D into the Web's mainstream.

Another significant Web event occurred in 1995, when, in the midst of the VRML golden age, NSFNet completely turned the network-already being called the Internet-over to commercial entities. That marked the end of the research orientation of the network and signaled that the Internet was now dominated by business.

This business included computer graphics, which, in the last half of the '90s, has itself grown into an immense industry on the Internet. For example, Netscape-a graphical Web browser derived from Mosaic-went public in 1995 with the third largest initial public offering on record at the time. But that was just the beginning.

As the 3D-formerly VRML-community moved away from its early evangelistic approach, it too began to focus on business in the last half of the '90s. For example, Shout Interactive began selling its 3D Web technology to e-commerce companies for use in applications such as interactive product display, demonstration, and assembly that give users a better understanding of products than simple flat images. Because Shout3D doesn't require special browsers, it avoids the problem of downloading special software and plugging it into a Netscape or Internet Explorer browser, and it lets users enter a 3D world as easily as they access any other Web page.

Another example of the current business focus of Web graphics technology is Blaxxun Interactive, whose specialty is 3D "immersive" worlds-complete environments where users can move around and interact with people and objects. Here users are represented by avatars-computer graphical manifestations of their physical selves. Blaxxun's technology has already been put to use to build an online meeting facility named Advopolis in Europe, where lawyers, accountants, and financiers-or rather their avatars-meet and discuss real business with the avatars of real clients.

A final example of how computer graphics has become big business on the Internet is the case of streaming or real-time video, partially based on Danny Cohen's work at Harvard some 30 years ago. The current interest in streaming video stems from new and enormously efficient compression schemes, higher backbone and local-loop bandwidth, and an entirely new business of placing content-caching servers closer to end users so data needn't travel across the length of the Net in response to user requests.
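
The caching idea reduces to a short sketch. In the Python fragment below (a schematic only-every name is hypothetical), a segment of video travels the long haul to the origin server once, and every later request for it is served from a nearby cache:

    # Content caching in miniature: serve a video segment from a nearby
    # cache when possible; go back to the distant origin only on a miss.

    edge_cache = {}

    def fetch_from_origin(segment_id):
        return "video-bytes-for-" + segment_id  # stand-in for a slow fetch

    def serve_segment(segment_id):
        if segment_id not in edge_cache:        # miss: the long haul
            edge_cache[segment_id] = fetch_from_origin(segment_id)
        return edge_cache[segment_id]           # hit: the short hop

    serve_segment("clip-001/seg-0")  # first request travels to the origin
    serve_segment("clip-001/seg-0")  # repeat request is served locally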

Another brave new immersive world developed in the late '90s for the Web is Cybertown, a virtual community for science-fiction enthusiasts also created with Blaxxun Interactive software. The site includes a business center, community auditorium, spaceport, news center, shopping mall, town plaza, beach, and recreational facilities such as a park with a pool. Cybertown is currently populated by some 350,000 avatar "residents" who have homes, jobs, and social lives in the virtual community. (Image courtesy Blaxxun Interactive.)

In the wake of these advances in streaming video technology, entertainment content providers have created specialty streaming-video niche content and what might be called entertainment "metadata" such as movie clips or celebrity interviews. On the business-to-business side, streaming video can make Web-based software rentals a reality, change the nature of business travel, and put the "conference" back in video-conferencing.

Over the past three decades, computer graphics on the Internet has evolved from its academic, research-based origins to a Web-based infrastructure technology that modern e-commerce can't live without. Online and offline computer graphics have merged into a seamless whole where the Internet is the communications and development medium of choice. As for the future, if the past is any indicator, the "killer app" of tomorrow's graphics-dominated Internet is certainly under development at this very moment.

Bridget Mintz Testa writes about telecommunications and networks for a variety of telecommunications publications. She became interested in graphics while working in space station robotics at NASA-Johnson Space Center. Based in Houston, she can be reached at btesta@hypercon.com.

Next month, in Part 2 of our series, "Graphics on the Internet," senior editor Barbara Robertson brings us into the present and examines 3D graphics on the Web-an area in which some of the most dramatic innovations in computing technology are currently taking place.