Volume 23, Issue 12 (December 2000)

Graphics on the Internet
Part 3: Tomorrow's Internet







This series examines the convergence of computer graphics and Web development. It is presented in three parts:

Part 1. A Brief History (October)
Part 2. 3D on the Web (November)
Part 3. Tomorrow's Internet (December)




Big challenges need big solutions. And when it comes to the Internet and how visionaries hope to eventually utilize it, big challenges needing big solutions abound. For instance, interacting with high-resolution visual simulations of complex physical phenomena in collaboration with researchers nationwide is a big challenge. Engaging in full-screen, movie-quality video conferences from a laptop computer is a big challenge. Remotely controlling a robotic device to perform laparoscopic surgery on an astronaut in space, guided only by 3D scan data being acquired, processed, and transmitted to Earth in real time is a really, really big challenge. "Big," however, does not mean "impossible." Thanks to high-powered academic and government research efforts, solutions to these and a multitude of other Web-based challenges may be right around the bend.

Since 1996, two separate research and development initiatives have been fueling the technology innovations needed to build a new and much-improved Internet. Both the Internet 2 (I2) and the Next-Generation Internet (NGI) are dedicated to the development of advanced networking technologies that will enable the higher bandwidth, multicasting, and overall quality of service that the current-generation Internet cannot offer, but which ambitious multi-user, interactive, real-time applications demand.

The Internet 2 project is a nationwide effort by more than 180 universities and 50 corporate affiliates to build and implement high-speed networks that are 100 to 1000 times faster than today's Internet. Similarly, the NGI, which operates on the federal level and is funded by various government agencies, promotes university research on advanced networking technologies and collaborative research and education applications.

The main distinction between the I2 and NGI endeavors is that the latter focuses on meeting the research and communication needs of the federal mission agencies, such as the Department of Defense, the Department of Energy, NASA, and the National Institutes of Health, while the goal of the I2 is to create a much faster, stronger, more reliable Internet for the general population.

In reality, that distinction is moot. There is a great deal of overlap between the two groups, and both are taking advantage of their synergistic relationship to realize their shared goals of advanced infrastructure and application development. In fact, a significant portion of NGI technology and applications development takes place at I2 member universities, where part or all of the respective researchers' I2 work is funded through NGI development grants. In addition, a joint I2/NGI engineering team meets regularly to coordinate projects, and federal agency representatives sit on many of I2's specialized work groups.
Detailed, three-dimensional representations of anatomical structures such as this knee model from the National Library of Medicine's Visible Human Project are at the core of numerous federally funded educational VR applications that are to be multicast among the project's consortium partners.




At the core of this cooperative relationship is the access that each group shares to its respective backbone communication networks, the channels responsible for carrying major traffic between smaller local and regional networks. For example, the National Science Foundation (NSF), one of the key NGI agencies, has made hundreds of merit-based High Performance Connections awards to I2 universities, through which university researchers are allowed to link to the NSF's very high-performance Backbone Network Service (vBNS). Established in 1995, the vBNS is a national network that relies on advanced switching and fiber-optic transmission technologies known as Asynchronous Transfer Mode (ATM) and Synchronous Optical Network (SONET) to achieve speeds of up to 2.4 gigabits per second (Gbps), roughly 43,000 times faster than a 56K modem. Combined, the ATM and SONET technologies enable high-speed, high-capacity voice, data, and video signals to be merged and transmitted on demand, capabilities critical for the advanced visual-computing applications envisioned for the Internet of the future.
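That speed comparison is easy to sanity-check with a few lines of arithmetic. The dataset size used below is an illustrative round number, not a figure from the article:

```python
# Back-of-envelope check of the vBNS speed comparison: a 2.4 Gbps
# backbone (OC-48 line rate) versus a 56 kbps dial-up modem.

VBNS_BPS = 2.4e9    # vBNS peak rate, bits per second
MODEM_BPS = 56e3    # 56K modem, bits per second

speedup = VBNS_BPS / MODEM_BPS
print(f"vBNS is roughly {speedup:,.0f}x faster than a 56K modem")

# Transfer time for a hypothetical 15 GB medical-imaging dataset
# (an illustrative size, chosen only to show the scale difference):
DATASET_BITS = 15e9 * 8
print(f"modem: about {DATASET_BITS / MODEM_BPS / 86400:.0f} days")
print(f"vBNS:  about {DATASET_BITS / VBNS_BPS:.0f} seconds")
```

The ratio works out to about 42,857, which is why "roughly 43,000 times faster" is the fairer rounding.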

The I2 infrastructure has its own 2.4Gbps fiber-optic backbone network, called Abilene, which was established in 1999 by the University Corporation for Advanced Internet Development (UCAID), the entity that oversees the research and funding of I2 activities. With Abilene, universities are linked in regional networks that connect to the backbone at network-aggregation points called gigaPoPs (gigabit-per-second Points of Presence). In addition to hooking into Abilene through regional gigaPoPs, university members can also use the backbone to connect to the vBNS and other advanced federal networks, including NASA's Research and Education Network (NREN), the Defense Research and Engineering Network (DREN), and the Department of Energy's Energy Sciences Network (ESnet). And the open nature of the gigaPoP configuration allows federal backbones to connect to and build on university gigaPoP links. Ultimately, the communications collaboration between the NGI and I2 camps is expected to spawn a robust national infrastructure of high-performance network capabilities built on an interoperable backbone.

The fruits of such an advanced networking collaboration can be seen in the many ambitious proof-of-concept applications being built under both I2 and NGI auspices. One high-profile example is the Internet 2 Visible Human Advanced Concepts Technology project. The effort brings together five active government contracts held by researchers at various medical schools nationwide in which the datasets of the National Library of Medicine's Visible Human Project are being used to create educational virtual-anatomy applications.

The Visible Human datasets are complete, anatomically detailed, three-dimensional representations of male and female bodies that were created from computed tomography, magnetic resonance, and cryosection images. As the educational applications are developed and refined, the consortium of contract developers will multicast them to an end-user focus group made up of medical students in a shared test of the I2/NGI technologies.

"The purpose of the project is to test the ability of each of the consortium partners to broadcast educational prototypes to the other partners via the Abilene network," says I2 communication director Greg Wood. The vision is based on the premise that complex virtual-reality simulations of the structure and functions of the human body can be shared between universities over high-speed, high-bandwidth networks. It is also expected that the high-level collaborations among the government, academia, and industry engendered by this project will scale to enhance information-sharing among scientists, toolset development, and network deployment.

Success on this front will inevitably have broad implications for computer graphics and visualization developers. "Visualization is everywhere. Nearly all applications that use advanced computing use visualization and graphics, particularly in the scientific research applications, where complex information can best, and often only, be understood visually," says Wood.

Unfortunately, the current Internet is often more of a hindrance than a help to sharing this visual data. The state of the underlying networking infrastructure defines the sophistication and usefulness of the applications developed to run on it; thus research and development are slowed by the need to focus on work-arounds to the high latency, low bandwidth, and general delivery and performance insufficiencies, says Wood. The goal of both the I2 and NGI initiatives is to build a networking infrastructure where these are non-issues, where developers' primary question will not be, "What kind of application can I build within the technology limitations imposed by the Internet?," but rather "What kind of application can I build?"
Applications relying on real-time interaction and collaboration with high-resolution medical models such as this Visible Human torso, while beyond the scope of the current Internet, are ideally suited to the Internet of tomorrow.




In fact, that is the question fueling the myriad advanced applications being developed to show the research community and the world at large that the novel networking technologies underlying tomorrow's Internet will drive collaboration and interactive access to information and resources in a manner not currently feasible. These applications generally focus on at least one, though typically a combination, of seven core application/technology domains: virtual laboratories, digital video, digital libraries, distributed learning, collaboration, tele-immersion, and distributed computation.

A quick glimpse into a handful of applications in these categories provides a window into the "big" solutions being developed to handle the challenges mentioned above and the countless others facing the Internet of the future.

Under the virtual-laboratory umbrella, researchers at Carnegie Mellon University, the University of Pittsburgh Medical Center, and the Pittsburgh Supercomputing Center are building a 3D brain-mapping application in which a patient's brain activity during visual and memory tasks, acquired with magnetic resonance imaging, can be visualized remotely in real time. In this application, the high-bandwidth Internet 2 capabilities are employed to link a parallel analysis computer to the visualization computer.

On the digital video front, the International Center for Advanced Internet Research (iCAIR) at Northwestern University, C-SPAN, Internet 2, and IBM have entered into a cooperative partnership to develop a prototype capability for delivering high-quality C-SPAN programming over I2/NGI networks, which will allow viewers to interact with video content and will allow programming to be incorporated into research and educational activities. The advanced networks will enable high-quality video presentation and a degree of interaction with digital video not currently possible.

Researchers at Boston University have developed a digital-library application with experts from the Massachusetts College of Art, the Rhode Island School of Design, and the National Center for Supercomputing Applications. Called ArtWorld, the application is a collaborative/distributed immersive environment showcasing visual and auditory works created by a number of artists. The application requires low latency for participant interaction and high bandwidth to support audio streams and to update the position and state information of objects that move and change in the virtual world.
Moving in and about a human head will be a no-brainer with the educational applications being developed as part of the Internet 2 Visible Human Advanced Concepts Technology project.




In the distributed-learning category, a number of applications are being developed by researchers at the National Biocomputation Center at NASA Ames. Among these is a distributed surgical-planning application in which geographically dispersed surgeons, sharing a virtual workspace over the I2 backbone, can visualize, interact with, and understand their patient's data and collaborate and consult with surgeons in other locations. Together with collaborators at the NASA Ames Center for BioInformatics, the system's developers demonstrated the first wide-area multicast stream over I2 (including vBNS and Abilene) last March. The team is also building force-feedback surgical simulators distributed over the high-capacity networks: a client-server system the researchers aim to extend across a wide area so remote collaborators can assess how distance learning can be performed in surgical training. Ultimately, such capabilities will be used to help provide medical care and treatment to space-bound astronauts.

To demonstrate the enhanced collaborative potential of the new Internet, researchers at the San Diego Supercomputer Center at the University of California, San Diego are relying on the high-performance networks for sharing molecular research using MICE, the Molecular Interactive Collaborative Environment. The development team is creating new methods for visualizing and sharing complex, multidimensional scientific data over networks. With the project's novel visualization tools and high-bandwidth connectivity, multiple users at different physical locations can interact via the network to collaboratively examine and manipulate a shared three-dimensional representation of a macromolecule in real time. Written in Java and Java3D, MICE is portable and "Web-deliverable," letting users view molecular scenes on their own computers as well as distribute the scenes and interact with other users anywhere on the Internet.
Regional artists are able to exhibit their digital masterpieces in a collaborative/distributed immersive environment called ArtWorld, developed at Boston University. The application requires I2's low latency for participant interaction and high bandwidth for its audio streams and object updates.




Tele-immersion is the focus of a project called the Virtual Temporal Bone, developed at the University of Illinois at Chicago, in which remote participants are immersed in the intricate anatomy of the middle and inner ear via a shared virtual-reality simulation, an experience feasible only with the quality of service and end-to-end bandwidth of the NGI/I2 technologies. The development team recently demonstrated a similar application called the Virtual Pelvic Floor.

Finally, in the distributed-computation category, the University of Illinois is also involved with the University of Pennsylvania and additional industry and academic partners in an application called the National Scalable Cluster Project (NSCP), the goal of which is to manage and mine massive (terabyte to petabyte) collections of geographically dispersed data by developing technologies that support distributed computing clusters. Ultimately, the project will provide scalable access to computing and data-management resources, so that smaller organizations in a range of application areas will be able to benefit from the same resources as those used by larger competitors.
The intricate anatomy of the middle and inner ear comes to 3D interactive life in an application called the Virtual Temporal Bone, developed at the University of Illinois. The shared virtual experience demonstrates the tele-immersion potential of I2 and NGI technologies.




These examples are but the tip of a very large iceberg. What they and nearly all of the I2/NGI applications have in common, in addition to advanced networking backbones and their potential to change the way information is used and shared, is their reliance on graphics and visualization. In fact, for most of these efforts visual computing is a critical component. In many cases, the need for advanced networking capabilities is driven by the need to satisfy the application's visualization requirements.

As federal, academic, and corporate R&D efforts mature, any one of these or the many other I2/NGI projects could prove to be the "killer application" needed to push the expanded networking capabilities out of the research domain and into the world at large. "The technology itself is fairly well established. The challenge lies in deploying it," says Wood. "Once campuses, with their heterogeneous makeup, begin to do this, there will be proof of concept that these technologies make sense." And when those lessons start to filter to the commercial world, the Internet of the future will be upon us.

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.






In addition to its impact on science and technology research and development, the new Internet is expected to play a major role in education. The Geology Explorer application perhaps epitomizes the best of what the new Internet will offer in terms of virtual, immersive, role-based learning.

Developed by a group of North Dakota State University faculty members who make up the school's World Wide Web Instructional Committee, Geology Explorer is a multi-user educational game that teaches the concepts and principles of physical geology. Participants visit a simulated world called Planet Oit to take part in a virtual field trip in which they perform experiments on rocks, minerals, and other geological specimens. The planet currently contains more than 50 locations, 100 rock and mineral types, 200 outcrops, veins, and boulders, and more than 40 types of tools and instruments for experimentation.

"This field-trip approach provides many opportunities for collaborative problem-solving and virtual interaction," says application developer Brian Slator, a computer science professor at the university. "It also provides opportunities for experiences that would otherwise be impractical due to danger, distance, expense, or physical impossibility."
Rocks, minerals, soil, and flora are all fodder for the virtual "hands-on" investigation of Planet Oit in the multi-user, Web-based Geology Explorer educational game.




Geology Explorer comprises a navigable VRML environment that connects to a LambdaMOO server (a text-based, computer-managed, multi-user world) through a Java interface. The virtual world resides on a server located on a 100-megabit switched network that connects to the university's I2 gateway.
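The MOO half of that architecture can be pictured as a command dispatcher over shared world state: every client interaction is a short text command that the server answers with text. The sketch below is a hypothetical, heavily simplified illustration of that idea in Python; the room names and commands are invented, and the real system is a Java front end to an actual LambdaMOO server:

```python
# Minimal sketch of a LambdaMOO-style text world: the shared world
# lives server-side, and each client turn is one text command in,
# one text reply out. Rooms and commands here are hypothetical.

ROOMS = {
    "landing-site": {
        "desc": "A dusty plain on Planet Oit. A quartz outcrop lies north.",
        "exits": {"north": "outcrop"},
    },
    "outcrop": {
        "desc": "A quartz-veined outcrop, ready for a hardness test.",
        "exits": {"south": "landing-site"},
    },
}

def handle(player_room: str, command: str) -> tuple[str, str]:
    """Dispatch one text command; return (new_room, reply text)."""
    verb, _, arg = command.strip().partition(" ")
    room = ROOMS[player_room]
    if verb == "look":
        return player_room, room["desc"]
    if verb == "go" and arg in room["exits"]:
        dest = room["exits"][arg]
        return dest, ROOMS[dest]["desc"]
    return player_room, "You can't do that here."

room, reply = handle("landing-site", "look")
room, reply = handle(room, "go north")
print(room, "->", reply)
```

Because replies are plain text, the same server state can drive both a teletype-style client and, as in Geology Explorer, a richer VRML view layered on top.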

In addition to Geology Explorer, the development team has created other proof-of-concept applications to demonstrate the potential of the technology across disciplines. For example, an application called the Virtual Cell immerses participants in a 3D model of a human cell. Students can move through and interact with the cellular components to learn about the cell's structure and function in a way not possible by looking through a microscope.
Cellular adventures abound in a virtual journey through a human cell. Participants can explore and interact with the microscopic world on their own or with travel companions located remotely.




Because the volume of such graphically intense "field trips" will grow exponentially once access to the virtual world is available to other universities, and because participants must be able to access large science databases efficiently from within the world as part of their hands-on experimentation, these applications demand advanced networking capabilities to deliver the bandwidth needed to support many simultaneous connections effectively. -DPM






The broad scope of the applications being developed using Internet 2/Next-Generation Internet technologies is apparent in projects demonstrated at periodic member meetings and group events. For example, on the agenda for a recent Internet 2 meeting were demonstrations of such diverse projects as Virtual Environments for Role-based Education, Internet-Accessible Speech Recognition Technology, Collaborative Videoconferencing in Medical Education, and Streaming Video Databases. Although nearly all the demo applications rely on graphics and visualization, some of the projects are fully defined by their implementation of visual computing. Two examples of such applications demonstrated at a recent member gathering were the Virtual Aneurysm and the Geology Explorer (see "A Virtual Field Trip," pg. 48), both of which are already seeing practical application.

The Virtual Aneurysm is a simulation and visualization system being developed by researchers at UCLA to give physicians and surgeons a better understanding of the blood dynamics of brain aneurysms, the sometimes fatal, often debilitating vascular condition that causes bleeding into the brain. The system uses scan data of a patient's brain to create a geometric model of an aneurysm as well as a mathematical model simulating the blood flow in the region, both of which are then visualized in a realistic 3D virtual environment.

The application was developed specifically to aid in planning for a new technique called endovascular therapy, developed by a UCLA doctor as an alternative to brain surgery for treating aneurysms. The minimally invasive therapy involves inserting a tube through a small incision into the problem blood vessel and filling the aneurysm with specialized coiled components to prevent a rupture. Success with the technique, however, relies on understanding the blood-flow characteristics of the aneurysm, as well as the effects of the blood flow on the vessels and the endovascular components. Unfortunately, no single medical imaging technique can provide such details. As a result, the pioneering surgeon sought the services of radiology and biomedical engineering professor Daniel Valentino, who had a background in 3D rendering of medical datasets, to build a system that could generate a virtual representation of a brain vessel and its flow dynamics.

The first step in the development process is to acquire pictures of the vessel structure, which are obtained using a medical imaging technique called CT angiography that takes rapid X-rays of the movement of a contrast dye through a patient's vasculature. Because the resulting images include surrounding tissue, segmentation software is applied to separate out the vessels. The researchers then apply triangulation software to create a 3D model of the vessel, from which a mesh is extracted for import into a computational fluid dynamics program from Fluent.
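The segmentation step can be illustrated with the simplest possible approach: intensity thresholding, which exploits the fact that contrast-filled vessels image much brighter than surrounding tissue. This toy sketch is a hypothetical simplification (real CT-angiography segmentation is far more sophisticated), and the image values and threshold are invented:

```python
# Toy illustration of the segmentation step: separate bright,
# contrast-filled vessel pixels from darker surrounding tissue by
# intensity thresholding. The 4x4 "image" and threshold are
# hypothetical, not data from the Virtual Aneurysm system.

image = [
    [10, 12, 11, 13],
    [11, 90, 95, 12],   # bright values: contrast dye in the vessel
    [10, 88, 92, 11],
    [12, 11, 10, 13],
]
THRESHOLD = 50

vessel_mask = [
    [1 if pixel > THRESHOLD else 0 for pixel in row]
    for row in image
]
vessel_pixels = sum(map(sum, vessel_mask))
print(f"{vessel_pixels} of {4 * 4} pixels classified as vessel")
```

The resulting binary mask is what a surface-triangulation step would then turn into a 3D vessel model.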
A geometric model of a potentially lethal aneurysm gives surgeons a larger-than-life vision of their nemesis. CFD visualizations further enhance this vision by depicting the effect of blood flow on the local vasculature, all of which helps in surgical planning.




Using an initial flow estimate as input, the software generates a simulation of the fluid movement through the object. To be useful for this application, the simulation data is visualized in a custom object-oriented virtual environment developed using Sense8's WorldToolKit, in which surgical objects can be created and manipulated. Student Daren Lee developed specific visualization tools within the environment, including cutting planes and streamline capabilities, that enable the tracking of particular phenomena in the flow field.
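A streamline traces the path a massless particle would follow through the velocity field, which is what makes it useful for spotting flow phenomena around an aneurysm. The sketch below shows the core idea with simple forward-Euler integration; the analytic rotational field is an assumed stand-in for real CFD output, which the actual system samples from a Fluent solution:

```python
import math

# Streamline tracing by forward-Euler integration through a velocity
# field. The circular field v = (-y, x) is a hypothetical stand-in
# for CFD data; a real tracer would interpolate a simulated field.

def velocity(x: float, y: float) -> tuple[float, float]:
    return -y, x  # solid-body rotation about the origin

def trace_streamline(x0, y0, step=0.01, n_steps=500):
    """Integrate a seed point forward through the field."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_steps):
        vx, vy = velocity(x, y)
        x, y = x + step * vx, y + step * vy
        pts.append((x, y))
    return pts

pts = trace_streamline(1.0, 0.0)
# For this rotational field the exact streamline is the unit circle;
# Euler integration drifts slightly outward, which the radii reveal.
radii = [math.hypot(x, y) for x, y in pts]
print(f"radius range: {min(radii):.3f} .. {max(radii):.3f}")
```

Production tracers typically use higher-order integrators (e.g. Runge-Kutta) precisely to suppress the drift the last two lines expose.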

With access to such visual information, a surgeon will be able to determine if an aneurysm is the right shape for endovascular treatment, the type and number of coils needed to fill the aneurysm, and the general effects of the coils on the flow field. The health care implications of such insight are significant. "There's substantial risk as well as a high fatality rate with traditional large-scale brain surgery to treat aneurysms. With endovascular therapy, a patient literally could be treated in one day and go home the next," says Valentino.



The I2 enters the picture by providing an environment for remote planning, education, and consultation. "There are very few facilities that have the ability to do endovascular therapy. If we could educate physicians remotely about which patients are candidates, they would be in a better position to decide if the patient should be referred to a facility where the therapy is in practice, or whether the risks of moving the patient and delaying treatment outweigh the potential benefit of the therapy."

In addition, says Valentino, because the system uses a server-client architecture in which the server stores the fluid-flow data and the physician uses a visualization client to query the flow field, "the flow-field data servers can be distributed among various leading hospitals and institutions [using the new networking infrastructure], each of which will be able to provide real-time access to any visualization client."
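The server-client split Valentino describes keeps the bulky flow-field data in one place while clients fetch only the samples they need to render. The following in-process sketch is a hypothetical illustration of that division of labor; the grid data, nearest-sample query, and class names are invented, and the real system serves CFD results over the network:

```python
# In-process sketch of the flow-field server/client split: the server
# owns the (large) simulation data; the visualization client issues
# small queries and receives only the velocity samples it needs.
# The data and query API here are hypothetical.

class FlowFieldServer:
    def __init__(self):
        # Stand-in CFD output: velocity sampled on a coarse integer grid.
        self.samples = {
            (x, y, z): (0.1 * x, -0.1 * y, 0.05 * z)
            for x in range(4) for y in range(4) for z in range(4)
        }

    def query(self, x: float, y: float, z: float):
        """Return the velocity at the nearest stored sample point."""
        key = min(self.samples,
                  key=lambda p: (p[0] - x)**2 + (p[1] - y)**2 + (p[2] - z)**2)
        return self.samples[key]

class VisualizationClient:
    def __init__(self, server: FlowFieldServer):
        self.server = server  # stands in for a network connection

    def probe(self, point):
        return self.server.query(*point)

client = VisualizationClient(FlowFieldServer())
print(client.probe((1.2, 2.9, 0.4)))
```

Because each probe returns a single small sample rather than the whole dataset, the same server could, as Valentino suggests, sit at one hospital while clients at many others query it in real time.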

As the high-performance networking infrastructure matures to make this kind of remote visualization more reliable and to guarantee display rates, says Valentino, "a physician will actually be able to send a patient's data to us, and we will be able to render and evaluate it here, then display it back to them remotely." This would allow the UCLA experts to consult, via real-time telecollaboration, with remote colleagues and potentially guide and supervise activities in a remote operating room. -DPM