When Hemispheres Collide
Volume 28, Issue 6 (June 2005)

Those who visit the Emerging Technologies gallery at SIGGRAPH next month may notice a novel twist. Each exhibit on display will put a creative spin on the technology to explain the innovation and its potential significance.

Why blend the art and science of computer graphics in this way? One answer is to attract the attention of people in the field, who are by nature at ease with the creative side of technology. But a more compelling reason is to spark ideas that carry the concepts further. Here are a few exhibits that underscore this approach:

Virtual Hang-Gliding: Researchers at the University of São Paulo combined head-tracking and stereoscopic head-mounted display technologies to create an interactive virtual hang-gliding experience, in which 3D imagery, sound, and wind sensations change according to the user’s head orientation and movements. Users soar over a detailed 3D model of Rio de Janeiro that was created for a tourism exhibition. The developers contend that instead of trying to make images for virtual-reality applications more photorealistic, the key to making the experiences more immersive and believable is to engage several of the user’s senses, in this case sight, hearing, and touch.

Scenting a Scene: Speaking of engaging other senses, a team at the ATR Media Information Science Laboratories in Japan has devised an apparatus that uses “scent projectors” to waft small regions of scented air toward users at specific times and locations. The SpotScents device uses an air cannon mounted on a panning and tilting platform that launches a series of aroma rings that are timed to cancel out distracting air currents at the target point. The developers envision that with the technology users will be able to smell what’s happening in movies, games, advertisements, and the like.

The Ultimate SteadiCam: Inventors from ViewPlus Inc., the University of Tokyo, and the University of Electro-Communications in Japan have developed a new technology called MotionSphere that they propose can record real-time images from multiple cameras embedded in swinging and spinning objects, such as baseball bats and basketballs. An image-processing algorithm compensates for camera rotations and stabilizes the output, so viewers see a steady view of a scene from the perspective of a rapidly moving object.
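The details of the MotionSphere algorithm are not described here, but the core idea of rotation compensation can be sketched in a few lines: if the camera's rotation angle is known (or estimated), each observed point can be counter-rotated back into a steady world frame. This is a minimal illustration of that principle, not the inventors' actual method; all function names are hypothetical.

```python
import math

def rotation_matrix(theta):
    """2x2 rotation matrix for an angle theta in radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(m, p):
    """Apply a 2x2 matrix to a 2D point."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y)

def stabilize(observed, camera_angle):
    """Counter-rotate a point seen by a camera that has rotated by
    camera_angle, recovering its position in the steady world frame."""
    return apply(rotation_matrix(camera_angle), observed)

# A world point at (1, 0), seen by a camera (say, on a spinning bat)
# that has rotated 90 degrees: in the camera frame it appears at (0, -1).
theta = math.pi / 2
seen = apply(rotation_matrix(-theta), (1.0, 0.0))
steady = stabilize(seen, theta)  # back to (1, 0) in the world frame
```

A real system would estimate the rotation from sensor data or the images themselves and resample whole frames rather than single points, but the compensation step is this same inverse rotation.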

Smart Stereoscope: Imagine turning any room into an environment that immerses inhabitants in stereoscopic imagery. That's what a team from Bauhaus University Weimar in Germany is promising with a new image-correction technique that can project stereo images onto complex, colored surfaces, such as in rooms with curtains, furniture, and wallpaper, as if they were displayed on flat white screens. The system performs pixel correction of shape, color, and focus and uses multiple projectors to enhance brightness in shadows and to minimize blur. The creators suggest that the technique could be used for immersive environments in industry, museums, the home, or any locations where installing large CAVE-like structures would be impractical.
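The color-correction part of such a system rests on a simple observation: the light a viewer perceives is roughly the projected intensity multiplied by the surface's reflectance, so projecting onto a colored wall requires boosting the channels the wall absorbs. The following is a minimal per-pixel sketch of that idea under this simplified linear model, not the Bauhaus team's actual algorithm; the function name and values are illustrative.

```python
def compensate(desired, reflectance, max_val=1.0):
    """Per-channel radiometric compensation.

    Assumes perceived ~= projected * reflectance, so the projector
    must output projected = desired / reflectance, clamped to the
    projector's maximum output."""
    return tuple(
        min(d / max(r, 1e-6), max_val)  # guard against division by zero
        for d, r in zip(desired, reflectance)
    )

# To show mid-gray (0.5, 0.5, 0.5) on a reddish wall that reflects
# (1.0, 0.5, 0.5) of each RGB channel, the green and blue channels
# must be doubled before projection.
out = compensate((0.5, 0.5, 0.5), (1.0, 0.5, 0.5))
# out == (0.5, 1.0, 1.0)
```

The clamp is where multiple projectors help: when one projector cannot output enough light for a dark, saturated surface, overlapping projectors can share the load, which matches the brightness-enhancement role described above.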

Will the convergence of art and science illustrated in these exhibits, or at least envisioned by their developers, inspire further technological innovation? Absolutely. Fanning both the technical and creative fires of the SIGGRAPH attendees, who have spent more time than most building pathways between right- and left-hemisphere brain processes, may be the most powerful way yet to advance the state of the art.


Phil LoPiccolo

Editor-in-Chief