Volume 28, Issue 1 (January 2005)

Between the Lines


Books have changed little since papyrus scrolls began replacing clay tablets some four millennia ago. A few milestones, such as Johannes Gutenberg’s printing press, have influenced the bookmaking process in the interim, but none has radically altered a book’s basic form. That is, until now.

Through an innovative application of CG technologies, the University of Canterbury’s Human Interface Technology Laboratory New Zealand (HIT Lab NZ) has transformed reading with what it calls an eyeMagic book. This revolutionary publication blends reality and virtual reality by superimposing animated 3D content over the pages of a children’s storybook. The end result is a unique, interactive reading experience that may forever change the way people read.

One of the first publications created with the eyeMagic technology is an eight-page fairy tale titled Giant Jimmy Jones, written by renowned New Zealand children’s author and illustrator Gavin Bishop. Using a handheld display, readers are able to transform this seemingly typical storybook into a virtual adventure in which the characters rise from the flat pages and spring to life in stereoscopic 3D.
HIT Lab NZ intern Claudia Nelles, who animated the 3D models for the virtual eyeMagic book, uses a glasses-like display to make the story come to life. A camera inside the display tracks the natural features on the pages of the regular book, and then software superimposes the animated 3D content over them.




The eyeMagic project was funded with a New Zealand government grant through an initiative that encourages collaboration between the arts and sciences, with the HIT Lab providing the engineering and science expertise, and Bishop the artistic component. Because the technology is government funded, the project, in essence, is an open book: An open source version of the setup’s vision tracking system is available on the HIT Lab’s Web site (www.hitlabnz.org). Moreover, the eyeMagic book is available for public reading at the South Christchurch City Library in New Zealand.

Books have always had the ability to deliver magical experiences to readers. And yet, for this project, a bit of magic, the type of digital wizardry used in some Hollywood films, was needed to deliver the eyeMagic book. Until Bishop became involved in the project, he had never worked with 3D imagery. But because his art would appear in both the real and virtual worlds, the author/illustrator became a quick study, having to adapt both his overall vision and his sketches to meet the technology requirements in this groundbreaking project.

Foremost, Bishop decided to keep his story short and simple, suspecting that the virtual portion of the book would take some time to complete. He then began sketching out rough storyboards as he normally would do, to determine what would be shown on each page of his book: which characters would appear in each scene, where they would be placed, and so forth. He then painted the full-sized pictures for the regular book.

“I showed the finished pages to the group at the HIT Lab, but they told me I needed to show more detail so the [3D] artists could create their models,” Bishop explains. “I had to redo all my drawings, and all the characters and figures in the book had to be drawn three to four times from all angles so they could be re-created three dimensionally on the computers. I didn’t realize that everything (every item, every image, every character) in my illustration would have to be made again as a 3D model for the eyeMagic book.”
Artists at 3D modeling facility One Glass Eye re-created the storybook’s characters and objects in 3D using Discreet’s 3ds max software, matching the CGI to the 2D drawings. The scenes were later animated at the HIT Lab NZ.




After the regular book’s drawings were finished, Bishop worked closely with the modelers from One Glass Eye, who used Discreet’s 3ds max to re-create the illustrator’s scenes, adding dimensionality to the characters and environments. To match the CGI versions to Bishop’s drawings, the team at One Glass Eye scanned portions of the pictures and applied the surfaces as textures on the virtual images using Adobe’s Photoshop.

But it was at the HIT Lab where the book came alive. There, intern Claudia Nelles animated the characters (about a minute of animation per page) after discussing with Bishop the types of movements he envisioned for each one. “I will never forget the first day I saw the animation,” Bishop recalls. “I dropped by the lab, and Nelles asked me how I wanted the giant to walk. I told her that I had never thought about that, so she showed me what she had done up to that point, which was for the opening sequence. With a mouse click, the giant stood on the page of the book in three dimensions and walked-or rather, sort of lumbered-across the page. Then he knelt on one knee, then the other, and waved to me. He then sat back on his haunches, and the title of the book rose out of the ground and formed a big arc around the giant. I was just blown away. It was amazing.”
Renowned children’s author Gavin Bishop took his first step into the world of CGI when he collaborated with modelers who transformed his traditionally illustrated tale Giant Jimmy Jones into a 3D adventure.




The 3D animation was made possible by a number of technological innovations in object tracking and real-time rendering that, when combined with standard hardware (a Web camera, a see-through handheld display resembling a pair of opera glasses, and a PC), take reading to a new level.

Unlike in Bishop’s demonstration, the actual eyeMagic book animations are triggered by the reader. When people look at the book through a handheld display/camera unit, they see 3D images leaping from the pages of the regular book. The book itself is not magic; the magic occurs inside a computer, linked to the glasses-like display, running special tracking software that recognizes the pages of the book. So when a person moves around and looks at the book from different positions, the camera in the glasses will track certain features in the illustrations, and the software will re-create the 3D objects so that they will appear to be attached to the pages.

According to HIT Lab director Mark Billinghurst, when the camera “sees” the book pages, the software, based on a program developed by lab partner Dr. Hirokazu Kato of Osaka University in Japan, searches for the parts of the picture that are most different from the others, such as the edge of a building or the shape of the giant’s head. Once the software recognizes these so-called natural features, it projects 3D animated pictures onto the pages. “Because the real camera’s viewpoint and the virtual camera’s viewpoint are aligned,” he says, “it looks like the 3D models are popping out of the regular storybook pages.”
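Billinghurst’s point about aligned viewpoints can be illustrated with the standard pinhole-camera projection: once tracking has estimated the page’s pose relative to the real camera, rendering the virtual model through a camera with that same pose places it on the very pixels where the page appears. A minimal sketch of that projection (the pose, focal length, and image-center values here are illustrative, not taken from the eyeMagic system):

```python
def project_point(point_page, rotation, translation, focal, cx, cy):
    """Project a 3D point given in page coordinates into pixel coordinates.

    rotation: 3x3 row-major matrix (page frame -> camera frame)
    translation: position of the page origin in the camera frame
    """
    # Transform the point from the page's frame into the camera's frame.
    xc = [sum(rotation[i][j] * point_page[j] for j in range(3)) + translation[i]
          for i in range(3)]
    # Pinhole projection: scale by focal length / depth, shift to image center.
    u = focal * xc[0] / xc[2] + cx
    v = focal * xc[1] / xc[2] + cy
    return u, v

# Illustrative pose: identity rotation, page half a meter in front of the camera.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.5]
# A point at the page origin lands at the image center, so a virtual model
# rendered through the same pose appears "attached" to the page.
print(project_point([0, 0, 0], R, t, focal=600, cx=400, cy=300))  # (400.0, 300.0)
```

Because the virtual camera uses the tracked pose directly, any head movement that changes the real view changes the rendered view identically, which is what keeps the 3D characters pinned to the paper.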
Innovative real-time tracking software not only recognizes the specific features of the drawings but also renders the 3D graphics above the regular book’s 2D illustrations.




What differentiates this tracking software from commercial products is that it works in real time. “By knowing what the features are, we can calculate in real time exactly where the actual camera is relative to the page of the book, allowing us to build the virtual scene from that same viewpoint,” Billinghurst explains.
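The calculation Billinghurst mentions can be pictured as inverting a rigid transform: tracking yields the page’s rotation and translation in camera coordinates, and inverting that transform gives the camera’s position relative to the page, from which the virtual scene is built. A small sketch of the inversion (the pose values are made up for illustration):

```python
def invert_pose(rotation, translation):
    """Invert a rigid transform [R|t].

    If (R, t) maps page coordinates to camera coordinates, the inverse
    (R^T, -R^T t) expresses the camera's pose in the page's frame --
    i.e., where the camera is relative to the book page.
    """
    r_inv = [[rotation[j][i] for j in range(3)] for i in range(3)]  # transpose
    t_inv = [-sum(r_inv[i][j] * translation[j] for j in range(3))
             for i in range(3)]
    return r_inv, t_inv

# Illustrative pose: page rotated 90 degrees about the vertical axis,
# half a meter in front of the camera.
R = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]]
t = [0.0, 0.0, 0.5]
R_inv, t_inv = invert_pose(R, t)
print(t_inv)  # camera position expressed in the page's coordinate frame
```

Running this inversion on every video frame is what makes the viewpoint “live”: as the reader moves the handheld display, the camera-relative-to-page pose is recomputed and the scene is redrawn from the new vantage point.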

The real-time camera tracking was the biggest technological achievement for this project, contends Billinghurst. “We worked with Kato, who had been developing this technology for a number of years, and we just kept requesting improvements until we got the robust performance we needed,” he explains. Because the visual quality of the imagery was extremely important for the image recognition, the group paired the software with a high-quality USB camera, a Logitech 4000, which fit comfortably into the lightweight glasses-like display.

The tracking software also serves as the graphics rendering solution. “It’s one thing to know where the camera is, but another to be able to draw the graphics over live video of the real world,” says Billinghurst. To accomplish this, the group wrote its own model-loading script using the OpenGL graphics library, which preserved the animations, lighting, and textures, and provided the option of loading the models without the keyframes, all in real time. It also rendered the imagery in real time, taking the live video feed and compositing the OpenGL graphics above it.
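The compositing step described above amounts to a per-pixel “over” operation: wherever the rendered graphics layer is opaque, its color covers the video; wherever it is transparent, the live camera image shows through. A toy sketch of that blend on plain Python lists (the actual system performs this with OpenGL on every video frame; the pixel values here are invented):

```python
def composite_over(graphics, video):
    """Alpha-'over' composite of a graphics layer (r, g, b, a) above a
    video frame (r, g, b).

    a is coverage in [0, 1]: a == 0 leaves the video pixel untouched,
    a == 1 replaces it entirely with the rendered graphics.
    """
    out = []
    for g_row, v_row in zip(graphics, video):
        row = []
        for (r, g, b, a), (vr, vg, vb) in zip(g_row, v_row):
            row.append((r * a + vr * (1 - a),
                        g * a + vg * (1 - a),
                        b * a + vb * (1 - a)))
        out.append(row)
    return out

# 1x2 frame: an opaque rendered pixel covers the video;
# a transparent pixel lets the camera's view of the page show through.
graphics = [[(255, 0, 0, 1.0), (0, 0, 0, 0.0)]]
video = [[(10, 10, 10), (200, 200, 200)]]
print(composite_over(graphics, video))
# [[(255.0, 0.0, 0.0), (200.0, 200.0, 200.0)]]
```

Doing this in a pixel-by-pixel loop would be far too slow for live video, which is why the real system leaves the blend to the graphics hardware and simply draws the OpenGL scene over the video frame each refresh.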

“Most film-quality compositing programs don’t work in real time because they have to produce higher-resolution physical output, but we don’t need high-res output because the handheld display we use has a resolution of about 800x600 pixels,” says Billinghurst. “So for us, the emphasis is on real-time performance rather than broadcast- or film-quality output.”

Without question, this unique project is expanding the art of reading into a new dimension. But what kind of effect has it had on the author’s overall style of writing and illustrating? “I don’t dwell on the process too much because I don’t want it to influence what I do next,” says Bishop. “Yet I want to keep my ‘vision’ open and embrace new things that may pop up.”

At the HIT Lab, meanwhile, Billinghurst and his group are already planning another eyeMagic book or two that will include improvements to the current technologies. For instance, they are working on making the graphics immersive and interactive. “I’d like users to be able to fly inside a virtual scene and see virtual characters as life-sized, and then interact with them in some way, just like in a first-person computer game,” explains Billinghurst.

For his work on the eyeMagic book and other projects, Billinghurst received a nomination in the education category of the World Technology Network awards, presented in October, for his innovative contributions to the field of education.

Looking at the bigger picture, Billinghurst says the eyeMagic technology has a number of possible applications in education, architecture, medicine, entertainment, and other disciplines. “I’m not really sure where this will become a mainstream technology, but I am sure it will be one somewhere,” he says.
CG artists scanned a number of characters from the book Giant Jimmy Jones and applied the scanned data as textures on the virtual images, thereby ensuring that the CGI versions looked like the printed illustrations.




In fact, museums are ripe for this technology, Billinghurst notes. Technology is advancing rapidly in the mobile arena, with phones that have good-quality color displays and cameras, and Nvidia’s newly announced graphics chip that will provide a robust graphics solution by midyear, he says. So there is good reason to optimize the technology to work with mobile phones. “People carry their phones around all the time, and they could point them at objects in the real world, such as museum pieces, and have virtual content popping out from the objects.”

But for now, the readers at South Christchurch Library are focused on how the technology works with books. Says 10-year-old Lucas Keane from Somerfield School, who explored Giant Jimmy Jones: “It’s great. I can’t wait until I’m 15, when there will be millions of these books around.”

Karen Moltenbrey is a senior technical editor at Computer Graphics World.