Volume 25, Issue 10 (October 2002)

Digital Broadcasters




By Laurie McCulloch

The launch of Ananova, the virtual newsreader, in April 2000 marked a turning point in the public perception of digital characters. With a mix of trepidation and humor, the more perceptive TV newscasters started asking if they were going to be replaced by virtual actors. We aren't aware of any that were, but it did raise some interesting questions. You only need to look at popular science fiction books and movies to catch glimpses of how digital characters may evolve and become more prolific in our daily lives.

Digital characters are an intriguing fusion of art and science, and we have seen massive advancements in both aspects over the last few years. This summer, Maddy, the virtual science presenter (shown at right), made regular appearances on the BBC's flagship science program Tomorrow's World, a live TV show on which she introduced news items and interacted with human presenters and viewers. Using sophisticated voice synthesis, voice recognition, animation synthesis, and AI, she was able to respond to unscripted questioning in real time. Unlike digital characters that are controlled by a puppeteer off camera or have their motions and scripts prepared entirely offline, Maddy's responses, motion, and rendering were live, giving us some insight into the future roles of digital characters in broadcast and indeed all aspects of our lives.

To see where digital characters are going, we must first look at some of the key benefits they can offer. The most obvious one is that a digital character can operate around the clock and retrieve information from its knowledge base, or perform a task, in a fraction of a second. Being digital, the character can also be omnipresent, holding multiple conversations simultaneously across international boundaries.

Much the same can be said about an Internet search engine, but the visual representation and personality that a digital character projects play an important role in satisfying our emotional requirements. These attributes help us identify with a character and sometimes even form a relationship.

This might sound a bit radical, but humans frequently hold conversations and form relationships with non-human or inanimate objects. Have you ever sworn at your car when it broke down? Have you talked to a pet as if it were human? We are born communicators, and we readily assign human behavioral traits to non-human entities.

Within the first moments of seeing a digital character, we subconsciously classify it based on its visual appearance. If the visual representation is backed up with a convincing personality, we can start engaging emotionally with it. This can be a huge benefit when the character is used as a brand communication device. The character can be designed as the embodiment of a corporation's values and be used to communicate these as a sales agent, e-learning tutor, or information assistant. Being digital, the character will promote these values consistently, never tiring and never having an off day.

The visual appearance of digital characters has already advanced considerably, and it will undoubtedly improve further, fueled by the increasing performance and features of 3D accelerator chips. But it is the areas of human-computer interaction and artificial intelligence that will have the most profound effects.

When we communicate with each other, we convey complex concepts using multiple input channels. We listen to the words, but we gain additional meaning from the way the words are spoken. We also read body language, gestures, and facial expressions to build a clearer understanding.




However, with a digital character, we traditionally have to rely on the keyboard and mouse to perform our interaction. While we may be comfortable with these input devices for many tasks, they are not conducive to high-speed, natural communication. Moreover, if we start considering opportunities for digital characters on mobile devices, then the keyboard and mouse become even less appropriate. We therefore need to look at alternate input devices if we are to achieve natural interactions. Where better to look than at the human senses?

Voice recognition and natural language processing technologies are now starting to come of age, and they offer an excellent input mechanism for certain applications. There are still many limitations, but voice recognition is advancing rapidly and is an area receiving massive research funding.

Vision systems are also getting more sophisticated. They are currently used on production lines to spot defects in products and in security applications to gather and verify biometric data. Digital characters can use vision systems not only to identify the presence of humans and verify their identity, but also to augment their understanding of spoken input. Vision systems capable of reading a person's lips, facial expressions, and body language and gestures would complement voice recognition and take us a long way toward more natural human-computer interaction.

Digital characters would still need to make sense of all this additional input before they could respond or perform actions successfully. In other words, we would still need to enhance their brains. This is obviously a non-trivial task, but digital characters can already perform well when the knowledge domain is constrained. Gradually, with further advances in AI, it will be possible to use wider domains. A digital character will not pass the Turing Test just yet, but it may do so sooner than you imagine.
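To make concrete what "performing well in a constrained knowledge domain" means, here is a minimal sketch of a scripted, keyword-driven responder of the kind such agents might use. All keywords and replies are invented for illustration, and a real character would use proper natural language processing rather than crude substring matching:

```python
# A toy constrained-domain responder: the agent only "knows" a few topics
# and falls back gracefully when a question lies outside its domain.

RULES = [
    # (keywords the input may contain, canned reply) -- all hypothetical
    (("opening", "hours"), "We are open 24 hours a day, 7 days a week."),
    (("price", "cost"),    "Please see our catalogue for current prices."),
    (("hello", "hi"),      "Hello! How can I help you today?"),
]

FALLBACK = "I'm sorry, I can only answer questions about our shop."

def respond(utterance: str) -> str:
    """Return the first canned reply whose keywords match the input."""
    text = utterance.lower()
    for keywords, reply in RULES:
        if any(keyword in text for keyword in keywords):
            return reply
    return FALLBACK  # the question falls outside the constrained domain

print(respond("What are your opening hours?"))
print(respond("Tell me about quantum physics"))
```

Within its narrow domain the agent appears competent; widening that domain, as the paragraph above notes, is the hard AI problem.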

Digital characters are gaining acceptance and are increasingly being used as brand communication devices, information agents, e-learning tutors, and even TV presenters. Consider this: Personal computers have only been around for about 25 years, digital characters even less. Where will they be in another 25 years? Perhaps the digital characters we see in science fiction books and movies are not so farfetched after all.


Laurie McCulloch is the development director at Digital Animations Group in Glasgow, Scotland, the company that created virtual newscasters Ananova and Maddy and a host of other digital television characters.