Issue: Volume: 24 Issue: 4 (April 2001)

Virtual Teacher



DIANA PHILLIPS MAHONEY

Some teachers consistently go the extra mile to make sure their students fully comprehend what they're being taught. STEVE is one of them. An autonomous, animated agent within a simulated environment, STEVE, whose name is an acronym for Soar Training Expert for Virtual Environments, was designed by researchers at the University of Southern California to teach physical tasks to students in a 3D virtual environment.

Because he is driven by a system that integrates methods from three primary research areas: intelligent tutoring systems, computer graphics, and agent architectures, STEVE embodies a unique set of capabilities.

The intelligent tutor in STEVE can answer such questions as "What should I do next?" and "Why?" His computer-animated persona can demonstrate actions within the virtual environment that previous-generation disembodied intelligent tutors could not, and he can use gaze and gestures to direct students' attention and guide them through the virtual world. Finally, his agent architecture allows STEVE to continually monitor the state of the virtual world and both maintain a plan for completing his current task and revise the plan to accommodate unexpected events.
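The monitor-and-revise behavior described above can be pictured as a perceive-act loop. Here is a deliberately minimal sketch in Python (STEVE's actual architecture is built on Soar, not Python); the world model, step structure, and replanning hook are all invented for this illustration:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    pre: frozenset    # facts that must hold in the world before acting
    effect: str       # fact this step makes true

def run_task(world, plan, replan):
    """Re-check the world before every step: skip steps whose goals
    already hold, and call replan() when a precondition fails
    (an unexpected event has changed the world)."""
    done = []
    queue = list(plan)
    while queue:
        step = queue[0]
        if step.effect in world:        # already true, perhaps someone else did it
            queue.pop(0)
        elif not step.pre <= world:     # unexpected state: rebuild the plan
            queue = replan(world, queue)
        else:
            world.add(step.effect)      # perform the action
            done.append(step.name)
            queue.pop(0)
    return done
```

Here `replan` can be as simple as reordering the remaining steps so that an enabled one comes first; a real agent would reconstruct the plan from its task knowledge.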

STEVE's reason for being, according to Jeff Rickel, who developed the technology along with USC colleague W. Lewis Johnson, is to serve as a mentor to students trying to master certain tasks, such as operating complicated machinery. In such situations, says Rickel, "people need hands-on experience in a wide range of situations. They also need a mentor who can demonstrate procedures, answer questions, and monitor their performance."
STEVE, an animated virtual agent, demonstrates to a student avatar the operation of a mechanical device within a simulated mock-up of a physical learning environment. STEVE is programmed to acquire knowledge from each interaction with the student and the

Unfortunately, it is often impractical to provide such training on real equipment given resource and logistical constraints. Consequently, Rickel notes, "we are exploring the use of virtual reality, where training takes place in a 3D simulated mock-up of the student's work environment. And since mentors and teammates are often unavailable when the student needs them, we are developing STEVE as an autonomous, animated agent that can play these roles."

Although simple 2D simulated interfaces might be sufficient for certain training applications, 3D virtual worlds become critical in situations where students must learn how to use perceptual cues to guide their actions, how to navigate around complex scenes, and how to perform tasks that require spatial motor skills. In such situations, the ability of an animated agent to cohabit the virtual world with students and demonstrate physical tasks is a far more effective teaching approach than simply describing the task verbally or showing a videotape of the process. "With an interactive demonstration from an animated agent, students can view the demonstrations from different perspectives and interrupt with questions, and the agent can demonstrate the task under a wide variety of conditions," says Rickel.

Students interact with STEVE and the virtual world through a dual-mode simulator: a traditional mode designed for use on a computer workstation, and an immersive mode for use with a head-mounted display and data gloves. A speech recognition engine delivers the commands and queries that drive STEVE's responses. And STEVE is able to monitor and react to changes in his environment using a perceptual-motor system that continually receives updates of changes to objects in the environment and alerts the simulator to the agent's actions so the world can respond appropriately.
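That two-way coupling, with the simulator pushing object changes to the agent and the agent acting back through the simulator, can be sketched as a small publish-subscribe loop. All class and method names here are invented for illustration:

```python
class Simulator:
    """Holds world state and notifies perceiving agents of every change."""
    def __init__(self):
        self.state = {}
        self.listeners = []

    def set(self, obj, attr, value):
        self.state[(obj, attr)] = value
        for notify in self.listeners:
            notify(obj, attr, value)      # push updates; agents never poll

class Agent:
    def __init__(self, sim):
        self.sim = sim
        self.beliefs = {}
        sim.listeners.append(self.perceive)   # subscribe to world updates

    def perceive(self, obj, attr, value):
        self.beliefs[(obj, attr)] = value     # keep an up-to-date world model

    def act(self, obj, attr, value):
        self.sim.set(obj, attr, value)        # act through the simulator,
                                              # so the world can respond
```

Because the agent's own actions also flow through the simulator, it perceives their consequences the same way it perceives everything else.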

STEVE's virtual embodiment represents a multi-disciplinary collaborative effort. To bring the agent to life, Rickel and Johnson enlisted the services of a number of colleagues in USC's Information Sciences Institute, including Marcus Thiebaux, who created the body and its animation, Ben Moore, who developed the speech recognition and synthesis tools, and Richard Angros, who built the learning module. The virtual-reality software and models for the graphical world that STEVE inhabits were created by a design team at Lockheed Martin led by Randy Stiles. The simulation software and models for the virtual world were developed in the USC Behavioral Technology Laboratories, under the direction of Allen Munro.

Among the researchers' primary design goals for STEVE was that the agent be reusable. "It should be possible to apply STEVE to a wide variety of virtual worlds and tasks that students need to perform," says Rickel. Thus, STEVE was designed to plug into any API-compatible virtual reality tool, including simulators, rendering software, speech synthesis and recognition programs, and input/output devices.
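One common way to achieve that kind of reusability is to write the agent against small abstract interfaces so that any compliant tool can be substituted. The sketch below is an illustration of the idea, not STEVE's actual API; the interface and class names are invented:

```python
from abc import ABC, abstractmethod

class SpeechSynth(ABC):
    """Interface the agent programs against; any speech engine
    implementing it can be plugged in without touching agent code."""
    @abstractmethod
    def say(self, text: str) -> str: ...

class ConsoleSpeech(SpeechSynth):
    """One interchangeable back end; a commercial synthesizer
    would simply be another subclass."""
    def say(self, text):
        return f"[spoken] {text}"

class TutorAgent:
    def __init__(self, speech: SpeechSynth):
        self.speech = speech            # depends only on the interface

    def prompt(self):
        return self.speech.say("Press the start button next.")
```

Swapping renderers, simulators, or input devices follows the same pattern: one interface per tool category, many interchangeable implementations.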

In addition, says Rickel, "STEVE has a clean separation between his domain-specific knowledge and his domain-independent capabilities." An example of domain-specific knowledge may be the agent's understanding of a ship and the tasks performed on it. Domain-independent activities include planning, conversing, and moving around in virtual worlds. "By simply giving him new domain knowledge," according to Rickel, "STEVE can immediately teach new tasks in a new world, without new programming."
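The separation Rickel describes can be illustrated as a generic tutoring engine driven entirely by domain data. The pump task and field names below are invented for this example; STEVE's real knowledge representation is richer:

```python
# Domain-specific knowledge: pure data, no code.
PUMP_DOMAIN = {
    "steps": [
        {"do": "open the intake valve", "because": "the pump must not run dry"},
        {"do": "press the start button", "because": "this engages the motor"},
    ],
}

class Tutor:
    """Domain-independent engine: knows nothing about pumps. Hand it
    different domain data and it teaches a different task, no new code."""
    def __init__(self, domain):
        self.steps = domain["steps"]
        self.index = 0

    def what_next(self):
        return self.steps[self.index]["do"]

    def why(self):
        return self.steps[self.index]["because"]

    def step_done(self):
        self.index += 1
```

The "What should I do next?" and "Why?" questions mentioned earlier map directly onto the `what_next` and `why` queries here.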

The road to STEVE's ultimate success is not obstacle-free. One of the key challenges is the difficulty of simulating human dialogue. "When people carry on conversations, they employ a large number of nonverbal signals to complement their speech and regulate the conversation, including gaze, gestures, and facial expressions," says Rickel. Achieving this "complex synchrony" is important when modeling believable face-to-face interaction. "We continue to gather social psychology literature on how human body movements are connected to speech and what types of information they convey, and combine that data into a computational model that can control the behavior of virtual humans like STEVE."
Actions speak louder than words. By modeling the performance of complex tasks rather than simply describing it verbally, STEVE provides students with important visual cues that help them transfer what they learn virtually to the real world.


An additional challenge is providing support for natural-language dialogues with human users. Although STEVE currently includes commercial speech recognition software, there is no true language understanding. "This means that he can only understand those phrases that have been programmed into the speech recognition grammar," says Rickel. "We are beginning to draw on work from the computational linguistics community to get more general capabilities." The researchers are looking into incorporating task-oriented dialogue, in which the conversation is restricted to discussing a task the computer understands.
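The limitation Rickel notes, that only pre-programmed phrases are recognized, can be made concrete with a toy grammar lookup. The grammar entries and intent labels below are invented for this example:

```python
# Every phrase the agent can "understand" is enumerated in advance;
# anything outside the grammar is simply not recognized.
GRAMMAR = {
    ("what", "should", "i", "do", "next"): "ASK_NEXT_STEP",
    ("why",): "ASK_RATIONALE",
}

def understand(utterance):
    """Return an intent for in-grammar phrases, None otherwise."""
    words = tuple(utterance.lower().strip(" ?!.").split())
    return GRAMMAR.get(words)
```

True language understanding would require parsing and interpreting novel phrasings, which is exactly what a fixed grammar cannot do.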

A more fundamental challenge is the need to constantly imbue STEVE (or rather, for STEVE to constantly imbue himself) with the knowledge he needs to understand the virtual world he inhabits and the tasks he must teach. "STEVE was designed from the beginning so he could be given knowledge of virtual worlds and tasks without the need for any programming. He has a relatively simple declarative language for specifying the knowledge," says Rickel. The researchers want to extend this capability. "We want to allow people to teach STEVE much as they would teach a person: through demonstrations and instruction." In this regard, one of the researchers, Richard Angros, has developed a system that allows STEVE to learn through a combination of watching demonstrations by human instructors in the virtual world and actively experimenting in it.
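Learning from a watched demonstration can be sketched in a deliberately naive form: each observed action becomes a declarative step ordered after the previous action's effect. This heuristic is invented for illustration; Angros's actual system also refines its model through active experimentation:

```python
def learn_from_demo(observed):
    """observed: list of (action, resulting_fact) pairs seen while
    watching a human instructor perform the task in the virtual world.
    Returns a declarative procedure the agent could later teach from."""
    steps, prev_effect = [], None
    for action, effect in observed:
        steps.append({"do": action, "after": prev_effect, "achieves": effect})
        prev_effect = effect     # naively assume strictly sequential ordering
    return steps
```

A single demonstration cannot reveal which orderings are essential and which are incidental, which is one reason experimentation in the virtual world matters.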

Although STEVE's feet are still planted firmly in the R&D world, his future, along with that of potential offspring, looks bright. Researchers are currently focused on extending STEVE's spoken language dialogue capabilities, giving him emotions, and connecting his underlying technology to state-of-the-art graphical bodies that have recently become commercially available.

Early work on STEVE was funded by the US government's Office of Naval Research for the purpose of exploring new technologies to aid in training and maintenance activities. Recently, the researchers acquired new funding from the US Army Research Office to collaborate with the USC Institute for Creative Technologies and representatives from the entertainment industry. The research objective, according to Rickel, "is to extend STEVE's capabilities and apply him to Army training (for example, training a young lieutenant how to handle peacekeeping situations by putting him or her in virtual Bosnia) and possibly to interactive entertainment applications." Information on STEVE and related research can be found on the project Web site at http://www.isi.edu/isd/carte.