ITS ’96 Workshop on Simulation-Based Learning Technology

Intelligent Tutoring in Virtual Environment Simulations

W. Lewis Johnson and Jeff W. Rickel
Information Sciences Institute & Computer Science Dept.
University of Southern California
4676 Admiralty Way, Marina del Rey, CA 90292-6695
johnson, [email protected]
http://www.isi.edu/isd/VET/vet.html

May 10, 1996

Abstract

Virtual environment simulations can be effective training tools. Students, immersed in a computer simulation of a work environment, interact with the simulation as they would with the actual work environment. However, virtual environment technology by itself does not necessarily lead to efficient learning, because learners may fail to realize when their actions in the environment are inappropriate or suboptimal. Also, if the learners cannot determine how to interact with the simulation to achieve a desired goal, they may be reduced to trial and error. Intelligent tutoring can improve the effectiveness of virtual environments by providing feedback and guidance to the students when they are not able to interact effectively with the simulation. To provide such tutoring, we are developing a pedagogical agent that can inhabit the virtual environment along with students. The agent can demonstrate skills to students, monitor their activities and provide assistance when needed, and explain why particular actions are appropriate in given situations. Our agent is the first step toward the creation of virtual collaborators that can interact naturally with learners in an apprenticeship work setting.

1 Introduction

Our recent research has focused on intelligent tutoring capabilities that can enhance the effectiveness of simulation-based training systems [Hill and Johnson, 1995, Johnson, 1995]. Interactive simulations enable students to improve their skills through practice and improve their knowledge as they encounter difficulties and overcome them. However, simulations by themselves do not ensure efficient learning. For example, students sometimes fail to realize when their actions are inappropriate or suboptimal. At other times, they may not be able to figure out which actions are required to get the simulation into a desired state. Thus, there is a need for intelligent tutors that recognize these problems when they arise and help students overcome them.

We are currently extending our work to virtual reality simulations, where students are immersed in, and interact with, a virtual environment whose behavior is generated by simulations. In this framework, the relationship between students, intelligent tutors, and simulations can be different from that in conventional training simulations. The tutor may take the form of an autonomous agent, called a pedagogical agent, that can interact with the students in more ways than in conventional simulations. In addition to serving as computer coaches, such agents can also serve as assistants, co-workers, and guides. Moreover, multiple students and pedagogical agents can enter the virtual environment at the same time, so collaborative learning can be directly supported. Although virtual environment simulations have their own unique characteristics, they remain first and foremost simulation-based learning environments. In what follows, we first describe our approach to integrating intelligent tutoring into virtual reality environments, and then we present our position on the issues raised by the workshop organizers.

2 Virtual Environments for Training

The Virtual Environments for Training (VET) project, funded by the Office of Naval Research, is developing virtual environment technology for a variety of training purposes. The main purpose of the project is to determine how to effectively employ immersive and intelligent technologies to maximize training effectiveness. The project is being conducted in collaboration with Lockheed Martin, Inc., and the USC Behavioral Technologies Lab.

When these technologies are fully developed, students will enter a virtual environment by putting on head-mounted displays. They will then see a model of a work area, such as a control room. The work area may contain a number of human figures, some of which represent other students, and some of which represent computer-generated pedagogical agents. The pedagogical agents may participate in the students' activities or act as observers. The agents will be able to explain and demonstrate how to perform particular tasks and offer advice and assistance when the students encounter difficulties.

The VET system is a distributed simulation. That is, the simulator, the students, and the pedagogical agents can each run as separate processes, possibly on different workstations. The processes communicate by sending messages to a "communication bus," which in turn broadcasts the messages to the other processes. For example, when a student pushes a button on a virtual control panel, the button sends a message indicating that it has been pressed, and the simulator will receive the message and take the appropriate action. The communication bus is implemented using the Sun ToolTalk™ package.

The simulated world is controlled by an object-oriented simulator. As in non-immersive simulations, each object in the simulated world is represented internally as an object with attributes. Some attributes control the visual appearance of the object, while others control its behavior.
The behavior of the objects is programmed by rules and constraints that propagate changes in one object to other objects. Simulations are authored and executed using the RIDES system developed by the USC Behavioral Technologies Lab [Munro et al., 1993]. Students and instructors interact with the simulated world via viewers that monitor the communication bus to determine the state of objects and then produce graphical renderings of those objects. Details of the 3D rendering process are kept local to the viewer, hidden

from the rest of the simulation system. VET uses Lockheed Martin's Vista Viewer package for this purpose. In contrast, pedagogical agents do not require a graphical rendering of the simulation objects; they monitor and control the state of simulation objects directly using the communication bus. An agent "sees" the simulated world as a set of objects and their attributes, including objects that are in the agent's simulated field of view, are discernible by other senses such as hearing, or are accessible by sensors such as radar. An agent need only keep track of those attributes that are relevant to the agent's tasks or the student's tasks. For example, for a rotary switch, the agent might only monitor the switch setting and the switch's location on the virtual console; other attributes such as the switch's size and color may be ignored, since they have little bearing on how a student might use the switch. The agent causes changes in the simulated world by sending commands to the simulator via the communication bus. Pedagogical agents appear in the simulated world either as simulated humans, using the Jack™ human figure package [Badler et al., 1993], or as disembodied sensors and effectors (e.g., grippers for grabbing objects). The latter approach is desirable when the agent must share the student's workspace yet avoid diverting the student's attention from the simulated environment.
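This bus-mediated architecture can be sketched in a few lines. The following is an illustrative Python sketch, not the actual VET implementation (which uses ToolTalk); the Bus and Agent classes, the message format, and the switch_1 object are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Bus:
    """Toy stand-in for the broadcast communication bus (ToolTalk in VET)."""
    subscribers: list = field(default_factory=list)

    def send(self, message: dict):
        # Every message is broadcast to every subscribed process.
        for handler in self.subscribers:
            handler(message)

class Agent:
    """An agent 'sees' the world only as the object attributes it tracks."""
    def __init__(self, bus: Bus, relevant: set):
        self.bus = bus
        self.relevant = relevant      # e.g. {("switch_1", "setting")}
        self.world = {}               # the agent's partial model of the state
        bus.subscribers.append(self.observe)

    def observe(self, message: dict):
        key = (message["object"], message["attribute"])
        if key in self.relevant:      # size, color, etc. are simply ignored
            self.world[key] = message["value"]

    def act(self, obj: str, attribute: str, value):
        # Agents change the world only by sending commands over the bus.
        self.bus.send({"object": obj, "attribute": attribute, "value": value})

bus = Bus()
agent = Agent(bus, relevant={("switch_1", "setting")})
bus.send({"object": "switch_1", "attribute": "setting", "value": "ON"})
bus.send({"object": "switch_1", "attribute": "color", "value": "red"})
print(agent.world)  # {('switch_1', 'setting'): 'ON'}
```

Note how the agent's model contains only the switch setting: the color message was broadcast to it but filtered out as irrelevant to any task.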

3 Intelligent Tutoring Capabilities

We intend to provide a range of intelligent tutoring capabilities in our pedagogical agents, including plan recognition, explanation, and demonstration. Many of these capabilities have already been developed for other simulation environments, so present work involves adapting them to work effectively in an immersive virtual environment. Below, we describe our previous and current work in plan recognition and explanation.

3.1 Plan Recognition

To monitor a student's progress and provide assistance where it is needed, pedagogical agents require a plan recognition capability. Our plan recognition component looks for situations where the student encounters a significant impasse, i.e., the student is failing to make progress toward completing the task. The plan recognizer also looks for mistakes that will lead to impasses later if not corrected. This impasse-driven approach to plan recognition, which we call situated plan attribution, was previously implemented in the REACT simulation-based training system [Hill and Johnson, 1995]. Situated plan attribution exploits the fact that the student is working with an interactive simulation on a well-defined task. Most training tasks require the student to take actions in the simulated world in order to achieve some desired state. Situated plan attribution avoids many difficult aspects of student modeling by assuming that the student intends to reach the goal state. It monitors both the simulation state and the student's actions, and concludes that the student is at an impasse when he or she is unable to make progress toward the goal state. The plan recognizer needs to understand only enough of the student's plan to be able to infer which subgoal of the goal state the student is working on, to ensure that any advice that is offered is relevant to the student's current activities. In contrast, the

traditional model tracing approach [Anderson et al., 1990] attempts to track and interpret every individual student action, allowing the student little opportunity to make mistakes and learn from them.
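The impasse-driven idea can be illustrated with a minimal sketch. This is a hypothetical Python rendering, not REACT's actual representation: the GOAL dictionary, the state-history format, and the fixed progress window are all invented for the example.

```python
# Hypothetical sketch of impasse detection in the spirit of situated plan
# attribution: assume the student intends the goal state, and flag an
# impasse when recent actions yield no progress toward it.

GOAL = {"valve": "open", "pump": "on", "pressure": "nominal"}

def unmet_subgoals(state: dict) -> set:
    """Subgoals of the goal state not yet satisfied by the simulation state."""
    return {k for k, v in GOAL.items() if state.get(k) != v}

def detect_impasse(states: list, window: int = 3) -> bool:
    """Impasse: the count of unmet subgoals has not shrunk over `window` actions."""
    if len(states) < window + 1:
        return False
    recent = [len(unmet_subgoals(s)) for s in states[-(window + 1):]]
    return min(recent[1:]) >= recent[0]

history = [
    {"valve": "closed", "pump": "off"},
    {"valve": "open", "pump": "off"},   # progress: one subgoal achieved
    {"valve": "open", "pump": "off"},
    {"valve": "open", "pump": "off"},
    {"valve": "open", "pump": "off"},   # three actions with no progress
]
print(detect_impasse(history))  # True
```

Because the tutor only compares simulation states against the goal, it never needs to interpret every individual action, which is exactly the contrast with model tracing drawn above.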

3.2 Explanation

We are designing our pedagogical agents to be able to show students how to perform tasks, as well as answer questions about why the task should be performed in a particular way. Our approach to question answering is based on the Debrief system for explaining the actions of autonomous agents [Johnson, 1994]. Debrief identifies the goals, situational factors, and situation assessments that were critical to a given agent decision, and it uses these to generate explanations. To enable this analysis, it maintains an episodic memory that associates agent decisions with snapshots of the simulation state and the agent's mental state at the time that the decision was made. Using episodic memory, Debrief applies a discovery learning approach to the agent's own reasoning. That is, it proposes hypothetical changes to the situation in which the decision was made in order to discover which features of the situation were crucial to the decision. In so doing, the agent is able to compare the actions that it took with the actions that it did not take and focus on the differences. While the original Debrief system generates textual explanations, we plan to use the TrueTalk™ speech generation system to allow pedagogical agents to explain their actions as they demonstrate tasks.
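The hypothetical-variation step can be sketched as follows. This is an illustrative Python sketch, not the actual Debrief implementation: the toy decision policy, the boolean situation features, and the snapshot contents are all invented for the example.

```python
# Perturb each feature of a recalled episodic snapshot and re-run the
# decision policy; features whose change flips the decision are the ones
# reported as crucial in the explanation.

def policy(situation: dict) -> str:
    # A toy decision rule standing in for the agent's reasoning.
    if situation["threat"] and situation["weapon_ready"]:
        return "engage"
    return "hold"

def crucial_features(situation: dict) -> list:
    decision = policy(situation)
    crucial = []
    for feature, value in situation.items():
        varied = dict(situation, **{feature: not value})
        if policy(varied) != decision:   # flipping it changes the decision
            crucial.append(feature)
    return crucial

snapshot = {"threat": True, "weapon_ready": True, "low_fuel": False}
print(crucial_features(snapshot))  # ['threat', 'weapon_ready']
```

Here low_fuel is correctly excluded from the explanation: varying it leaves the decision unchanged, so it was not crucial to the choice to engage.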

4 Relationship to the Workshop Issues

We will now address the issues raised by the workshop organizers. First, we must take issue with their statement regarding the role of intelligent tutoring:

    One of the great advantages of simulations as an educational tool is their flexibility but, ... such flexibility is also a major bottleneck for the provision of advises because of the many actions that a student can perform. In fact, understanding what the student is doing when working with the simulation is so difficult that even a human observer can be mistaken. ... Such adaptive support should not be based on inferring what the student is doing, it should rather be based on what the student is telling you about his/her activity.

Our earlier simulation-based tutor, REACT, was successful in supporting students while they interacted with simulations, without requiring the students to explain their activities. We anticipate similar success in our current effort. While we acknowledge the problem described by the workshop organizers, we believe that it can be overcome through careful choice of the training tasks and the tutorial interaction style. Student interactions with simulations are difficult to monitor when the tasks being performed are ill-structured. In contrast, tasks that are the subject of military or industrial training are often highly structured, with well-defined goals and objectives. In these situations, knowledge of common plans and procedures in the domain is sufficient to enable student actions to be tracked. This is analogous to the way that knowledge of the intentions underlying programs can facilitate program debugging [Johnson, 1986]. Additionally, highly situated activities, in which students must respond to stimuli provided by the simulation, further simplify student tracking. If the tracking system notes an anomalous state in the simulation that requires the student's response, this can be used to set up expectations that the student will in fact respond.

Moreover, there is less need to monitor student actions in detail in a simulation-based environment than in other types of learning environments. If students encounter difficulties when interacting with simulations, and overcome those difficulties, they will learn from the experience, as our cognitive modeling studies have shown [Hill and Johnson, 1993]. Furthermore, interruptions from a tutor while the student attends to the simulation can be distracting and impede task performance. Intervention on the part of the computer tutor is only warranted at impasse points, i.e., points where the student is unable to make progress in completing the task. Detailed monitoring of student performance is not essential, as long as students appear to be making progress in completing the task. When the student does reach an impasse, the simulation state can offer guidance as to how to interact with the student. The tutor can examine the simulation state, determine how it needs to be changed in order to complete the task, and assume that guidance should be directed toward that end.

In the remainder of this paper, we respond to the questions raised by the workshop organizers.

How much of the learning support can be built into simulation-based learning systems, and how much must remain with the human tutor?

The learning system can support the learning only to the extent that it understands what is happening. We assume that the computer tutor has partial knowledge of the domain: it can recognize some simulation states and the actions that change them, but it may not understand the significance of other states or actions. Human tutorial assistance may be required in such cases. Another open question is whether or not computer tutors are able to provide the kind of encouragement and moral support that a human being can provide. Embodied in virtual agent form, computer tutors at least have the potential for interacting with students at a more personal level. Whether such potential can be realized remains to be seen.



What is the cost of providing explanations in a simulation context?



Should student evaluation be cognitive or performance based?

Interactive simulations can be demanding on the student's attention, so interleaving tutorial intervention with task performance can be problematic. We do not want the pedagogical agents to compete for the student's attention at critical moments. One way of overcoming this difficulty is to relegate explanations to after-action review sessions, or to impasse points. If after-action reviews are chosen, the tutoring system must bear the additional overhead of maintaining an episodic memory of what transpired during the session.

Because the main objective of tutorial assistance in our framework is to help students overcome difficulties in task performance, we believe that student evaluation should be performance based. Significant measures include the time required for task completion and the frequency of errors during task performance.
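The two measures just named can be computed directly from a session log. The following is a hypothetical Python sketch; the event-log format and the evaluate function are invented for illustration and are not part of the VET system.

```python
# Performance-based evaluation from a session log: time to task completion
# and error frequency, as discussed above.

def evaluate(events: list) -> dict:
    """events: (timestamp_seconds, kind) pairs, kind in {'action', 'error', 'done'}."""
    start = events[0][0]
    end = next(t for t, kind in events if kind == "done")
    errors = sum(1 for _, kind in events if kind == "error")
    attempts = sum(1 for _, kind in events if kind in ("action", "error"))
    return {"completion_time": end - start,
            "error_rate": errors / attempts if attempts else 0.0}

log = [(0, "action"), (12, "error"), (20, "action"), (45, "done")]
print(evaluate(log))
```

Both measures fall out of data the tutor already receives over the communication bus, so performance-based evaluation adds little overhead beyond the episodic memory mentioned above.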



How much of ITS technology can we reuse?



What is the status of authoring shells for training simulations?

Although many of the same ITS capabilities are relevant to the simulation-based setting, their implementation is significantly different. Conventional tutors monitor and respond to student actions; simulation-based tutors can monitor both the student and the simulation state. Because students get some feedback directly from the simulation, tutors can provide less feedback themselves. We find that these differences have a significant impact on the design of ITS capabilities such as plan recognition and tutorial intervention. Some ITS capabilities, such as dynamic curriculum sequencing, may be transferable, but these issues are outside the scope of our investigation.

Well-developed shells for authoring simulations, such as RIDES, are already available. However, while RIDES also includes some capabilities for authoring instruction, additional authoring support is required to support the type of pedagogical agent described in this paper. As part of the VET project, we are developing authoring tools and reusable ITS components that can facilitate the authoring of pedagogical agents for virtual environment simulations.

References

[Anderson et al., 1990] Anderson, J., Boyle, C., Corbett, A., and Lewis, M. (1990). Cognitive modeling and intelligent tutoring. Artificial Intelligence, 42:7–49.

[Badler et al., 1993] Badler, N., Phillips, C., and Webber, B. (1993). Simulated Agents and Virtual Humans. Oxford University Press.

[Hill and Johnson, 1993] Hill, R. and Johnson, W. (1993). Designing an intelligent tutoring system based on a reactive model of skill acquisition. In Proceedings of the World Conference on Artificial Intelligence in Education, pages 273–281, Edinburgh, Scotland.

[Hill and Johnson, 1995] Hill, R. and Johnson, W. (1995). Situated plan attribution. Journal of Artificial Intelligence in Education, 6(1):35–67.

[Johnson, 1986] Johnson, W. (1986). Intention-Based Diagnosis of Novice Programming Errors. Morgan Kaufmann.

[Johnson, 1994] Johnson, W. (1994). Agents that learn to explain themselves. In Proceedings of the National Conference on Artificial Intelligence, pages 1257–1263, Seattle, WA. AAAI Press.

[Johnson, 1995] Johnson, W. (1995). Pedagogical agents for virtual learning environments. In Proceedings of the International Conference on Computers in Education, pages 41–48, Singapore. AACE.

[Munro et al., 1993] Munro, A., Johnson, M., Surmon, D., and Wogulis, J. (1993). Attribute-centered simulation authoring for instruction. In Proceedings of the AI-ED 93 World Conference on Artificial Intelligence in Education, pages 82–89, Edinburgh, Scotland.