Cognitive Engineering: Human Problem Solving with Tools

HUMAN FACTORS, 1988, 30(4), 415-430

D. D. WOODS¹ and E. M. ROTH,² Westinghouse Research and Development Center, Pittsburgh, Pennsylvania

¹ Requests for reprints should be sent to David D. Woods, Department of Industrial and Systems Engineering, Ohio State University, 1971 Neil Ave., Columbus, OH 43210.
² Emilie Roth, Department of Engineering and Public Policy, Carnegie-Mellon University, Pittsburgh, PA 15213.

© 1988, The Human Factors Society, Inc. All rights reserved.

Cognitive engineering is an applied cognitive science that draws on the knowledge and techniques of cognitive psychology and related disciplines to provide the foundation for principle-driven design of person-machine systems. This paper examines the fundamental features that characterize cognitive engineering and reviews some of the major issues faced by this nascent interdisciplinary field.

INTRODUCTION

Why is there talk about a field of cognitive engineering? What is cognitive engineering? What can it contribute to the development of more effective person-machine systems? What should it contribute? We will explore some of the answers to these questions in this paper. As with any nascent and interdisciplinary field, there can be very different perspectives about what it is and how it will develop over time. This paper represents one such view.

The same phenomenon has produced both the opportunity and the need for cognitive engineering. With the rapid advances and dramatic reductions in the cost of computational power, computers have become ubiquitous in modern life. In addition to traditional office applications (e.g., word processing, accounting, information systems), computers increasingly dominate a broad range of work environments (e.g., industrial process control, air traffic control, hospital emergency rooms, robotic factories).

The need for cognitive engineering occurs because the introduction of computerization often radically changes the work environment and the cognitive demands placed on the worker. For example, increased automation in process control applications has resulted in a shift in the human role from a controller to a supervisor who monitors and manages semiautonomous resources. Although this change reduces people's physical workload, mental load often increases as the human role emphasizes monitoring and compensating for failures. Thus computerization creates an increasingly larger world of cognitive tasks to be performed. More and more we create or design cognitive environments.

The opportunity for cognitive engineering arises because computational technology also offers new kinds and degrees of machine power that greatly expand the potential to assist and augment human cognitive activities in complex problem-solving worlds, such as monitoring, problem formulation, plan generation and adaptation, and fault management. This is a highly creative time when people are exploring and testing what can be created with the new machine power: displays with multiple windows and even "rooms" (Henderson and Card, 1986). The new capabilities have led to a large amount of activity devoted to building new and more powerful tools, that is, to building better-performing machine problem solvers. The question we continue to face is how we should deploy the power available through new capabilities for tool building to assist human performance. This question defines the central focus of cognitive engineering: to understand what is effective support for human problem solvers.

The capability to build more powerful machines does not in itself guarantee effective performance, as witnessed by early attempts to develop computerized alarm systems in process control (e.g., Pope, 1978) and attempts to convert paper-based procedures to a computerized form (e.g., Elm and Woods, 1985). The conditions under which the machine will be exercised and the human's role in problem solving have a profound effect on the quality of performance. This means that factors related to tool usage can and should affect the very nature of the tools to be used. This observation is not new: in actual work contexts, performance breakdowns have been observed repeatedly with support systems, constructed in a variety of media and technologies including current AI tools, when issues of tool use were not considered (see Roth, Bennett, and Woods, 1987). This is the dark side: the capability to do more amplifies the potential magnitude of both our successes and our failures. Careful examination of past shifts in technology reveals that new difficulties (new types of errors or accidents) are created when the shift in machine power has changed the entire human-machine system in unforeseen ways (e.g., Hirschhorn, 1984; Hoogovens Report, 1976; Noble, 1984; Wiener, 1985).

Although our ability to build more powerful machine cognitive systems has grown and been promulgated rapidly, our ability to understand how to use these capabilities has not kept pace. Today we can describe cognitive tools in terms of the tool-building technologies (e.g., tiled or overlapping windows). The impediment to systematic provision of effective decision support is the lack of an adequate cognitive language of description (Clancey, 1985; Rasmussen, 1986). What are the cognitive implications of some application's task demands and of the aids and interfaces available to the practitioners in the system? How do people behave and perform in the cognitive situations defined by these demands and tools? Because this independent cognitive description has been missing, an uneasy mixture of other types of description of a complex situation has been substituted: descriptions in terms of the application itself (e.g., internal medicine or power plant thermodynamics), of the implementation technology of the interfaces and aids, or of the user's physical activities.

Different kinds of media or technology may be more powerful than others in that they enable or enhance certain kinds of cognitive support functions. Different choices of media or technology may also represent trade-offs between the kinds of support functions that are provided to the practitioner. The effort required to provide a cognitive support function in different kinds of media or technology may also vary. In any case, performance aiding requires that one focus first at the level of the cognitive support functions required and then at the level of what technology can provide those functions or how those functions can be crafted within a given computational technology.

This view of cognitive technology as complementary to computational technology is in stark contrast to another view whereby cognitive engineering is a necessary but bothersome step to acquire the knowledge fuel necessary to run the computational engines of today and tomorrow.

COGNITIVE ENGINEERING IS ...

There has been growing recognition of this need to develop an applied cognitive science that draws on knowledge and techniques of cognitive psychology and related disciplines to provide the basis for principle-driven design (Brown and Newman, 1985; Newell and Card, 1985; Norman, 1981; Norman and Draper, 1986; Rasmussen, 1986). In this section we will examine some of the characteristics of cognitive engineering (or whatever label you prefer: cognitive technologies, cognitive factors, cognitive ergonomics, knowledge engineering). The specific perspective for this exposition is that of cognitive systems engineering (Hollnagel and Woods, 1983; Woods, 1986).

... about Complex Worlds

Cognitive engineering is about human behavior in complex worlds. Studying human behavior in complex worlds (and designing support systems) is itself a case of people engaged in problem solving in a complex world, analogous to the task of other human problem solvers (e.g., operators, troubleshooters) who confront complexity in the course of their daily tasks. Not surprisingly, the strategies researchers and designers use to cope with complexity are similar as well. For example, a standard tactic to manage complexity is to bound the world under consideration. Thus one might address only a single time slice of a dynamic process or only a subset of the interconnections among parts of a highly coupled world. This strategy is limited because it is not clear whether the relevant aspects of the whole have been captured. First, parts of the problem-solving process may be missed or their importance underestimated; second, some aspects of problem solving may emerge only when more complex situations are directly examined. For example, the role of problem formulation and reformulation in effective performance is often overlooked.

Reducing the complexity of design or research questions by bounding the world to be considered merely displaces the complexity to the person in the operational world rather than providing a strategy to cope with the true complexity of the actual problem-solving context. It is one major source of failure in the design of machine problem solvers. For example, the designer of a machine problem solver may assume that only one failure is possible in order to be able to completely enumerate possible solutions and to make use of classification problem-solving techniques (Clancey, 1985). However, the actual problem solver must cope with the possibility of multiple failures, misleading signals, and interacting disturbances (e.g., Pople, 1985; Woods and Roth, 1986).

The result is that we need, particularly in this time of advancing machine power, to understand human behavior in complex situations. What makes problem solving complex? How does complexity affect the performance of human and machine problem solvers? How can problem-solving performance in complex worlds be improved and deficiencies avoided? Understanding the factors that produce complexity, the cognitive demands that they create, and some of the cognitive failure forms that emerge when these demands are not met is essential if advances in machine power are to lead to new cognitive tools that actually enhance problem-solving performance. (See Dorner, 1983; Fischhoff, Lanir, and Johnson, 1986; Klein, in press; Montmollin and De Keyser, 1985; Rasmussen, 1986; and Selfridge, Rissland, and Arbib, 1984, for other discussions of the nature of complexity in problem solving.)

... Ecological

Cognitive engineering is ecological. It is about multidimensional, open worlds and not about the artificially bounded, closed worlds typical of the laboratory or the engineer's desktop (e.g., Coombs and Hartley, 1987; Funder, 1987). Of course, virtually all of the work environments that we might be interested in are man-made. The point is that these worlds encompass more than the design intent; they exist in the world as natural problem-solving habitats.

An example of the ecological perspective is the need to study humans solving problems with tools (i.e., support systems), as opposed to laboratory research that continues, for the most part, to examine human performance stripped of any tools. How to put effective cognitive tools into the hands of practitioners is the sine qua non for cognitive engineering. From this viewpoint, quite a lot could be learned from examining the nature of the tools that people spontaneously create to work more effectively in some problem-solving environment, or examining how preexisting mechanisms are adapted to serve as tools, as occurred in Roth et al. (1987), or examining how tools provided for practitioners are actually put to use. The studies by De Keyser (e.g., 1986) are nearly unique with respect to the latter.

In reducing the target world to a tractable laboratory or desktop world in search of precise results, we run the risk of eliminating the critical features of the world that drive behavior. This creates the problem of deciding what counts as an effective stimulus (as Gibson has pointed out in ecological perception) or, to use an alternative terminology, deciding what counts as a symbol. To decide this question, Gibson (1979) and Dennett (1982), among others, have pointed out the need for a semantic and pragmatic analysis of the relationship between environment and cognitive agent, with respect to the goals and resources of the agent and the demands and constraints in the environment. As a result, one has to pay very close attention to what people actually do in a problem-solving world, given the actual demands that they face (Woods, Roth, and Pople, 1987). Principle-driven design of support systems begins with understanding what the difficult aspects of a problem-solving situation are (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

... about the Semantics of a Domain

A corollary to the foregoing points is that cognitive engineering must address the contents or semantics of a domain (e.g., Coombs, 1986; Coombs and Hartley, 1987). Purely syntactic and exclusively tool-driven approaches to developing support systems are vulnerable to the error of the third kind: solving the wrong problem. The danger is to fall into the psychologist's fallacy of William James (1890), whereby the psychologist's reality is confused with the psychological reality of the human practitioner in his or her problem-solving world. To guard against this danger, the psychologist or cognitive engineer must start with the working assumption that practitioner behavior is reasonable and attempt to understand how this behavior copes with the demands and constraints imposed by the problem-solving world in question. For example, the introduction of computerized alarm systems into power plant control rooms inadvertently so badly undermined the strategies operational personnel used to cope with some problem-solving demands that the systems had to be removed and the previous alarm system restored (Pope, 1978).

The question is not why people failed to accept a useful technology but, rather, how the original alarm system supported, and the new system failed to support, operator strategies for coping with the world's problem-solving demands. This is not to say that the strategies developed to cope with the original task demands are always optimal, or even that they always produce acceptable levels of performance, but only that understanding how they function in the initial cognitive environment is a starting point for developing truly effective support systems (e.g., Roth and Woods, 1988).

Semantic approaches, on the other hand, are vulnerable to myopia. If each world is seen as completely unique and must be investigated tabula rasa, then cognitive engineering can be no more than a set of techniques that are used to investigate every world anew. If this were the case, it would impose strong practical constraints on principle-driven development of support systems, restricting it to cases in which the consequences of poor performance are extremely high.

To achieve relevance to specific worlds and generalizability across worlds, the cognitive language must be able to escape the language of particular worlds, as well as the language of particular computational mechanisms, and identify pragmatic reasoning situations, after Cheng and Holyoak (1985) and Cheng, Holyoak, Nisbett, and Oliver (1986). These reasoning situations are abstract relative to the language of the particular application in question and therefore transportable across worlds, but they are also pragmatic in the sense that the reasoning involves knowledge of the things being reasoned about. More ambitious are attempts to build a formal cognitive language, for example, that of Coombs and Hartley (1987) through their work on coherence in model generative reasoning.

... about Improved Performance

Cognitive engineering is not merely about the contents of a world; it is about changing behavior and performance in that world. This is both a practical consideration (improving performance or reducing errors justifies the investment from the point of view of the world in question) and a theoretical consideration (the ability to produce practical changes in performance is the criterion for demonstrating an understanding of the factors involved). Basic concepts are confirmed only when they generate treatments (aiding either on-line or off-line) that make a difference in the target world. Cheng's concepts about human deductive reasoning (Cheng and Holyoak, 1985; Cheng et al., 1986) generated treatments that produced very large performance changes, both absolutely and relative to the history of rather ineffectual alternative treatments of human biases in deductive reasoning.

To achieve the goal of enhanced performance, cognitive engineering must first identify the sources of error that impair the performance of the current problem-solving system.

This means that there is a need for cognitive engineering to understand where, how, and why machine, human, and human-plus-machine problem solving breaks down in natural problem-solving habitats.

Buggy knowledge (missing, incomplete, or erroneous knowledge) is one source of error (e.g., Brown and Burton, 1978; Brown and VanLehn, 1980; Gentner and Stevens, 1983). The buggy knowledge approach provides a specification of the knowledge structures (e.g., incomplete or erroneous knowledge) that are postulated to produce the pattern of errors and correct responses that characterize the performance of particular individuals. The specification is typically embodied as a computer program and constitutes a "theory" of what these individuals "know" (including misconceptions). Human performance aiding then focuses on providing missing knowledge and correcting the knowledge bugs. From the point of view of computational power, more knowledge and more correct knowledge can be embodied and delivered in the form of a rule-based expert system, following a knowledge acquisition phase that determines the fine-grained domain knowledge.
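To make the buggy knowledge idea concrete, the sketch below models one well-documented subtraction bug of the kind catalogued by Brown and Burton (1978), the "smaller-from-larger" bug, and uses it to judge which knowledge model best reproduces a student's answers. The code is ours, not the cited authors'; the function names, the problem set, and the diagnosis routine are hypothetical illustrations rather than a reconstruction of any cited system.

```python
# A minimal sketch (not from the cited work) of a "buggy knowledge" model in
# the spirit of Brown and Burton's (1978) diagnostic models for subtraction.

def correct_subtract(top, bottom):
    """Standard subtraction (assumes top >= bottom)."""
    return top - bottom

def smaller_from_larger_subtract(top, bottom):
    """Buggy rule: in each column subtract the smaller digit from the larger,
    ignoring which number the digit belongs to, so borrowing never happens."""
    t = str(top)
    b = str(bottom).rjust(len(t), "0")
    columns = [abs(int(td) - int(bd)) for td, bd in zip(t, b)]
    return int("".join(str(d) for d in columns))

def diagnose(problems, observed_answers):
    """Return the knowledge model that best reproduces the observed answers."""
    models = {
        "correct knowledge": correct_subtract,
        "smaller-from-larger bug": smaller_from_larger_subtract,
    }
    scores = {name: sum(model(t, b) == ans
                        for (t, b), ans in zip(problems, observed_answers))
              for name, model in models.items()}
    return max(scores, key=scores.get), scores

problems = [(81, 27), (54, 18), (305, 127)]
student_answers = [66, 44, 222]          # error pattern produced by the buggy rule
print(diagnose(problems, student_answers))
# -> ('smaller-from-larger bug', {'correct knowledge': 0, 'smaller-from-larger bug': 3})
```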

But the more critical question for effective human performance may be how knowledge is activated and utilized in the actual problem-solving environment (e.g., Bransford, Sherwood, Vye, and Rieser, 1986; Cheng et al., 1986). The question concerns not merely whether the problem solver knows some particular piece of domain knowledge, such as the relationship between two entities. Does he or she know that it is relevant to the problem at hand, and does he or she know how to utilize this knowledge in problem solving?

Studies of education and training often show that students successfully acquire knowledge that is potentially relevant to solving domain problems but that they often fail to exhibit skilled performance; consider, for example, differences in solving mathematical exercises versus word problems (see Gentner and Stevens, 1983, for examples). The fact that people possess relevant knowledge does not guarantee that that knowledge will be activated and utilized when needed in the actual problem-solving environment. This is the issue of the expression of knowledge. Education and training tend to assume that if a person can be shown to possess a piece of knowledge in any circumstance, then this knowledge should be accessible under all conditions in which it might be useful. In contrast, a variety of research has revealed dissociation effects whereby knowledge accessed in one context remains inert in another (Bransford et al., 1986; Cheng et al., 1986; Perkins and Martin, 1986). For example, Gick and Holyoak (1980) found that, unless explicitly prompted, people will fail to apply a recently learned problem-solution strategy to an isomorphic problem (see also Kotovsky, Hayes, and Simon, 1985). Thus the fact that people possess relevant knowledge does not guarantee that this knowledge will be activated when needed. The critical question is not to show that the problem solver possesses domain knowledge, but rather to meet the more stringent criterion that situation-relevant knowledge is accessible under the conditions in which the task is performed. This has been called the problem of inert knowledge: knowledge that is accessed only in a restricted set of contexts.

The general conclusion of studies on the problem of inert knowledge is that possession of the relevant domain knowledge or strategies is by itself not sufficient to ensure that this knowledge will be accessed in new contexts. Off-line training experiences need to promote an understanding of how concepts and procedures can function as tools for solving relevant problems (Bransford et al., 1986; Brown, Bransford, Ferrara, and Campione, 1983; Brown and Campione, 1986). Training has to be about more than simply student knowledge acquisition; it must also enhance the expression of knowledge by conditionalizing knowledge to its use via information about "triggering conditions" and constraints (Glaser, 1984). Similarly, on-line representations of the world can help or hinder problem solvers in recognizing what information or strategies are relevant to the problem at hand (Woods, 1986). For example, Fischhoff, Slovic, and Lichtenstein (1979) and Kruglanski, Friedland, and Farkash (1984) found that judgmental biases (e.g., representativeness) were greatly reduced or eliminated when aspects of the situation cued the relevance of statistical information and reasoning. Thus one dimension along which representations vary is their ability to provide prompts to the knowledge relevant in a given context. The challenge for cognitive engineering is to study and develop ways to enhance the expression of knowledge and to avoid inert knowledge. What training content and experiences are necessary to develop conditionalized knowledge (Glaser, 1984; Lesh, 1987; Perkins and Martin, 1986)? What representations cue people about the knowledge that is relevant to the current context of goals, system state, and practitioner intentions (Wiecha and Henrion, 1987; Woods and Roth, 1988)?
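As a toy illustration of what "conditionalizing knowledge to its use" might look like in a support system, the sketch below stores each piece of knowledge together with triggering conditions describing when it applies, so that a description of the current situation can cue retrieval. It is hypothetical: the data structure, the example rules, and the situation features are invented for illustration and are not drawn from the studies cited above.

```python
# Hypothetical sketch: knowledge stored with explicit triggering conditions so
# that the situation itself can cue what is relevant (rather than relying on
# the practitioner to remember to ask).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ConditionalizedKnowledge:
    content: str                             # the fact or strategy itself
    triggers: Callable[[Dict], bool]         # when is it relevant?

knowledge_base: List[ConditionalizedKnowledge] = [
    ConditionalizedKnowledge(
        "Weigh base-rate (statistical) information, not just the vivid case.",
        triggers=lambda s: bool(s.get("judgment_under_uncertainty")
                                and s.get("sample_statistics_available")),
    ),
    ConditionalizedKnowledge(
        "Consider a second, simultaneous fault before committing to a diagnosis.",
        triggers=lambda s: bool(s.get("symptoms_conflict")),
    ),
]

def cued_knowledge(situation: Dict) -> List[str]:
    """Return only the knowledge whose triggering conditions match the situation."""
    return [k.content for k in knowledge_base if k.triggers(situation)]

print(cued_knowledge({"symptoms_conflict": True}))
```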

... about Systems

Cognitive engineering is about systems. One source of tremendous confusion has been an inability to clearly define the "systems" of interest. From one point of view, the computer program being executed is the end application of concern. In this case, one often speaks of the interface, the tasks performed within the syntax of the interface, and human users of the interface. Notice that the application world (what the interface is used for) is deemphasized. The bulk of work on human-computer interaction takes this perspective. Issues of concern include designing for learnability (e.g., Brown and Newman, 1985; Carroll and Carrithers, 1984; Kieras and Polson, 1985) and designing for ease and pleasurableness of use (Malone, 1983; Norman, 1983; Shneiderman, 1986).

A second perspective is to distinguish the interface from the application world (Hollnagel, Mancini, and Woods, 1986; Mancini, Woods, and Hollnagel, in press; Miyata and Norman, 1986; Rasmussen, 1986; Stefik et al., 1985). For example, text-editing tasks are performed only in some larger context such as transcription, data entry, and composition. The interface is an external representation of an application world; that is, a medium through which agents come to know and act on the world: troubleshooting electronic devices (Davis, 1983), logistic maintenance systems, managing data communication networks, managing power distribution networks, medical diagnosis (Cohen et al., 1987; Gadd and Pople, 1987), aircraft and helicopter flight decks (Pew et al., 1986), air traffic control systems, process control accident response (Woods, Roth, and Pople, 1987), and command and control of a battlefield (e.g., Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell and Saisi, 1987). The human is not a passive user of a computer program but is an active problem solver in some world. Therefore we will generally refer to people as domain agents or actors or problem solvers and not as users.

In part, the difference in the foregoing views can be traced to differences in the cognitive complexity of the domain task being supported. Research on person-computer interaction has typically dealt with office applications (e.g., word processors for document preparation or copying machines for duplicating material), in which the goals to be accomplished (e.g., replace word 1 with word 2) and the steps required to accomplish them are relatively straightforward. These applications fall at one extreme of the cognitive complexity space. In contrast, there are many decision-making and supervisory environments (e.g., military situation assessment, medical diagnosis) in which problem formulation, situation assessment, goal definition, plan generation, and plan monitoring and adaptation are significantly more complex.

It is in designing interfaces and aids for these applications that it is essential to distinguish the world to be acted on from the interface or window on the world (how one comes to know that world), and from agents who can act directly or indirectly on the world. The "system" of interest in design should not be the machine problem solver per se, nor should the focus of interest in evaluation be the performance of the machine problem solver alone. Ultimately the focus must be the design and the performance of the human-machine problem-solving ensemble: how to "couple" human intelligence and machine power in a single integrated system that maximizes overall performance.

... about Multiple Cognitive Agents

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to or representations of the world, we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems, often called distributed decision making (e.g., Fischhoff, 1986; Fischhoff et al., 1986; March and Weisinger-Baylon, 1986; Rochlin, La Porte, and Roberts, in press; Schum, 1980).

Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. This raises the problem of how to build a cognitive system that combines both human and machine cognitive systems or, in other words, joint cognitive systems (Hollnagel, Mancini, and Woods, 1986; Mancini et al., in press). When a system includes these machine agents, the human role is not eliminated but shifted. This means that changes in automation are changes in the joint human-machine cognitive system, and the design goal is to maximize overall performance.

One metaphor that is often invoked to frame questions about the relationship between human and machine intelligence is to examine human-human relationships in multiperson problem-solving or advisory situations and then to transpose the results to human-intelligent machine interaction (e.g., Coombs and Alty, 1984). Following this metaphor leads Muir (1987) to raise the question of the role of "trust" between man and machine in effective performance. One provocative question that Muir's analysis generates is, how does the level of trust between human and machine problem solvers affect performance? The practitioner's judgment of machine competence or predictability can be miscalibrated, leading to excessive trust or mistrust. Either the system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence.

Another question concerns how trust is established between human and machine. Trust or mistrust is based on cumulative experience with the other agent that provides evidence about enduring characteristics of the agent, such as competence and predictability. This means that how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine problem solver. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it is adequately debugged), the practitioner's trust in the machine's competence can be undermined.

Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine by affecting his or her ability to see how the machine works, and therefore his or her level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again, the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order, albeit partially autonomous, subordinate. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is a need for a common or shared representation of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise communication between the agents will break down (e.g., Suchman, 1987). Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's "state of mind" (Woods and Roth, 1988a).
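A rough sense of what the second and third requirements imply for software is sketched below: a subordinate machine agent that maintains a shared representation of its current assessment (so the supervisor can track its "state of mind") and that accepts redirection (so the supervisor's authority is more than titular). The class, field, and method names are hypothetical and are not an interface from any system cited in this paper.

```python
# Hypothetical sketch only: one possible shape for a machine problem solver
# that is designed to be supervised, not an API from any cited system.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SharedAssessment:
    """A common frame of reference that both human and machine can inspect."""
    observations: Dict[str, str] = field(default_factory=dict)  # evidence in use
    hypothesis: str = "no fault"          # the machine's current best explanation
    planned_actions: List[str] = field(default_factory=list)    # intended next steps
    confidence: float = 0.0               # how strongly the hypothesis is held

class SubordinateMachineAgent:
    def __init__(self) -> None:
        self.assessment = SharedAssessment()

    def update(self, observations: Dict[str, str]) -> SharedAssessment:
        # Domain-specific diagnosis would go here; the essential point is that
        # the resulting assessment is always visible to the supervisor.
        self.assessment.observations.update(observations)
        return self.assessment

    def redirect(self, new_hypothesis: str, reason: str) -> SharedAssessment:
        # Real (not merely titular) authority: the supervisor can override the
        # machine's framing of the problem and force it to replan from there.
        self.assessment.hypothesis = new_hypothesis
        self.assessment.planned_actions = []     # replan under the new framing
        self.assessment.confidence = 0.0
        print(f"Machine replanning under '{new_hypothesis}' ({reason})")
        return self.assessment

agent = SubordinateMachineAgent()
agent.update({"steam pressure": "falling"})
agent.redirect("steam line break", reason="operator sees containment pressure rising")
```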

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify the capacity for physical work) to perceptual tools (extensions to the human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aides-memoire or decision analysis, AI has certainly increased the interest in and the ability to provide cognitive tools.) In this metaphor, the question of the relationship between machine and human takes the form of asking what kind of tool an intelligent machine is (e.g., Ehn and Kyng, 1984; Suchman, 1987; Woods, 1986). At one extreme, the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving. This could be a local deficiency in the population of expected human practitioners or a global weakness in human reasoning. At the other extreme, the machine can be an instrument in the hands of a fundamentally competent but limited-resource human practitioner (Woods, 1986).

The machine aids the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources such as an expanded field of attention). The extra resources may support improved performance in several ways. One path is to off-load overhead information-processing activities from the person to the machine to allow the human practitioner to focus his or her resources on "higher-level" issues and strategies. Examples include keeping track of multiple ongoing activities in an external memory, performing basic data computations or transformations, and collecting the evidence related to decisions about particular domain issues, as occurred recently with new computer-based displays in nuclear power plant control rooms. Extra resources may help to improve performance in another way by allowing a restructuring of how the human performs the task, shifting performance onto a new, higher plateau (see Pea, 1985). This restructuring concept is in contrast to the usual notion of new systems as amplifiers of user capabilities. As Pea (1985) points out, the amplification metaphor implies that support systems improve human performance by increasing the strength or power of the cognitive processes the human problem solver goes through to solve the problem, but without any change in the underlying activities, processes, or strategies that determine how the problem is solved. Alternatively, the resources provided (or not provided) by new performance aids and interface systems can support restructuring of the activities, processes, or strategies that carry out the cognitive functions relevant to performing domain tasks (e.g., Woods and Roth, 1988). New levels of performance are now possible, and the kinds of errors one is prone to (and therefore the consequences of errors) change as well.

The instrumental perspective suggests that the most effective power provided by good cognitive tools is conceptualization power (Woods and Roth, 1988a). The importance of conceptualization power in effective problem-solving performance is often overlooked because the part of the problem-solving process that it most crucially affects, problem formulation and reformulation, is often left out of studies of problem solving and of the design basis of new support systems. Support systems that increase conceptualization power (1) enhance a problem solver's ability to experiment with possible worlds or strategies (e.g., Hollan, Hutchins, and Weitzman, 1984; Pea, 1985; Woods et al., 1987); (2) enhance their ability to visualize or to make concrete the abstract and uninspectable (analogous to perceptual tools), in order to better see the implications of concepts and to help one restructure one's view of the problem (Becker and Cleveland, 1984; Coombs and Hartley, 1987; Hutchins, Hollan, and Norman, 1985; Pople, 1985); and (3) enhance error detection by providing better feedback about the effects and results of actions (Rizzo, Bagnara, and Visciola, 1987).

... Problem-Driven

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance, that is, the cognitive problems to be solved or the challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with the different cognitive demands imposed by the application world in question and with the characteristics of the cognitive agents, both for existing and prospective changes in the world.

Building a cognitive description is part of a problem-driven approach to the application of computational power. The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance: to meet the cognitive demands of the world, to help the human function more expertly, and to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process can then be deployed in many possible ways, as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information, representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whoever selects the computational mechanisms or to the domain expert who enters knowledge. The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean for problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and to specifying performance boundaries when building "intelligent" machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the "getting lost" phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built on a network data base "shell" with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or "lost": unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations as well as with people knowledgeable in the frame-network software package and in how the procedures were implemented within it.

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a "trail" of activities carried out during the recovery.
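To make the frame structure concrete, here is one plausible rendering of such a network; it is hypothetical (the paper does not show the shell's internals), and the frame identifiers and step texts are invented. Because each content frame holds roughly one step and navigation runs through menu frames, the display gives a narrow, one-frame-at-a-time window on the procedure.

```python
# Hypothetical rendering of the menu-frame / content-frame network described
# above; the identifiers and step texts are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Frame:
    frame_id: str
    kind: str                      # "menu" or "content"
    text: str = ""                 # roughly one procedure step, if a content frame
    links: Dict[str, str] = field(default_factory=dict)   # label -> frame_id

procedure_net = {
    "E-0":   Frame("E-0", "menu", links={"step 1": "E-0.1", "step 2": "E-0.2"}),
    "E-0.1": Frame("E-0.1", "content", "Verify reactor trip.",
                   links={"next": "E-0.2", "up": "E-0"}),
    "E-0.2": Frame("E-0.2", "content", "Verify safety injection.",
                   links={"up": "E-0"}),
}

def view(frame_id: str) -> None:
    """The display shows exactly one frame at a time: a narrow window on the world."""
    f = procedure_net[frame_id]
    print(f"[{f.kind}] {f.text or list(f.links)}")

# Reading ahead even two steps means three separate frame visits, with no view
# of how the steps fit into the larger recovery plan:
for fid in ("E-0", "E-0.1", "E-0.2"):
    view(fid)
```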

Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out, for example, reading ahead. The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose of and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support these demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection and movement options were presented in parallel with procedural information, rather than sequentially (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders). To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying, in parallel with the current relevant options and the current region of interest in the procedures, the rules for possible adaptation of or shifts in the current response plan that are relevant to the current context (a third window).
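Two of these techniques, automatically tracking incomplete steps (electronic bookmarks) and embedding the current step in a skeletal view of the larger plan, can be sketched in a few lines. The sketch is a hypothetical illustration of the idea only; the class names, methods, and display format are invented and are not taken from the redesigned system reported by Elm and Woods (1985).

```python
# Hypothetical sketch of two redesign ideas: electronic bookmarks for
# interrupted steps and a skeletal (fisheye-like) view of the response plan.
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    step_id: str
    text: str
    done: bool = False

class ProcedureSession:
    def __init__(self, steps: List[Step]) -> None:
        self.steps = steps
        self.current = 0
        self.suspended: List[str] = []     # steps interrupted before completion

    def suspend_current(self) -> None:
        """Operator is pulled away (e.g., plant conditions change): bookmark the step."""
        self.suspended.append(self.steps[self.current].step_id)

    def pending(self) -> List[Step]:
        """Bookmarked steps that are still incomplete, directly accessible."""
        return [s for s in self.steps if s.step_id in self.suspended and not s.done]

    def skeletal_view(self, detail_window: int = 1) -> str:
        """Show the current step in detail, its neighbors as a one-line skeleton."""
        lines = []
        for i, s in enumerate(self.steps):
            marker = ">>" if abs(i - self.current) <= detail_window else "  "
            body = s.text if marker == ">>" else "..."
            lines.append(f"{marker} {s.step_id} {body}")
        return "\n".join(lines)

steps = [Step("E-0.1", "Verify reactor trip."),
         Step("E-0.2", "Verify safety injection."),
         Step("E-0.3", "Check containment pressure.")]
session = ProcedureSession(steps)
session.suspend_current()        # interrupted while on E-0.1
session.current = 2
print(session.pending())         # -> the bookmarked, still-incomplete step E-0.1
print(session.skeletal_view())   # current region in detail, rest as skeleton
```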

Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell. As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design.

Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that has already begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.
Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.
Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.
Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.
Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.
Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.
Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.
Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.
Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.

Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.
Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.
Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.
Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.
Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.
Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.
Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.
De Keyser, V. (1986). Les interactions hommes-machine: Caracteristiques et utilisations des differents supports d'information par les operateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liege, Belgium: Psychologie du Travail, Universite de l'Etat a Liege.
Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.
Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.
Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.
Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.
Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.
Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.
Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.
Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.
Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.
Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.
Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.
Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.
Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.
Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.
Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.
Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.
Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.
Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.
Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.
Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.
James, W. (1890). The principles of psychology. New York: Holt.
Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.
Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research, vol. 5. Greenwich, CT: JAI Press.
Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.
Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.
Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.
Malone, T. W. (1983). How do people organize their desks: Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.

Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27.)
March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.
McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.
Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.
Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.
Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.
Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design, and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.
Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.
Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.
Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.
Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.
Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.
Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.
Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories Inc.
Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.
Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In I. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.
Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.
Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.
Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.
Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.
Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an "intelligent" machine. International Journal of Man-Machine Studies, 27, 479-525. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Roth, E. M., and Woods, D. D. (1988). Aiding human performance: I. Cognitive analysis. Le Travail Humain, 51(1), 39-64.
Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallstein (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.

Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.
Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.
Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.
Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.
Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.
Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of the Workshop on Visual Languages. New York: IEEE Computer Society.
Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.
Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.
Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.
Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.
Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.
Woods, D. D., and Roth, E. M. (1988b). Aiding human performance: II. From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.
Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.