Cognitive Tasks. Book chapter, December 2007. DOI: 10.1002/9780470172339.ch39


CHAPTER 36: Cognitive Tasks

Nicolas Marmaras, National Technical University of Athens, Greece
Tom Kontogiannis, Technical University of Crete, Greece


1. INTRODUCTION

In any purposeful human activity there is a blend of physical components, or manipulations, and cognitive components, such as information processing, situation assessment, decision making and planning. In many tasks, however, the cognitive components are more demanding and crucial for task performance than the physical ones. We call these tasks cognitive tasks. Design, managerial and production planning, computer programming, medical diagnosis, process control, air traffic control and fault diagnosis in technological systems are typical examples.

Traditionally, cognitive tasks were assumed to be carried out by white-collar workers, middle and upper management, and the liberal professions. With the advent of information technology and automation in modern industrial settings, however, the role of blue-collar workers has changed from manual controllers to supervisors and diagnosticians. In other words, developments in technology are likely to affect the knowledge and skills required of operators and the way they interact with systems. Consequently, cognitive tasks abound in modern industries.

In view of these changes in cognitive and collaborative demands, ergonomics has a crucial role to play in matching technological options to user requirements. Cognitive ergonomics, or cognitive engineering, is an emerging branch of ergonomics that places particular emphasis on the analysis of the cognitive processes – e.g., diagnosis, decision making and planning – required of operators in modern industries. Cognitive ergonomics aims to enhance the performance of cognitive tasks by means of several interventions, including:
− user-centered design of human-machine interaction;
− design of information technology systems that support cognitive tasks (e.g., cognitive aids);
− development of training programs;
− work redesign to manage cognitive workload and increase human reliability.
Successful ergonomic interventions in the area of cognitive tasks require a thorough understanding not only of the demands of the work situation, but also of the strategies users adopt in performing cognitive tasks and of the limitations of human cognition. In some cases, the artifacts or tools used to carry out a task impose their own constraints and limitations (e.g., navigating through a large number of VDU pages), which add to the total work demands. In this sense, the analysis of cognitive tasks should examine both the interaction of users with their work environment and their interaction with artifacts or tools; the latter is increasingly important as modern artifacts (e.g., control panels, electronic procedures, expert systems) become more sophisticated. For this reason, this chapter puts particular emphasis on how to design man-machine interfaces and cognitive aids so that human performance is sustained in work environments where information may be unreliable, events may be difficult to predict, goals may have conflicting effects, and performance may be time constrained. Familiar but variable situations are also considered, as they too entail the performance of cognitive tasks. Variations from everyday situations are often associated with an increase in human errors, as operators are required to perform several cognitive tasks, such as detecting variations, tailoring old methods or devising new ones, and monitoring performance for errors.

The structure of this chapter is as follows. First, an introduction is given to human performance models that provide the basic framework for a cognitive analysis of the work environment, artifact constraints and user strategies (Sections 2 and 3); design principles and guidelines derived from these models are also presented there. Section 4 describes the methodology of cognitive task analysis. The chapter ends with two case studies presenting the cognitive analysis carried out for the design of cognitive aids for complex tasks (Section 5).

2. MODELS OF HUMAN COGNITION AND DESIGN PRINCIPLES

Models of human cognition that have been extensively used in ergonomics to develop guidelines and “design principles” fall into two broad categories. Models of the first category are based on the classical paradigm of experimental psychology – also called the behavioral approach – and focus mainly on information-processing stages. Behavioral models view humans as "fallible machines" and try to determine the limitations of human cognition in a neutral fashion, independent of the context of performance, the goals and intentions of the users, and the background or history of previous actions. More recent models of human cognition, on the other hand, have been developed mainly through field studies and the analysis of real-world situations. Quite a few of these cognitive models have been inspired by Gibson's (1979) work on ecological psychology, emphasizing the person's intentions, goals and history as central determinants of human behavior.

The human factors literature is rich in behavioral and cognitive models of human performance. Because of space limitations, however, only three generic models that have found extensive application are presented here. Section 2.1 presents a behavioral model developed by Wickens (1992), the human information-processing model. Sections 2.2 and 2.3 present two cognitive models: the action-cycle model of Norman (1988) and the skill-, rule-, and knowledge-based model of Rasmussen (1986).

2.1. The Human Information Processing Model

The model

Experimental psychology has provided a rich literature of human performance models that focus on how humans perceive and process information. Wickens (1992) summarized this literature in a generic model of information processing (Figure 1). The model draws upon a “computer metaphor” whereby information is perceived by appropriate sensors, held and processed in a temporary memory (the working memory, corresponding to RAM) and, finally, acted upon through dedicated actuators. Long-term memory (corresponding to a permanent form of storage, e.g., the hard disk) is used to store well-practiced work methods or algorithms for future use. A brief description of the human information-processing model is given below.
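The “computer metaphor” described above can be made concrete with a minimal sketch, not taken from the chapter: each processing stage of Wickens' model is written as a function, and a stimulus flows through them in sequence. The stage names follow Figure 1; the stimulus, the long-term memory contents and all function names are invented for illustration.

```python
# Illustrative sketch of the information-processing stages as a pipeline.
# Stage names follow Wickens' model; the example content is hypothetical.

def sense(stimulus):
    """Receptors capture the raw stimulus into the short-term sensory store."""
    return {"raw": stimulus}

def perceive(stss_trace):
    """Perception recognizes and classifies the stored stimulus
    (many physical stimuli map onto one perceptual category)."""
    return stss_trace["raw"].lower()

def decide(percept, long_term_memory):
    """Cognitive processes combine the percept with knowledge from LTM."""
    return long_term_memory.get(percept, "ignore")

def execute(response):
    """Response execution produces an overt action."""
    return f"action:{response}"

# A well-practiced stimulus-response mapping stored in LTM (hypothetical).
ltm = {"red light": "brake", "green light": "accelerate"}

action = execute(decide(perceive(sense("RED LIGHT")), ltm))
```

The pipeline also hints at why practice speeds performance: a well-practiced mapping is a direct LTM lookup rather than a deliberate analysis.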


[Figure 1, described: the work environment feeds receptors and short-term sensory memory, which feed perception; cognitive processes (decision making, diagnosis, response selection, …) draw on attention resources, working memory and long-term memory; response execution acts back on the work environment, and a feedback loop closes the cycle.]
Figure 1: The human information processing model (adapted from Wickens 1992)

Information is captured by sensory systems or receptors, e.g., the visual, auditory, vestibular, gustatory, olfactory, tactile and kinesthetic systems. Each sensory system is equipped with a central storage mechanism called the short-term sensory store (STSS), or simply short-term memory. The STSS prolongs a representation of the physical stimulus for a short period after the stimulus has terminated; environmental information appears to be stored in the STSS even if attention is diverted elsewhere. The STSS is characterized by two types of limitation: (i) storage capacity, the amount of information that can be held in the STSS, and (ii) decay, how long information remains there. Although there is strong experimental evidence for these limitations, there is some controversy regarding their numerical values. For example, experiments have shown that short-term visual memory capacity varies from 7 to 17 letters and auditory capacity from 4.4 to 6.2 letters (Card et al. 1986). With regard to decay rate, experiments have shown that it varies from 70 to 1000 msec for visual short-term memory, whereas for auditory short-term memory it varies from 900 to 3500 msec (op. cit.).

In the next stage, perception, stimuli stored in the STSS are processed further: their presence becomes conscious, and they are recognized and identified or classified. For example, a driver first sees a “red” traffic light, then detects it, recognizes that it is a signal related to the driving task and identifies it as a stop sign. A large number of different physical stimuli may be assigned to a single perceptual category: perceiving a, A, a, a or the sound “a” all generate the same categorical perception, the letter “a”.
At the same time, other characteristics or dimensions of the stimulus are also processed, such as whether the letter is spoken by a male or female voice, whether it is written in upper or lower case, and so on. This example shows that stimulus perception and encoding depend on available attention resources and on personal goals and knowledge stored in long-term memory.

Long-term memory (LTM) is where perceived information and knowledge acquired through learning and training are stored permanently. As in working memory, the information in long-term memory can have any combination of auditory, spatial and semantic characteristics. Knowledge itself can be either procedural (i.e., knowing how to do things) or declarative (i.e., knowledge of facts).

The goals of the person provide a workspace within which perceived stimuli and past experiences or methods retrieved from LTM are combined and processed further. This workspace is often called working memory, since the person has to be conscious of, and remember, all presented or retrieved information. The capacity of working memory, however, also seems to be limited. Miller (1956) was the first to quantify it, in his famous paper on “the magical number seven, plus or minus two”. In a series of experiments in which participants carried out absolute judgment tasks, Miller found that the capacity of working memory varied between five and nine items or chunks∗ of information when full attention was deployed. Cognitive processes, such as diagnosis, decision making and planning, operate within this same workspace. Attention is the main regulatory mechanism determining when control passes from perception to cognitive processes or to retrieval from LTM.

At the final stage of information processing, response execution, the responses chosen in the previous stages are carried out. A response can be any kind of action, e.g., eye or head movements, hand or leg movements, or verbal responses. Attention resources are also required at this stage, since intrinsic (e.g., kinesthetic) and/or extrinsic feedback (e.g., visual, auditory) is used to monitor the consequences of the actions performed.

A straightforward implication of the information processing model shown in Figure 1 is that performance can become faster and more accurate when certain mental functions become “automated” through increased practice.
For instance, familiarity with machine drawings may enable maintenance technicians to focus on the cognitive aspects of the task, such as trouble-shooting; less experienced technicians would spend much more time identifying technical components from the drawings and carrying out appropriate tests. As people acquire more experience, they become better able to time-share tasks, since well-practiced aspects of the job become “automated” (that is, they require less attention and effort).

Practical implications

The information processing model has been very useful in examining the mechanisms underlying several mental functions – e.g., perception, judgment, memory, attention, and response selection – and the work factors that affect them. For instance, signal detection theory (Green and Swets 1988) describes a human detection mechanism based on the properties of “response criterion” and “sensitivity”. Work factors, such as knowledge of action results, introduction of false signals, and access to images of defectives, can increase human detection performance and provide a basis for ergonomic interventions. Other mental functions (e.g., perception) have also been analyzed in terms of several processing modes, such as serial and parallel processing, and have contributed principles for designing man-machine interfaces (e.g., head-up displays that facilitate parallel processing of data superimposed on one another). In fact, there are several information processing models addressing specific mental functions, all of which subscribe to the same behavioral approach.
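The "sensitivity" and "response criterion" properties mentioned above can be illustrated with the standard signal detection computation: sensitivity d′ is the difference between the z-transformed hit and false-alarm rates, and the criterion c is minus their average. This sketch is a textbook formula, not taken from the chapter, and the inspection figures are hypothetical.

```python
# Sketch of the standard signal detection indices (d-prime and criterion)
# from hit and false-alarm rates. The numbers below are invented.
from statistics import NormalDist

def sdt_indices(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf                 # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)            # sensitivity
    criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2   # response bias
    return d_prime, criterion

# A hypothetical inspector detects 84% of defectives
# but false-alarms on 16% of good parts.
d, c = sdt_indices(0.84, 0.16)
```

With symmetric hit and false-alarm rates the criterion is unbiased (c ≈ 0), while d′ ≈ 2 indicates moderately good discrimination; an ergonomic intervention that raises the hit rate without raising false alarms raises d′.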



∗ The term chunk denotes a set of adjacent stimulus units that are closely tied together by associations in a person's long-term memory. A chunk may therefore be a letter, a digit, a word, a phrase, a shape, or any other unit.
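The effect of chunking on working-memory load can be shown with a small sketch (the digit string and chunk sizes are hypothetical): ten unrelated digits exceed Miller's seven-plus-or-minus-two limit, while the same digits grouped into three familiar chunks do not.

```python
# Sketch (hypothetical example): chunking reduces the number of items
# competing for working-memory capacity.

def chunk(digits, sizes):
    """Group a digit string into consecutive chunks of the given sizes."""
    chunks, pos = [], 0
    for size in sizes:
        chunks.append(digits[pos:pos + size])
        pos += size
    return chunks

number = "2103772211"
items = list(number)               # 10 separate items: above the 7 +/- 2 range
chunks = chunk(number, [3, 3, 4])  # 3 chunks: comfortably within capacity
```

The same principle motivates the control-panel guideline below: grouped dials are processed as single "perceptual units" rather than as many independent items.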

Because information processing models have examined the mechanisms of mental functions and the work conditions that degrade or improve them, ergonomists have often turned to them to generate guidelines for interface design, training development and the design of job aids. To facilitate information processing, for instance, control panel information can be cast into meaningful chunks, thus increasing the amount of information that can be processed as a single “perceptual unit”. Dials and controls, for example, can be designed and laid out on a control panel according to the following ergonomic principles (Helander 1987; McCormick and Sanders 1987):
− Frequency of use and criticality: dials and controls that are frequently used, or are of special importance, should be placed in prominent positions, for example in the center of the control panel.
− Sequential consistency: when a particular procedure is always executed in a sequential order, controls and dials should be arranged according to this order.
− Topological consistency: where the physical location of the controlled items is important, the layout of dials should reflect their geographical arrangement.
− Functional grouping: dials and controls that are related to a particular function should be placed together.
Many systems, however, fail to adhere to these principles – e.g., the layout of stove burner controls often fails to conform to the topological consistency principle (see Figure 2). Controls located beside their respective burners (Figures 2b and 2d) are compatible and eliminate the confusions caused by the arrangements shown in Figures 2a and 2c.

[Figure 2, described: four stove-top layouts with burners numbered 1–4. In the POOR layouts (a) and (c), the controls are arranged in a row unrelated to the burner positions; in the GOOD layouts (b) and (d), each control is placed beside its burner.]
Figure 2: Alternative layouts of stove burner controls

Information processing models have also found many applications in the measurement of mental workload. The limited-resource model of human attention (Allport 1980) has been extensively used to examine the extent to which different tasks rely upon similar psychological resources or mental functions: presumably, the greater the reliance upon similar mental functions, the higher the mental workload experienced by the human operator. Measures of workload have therefore tended to rely upon the degree to which tasks share similar mental functions and their limited capacities.

Information processing models, however, seem insufficient to account for human behavior in many cognitive tasks where knowledge and strategy play an important role. Recent studies of tactical decision making (Serfaty et al. 1998), for instance, have shown that experienced operators are able to keep their mental workload at low levels, even when work demands increase, because they can change their strategies and their use of mental functions. Under time pressure, for instance, crews may change from an “explicit” mode of communication to an “implicit” mode in which information is made available without prior requests; the crew leader may keep all members informed of the “big picture” so that they can volunteer information when necessary without excessive communication. The performance of experienced personnel thus becomes adapted to the demands of the situation, overcoming the high workload imposed by previous methods of work and communication. This interaction of expert knowledge and strategy in the use of mental functions has been better explored by the two other models of human cognition described below.

2.2. The Action-Cycle Model

The model

Many artifacts seem rather difficult to use, often leading to frustration and human error. Norman (1988) was particularly interested in how equipment design could benefit from models of human performance. He developed the action-cycle model (see Figure 3), which examines how people set goals for themselves and achieve them by acting upon the external world. This action-perception cycle entails two main cognitive processes: one by which people implement goals (the execution process) and one by which they make further adjustments on the basis of perceived changes and evaluations of goals and intentions (the evaluation process).

[Figure 3, described: on the execution side, goals lead to intention, planning of actions and execution of the action sequence, which acts on the world; on the evaluation side, perceiving the state of the world, interpreting the perceptions and evaluating the interpretations feed back to the goals.]
Figure 3: The action-cycle model (adapted from Norman 1988)


The starting point of any action is some notion of what is wanted, i.e., the goal to be achieved. In many real tasks this goal may be imprecisely specified with regard to the actions that would achieve it, e.g., “I want to write a letter”. To lead to action, goals must be transformed into specific statements of what is to be done, called intentions. For example, I may decide to write the letter using a pencil, or a computer, or by dictating it to my secretary. To satisfy an intention, a detailed sequence of actions must be thought of (i.e., planning) and executed by manipulating objects in the world. The evaluation side, in turn, has three stages: first, perceiving what happened in the world; second, making sense of it in terms of needs and intentions (i.e., interpretation); and finally, comparing what happened with what was wanted (i.e., evaluation).

The action-cycle model thus consists of seven stages: one for goals (forming the goal), three for execution (forming the intention, specifying actions, executing actions) and three for evaluation (perceiving the state of the world, interpreting the state of the world, evaluating the outcome). As Norman (1988, p. 48) pointed out, the action-cycle model is an approximate model rather than a complete psychological theory. The seven stages are almost certainly not discrete entities. Most behavior does not require going through all stages in sequence, and most tasks are not carried out by single actions. There may be numerous sequences, and the whole task may last hours or even days. There is a continual feedback loop in which the results of one action cycle are used to direct further ones, goals lead to subgoals, and intentions lead to sub-intentions. There are also action cycles in which goals are forgotten, discarded, or reformulated.
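The seven stages can be sketched as a control loop, which also makes the "continual feedback" point concrete: the cycle repeats until evaluation succeeds or the goal is abandoned. The goal, world model and planner below are all invented for illustration; this is an interpretation of the model, not Norman's own formalization.

```python
# Sketch of Norman's seven-stage action cycle as a feedback loop.
# The goal, world representation and planner are hypothetical.

def action_cycle(goal, world, plan_for, max_cycles=10):
    for _ in range(max_cycles):
        intention = f"make the world satisfy {goal}"  # stage 1-2: goal -> intention
        actions = plan_for(goal, world)               # stage 3: specify the action sequence
        for act in actions:                           # stage 4: execute the actions
            world = act(world)
        perceived = dict(world)                       # stage 5: perceive the world state
        satisfied = perceived.get(goal, False)        # stage 6: interpret the perception
        if satisfied:                                 # stage 7: evaluate against the goal
            return world
    return world  # goal abandoned after repeated cycles

def plan_for(goal, world):
    """Hypothetical planner: a single action that achieves the goal directly."""
    return [lambda w: {**w, goal: True}]

final = action_cycle("letter_written", {"letter_written": False}, plan_for)
```

In real tasks the loop nests: one cycle's evaluation spawns subgoals, each with its own cycle.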

The gulfs of execution and evaluation

The action-cycle model can help us understand many difficulties and errors in using artifacts. Difficulties in use are related to the distance (i.e., the amount of mental work) between intentions and possible physical actions, or between the observed states of an artifact and their interpretation. In other words, problems arise either because the mappings between intended actions and equipment mechanisms are insufficiently understood, or because action feedback is poor. Several gulfs separate the mental representations of the person from the physical states of the environment (Norman 1988).

The gulf of execution reflects the difference between intentions and allowable actions. The more a system allows a person to carry out the intended actions directly, without extra mental effort, the smaller the gulf of execution. A small gulf of execution ensures high usability. Consider, for example, the faucets for cold and hot water. The user's intention is to control two parameters: the water temperature and the volume. Users should therefore be able to do this with two controls, one for each parameter; this would ensure a good mapping between intentions and allowable actions. In conventional settings, however, one faucet controls the volume of cold water and the other the volume of hot water. To obtain water of the desired volume and temperature, users must try several combinations of faucet adjustments, losing valuable time and water. This is an example of a bad mapping between intentions and allowable actions (a large gulf of execution).

The gulf of evaluation reflects the amount of effort that users must exert to interpret the physical state of the artifact and to determine how well their intentions have been met. The gulf of evaluation is small when the artifact provides information about its state that is easy to obtain, easy to interpret, and matches the way the user thinks about the artifact.
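The faucet example above can be made quantitative: the user's intended parameters are (temperature, volume), but conventional controls expose (hot flow, cold flow). Mapping intentions onto controls means mentally inverting the mixing equations, which is exactly the extra work the gulf of execution names. The supply temperatures in this sketch are assumed values.

```python
# Sketch of the faucet mapping problem. Supply temperatures are assumed.
T_HOT, T_COLD = 60.0, 15.0  # deg C, hypothetical hot/cold supply

def mix(hot_flow, cold_flow):
    """World model: what two volume-controls (the faucets) actually produce."""
    volume = hot_flow + cold_flow
    temperature = (hot_flow * T_HOT + cold_flow * T_COLD) / volume
    return temperature, volume

def settings_for(temperature, volume):
    """The inverse mapping the user must work out: faucet settings
    that realize an intended (temperature, volume) pair."""
    hot = volume * (temperature - T_COLD) / (T_HOT - T_COLD)
    return hot, volume - hot

# Intention: 9 L/min of 37 degree water. Which faucet settings achieve it?
hot, cold = settings_for(temperature=37.0, volume=9.0)
check_temp, check_vol = mix(hot, cold)
```

A two-control design mapping one knob to temperature and one to volume would make `settings_for` trivial (the identity), i.e., a small gulf of execution.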

Using the action-cycle model in design

According to the action-cycle model, the usability of an artifact can be increased when its design bridges the gulfs of execution and evaluation. The seven-stage structure of the model can be cast as a list of questions to consider when designing artifacts (Norman 1988). Specifically, the designer may ask how easily the user of the artifact can:
− determine the function of the artifact (i.e., set a goal);
− perceive what actions are possible (i.e., form intentions);
− determine what physical actions would satisfy the intentions (i.e., plan);
− perform the physical actions by manipulating the controls (i.e., execute);
− perceive what state the artifact is in (i.e., perceive the state);
− tell whether his or her intentions and expectations have been achieved (i.e., interpret the state);
− tell whether the artifact is in the desired state or the intentions should be changed (i.e., evaluate intentions and goals)?

A successful application of the action-cycle model in the domain of human-computer interaction is “direct manipulation interfaces” (Shneiderman 1983; Hutchins et al. 1986). These interfaces bridge the gulfs of execution and evaluation by incorporating the following properties (Shneiderman 1982, p. 251):
− visual representation of the objects of interest;
− direct manipulation of objects, instead of commands with complex syntax;
− incremental operations whose effects are immediately visible and, on most occasions, reversible.

The action-cycle model has also been translated into a design philosophy that views “design as knowledge in the world”. Norman (1988) argues that the knowledge required to do a job can be distributed partly in the head and partly in the world. For instance, the physical properties of objects may constrain the order in which parts can be put together, moved, picked up, or otherwise manipulated. Apart from these physical constraints, there are also cultural constraints that rely upon accepted cultural conventions.
For instance, turning a part clockwise is the culturally defined standard for attaching it to another, while an anti-clockwise movement usually dismantles it. Because of these natural and artificial (cultural) constraints, the number of alternatives in any particular situation is reduced, as is the amount of knowledge that must be held in human memory. “Knowledge in the world”, in the form of constraints and labels, explains why many assembly tasks can be performed very precisely even when technicians cannot recall the sequence they followed in dismantling the equipment: physical and cultural constraints reduce the alternative ways in which parts can be assembled. Cognitive tasks are therefore easier to perform when part of the required knowledge is available externally – either explicitly in the world (e.g., labels) or readily derived from constraints.


2.3. The Skill-, Rule- and Knowledge-Based Model

The model

In a study of troubleshooting tasks in real work situations, Rasmussen (1983) observed that people control their interaction with the environment in different modes, and that this interaction depends on a proper match between the features of the work domain and the requirements of the control modes. According to the skill-, rule- and knowledge-based model (or SRK model), control and performance of human activities are a function of a hierarchically organized control system (Rasmussen et al. 1994). Cognitive control operates at three levels: skill-based or automatic control, rule-based or conditional control, and knowledge-based or compensatory control (see Figure 4).

[Figure 4, described: at the skill-based level, sensory inputs feed automated sensorimotor patterns that produce actions; at the rule-based level, recognition and state/task association retrieve stored rules for tasks; at the knowledge-based level, identification and interpretation lead to decision, choice of task and planning, guided by goals.]
Figure 4: The skill-, rule- and knowledge-based model (adapted from Rasmussen 1986)

At the lowest level, skill-based behavior, human performance is governed by patterns of pre-programmed behaviors represented as analogue structures in a time-space domain in human memory. This mode of behavior is characteristic of well-practiced, routine situations in which open-loop or feedforward control makes performance faster. Skill-based behavior is the result of extensive practice, through which people develop a repertoire of “cue-response” patterns suited to specific situations. When a familiar situation is recognized, a response is activated, tailored and applied to the situation; neither a conscious analysis of the situation nor any deliberation over alternative solutions is required.

At the middle level, rule-based behavior, human performance is governed by conditional rules of the type: if <condition> then <action>.


Initially, stored rules are formulated at a general level and subsequently become supplemented with further details from the work environment. Behavior at this level requires conscious preparation: the need for action is first recognized, followed by retrieval of past rules or methods and, finally, composition of new rules, either through self-motivation or through instruction. Rule-based behavior is slower and more cognitively demanding than skill-based behavior. It can be compared to the functioning of an expert system, in which situations are matched against a database of production rules and responses are produced by retrieving or combining different rules. It is worth noting that people may combine rules into macro-rules by collapsing conditions and responses into a single unit. With increasing practice, these macro-rules may become “temporal-spatial” patterns requiring less conscious attention; this illustrates the transition from rule-based to skill-based behavior.

At the highest level, knowledge-based behavior, performance is governed by a thorough analysis of the situation and a systematic comparison of alternative means for action. Goals are explicitly formulated and alternative plans are compared rationally, to maximize efficiency and minimize risks. Alternatives are considered and tested either physically, by trial and error, or conceptually, by means of “thought experiments”. The way the internal structure of the work system is represented by the user is extremely important for performance. Knowledge-based behavior is slower and more cognitively demanding than rule-based behavior, because it requires access to an internal or “mental” model of the system as well as laborious comparisons of work methods to find the optimal one. Knowledge-based behavior is characteristic of unfamiliar situations.

As expertise develops, a shift to lower levels of behavior occurs.
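The expert-system comparison above can be sketched as a tiny production system: rules are condition-action pairs, a matcher fires the first rule whose condition holds, and an unmatched situation falls through to knowledge-based analysis. The rules, alarms and actions are all hypothetical.

```python
# Sketch of rule-based behavior as a production system. Rules are invented.

def fire(rules, state):
    """Fire the first rule whose condition matches the situation."""
    for condition, action in rules:
        if condition(state):
            return action
    # No stored rule matches: shift up to knowledge-based behavior.
    return "escalate to knowledge-based analysis"

rules = [
    (lambda s: s.get("alarm") == "high pressure", "open relief valve"),
    (lambda s: s.get("alarm") == "low level", "start feed pump"),
]

def macro(rule_a, rule_b):
    """Collapse two rules into one macro-rule: the first condition
    triggers the chained responses as a single unit."""
    (cond_a, act_a), (_cond_b, act_b) = rule_a, rule_b
    return (cond_a, f"{act_a}; then {act_b}")

matched = fire(rules, {"alarm": "high pressure"})      # familiar situation
fallback = fire(rules, {"alarm": "oscillating flow"})  # unfamiliar situation
```

The `macro` helper mirrors the practice effect described above: two consciously chained rules become one unit, and with further practice that unit may sink to the skill-based level.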
This does not, however, mean that experienced operators always work at the skill-based level. Depending on the novelty of the situation, experts may move to higher levels of behavior when uncertain about a decision. The shift to the appropriate level of behavior is another characteristic of expertise and of graceful performance in complex work environments.

Using the SRK model

The SRK model has been used by Reason (1990) as a framework for assigning errors to categories related to the three levels of behavior. Deviations from current intentions due to execution failures and/or storage failures, for instance, are errors related to skill-based behavior (slips and lapses). Misclassifications of the situation, leading to the application of a wrong rule or an incorrect procedure, are errors occurring at the rule-based level. Finally, errors due to limitations in cognitive resources (“bounded rationality”) and incomplete or incorrect knowledge are characteristic of knowledge-based behavior. More subtle forms of error occur when experienced operators fail to shift to higher levels of behavior: they keep operating at the skill- or rule-based level although the situation calls for analytical comparison of options and re-assessment of the situation (i.e., knowledge-based processing).

The SRK model also provides a useful framework for designing man-machine interfaces for complex systems. Vicente and Rasmussen (1992) advanced the concept of ecological interfaces, which exploit the powerful human capabilities of perception and action while, at the same time, providing appropriate support for the more effortful and error-prone cognitive processes. Ecological interfaces aim at presenting information “in such a way as not to force cognitive control to a higher level than the demands of the task require, while at the same time providing the appropriate support for all three levels” (op. cit., p. 598). To achieve these goals, interfaces should meet the following three principles:
I. Support skill-based behavior: the operator should be able to act directly on the display, while the structure of the presented information should be isomorphic to the part-whole structure of eye and hand movements.
II. Support rule-based behavior: the interface should provide a consistent one-to-one mapping between the constraints of the work domain and the cues or signs presented on the interface.
III. Support knowledge-based behavior: the interface should represent the work domain in the form of an abstraction hierarchy∗ to serve as an externalized “mental model” that supports problem solving.

[Figure 5, described: a plant mimic showing reactor, heat exchanger, steam generator, electric superheater, turbine, condenser, drum, hot well and feedwater heaters, with temperature profiles of the primary and secondary coolant circulation and of the water circulating in the evaporator, superheater, turbine and condenser, together with heat-balance readings (reactor power 62.5 MW; secondary 62.5 MW; steam 61.0 MW; 60.5 MW; electric superheater 19.8 MW) and indicators for drum pressure and level and primary flow.]
Figure 5: An ecological interface for monitoring the operation of a nuclear power plant (© Lindsay and Staffon 1988) ∗

Abstraction hierarchy is a framework proposed by Rasmussen (1985, 1994), useful for representing the cognitive constraints of a complex work environment. For example, the constraints related to process control have been found to belong to five hierarchically ordered levels (Vincente and Rasmussen 1992; p. 592): the purpose for which the system was designed (functional purpose); the intended causal structure of the process in terms of mass, energy, information or value flows (abstract purpose); the basic functions that the plant is designed to achieve (generalized function); the characteristics of the components and the connections between them (physical function); and the appearance and special location of these components (physical form). 12

An example of an ecological interface that supports direct perception of higher-level functional properties is shown in Figure 5. This interface was designed by Lindsay and Staffon (1988) and is used for monitoring the operation of a nuclear power plant. Graphical patterns show the temperature profiles of the coolant in the primary and secondary circulation systems, as well as the profiles of the feedwater in the steam generator, superheater, turbine and condenser. Note that although the display is based on primary sensor data, the functional status of the system can be read directly from the interrelationships among the data. The Rankine cycle display presents the constraints of the feedwater system in a graphic visualization format that workers can use to guide their actions; in doing so, workers are able to rely on their perceptual processes rather than analytical processes (e.g., having to solve a differential equation or engage in some other abstract manipulation). Plant transients and sensor failures can be shown as distortions of the rectangles representing the primary and secondary circulation, or as temperature points of the feedwater system lying outside the Rankine cycle. The display can be read, at the discretion of the observer, at the level of the physical implications of the temperature readings, the state of the coolant circuits, or the flow of energy.

3. DIAGNOSIS, DECISION-MAKING AND ERGONOMICS

After this brief review of behavioral and cognitive models, we can now focus on two complex cognitive processes, namely diagnosis and decision-making. These cognitive processes are brought into play in many problem-solving situations, where task goals may be insufficiently specified and responses may not benefit from past knowledge. These characteristics are common to many problem-solving scenarios and affect how people shift to different modes of cognitive control. Problem solvers may use experiences from similar cases in the past, apply generic rules related to a whole category of problems, or try alternative courses of action and assess their results. In other words, optimizing problem solving on the basis of knowledge-based behavior may be time-consuming and laborious. People tend to use several heuristics to regulate their performance between rule-based and knowledge-based processing. This increases task speed but may result in errors that are difficult to recover from. In what follows, we present a set of common heuristics followed by experienced diagnosticians and decision-makers, the potential biases and errors that may arise, and, finally, ergonomic interventions to support human performance.

3.1. Diagnosis

Diagnosis is a cognitive process whereby a person tries to identify the causes of an undesirable event or situation. Technical failures and medical problems are two well-known application areas of human diagnosis. Figure 6 presents some typical stages of the diagnosis process. Diagnosis starts with the perception of signals alerting to a system failure or malfunction. Following this, diagnosticians may choose whether to search for more information to develop a mental representation of the current system state. At the same time, knowledge about system structure and functioning can be retrieved from long-term memory. On the basis of this evidence, diagnosticians may generate hypotheses about possible causes of the failure. Depending on further tests, hypotheses may be

confirmed, which completes the diagnosis process, or rejected, leading to the selection of new hypotheses. Compensation for failures may start when the diagnostician feels confident that a correct interpretation of the situation has been made.

Figure 6: Typical processes in fault diagnosis. [The figure depicts the following flow: perception of signs of a system failure/malfunction → search for information about the system's state and recall of knowledge concerning the system → hypothesis generation → planning and executing actions for hypothesis testing → hypotheses confirmed (leading to decisions for further actions, e.g. error recovery) or hypotheses rejected (returning to hypothesis generation).]

Fault diagnosis is a demanding cognitive activity, in which the particular diagnostic strategy is influenced by the amount of information to be processed in developing a "mental model" of the situation, the number of hypotheses consistent with the available evidence, and the activities required for testing hypotheses. In turn, these factors are influenced by several characteristics of the system or the work environment, including:
− the number of interacting components of the system;
− the degrees of freedom in the operation of the system;
− the number of system components that can fail simultaneously;
− the transparency of the mechanisms of the system.
Time constraints and high risk may add to these factors, further increasing the difficulty of fault diagnosis. A particularly demanding situation is dynamic fault management (Woods 1994), whereby operators have to maintain system functions despite technical failures or disturbances. Typical fields of practice where dynamic fault management occurs are flight-deck operations, control of space systems, anesthetic management and process control. When experienced personnel perform fault diagnosis, they tend to use several heuristics to overcome their cognitive limitations. Although heuristics may decrease mental workload, they can also lead to cognitive biases and errors. It is worth considering some heuristics commonly used when searching for information, interpreting a situation and making diagnostic decisions.


Failures in complex systems may give rise to large amounts of information to be processed. Experienced people tend to filter information according to its informativeness, i.e., the degree to which it helps distinguish one failure from another. However, people may be biased when applying heuristics. For instance, diagnosticians may accord erroneous or equal informative value to a pattern of cues ("as if" bias; Johnson et al. 1973). In other cases, salient cues – e.g., noises, bright lights, and abrupt onsets of intensity or motion – may receive disproportionate attention to the extent that others are neglected ("salience bias"; Payne 1980). Diagnosis may be carried out under psychological stress, which can limit the set of hypotheses about the situation that can be entertained (Rasmussen 1981; Mehle 1982; Lusted 1976). To overcome this cognitive limitation, experienced diagnosticians may use alternative strategies for hypothesis generation and testing. For instance, diagnosticians may start with the hypothesis that is seen as the most probable one; this probability may be subjective, based on past experience, or communicated by colleagues or superiors. Alternatively, they may start with a hypothesis associated with a high-risk failure, or a hypothesis that can be easily tested. Finally, they may start with a hypothesis that readily comes to mind ("availability" bias; Tversky & Kahneman 1974). Another heuristic used in diagnosis is "anchoring", whereby initial evidence provides a cognitive "anchor" that sustains the diagnostician's belief in one hypothesis over the others (Tversky & Kahneman 1974). Consequently, people may tend to seek information that confirms the initial hypothesis and avoid any disconfirming evidence (Einhorn & Hogarth 1981; DeKeyser & Woods 1990). This bias is also known as "cognitive tunnel vision" (Sheridan 1981).
Finally, Einhorn & Hogarth (1981), Schustack & Sternberg (1981) and DeKeyser & Woods (1990) describe situations where diagnosticians tend to seek – and therefore find – information that confirms the chosen hypothesis, and to avoid information or tests whose outcomes could reject it. This is known as the "confirmation bias". Two possible causes for the confirmation bias have been proposed: (i) people seem to have greater cognitive difficulty dealing with negative information than with positive information (Clark & Chase 1972); (ii) abandoning a hypothesis and formulating a new one requires more cognitive effort than searching for and acquiring information consistent with the first hypothesis (Einhorn & Hogarth 1981; Rasmussen 1981).

3.2. Decision Making

Decision making is a cognitive process whereby a person tries to choose a goal, and a method for achieving it, that would stabilize the system or increase its effectiveness. In real-world situations, goals may be insufficiently defined and, thus, goal-setting becomes part of decision making. Furthermore, the evaluation criteria for choosing among options may vary, including economic, safety and quality considerations. In such cases, performing well on most criteria may become the basis for a good decision. Managerial planning, political decisions and system design are typical examples of decision making. Traditional models of decision making have adopted a rational approach that entails: (i) determining the goal(s) to be achieved and the evaluation criteria, (ii) examining aspects of the work environment, (iii) developing alternative courses of action, (iv) assessing

alternatives, and (v) choosing an optimal course of action. Rational models of decision making view these stages as successive; in real fields of practice, however, their sequence may change (Marmaras et al. 1992; Klein 1997; Dreyfus 1997) as a function of several factors in the work environment. Factors that influence the difficulty of decision making include:
− the amount of information that the decision maker has to consider;
− the uncertainty of the available information;
− the dynamic nature of the work environment;
− the complexity of the evaluation criteria;
− the alternative courses of action that can be developed;
− the time constraints imposed on the decision maker;
− the risk related to possible decisions.
To overcome limitations in human cognition, experienced decision makers may use a number of heuristics. As in diagnosis, heuristics may allow decision makers to cope with complexity in real-world situations but, on the other hand, they may lead to cognitive biases and errors. For instance, decision makers may use information selectively and make speculative inferences based on limited data. Hayes-Roth (1979) has called this heuristic "opportunistic thinking"; it is similar to Simon's (1978) concept of "satisficing and search cost" – in other words, decision makers may pursue a trade-off between seeking more information and minimizing the cost of obtaining it. Although this strategy can simplify decision making, important data may be neglected, leading to erroneous or sub-optimal decisions. To cope with complexity and time pressure, decision makers tend to acquire just enough evidence to form a mental representation of the situation and examine a rather limited set of alternative actions (Marmaras et al. 1992). Instead of generating a complete set of alternatives at the outset and subsequently performing an evaluation based on optimizing several criteria, decision makers may start with an option that has been only incompletely assessed.
This heuristic can be attributed to the fact that experienced decision makers possess a repertoire of well-practiced responses accessed through recognition rather than conscious search (Simon 1978). Limited consideration of alternatives, however, can lead to ineffective practices. For instance, if the situation at hand differs in subtle ways from previous ones, sub-optimal solutions may be adopted. Furthermore, in domains such as system design and managerial planning, new innovative solutions and radical departures may be of great advantage. In dynamic systems, the distinguishing features of a situation may change over time and new events may add up. This implies that operators should reflect on their thinking and revise their assessment of the situation or their earlier decisions in order to take account of new evidence, interruptions and negative feedback (Weick 1983; Schon 1983; Lindblom 1980). Under stress, however, experienced decision makers may "fixate" on earlier decisions, failing to revise them at later stages. Thinking/acting cycles may compensate, to some extent, for cognitive fixation on earlier decisions. That is, initial decisions can be put into effect on the basis of related experiences from similar situations, but their suitability is evaluated after the first outcomes; in this way, decision makers can undertake corrective actions and tailor earlier decisions to new circumstances.
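The "good enough" strategy described above is close to Simon's notion of satisficing: options are considered in the order in which they come to mind, and the first one that meets an aspiration level is accepted, rather than scoring the complete set of alternatives. The sketch below is a minimal illustration under our own assumptions; the plans and their payoffs are invented.

```python
# Minimal sketch of satisficing (Simon): accept the first option that meets an
# aspiration level instead of exhaustively optimizing. Plans/payoffs are invented.
def satisfice(options, score, aspiration):
    """Return the first option whose score meets the aspiration level,
    falling back to the best option seen if none is good enough."""
    for option in options:                # options in order of retrieval from memory
        if score(option) >= aspiration:
            return option                 # good enough -> stop searching
    return max(options, key=score)        # degenerate case: optimize after all

plans = ["reuse last week's schedule", "swap two shifts", "full re-plan"]
expected_payoff = {"reuse last week's schedule": 0.60,
                   "swap two shifts": 0.80,
                   "full re-plan": 0.95}
print(satisfice(plans, expected_payoff.get, aspiration=0.75))  # swap two shifts
```

The sketch also makes the pitfall noted above visible: the quality of the outcome depends entirely on the order in which options are retrieved and on the aspiration level, so an innovative "full re-plan" may never be examined even when it would score highest.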


3.3. Supporting Diagnosis and Decision Making

To overcome these weaknesses of human cognition, many engineering disciplines – e.g., artificial intelligence, operations research and supervisory control – have sought to develop stand-alone expert systems that support humans in diagnosis and decision making. Although these artificial advisors have made a significant contribution to system design, their performance tends to degrade in unfamiliar situations, and humans find it rather awkward to cooperate with them. Operators are forced to repeat the whole diagnostic or decision-making process instead of taking over from the computer advisor. There is a need, therefore, to develop cognitive advisors that enhance the cognitive processes of operators, rather than computer advisors capable of independent performance (Woods & Hollnagel 1987; Roth et al. 1987). In order to combat the biases related to diagnosis and decision making, support can be provided to human operators in the following ways:
− bring together all the information required to form a mental representation of the situation;
− present information in appropriate visual forms;
− provide memory aids;
− design cognitive aids for overcoming biases;
− make systems transparent to facilitate perception of different system states;
− incorporate intelligent facilities for hypothesis testing and evaluation of options.
Ecological interfaces (Vicente & Rasmussen 1992), along the lines presented earlier, provide a good example of artifacts that meet some of these requirements. Other cognitive aids are presented in section 5. Real-world situations requiring diagnosis and decision making can vary, each presenting its own specific characteristics. As a result, designing artifacts and cognitive aids requires a thorough analysis of the demands of the situation, the user requirements, and the strategies users employ to achieve satisfactory performance.
This type of analysis is usually referred to as cognitive analysis and is valuable in putting human performance models into actual practice.

4. COGNITIVE TASK ANALYSIS (CTA)

Ergonomics interventions in terms of man-machine interfaces, cognitive aids or training programs require a thorough understanding of the work constraints and user strategies. Work constraints can range from system constraints (e.g., time pressure and conflicting goals) to cognitive constraints or limitations, and "use" constraints imposed by the tool itself (e.g., an interface or a computer program). On the other hand, user strategies can range from informal heuristics, retrieved from similar situations experienced in the past, to optimisation strategies for unfamiliar events. The analysis of work constraints and "use" strategies is the main objective of cognitive task analysis (CTA). Cognitive task analysis involves a consideration of user goals, means and work constraints, in order to identify the what, how and why of operators' work (Rasmussen et al. 1994; Marmaras & Pavard 2000). Specifically, CTA can be used to identify:

− the problem-solving and self-regulation strategies∗ adopted by operators;
− the problem-solving processes followed (heuristics);
− the specific goals and sub-goals of operators at each stage in these processes;
− the signs available in the work environment (both formal and informal signs), the information carried by them and the significance attached to them by humans;
− the regulation loops used;
− the resources of the work environment that could help manage workload;
− the causes of erroneous actions or sub-optimal performance.

CTA differs from traditional task analysis, since the latter describes the performance demands imposed upon the human operator in a neutral fashion (Drury et al. 1987; Kirwan & Ainsworth 1992), regardless of how operators perceive the problem and how they choose their strategies. Furthermore, CTA differs from methods of job analysis that look at occupational roles and positions of specific personnel categories (Davis & Wacker 1987; Drury et al. 1987).

4.1. A Framework for Cognitive Task Analysis

Cognitive task analysis deals with how operators respond to tasks delegated to them either by the system or by their supervisors. The term tasks is used here to designate the operations undertaken to achieve certain goals under a set of conditions created by the work system∗∗ (Leplat 1990). CTA looks at the several mental activities or processes that operators rely upon in order to assess the current situation, make decisions and formulate plans of action. In this respect, CTA can comprise several stages, including:
1. Systematic observation and recording of operators' actions in relation to the components of the work system; the observed activities may include body movements and postures, eye movements, verbal and gestural communications, etc.
2. Interviewing the operators with the aim of identifying the why and when of the observed actions.
3. Inference of operators' cognitive activities and processes.
4. Formulation of hypotheses about operators' competencies∗∗∗, by interpreting their cognitive activities with reference to work demands, cognitive constraints and possible resources to manage workload.
5. Validation of hypotheses by repeating stages (1) and (2), as required.
Techniques such as video and tape recording, equipment mock-ups, and eye tracking can be used to collect data regarding observable aspects of human performance. To explore the internal or cognitive processes underlying observable actions, however, we need to consider other techniques, such as thinking aloud while doing the job (i.e., verbal protocols) and retrospective verbalisations. For high-risk industries, event scenarios and simulation methods may be used when on-site observations are difficult or impossible.

∗ Self-regulation strategies are strategies for deciding how to adapt to different circumstances, monitor complete or interrupted tasks, and detect errors.
∗∗ The work system consists of the technological system, the workplace, the physical environment, the organizational and management system, and the socio-economic policies.
∗∗∗ The term competencies is used here to designate the specific cognitive strategies and heuristics that operators develop and use to respond to the task's demands, within the constraints imposed by a specific work environment.

CTA should cover a range of work situations representing both normal and degraded conditions. As the analysts develop a better image of the investigated scenario, they may become increasingly aware of the need to explore other, unfamiliar situations. Cognitive analysis should also be extended to how different operating crews respond to the same work situation. Presumably, crews may differ in their performance because of varying levels of expertise, different decision-making styles and different coordination patterns (see, for example, Marmaras et al. 1997). The inference of cognitive activities and the formulation of hypotheses concerning user competencies require familiarity with models of human cognition, as offered by cognitive psychology, ethnology, psycholinguistics and organisational psychology. Several theoretical models, such as those cited earlier, have already found practical application in eliciting the cognitive processes of expert users. Newell and Simon's (1972) human problem-solving paradigm, for instance, could serve as a background framework for inferring operators' cognitive processes when they solve problems. Hutchins' (1990, 1992) theory of distributed cognition could support analysts in identifying operator resources for managing workload. Rasmussen's (1986) "ladder" model of decision making could be used to examine how people diagnose problems and evaluate goals. Norman's (1988) action-cycle model could be used to infer the cognitive activities in control tasks. Finally, Reason's (1990) model of human errors could be used to classify, explain and predict potential errors as well as the underlying error-shaping factors. Human performance models, however, have a hypothetical rather than a normative value for the analyst. They constitute his or her background knowledge and may support the interpretation of observable activities and the inference of cognitive activities.
Cognitive analysis may confirm these models (totally or partially), enrich them, indicate their limits or reject them. Consequently, although the main scope of cognitive analysis is the design of artefacts and cognitive advisory systems for complex tasks, it can also provide valuable insights at a theoretical level. Models of human cognition and behavior can provide practical input to ergonomics interventions when cast in the form of cognitive probes, or questions regarding how operators search their environment, assess the situation, make decisions, plan their actions, and monitor their own performance. Table 1 shows a list of cognitive probes to help analysts infer the cognitive processes that underlie observable actions and errors. The need to observe, interview, test, and probe operators in the course of cognitive analysis implies that the role of operators in the conduct of the analysis is crucial. Inference of cognitive activities and elicitation of competencies cannot be realised without their active participation. Consequently, explicit presentation of the scope of the analysis, and securing their willingness to provide information, are prerequisites in the conduct of CTA. Furthermore, retrospective verbalisations and on-line verbal protocols are central to the proposed methodology (Ericsson and Simon 1984; Sanderson et al. 1989). CTA permits the development of a functional model of the work situation. The functional model should:
1. Describe the cognitive constraints and demands imposed on operators, including multiple goals and competing criteria for the good completion of the task; unreliable, uncertain, or excessive information on which to base a decision; and time restrictions.


2. Identify situations where human performance may become ineffective, as well as their potential causes – e.g., cognitive demands exceeding operator capacities, or strategies that are effective under normal situations but inappropriate for the new one.
3. Identify error-prone situations and causes of errors or cognitive biases – e.g., irrelevant or superfluous information, inadequate work organisation, poor workplace design, and insufficient knowledge.
4. Describe the main elements of operators' competencies and determine their strengths and weaknesses.
5. Describe how resources of the environment can be used to support the cognitive processes.

Table 1: Examples of cognitive probes

Recognition
♦ What features were you looking at when you recognized that a problem existed?
♦ What was the most important piece of information that you used in recognizing the situation?
♦ Were you reminded of previous experiences in which a similar situation was encountered?

Interpretation
♦ At any stage, were you uncertain about either the reliability or the relevance of the information that you had available?
♦ Did you use all the information available to you when assessing the situation?
♦ Was there any additional information that you might have used to assist in assessing the situation?

Decision making
♦ What were your specific goals at the various decision points?
♦ Were there any other alternatives available to you other than the decision you made?
♦ Why were these alternatives considered inappropriate?
♦ At any stage, were you uncertain about the appropriateness of the decision?
♦ How did you know when to make the decision?

Planning
♦ Are there any situations in which your plan of action would have turned out differently?
♦ When you do this task, are there ways of working smart (e.g., combining procedures) that you have found especially useful?
♦ Can you think of an example when you improvised in this task or noticed an opportunity to do something better?
♦ Have you thought of any side-effects of your plan and possible steps to prevent them or minimize their consequences?

Feedback and self-monitoring
♦ What would this result tell you in terms of your assessment of the situation or the efficiency of your actions?
♦ At this point, do you think you need to change the way you were performing to get the job done?


The functional model of the work situation can provide valuable input into the specification of user requirements, prototyping, and the evaluation of cognitive aids. Specifically, based on elements 1, 2 and 3 of the functional model, the situations and tasks for which cognitive aid would be desirable, and the ways in which such aid should be provided, can be determined and specified. This investigation can be made by responding to questions such as:
− What other information would be useful to the operators?
− Is there a more appropriate form in which to present the information already used, as well as the additional new information?
− Is it possible to increase the reliability of the information?
− Could the search for information be facilitated, and how?
− Could the processing of information be facilitated, and how?
− Could we provide memory supports, and how?
− Could we facilitate the complex cognitive activities carried out, and how?
− Could we promote and facilitate the use of the most effective diagnosis and decision-making strategies, and how?
− Could we provide supports that would decrease mental workload and mitigate degraded performance, and how?
− Could we provide supports that would decrease the occurrence of human errors, and how?
Cognitive aids can take several forms, including memory aids, computational tools, decision aids to avoid cognitive biases, visualisation of equipment that is difficult to inspect, situation assessment aids, and so on. The functional model is also useful in designing man-machine interfaces to support the retrieval of solutions and the generation of new methods. By representing system constraints on the interface, operators may be supported in predicting the side-effects of specific actions. On the other hand, the functional model can also be useful in specifying the competencies and strategies required in complex tasks and, hence, can provide the content of skill training.
Furthermore, based on the information provided by elements 4 and 5 of the functional model, the main features of the human-machine interface can also be specified, ensuring compatibility with operators' competencies. The way task objects should be represented by the system, the type of man-machine dialogues to be used, the procedures to be proposed, and the generic or customisable elements of the system are examples of human-computer interface features that can be specified using the acquired data. Close cooperation between ergonomics specialists, information technology specialists and the stakeholders of the design project is required in order to examine which system functions should be supported by the available information technology, which features of the human-computer interface should be realised, and which functions should be given priority.

4.2. Techniques for Cognitive Task Analysis

CTA can be carried out using a variety of techniques which, according to Redding and Seamster (1994), include: cognitive interviewing, analysis of verbal protocols, multidimensional scaling, computer simulations of human performance, and human error analysis. For instance, Rasmussen (1986) has conducted cognitive interviews to examine

the trouble-shooting strategies used by electronics technicians. Roth et al. (1992) have used a cognitive environment simulation to investigate cognitive activities in fault management in nuclear power plants. Seamster et al. (1993) have carried out extensive cognitive task analyses to specify instructional programs for air traffic controllers. These CTA techniques have been used both to predict how users perform cognitive tasks on prototype systems and to analyze the difficulties and errors in already functioning systems. The former use is associated with the design and development of user interfaces for new systems, while the latter is associated with the development of decision support systems or cognitive aids, and training programs. The results of CTA are usually cast in the form of graphical representations that incorporate the work demands and user strategies. For cognitive tasks that have been encountered in the past, operators may have developed well-established responses which may need some modification but nevertheless provide a framework to start with. For unfamiliar tasks, which have not been encountered in the past or are beyond the design basis of the system, operators are required to develop new methods or combine old methods in new ways. To illustrate how the results of CTA can be merged in graphical form, two techniques are presented, namely Hierarchical Task Analysis and the Critical Decision Method.

4.2.1. Hierarchical Task Analysis

The human factors literature is rich in task analysis techniques for situations and jobs requiring rule-based behavior (e.g., Kirwan & Ainsworth 1992). Some of these techniques can also be used for the analysis of cognitive tasks where well-practiced work methods must be adapted to task variations and new circumstances. This can be achieved provided that the task analysis goes beyond the recommended work methods and explores task variations that can cause failures of human performance. Hierarchical task analysis (Shepherd 1989), for instance, can be used to describe how operators set goals and plan their activities in terms of work methods, antecedent conditions, and expected feedback. When the analysis is expanded to cover not only normal situations but also task variations or changes in circumstances, it becomes possible to record the ways in which humans may fail and how they could recover from errors. Table 2 shows an analysis of a process control task in which operators start up an oil-refinery furnace. This is a safety-critical task because many safety systems are on manual mode, radio communications between control room and on-site personnel are intensive, side-effects are not visible (e.g., accumulation of fuel in the fire box) and errors can lead to furnace explosions.


Table 2: Extract from the analysis of starting an oil-refinery furnace (a variant of Hierarchical Task Analysis).

2. Establish flames on all burners
Work method (planning): Light first burner (2.1) and take safety precautions (2.2) if flame isn't "healthy". Proceed with other burners.
Antecedent conditions (planning): Furnace has been purged with air and is free of residual fuel.
Feedback: "Healthy" flames on all burners.
Common errors and problem detection:
 Selects wrong equipment to test furnace atmosphere.
 Repeated failures to ignite burners may require shutting down the furnace.

2.1. Light selected burner
Work method: Remove slip-plate, open burner valve and ignite burner.
Antecedent conditions: Flame has not been extinguished more than twice before.
Feedback: "Healthy" flame on selected burner.
Common errors and problem detection:
 Forgets to light and insert torch before removing slip-plate from fuel pipe; an explosive mixture could be created in the event of a leaking block valve.
 Tries to light burner by flashing off from adjacent burners.
 Does not check and reset safety trips, which gives rise to the burner failing to ignite.

2.2. Take safety precautions
Work method: Close burner valve and inject steam.
Antecedent conditions: Burner flame has been extinguished.
Feedback: Steam injection for 2-3 minutes.
Common errors and problem detection:
 Forgets to put air/fuel ratio control and furnace temperature control on manual; this can cause substoichiometric firing.
 Forgets to investigate the problem if a burner does not light after two attempts.

3. Gradually increase furnace load to target temperature
Work method: Adjust fuel flow (3.1) and air flow (3.2) in cycles until thermal load is achieved.
Antecedent conditions: All burners remain lighted throughout the operation.
Feedback: Temperature of crude oil on target.
Common errors and problem detection:
 In the event of a burner failure (e.g., flame put out), compensates by increasing fuel to other burners.

3.1. Adjust flow of fuel supply
Work method: Monitor fuel supply indicator and adjust fuel valve.
Antecedent conditions: Burners are fitted with suitable fuel diffusion plugs.
Feedback: Temperature of crude oil on target.
Common errors and problem detection:
 Does not consult criterion table for changing plugs at different fuel supplies.

3.2. Adjust supply of combustion air
Work method: Monitor oxygen in flue gases and adjust air damper to keep oxygen above limit.
Feedback: Oxygen content above limit.
Common errors and problem detection:
 Risk of explosion if oxygen falls below limit (may indicate unburned fuel).

A variant of Hierarchical Task Analysis has been used to examine several cognitive activities, such as goal-setting and planning, as well as failures due to slips and mistakes. Variations in human performance were examined in terms of how teams in different shifts would perform the same task and how the same team would respond to changes in circumstances. A study by Kontogiannis and Embrey (1997) used this technique to summarize findings from on-line observations of performance, interviews with process operators about their work methods, near-miss reviews and critical incident analysis. The task analysis in Table 2 provided valuable input for revising the start-up operating procedures: the sequence of operations was re-organized, contingency steps were included for variations in circumstances, check-boxes were inserted for tracking executed steps, and warnings and cautions were added to prevent human errors or help in their detection and recovery. In addition, the task analysis in Table 2 was used to redesign the computer-based process displays so that all information required for the same task could be grouped and presented on the same screen. For instance, the oxygen content in flue gases is an indicator of the efficiency of combustion (see the last row in Table 2) and should be related to the flow-rates of air and fuel; this implies that these parameters are functionally related in achieving the required furnace temperature and should, therefore, be presented on the same computer screen. The analysis of cognitive tasks, therefore, may provide input to several forms of human factors interventions, including control panel design, revision of operating procedures, and development of job aids and training.
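The tabular structure used in this variant of HTA lends itself to a simple machine-readable form. The sketch below is illustrative only; the field names and condition labels are our own and are not part of Shepherd's notation. It models one row of Table 2 and checks whether a step's antecedent conditions hold before the step is executed:

```python
from dataclasses import dataclass, field

@dataclass
class TaskStep:
    """One row of an HTA table: goal, method, conditions, feedback, errors."""
    goal: str
    work_method: str
    antecedent_conditions: list
    expected_feedback: str
    common_errors: list = field(default_factory=list)

def ready_to_execute(step, plant_state):
    """A step may start only when all of its antecedent conditions hold.
    Conditions absent from the plant state are treated as not satisfied."""
    return all(plant_state.get(cond, False) for cond in step.antecedent_conditions)

# Row 2.1 of Table 2, with invented condition labels
light_burner = TaskStep(
    goal="2.1 Light selected burner",
    work_method="Remove slip-plate, open burner valve and ignite burner",
    antecedent_conditions=["furnace_purged", "flame_not_extinguished_twice"],
    expected_feedback='"Healthy" flame on selected burner',
    common_errors=["Forgets to light and insert torch before removing slip-plate"],
)

state = {"furnace_purged": True, "flame_not_extinguished_twice": True}
print(ready_to_execute(light_burner, state))  # True
```

Such a representation makes it possible, for example, to flag procedure steps whose antecedent conditions have not yet been confirmed.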

4.2.2. Critical Decision Method

This section reports on the use of the Critical Decision Method or CDM (Klein et al. 1989), a retrospective cognitive task analysis based on cognitive interviews, for eliciting expert knowledge, decision strategies, the cues attended to, and potential errors. Applications of the CDM technique can be found in fireground command, tactical decision making in naval systems, ambulance emergency planning, and incident control in the offshore oil industry. The technique relies on Subject Matter Experts (SMEs) recalling a particularly memorable incident they experienced in the course of their work. The sequence of events and actions is organized on a timeline, which can be re-arranged as the SMEs remember other details of the incident. The next stage is to probe the SMEs to elicit more information about each major decision point. Cognitive probes address the cues attended to, the knowledge needed to make a decision, the way in which the information was presented, the assessment made of the situation, the options considered and, finally, the basis for the final choice. The third stage of the CDM technique involves comparisons between experts and novices. The participants or SMEs are asked to comment on the expected performance of a less experienced person faced with the same situation. This is usually done in order to identify possible errors that less experienced personnel might make, and potential recovery routes through better training, operating procedures, and interface design. The results of the cognitive task analysis can be represented in a single format by means of a decision analysis table. One of the most important aspects of applying the CDM technique is selecting appropriate incidents for further analysis. The incidents should refer to complex events which challenged ordinary operating practices, regardless of the severity of the incident caused by these practices.
It is also important that a "no blame" culture exists in the organization so that participants are not constrained in their description of events and unsuccessful actions. To illustrate the use of the CDM technique, the response of the operating crew during the first half hour of the Ginna nuclear power incident is examined below, as reported in Woods (1982) and INPO (1982). The operating crew at the Ginna nuclear plant encountered a major emergency, in January 1982, due to a tube rupture in a steam generator; as a result, radioactive coolant leaked into the steam generator and subsequently to the atmosphere. In a pressurized water reactor, such as Ginna, water coolant is used to carry heat from the reactor to the steam generator (the primary loop); a secondary water loop passes through the steam generator, and the steam produced drives the turbine that generates electricity. A water leak from the primary to the secondary loop can be a potential risk if it is not isolated in time. In fact, the delayed isolation of the faulted or leaking steam generator was one of the contributory factors in the evolution of the incident. Woods (1982) used a variant of the CDM technique to analyze the major decisions of the operating crew during the Ginna incident. Figure 7 shows a timeline of events and human actions plotted against a simplified version of Rasmussen's SRK model. Detection refers to the collection of data from control room instruments and the recognition of a familiar pattern; for instance, the initial plant symptoms led to an initial recognition of the problem as a steam generator tube rupture (an SGTR event). However, alternative events may be equally plausible, which requires further processing of information at the next stage. Interpretation then involves identifying other plausible explanations of the problem, predicting the criticality of the situation, and exploring options for intervention. The formulation of specific actions for implementing a plan, and the execution of those actions, are carried out at the following stage of control. It is worth noting that the large number of actions at the control stage provides a picture of the conditions of work that may affect overall performance; for instance, high workload at this stage could prevent the crew from detecting emerging events. The feedback stage is similar to the detection stage, since both are based on the collection of data; however, in the detection stage observation follows an alarm, while in the feedback stage observation is a follow-up to an action.
The decision flow chart in Figure 7 has been developed on the basis of cognitive interviews with the operators involved and investigations of the operating procedures in use.

[Figure 7: Analysis of the response of the crew during the first half hour of the Ginna incident (an example of the Critical Decision Method). The flow chart arranges five numbered columns of events and actions against the four stages Detection, Interpretation, Control and Feedback. It traces the initial symptoms (low system pressure; steam/feedwater flow mismatch in the "B" SG; "B" SG level deviation; radiation in the air ejector), the general diagnosis of an SGTR event, the search for further evidence to identify the faulty SG (stopping AFW flow to the "B" SG and sending an operator to measure radiation levels), confirmation and further isolation of the "B" SG, and RCS cooldown via the CSDV with closure of the MSIV, with elapsed times running from 0:01 to 0:21.]

To understand the cognitive processes involved at critical decision points, analysts should employ several cognitive probes similar to those shown in Table 1. It is worth investigating, for instance, the decision of the crew to seek further evidence to identify the tube rupture in one of the two steam generators (column 3, Figure 7). At this point in time, the crew was aware that the level in the "B" Steam Generator was increasing more rapidly than the level in the "A" Steam Generator (i.e., due to the tube rupture), with auxiliary feedwater flows established to both Steam Generators. However, the crew needed additional evidence to conclude their fault diagnosis. Was this delay in interpretation caused by insufficient plant knowledge? Did the crew interpret the consequences of the problem correctly? Were the instructions in the procedures clear? And, in general, what factors influenced this behavior of the crew? These are some of the cognitive probes needed to help analysts explore the way the crew interpreted the problem. As it turned out, the crew stopped the auxiliary feedwater flow to the "B" Steam Generator and reached a conclusion when they observed that its level was still increasing with no input; in addition, an auxiliary operator was sent to measure radioactivity in the suspect steam generator (column 3, Figure 7). Nevertheless, the crew isolated the "B" Steam Generator before the operator reported back on the levels of radioactivity. The cognitive interviews established that a plausible explanation for the delayed diagnosis was the high cost of misinterpretation. Isolation of the wrong steam generator (i.e., by closing the main steam isolation valve, MSIV) would entail a delay in re-opening the valve in order to re-pressurize and re-establish that steam generator as functional.
The crew also considered a worse scenario in which the MSIV would stick in the closed position, depriving the crew of an efficient mode of cooldown (i.e., dumping steam directly to the condensers); in the worst-case scenario, the crew would have to operate the atmospheric steam dump valves (ASDV), which had a smaller cooling capacity and an increased risk of radiological release. Other contextual factors that should be investigated include the presentation of instructions in the procedures and the goal priorities established in training regimes. Notes calling for prompt isolation of the faulty steam generator were buried in a series of notes placed in a separate location from the instructions on how to identify the faulty steam generator. In addition, the high cost of making an incorrect action (an error of commission), as compared to an error of omission, may have contributed to this delay. In other words, experience with harsh treatment of mistaken actions on the part of the organization may have prompted the crew to wait for redundant evidence. Nevertheless, the eight-minute delay in diagnosis was somewhat excessive and reduced the available time window for controlling the rising water level in the "B" Steam Generator. As a result, the "B" Steam Generator overflowed and contaminated water passed through the safety valves; subsequent actions (e.g., blocking the ASDV, column 5) due to inappropriate wording of procedural instructions also contributed to this release. The cognitive analysis of the diagnostic activity can provide valuable insights into the strategies employed by experienced personnel and the contextual factors that led to delays and errors. It should be emphasized that the crew performed well under the stressful conditions of the emergency (i.e., a safety-critical event, inadequate procedures, distractions from the arrival of extra staff, etc.), given that they had been on duty for only an hour and a half.
The analysis has also revealed some important aspects of how people make decisions under stress. When events have high consequences, experienced personnel take into account the cost of misinterpretation and think ahead of possible contingencies (e.g., equipment stuck in the closed position depriving them of a cooling function). Intermingling fault interpretation and contingency planning can be an efficient strategy under stress, provided that people know how to switch between them. In fact, cognitive analysis of incidents has been used as a basis for developing training programs for the nuclear power industry (Kontogiannis 1996). The Critical Decision Method has also been used to elicit the cognitive activities and problem-solving strategies required in ambulance emergency planning (O'Hara et al. 1998) and fire-fighting command (Klein et al. 1986). Taxonomies of operator strategies for performing cognitive tasks in emergencies have emerged, including: maintaining situation assessment, matching resources to situation demands, planning ahead, balancing workload between crew members, keeping track of unsuccessful or interrupted tasks, and revising plans in the light of new evidence. More importantly, these strategies can be related to specific cognitive tasks, and criteria can be derived for switching between alternative strategies when the context of the situation changes. It is conceivable that these problem-solving strategies can become the basis for developing cognitive aids, as well as training courses for maintaining them under psychological stress.
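To make the outputs of a CDM session concrete, the sketch below shows one hypothetical way of recording a decision point and generating cognitive probes for it. The record schema and probe wording are illustrative inventions, not a standard format from Klein et al. (1989):

```python
# Hypothetical CDM record for the decision point discussed above;
# the field names are our own, not a standard CDM schema.
decision_point = {
    "decision": "Delay isolation of the suspect steam generator",
    "cues": ['"B" SG level rising faster than "A" SG level',
             '"B" SG level still rising with no AFW flow'],
    "options": ["Isolate B SG immediately", "Stop AFW to B SG and observe"],
    "basis_for_choice": "High cost of isolating the wrong steam generator",
    "expert_novice_contrast": "A novice might isolate on the first symptom",
}

def probes_for(point):
    """Generate the cognitive probes an analyst would ask about this point."""
    return [
        f"What cues led you to decide to '{point['decision']}'?",
        f"Which of the options {point['options']} did you weigh, and why?",
        "How would a less experienced colleague have responded here?",
    ]

for question in probes_for(decision_point):
    print(question)
```

Organizing decision points in this way makes it straightforward to assemble the decision analysis table mentioned above from a set of probed interview records.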

5. DESIGNING COGNITIVE AIDS FOR COMPLEX COGNITIVE TASKS

Recent developments in information technology and system automation have provided challenging opportunities for developing computerized artifacts that support cognitive processes such as diagnosis, decision-making and planning. Enthusiasm for the capabilities of new technology, however, has resulted in an over-reliance on its merits and in the development of stand-alone systems intended to undertake fault diagnosis, decision making and planning on their own. Recent reviews of such independent computer consultants (e.g., Roth et al. 1997) have found them difficult to use and "brittle" in the face of novel circumstances. Woods et al. (1990) argued that many of these systems cannot anticipate operator responses, provide unsatisfactory accounts of their own goals, and cannot re-direct their line of reasoning when they have misperceived the nature of the problem. To make cognitive aids more "intelligent" and "co-operative", it is necessary to examine the nature of human cognition in complex worlds and its coupling with the wider context of work (Marmaras & Pavard 2000). Because of the increasing interest in developing cognitive aids for complex tasks, this section presents two case studies that address this issue in an applied context.

5.1. The Case of a Cognitive Aid for CNC Lathe Programming

The scope of this study was to design a cognitive aid for CNC lathe programming (Marmaras et al. 1997) by adopting a problem-driven approach. This approach combines an analysis of the task in terms of constraints and cognitive demands with an analysis of the strategies users adopt to cope with the problem (Marmaras et al. 1992; Woods and Hollnagel 1987). A cognitive task analysis was undertaken first in order to identify the cognitive demands of the task and investigate likely problem-solving strategies leading to optimal and sub-optimal solutions. On the basis of the cognitive task analysis and the results of a follow-up experiment, an information technology cognitive aid was developed for CNC lathe programming. A prototype of the cognitive aid was evaluated in a second experiment.

Cognitive task analysis

Programming a CNC lathe requires the development of a program that guides the lathe in transforming a simple cylindrical part into a complex shape. This cognitive task requires planning and codification, and it is very demanding due to several interdependent constraints, competing criteria for task efficiency, delayed feedback, and the lack of memory aids for maintaining a mental image of the manufacturing process. A cognitive analysis of the activities and strategies was conducted on the basis of real-time observations of experienced operators programming their CNC lathes and off-the-job interviews. The analysis identified three main cognitive tasks:
1. Planning the whole manufacturing process, which entails deciding upon:
− the order of cutting the several elements of the complex shape;
− the cutting tool to be used at each stage of the cutting process;
− the movement of the tool at each stage of the cutting process (e.g., starting and ending points, type of movement, and speed);
− the number of iterative movements of the tool at each stage;
− the speed of rotation of the part.
2. Codification of the manufacturing process into the programming language of the CNC machine. The operator has to consider the vocabulary of this language for designating the different objects and functions of the manufacturing process, as well as the grammar and syntax of the language.
3. Introduction of the program to the machine by using various editing facilities and starting the execution of the program. It is worth noting that, at this stage, certain omission and syntax errors can be recognized by the computer logic and the operator may be called upon to correct them.
Table 3 summarizes the results of the cognitive analysis for the planning task.
It is important to note a number of constraints that experienced operators take into account when deciding upon the most suitable strategy for manufacturing the part. These include:
− the shape of the part and the constraints imposed by its material;
− the constraints of the machine tool, e.g., the rotating movement of the part, the area and movement possibilities of the cutting tool, its dimensions, its cutting capabilities, and so forth;
− the product quality constraints derived from the specification; these are affected by the speeds of the part and tools, the type of tools and the designation of their movements;
− the manufacturing time, which is affected by the number of tool changes, the movements the tools make without cutting (the "idle time" of the tool) and the speeds of the part and cutting tools;
− safety considerations, such as avoiding breaking or destroying the part or the tools; safety rules concern the area of tool movement and the speeds of the part and cutting tools.


Table 3: A decision table for the task of planning how to cut a part in a CNC lathe

Task: Planning the manufacturing process.
Decision choices: order of cutting; cutting tools at each phase; movement of tools; iterative tool movements; rotation speed of the part.
Constraints: part shape and material; tool constraints; product quality; manufacturing time.
Sources of difficulty: interdependent constraints; competing criteria for task efficiency; lack of real-time feedback; lack of aids for maintaining a mental image of the manufacturing process.
Strategies: serial strategy; optimization strategy.

An analysis of how experienced operators decide to use the CNC lathe in cutting parts into complex shapes revealed a variety of problem-solving strategies, ranging from serial to optimized strategies. The simplest strategy, at one end of the spectrum, involved cutting each differently shaped component of the part separately and in a serial order (e.g., from right to left). A number of heuristics lead to such a serial strategy, including:
− decomposition of the part into its components (problem decomposition);
− specification and prioritization of several criteria affecting task efficiency (criteria prioritization);
− simplification of the mental image of the manufacturing process (mental representation);
− use of a unique frame of reference for the part to be manufactured (frames-of-reference minimization).
At the other end of the spectrum, a more complex strategy was adopted that relied on optimizing a number of criteria for task efficiency, such as product quality, manufacturing time and safety considerations. A case in point is a cutting strategy that integrates more than one shape into a single tool movement, which alters the serial order of cutting the different shapes of the part. This strategy decreases the time-consuming changes of tools and the "idle time", thus minimizing the manufacturing time. Optimized cutting strategies require complex problem-solving processes, which may include:
− adopting a holistic view of the part to be manufactured;
− continuously considering all the criteria of task efficiency;
− using a dynamic mental image of the cutting process that includes all intermediate phases of the whole part;
− adopting two frames of reference, one for the part to be manufactured and one for the cutting tools, in order to optimize their use.
Two hypotheses were formulated with respect to the observed differences in the performance of CNC lathe programmers: (a) very good programmers would spend more time on problem formulation, before proceeding to the specification of the program, than good programmers; and (b) very good programmers would adopt an optimized cutting strategy while good programmers would settle for a serial strategy. Consequently, very good programmers would adopt a more complex problem-solving process than good programmers. To test these hypotheses, an experiment was designed that would also provide valuable input to the cognitive task analysis used in this study. Details of the design of the experiment and the results obtained can be found in Marmaras et al. (1997).

User requirements for a cognitive aid supporting CNC lathe programming

The cognitive task analysis of programming the CNC lathe provided useful insights for the design of an information technology system that supports CNC programmers. The scope of this cognitive aid was twofold: on the one hand, the aid should guide programmers in using efficient cutting strategies and, on the other, it should alleviate the cognitive demands that are often associated with optimized problem-solving strategies. Specifically, the cognitive aid could have the following features:
− support users in deciding upon a cutting strategy at the front-end stages of the programming process;
− generate suggestions about efficient strategies, e.g., "try to integrate as many different shapes as possible into one movement of the cutting tool" and "avoid many changes of the cutting tool";
− support human memory throughout the whole programming process;
− provide real-time feedback by showing programmers the effects of their intermediate decisions through real-time simulation;
− facilitate the standard actions of the whole programming process.
A description of the prototype cognitive aid that was developed in accordance with these user requirements can be found in Marmaras et al. (1997).
A second experiment showed that, when operators used the proposed cognitive aid, the quality of the CNC lathe programs improved and the time required to program the lathe decreased by approximately 28%. It is worth noting that the performance of not only the good but also the very good CNC lathe programmers improved.
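The trade-off that separates serial from optimized cutting strategies can be illustrated with a rough cost model. All durations and counts below are invented for illustration; only the structure of the trade-off (tool changes and idle movements add overhead to a fixed cutting time) comes from the analysis above:

```python
def manufacturing_time(tool_changes, idle_moves, cutting_time,
                       change_cost=60.0, idle_cost=5.0):
    """Rough, illustrative cost model (all durations in seconds, assumed):
    every tool change and every non-cutting ("idle") tool movement adds
    overhead to the fixed cutting time."""
    return tool_changes * change_cost + idle_moves * idle_cost + cutting_time

# A serial plan cuts each shape separately: more tool changes, more idle moves.
serial = manufacturing_time(tool_changes=6, idle_moves=12, cutting_time=900)
# An optimized plan integrates several shapes into one tool movement.
optimized = manufacturing_time(tool_changes=3, idle_moves=5, cutting_time=900)

print(serial)     # 1320.0
print(optimized)  # 1105.0
```

The model makes explicit why the optimized strategy wins: the cutting time itself is unchanged, but the overhead terms shrink.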

5.2. The Case of a Cognitive Aid for Managerial Planning

This section presents a study of user requirements in the design of an information technology system that supports high-level managerial planning tasks in small-to-medium size enterprises (Laios et al. 1992; Marmaras et al. 1992). High-level managerial planning tasks, usually referred to as strategic or corporate planning, involve a series of cognitive processes preceding or leading to crucial decisions about the future course of a company. Managerial planning entails: (i) strategies, or high-level decisions that have a long-lasting influence on the enterprise; (ii) tactics, or lower-level strategies regarding policies and actions; and (iii) action plans, which specify detailed courses of action for the future. In large companies, strategic planning is an iterative, lengthy and highly formalised process involving many managers at several levels of the organisation. In contrast, in small companies the manager-owner is the sole and most important actor and source of information on planning issues; in this case, planning is an individual cognitive process without the formalism required in larger organisations.


The elements which affect planning decisions are the past, present and expected future states of the external environment of the firm, which have a complex relationship with its internal environment. Managers may perceive these elements as threats or opportunities in the market place (external environment), and as strengths or weaknesses of the enterprise (internal environment). Effective planning decisions should neutralise the threats of the external environment and exploit its opportunities, taking into account the strengths and weaknesses of the company. At the same time, effective planning decisions should increase the strengths of the company and decrease its weaknesses. A literature review and three case studies were undertaken in order to identify the main elements of the external and internal environments that may constrain managerial planning. The main constraints identified were:
− Complexity/multiplicity of factors: The external and internal environments of an enterprise comprise a large number of interacting factors. Market demand, intensity of competition, the socio-economic situation, the labour market and technology are examples of external environment factors. Product quality, process technology, distribution channels and financial position are examples of internal environment factors.
− Change/unpredictability: The world in which firms operate is constantly changing. Very often these changes are difficult to predict with regard to their timing, impact and size of effect. Managers therefore face a great degree of uncertainty.
− Limited knowledge of the final impact of planning decisions and actions: For example, what will be the sales increase resulting from an advertising campaign costing X, and what from a product design improvement costing Y?
− Interrelation between goals and decisions: Planning decisions made in order to achieve a concrete goal may undermine – if only temporarily – some other goals. For example, renewal of production equipment aimed at increasing product quality may undermine the goal of increased profits for some years.
− Risk related to planning decisions: The potential outcomes of planning decisions are crucial to the future of the enterprise. Poor decisions may put the enterprise in jeopardy, often leading to significant financial losses.
A valuable input to the cognitive analysis was a series of structured interviews with managers from 60 small-to-medium size enterprises, conducted in order to collect taxonomic data about typical planning tasks in various enterprises, environmental and internal factors, and personality characteristics. In other words, the aim of these interviews was to identify the different situations in which managerial planning takes place. The "scenario method" was used to conduct a cognitive analysis of managerial planning because of the difficulties involved in direct observation of real-life situations. The scenario presented a hypothetical firm, its position in the market, and a set of predictions regarding two external factors: an increase in competition for two of the three types of products, and a small general increase in market demand. The description of the firm and its external environment was quite general but realistic, so that managers could create associations with their own work environment and, therefore, rely on their own experience and competencies. For example, the scenario specified neither the products manufactured by the hypothetical firm, nor the production methods and information systems used, nor the tactics followed by the firm. The knowledge elicited from the three case studies and the structured interviews was used to construct realistic problem scenarios.
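The logic of matching planning decisions against external threats and opportunities and internal strengths and weaknesses can be sketched as a simple screening rule. The decision record and the rule itself are illustrative inventions, not a model used in the study:

```python
# Illustrative planning decision (all entries hypothetical)
decision = {
    "name": "Renew production equipment",
    "exploits": ["rising demand for higher-quality products"],   # opportunities
    "neutralises": ["competitors' quality advantage"],           # threats
    "strengthens": ["product quality"],
    "weakens": ["short-term profitability"],
}

def screen(decision):
    """Invented screening rule: an effective planning decision should address
    at least one external element (threat or opportunity) and should not
    weaken more internal elements than it strengthens."""
    addresses_external = bool(decision["exploits"] or decision["neutralises"])
    internal_balance = len(decision["strengthens"]) >= len(decision["weakens"])
    return addresses_external and internal_balance

print(screen(decision))  # True
```

Even this toy rule captures the tension noted above: a decision can exploit an opportunity while temporarily undermining another goal, so external fit and internal balance must be weighed together.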


Scenario sessions with 21 practising managers were organised in the following way. After a brief explanation of the scope of the experiment, managers were asked to read a two-page description of the scenario. Managers were then asked to suggest what actions they would take if they owned the hypothetical firm, and to specify the sequence in which they would perform them. Additional information about the scenario (e.g., financial data, analysis of profits and losses, production and operating costs) was provided in tables, in case managers wished to use it. Managers were asked to think aloud and were allowed to take notes and make as many calculations as needed. The tape recordings of the scenario sessions were transcribed and the verbal protocols were analysed using a 10-category codification scheme (see Table 4). The coded protocols indicated the succession of decision-making stages and the semantic content of each stage – e.g., the sort of information acquired, the specific goals to be attained, or the specific tactic chosen. Details of the verbal protocol analysis can be found in Marmaras et al. (1992). There follows a brief presentation of the main conclusions drawn from the cognitive analysis, with particular emphasis on the cognitive strategies and heuristics used by the managers in planning.

Table 4: A 10-category scheme for interpreting the verbal protocols of managers

Code  Description
C1    Information acquisition from the data sheets of the scenario or from the experimenter
C2    Statement of the tactic followed by the manager (e.g., “I will reduce the production of product C and I will focus on product A…”)
C3    Reference to threats or opportunities, strengths and weaknesses of the firm (e.g., “… with these tactics, I will neutralize the competitors’ penetration in the market…”)
C4    Reference to conditions necessary for implementing a tactic (e.g., “I will apply tactic X only if tactic Y does not bring about the expected results…”)
C5    Reference to the time a tactic would be put into effect and the duration of its implementation (e.g., “I will reduce the product price immediately…”)
C6    Reference to goals (e.g., “… with this tactic I expect sales to grow…”)
C7    Reference to data and calculations in order to form a better picture of the present planning situation
C8    Explicit reference to a generic strategy (e.g., “I would never reduce the prices of the product…”)
C9    Justification of the selected strategy
C10   Calculations for quantitative evaluation of selected tactics
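As an illustration of how such a coding scheme can be exploited, the sketch below (hypothetical, not from the original study) tags a manager's protocol with the Table 4 categories and tallies transitions between decision-making stages — the kind of analysis that yields stage sequences such as C1C2 or C1C7C3C2:

```python
from collections import Counter

# Hypothetical coded protocol of one manager's session: each element is a
# Table 4 category, in the order the corresponding statements occurred.
protocol = ["C1", "C7", "C3", "C2", "C6", "C1", "C3", "C2", "C5"]

def transition_counts(codes):
    """Count how often each pair of successive codes occurs."""
    return Counter(zip(codes, codes[1:]))

def stage_sequences(codes, terminal="C2"):
    """Split the protocol into decision episodes, each ending with a
    tactic statement (C2), mirroring sequences such as C1C7C3C2."""
    episodes, current = [], []
    for code in codes:
        current.append(code)
        if code == terminal:
            episodes.append("".join(current))
            current = []
    if current:                      # trailing statements after the last tactic
        episodes.append("".join(current))
    return episodes

print(stage_sequences(protocol))               # ['C1C7C3C2', 'C6C1C3C2', 'C5']
print(transition_counts(protocol).most_common(3))
```

Aggregating such transition counts over all 21 sessions would show which stage successions dominate — for instance, how often a tactic statement (C2) follows directly from information acquisition (C1) without intervening evaluation.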

Limited generation of alternative tactics. Most managers did not generate at the outset an extensive set of alternative tactics from which to select the most appropriate for the situation. Instead, as soon as they had acquired some information and formed a representation of the situation, they defined a rather limited set of tactics that they would apply immediately (e.g., C1C2, C1C7C3C2, C1C3C2, C1C3C1C2). This finding can be attributed to the fact that experienced managers possess an extensive repertoire of experiences accessed through recognition rather than conscious search. Optimisation strategies are time-consuming and difficult to sustain under the constraints of the work environment and the limited knowledge regarding the impact of decisions. In addition, the high risks associated with different decisions probably push managers to adopt already tested tactics. Limited generation of alternative tactics, however, may lead to ineffective practices. For instance, a past solution may be inappropriate if the current situation differs in subtle ways from those experienced in the past. In managerial planning, new innovative solutions and radical departures may be of great importance.

Acting/thinking cycles. After deciding upon a first set of actions, managers would wait for immediate feedback before proceeding to corrective actions, or before applying other tactics. This behavior compensates, to some extent, for the potential negative consequences of the previous strategy. That is, early decisions may be based on past experiences, but their suitability can be evaluated later on, when feedback becomes available; managers may then undertake corrective actions. However, these cycles of acting/thinking behavior may lead to delays in acting, increased costs, or financial losses.

Limited use of information and analysis. Most managers avoided speculating on the available predictive quantitative account data and did not make any projections in their evaluation of selected tactics. Instead, they quickly came up with ideas about what to do, stating that they would base their evaluation and future actions on specific outcomes of their initial actions. Decisions were justified by reference to environmental factors (C2C3, C3C2), other super-ordinate goals (C2C6, C6C2, C8C2, C2C8) and feedback from previous actions.
This behavior may be attributed to the constraints of the environment, the uncertainty in making predictions, and the interleaving goals, which may render a quantitative evaluation of tactics difficult or even impossible. A possible negative consequence is that data important for planning decisions may be neglected during the planning process.

Lack of quantitative goals and evaluation. The results of the analysis suggested that, in general, managers do not set themselves quantitative goals that would influence the selection and evaluation of their actions. Instead, their planning decisions seem to be based mainly on assessments of external and internal environment factors and on certain generic goals related to past experiences. This observation provides evidence supporting the criticism raised by several authors (e.g., Lindblom 1980) with respect to the relevance of “goal-led” models; it also suggests that cognitive aids should not be based on optimisation models of managerial planning.

Drawing on the results of the cognitive analysis, an information technology system supporting managerial planning in small-to-medium size enterprises was specified and designed. The system took the form of an active framework that supports cognitive processes in the assessment of environmental factors, the generation and selection of strategies and tactics, and their evaluation. At the same time, the support system would limit some negative consequences of the managers’ cognitive strategies and heuristics. By making use of appropriate screens and functions, the proposed cognitive aid should provide support in the following ways:

− Assist managers to consider additional data in the planning process and to identify similarities and differences with other experiences in the past. This could be done by presenting menus of external and internal factors in the planning environment and by inviting managers to determine future states or evolutions of system states.

− Enrich the repertoire of planning decisions by presenting menus of candidate options and tactics relevant to the current situation.

− Support the acting/thinking process by providing crucial information concerning the state of external and internal environment factors at the successive acting/thinking cycles, and the different planning decisions made during these cycles.

The planning process proposed by the cognitive aid is achieved through a series of steps. At first, an assessment is made of the past, present and future state of environmental factors; this is in contrast to the setting of quantitative goals proposed by formal models of strategic planning. The next step involves the specification of alternative tactics and their evaluation using qualitative and, optionally, quantitative criteria. Additional steps regarding the setting of generic strategies and goals have been included in between. However, some degrees of freedom are allowed in the choice and sequence of these steps. Although this procedure imposes a certain formalism on managerial planning (inevitable when using an information technology system), sufficient compatibility has been achieved with the cognitive processes entailed in managerial planning. A prototype of the proposed cognitive aid was evaluated by a number of managers of small-to-medium size enterprises, and its functionality and usability were perceived to be of high quality.
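The stepwise but flexible structure described above can be sketched as follows (a hypothetical illustration with invented step names; the actual prototype's design is not reproduced here). The key design choice is that steps have a suggested default order, yet the user may visit or revisit them in any sequence:

```python
# Hypothetical sketch of the planning framework's step structure. Steps are
# suggested in a default order, but the user keeps degrees of freedom to
# visit or revisit any step; every entry is recorded for later review at
# successive acting/thinking cycles.
DEFAULT_ORDER = [
    "assess_environment",    # past/present/future state of external & internal factors
    "set_generic_strategy",  # generic strategies and goals (inserted in between)
    "specify_tactics",       # candidate tactics relevant to the situation
    "evaluate_tactics",      # qualitative and, optionally, quantitative criteria
]

class PlanningSession:
    def __init__(self):
        self.records = {}    # step name -> list of user entries
        self.history = []    # order in which steps were actually visited

    def visit(self, step, entry):
        """Record an entry under a step; revisiting a step is allowed."""
        if step not in DEFAULT_ORDER:
            raise ValueError(f"unknown step: {step}")
        self.records.setdefault(step, []).append(entry)
        self.history.append(step)

    def pending(self):
        """Steps not yet visited, shown in the suggested default order."""
        return [s for s in DEFAULT_ORDER if s not in self.records]

session = PlanningSession()
session.visit("assess_environment", "competitor entering the market")
session.visit("specify_tactics", "focus on product A, reduce product C")
session.visit("assess_environment", "operating costs rising")  # revisiting allowed
print(session.pending())  # ['set_generic_strategy', 'evaluate_tactics']
```

Tracking `history` separately from `records` mirrors the acting/thinking cycles observed in the study: the aid can replay which assessments and decisions were made at each cycle without forcing the manager through a rigid sequence.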

6. CONCLUDING REMARKS

Advances in information technology and automation have changed the nature of many jobs, on the one hand by placing particular emphasis on cognitive tasks and, on the other, by increasing their demands, e.g., coping with more complex systems, a higher volume of work, and operating closer to safety limits. As a result, successful ergonomic interventions should consider the interactions in the triad of “users – tools or artifacts – work environment”. These interactions are affected by the users’ strategies, the constraints of the work environment, and the affordances and limitations of the work tools. In this respect, techniques of cognitive task analysis are valuable because they explore how users’ cognitive strategies are shaped by their experience and by their interaction with the work environment and the tools. This explains why this chapter has devoted a large part to illustrating how cognitive task analysis can be used in the context of applied work situations.

Cognitive analysis provides a framework for considering many types of ergonomic interventions in modern jobs, including the design of man-machine interfaces, cognitive tools and training programs. However, cognitive analysis requires a good grasp of models of human cognition in order to understand the mechanisms of cognition and to develop techniques for eliciting them in work scenarios or real-life situations. The cognitive probes and the Critical Decision Method, in particular, demonstrate how models of human cognition can provide direct input to cognitive task analysis. Further advances in cognitive psychology, cognitive science, organizational psychology and ethnography should provide more insights into how experienced personnel adapt their strategies when work demands change, and how they detect and correct errors before critical consequences ensue.

The chapter has also put emphasis on the application of cognitive analysis to the design of information technology cognitive aids for complex tasks.
The last two case studies have shown that cognitive aids should not be seen as independent entities, capable of their own reasoning, but as agents interacting with human users and with work environments. Since unforeseen situations beyond the capabilities of cognitive aids are bound to arise, human operators should be able to take over. This requires humans to be kept in the loop and to develop an overall mental picture of the situation before taking over. In other words, user-aid interactions should be at the heart of system design. Recent advances in technology may increase the “conceptual power” of computers so that users and cognitive aids can learn from each other’s strategies.

7. REFERENCES

Allport, D.A. (1980). Attention. In New Directions in Cognitive Psychology, G.L. Claxton (Ed.), Routledge & Kegan Paul, London.

Card, S.K., Moran, T.P. and Newell, A. (1986). The Model Human Processor. In Handbook of Perception and Human Performance, K.R. Boff, L. Kaufman and J.P. Thomas (Eds.), John Wiley, New York, Vol. II, pp. 45-1–45-35.

Clark, H.H. and Chase, W.G. (1972). On the Process of Comparing Sentences Against Pictures. Cognitive Psychology, 3, pp. 472-517.

Davis, L. and Wacker, G. (1987). Job Design. In Handbook of Human Factors, G. Salvendy (Ed.), John Wiley, New York, pp. 431-452.

DeKeyser, V. and Woods, D.D. (1990). Fixation Errors: Failures to Revise Situation Assessment in Dynamic and Risky Systems. In Systems Reliability Assessment, A.G. Colombo and A. Saiz de Bustamante (Eds.), Kluwer Academic Press, Dordrecht, The Netherlands.

Dreyfus, H. (1997). Intuitive, Deliberative and Calculative Models of Expert Performance. In Naturalistic Decision Making, C. Zsambok and G. Klein (Eds.), Lawrence Erlbaum Associates, New Jersey, pp. 17-28.

Drury, C., Paramore, B., Van Cott, H., Grey, S. and Corlett, E. (1987). Task Analysis. In Handbook of Human Factors, G. Salvendy (Ed.), John Wiley, New York, pp. 370-401.

Einhorn, H.J. and Hogarth, R.M. (1981). Behavioral Decision Theory. Annual Review of Psychology, 32, pp. 53-88.

Ericsson, K.A. and Simon, H. (1984). Protocol Analysis. MIT Press, Cambridge, MA.

Gibson, J.J. (1979). The Ecological Approach to Visual Perception. Houghton Mifflin, Boston.

Green, D.M. and Swets, J.A. (1966). Signal Detection Theory and Psychophysics. John Wiley, New York.

Hayes-Roth, B. (1979). A Cognitive Model of Action Planning. Cognitive Science, 3, pp. 275-310.

Helander, M. (1987). Design of Visual Displays. In Handbook of Human Factors, G. Salvendy (Ed.), John Wiley, New York, pp. 507-548.

Hutchins, E., Hollan, J. and Norman, D. (1986). Direct Manipulation Interfaces. In User-Centered System Design, D. Norman and S. Draper (Eds.), Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 87-124.

Hutchins, E. (1990). The Technology of Team Navigation. In Intellectual Teamwork, J. Galegher, R.E. Kraut and C. Egido (Eds.), Lawrence Erlbaum Associates, Hillsdale, NJ.


Hutchins, E. and Klausen, T. (1992). Distributed Cognition in an Airline Cockpit. In Communication and Cognition at Work, D. Middleton and Y. Engeström (Eds.), Cambridge University Press, Cambridge.

INPO (1982). Analysis of Steam Generator Tube Rupture Events at Oconee and Ginna. Report 83-030, Institute of Nuclear Power Operations, Atlanta, GA.

Johnson, E.M., Cavanagh, R.C., Spooner, R.L. and Samet, M.G. (1973). Utilization of Reliability Measurements in Bayesian Inference: Models and Human Performance. IEEE Transactions on Reliability, 22, pp. 176-183.

Kirwan, B. and Ainsworth, L. (1992). A Guide to Task Analysis. Taylor and Francis, London.

Klein, G. (1997). An Overview of Naturalistic Decision Making Applications. In Naturalistic Decision Making, C. Zsambok and G. Klein (Eds.), Lawrence Erlbaum Associates, New Jersey, pp. 49-60.

Klein, G.A., Calderwood, R. and MacGregor, D. (1989). Critical Decision Method for Eliciting Knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19, pp. 462-472.

Klein, G.A., Calderwood, R. and Clinton-Cirocco, A. (1986). Rapid Decision Making on the Fire Ground. In Proceedings of the 30th Annual Meeting of the Human Factors Society, Vol. 1, Santa Monica, CA, pp. 576-580.

Kontogiannis, T. (1996). Stress and Operator Decision Making in Coping with Emergencies. International Journal of Human-Computer Studies, 45, pp. 75-104.

Kontogiannis, T. and Embrey, D. (1997). A User-centred Design Approach for Introducing Computer-Based Process Information Systems. Applied Ergonomics, 28, 2, pp. 109-119.

Laios, L., Marmaras, N. and Giannakourou, M. (1992). Developing Intelligent Decision Support Systems Through User-centred Design. In Methods and Tools in User-centred Design for Information Technology, M. Galer, S. Harker and J. Ziegler (Eds.), Elsevier Science Publishers, Amsterdam, pp. 373-412.

Leplat, J. (1990). Relations Between Task and Activity: Elements for Elaborating a Framework for Error Analysis. Ergonomics, 33, pp. 1389-1402.

Lindblom, C.E. (1980). The Science of “Muddling Through”. In Readings in Managerial Psychology, H. Leavitt, L. Pondy and D. Boje (Eds.), University of Chicago Press, Chicago, pp. 144-160.

Lindsay, R.W. and Staffon, J.D. (1988). A Model Based Display System for the Experimental Breeder Reactor-II. Paper presented at the Joint Meeting of the American Nuclear Society and the European Nuclear Society, Washington, DC.

Lusted, L.B. (1976). Clinical Decision Making. In Decision Making and Medical Care, D. Dombal and J. Grevy (Eds.), Elsevier North Holland, Amsterdam.

Marmaras, N., Lioukas, S. and Laios, L. (1992). Identifying Competences for the Design of Systems Supporting Decision Making Tasks: A Managerial Planning Application. Ergonomics, 35, pp. 1221-1241.

Marmaras, N., Vassilakopoulou, P. and Salvendy, G. (1997). Developing a Cognitive Aid for CNC-Lathe Programming Through a Problem-Driven Approach. International Journal of Cognitive Ergonomics, 1, pp. 267-289.

Marmaras, N. and Pavard, B. (2000). Problem-driven Approach for the Design of Information Technology Systems Supporting Complex Cognitive Tasks. Cognition, Technology and Work (to be published).

McCormick, E.J. and Sanders, M.S. (1987). Human Factors in Engineering and Design. McGraw-Hill, New York.


Mehle, T. (1982). Hypothesis Generation in an Automobile Malfunction Inference Task. Acta Psychologica, 52, pp. 87-116.

Miller, G. (1956). The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information. Psychological Review, 63, pp. 81-97.

Newell, A. and Simon, H. (1972). Human Problem Solving. Prentice-Hall, Englewood Cliffs, NJ.

Norman, D. (1988). The Design of Everyday Things. Doubleday, New York.

O'Hare, D., Wiggins, M., Williams, A. and Wong, W. (1998). Cognitive Task Analysis for Decision Centred Design and Training. Ergonomics, 41, pp. 1698-1718.

Payne, J.W. (1980). Information Processing Theory: Some Concepts and Methods Applied to Decision Research. In Cognitive Processes in Choice and Decision Behavior, T.S. Wallsten (Ed.), Erlbaum, Hillsdale, NJ.

Rasmussen, J. (1981). Models of Mental Strategies in Process Control. In Human Detection and Diagnosis of System Failures, J. Rasmussen and W. Rouse (Eds.), Plenum Press, New York.

Rasmussen, J. (1983). Skills, Rules and Knowledge: Signals, Signs and Symbols, and other Distinctions in Human Performance Models. IEEE Transactions on Systems, Man and Cybernetics, SMC-13, pp. 257-267.

Rasmussen, J. (1985). The Role of Hierarchical Knowledge Representation in Decision Making and System Management. IEEE Transactions on Systems, Man and Cybernetics, SMC-15, pp. 234-243.

Rasmussen, J. (1986). Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. North Holland, Amsterdam.

Rasmussen, J., Pejtersen, A.M. and Goodstein, L. (1994). Cognitive Systems Engineering. John Wiley & Sons, New York.

Reason, J. (1990). Human Error. Cambridge University Press, Cambridge.

Redding, R.E. and Seamster, T.L. (1994). Cognitive Task Analysis in Air Traffic Control and Aviation Crew Training. In Aviation Psychology in Practice, N. Johnston, N. McDonald and R. Fuller (Eds.), Avebury Press, Aldershot, pp. 190-222.

Roth, E.M., Bennett, K.B. and Woods, D.D. (1987). Human Interaction with an “Intelligent” Machine. International Journal of Man-Machine Studies, 27, pp. 479-525.

Roth, E.M., Woods, D.D. and Pople, H.E., Jr. (1992). Cognitive Simulation as a Tool for Cognitive Task Analysis. Ergonomics, 35, pp. 1163-1198.

Roth, E.M., Malin, J.T. and Schreckenghost, D.L. (1997). Paradigms for Intelligent Interface Design. In Handbook of Human-Computer Interaction, M. Helander, T.K. Landauer and P. Prabhu (Eds.), Elsevier Science, Amsterdam, pp. 1177-1201.

Sanderson, P.M., James, J.M. and Seidler, K.S. (1989). SHAPA: An Interactive Software Environment for Protocol Analysis. Ergonomics, 32, pp. 463-470.

Schon, D. (1983). The Reflective Practitioner. Basic Books, New York.

Schustack, M.W. and Sternberg, R.J. (1981). Evaluation of Evidence in Causal Inference. Journal of Experimental Psychology: General, 110, pp. 101-120.

Seamster, T.L., Redding, R.E., Cannon, J.R., Ryder, J.M. and Purcell, J.A. (1993). Cognitive Task Analysis of Expertise in Air Traffic Control. The International Journal of Aviation Psychology, 3, pp. 257-283.

Serfaty, D., Entin, E. and Johnston, J.H. (1998). Team Coordination Training. In Making Decisions Under Stress: Implications for Individual and Team Training, J.A. Cannon-Bowers and E. Salas (Eds.), American Psychological Association, Washington, DC, pp. 221-245.


Shepherd, A. (1989). Analysis and Training of Information Technology Tasks. In Task Analysis for Human-Computer Interaction, D. Diaper (Ed.), Ellis Horwood, Chichester.

Sheridan, T. (1981). Understanding Human Error and Aiding Human Diagnosis Behavior in Nuclear Power Plants. In Human Detection and Diagnosis of System Failures, J. Rasmussen and W. Rouse (Eds.), Plenum Press, New York.

Shneiderman, B. (1982). The Future of Interactive Systems and the Emergence of Direct Manipulation. Behaviour and Information Technology, 1, pp. 237-256.

Shneiderman, B. (1983). Direct Manipulation: A Step Beyond Programming Languages. IEEE Computer, 16, pp. 57-69.

Simon, H. (1978). Rationality as Process and Product of Thought. American Economic Review, 68, pp. 1-16.

Tversky, A. and Kahneman, D. (1974). Judgement Under Uncertainty: Heuristics and Biases. Science, 185, pp. 1124-1131.

Vicente, K. and Rasmussen, J. (1992). Ecological Interface Design: Theoretical Foundations. IEEE Transactions on Systems, Man and Cybernetics, SMC-22, pp. 589-606.

Weick, K.E. (1983). Managerial Thought in the Context of Action. In The Executive Mind, S. Srivastva (Ed.), Jossey-Bass, San Francisco, CA, pp. 221-242.

Wickens, C. (1992). Engineering Psychology and Human Performance. Harper Collins Publishers, New York.

Woods, D.D. (1982). Operator Decision Behavior During the Steam Generator Tube Rupture at the Ginna Nuclear Power Station. Research Report 82-1057-CONRM-R2, Westinghouse Research and Development Centre, Pittsburgh, PA.

Woods, D. and Hollnagel, E. (1987). Mapping Cognitive Demands in Complex and Dynamic Worlds. International Journal of Man-Machine Studies, 26, pp. 257-275.

Woods, D.D., Roth, E.M. and Bennett, K.B. (1990). Explorations in Joint Human-Machine Cognitive Systems. In Cognition, Computing, and Cooperation, S.P. Robertson, W. Zachary and J.B. Black (Eds.), Ablex Publishing, New Jersey.

Woods, D. (1994). Cognitive Demands and Activities in Dynamic Fault Management: Abductive Reasoning and Disturbance Management. In Human Factors in Alarm Design, N. Stanton (Ed.), Taylor & Francis, London, pp. 63-92.
