Nonexplicit Change Detection in Complex Dynamic Settings: What Eye Movements Reveal

François Vachon and Benoît R. Vallières, Université Laval, Québec, Canada, Dylan M. Jones, Cardiff University, Cardiff, United Kingdom, and Sébastien Tremblay, Université Laval, Québec, Canada

Objective: We employed a computer-controlled command-and-control (C2) simulation and recorded eye movements to examine the extent and nature of the inability to detect critical changes in dynamic displays when change detection is implicit (i.e., requires no explicit report) to the operator’s task.

Background: Change blindness—the failure to notice significant changes to a visual scene—may have dire consequences on performance in C2 and surveillance operations.

Method: Participants performed a radar-based risk-assessment task involving multiple subtasks. Although participants were not required to explicitly report critical changes to the operational display, change detection was critical in informing decision making. Participants’ eye movements were used as an index of visual attention across the display.

Results: Nonfixated (i.e., unattended) changes were more likely to be missed than were fixated (i.e., attended) changes, supporting the idea that focused attention is necessary for conscious change detection. The finding of significant pupil dilation for changes undetected but fixated suggests that attended changes can nonetheless be missed because of a failure of attentional processes.

Conclusion: Change blindness in complex dynamic displays takes the form of failures in establishing task-appropriate patterns of attentional allocation.

Application: These findings have implications for the design of change-detection support tools for dynamic displays and work procedures in C2 and surveillance.

Keywords: change detection, focused attention, dynamic displays, command and control, eye tracking, pupillometry, microworld

Address correspondence to François Vachon, École de psychologie, Université Laval, Pavillon Félix-Antoine-Savard, 2325 Rue des Bibliothèques, Québec, Canada G1V 0A6; e-mail: [email protected].

HUMAN FACTORS Vol. 54, No. 6, December 2012, pp. 996-1007. DOI: 10.1177/0018720812443066. Copyright © 2012, Human Factors and Ergonomics Society.

Introduction

The prevalence of surveillance and information-gathering technologies has almost invariably led to a marked change in the volume and complexity of information presented to a system’s operator. In turn, this change may compromise the capacity of the operator to sustain performance in complex and dynamic settings, particularly in activities related to discerning significant objects and events. Failure to notice changes in a visual scene—a phenomenon sometimes referred to as change blindness (CB)—has been identified as one of the key consequences of increasing load and is particularly relevant to many work-related dynamic environments, especially in situations characterized by severe constraints of time pressure, uncertainty, and high information load (Durlach, 2004). Indeed, because of their safety-critical nature, the consequences of missing a critical change during command-and-control (C2) operations (e.g., military operations, air traffic control, crisis response management) may be disastrous (see Varakin, Levin, & Fidler, 2004). Most studies on change detection deal with static visual displays, and surprisingly little research has addressed complex dynamic contexts. This gap is unfortunate, given the likely practical impact. We adopt the approach of augmenting behavioral data on the detection of events with the analysis of eye movements to study the deployment of attention in a simulated C2 task.

In typical CB experiments, participants are instructed to report explicitly any changes occurring in a static visual scene (e.g., Henderson & Hollingworth, 1999; O’Regan, Deubel, Clark, & Rensink, 2000; Rensink, O’Regan, & Clark, 1997; see Rensink, 2002, for a review), but such explicit detection tasks have also been applied to dynamic displays (e.g., Bahrami, 2003; Boot, Kramer, Becic, Wiegmann, & Kubose, 2006; Levin & Simons, 1997).


Indeed, considerable “blindness” has been observed when participants had to report explicitly any task-relevant changes occurring in a dynamic C2 environment (e.g., DiVita, Obermayer, Nugent, & Linville, 2004; Durlach & Chen, 2003; Smallman & St. John, 2003). For instance, in a simulated C2 battlefield environment, Durlach and Chen (2003) found that change detection performance dropped from approximately 90% to approximately 50% when changes were contingent on the closing of a secondary task window. Although the detection of important changes in real C2 environments is typically pivotal to the operator’s mission, more often than not, these changes do not have to be reported explicitly; rather, C2 operators can respond to critical changes without having to declare them, and such detection is instead bound up in tasks such as visual search, display manipulations, situation assessment, and decision making (Durlach, 2004). In other words, C2 operations usually involve implicit change detection (cf. Triesch, Ballard, Hayhoe, & Sullivan, 2003); that is, the detection of critical changes to the situation is intrinsic to the operator’s mission and does not require any explicit report of change. However, it is unclear to what extent knowledge about CB in static and dynamic scenes generalizes to C2 multitasking situations in which change detection is implicit to the operator’s tasks. For example, Boot et al. (2006) demonstrated that minimizing eye movements over the display promotes change detection in dynamic scenes. However, the adoption of such a scanning strategy would be unthinkable in C2 situations, in which operators must perform multiple tasks concurrently and exploit the whole tactical display. CB has also been demonstrated in the context of relatively simple tasks in which participants were unaware that changes might happen (e.g., Johansson, Hall, & Sikström, 2008; Triesch et al., 2003), revealing how task demands can affect the ability to notice changes.

In extending the investigation of implicit change detection to dynamic displays, we use the Simulated Combat Control System (S-CCS) microworld (Lafond, Vachon, Rousseau, & Tremblay, 2010), a simplified emulation that requires no particular expertise but that provides a functional simulation of the cognitive activities performed by a tactical coordinator aboard a ship, such as the threat-evaluation and combat-power management processes.

In S-CCS, participants have to monitor a radar screen representing the airspace around the ship, evaluate the threat level of every aircraft moving in the vicinity of the ship on the basis of a list of parameters, and take appropriate defensive measures against hostile aircraft. Critical changes are inserted naturally in the S-CCS environment because change detection is intrinsic to the participant’s tasks. As each scenario begins with no hostile aircraft, a critical change consists of an aircraft passing unexpectedly from a nonthreatening to a hostile status. Participants must necessarily notice these critical changes to neutralize the hostile aircraft before the ship is hit. Each critical change is accompanied by a change on the radar screen (i.e., a change in the direction and/or the speed of the aircraft) that makes it more visually noticeable. Change detection is assessed through the timeliness of the actions made in relation to the “new” hostile aircraft rather than through the explicit report of a critical change. In other words, a critical change is considered detected when an action is performed on the aircraft within seconds after the change. This methodology can be viewed as a hybrid of the paradigms used to study CB (the failure to detect expected change) and a closely related phenomenon termed inattentional blindness (the failure to detect unexpected events; Mack & Rock, 1998).

A widely held view is that change detection in a visual scene depends on the application of conscious attention (e.g., Rensink, 2002; Simons & Ambinder, 2005; Tse, 2004). To examine the role of attention in the context of an implicit change-detection C2 task, we used eye tracking as an index of attention allocation over the dynamic display. Although visual selective attention and the overt movements of the eyes can be dissociated (e.g., Posner, 1980), they are nevertheless intrinsically related (e.g., McCarley & Kramer, 2008; Rayner, 2009). Indeed, “a fixation at a given location is strong evidence that attention has been there” (McCarley & Kramer, 2008, p. 99).


Because eye movements can index the deployment of attention, the technique of eye tracking has been useful in studying CB in static displays (e.g., Hollingworth, Schrock, & Henderson, 2001; McCarley et al., 2004; O’Regan et al., 2000; Triesch et al., 2003; Tse, 2004). For instance, some studies established that changes are more likely to be detected when occurring close to a fixated—that is, attended—position (e.g., Hollingworth et al., 2001; O’Regan et al., 2000), and further research showed that directly fixating the changing object does not ensure successful detection of the change, suggesting that attention is not always sufficient to enable change detection (e.g., Caplovitz, Fendrich, & Hughes, 2008; O’Regan et al., 2000).

In this study, we recorded eye movements to characterize the interaction between attentional allocation over the visual scene and the ability to notice significant changes to dynamic displays. The enhancement of traditional behavioral data with nonobtrusive eye tracking data is of particular importance, given the implicit nature of the change-detection task. Accordingly, we compared critical changes for which at least one fixation was recorded before and/or after the change with those changes that were never fixated. Eye tracking data were also used to assess whether change detection in dynamic displays depends on fixation position on the visual scene at the moment a critical change arises. On the basis of previous work involving dynamic displays (e.g., Boot et al., 2006), detection performance was anticipated to be worse for changes that occurred farther from the fixation location. We also examined pupil size in relation to critical changes. Pupil size has been studied in relation to many cognitive processes (see Beatty, 1982; Kahneman, 1973; Wang, 2011) and may vary in response to changes in attentional effort (e.g., Hoeks & Levelt, 1993). Exploiting pupillometry in the context of change detection is thus a natural complement to the study of eye fixations. For instance, Privitera, Renninger, Carney, Klein, and Aguilar (2010) showed that pupil dilation was related to the detection of visual targets presented at fixation and found significant pupil dilation even for undetected targets.

Method

Participants

Participants were 19 students from Université Laval (11 men; mean age = 22.6 years) reporting normal or corrected-to-normal vision and normal hearing. They received $20 compensation for their participation in a single 2-hr experimental session.

Eye Tracking

Eye movements were recorded with a Tobii T1750 eye tracker at a sampling rate of 50 Hz. Participants were seated 60 cm from the monitor-integrated eye tracking camera and were free to move their head. The threshold for detecting an eye fixation was set at 100 ms, and the fixation field corresponded to a circle with a 50-pixel radius. Eye movement data were analyzed with Tobii’s (2006) ClearView software.
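For readers who wish to apply comparable criteria to raw gaze samples, the following is a minimal, illustrative sketch of a dispersion-style fixation filter using the parameters reported here (50-Hz samples, a 100-ms minimum duration, and a 50-pixel radius). The actual ClearView algorithm is not described in this article, so the function and its details are assumptions rather than a reimplementation.

```python
import numpy as np

def detect_fixations(x, y, t, radius=50.0, min_dur=0.100):
    """Greedy dispersion-based fixation filter (illustrative only).

    x, y : gaze coordinates in pixels (one sample per row)
    t    : sample timestamps in seconds (50-Hz data -> 0.02-s steps)
    Returns a list of (start_time, end_time, centroid_x, centroid_y).
    """
    x, y, t = (np.asarray(a, dtype=float) for a in (x, y, t))
    fixations, start = [], 0
    while start < len(t):
        end = start + 1
        # Grow the window while every sample stays within `radius` pixels
        # of the running centroid.
        while end < len(t):
            cx, cy = x[start:end + 1].mean(), y[start:end + 1].mean()
            d = np.hypot(x[start:end + 1] - cx, y[start:end + 1] - cy)
            if d.max() > radius:
                break
            end += 1
        if t[end - 1] - t[start] >= min_dur:
            cx, cy = x[start:end].mean(), y[start:end].mean()
            fixations.append((t[start], t[end - 1], cx, cy))
            start = end
        else:
            start += 1
    return fixations
```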

Microworld

The S-CCS microworld is a low-level, computer-controlled simulation of single-ship naval above-water warfare. The simulation is dynamic and evolves according to a scenario in interaction with the operator’s actions. Typical scenarios of the S-CCS microworld involve multiple aircraft moving in the vicinity of the ship with possible attacks requiring retaliatory missile firing from the ship. A single participant plays the role of tactical coordinator who must observe and comprehend the operational space; conduct threat assessments, including the categorization and prioritization of threats; and plan and schedule the application of combat power. This microworld was a substrate that enabled us to generate displays, develop scenarios, and record the status of all objects and events. See Figure 1 for a description of the various parts of the display. (A video of the S-CCS microworld can be viewed at http://www.co-dot.ulaval.ca, in the Cognitive System Integration section.)

Task

As a tactical coordinator, participants performed three main subtasks. First, participants assessed the level of threat posed by an aircraft by classifying all aircraft as nonhostile, uncertain, or hostile.


Figure 1. Screenshot of the Simulated Combat Control System (S-CCS) microworld visual interface. This interface can be divided into three parts: (a) a radar display depicting in real time all aircraft (represented by a white dot surrounded by a green square) moving at various speeds and trajectories around the ship (represented by the central point), (b) a parameters list providing information on a number of parameters about the selected aircraft, and (c) a set of action buttons allowing the participant to allocate threat level and threat immediacy to an aircraft and to engage with missile fire a candidate hostile aircraft.

They were instructed to take into account 5 out of the 11 parameters displayed in the list: Origin, Altitude, Identification Friend-or-Foe (IFF), Military Electronic Emissions, and Detection of Weapons. Each critical parameter can take either a threatening or a nonthreatening value, and the number of threatening cues determines the threat level of an aircraft. The classification decision had to be made according to the following rule: An aircraft is (a) nonhostile when it shows zero to one threatening cue, (b) uncertain when it manifests two to three threatening cues, and (c) hostile when it exhibits four to five threatening cues. None of the critical parameters was intrinsically more important than the others. When a decision was made, participants were required to click on the corresponding classification button.
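As a concrete illustration, the short sketch below encodes the cue-counting rule just described; the parameter names and function are illustrative and not part of the S-CCS software.

```python
# Illustrative encoding of the S-CCS threat-classification rule:
# count the threatening cues among the five critical parameters.
CRITICAL_PARAMETERS = ["origin", "altitude", "iff",
                       "military_electronic_emissions", "detection_of_weapons"]

def classify_threat(aircraft: dict) -> str:
    """Return 'nonhostile', 'uncertain', or 'hostile'.

    `aircraft` maps each critical parameter to True (threatening value)
    or False (nonthreatening value).
    """
    threatening_cues = sum(bool(aircraft[p]) for p in CRITICAL_PARAMETERS)
    if threatening_cues <= 1:
        return "nonhostile"   # 0-1 threatening cues
    if threatening_cues <= 3:
        return "uncertain"    # 2-3 threatening cues
    return "hostile"          # 4-5 threatening cues

# Example: four threatening cues -> hostile
print(classify_threat({"origin": True, "altitude": True, "iff": True,
                       "military_electronic_emissions": True,
                       "detection_of_weapons": False}))
```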

The white dot representing the selected aircraft changed color according to the level of threat assigned to it: green (nonhostile), yellow (uncertain), or red (hostile). Because aircraft threat level could increase over time, participants had to check the parameters regularly and reassess the threat level when appropriate. The second task consisted of assessing the threat immediacy of all aircraft designated as hostile on the basis of their spatiotemporal proximity to the ship.


Since hostile aircraft were programmed to hit the ship in the absence of any intervention, threat immediacy had to be evaluated by adding two parameter values: the time the aircraft would take to reach the closest point of approach (CPA) if its velocity remained constant, or time to CPA (TCPA), and the CPA distance expressed in units of time (CPAUT), that is, the time it would take the aircraft to hit the ship from the CPA (see Roy, Paradis, & Allouche, 2002, for more details about the “time before hit” computation). Participants had to determine whether the threat immediacy level was high, medium, or low on the basis of this time before hit (with 30 s marking a category boundary) and to click on the corresponding immediacy button (1 to 3, respectively). The third task was to defend the ship against hostile aircraft by launching an antimissile with the Engage button when the target was within range. A small white dot, representing the antimissile device, then appeared on the radar screen. Only one projectile could be airborne at a time.
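The TCPA and CPAUT quantities are straightforward to derive from relative position and velocity under the constant-velocity assumption stated here. The sketch below is a generic closest-point-of-approach computation offered for illustration only; the exact formulas implemented in S-CCS (and in Roy et al., 2002) may differ.

```python
import numpy as np

def time_before_hit(rel_pos, rel_vel):
    """Rough sketch of the 'time before hit' quantity described in the text:
    TCPA + CPA distance expressed in units of time (CPAUT), assuming the
    aircraft keeps a constant velocity relative to the (stationary) ship.

    rel_pos : aircraft position relative to the ship
    rel_vel : aircraft velocity relative to the ship (same units per second)
    """
    rel_pos, rel_vel = np.asarray(rel_pos, float), np.asarray(rel_vel, float)
    speed_sq = np.dot(rel_vel, rel_vel)
    if speed_sq == 0:
        return np.inf                              # stationary aircraft never hits
    # Time at which the aircraft is closest to the ship (clipped at "now").
    tcpa = max(0.0, -np.dot(rel_pos, rel_vel) / speed_sq)
    cpa_distance = np.linalg.norm(rel_pos + rel_vel * tcpa)
    cpaut = cpa_distance / np.sqrt(speed_sq)       # CPA distance in units of time
    return tcpa + cpaut

# Example: aircraft 3000 units away, closing head-on at 100 units/s -> 30.0 s
print(time_before_hit([3000.0, 0.0], [-100.0, 0.0]))
```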

Change Detection

When an aircraft appeared on the radar screen, it was either nonhostile or uncertain (i.e., displaying fewer than four threatening cues). Its status could change during the course of the scenario to become hostile (i.e., displaying four to five threatening cues). Such changes were considered critical because hostile aircraft were programmed to hit the ship. A total of eight critical changes occurred unexpectedly in each scenario. Each critical change was accompanied by a change of either speed (i.e., an increase), direction (i.e., the aircraft started heading toward the ship), or both. A critical change was considered detected if the aircraft was selected and/or classified within the 15 s following the change. If no action was taken on a hostile aircraft within these 15 s, the change was considered undetected.
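In analysis terms, this criterion can be scored directly from a time-stamped log of participant actions. The following is a minimal sketch of such scoring; the log format and field names are assumptions, not the authors’ actual data structures.

```python
DETECTION_WINDOW = 15.0  # seconds after a critical change

def change_detected(change_time, aircraft_id, action_log,
                    window=DETECTION_WINDOW):
    """Apply the nonexplicit detection criterion described in the text:
    a critical change counts as detected if the changed aircraft was
    selected and/or classified within `window` seconds after the change.

    `action_log` is assumed to be a list of (time, aircraft_id, action)
    tuples, where `action` is, e.g., 'select' or 'classify'.
    """
    return any(
        target == aircraft_id
        and change_time <= t <= change_time + window
        and action in ("select", "classify")
        for t, target, action in action_log
    )

# Example: aircraft 7 was classified 4 s after a change at t = 120 s
log = [(118.0, 3, "select"), (124.0, 7, "classify")]
print(change_detected(120.0, 7, log))  # True
```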

Procedure

Following a tutorial describing the context of the simulation and the tasks, participants were asked to undertake the threat-evaluation and threat-immediacy tasks for nine static screenshots to verify their understanding. Familiarity with the dynamic environment was established in two training sessions, each comprising four 4-min scenarios (or trials). After calibration of the eye tracking system, participants performed four randomly presented experimental blocks separated by 5-min rest periods.

Each block comprised four 4-min scenarios of similar difficulty, presented randomly. Each scenario involved a set of 27 aircraft (11 nonhostile, 8 uncertain, and 8 hostile) varying in speed and trajectory. A maximum of 10 aircraft could appear on the radar screen at the same time.

Results

Changes to aircraft parameters were examined in relation to whether they were fixated (i.e., attended) before and after the change as well as by the position of the gaze on the display at the moment of the change. We also examined the impact of change detection on pupil size. Given that the nondetection of a shift toward a hostile status is expected to influence the participant’s capacity to adequately achieve his or her mission—that is, to correctly classify aircraft and to neutralize hostile aircraft—we examined whether the level of CB was related to performance on the classification and neutralization tasks. The alpha level was set at .05.

Change Detection

Fixated vs. nonfixated changes. Overall, 13.1% of critical changes were not detected; that is, no actions were recorded on the changed aircraft within the 15-s postchange period. Among the changes that received at least one postchange fixation (i.e., within 15 s after the change), which represent 91.7% of all critical changes, 7.2% remained undetected, but this percentage increased to 78.1% for changes that were never fixated after the change. (Overall, 1.8% of the critical changes were detected even though no fixations were recorded on the changed aircraft after the change. Among those changes, 27% can be explained by the fact that the changed aircraft was either fixated before the change or already selected when the change occurred [e.g., Hollingworth & Henderson, 2002]. This means that only 1.3% of critical changes were detected without being fixated, probably reflecting some action unrelated to the critical change; for example, the action was planned before the change occurred, or the aircraft was the closest to the mouse cursor at the moment the decision to act upon it was taken.) Odds ratios indicated that a change was 45.85 times more likely to be missed if not fixated after it occurred than if fixated, χ²(1) = 814.94, p < .001.
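For readers unfamiliar with this statistic, the sketch below shows how such an odds ratio and the accompanying chi-square test can be computed from a 2 × 2 detection-by-fixation table. The cell counts used here are hypothetical, not the study’s raw data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = postchange fixation (yes/no),
# columns = change outcome (missed, detected).
table = [[66, 850],   # fixated after the change
         [75, 21]]    # never fixated after the change

missed_fix, detected_fix = table[0]
missed_nofix, detected_nofix = table[1]

# Odds of missing a change when it was not fixated vs. when it was fixated.
odds_ratio = (missed_nofix / detected_nofix) / (missed_fix / detected_fix)
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"OR = {odds_ratio:.2f}, chi2({dof}) = {chi2:.2f}, p = {p:.3g}")
```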


Figure 2. Proportion of undetected changes in the prechange fixation (fixated vs. not fixated) and postchange fixation (fixated vs. not fixated) conditions. Error bars represent 95% within-subject confidence intervals calculated separately for postchange fixated and nonfixated changes.

When considering prechange fixations (i.e., within 5 s before a critical change occurred), we found that 19.8% of nonfixated changes remained undetected, whereas this percentage dropped to 9.3% for changed aircraft fixated before the change. A change was 2.13 times more likely to be missed if not fixated before the change than if fixated, χ²(1) = 33.94, p < .001. Overall, 17.9% of the undetected changes were fixated both before and after the change, suggesting that looking at a changing object did not guarantee detection of the change (e.g., O’Regan et al., 2000). To assess the combined impact of pre- and postchange fixations on change detection, we computed the proportion of undetected changes for each participant as a function of whether the changing aircraft had been fixated before and after the change. The results are plotted in Figure 2. A 2 × 2 repeated-measures ANOVA revealed significant effects of postchange fixation, F(1, 18) = 223.75, p < .001, and of prechange fixation, F(1, 18) = 6.33, p = .022, pointing to better detection for changing aircraft that were fixated before and/or after the change. The significant interaction, F(1, 18) = 4.79, p = .042, indicated that the beneficial impact of prechange fixation on change detection was greater for changed aircraft that were never fixated after the change than for those that received at least one postchange fixation.
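A sketch of how this 2 × 2 repeated-measures analysis could be reproduced from per-participant proportions follows. The synthetic data, column names, and use of statsmodels’ AnovaRM are illustrative assumptions about tooling, not the authors’ actual analysis pipeline.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data: one row per participant x condition, with the
# proportion of undetected changes as the dependent variable (values invented).
rng = np.random.default_rng(1)
rows = []
for subj in range(1, 20):                       # 19 participants, as in the study
    for pre in ("fixated", "not_fixated"):
        for post in ("fixated", "not_fixated"):
            base = 0.05 if post == "fixated" else 0.60
            base -= 0.03 if pre == "fixated" else 0.0
            rows.append({"participant": subj, "prechange": pre,
                         "postchange": post,
                         "prop_undetected": base + rng.normal(0, 0.02)})
df = pd.DataFrame(rows)

# 2 (prechange fixation) x 2 (postchange fixation) repeated-measures ANOVA
anova = AnovaRM(df, depvar="prop_undetected", subject="participant",
                within=["prechange", "postchange"]).fit()
print(anova)
```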

Figure 3. Proportion of undetected changes in each gaze-distance-from-change condition (0 to 299, 300 to 699, and 700 or more pixels) for all critical changes as well as for changes that received prechange and postchange fixations. Error bars represent 95% within-subject confidence intervals calculated separately for each type of change.

Gaze position at the time of change. We turn now to whether change detection in dynamic displays depends on gaze position at the moment the change occurs, as it does in static scenes (e.g., Hollingworth et al., 2001). First, we compared the distance, in pixels, separating gaze position from the changed aircraft between detected and undetected changes. Undetected changes occurred, on average, farther from gaze position (M = 213.68 pixels, SE = 4.36) than did detected changes (M = 148.31 pixels, SE = 0.98), t(18) = 12.05, p < .001. As in static displays, detection in dynamic displays thus increases with gaze proximity to the changed object (e.g., Hollingworth et al., 2001; O’Regan et al., 2000). The proportion of undetected changes also varied with the distance between gaze position and the change. We delineated three distance intervals (0 to 299 pixels, 300 to 699 pixels, and 700 or more pixels) and compared the proportion of undetected changes across these intervals. The results are presented in the left part of Figure 3.
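The distance measure and the three-interval binning are simple to compute from gaze and aircraft coordinates; a small illustrative helper (names assumed) is shown below.

```python
import numpy as np

def distance_bin(gaze_xy, change_xy):
    """Euclidean gaze-to-change distance (pixels) mapped onto the three
    intervals used in the analysis: 0-299, 300-699, and 700+ pixels."""
    d = np.hypot(gaze_xy[0] - change_xy[0], gaze_xy[1] - change_xy[1])
    if d < 300:
        return "0-299"
    if d < 700:
        return "300-699"
    return "700+"

# Example: gaze at screen center, change in the upper-right quadrant
print(distance_bin((512, 384), (900, 600)))  # '300-699'
```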


We used a stricter alpha level of .0167 to compensate for the increase in familywise error rate. The proportion of undetected changes significantly increased as a function of distance, F(2, 36) = 14.87, p = .001. Next, we verified whether this distance-interval effect also applied to fixated changes. As shown in Figure 3, change detection was no longer modulated by the position of the gaze at the time of the change when the changing aircraft was fixated either before, F(2, 36) = 1.20, p = .298, or after the change, F(2, 36) = 0.93, p = .354.

Pupillometry. We computed the percentage change in pupil size (PCPS) evoked by each change: the average pupil size within the 15-s interval after the change minus the average pupil size in the 15 s preceding the change (i.e., the baseline), divided by the baseline (see Beatty, 1982). We used a stricter alpha level of .01 to compensate for the increase in familywise error rate. The average change-evoked PCPS (see Figure 4) was significantly larger for detected changes than for undetected changes, t(18) = 6.04, p < .001. In fact, there was no reliable variation in pupil size following an undetected change, t(18) = −1.19, p = .251. We then verified whether this latter pattern depended on whether the undetected change was fixated. Whereas pupil size was not affected by undetected changes that were never fixated, t(18) = −1.18, p = .254, it significantly increased following undetected changes that were fixated, t(18) = 3.75, p < .001. However, this increase was not as large as that observed for detected changes, t(18) = 3.05, p = .007.

Figure 4. Average percentage change in pupil size (PCPS) evoked by a critical change according to whether the change was detected as well as whether an undetected change was fixated after the change. The PCPS is computed as the average pupil size within the 15-s interval after the change minus the pupil size averaged across the 15 s preceding the change (i.e., the baseline), divided by the baseline. Error bars represent 95% within-subject confidence intervals calculated separately for each type of change.
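A minimal sketch of the PCPS computation just defined is given below; the array names and windowing helper are illustrative rather than the authors’ actual analysis code.

```python
import numpy as np

def pcps(pupil, times, change_time, window=15.0):
    """Percentage change in pupil size evoked by a change:
    100 * (mean post-change size - baseline) / baseline, where the baseline
    is the mean pupil size over the `window` seconds preceding the change."""
    pupil, times = np.asarray(pupil, float), np.asarray(times, float)
    pre = pupil[(times >= change_time - window) & (times < change_time)]
    post = pupil[(times >= change_time) & (times < change_time + window)]
    baseline = pre.mean()
    return 100.0 * (post.mean() - baseline) / baseline
```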

Performance

A range of measures associated with the classification and neutralization tasks was recorded and related to change detection. Accordingly, we extracted the proportion of correct classifications as well as a measure of defensive effectiveness, namely, the proportion of times the ship was hit by a hostile aircraft. Given that CB has previously been associated with poorer decision making (e.g., Creaser, Edwards, Uggerslev, & Caird, 2001; Gabaude et al., 2009), we expected to find a negative relationship between the proportion of undetected changes and classification accuracy. Using a one-tailed test, we found such a negative correlation, r(17) = −.406, p = .042, indicating that classification accuracy tended to be lower when undetected changes were more frequent. We also computed the same correlation separately for fixated and nonfixated changes. The negative relationship between the proportion of undetected changes and classification accuracy appeared to be confined to changes that were never fixated: The correlation remained moderate for nonfixated changes, r(17) = −.640, p = .002, but was not significant for fixated changes, r(17) = .024, p = .461. Conversely, we expected the proportion of undetected changes to be positively associated with the proportion of ship hits, as newly hostile aircraft are more likely to reach the ship if never detected. Consistent with this prediction, a strong positive (one-tailed) relationship was observed between the proportion of undetected changes and the proportion of ship hits, r(17) = .776, p < .001, indicating that the more changes remained undetected, the more the ship was hit by hostile aircraft. When the same correlation was computed separately for fixated and nonfixated changes, the positive relationship was mainly attributable, this time, to changes that were fixated.


Indeed, whereas the correlation was still strong for fixated changes, r(17) = .646, p = .001, it was not significant for nonfixated changes, r(17) = .225, p = .177.
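The correlational analyses reported in this section amount to Pearson correlations with directional (one-tailed) p values. The sketch below shows one way to compute them; the helper function and the synthetic per-participant data are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import pearsonr

def one_tailed_pearson(x, y, expected_sign=-1):
    """Pearson r with a directional (one-tailed) p value.

    `expected_sign` is -1 for a predicted negative correlation (e.g.,
    undetected changes vs. classification accuracy) and +1 for a predicted
    positive one (e.g., undetected changes vs. proportion of ship hits).
    """
    r, p_two_sided = pearsonr(x, y)
    if np.sign(r) == expected_sign:
        p_one_sided = p_two_sided / 2
    else:
        p_one_sided = 1 - p_two_sided / 2
    return r, p_one_sided

# Hypothetical per-participant data (19 participants, as in the study)
rng = np.random.default_rng(0)
undetected = rng.uniform(0.0, 0.4, 19)
accuracy = 0.9 - 0.3 * undetected + rng.normal(0, 0.05, 19)
print(one_tailed_pearson(undetected, accuracy, expected_sign=-1))
```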

Discussion

The results indicated that the ability to detect critical changes implicitly was strongly affected by the pattern of eye movements over the dynamic display. Critical changes were more likely to remain unseen if the eye never looked directly at the changed aircraft in the moments preceding or following the change. If one assumes that eye movements can index the allocation of attention over a display (see, e.g., Rayner, 2009; Tse, 2004), such findings are consistent with the notion that attention is required for conscious change perception (e.g., Durlach, 2004; Simons & Ambinder, 2005).

Although fixated (i.e., attended) changes were more likely to be noticed than were nonfixated (i.e., unattended) changes, attention to a changing aircraft did not seem sufficient for change detection: Almost 18% of undetected changes were fixated both before and after the change occurred. There is evidence that changes to attended objects can often remain unnoticed in both static (e.g., Caplovitz et al., 2008; O’Regan et al., 2000; Williams & Simons, 2000) and dynamic scenes (e.g., Levin & Simons, 1997; Simons & Levin, 1998), suggesting that looking at something does not mean one sees it. To account for such findings, O’Regan et al. (2000) proposed that “what an observer ‘sees’ at any moment in a scene is not the location he or she is directly fixating with the eyes, but the aspect of the scene he or she is currently attending to” (p. 209). As such, the elements of the scene that are attended appear to be determined by behavioral goals and task demands (e.g., Triesch et al., 2003). Thus, it is possible that in our dynamic display, some fixated changes went undetected because, at the time the changing aircraft was fixated, participants were engaged in cognitive activities that did not require allocating attentional resources to that particular aircraft (e.g., searching for or evaluating the threat or immediacy level of another aircraft).

Importantly, a double dissociation was found between fixated and nonfixated changes. On one hand, fixation position on the display at the moment of change modulated the detection of nonfixated changes but not that of fixated changes. On the other hand, the pupillary response was sensitive to undetected changes that were fixated but not to those that were never fixated. A similar pattern was also observed with performance: Classification accuracy was correlated only with the detection of nonfixated changes, whereas defensive effectiveness covaried only with the detection of fixated changes. Such a dissociation points toward two distinct sources of CB in dynamic displays.

The first source relates to the allocation of attention to critical changes. As previously put forward for static scenes (e.g., Rensink, 2002; Simons & Ambinder, 2005), unattended changes are less likely to be consciously perceived and, consequently, more likely to remain undetected. This “no attention” source of CB would be responsible for the failure to detect nonfixated changes. That the probability of detecting a nonfixated (i.e., unattended) change diminished as a function of fixation distance from the change location is consistent with this interpretation (see also Hollingworth et al., 2001; O’Regan et al., 2000). In fact, this relationship between fixation position and nonfixated change detection likely reflects the preferential allocation of visual attention to fixated objects. Changes occurring in the peripheral region of the visual field appear less likely to be attended and, hence, perceived than do changes happening closer to the foveal region where overt attention is focused.

The other source of CB is highlighted by pupillometry: An attended change can be missed because of a failure of attentional processes. Pupil dilation has been associated with increased attentional processing (e.g., Hoeks & Levelt, 1993); accordingly, significant dilation for changes undetected but fixated suggests that despite the failure to consciously detect fixated changes (cf. Privitera et al., 2010), some attentional effort was generated in response to the changed aircraft. That the pupil response to fixated undetected changes was smaller than that to detected changes may be taken as evidence of impaired attentional processing of changed objects.


More specifically, we propose that such impairment pertains to automatic detection processes. Attended visual transients triggered by a critical change are more likely to be automatically (i.e., unconsciously) detected by the cognitive system (e.g., Durlach, 2004) and then to generate a “call for attention” toward the change for further processing. There is recent evidence that automatic processes can depend on top-down attentional control and can therefore be severely limited during multitasking, when task demands are high (e.g., Kiefer & Martens, 2010; Logan & Gordon, 2001; Vachon & Jolicoeur, 2011, 2012; Vachon, Tremblay, & Jones, 2007). Thus, it is likely that this automatic “call for attention” remains unanswered in complex dynamic situations because attentional processes are overloaded by the need to process multiple sources of information that are constantly evolving under time constraints and high cognitive load, leading to the failure to consciously detect the change. Whereas this attention-failure source of CB is believed to be more specific to complex dynamic situations, the nondetection of unattended changes is likely to occur in both static and dynamic scenes.

In typical CB experiments, the failure of change detection is much more likely if the change coincides with a visual distraction, such as a flash, a partial occlusion, or distractors (see Durlach, 2004; Rensink, 2002). The goal of these visual disruptions, especially in static scenes, is to prevent the reception of the visual transient signals that are typically used by the visual system to detect changes, by automatically directing attention away from them (e.g., Durlach, 2004; O’Regan et al., 2000; Simons & Ambinder, 2005). However, another key finding of the present study was the demonstration that CB in dynamic displays can arise without any contingent visual distraction (see also Bahrami, 2003). Indeed, critical changes were not made concomitant with extraneous, imposed visual disturbances. Such a result likely ensues from the fact that in C2 environments, observers have to deal with a high load of visual information constantly evolving over time. That is, multiple changes and, consequently, numerous visual transients occur concurrently across the visual field at every moment.

These transients compete for attention with the localized transient produced by the critical change, reducing the propensity of that change to pop out from the scene and, hence, minimizing its detectability (cf. O’Regan et al., 2000). Accordingly, under “normal” circumstances, when there is no control over the presentation of distracting visual information, C2 environments would be more conducive to CB than static scenes. Of course, one would expect to see even more undetected changes in dynamic displays when they coincide with a visually distracting event (see, e.g., Boot et al., 2006; DiVita et al., 2004; Durlach & Chen, 2003), but the fact remains that visual distraction does not seem necessary for CB to occur in dynamic scenes.

In a real C2 situation, optimal performance is critical for safety purposes. Importantly, the tendency for an operator to fail to detect critical events occurring right where he or she is looking could be amplified by the fact that people are generally unaware of the CB phenomenon and tend to overestimate their ability to detect change (a phenomenon known as CB blindness; e.g., Levin, Momen, Drivdahl, & Simons, 2000). To reduce the potential negative effects of CB, one solution may lie in the design of supportive tools for change detection in complex dynamic situations (see, e.g., Durlach, 2004; St. John & Smallman, 2008; Varakin et al., 2004). So far, change-detection tools have been designed mainly to attract attention toward the critical events occurring in the visual scene. For example, Smallman and St. John (2003) developed the Change History EXplicit (CHEX) tool, which employs automation to assist change awareness during the active monitoring of airspace activity. By automatically detecting and logging every change in a flexibly sorted table added to the operator’s interface and by linking each entry in that table to the corresponding object on the geospatial display, the CHEX tool has proven effective in supporting explicit change detection (see also St. John et al., 2005).


Such effectiveness remains to be demonstrated in the context of an implicit change-detection task, as the addition of a new source of information may amplify information overload and, consequently, hinder the proper execution of the operator’s main tasks. Moreover, the present study suggests that a failure of automatic attentional processes can also contribute to CB in dynamic displays. It is not clear whether a tool such as CHEX could help to overcome such attentional failure. In fact, if the extraction of visual information is degraded, the extraction of the information logged in the CHEX table may suffer the same complications. Conversely, the fact that the information displayed in the table is continuously accessible may reduce the negative impact of temporary attentional breakdowns. This remains to be tested. Nevertheless, the present findings suggest that designers of change-detection tools should not only focus on promoting the allocation of attention to critical changes but also consider the risk of attentional failures as well as the factors that exacerbate them.

Acknowledgments

We are thankful to Julie Champagne, Sergei Smolov, Thierry Moisan, and Laurence Dumont for assistance in programming, data collection, and data analysis. This work was supported by an R&D partnership grant from the Natural Sciences and Engineering Research Council of Canada with Defence R&D Canada–Valcartier, Neosapiens, and Thales Canada, Systems Division, awarded to Sébastien Tremblay.

Key Points

• The capability of a system’s operator to detect significant changes during command-and-control dynamic situations is crucial because of the safety-critical nature of such operations.

• Empirical investigations of change detection typically require the explicit report of any critical change, whereas change detection is usually implicit to the operator’s task in complex dynamic environments.

• Eye tracking data uncovered two sources of “change blindness” in complex dynamic situations: Changes are missed because they are never attended or because of a failure of (automatic) attentional processes.

• Whereas the nondetection of unattended changes is likely to occur in both static and dynamic scenes, the attention-failure cause of change blindness is assumed to be more specific to complex dynamic situations, characterized by severe constraints of time pressure, uncertainty, and high information load.

References

Bahrami, B. (2003). Object property encoding and change blindness in multiple object tracking. Visual Cognition, 10, 949–963.

Beatty, J. (1982). Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychological Bulletin, 91, 276–292.

Boot, W. R., Kramer, A. F., Becic, E., Wiegmann, D. A., & Kubose, T. (2006). Detecting transient changes in dynamic displays: The more you look, the less you see. Human Factors, 48, 759–773.

Caplovitz, G. P., Fendrich, R., & Hughes, H. C. (2008). Failures to see: Attentive blank stares revealed by change blindness. Consciousness and Cognition, 17, 877–886.

Creaser, J. I., Edwards, C. J., Uggerslev, K. L., & Caird, J. K. (2001). Detection of cars and pedestrians while making left turn decisions. In Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting (pp. 1607–1611). Santa Monica, CA: Human Factors and Ergonomics Society.

DiVita, J., Obermayer, R., Nugent, W., & Linville, J. M. (2004). Verification of the change blindness phenomenon while managing critical events on a combat information display. Human Factors, 46, 205–218.

Durlach, P. J. (2004). Change blindness and its implications for complex monitoring and control systems design and operator training. Human-Computer Interaction, 19, 423–451.

Durlach, P. J., & Chen, J. Y. C. (2003). Visual change detection in digital military displays. In Proceedings of the Interservice/Industry Training, Simulation, and Education Conference 2003 (n.p.). Orlando, FL: IITSEC.

Gabaude, C., Paire-Ficout, L., Lafont, S., Bedoin, N., Knoblauch, K., Bruyas, M. P., & Vital-Durand, F. (2009). Relationship between change detection and decision making at left-turn intersection: Effects of age and distraction. In Proceedings of the 21st World Congress of the International Traffic Medicine Association (pp. 44–45). The Hague, Netherlands: International Traffic Medicine Association.

Henderson, J. M., & Hollingworth, A. (1999). The role of fixation position in detecting scene changes across saccades. Psychological Science, 10, 438–443.

Hoeks, B., & Levelt, W. J. M. (1993). Pupillary dilation as a measure of attention: A quantitative system analysis. Behavior Research Methods, Instruments, & Computers, 25, 16–26.

Hollingworth, A., & Henderson, J. M. (2002). Accurate visual memory for previously attended objects in natural scenes. Journal of Experimental Psychology: Human Perception and Performance, 28, 113–136.

Hollingworth, A., Schrock, G., & Henderson, J. M. (2001). Change detection in the flicker paradigm: The role of fixation position within the scene. Memory & Cognition, 29, 296–304.

Johansson, P., Hall, L., & Sikström, S. (2008). From change blindness to choice blindness. Psychologia, 51, 142–155.


Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice Hall.

Kiefer, M., & Martens, U. (2010). Attentional sensitization of unconscious cognition: Task sets modulate subsequent masked semantic priming. Journal of Experimental Psychology: General, 139, 464–489.

Lafond, D., Vachon, F., Rousseau, R., & Tremblay, S. (2010). A cognitive and holistic approach to developing metrics for decision support in command and control. In D. B. Kaber & G. Boy (Eds.), Advances in cognitive ergonomics (pp. 65–73). Danvers, MA: CRC Press.

Levin, D. T., Momen, N., Drivdahl, S. B., & Simons, D. J. (2000). Change blindness blindness: The metacognitive error of overestimating change-detection ability. Visual Cognition, 7, 397–412.

Levin, D. T., & Simons, D. J. (1997). Failure to detect changes to attended objects in motion pictures. Psychonomic Bulletin & Review, 4, 501–506.

Logan, G. D., & Gordon, R. D. (2001). Executive control of visual attention in dual-task situations. Psychological Review, 108, 393–434.

Mack, A., & Rock, I. (1998). Inattentional blindness. Cambridge, MA: MIT Press.

McCarley, J. S., & Kramer, A. F. (2008). Eye movements as a window on perception and cognition. In R. Parasuraman & M. Rizzo (Eds.), Neuroergonomics: The brain at work (pp. 95–112). New York, NY: Oxford University Press.

McCarley, J. S., Vais, M. J., Pringle, H., Kramer, A. F., Irwin, D. E., & Strayer, D. L. (2004). Conversation disrupts change detection in complex traffic scenes. Human Factors, 46, 424–436.

O’Regan, J. K., Deubel, H., Clark, J. J., & Rensink, R. A. (2000). Picture changes during blinks: Looking without seeing and seeing without looking. Visual Cognition, 7, 191–211.

Posner, M. I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32, 3–25.

Privitera, C. M., Renninger, L. W., Carney, T., Klein, S., & Aguilar, M. (2010). Pupil dilation during visual target detection. Journal of Vision, 10, 1–14.

Rayner, K. (2009). The 35th Sir Frederick Bartlett Lecture: Eye movements and attention in reading, scene perception, and visual search. Quarterly Journal of Experimental Psychology, 62, 1457–1506.

Rensink, R. A. (2002). Change detection. Annual Review of Psychology, 53, 245–277.

Rensink, R. A., O’Regan, J. K., & Clark, J. J. (1997). To see or not to see: The need for attention to perceive changes in scenes. Psychological Science, 8, 368–373.

Roy, J., Paradis, S., & Allouche, M. (2002). Threat evaluation for impact assessment in situation analysis systems. In I. Kadar (Ed.), Signal Processing, Sensor Fusion, and Target Recognition XI: Proceedings of SPIE (pp. 329–341). Orlando, FL: SPIE Press.

Simons, D. J., & Ambinder, M. S. (2005). Change blindness: Theory and consequences. Current Directions in Psychological Science, 14, 44–48.

Simons, D. J., & Levin, D. T. (1998). Failure to detect changes to people in a real-world interaction. Psychonomic Bulletin & Review, 5, 644–649.

Smallman, H. S., & St. John, M. (2003). CHEX (Change History EXplicit): New HCI concepts for change awareness. In Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting (pp. 528–532). Santa Monica, CA: Human Factors and Ergonomics Society.

St. John, M., & Smallman, H. S. (2008). Staying up to speed: Four design principles for maintaining and recovering situation awareness. Journal of Cognitive Engineering and Decision Making, 2, 118–139.

St. John, M., Smallman, H. S., & Manes, D. I. (2005). Recovery from interruptions to a dynamic monitoring task: The beguiling utility of instant replay. In Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting (pp. 473–477). Santa Monica, CA: Human Factors and Ergonomics Society.

Tobii Technology. (2006). Tobii 1750. Falls Church, VA: Author.

Triesch, J., Ballard, D. H., Hayhoe, M. M., & Sullivan, B. T. (2003). What you see is what you need. Journal of Vision, 3, 86–94.

Tse, P. U. (2004). Mapping visual attention with change blindness: New directions for a new method. Cognitive Science, 28, 241–258.

Vachon, F., & Jolicoeur, P. (2011). Impaired semantic processing during task-set switching: Evidence from the N400 in rapid serial visual presentation. Psychophysiology, 48, 102–111.

Vachon, F., & Jolicoeur, P. (2012). On the automaticity of semantic processing during task switching. Journal of Cognitive Neuroscience, 24, 611–626.

Vachon, F., Tremblay, S., & Jones, D. M. (2007). Task-set reconfiguration suspends perceptual processing: Evidence from semantic priming during the attentional blink. Journal of Experimental Psychology: Human Perception and Performance, 33, 330–347.

Varakin, D., Levin, D. T., & Fidler, R. (2004). Unseen and unaware: Implications of recent research on failures of visual awareness for human-computer interface design. Human-Computer Interaction, 19, 389–422.

Wang, J. T. (2011). Pupil dilation and eye tracking. In M. Schulte-Mecklenbeck, A. Kuhberger, & R. Ranyard (Eds.), A handbook of process tracing methods for decision research: A critical review and user’s guide (pp. 185–204). New York, NY: Psychology Press.

Williams, P., & Simons, D. J. (2000). Detecting changes in novel, complex three-dimensional objects. Visual Cognition, 7, 297–322.

François Vachon is an assistant professor in the School of Psychology at Université Laval in Québec, Canada. His main research interests include the basic and applied cognitive psychology of attention and multitasking. He received his PhD in cognitive psychology from Université Laval in 2007. He then completed a postdoctoral fellowship in cognitive psychology at Cardiff University (2007); one in cognitive neuroscience at Université de Montréal, Canada (2008–2009); and another in human factors at Université Laval (2009–2011). Benoît R. Vallières is a PhD student in applied cognitive psychology at Université Laval. He received his bachelor’s degree in psychology from Université Laval in 2009.


Dylan M. Jones is head of the School of Psychology at Cardiff University in Cardiff, United Kingdom (since 2003) and is also an adjunct professor in the Department of Psychology at the University of Western Australia in Perth. His extensive research program is concerned with both basic and applied aspects of human cognition. In 2001, he was awarded Officer of the Order of the British Empire (OBE) for services to military science.

Sébastien Tremblay is a professor in the School of Psychology at Université Laval. He is also an honorary research fellow of Cardiff University (United Kingdom) and director of the GR3C, a group of researchers interested in collaborative work and team cognition. He has expertise in a wide range of cognitive human factors issues. He holds a PhD in psychology (1999, Cardiff University).

Date received: September 28, 2011 Date accepted: February 26, 2012
