System State Awareness: A Human Centered Design Approach to Awareness in a Complex World

Nicholas Kasdaglis, Human Centered Design Institute, Florida Institute of Technology
Olivia Newton, University of Central Florida
Shan Lakhmani, University of Central Florida

Proceedings of the Human Factors and Ergonomics Society 58th Annual Meeting - 2014
Copyright 2014 Human Factors and Ergonomics Society. DOI 10.1177/1541931214581063

Situation Awareness is a popular concept used to assess human agents' understanding of a system and to explain errors that occur because of poor understanding. However, the popular conception of situation awareness retains assumptions better suited for linear, controlled systems. When assessing complex systems, rife with non-linear, emergent behaviors, current models of situation awareness frequently place much of the burden of system failure onto the human agent. We contend that the traditional concept of a fully controlled system is not the best fit for a complex system with networked loci of control, especially during abnormal system states. Instead, we recommend an approach that focuses on agents' adaptation to environmental cues. We discuss how the concept of situation awareness, when enmeshed in the assumption of linearity, insufficiently deals with extended cognition, reliability, adaptation, and system stability. We conclude that an approach focusing instead on System State Awareness (SSA) facilitates the adaptation of system goals during off-normal system states. Thus, SSA provides the theoretical underpinning for the design of distributed networked systems that improve human performance in complex environments.

Introduction

Since the late 20th century, research, industry, and regulatory bodies have used Reason's (1995) Swiss cheese model as the archetype for accident analysis and safety efforts in life-critical domains of complex socio-technical systems. Thus, the common ethos persists that there exists an intangible accident chain that can be followed backwards to the source of an accident; this has motivated investigators to seek out, punish, prevent, mitigate, block, correct, and trap the root cause of the accident. Current models of safety system management are still mired in linear methods and models rooted in early 20th century manufacturing safety science (Heinrich, 1931). Thus, accident reports consistently trace the aetiology of an accident to the human's faulty information processing, that is, to a loss of Situation Awareness (SA). Satisfied that the imperfect human is once again the cause of misery, investigators stop the inquiry at this point and begin to insist that human agents in similar situations maintain SA better in the future. Yet, perhaps when an accident occurs, it is not a failure of the human to hold fast to an impalpable information-processing construct, but rather that the construct lacks the robustness to be embodied in the real world of complex systems.

Analyses of undesired outcomes in systems often focus on human error in order to identify and mitigate aberrant mental processes by reducing human variability. This approach can be described as a systems approach to the extent that it seeks to design and erect barriers between the human and the committing of an error (Reason, 2000). Here, system complexity, the number of possible states that the system can take, requires "human error mitigation" to reduce the complexity to the extent that a human agent is predictable and that the system can actually be controlled (Hollnagel, 2011). So to prevent errors, the number of potential system states must be reduced so that the system can only be used in its most predictable way (Flach, 2012). Yet, in many domains complexity may be so high that complete control is unattainable. Further, attempts at control can actually add to the variability of the system because control must be distributed throughout networks of controllers (Flach, 2012).
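
To make concrete why reducing the number of potential system states is so demanding, consider a rough, hypothetical count (an illustration, not a figure from the works cited above): a system of n independent components, each of which can occupy k states, admits up to

\[ N_{\text{states}} = k^{\,n}, \qquad \text{e.g., } 2^{20} = 1{,}048{,}576 \]

joint states for just twenty two-state components, so enumerating and constraining every reachable state quickly becomes impractical.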


It is obvious that the aforementioned approaches to reducing error by reducing system complexity may be of little avail when applied to complex systems.

Conversely, a Human Centered Approach identifies the seminal design of a system as the area of concern; in fact, the requisite acknowledgement of human variability in a system creates an opportunity for the purposeful design of the interaction between distributed cognitive agents for control in a different sense. That is, the Human Centered Approach moves away from a reduction-oriented approach that seeks to isolate error and instead, from a reliability perspective, focuses on the pursuit of goals through adaptation to changing system states. A Human Centered Approach makes explicit the system to be controlled, and as such requires a revision to our understanding of the awareness of a cognitive agent in a complex system. By expanding our view from a control-oriented perspective, which constrains awareness to individual parameters, to an environmental view, looking at composite cues to identify current system states, we prepare agents to contend with abnormal situations in complex systems.

Unfortunately, a linear perspective is assumed for most of the concepts used to describe the human-system relationship. A widely applied concept is SA, an individual's awareness of what is happening around them and what that information means to them now and in the future (Endsley, Bolte, & Jones, 2003, p. 15). Although used successfully in multiple domains (e.g., medicine, aviation, and nuclear power), this conception of awareness approaches a complex environment using an information-processing-oriented feedback loop, which has a number of blind spots with regard to complex environments. First, SA fails to fully appreciate extended cognition. Frequently in complex systems, SA focuses on individual users making and communicating predictions about the system, when, in fact, these systems need networks of human agents whose behavior is predicated on environmental cues in the system. Second, SA focuses on error when it should be focusing on reliability. An error focus presumes total control of system states, which is frequently not possible in complex systems. Third, SA is not explicit in facilitating
adaptation of system goals in the presence of multivariate inputs. SA suggests that humans predict future system states based on the perception of the variables in the environment, but prediction is difficult enough with single variables; prediction of future states in a system with multiple interacting variables is even more daunting. Finally, SA does not focus on the overall stability of a system. Given these issues, operators and stakeholders of life-critical systems need a better way to be dynamically aware of complex system operations, especially during abnormal states. Yet, to say that SA is insufficient without qualification would be inaccurate. Rather, we claim that SA remains ill-defined and fails to have a focal target, and as such, it has significant limitations in its application to today's complex systems (Dekker & Hollnagel, 2004; Sarter & Woods, 1991). Although SA has dominated the Human Factors literature for the last 25 years, inquiry has mainly centered on SA as a state of the human agent. With few exceptions, little attention has been paid to the target system, joint cognition between human and machine, and human cognition in complex systems. In sum, SA, as articulated by Endsley (1995), requires mental simulation of future dynamics, which is formidable in linear environments and nearly untenable in a complex system in the presence of nonlinearities.

Complexity

In recent years, the study of complex systems has captured the attention of researchers in a number of disciplines: biology, ecology, chemistry, physics, and economics (Arthur, 1999; Bonchev & Rouvray, 2005; Sherrington, 2009). The science of complexity departs from the reductionist approach that has dominated modern scientific inquiry. Instead, at the core of complexity science rests the proposition that the whole does not equal the sum of its parts. As documented by Hollnagel (2012), the term complexity has resisted a consensus definition. Therefore, in this paper we limit our description of complexity to three major attributes: emergent properties, interactions among many system components, and non-linear behaviors. Kim (1999) concurs: as complex systems organize, "they begin to exhibit novel properties that in some sense transcend the properties of their constituent parts, and behave in ways that cannot be predicted on the basis of the laws governing simpler systems" (p. 3).

Due to these properties, accidents in complex systems are very different from the industrial accidents of the early 20th century (Leveson, 2011). With nuclear power reactors in mind, Perrow (1984) warned that interactive complexity and tight coupling result in a predictable outcome: a "normal accident". Leveson echoed that complexity in technology creates a situation where "we are attempting to build systems that are beyond our ability to intellectually manage" (Leveson, 2004, p. 2). A component failure can cause reverberations through a system, causing unpredictable outcomes (Kasdaglis, 2012). The three properties of a complex system – emergence, interactivity, and non-linearity – make this so. These properties are significantly related to the present work. Given the emergent behaviors resulting from system interaction, system states can only be viewed from a macro level; individual parameters may not offer an indication of the
past, present, or future system states. SA, as it is conceptualized in Endsley's (1995) model, attempts to stop the world, parse it, and view its interaction with the human agent as an orderly process. Parsing this interaction in terms of human information processing, from perception to comprehension to action and back to perception again, does not sufficiently account for the attributes of complexity and is thus limited in today's multi-agent world, where technology and human agents share collaborative responsibilities in highly networked systems.

Situation Awareness

Understanding a complex system is predicated on an awareness of the system as a whole. Therefore, a mismatch between the state of the system and the user's understanding of the state of the system can result in undesirable system states and is considered a failure of the agent's SA. There are multiple models of SA available, but the most popular is Endsley's (1995) model, which describes SA as the human agent's dynamic understanding of a system or process (Salmon et al., 2009). Endsley's model is very much an information processing model, divided into three cyclical levels: perception of relevant elements in the environment, comprehension of those elements' meaning in regard to one's goals, and finally the projection of future states (Endsley, 1995; Salmon et al., 2009). This model of SA, however, has two overall issues that may lower its effectiveness as a tool for analyzing the relationships between agents and complex systems, especially in abnormal situations. First, the base model itself is primarily a linear feedback loop, centered on its three levels of awareness and on mental simulation of future dynamics, which becomes more difficult as the complexity of the system increases (Endsley, 1995; Endsley, Bolte, & Jones, 2003). Second, the model is descriptive, while a more prescriptive model may be more useful during an emergency (Zhang & Hill, 2000); as previously stated, telling system users to 'be more aware' is not helpful during a crisis.

Complex systems can transition from a simple error state to a catastrophic state due to cascading errors rapidly driving the situation out of control (Woods & Branlat, 2010). Looking to prevent the 'root cause' based upon perception and comprehension of the individual parameters that constitute an abnormal system state is a limited approach to dealing with a catastrophic system state shift in an environment of dynamic complexity: variety must be matched with variety, so linearity cannot contend with nonlinearity (Ashby, 1956). Trying to track all of the separate variables in a complex system can easily exceed the agent's attentional capacity, so the focus must be shifted to a model more conducive to the recognition of patterns in the system's environment (Endsley, 1995).
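
Ashby's point is often summarized in information-theoretic terms. As a hedged illustration (this entropy form is standard in the cybernetics literature, though the paper does not state it explicitly), the variety remaining in system outcomes O cannot be driven below the variety of disturbances D less the variety available to the regulator R:

\[ H(O) \;\geq\; H(D) - H(R) \]

A controller, human or automated, whose repertoire of responses is narrower than the variety of the disturbances it faces therefore cannot keep outcomes within acceptable bounds, no matter how diligently it attends to individual parameters.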

We may be better served, in this abnormal situation of catastrophic failure, by focusing on a more environmentally oriented view of the situation. Smith and Hancock's (1994) model describes SA as an adaptive, externally directed consciousness which receives behavioral goals from an ever-changing external environment. In this model, knowledge directs an agent's activity in the environment, which changes the environment and the
information available therein, which in turn affects the knowledge available. Agents in a system undergo a process known as adaptation, where agents use their knowledge and behavior to attain their goals, which are imposed by the task environment. This shift away from focusing on one aspect of the system, be it human agent or technical system, and towards a more integrated view of the human-agent relationship allows for a more dynamic understanding of the environment, which can be useful in systems analysis. In this model, goals are predicated on patterns of environmental cues, with SA being the product of awareness (Salmon et al., 2009). The environment provides cues to the users, who must display the capability to achieve the environment-established goals. This ecological approach, focusing more on environmental cues, works well for analyzing complex systems and is a step in the right direction.

In a complex system, there is significant interactive complexity, which can result in cascading and system-wide dysfunction. Users of these complex systems must keep themselves aware of these numerous variables, but under Endsley's (1995) model, users are compelled to pay attention to and predict the activities of each of the variables in a system individually. Moreover, discerning what those variables of interest are, and the result of their interactions, is best suited to post-hoc analysis. Furthermore, while such failures can be dealt with neatly post-hoc, an analysis of all the antecedent factors and complexity involved is "impossible" (Sutcliffe, 2005, p. 423). However, in a complex system there are many variables with even more potential interactions, so, during 'run-time,' reading these interactions and applying the appropriate protocol can be an unyielding endeavor. Most importantly, in many life-critical systems, this post-hoc analysis does not help the people involved in a crisis or error state. Rather, during a catastrophic system state, a model that elicits actionable goals established by current environmental variables seems ideal. For example, the domain of aviation has many procedures for possible events, but one can never provide procedures for all error states. So, users have to be ready to adapt to radical system state shifts, where variables that have been established as inviolate can in fact be violated. In the end, when the system state changes in a way unpredicted by designers, users of a system have to respond flexibly. This process, focusing on environmental cues and data, is referenced in the Catastrophic Information Entropy Theory (CIET) (Kasdaglis, 2012). In order to respond to a catastrophic system state successfully, users must recognize that their environment has undergone a radical state shift, they must anticipate resultant shifts in the environment, and they must adapt to this environmental change. These three core activities of abnormal state management provide the ingredients of the requisite awareness of a system state necessary to operate in complex environments.

Extended Cognition

Cognitive studies have experienced a recent shift in how to appropriately conceptualize cognition. From embodied, to distributed, to extended cognition, the proposed views share a common premise: cognition is not limited to the agent; rather, it is spread out amongst other agents and their environment (Anderson, Richardson, & Chemero, 2012). In
these views of cognition, the environment receives a great deal of attention in terms of its role in cognition, both in how it affects cognition and, more recently, in how it helps to instigate cognition. As these views of cognition gain popularity, a number of implications arise when considering how research should be carried out. In particular, from an interaction-dominant dynamics view, we must consider how accurate and efficient our cognitive and behavioral research can be (e.g., in attempting to create realistic and natural behavior) if the environment has yet to receive the credit it is due; the agent is intricately embedded within the environment (Anderson, Richardson, & Chemero, 2012). Appropriate consideration of the environment and other agents goes beyond theoretical discussions of cognition, as there is a pragmatic need to move focus away from an individual agent's cognition in order to make explicit the practical application of this information.

Silberstein and Chemero (2012) regard certain cognitive systems as falling into the category of a nonlinear dynamic system; the brain, the body, the environment, and other agents are intricately and inextricably linked. From their perspective, a better view of cognitive systems is gained by using a model that represents the varying aggregate system states which occur "over time" instead of interpreting the manner in which environmental variables alter the agent's cognition and behavior. Conceptualizing extended cognition in this manner seems to support our view that SA is insufficient in accounting for the complexity of a dynamic system due to the inability to capture the entirety of the system behavior. SA requires taking apart the system and bringing focus to an individual agent, when, in reality, this view does not properly explain the functioning of the system, nor does it provide a means from which to seek out solutions to problems associated with complexity. By understanding this non-linear dynamic system as coupled, we can better understand its functioning and, in turn, better deal with the complexity of dynamic systems. In other words, the complex dynamic system we are attempting to account for is formed from an interaction between multiple agents and the environment. SA is a component of a single agent's individual cognition, and as such, it does not encompass all the cognition which transpires within the system. Varying views of SA suggest that a system occurs within the environment, but this is not a proper consideration of the interaction-dominant dynamics of a system (Endsley, 1995; Smith & Hancock, 1994). The environment is an aspect of the system; that is, the system is in part formed from the environment.

Reliability

As previously asserted, analysis of complex systems needs to shift from a model of attempting to reduce complexity in a system to prevent error to a more robust model in which human agents use environmental cues to anticipate changes in system state and to adapt to them as necessary. Boy (2011) describes approaches to human reliability in complex systems: one oft-used approach contends that humans are inherently prone to error, and therefore barriers and defenses in depth should be constructed to stop those errors from propagating into a system failure. As such,
SA is described from a localized phenomenological perspective. The human, though error-prone in the management of a complex system, is expected to be master of their cognitive processes and states of experience. Therefore, the human is implored to work and train harder, and they are supported to maximize their capability. Subsequently, if failure occurs, the human will be reminded that they have lost SA and must learn to hold on to it in the future. However, the human agent is set up to fail in this situation. A complex system can easily become too complex for a single operator to hold the entire picture in mind, let alone predict future states. There are too many possible error states.

If we switch from discussing error prevention, which presumes both total control of the situation and matched prototypical situations available to the agent, and move to a model of reliability, which depends on dynamic adaptation focused on pattern recognition of the features of the system state, the situation is more easily handled. Using patterns of environmental cues in a system, agents are able to more easily learn the results of complex interactions in a system and categorize the situation based on experience with those patterns (Flach, 2012; Klein et al., 2003). This macrocognitive approach better suits systems with networked loci of control. In the original conception of SA, the human remains the focal point for the system, and the problem of an information gap is dealt with through communication training and better system design. In a system focused on adaptation, each human agent is not required to deal with such a large influx of information. Instead, each agent can focus on the details of their specific area of the system and maintain knowledge of the larger system through pattern recognition of environmental cues; here, cognition resides in all the agents and in the emergent intelligence of their interactions (Hayek, 1945; Klein et al., 2003; Mitchell, 2009).

Stability, Control and System State Awareness

We contend that System State Awareness (SSA) is the awareness that is requisite for complex socio-technological systems. Complexity requires an awareness that is beyond the locus of a phenomenological operator: an awareness that is distributed across multiple agents, with various mechanisms of information processing, operating in parallel, and focused upon the target system, all for the purpose of adaptive responses. SSA elicits system response for optimal functioning in a multivariate context, and therefore offers human and artificial agents information for explicit system action based upon the requisite contextual knowledge of what is true in the world. In this sense, because we understand SA to primarily describe cognition in the individual human, SSA is better suited to capture joint cognition for action in a multi-agent system. Thus, the presentation of SSA information will provide the venue for adaptive responses to undesirable system states. We assert this claim based upon our understanding that an accident is an undesirable system state in which agents lack the requisite knowledge to maintain or recapture system stability. This lack of information utilization can instigate a breakdown of system stability (Kasdaglis, 2012; Kasdaglis & Oppold, in press) when an initial system perturbation
encourages a system to seek a new, more chaotic, equilibrium. However, recovery from this condition differs from recovery in a linear system, as a nonlinear system may have several optimum states. Therefore, specific elements of state information must be made explicit based upon the dynamic optimum state. Thus, comprehension, the core activity that is integral to Endsley's (1995) SA model, finds the human addled in the unfamiliar environment of nonlinearity and emergence. Helbing and Lammer (2008) describe the strange world of complexity: "when complex systems are driven far from equilibrium, non-linearities dominate, which can cause many kinds of strange and counterintuitive behaviors" (p. 1).

CIET contends that the mediator of dynamic stability is the quantity of information in the system (Kasdaglis, 2012; Hayek, 1945). Therefore, consistent with Shannon's Information Entropy (Shannon, Weaver, Blahut, & Hajek, 1949), Kasdaglis and Oppold (in press) maintain that rapid shifts in system states occur when there is a dearth of information. Kasdaglis (2012), in proposing that catastrophic system state changes occur in the vicinity of information entropy, offers that measurements of information exchange and use make it possible to identify when a system is approaching those higher states of entropy (Corning, 2007; Shannon, Weaver, Blahut, & Hajek, 1949). Such a prospect requires compilation of all information present in a system to form a representation of the overall system state and a recognition of the corresponding topography that is created by that information; with knowledge of the system state, stability and recovery are made possible.
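
As a rough illustration of the kind of entropy measurement this argument appeals to, the sketch below computes the Shannon entropy of a window of discretized system-state cues and raises a flag when it nears the theoretical maximum. It is a minimal, hypothetical example; the cue categories, observation window, and threshold are assumptions for illustration and are not prescribed by CIET.

# Illustrative sketch only: CIET (Kasdaglis, 2012) does not prescribe this code.
# The cue categories, observation window, and alert threshold are hypothetical.
import math
from collections import Counter
from typing import Iterable

def shannon_entropy(observations: Iterable[str]) -> float:
    """Shannon entropy H = -sum(p_i * log2(p_i)) of an empirical distribution."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_alert(window: list[str], n_categories: int, fraction: float = 0.8) -> bool:
    """Flag when observed entropy nears its maximum, log2(n_categories); that is,
    when the recent stream of state cues carries little exploitable structure."""
    return shannon_entropy(window) >= fraction * math.log2(n_categories)

# Example: a window of recent, discretized system-state cues (hypothetical labels).
window = ["nominal", "nominal", "alt_deviation", "mode_change", "nominal",
          "alt_deviation", "mode_change", "airspeed_low", "nominal", "airspeed_low"]
print(round(shannon_entropy(window), 3), entropy_alert(window, n_categories=4))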

Thus, we argue here that SSA elicits, specifies, and represents the knowledge necessary for effective agency within a complex socio-technological system. SSA is composed of three core activities.

Recognition of a system state is the first core activity of SSA, as identification of what is "true in the world" is of paramount and pragmatic importance in life-critical environments. We believe that in a distributed cognitive network, what is "true in the mind" of the individual does not always have to be precise, as the metacognition of the system, from which the optimal behavior will proceed, is of central importance. Arguably, SA is a phenomenological construct that primarily describes what is true in the mind and is only secondarily interested in what is true in the world; the ambiguous treatment of the term "environment," in comparison to the precise mapping of mental processing functions present in the archetype model of SA, confirms this subjectivity (Endsley, 1995).

Anticipation of corresponding state behaviors is the second core activity of SSA. Systems that are seeking equilibrium are unstable and may transition to other states. This transition can be anticipated as a function of system maturity and autonomy (Boy, 2013). Maturity is important: although system states cannot be predicted by examination of the parameters, the overall behaviors and properties of a system state can be anticipated through data collection, modeling, and human-in-the-loop (HITL) simulation. Furthermore, the end state of the system, although populated by random and discrete variables,
is still deterministic. Thus, for the given variables, there exists a state to which the system will come to rest.

Adaptation to a desirable state is the third core activity of SSA. Cause and effect are apparent and tangible to human agents who control linear systems; they are less clear in nonlinear systems. With linear systems, control is required for safety and predictability. Although today's systems often possess non-linear properties and behaviors, traditional concepts of control persist. Control, requiring feed-forward imagination of what can go wrong, falters when such imagination is pitted against innumerable contexts and events. Alternative approaches to system control and management of risk have been pursued. Leveson and Dulac (2005) advocated defining system constraints to serve as boundaries for safe system operation; Hollnagel (2006) advocated making systems more resilient. As a consequence, human agency has been increasingly coupled with technology for poly-centric control, where cognitive activities are jointly pursued across a distributed network of technology, organizations, and people (Boy, 2013; Woods & Branlat, 2010). With this advent, boundaries of authority, responsibility, and influence upon system health have been decentralized to multiple actors. Truly, Ashby's (1958) doctrine has begun to materialize. Given this variety, feed-forward control methods are less than adequate with the increased dimensionality and interactivity of actors in a system. Therefore, rigid adherence to system goals and corresponding prescribed control actions are bound to fail; action in accordance with system feedback is necessary. Primarily, this includes pointed discernment of changes to specific system variables that affect system functionality and stability. Adaptation requires feedback of the immediate system state; adaptation, rather than control, is the basis for mastery of a system in complexity (Woods & Shattuck, 2000). Today's systems require adaptation of system goals for reliable operations. Thus, the concept of SSA facilitates identification of the specific system variables that necessitate adaptation of system goals.
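
To ground the three core activities in something concrete, the sketch below arranges recognition, anticipation, and adaptation as a simple feedback cycle over environmental cues. It is a hypothetical scaffold for discussion rather than an implementation specified in this paper; the cue encoding, state labels, and goal-revision rule are assumptions.

# Hypothetical scaffold only: the paper does not specify this loop or these names.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SystemStateAwareness:
    """Minimal recognize -> anticipate -> adapt cycle driven by environmental cues."""
    recognize: Callable[[dict], str]          # map composite cues to a named system state
    anticipate: Callable[[str], list[str]]    # states the current state may transition to
    adapt: Callable[[str, list[str]], str]    # revise the active goal for the recognized state
    goal: str = "maintain nominal operation"
    history: list[str] = field(default_factory=list)

    def step(self, cues: dict) -> str:
        state = self.recognize(cues)                # recognition: what is "true in the world"
        likely_next = self.anticipate(state)        # anticipation of corresponding behaviors
        self.goal = self.adapt(state, likely_next)  # adaptation toward a desirable state
        self.history.append(state)
        return self.goal

# Toy usage with hypothetical cues, states, and goals.
ssa = SystemStateAwareness(
    recognize=lambda cues: "upset" if cues.get("rate_of_change", 0) > 1.0 else "nominal",
    anticipate=lambda s: ["departure_from_envelope"] if s == "upset" else ["nominal"],
    adapt=lambda s, nxt: "stabilize and shed secondary tasks" if s == "upset"
    else "maintain nominal operation",
)
print(ssa.step({"rate_of_change": 1.7}))  # -> stabilize and shed secondary tasks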

Concluding Remarks

Within this paper, we have outlined the ways in which SA is insufficiently robust for effective agency in complex systems. The complexity of non-linear dynamic systems calls for a new perspective, such as CIET, which accurately encompasses the functioning of these systems and allows for the development of efficient solutions to the inherent difficulties associated with complexity. By understanding the functioning of dynamic systems by means of system states, agents are better suited to recognize potential patterns, anticipate coming behaviors, and adapt accordingly. Implications extend to a variety of settings, such as NextGen, the modernization initiative of the U.S. National Airspace System, a system characterized by high levels of human-computer interaction and corresponding levels of complexity (Chiappe, Vu, & Strybel, 2012). We hope that the insight provided by a system state perspective will increase safety within dynamic systems and aid in the prevention of future accidents.

References

Anderson, M. L., Richardson, M. J., & Chemero, A. (2012). Eroding the boundaries of cognition: Implications of embodiment. Topics in Cognitive Science, 4(4), 717-730.
Arthur, W. B. (1999). Complexity and the economy. Science, 284(5411), 107-109.
Ashby, W. R. (1958). Requisite variety and its implications for the control of complex systems. Cybernetica, 1(2), 83-99.
Bonchev, D., & Rouvray, D. H. (2005). Complexity in chemistry, biology, and ecology. New York: Springer.
Boy, G. A. (2011). The handbook of human-machine interaction: A human-centered design approach. Farnham, Surrey, England; Burlington, VT: Ashgate.
Boy, G. A. (2013). Orchestrating situation awareness and authority in complex socio-technical systems. In Complex systems design & management (pp. 285-296). London: Springer.
Chiappe, D., Vu, K. L., & Strybel, T. (2012). Situation awareness in the NextGen air traffic management system. International Journal of Human-Computer Interaction, 28(2), 140-151.
Dekker, S., & Hollnagel, E. (2004). Human factors and folk models. Cognition, Technology & Work, 6(2), 79-86.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32-64.
Endsley, M. R., Bolté, B., & Jones, D. G. (2003). Designing for situation awareness: An approach to user-centered design. London: Taylor & Francis.
Endsley, M. R., & Jones, D. G. (2012). Designing for situation awareness: An approach to user-centered design. Boca Raton, FL: CRC Press.
Flach, J. M. (2012). Complexity: Learning to muddle through. Cognition, Technology & Work, 14, 187-197.
Hayek, F. A. (1945). The use of knowledge in society. American Economic Review, 4, 519-530.
Heinrich, H. (1931). Industrial accident prevention. New York: McGraw-Hill.
Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174-196.
Hollnagel, E. (2012). Coping with complexity: Past, present and future. Cognition, Technology & Work, 14(3), 199-205.
Kasdaglis, N. (2013). For the lack of information: Catastrophe Information Entropy Theory (CIET): A perspective on aircraft accident causation (Unpublished master's thesis). Florida Institute of Technology, Melbourne, FL.
Kasdaglis, N., & Oppold, P. (in press). Surprise, attraction, and propagation: An aircraft is no place for a catastrophe. Florida Institute of Technology, Melbourne, FL.
Kim, J. (1999). Making sense of emergence. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 95(1), 3-36.
Klein, G., Ross, K. G., Moon, B. M., Klein, D. E., Hoffman, R. R., & Hollnagel, E. (2003). Macrocognition. IEEE Intelligent Systems, 18(3), 81-85.
Leveson, N. J. (2011). Engineering a safer world: Systems thinking applied to safety. Cambridge, MA: The MIT Press.
Mitchell, M. (2009). Complexity: A guided tour. New York: Oxford University Press.
Reason, J. (1995). Understanding adverse events: Human factors. Quality in Health Care, 4(2), 80-89.
Reason, J. (2000). Education and debate: Human error: Models and management. British Medical Journal, 320(7237), 768.
Salmon, P. M., Stanton, N. A., Walker, G. H., Baber, C., Jenkins, D. P., McMaster, R., & Young, M. S. (2008). What is really going on? Review of situation awareness models for individuals and teams. Theoretical Issues in Ergonomics Science, 9(4), 297-323.
Sarter, N. B., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. The International Journal of Aviation Psychology, 1(1), 45-57.
Sherrington, D. (2009). Physics and complexity. Philosophical Transactions of the Royal Society A, 368(1914), 1175-1189.
Silberstein, M., & Chemero, A. (2012). Complexity and extended phenomenological-cognitive systems. Topics in Cognitive Science, 4(1), 35-50.
Smith, K., & Hancock, P. A. (1994). Situation awareness is adaptive, externally-directed consciousness. In R. D. Gilson, S. J. Garland, & J. M. Koonce (Eds.), Situational awareness in complex systems. Daytona Beach, FL: Embry-Riddle Aeronautical University Press.
Zhang, W., & Hill, R. W., Jr. (2000). A template-based and pattern-driven approach to situation awareness and assessment in virtual humans. In Proceedings of the Fourth International Conference on Autonomous Agents (pp. 116-123). ACM.
