
Demonstration and Integration of an Eye-Based Driver State Monitor in Automated Driving

Christopher D. Cabrall, Joost C.F. de Winter, R. Happee
Delft University of Technology / Intelligent Vehicles & Cognitive Robotics, Delft, the Netherlands
[email protected]; [email protected]; [email protected]

Figure 1. Christopher Cabrall leans in to adjust and explain the eye tracking calibration for a demo participant.

Abstract—At the 2016 IEEE International Conference on Systems, Man, and Cybernetics, we demonstrated the latest version of an integrated Driver State Monitoring (DSM) system under development at TU Delft (Figure 1). In our system, eye tracking measures driver states automatically to trigger warnings and/or transitions of control (ToC) between automated and manual driving. Gaze direction, gaze variability, and eyelid opening are taken separately as indices of driver aberrance corresponding to distraction, cognitive overload, and drowsiness, respectively. Given situational complexities (i.e., course and/or collision conflicts), an aberrant driver state triggers a ToC; otherwise, symptom-relevant resolution warnings can be produced. As cameras become increasingly commonplace within vehicles, our system helps inform future research and development towards improved human-computer interaction and driving safety.

I. INTRODUCTION

Non-intrusive physiological measurements such as eye tracking do not require the driver to keep hands on the wheel or feet on the pedals, and so may achieve utility across a wide spectrum of scenarios. Driver state monitoring (DSM) can help achieve goals of safety and driver acceptance by ensuring appropriate roles/responsibilities for both automated and human drivers within a full driving system. At the Delft University of Technology, we are involved in the development of an eye-based DSM as part of a Marie Curie International Training Network consortium project known as Human Factors of Automated Driving (HF Auto). In this article, we summarize and highlight several innovative values of our approach for the envisioned system, detail the current prototype instantiation, and conclude with upcoming follow-on research and development activities for this system.

II. SYSTEM INTEGRATION DESIGN VALUE

A. Situational complexity to direct warning vs. control transition

In our system design, an aberrant driver state alone is not enough to warrant a transition of control (ToC) from manual to automated driving. False detections of driver aberrance may, on the one hand, annoy or frustrate a ready and capable driver who has control taken away from him/her or, even worse on the other hand, preclude an actually in-the-loop driver from over-ruling automated control. Instead, our approach is to take into account the complexity of the current driving situation first, assess the driver state second, and from there determine whether to implement only a warning or a full ToC (note: this progressive consideration logic transpires in essentially the same instant, through sub-second computational processes). In terms of complexity, our system continuously assesses whether there are any collision conflicts (e.g., other traffic) or course conflicts (e.g., road curvature). If there are none, the situation is determined to be non-critical and hence warnings may suffice, as noncompliance by the driver does not result in imminent and certain danger (Figure 2a). However, in a critical situation, our system protects the driver through automated control rather than relying on a response to a warning (Figure 2b).
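A minimal sketch of this decision order follows (the deployed system is implemented as MATLAB-Simulink models, so the Python function and signal names below are illustrative assumptions):

def decide_action(collision_conflict, course_conflict, driver_aberrant):
    """Assess situational complexity first, driver state second."""
    critical = collision_conflict or course_conflict
    if critical and driver_aberrant:
        return "TRANSITION_OF_CONTROL"  # critical situation: automation takes over (Figure 2b)
    if driver_aberrant:
        return "WARNING"                # non-critical: a warning suffices (Figure 2a)
    return "NONE"                       # normal driver state: no intervention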

Figure 2. Pilot tests of our integrated system verified intended (non)transitions of control and complacency effects within on-demand imperfect partially automated driving systems that require continuous human supervision.

B. Symptom state based warning resolutions

Coordinating warnings to a specific underlying driver state, rather than to just any detected aberrance, follows established theory whereby driver performance degrades and rebounds differently depending on the underlying causal mechanisms, which in turn has implications for warning strategies in driver support systems [1]. For example, directional cueing messages like "Please, pay attention to the road" or "Please, check your surroundings" may be appropriate for location-based distraction/inattention and volume/area-based cognitively induced tunnel-vision states, respectively, as these states may be recuperated from more elastically, in a sharp and periodic manner. Contrastingly, drowsy drivers evidence more rhythmicity in the degradation and recovery of their performance, and simple suggestions to wake up or take a break (e.g., coffee cup messages) may not be as effective as providing persuasive information, such as the extent to which their drowsiness symptoms have progressed and the impact their driving performance is currently suffering or predicted to suffer. People often know they are tired and should not be driving but may feel they have no choice, so providing relevant information to influence their choice and decision making is expected to have greater benefit than chastising them for a state they cannot physiologically change in an immediate way.
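A state-to-message mapping in this spirit might look as follows (a sketch: the directional cues are quoted from the text above, while the drowsiness wording and the progress parameter are hypothetical examples of persuasive feedback):

def warning_for_state(state, drowsiness_progress=0.0):
    """Return a symptom-relevant resolution warning for a classified state."""
    if state == "distracted":   # location-based distraction/inattention
        return "Please, pay attention to the road."
    if state == "overloaded":   # cognitively induced tunnel vision
        return "Please, check your surroundings."
    if state == "drowsy":
        # Persuasive information on symptom progression rather than a
        # simple suggestion to wake up (hypothetical wording).
        return ("Drowsiness symptoms at {:.0%}: your driving performance "
                "is predicted to degrade. Consider taking a break."
                .format(drowsiness_progress))
    return ""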

C. Automated control transitions as safety back-up

Unfortunately, present-day on-road automated driving systems are not perfect and require human supervision (i.e., for challenging cases beyond current technological capabilities). While some of these challenging cases, such as decreased visibility (e.g., snow, rain, fog), are relatively known/predictable, others, such as reliability/robustness per billions of miles (e.g., outages both accidental and malicious), may not be determinable for years to come, beyond the limited scales of demonstrations and out into fleet-wide deployment. Moreover, in partially automated systems, consumers are susceptible to misattributions of driving authority/responsibility (e.g., over-reliance), for example, mistakenly construing a feature such as Tesla's "Auto-Pilot" as one in which the manufacturer allows the driver to remove their hands from the wheel and even their eyes from the driving (Figure 2c). A key benefit of the automated-control-as-back-up stance taken within our present system design is that automation which falls short of full autonomy is not precariously left to human supervisors who may become susceptible to degraded vigilance [2]. Instead, the automated driving functions as an active safety system rather than a convenience commodity, becoming active only on an as-needed basis (i.e., in critical situations with aberrant eyes) and disengaging when the situation no longer requires it (i.e., in non-critical situations or with eye-based affirmation of a normal driver state). Essentially, from the present safety position, while a normal non-compromised human driver far exceeds the range of operating conditions that a partially automated driver can cover with comparable levels of flexible, smooth, and robust performance, his/her hands, feet, and eyes should remain nominally in control (i.e., for the majority of the time). Furthermore, when that human driver periodically becomes compromised to a state of aberrance, it is in those infrequent/limited moments that the imperfect automated driving control provides benefit as a back-up system.

Figure 3. Depiction of two separate MATLAB-Simulink models which are run simultaneously on separate computers.

D. Modular developmental design

Another major value within our implemented and developing prototype system is its purposively modular structure. Proceeding with a modular design has afforded, and continues to afford, us a wide swath of customizable control, independent and incremental development, isolated enhancements to specific features per our differentiated collaborator skill sets, and advantages for debugging and troubleshooting. Lastly, we aim for our system to be readily abstracted, extended, and able to work with different eye trackers and driving sensor and control systems (whether simulated or actualized), owing to the modular stand-in nature of its component structures, presented in an intuitive layout.

In Figure 3a, the eye state classification model is presented from left to right. It receives UDP messages containing eye measurement data (here from the SmartEye DR120 eye tracker). These messages are unpacked (circled in red) by a function that isolates eyelid opening data and parses gaze angles into pixel coordinate locations on the stimulus screen. From there, three different aberrant driver eye states are classified in parallel (green rectangles). Next, an "OR" logic gate with run-time selectable inputs constructs a final assessment of normal or aberrant driver eyes, which is passed via UDP (blue circle) as a binary "1" or "0" signal to the driving control simulation computer.
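The following Python sketch condenses this receive-classify-send chain; the actual model is a Simulink diagram, and the packet layout, port numbers, screen geometry, and placeholder classifiers below are assumptions rather than the SmartEye DR120 network protocol:

import math
import socket
import struct

RX_PORT = 5001                    # from the eye tracker (assumed port)
TX_ADDR = ("192.168.0.2", 5002)   # driving control computer (assumed address)
SCREEN_PX = (1920, 1080)          # stimulus screen resolution (assumed)

def gaze_to_pixels(yaw, pitch, dist_mm=650.0, px_per_mm=3.5):
    """Parse gaze angles (rad) into pixel coordinates on the stimulus screen."""
    x = SCREEN_PX[0] / 2 + math.tan(yaw) * dist_mm * px_per_mm
    y = SCREEN_PX[1] / 2 - math.tan(pitch) * dist_mm * px_per_mm
    return x, y

# Placeholder per-sample classifiers; the windowed threshold versions are
# detailed in Section III.
def classify_distraction(x, y):
    return not (0 <= x < SCREEN_PX[0] and 0 <= y < SCREEN_PX[1])

def classify_drowsiness(eyelid_opening):
    return eyelid_opening <= 0.2

def classify_overload(x, y):
    return False  # requires a gaze-variability window (see Section III)

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("", RX_PORT))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    packet, _ = rx.recvfrom(1024)
    # Assumed payload: eyelid opening plus gaze yaw/pitch as three doubles.
    eyelid_opening, yaw, pitch = struct.unpack("<3d", packet[:24])
    gx, gy = gaze_to_pixels(yaw, pitch)
    # Three classifiers run in parallel; an OR gate merges their outputs
    # into one binary normal/aberrant assessment.
    aberrant = (classify_distraction(gx, gy)
                or classify_drowsiness(eyelid_opening)
                or classify_overload(gx, gy))
    tx.sendto(b"1" if aberrant else b"0", TX_ADDR)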

Figure 4. Overview of classified aberrant driver states with specific thresholds and time windows unique to each.

In Figure 3b, the networked driving control simulator processes are arranged in seven major block areas. Three of these (circled in red), representing vehicle state, automatic path following, and vehicle dynamics, were exported fairly directly from the menu-selectable items of the TASS International PreScan software used to run and house the driving simulation.

Next, we added a manual vehicle control block to manage joystick inputs (i.e., Logitech G27 Steering Wheel and Floor Pedals) (green rectangle on bottom). The first stages of the central run-time functional allocation switch arrays (blue diamond) were then implemented, allowing us to switch between automated and manual driving with a single button press, as well as to take into account or ignore the incoming driver eye aberrance data (blue circle).
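For illustration, a minimal analogue of such a joystick input block can be written with the pygame library (our actual block is implemented in Simulink; the axis indices below vary by device and driver and are assumptions):

import pygame

pygame.init()
pygame.joystick.init()
wheel = pygame.joystick.Joystick(0)   # first attached controller (e.g., a G27)
wheel.init()

def read_manual_controls():
    """Poll steering and pedal axes, normalized to [-1, 1]."""
    pygame.event.pump()
    steering = wheel.get_axis(0)      # assumed: axis 0 = steering wheel
    throttle = wheel.get_axis(1)      # assumed: axis 1 = accelerator pedal
    brake = wheel.get_axis(2)         # assumed: axis 2 = brake pedal
    return steering, throttle, brake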

Next, lateral and longitudinal sensors and controllers were added for course and collision conflicts, respectively (green rectangles on top). With the addition of a few more switches and logic gates to the functional allocation section (as well as data logging outputs, outlined under the black right-hand arrow), our prototype system was ready for its first public demonstrations.
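The behavior of these functional allocation switches can be paraphrased in a few lines of Python (a simplified sketch under assumed signal names; the actual implementation is a set of Simulink switch blocks):

def allocate_control(mode, button_pressed, heed_eye_signal,
                     eyes_aberrant, critical):
    """One cycle of the run-time functional allocation switch array."""
    if button_pressed:                   # single button press toggles the mode
        mode = "MANUAL" if mode == "AUTOMATED" else "AUTOMATED"
    if heed_eye_signal:                  # eye aberrance data heeded, not ignored
        if eyes_aberrant and critical:
            mode = "AUTOMATED"           # safety back-up takes over
        elif not eyes_aberrant or not critical:
            mode = "MANUAL"              # disengage when no longer required
    return mode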


III. PRESENT PROTOTYPE DETAILS

Essentially, in our system the eyes are used to communicate to the car that the driver is "in" or "out" of control (a.k.a. in-the-loop or out-of-the-loop, ready or not, normal or aberrant, etc.). More specific values and motivations are detailed in our IEEE SMC 2016 conference demo paper [3] and are only summarized herein. In Figure 4a, the driver has been looking away from the screen (i.e., the road) long enough to be denoted "distracted" and has additionally been measured to show symptoms of "overload" and "fatigue." In Figure 4b, the states of "overload" and "fatigue" have been resolved, but the state of "distraction" (via a built-in residual) requires about another 2 seconds of on-screen looking in order to register as "attentive."

State of Distraction/Inattention: 90 consecutive off-screen gaze samples (i.e., 1.5 seconds at 60 Hz) = "distracted"; 270 cumulative on-screen gaze samples (i.e., at least 4.5 seconds at 60 Hz) = "attentive."

State of Drowsiness/Fatigue: across a moving time window of the last 60 seconds, at least 80% of samples (i.e., 48 seconds) registered the eyelid opening as 80% closed.

State of Cognitive Overload/Mental Workload: across a moving time window of the last 120 seconds, the product of the gaze standard deviations (X * Y) is less than 15.
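These three classifications can be rendered in a compact Python sketch over streaming 60 Hz samples (illustrative only: the deployed versions are Simulink blocks, and the residual reset policy and the reading of "80% closed" as an eyelid opening of at most 0.2 are our assumptions):

from collections import deque
from statistics import pstdev

HZ = 60  # eye tracker sampling rate

class AberrantEyeClassifier:
    """Windowed threshold classifiers per Figure 4."""

    def __init__(self):
        self.off_run = 0                       # consecutive off-screen samples
        self.on_count = 0                      # cumulative on-screen samples
        self.distracted = False
        self.eyelid = deque(maxlen=60 * HZ)    # 60-s moving window
        self.gaze = deque(maxlen=120 * HZ)     # 120-s moving window

    def update(self, on_screen, eyelid_opening, gx, gy):
        # Distraction: 90 consecutive off-screen samples (1.5 s) -> "distracted";
        # 270 cumulative on-screen samples (4.5 s) -> back to "attentive".
        if on_screen:
            self.off_run = 0
            self.on_count += 1
            if self.on_count >= 270:
                self.distracted = False
        else:
            self.off_run += 1
            self.on_count = 0                  # reset policy assumed
            if self.off_run >= 90:
                self.distracted = True

        # Drowsiness/fatigue: eyelid at least 80% closed (opening <= 0.2,
        # interpretation assumed) for at least 80% of the last 60 s.
        self.eyelid.append(eyelid_opening)
        drowsy = (len(self.eyelid) == self.eyelid.maxlen and
                  sum(e <= 0.2 for e in self.eyelid) >= 0.8 * self.eyelid.maxlen)

        # Cognitive overload: product of gaze standard deviations (X * Y)
        # below 15 across the last 120 s (units as logged by the model).
        self.gaze.append((gx, gy))
        overloaded = False
        if len(self.gaze) == self.gaze.maxlen:
            overloaded = (pstdev([g[0] for g in self.gaze]) *
                          pstdev([g[1] for g in self.gaze])) < 15

        return self.distracted, drowsy, overloaded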


IV. UPCOMING NEXT STEPS

Regarding future progress for this work, a between-subjects controlled experiment is planned next with approximately 90 MSc students from a TU Delft Man-Machine Systems course, whereby each participant will be assigned to either a normal manual driving baseline, a full on-demand automation comparison case, or the present adaptive eye-based DSM automation-as-back-up condition. Results will be compared regarding driving performance and acceptance. Subsequently, on-road closed-course test track trials are being planned in conjunction with Jaguar Land Rover at the Gaydon Proving Grounds in Warwick, UK. Further developments to the human-machine interface design of the warnings, and particularly to the specific thresholding of the classified aberrant driver eye states (i.e., distraction, drowsiness, cognitive overload), are ongoing to validate and extend upon best practices identified in the literature.

ACKNOWLEDGMENT

The authors are involved in the Marie Curie Initial Training Network (ITN) HF Auto – Human Factors of Automated Driving (PITN-GA-2013-605817). The authors are indebted to the remaining HFAuto DSM Work Package members for their collaborative support: Joel Goncalves, Matt Sassman, and Alberto Morando, as well as to Nico Janssen for the great contributions from his MSc thesis research, by which this work is made possible. Additionally, sincere gratitude is owed to Barys Shyrokau for his many contributions of vehicular modeling improvements, and to our colleagues Jork Stapel, Zhenji Lu, and Daphne Rein-Weston for their diligence in repeated rounds of pilot testing.

ABOUT THE AUTHORS

Christopher D. Cabrall received the B.A. degree (summa cum laude) in Psychology and Linguistics with a minor in Computer Science from Northeastern University, Boston, MA, in 2007, and the M.Sc. degree (summa cum laude) in Human Factors and Ergonomics from San Jose State University, San Jose, CA, in 2010. Since 2014, he has been working toward the Ph.D. degree at Delft University of Technology, where, within the HFAuto project, he investigates issues of human-automation interaction, driver vigilance, and mode awareness.

Joost C. F. de Winter received the M.Sc. degree in Aerospace Engineering and the Ph.D. degree (cum laude) from Delft University of Technology, Delft, The Netherlands, in 2004 and 2009, respectively. He is currently an Associate Professor with the Mechanical Engineering Department, Delft University of Technology. His research interests focus on human factors and statistical modeling, including the study of individual differences, driver behavior modeling, multivariate statistics, and research methodology.

Riender Happee received the M.Sc. degree in Mechanical Engineering and the Ph.D. degree from Delft University of Technology in 1986 and 1992, respectively. He is an Assistant Professor and Program Manager Automotive at Delft University of Technology, with research areas of automated driving, automotive human factors, driver modeling, and neuromuscular control and biomechanics.

REFERENCES

[1] P. Hancock, "Driven to distraction and back again," in Driver Distraction and Inattention: Advances in Research and Countermeasures, vol. 1, M. Regan, J. Lee, and T. Victor, Eds. Burlington, VT, USA: SAE International/Ashgate Publishing, 2013, ch. 2.
[2] C. Cabrall, R. Happee, and J. C. F. de Winter, "From Mackworth's clock to the open road: A literature review on driver vigilance operationalization," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 40, pp. 169-189, 2016.
[3] C. Cabrall, N. Janssen, J. Goncalves, A. Morando, M. Sassman, and J. C. F. de Winter, "Eye-based driver state monitor of distraction, drowsiness, and cognitive load for transitions of control in automated driving," in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Budapest, Hungary, 2016, Demo Paper no. 1548.