2014 IEEE Intelligent Vehicles Symposium (IV) June 8-11, 2014. Dearborn, Michigan, USA

A System Design for Automotive Augmented Reality Using Stereo Night Vision

Amin Hosseini1, Daniel Bacara2 and Markus Lienkamp1

Abstract— The use of Head-Up Displays (HUDs) in automobiles to visualize various types of driving information on the windshield has increased rapidly in recent years. However, these HUDs display graphics on only a specific area of the windshield. This paper introduces a new generation of automotive augmented reality systems for driver assistance at night, which detects the 3D positions of potential collision partners as well as the driver's eyes in order to display warning information at the exact position on the full-windshield (FWD) according to the driver's viewing direction. Furthermore, a generic method for the unified calibration of the various components of this system is proposed. The introduced concepts are validated on a prototype with an external stereo night vision system for obstacle detection, an interior mono night vision system for eye tracking, and different projection units.

Keywords: Automotive Augmented Reality, Stereo Night Vision, Eye Tracking, HUD, Full-Windshield Display

I. INTRODUCTION

Parallel to the fast development of advanced driver assistance systems (ADAS) in the recent decade, the need to develop more ergonomic systems for better interaction between drivers and vehicles has increased rapidly. Following the successful application of HUDs in the aeronautical field [1], these systems have been adopted by some car manufacturers to visualize various types of driving information on the windshield. Using HUDs, navigation information can be displayed on the windshield, and drivers do not need to take their eyes off the road to look at the vehicle's navigation display. Thus, drivers can pay more attention to traffic, and vehicle safety can be improved through better human-vehicle interaction. However, the HUDs applied in automobiles can project graphics onto only a specified, constant area of the windshield in front of the driver.

Despite the employment of HUDs for collision warning on the windshield, driving at night remains one of the riskiest driving situations. The majority of car accidents at night are caused by the inability to see collision partners such as vehicles, pedestrians and animals on the street.

Regarding the indicated limitations of collision warning using HUDs as well as of human vision at night, this paper introduces an innovative concept that uses the full-windshield as an augmented reality display to increase drivers' attention to potential collision partners at night. The proposed system detects the positions of collision partners as well as the driver's eye position and visualizes the warning graphics at the exact position on the full-windshield according to the driver's viewing direction. The visualized graphics interact with the driver, and their positions on the windshield are updated in line with the movement of the vehicle, the collision partners or the driver's eye position.

In order to visualize the graphics at the exact position on the windshield, the 3D positions of collision partners have to be calculated. Stereo camera systems are well-known sensors for obtaining a 3D model of the environment in daylight, but they are not suitable for conditions without sufficient light. To deal with low-light conditions, we have developed a passive stereo night vision system which captures the heat energy of objects such as pedestrians or animals located in front of the vehicle. In addition, a novel approach for object detection based on this vision system is introduced in this work. Since the proposed system is designed to work in low-light conditions, another passive night vision camera has been employed to detect the driver's eyes. Eye detection within the images captured by this camera requires dedicated methods, which are described in this work.

This paper is structured as follows. The next chapter introduces related work in the field of warning visualization in vehicles as well as stereo night vision. The third chapter describes the design of the proposed approach for the dynamic visualization of collision warnings on the full-windshield. Chapter IV proposes a method to calibrate all system components, including the stereo night vision based obstacle detection, the eye tracking and the full-windshield projection. Chapter V demonstrates a working prototype based on the proposed design and discusses the results. Chapter VI concludes the paper.

II. RELATED WORK

In order to improve visibility at night, some driver assistance systems like [2] capture the environment with a monocular night vision camera and visualize the captured images on the navigation display. Abel et al. [3] proposed a system which captures the environment with a monocular night vision camera and visualizes the images at a specified constant area on the windshield using a HUD.

Visualization of graphics on the vehicle full-windshield has been investigated in some recent works. Wu et al. [4] developed a prototype in which navigation information is visualized on the full-windshield.

1 Amin Hosseini and Prof. Markus Lienkamp are with the Institute of Automotive Technology, Technische Universität München, Munich, Germany. [email protected]; [email protected]

2 Daniel Bacara is with Signum Bildtechnik, Munich, Germany. [email protected]

978-1-4799-3637-3/14/$31.00 ©2014 IEEE

Driver's eye detection is not considered in that system, and therefore the visualized graphics do not interact with the position of the driver's head. In addition, the proposed projection concept needs a large free space between the projection unit and the windshield, which makes it unsuitable for implementation in vehicles.

For night vision, the methods presented in [5] and [6] extract objects based on their temperature, but offer no additional information about the distance to the objects or properties such as width and height, which could be used for further analysis to classify and extract valid objects. An example of such a critical case is shown in Fig. 1, where two overlapping persons are presented. With a conventional 2D approach, it would be difficult to perform a precise separation of these objects, but with stereo night vision their separation becomes possible.

Fig. 1. Overlapping pedestrians in a thermal image

Fig. 2. Stereo night vision: (a,b) Left and right raw images; (c,d) Results of foreground extraction using the global adaptive threshold approach; (e) Disparity map; (f) Object segmentation

An approach for stereo night vision is presented in [7], where pedestrians are detected using an adaptive threshold method together with a histogram segmentation, but only pedestrians with a certain aspect ratio are taken into account. Tönnis et al. [8] examined the effects of augmented reality visualization on the driver's attention. Furthermore, Seitz et al. [9] evaluated the impact of mental workload on the electrodermal activity of car drivers. These studies can further be used to evaluate the ergonomic and physiological effects of the collision warning system proposed in this work on car drivers.

III. SYSTEM DESCRIPTION

A. Obstacle Detection with Stereo Night Vision

This work proposes a new approach to night-time obstacle detection through the fusion of stereo vision with passive thermal cameras. On the one hand, the system can use the properties of thermal cameras to better segment the objects from the background based on their temperature; on the other hand, it can use the stereoscopic information to perceive the 3D information of all foreground objects. Even though the system is designed to work in night-time conditions, very good results can also be achieved in day-time lighting conditions, with the restriction that the ambient temperature of the environment should not be above 36 °C; otherwise, multiple invalid objects will have the human body temperature and will be considered as obstacles.

In order to automatically extract the foreground objects, a global adaptive threshold algorithm based on entropy maximization can be utilized. It relies on the assumption that the image histogram provides a clear separation between the foreground and the background pixels, which is the case here since the background objects have lower temperatures than the foreground objects. Following [10], the optimal threshold T is selected such that the sum of the entropies of the two resulting probability distributions is maximized:

T = \arg\max_{j} \left[ \log A_j - \frac{E_j}{A_j} + \log (A_n - A_j) - \frac{E_n - E_j}{A_n - A_j} \right]    (1)

where

E_j = \sum_{i=0}^{j} y_i \log y_i, \quad j \in [0, n]    (2)

and

A_j = \sum_{i=0}^{j} y_i, \quad j \in [0, n]    (3)

in which y_i represents the number of pixels with gray level i and n is the maximum gray level. Example results of foreground object extraction using the global adaptive threshold approach are illustrated in Figs. 2(c) and 2(d).
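For illustration, a minimal NumPy sketch of this threshold selection, following equations (1)-(3) directly, could look as follows; it assumes an 8-bit thermal image, and the function and variable names are our own:

```python
import numpy as np

def entropy_threshold(image: np.ndarray, n_levels: int = 256) -> int:
    """Sketch of a global adaptive threshold by entropy maximization, eqs. (1)-(3)."""
    y = np.bincount(image.ravel(), minlength=n_levels).astype(np.float64)
    y_log = y * np.log(np.where(y > 0, y, 1.0))   # y_i log y_i, with 0 log 0 := 0
    A = np.cumsum(y)        # A_j, eq. (3)
    E = np.cumsum(y_log)    # E_j, eq. (2)
    A_n, E_n = A[-1], E[-1]
    best_T, best_score = 0, -np.inf
    for j in range(n_levels - 1):
        if A[j] == 0.0 or A[j] == A_n:
            continue        # skip splits that leave one side empty
        score = (np.log(A[j]) - E[j] / A[j]
                 + np.log(A_n - A[j]) - (E_n - E[j]) / (A_n - A[j]))
        if score > best_score:
            best_score, best_T = score, j
    return best_T           # foreground = pixels with gray level above T
```

Foreground masks like those in Figs. 2(c) and 2(d) would then be obtained as `image > entropy_threshold(image)`.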

The image thus obtained is further processed to filter out objects that are collision-free, such as street lights, or other objects that could affect the final decision. The properties used to filter out these invalid objects are: width, height, center of gravity, aspect ratio and the depth value.

Furthermore, the stereo correspondence can be calculated using the sum of absolute differences (SAD) [11], expressed in the following equation:

SAD(u, v) = \sum_{u,v} \left| I_1(u, v) - I_2(u + d, v) \right|    (4)

where I_1, I_2 represent the stereo pair, u and v are the pixel coordinates, and d represents the disparity between the corresponding pixels of the stereo pair. The motivation lies in the fact that the SAD algorithm uses only simple mathematical operations to find the disparity value; it is therefore the best candidate for real-time processing.
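For illustration, a brute-force block matcher in the spirit of eq. (4) can be sketched as follows; window size, disparity range and names are illustrative, and the prototype itself relies on a real-time GPU implementation:

```python
import cv2
import numpy as np

def sad_disparity(left: np.ndarray, right: np.ndarray,
                  max_disp: int = 32, win: int = 9) -> np.ndarray:
    """Winner-takes-all SAD block matching on a rectified thermal image pair."""
    h, w = left.shape
    i1 = left.astype(np.float32)
    i2 = right.astype(np.float32)
    cost = np.full((max_disp, h, w), np.float32(1e9))
    for d in range(max_disp):
        diff = np.abs(i1[:, : w - d] - i2[:, d:])        # |I1(u,v) - I2(u+d,v)|
        # aggregate the absolute differences over a win x win window
        cost[d, :, : w - d] = cv2.boxFilter(diff, -1, (win, win), normalize=False)
    return np.argmin(cost, axis=0).astype(np.uint8)      # disparity per pixel
```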

After the computation of the disparity map, an adapted version of the segmentation algorithm presented in [12] can be implemented in real time on a GPU architecture. It applies a blob detection algorithm with pixel-level parallelism directly on the depth map. The distance to the objects is used directly in the segmentation process to segment the obstacles precisely, especially overlapping objects. The Euclidean distance can then be calculated through D = \sqrt{X^2 + Y^2 + Z^2}, where D is the distance to the object and X, Y, Z are the coordinates of the candidate object.

The real-time objective is achieved by implementing the algorithm in parallel on the GPU. It uses the index of the thread units (tID) to associate with every pixel a label that is used afterwards in the scanning procedure. Among the eight neighbors of a pixel, the minimum value (label) is searched and saved. The found minimum label is then analyzed and checked as to whether it belongs to the tID the candidate pixel is associated with; otherwise, the default label (tID) is searched further using the current label as an address. The algorithm stops when a scan no longer modifies the central pixel value.
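This scanning scheme can be mimicked on the CPU for clarity: every valid depth pixel starts with a unique label (the thread index tID on the GPU) and repeatedly adopts the minimum label among its depth-consistent eight neighbors until no label changes. The depth tolerance and all names below are our own; the actual system runs this per pixel on the GPU via CUDA.

```python
import numpy as np

def segment_depth(depth: np.ndarray, valid: np.ndarray, tol: float = 0.5) -> np.ndarray:
    """Min-label propagation on the depth map; converges to one label per blob."""
    h, w = depth.shape
    INF = np.iinfo(np.int64).max
    labels = np.where(valid, np.arange(h * w, dtype=np.int64).reshape(h, w), INF)
    dpad = np.pad(depth, 1, constant_values=np.inf)
    while True:
        lpad = np.pad(labels, 1, constant_values=INF)
        candidates = [labels]
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                if dy == 1 and dx == 1:
                    continue
                nl = lpad[dy:dy + h, dx:dx + w].copy()
                # only connect neighbors of similar depth, so overlapping
                # objects at different distances remain separated
                nl[np.abs(dpad[dy:dy + h, dx:dx + w] - depth) > tol] = INF
                candidates.append(nl)
        new = np.min(np.stack(candidates), axis=0)
        new[~valid] = INF
        if np.array_equal(new, labels):
            return labels
        labels = new
```

Here `valid` would combine the foreground mask from the adaptive threshold with a valid-disparity check; each surviving label then yields an object whose width, height, centroid and depth feed the filtering and classification stages.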

In order to classify the candidates into different groups (animals, pedestrians, vehicles), a database with common properties such as height, width, area, perimeter, aspect ratio and shape has been established and utilized. The comparison is done in the classification module, where for each object a label with the probability of belonging to a group is provided. Ultimately, all detected objects are sent to the augmented reality module to be projected onto the windshield. A classification result using the proposed approach is illustrated in Fig. 3, in which the types of the detected objects and their features are extracted.

Fig. 3. A classification result

B. Driver's Eye Detection with Night Vision Camera

In order to detect the driver's eyes in thermal images, two different approaches can be used, depending on whether the driver is wearing eyeglasses. In both cases, a pre-processing step should be performed to extract the foreground from the original image. This can be achieved using the global adaptive threshold algorithm presented in the previous section. Here the target is not only to segment the body from the background, but also to segment the face from the body. The results of the foreground extraction method are displayed in Fig. 4.

Fig. 4. Raw image from a passive infrared camera (left); Result of global adaptive threshold to extract the foreground environment (right)

For face detection, the geometrical shape of the face can be utilized. Here the edges of the face are extracted and enhanced, and afterwards the rectangle of the face is detected using the vertical and horizontal histograms, see Fig. 5. The coordinates on the horizontal axis can be determined using a threshold value to eliminate situations in which noise pixels appear around the face. In order to also obtain the coordinates of the face region on the vertical axis, the same approach can be used, but a special algorithm to separate the face from the rest of the body (i.e. the neck) needs to be developed. The top coordinate can be determined from the vertical histogram using a threshold to eliminate discontinuities created by the driver's hair, and the bottom coordinate of the face region can be calculated using the developed algorithm to detect the local minimum corresponding to the transition from the face to the neck region.

Fig. 5. Horizontal and vertical histograms for detecting the face ROI
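For illustration, the histogram-projection step can be sketched as follows; the thresholds and the simplified search for the face-to-neck minimum are our own crude stand-ins for the algorithm described above.

```python
import numpy as np

def face_roi(fg: np.ndarray, col_thresh: int = 5, row_thresh: int = 5):
    """Face bounding box from histogram projections of the binary foreground."""
    cols = np.count_nonzero(fg, axis=0)        # horizontal histogram
    rows = np.count_nonzero(fg, axis=1)        # vertical histogram
    xs = np.flatnonzero(cols > col_thresh)     # columns above the noise threshold
    x0, x1 = int(xs[0]), int(xs[-1])
    ys = np.flatnonzero(rows > row_thresh)
    y0 = int(ys[0])                            # top of the face (hair gaps skipped)
    # face-to-neck transition: local minimum of the row histogram in the
    # middle band below the top coordinate (simplified stand-in)
    span = int(ys[-1]) - y0
    lo = y0 + span // 3
    hi = max(lo + 1, y0 + (2 * span) // 3)
    y1 = lo + int(np.argmin(rows[lo:hi]))
    return x0, y0, x1, y1
```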

On the detected face region, the most common eye features found in sequences recorded with different drivers can be used. These features are related to the intensity of the eyebrows and the eyes in thermal images. They provide a variation in terms of temperature and are hence the best candidates for finding the exact position of the eyes. It is thereby assumed that the region around the eyes and the eyebrows has a higher temperature, and this is exactly what is detected using the vertical histogram and window comparison. The result of eye detection without eyeglasses is displayed in Fig. 6(a).

For the second case, when the driver is wearing eyeglasses, an adapted approach can be used. This approach is similar to the first one, but the eye region is searched directly, without detecting the face first. The reason is that the effect created by the eyeglasses in the image histogram is highly visible, resulting in an easier detection; this effect is illustrated in Fig. 6(b). As a result, the region of the eyeglasses is identified, and afterwards the line between the lenses is found in order to separate the eyeglasses vertically. Then the center of each found region is selected as the eye center, and the distance between the eyes is calculated.

Fig. 6. Results of eye detection module without (left) and with eyeglasses (right)

To display precise warning information on the windshield, the 3D locations of the eyes need to be calculated. To calculate the distance between the camera and the eyes, a look-up table (LUT) with calibrated distances can be set up in the initialization phase; the values inside the LUT are directly related to the measured distance between the eyes. Together with the LUT, an image calibration procedure can be performed to convert the 2D image coordinates to 3D world coordinates.
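A minimal sketch of this initialization-phase LUT and the 2D-to-3D conversion is given below; all calibration numbers and the intrinsic matrix are made-up placeholders, not values from the prototype.

```python
import numpy as np

# Calibrated pairs of inter-eye pixel distance and camera-to-eye range,
# recorded once during initialization (placeholder values).
lut_pixel_dist = np.array([18.0, 22.0, 27.0, 34.0, 45.0])   # px between eye centers
lut_range_m    = np.array([1.10, 0.95, 0.80, 0.65, 0.50])   # meters, camera -> eyes

def eye_range_from_pixels(d_px: float) -> float:
    # linear interpolation in the LUT; pixel distance grows as the face gets closer
    return float(np.interp(d_px, lut_pixel_dist, lut_range_m))

# Hypothetical intrinsics of the interior camera; with the range Z known,
# a pinhole model converts the eye pixel (u, v) to a 3D point: X = Z * K^-1 (u, v, 1)^T
K = np.array([[640.0, 0.0, 160.0],
              [0.0, 640.0, 120.0],
              [0.0, 0.0, 1.0]])

def eye_3d(u: float, v: float, d_px: float) -> np.ndarray:
    z = eye_range_from_pixels(d_px)
    return z * np.linalg.inv(K) @ np.array([u, v, 1.0])
```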

C. Projection onto the Full-Windshield

There are various possibilities to project graphics onto the full-windshield. [13] and [14] set a projector on the roof of the vehicle and reflected its beam onto the dashboard using a mirror. Since the light reflected from the mirror alone is not visible on the windshield, they placed a retroreflector on the dashboard, which returns the light in the direction from which it came. This concept avoids emitting light towards oncoming traffic, but at the same time it suffers from complexity and the need for installation outside of the vehicle. Wu et al. [4] proposed another concept for full-windshield projection, in which the windshield is covered with a transparent foil of phosphors that reflects blue or red light emitted by a laser projector. The main disadvantage of this projection concept is that the graphics visualized on the full-windshield can be seen not only from the inside of the vehicle but also from the outside, which can distract oncoming traffic.

Furthermore, by installing a flat display on the dashboard, the full-windshield can be covered. This solution is similar to the projection concept of HUDs, in which images are emitted from a picture generation unit (PGU) and, after two reflections by mirrors, are visualized on the windshield [15]. Although a display on the dashboard produces acceptable visualization results and avoids emitting light towards oncoming traffic, covering the full-windshield in this way requires changing the design of the vehicle dashboard and installing a wide display on it, see Fig. 7.

Fig. 7. Projection onto the windshield by installing a flat display on the dashboard

An alternative projection concept is the indirect emission of graphics onto the windshield using laser projectors. This concept is illustrated in Fig. 8, in which two mini laser projectors are integrated at the top of the windshield and emit directly onto the dashboard. After a reflection from the dashboard, the images are visualized on the windshield. This method enables a low-cost full-windshield projection without major changes to the vehicle. Furthermore, the graphics are ultimately emitted upwards and not towards oncoming traffic.

Fig. 8. Projection using two mini laser projectors to cover a wide area of the windshield

Fig. 9 illustrates the visualization results of the two proposed projection concepts. As can be seen, acceptable graphics can be visualized at night on the windshield with both concepts. In the near future, the described projection concepts could be replaced by wide-angle head-up displays which cover a wide area of the windshield; an example concept for wide HUDs can be found in [16].

Fig. 9. Visualization with different projection units: display (left) and laser projector (right)

IV. SYSTEM CALIBRATION

A. Automatic Warping Correction

Projected graphics on the windshield are displayed with considerable shape deformations, which need to be corrected. The non-flat shape of the vehicle windshield as well as the non-orthogonal angle between the driver's viewing direction and the graphics visualized on the windshield cause a non-homogeneous warping. In the case of indirect projection of graphics onto the windshield, the non-flat shape of the reflecting surface, e.g. the vehicle dashboard, aggravates this deformation. In order to correct the visualization on the windshield, the warping function of the visualized graphics on the windshield first needs to be determined. Using this function, the graphics to be displayed can be warped back before projection, and thus the effect of warping is neutralized after projection onto the windshield. To calculate the warping function, the Lucas-Kanade method [17] can be employed, which obtains the warping function W(x; P) through iterative minimization of the error between a known pattern image T and the warped-back image I(W(x; P)). This approach is illustrated in Fig. 10, in which the pattern to be displayed is ultimately visualized on the windshield with warping correction. In this method, the deformation is modeled as an affine transformation:

W(x; P) = \begin{pmatrix} 1 + p_1 & p_3 & p_5 \\ p_2 & 1 + p_4 & p_6 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}    (5)

where P = (p_1, ..., p_6) represents the vector of warping parameters [17]. Assuming I(W(x; P)) is the warped-back image, a cost function can be defined which measures the similarity of this image to the pattern image:

\sum_x \left[ I(W(x; P)) - T(x) \right]^2    (6)

Expansion of this equation using a Taylor series results in the following cost function:

\sum_x \left[ I(W(x; p + \Delta p)) - T(x) \right]^2    (7)

in which \Delta p is expressed as follows:

\Delta p = H^{-1} \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^T \left[ T(x) - I(W(x; p)) \right]    (8)

and H is the Hessian matrix of W [17]. Starting with an arbitrary vector of warping parameters and using equation (8), P can be calculated iteratively:

p \leftarrow p + \Delta p    (9)

Having P, and with regard to equation (5), the warping function can be modeled. By warping back the graphics to be displayed before projection, the warping function is neutralized and distortion in the visualized graphics is automatically removed.

Fig. 10. Deformation correction: (a) known pattern image, T(x); (b) warped image after projection onto the windshield; (c) warped-back image, I(W(x; P)); (d) projection of the warped-back image onto the windshield
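As a practical sketch of this correction loop: OpenCV's findTransformECC performs an iterative, gradient-based alignment closely related to the Lucas-Kanade formulation above and yields a 2x3 affine warp of the form of equation (5). File names and parameters below are placeholders.

```python
import cv2
import numpy as np

# T(x): the reference pattern, and I(x): a camera capture of its distorted
# appearance on the windshield (placeholder file names).
pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
captured = cv2.imread("captured.png", cv2.IMREAD_GRAYSCALE)

warp = np.eye(2, 3, dtype=np.float32)  # initial guess: identity warp
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
_, warp = cv2.findTransformECC(pattern, captured, warp, cv2.MOTION_AFFINE, criteria)

# Pre-warp the graphics with the inverse mapping so that the physical
# deformation cancels out after projection.
h, w = pattern.shape
prewarped = cv2.warpAffine(pattern, warp, (w, h),
                           flags=cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR)
cv2.imwrite("prewarped.png", prewarped)
```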

B. Calibration of Stereo Night Vision System

One of the main challenges in the computation of the thermal stereo disparity map is the calibration procedure. For visible-light images, the standard procedure to calibrate a stereo system uses a chessboard pattern and detects the corners of the squares in both views to compute the correspondence between the stereo pair. In thermal images, however, the chessboard is not visible and cannot be used in its original form for stereo calibration. To make it visible in the long-wave infrared spectrum, it is necessary to warm up the white squares and to cool down the black squares while keeping the edges very sharp, without interference from the warm squares to the cold ones.

Fig. 11. (a) Back and (b) front side of the developed thermal calibration board to calibrate the stereo night vision cameras; (c) Illustration of the calibration board in a thermal image

Such a chessboard is presented in Fig. 11, where a constant temperature of 35 °C is used for heating up the white squares, while on the black squares special plastic plates together with aluminum foil are used to prevent interference from the warm squares. As a result, the calibration board offers a highly distinctive chessboard pattern with very sharp edges in thermal images.
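With such a board, the remaining steps can follow OpenCV's standard chessboard pipeline; the sketch below assumes pairs of thermal images of the heated board, and the file patterns, board geometry and square size are placeholders. Depending on the camera's polarity setting, the images may need to be inverted so that warm squares appear bright.

```python
import cv2
import glob
import numpy as np

pattern_size = (9, 6)                      # inner corners per row/column (assumption)
square_size = 0.04                         # edge length of one square in meters

objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_pts, left_pts, right_pts, img_size = [], [], [], None
for fl, fr in zip(sorted(glob.glob("thermal_left_*.png")),
                  sorted(glob.glob("thermal_right_*.png"))):
    il = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    ir = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    img_size = (il.shape[1], il.shape[0])
    okl, cl = cv2.findChessboardCorners(il, pattern_size)
    okr, cr = cv2.findChessboardCorners(ir, pattern_size)
    if okl and okr:
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

# Per-camera intrinsics first, then the stereo extrinsics (R, T) with the
# intrinsics held fixed; R and T feed the rectification for the disparity map.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, img_size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, img_size, None, None)
ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, img_size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```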

C. Calculation of Intersection Points on the Windshield

After measuring the distances of the detected obstacles as well as of the driver's eyes from the related sensors, all measured values should be converted into a unified coordinate system. In this way, the values obtained from the different sensors can be combined, and the warning graphics can be visualized at the exact positions on the full-windshield, see Fig. 12.

Fig. 12. Calibration of all system components to calculate the intersection of the driver's line of sight to an obstacle with the windshield
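To make the geometry concrete, a minimal sketch of this intersection computation is given below, assuming a locally planar windshield patch; all coordinates are made-up placeholders in the unified vehicle frame.

```python
import numpy as np

def windshield_intersection(eye, obstacle, p0, n):
    """Point where the eye-to-obstacle line pierces the windshield plane (p0, n)."""
    d = obstacle - eye                        # viewing ray direction
    denom = np.dot(n, d)
    if abs(denom) < 1e-9:
        return None                           # ray parallel to the windshield
    s = np.dot(n, p0 - eye) / denom
    return eye + s * d                        # 3D point on the windshield

eye = np.array([0.0, 0.6, 1.2])               # driver's eye position (placeholder)
obstacle = np.array([2.0, 15.0, 0.9])         # detected obstacle (placeholder)
p0 = np.array([0.0, 1.4, 1.1])                # a point on the windshield plane
n = np.array([0.0, -0.8, 0.6])
n = n / np.linalg.norm(n)                     # unit normal of the windshield patch
print(windshield_intersection(eye, obstacle, p0, n))
```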

Since manually measuring the extrinsic parameters of all sensors with respect to an arbitrary coordinate system is error-prone, these system components should be calibrated automatically. As described in Section III-A, the stereo night vision system can provide the 3D position of each pixel of the captured images. With knowledge of the relation between the coordinate system of the stereo night vision system and the unified coordinate system, the 3D information of the stereo vision can be translated into the unified coordinate system. This relation can be formulated as

\vec{r} = \lambda \, T_\phi \, T_\theta \, T_\psi    (10)

in which \vec{r} is the vector between the two coordinate systems; T_\phi, T_\theta and T_\psi represent the well-known rotation matrices of the Euler angles [18] for pitch, roll and yaw, respectively; and \lambda represents the length of \vec{r}. This equation with four unknown parameters can be solved by placing a pattern of points (n ≥ 4) at a known position relative to the unified coordinate system and measuring the 3D positions of these points with the stereo vision system [19][20]. The same method can be used for the extrinsic calibration of the eye tracking camera with respect to the unified coordinate system. Thus, all measured values can be translated into the unified coordinate system, and the intersection point of the line from the driver's eyes to a detected obstacle with the windshield can be calculated geometrically. With movement of the driver's eye position or the obstacle position, this intersection point is updated and the warning graphics are visualized at the exact position on the windshield.
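Equation (10) parameterizes the sensor-to-vehicle relation by three Euler rotations and the length λ. For illustration, the sketch below solves the equivalent alignment problem from the n ≥ 4 measured point pairs in closed form via the standard Kabsch/Umeyama method, instead of solving for the Euler angles directly; the point values are made up.

```python
import numpy as np

def align_frames(sensor_pts: np.ndarray, unified_pts: np.ndarray):
    """Least-squares rigid alignment: unified = R @ sensor + t (Kabsch method)."""
    cs = sensor_pts.mean(axis=0)
    cu = unified_pts.mean(axis=0)
    H = (sensor_pts - cs).T @ (unified_pts - cu)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cu - R @ cs
    return R, t

# Example with made-up measurements of a four-point calibration pattern:
sensor = np.array([[1.0, 0.2, 3.0], [1.4, 0.2, 3.1],
                   [1.0, 0.7, 3.0], [1.4, 0.7, 3.1]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
unified = sensor @ R_true.T + np.array([0.1, 0.0, 1.2])
R, t = align_frames(sensor, unified)
```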

V. SYSTEM INTEGRATION AND DEMONSTRATOR

The described approaches for the interactive visualization of collision warnings on the full-windshield have been implemented and tested on a prototype.

In order to integrate the stereo thermal camera system in a vehicle, the structure of the camera system should be resistant to vibrations. Furthermore, the integration position of the cameras should enable a wide opening angle, which makes it possible to also detect lateral objects in the vicinity of the vehicle. The stereo night vision system integrated in this prototype uses two FLIR PathFindIR cameras operating at 8-12 µm wavelength in the long-wave infrared spectrum (LWIR), which deliver a PAL signal at 25 Hz with QVGA resolution (320 x 240) and a fixed focal length of 19 mm (36° x 27° FOV), see Fig. 13(a). The images recorded with the thermal cameras are sent to the processing unit, an embedded PC that processes them in real time. The time-consuming algorithms have been implemented in parallel on a GPU using the Nvidia CUDA architecture.

To capture an infrared image of the driver's face, active infrared cameras have to send a pulse towards the driver's eyes, the effects of which on human health are not yet known. Therefore, a passive camera has been employed in this prototype for the detection of the driver's eyes. The proposed architecture uses a FLIR Tau camera providing a PAL signal at 25 Hz, with a 9 mm focal length and 320 x 240 resolution. In order to visualize the graphics on the windshield, the indirect projection concept has been applied because of its integration convenience. For this purpose, two mini laser projectors have been integrated at the top of the windshield, as marked in Fig. 13(b).

Fig. 13. Integration of cameras (a) and laser projectors (b) in the test vehicle. The eye tracking camera is integrated within the rear-view mirror

Fig. 14 illustrates the visualized collision warnings on the full-windshield, which are located in the driver's viewing direction.

Fig. 14. Visualization of warning graphics on the full-windshield

VI. CONCLUSION AND FUTURE WORK

In this work, a system design for the dynamic projection of collision warnings onto the vehicle full-windshield has been proposed. The system consists of a stereo night vision module to calculate the 3D positions of potential collision partners at night, an interior infrared vision system to detect the driver's eyes, and an integrated projection unit to visualize the warning graphics without distortion on the windshield. In order to calculate the 3D positions of obstacles as well as of the driver's eyes using thermal cameras, two new approaches have been developed. In addition, a method to calibrate all system components with respect to a unified coordinate system in the vehicle has been proposed. Thus, all warning graphics can be visualized at the exact position on the windshield according to the driver's viewing direction.

The proposed automotive augmented reality system has been implemented in a prototype, and its functionality for the dynamic visualization of collision warnings on the full-windshield has been demonstrated. In future research work, we will evaluate the ergonomic aspects of this new generation of augmented reality displays in vehicles and examine their ability to improve driving safety at night.

ACKNOWLEDGMENT

This article contains results of the theses of Fabian Fischer and Andrej Tupikin. The authors would like to thank them for their contributions. Furthermore, the authors would like to thank Georgica Iacobescu from Signum Bildtechnik for his support with stereo night vision. This work was carried out within the project HeatVision and was funded by the Bavarian Ministry of Economic Affairs and Media, Energy and Technology.

REFERENCES

[1] K. Funabiki; T. Iijima: Attention allocation in Tunnel-in-the-Sky on HUD and HDD for visual flight, IEEE/AIAA 26th Digital Avionics Systems Conference (DASC '07), pp. 6.A.3-1 - 6.A.3-10, 2007.
[2] BMW: Night Vision with dynamic light spot, http://www.bmw.co.za/products/automobiles/7/7series_sedan/safety_nightvision.asp, accessed 01.01.2014.
[3] H. Abel; H. Adamietz; B. Leuchtenberg and N. Schmidt: Integration of night-vision and head-up display in vehicles, ATZ Worldwide 107 (11), pp. 13-15, 2005.
[4] W. Wu; F. Blaicher; J. Yang; T. Seder and D. Cui: A Prototype of Landmark-Based Car Navigation Using a Full-Windshield Head-Up Display System, Proceedings of the 2009 Workshop on Ambient Media Computing, pp. 21-28, 2009.
[5] F. Suard; A. Rakotomamonjy; A. Bensrhair and A. Broggi: Pedestrian Detection using Infrared Images and Histograms of Oriented Gradients, IEEE Intelligent Vehicles Symposium, pp. 206-212, Tokyo, 2006.
[6] J. Ge; Y. Luo and G. Tei: Real-Time Pedestrian Detection and Tracking at Nighttime for Driver-Assistance Systems, IEEE Transactions on Intelligent Transportation Systems, vol. 10, pp. 283-298, 2009.
[7] M. Bertozzi; A. Broggi; A. Lasagni and M. Del Rose: Infrared Stereo Vision-based Pedestrian Detection, IEEE Intelligent Vehicles Symposium, pp. 24-29, 2005.
[8] M. Tönnis; C. Sandor; C. Lange and H. Bubb: Experimental Evaluation of an Augmented Reality Visualization for Directing a Car Driver's Attention, Proceedings of the 4th IEEE/ACM International Symposium on Mixed and Augmented Reality, pp. 56-59, 2005.
[9] M. Seitz; T. Daun; A. Zimmermann; M. Lienkamp: Measurement of Electrodermal Activity to Evaluate the Impact of Environmental Complexity on Driver Workload, Proceedings of the FISITA 2012 World Automotive Congress, Springer, 2013.
[10] C. A. Glasbey: An Analysis of Histogram-Based Thresholding Algorithms, CVGIP: Graphical Models and Image Processing, vol. 55, pp. 532-537, 1993.
[11] M. Z. Brown; D. Burschka and G. D. Hager: Advances in Computational Stereo, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 993-1008, 2003.
[12] A. Leu; D. Bacara; D. Aiteanu and A. Gräser: Hardware Acceleration of Image Processing Algorithms for Collision Warning in Automotive Applications, 32nd/33rd Colloquium of Automation, Salzhausen/Leer, Germany, pp. 1-12, Shaker Verlag, 2012.
[13] A. Sato; I. Kitahara; Y. Kameda and Y. Ohta: Visual Navigation System on Windshield Head-Up Display, 13th World Congress on Intelligent Transport Systems and Services, ExCeL, London, 8-12 October 2006.
[14] M. Tönnis: Towards Automotive Augmented Reality, Dissertation, Faculty of Informatics, Technische Universität München, 2008.
[15] R. W. Evans; A. P. Ramsbottom; D. W. Sheel: Head-Up Displays in Motor Cars, Second International Conference on Holographic Systems, Components and Applications, pp. 56-62, 11-13 September 1989.
[16] W.-H. Cho; C.-T. Lee; C.-C. Kei; B.-H. Liao; D. Chiang; C.-C. Lee: Head-Up Display Using an Inclined Al2O3 Column Array, Applied Optics, vol. 53, issue 4, pp. A121-A124, 2014.
[17] S. Baker and I. Matthews: Lucas-Kanade 20 Years On: A Unifying Framework, International Journal of Computer Vision 56 (3), pp. 221-255, February 2004.
[18] O. M. O'Reilly: Intermediate Dynamics for Engineers: A Unified Treatment of Newton-Euler and Lagrangian Mechanics, Cambridge University Press, 2008.
[19] T. Marita; F. Oniga; S. Nedevschi; T. Graf and R. Schmidt: Camera Calibration Method for Far Range Stereovision Sensors Used in Vehicles, IEEE Intelligent Vehicles Symposium, pp. 356-363, 2006.
[20] T. Dang; C. Hoffmann; C. Stiller: Self-Calibration for Active Automotive Stereo Vision, IEEE Intelligent Vehicles Symposium, pp. 364-369, 2006.