Article

Passive Sensor Integration for Vehicle Self-Localization in Urban Traffic Environment †

Yanlei Gu *, Li-Ta Hsu and Shunsuke Kamijo

Received: 12 October 2015; Accepted: 26 November 2015; Published: 3 December 2015
Academic Editor: Felipe Jimenez

Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan; [email protected] (L.-T.H.); [email protected] (S.K.)
* Correspondence: [email protected]; Tel.: +81-354-526-273 (ext. 56273); Fax: +81-354-526-274
† This paper is an extended version of the paper entitled “GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon”, presented at the IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 15–18 September 2015.

Sensors 2015, 15, 30199–30220; doi:10.3390/s151229795

Abstract: This research proposes an accurate vehicular positioning system which can achieve lane-level performance in urban canyons. Multiple passive sensors, including Global Navigation Satellite System (GNSS) receivers, onboard cameras and inertial sensors, are integrated in the proposed system. As the main source for the localization, the GNSS technique suffers from Non-Line-Of-Sight (NLOS) propagation and multipath effects in urban canyons. This paper proposes to employ a novel GNSS positioning technique in the integration. The employed GNSS technique reduces the multipath and NLOS effects by using a 3D building map. In addition, the inertial sensor can describe the vehicle motion, but suffers from drift as time increases. This paper develops a vision-based lane detection, which is first used for controlling the drift of the inertial sensor. Moreover, the lane keeping and lane changing behaviors are extracted from the lane detection function, and further reduce the lateral positioning error in the proposed localization system. We evaluate the integrated localization system in a challenging urban scenario. The experiments demonstrate that the proposed method achieves sub-meter accuracy in terms of mean positioning error.

Keywords: vehicle self-localization; sensor integration; 3D map; GNSS; inertial sensor; vision; lane detection; particle filter

1. Introduction

Vehicle self-localization in urban environments is a challenging but significant topic for autonomous driving and driving assistance. Both motion planning and vehicle cooperation need accurate localization. Localization methods can be categorized into two types based on the sensors used: passive sensor-based and active sensor-based. Passive sensors such as Global Navigation Satellite System (GNSS) receivers and vision sensors collect signals, radiation, or light from the environment, while active sensors have transmitters that send out light waves, electrons or signals, and collect the reflection bounced off the target. The Velodyne, a type of Light Detection and Ranging (LIDAR) sensor, is a representative active sensor for vehicle self-localization [1–7]. Active sensor-based localization has a weakness from a practical point of view: the price of these sensors is currently very high. Although we expect that the cost of these sensors will be reduced in the near future, they suffer from another critical problem, which is their high energy consumption. Therefore, it is necessary to consider low-cost solutions such as GNSS receivers, cameras and inertial sensors, i.e., passive sensors, as an alternative or complement.

The Global Positioning System (GPS) is the main information source in vehicle navigation, as indicated by the famous Defense Advanced Research Projects Agency (DARPA) Grand Challenge [8]. Under ideal conditions, namely eight GPS satellites or more in an open sky field, the error standard deviation of differential-mode GPS positioning is approximately 0.3 m [9]. However, vehicles often run in urban areas, in which GNSS positioning performance is severely degraded due to Non-Line-Of-Sight (NLOS) and multipath effects. These effects may lead to a 100 m positioning error in the urban environment [10]. Various GNSS technologies have been proposed to mitigate the NLOS and multipath effects [11]. Recently, building information has been considered to analyze the effects of NLOS and multipath. Marais et al. proposed to exclude the satellites located in non-sky regions for localization using an omnidirectional fisheye camera [12]. In addition, 3D building models, shadow mapping, and omnidirectional infrared (IR) cameras were employed for excluding the unhealthy signals and mitigating the effects of NLOS and multipath [13–15]. However, the exclusion of satellites will cause the distortion of the Horizontal Dilution of Precision (HDOP). Another famous urban positioning method is shadow matching. This method evaluates the satellite visibility using city building models, and compares the predicted satellite visibility with the visibility measurement to reduce the positioning error [16–18]. In order to mitigate the NLOS and multipath effects while reducing the HDOP distortion, we proposed a candidate-distribution-based positioning method using a 3D building map (3D-GNSS) [19–21]. 3D-GNSS takes advantage of the 3D building map to rectify the pseudorange error caused by NLOS and multipath effects. The developed 3D-GNSS has been evaluated for pedestrian applications. The results demonstrated that 3D-GNSS can obtain high positioning accuracy in urban canyons. Following the idea of the integration of GPS and inertial navigation systems [22–26], we integrated inertial sensors and 3D-GNSS for vehicle applications as well [27,28]. The evaluation results indicated that the inertial sensor can smooth the positioning trajectory; however, the combination of 3D-GNSS and inertial sensors still cannot satisfy the sub-meter accuracy requirement of autonomous driving [29,30]. Moreover, the inertial sensor can output accurate heading direction information during a short time period, but the drift becomes obvious as time increases.

In addition to the inertial sensor and GNSS positioning, sensing techniques are also widely used for positioning. The most famous one is Simultaneous Localization and Mapping (SLAM). SLAM is considered as the problem of constructing or updating a map while at the same time tracking the vehicle position in the map. The optical sensors used may be a Velodyne laser scanner [2,4,31], a camera [32–34] or a fusion of both [35]. However, SLAM may display error accumulation problems [4], because the localization and mapping are conducted simultaneously. Therefore, the two-step method, which first constructs an accurate map off-line and then matches the observation with the constant map for on-line localization, is preferred over SLAM. Lane markings are distinctive objects on road surfaces. Since the mid-1980s, lane detection from camera sensors has received considerable attention. Those studies mainly focused on recognizing the lane markings on the road surface, but not on localization [36–40].
Tao et al. presented a localization method that builds a map of the lane markings in a first stage; the system then conducts a map-matching process to reduce the stand-alone GPS positioning error [41,42]. Schreiber et al. developed a vision-based localization system using a stereo camera and a highly accurate map containing curbs and road markings [43]. Norman et al. focused on the intersection scenario, and converted the map data to an image in a camera frame [44]. Nedevschi et al. also developed an intersection localization system based on the alignment of visual landmarks perceived by an onboard stereo vision system, using information from an extended digital map [29]. Christopher et al. proposed to use a lane detection system to improve the localization performance when GPS suffers an outage [45]. Young et al. developed an algorithm aimed at counting the sequence number of the occupied lane based on multiple-lane detection, which was used for improving lateral localization [46]. In addition, a stop line detection function was employed to reduce the longitudinal error in localization as well [47,48]. All of these studies proposed to use sensing technology to improve the positioning accuracy.

Different from those related works, this paper focuses on the problem of vehicle self-localization in the most challenging environment, a dense urban environment, and improves the positioning performance from two aspects: the GNSS positioning technology and the sensor integration. The main contribution of this paper is to propose a multiple passive sensor-based localization system for lane-level localization, which integrates the innovative 3D-GNSS positioning technique with an inertial sensor and an onboard monocular camera. The flowchart of the proposed system is shown in Figure 1. In the integrated system, 3D-GNSS provides the global positioning result. The motion of the vehicle is described via speedometer data from a Controller Area Network (CAN) bus, and the heading direction from the Inertial Measurement Unit (IMU). The vision-based lane detection is first used for controlling the drift of the IMU. Moreover, the lane keeping and lane changing behaviors are obtained from the lane detection function. These behaviors describe the relative movement of the vehicle, which can further reduce the lateral positioning error. Finally, we use a particle filter to integrate this information and the 2D lane marking map.

Figure 1. Flowchart of the proposed localization system.

Our previous works [19–21] focused on the development and evaluation of 3D-GNSS for pedestrian applications. In addition, we only combined 3D-GNSS and inertial sensors for improving the localization performance in our previous works [27,28]. This paper further improves the localization performance by additionally integrating vision-based lane detection. This paper is an extension of the work presented in our conference paper [49]. Compared to the conference work, an idea of heading direction rectification is additionally described and evaluated, and more experimental results and discussion are presented in this paper as well.

We organize the remainder of the paper as follows: the 3D-GNSS positioning method and vision-based lane detection are presented in Sections 2 and 3, respectively. The drift rectification method for the inertial sensor is described in Section 4. Section 5 explains the integration algorithm. The experimental results are demonstrated in Section 6. Finally, we end the paper in Section 7 with conclusions and proposals for future work.

2. 3D-GNSS

The integrated localization system considers the GNSS positioning result as the main information source. Generally speaking, GNSS positioning provides an estimate for localization, and the role of the other sensors is to optimize the position around the GNSS result. A more accurate GNSS result makes it easier for the integrated system to achieve lane-level performance. Therefore, the performance of the GNSS positioning technique should be improved first. Satellite positioning techniques suffer from NLOS and multipath effects in urban canyons, as shown in Figure 2. This paper proposes to utilize 3D-GNSS to reduce both the multipath and NLOS effects.
We briefly describe the idea of 3D-GNSS in this section, while the details of 3D-GNSS have been published in our previous works [19–21].

Figure 3a shows the flowchart of the 3D-GNSS method. In order to analyze and correct the pseudorange error caused by multipath and NLOS, a 3D building map is needed. 3D-GNSS recognizes the reflected path of satellite signals using ray-tracing, and further estimates the reflection delay. Theoretically, the pseudorange simulation at the position of the receiver should be the same as the pseudorange measurement. Thus, 3D-GNSS employs a candidate distribution algorithm to rectify the positioning error. First, a set of position hypotheses, named candidates, is distributed around the previous positioning result and the position given by the conventional GNSS receiver. Next, 3D-GNSS generates the pseudorange simulation from each candidate position using ray tracing, as shown in Figure 3b.
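
To make the candidate-distribution idea concrete, the following Python sketch illustrates the scoring loop described above. It assumes a hypothetical `simulate_pseudoranges()` helper that wraps the ray-tracing step against the 3D building map, and it is not the authors' implementation; the similarity metric and weighting choices are illustrative only.

```python
import numpy as np

def score_candidates(candidates, measured_pr, simulate_pseudoranges,
                     reject_threshold=10.0):
    """Weight position candidates by pseudorange similarity (sketch).

    candidates: (N, 2) array of hypothesized positions.
    measured_pr: (M,) pseudorange measurements from the receiver.
    simulate_pseudoranges: callable(position) -> (M,) simulated
        pseudoranges obtained by ray-tracing against the 3D map
        (hypothetical helper standing in for the 3D-GNSS ray tracer).
    """
    weights = np.zeros(len(candidates))
    for i, pos in enumerate(candidates):
        simulated = simulate_pseudoranges(pos)
        residual = np.abs(measured_pr - simulated).mean()
        if residual > reject_threshold:          # invalid candidate
            continue
        weights[i] = np.exp(-residual)           # similarity score
    if weights.sum() == 0.0:
        return None                              # no valid candidate
    weights /= weights.sum()
    # Rectified position: weighted average of the valid candidates.
    return weights @ candidates
```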

Figure 2. (a) Multipath effect [19]; (b) NLOS effect [19].

Figure 3. (a) Flowchart of 3D-GNSS; (b) Example of ray-tracing [19]; (c) Likelihood of candidates.

There are three types of satellite conditions. The LOS signal is not affected by the buildings, and does not include the reflection delay. In the NLOS case, as shown in Figure 2b, the reflection delay is equal to the difference between the direct path and the reflection path. However, the third condition is more ambiguous, which is the multipath effect shown in Figure 2a. This research assumes that the signal in the multipath condition is about 6 dB weaker than the LOS signal, and that a strobe correlator [50] with 0.2 chip spacing is adopted in the commercial receiver, based on experience. The research uses these principles to estimate the pseudorange delay in the multipath condition. By using ray tracing, both the satellite condition and the reflection delay can be estimated.
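
As a small geometric illustration of the reflection delay described above, the sketch below computes the extra path length of a reflected signal given a reflection point; in 3D-GNSS the reflection point comes from the ray-tracing step, and the function shown here is only an assumed simplification.

```python
import numpy as np

def reflection_delay(sat_pos, receiver_pos, reflection_point):
    """Extra path length (m) of a reflected signal (sketch).

    All inputs are 3-element position vectors in a common frame.
    The delay is the reflected path minus the direct path, which is
    the quantity used to correct the pseudorange of an NLOS satellite.
    """
    direct = np.linalg.norm(sat_pos - receiver_pos)
    reflected = (np.linalg.norm(sat_pos - reflection_point)
                 + np.linalg.norm(reflection_point - receiver_pos))
    return reflected - direct
```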

The 3D-GNSS technique additionally checks the consistency of the satellite conditions by considering the signal strength. If the satellite conditions estimated using the signal strength conflict with the result from ray tracing, then 3D-GNSS excludes the satellite from the following process. The probability of each position hypothesis is evaluated based on the similarity between the pseudorange measurement and the pseudorange simulation. Figure 3c indicates the different likelihood values of the candidates using different colors. It is easy to see that the candidates around the ground truth position have higher weighting. The black dots are the invalid candidates, whose pseudorange similarity exceeds the threshold defined in this research (10 m). Finally, 3D-GNSS provides the final rectified result by weighted averaging of the positions of all the valid candidates. In this paper, we adopt multiple satellite systems in 3D-GNSS, which include not only GPS, but also the GLObal NAvigation Satellite System (GLONASS) and the Quasi-Zenith Satellite System (QZSS).

3. Lane Detection from Monocular Camera

The lane detection method used in this research is highly inspired by Aly's work [38]. However, different from the lane detection research described in [38], this research does not focus on the lane detection itself, but rather employs the vision-based lane detection to improve the accuracy of localization.

3.1. Inverse Perspective Mapping (IPM)

Before the lane detection, a top view of the road surface is generated, which is named the Inverse Perspective Mapping (IPM). This has two advantages: the first one is that the perspective effect can be eliminated from the images, which makes the lanes become parallel [38]. The other one is that the road surface is represented in meters in the IPM. Thus, the prior information about the lane markings can be used easily. In order to generate the IPM, a flat road is assumed. In addition, for conducting this transformation, the camera intrinsic and extrinsic parameters are also used. As shown in Figure 4a, an image coordinate {Fi} = {u, v}, a camera coordinate {Fc} = {Xc, Yc, Zc} and a world coordinate {Fw} = {Xw, Yw, Zw} are defined, respectively. The world coordinate is centered at the optical center of the camera, and Zw is perpendicular to the road surface. The camera coordinate Xc axis is assumed to stay in the world coordinate Xw-Yw plane. This means that the system has a pitch angle α and a yaw angle β for the optical axis, but the roll angle is zero. In addition, the focal length of the camera is (fu, fv), and the optical center is (cu, cv). h is the height of the camera coordinate above the road plane.
Thus, any point in the image coordinate can be projected to the road plane by the homogeneous transformation [38]:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & 0 & c_u \\ 0 & f_v & c_v \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c/z_c \\ y_c/z_c \\ 1 \end{bmatrix} \qquad (1)$$

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\left(\alpha+\frac{\pi}{2}\right) & -\sin\left(\alpha+\frac{\pi}{2}\right) \\ 0 & \sin\left(\alpha+\frac{\pi}{2}\right) & \cos\left(\alpha+\frac{\pi}{2}\right) \end{bmatrix} \begin{bmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ -h \end{bmatrix} \qquad (2)$$

where Equation (1) represents the transformation from the camera coordinate to the image coordinate, and Equation (2) is the transformation from the world coordinate to the camera coordinate for points on the road plane (z_w = -h). Therefore, any point on the ground can be projected back to the image coordinate.
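
As a minimal sketch of the reconstructed Equations (1) and (2), the following Python function projects a ground-plane point into the image; building the IPM image amounts to sampling a grid of such ground points and looking up the corresponding pixels. The angle convention and parameter names follow the definitions above, and the code is illustrative rather than the authors' implementation.

```python
import numpy as np

def ground_to_image(xw, yw, alpha, beta, h, fu, fv, cu, cv):
    """Project a ground point (xw, yw, -h) into the image (sketch).

    alpha: pitch angle, beta: yaw angle (radians); h: camera height (m);
    (fu, fv): focal lengths; (cu, cv): optical center. A flat road and
    zero roll are assumed, as in the text.
    """
    a = alpha + np.pi / 2.0
    R_x = np.array([[1, 0, 0],
                    [0, np.cos(a), -np.sin(a)],
                    [0, np.sin(a),  np.cos(a)]])
    R_z = np.array([[np.cos(beta), -np.sin(beta), 0],
                    [np.sin(beta),  np.cos(beta), 0],
                    [0,            0,             1]])
    K = np.array([[fu, 0, cu],
                  [0, fv, cv],
                  [0,  0,  1]])
    p_w = np.array([xw, yw, -h])      # point on the road plane
    p_c = R_x @ R_z @ p_w             # world -> camera, Equation (2)
    uv = K @ (p_c / p_c[2])           # camera -> image, Equation (1)
    return uv[0], uv[1]
```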

Figure 4b shows an original image from the onboard camera, where the region of interest (ROI) is marked by the red rectangle. Figure 4c visualizes the IPM image of the ROI, in which the line boundaries of the lane become parallel.

Figure 4. (a) Definition of the coordinate systems; (b) Original image from the onboard camera; (c) IPM of the rectangle area in (b).

3.2. Lane Detection

For enhancing the lane markings and reducing other disturbances, the IPM image is first smoothed by a Gaussian filter along the vertical direction, and then filtered by a second-derivative of Gaussian along the horizontal direction. The algorithm considers 2.5% of the pixels as road marking and thresholds the filtered IPM accordingly [38], as shown in Figure 5a. In addition, the lane detection is limited to a rectangular area centered at the vehicle center. The width along the lateral direction is 7 m (two-lane width), and the farthest distance along the heading direction is 30 m. Figure 5b shows the lane detection ROI.
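
The filtering step can be sketched with standard image-processing primitives. The snippet below is a rough approximation of the procedure described above, assuming a grayscale IPM image and illustrative filter widths; the exact kernel sizes used in [38] are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def filter_lane_markings(ipm, sigma_y=2.0, sigma_x=2.0, keep_ratio=0.025):
    """Emphasize bright, near-vertical lane markings in an IPM image (sketch).

    ipm: 2-D grayscale IPM image (rows = longitudinal, cols = lateral).
    The image is smoothed with a Gaussian along the vertical direction,
    filtered with a second-derivative of Gaussian along the horizontal
    direction, and the strongest 2.5% of responses are kept.
    """
    smoothed = gaussian_filter1d(ipm.astype(np.float32), sigma_y, axis=0)
    # order=2 applies the second derivative of a Gaussian; the sign is
    # flipped so bright ridges (lane markings) give positive responses.
    response = -gaussian_filter1d(smoothed, sigma_x, axis=1, order=2)
    threshold = np.quantile(response, 1.0 - keep_ratio)
    return (response >= threshold).astype(np.uint8)
```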

Figure 5. Lane detection [49]: (a) Filtered IPM; (b) Lane detection ROI; (c) Hough transformation result of (b); (d) Rough area of the first line; (e) Result from RANSAC for the first line detection in (d); (f) Hypothesized area of the second line; (g) RANSAC result for the second line detection in (f).

One lane consists of two line boundaries. Therefore, the lane detection problem is converted into a line detection one in the ROI. The line detection algorithm consists of a Hough transform to locate the rough areas of the lines, followed by a Random Sample Consensus (RANSAC) fitting to optimize the line detection results [38]. In fact, the two lines are represented by two local maxima in the Hough transform result, which is visualized in Figure 5c. Thus, the rough area of the first line is located by finding the global maximum in the Hough transform space, as indicated by the red rectangle in Figure 5d. After that, the RANSAC line fitting is conducted in the red rectangle for detecting the first line. Figure 5e shows the result of the RANSAC line fitting. In this paper, the lane width is assumed to be 3.5 m. The hypothesized area of the second line can be determined based on this assumption. Then the RANSAC line fitting is used for the line detection again. Figure 5f,g demonstrate the hypothesized area of the second line and the result of the RANSAC line fitting, respectively. Thus, the position of the vehicle relative to the two line boundaries can be estimated.
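
The RANSAC stage can be illustrated with a short sketch. The code below fits a near-vertical line x = a*y + b to the candidate pixels inside the rough area proposed by the Hough peak; the Hough step itself and the exact parameterization of [38] are omitted, and the tolerances are only assumptions.

```python
import numpy as np

def ransac_vertical_line(points, n_iter=200, inlier_tol=0.1, seed=0):
    """Fit a near-vertical line x = a*y + b with RANSAC (sketch).

    points: (N, 2) array of (x, y) candidate pixel positions, in meters
    on the IPM plane, taken from the rough search area of one boundary.
    Returns (a, b) of the model with the most inliers, or None.
    """
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(y2 - y1) < 1e-6:
            continue                    # degenerate sample
        a = (x2 - x1) / (y2 - y1)
        b = x1 - a * y1
        # Lateral residual of every point with respect to the line.
        err = np.abs(points[:, 0] - (a * points[:, 1] + b))
        inliers = int(np.count_nonzero(err < inlier_tol))
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model
```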

3.3. Lane Keeping and Lane Changing Detection

One lane is represented by two line boundaries on the road surface. After the line boundaries are detected, we employ a particle filter to track each line. The purpose of the line tracking is to detect the lane keeping and lane changing behaviors. Figure 6 shows the line detection and tracking results in a lane changing process from frame t to t + 2k. In Figure 6, the blue rectangle is the ROI of lane detection, and the red lines are the detected lines. The attached numbers, e.g., "Line-1", "Line-2" and "Line-3", indicate the tracking results. The small green rectangle at the bottom of the images denotes our vehicle position. From frame t to frame t + k, Line-2 becomes close to the center of the vehicle, as indicated in Figure 6a,b. As time increases, Line-2 gradually appears at the left side of the vehicle, as shown in Figure 6c. In this process, the position of the tracked Line-2 changes from the right side to the left side of the vehicle, which indicates that the vehicle performs a right-direction lane change.

Figure 6. Detection of the lane changing scenario; (a), (b) and (c) are the line detection and tracking results at different frames. The blue rectangle is the ROI of lane detection, the red lines are the detected lines. The attached numbers, e.g., "Line-1", "Line-2" and "Line-3", indicate the tracking results. The green rectangle shows the vehicle position.

Figure 7 shows the lane keeping scenario. The green rectangle denotes our vehicle position, which is located between Line-1 and Line-2 during the driving period. In Figure 7a–c, Line-1 always appears at the left side and Line-2 always appears at the right side of the vehicle. Unlike the lane changing scenario, no line crosses from one side to the other side of the vehicle in this period. Based on the relative positions of the vehicle and the tracked lines, the lane keeping can be detected. This information will be used in the following integration.
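
A simplified sketch of the behavior classification is given below. The paper tracks each boundary with a particle filter; here only the tracked lateral offsets of the lines (negative for the left side, positive for the right side of the vehicle) are assumed as inputs, so the function and conventions are illustrative.

```python
def classify_lane_behavior(prev_offsets, curr_offsets):
    """Classify lane keeping vs. lane changing from tracked lines (sketch).

    prev_offsets / curr_offsets: dicts mapping a tracked line id
    (e.g. "Line-2") to its lateral offset from the vehicle center in
    meters. A line whose offset changes sign has crossed under the
    vehicle, which indicates a lane change in that direction.
    """
    for line_id, prev in prev_offsets.items():
        curr = curr_offsets.get(line_id)
        if curr is None:
            continue                      # line lost by the tracker
        if prev > 0 and curr < 0:
            return "change_right"         # line moved from right to left
        if prev < 0 and curr > 0:
            return "change_left"          # line moved from left to right
    return "keeping"
```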

Figure 7. Detection of the lane keeping scenario; (a), (b) and (c) are the line detection and tracking results at different frames. The blue rectangle is the ROI of lane detection, the red lines are the detected lines. The attached numbers, e.g., "Line-1" and "Line-2", indicate the tracking results. The green rectangle shows the vehicle position.

4. Heading Direction Rectification for an Inertial Sensor

In this research, an Inertial Measurement Unit (IMU) sensor is employed to detect the heading direction of the vehicle. The IMU sensor outputs the acceleration, the angular rate and the angle value along each axis. According to the IMU sensor specification, the pitch and roll angles are the differences from the body of the IMU sensor to the horizontal plane perpendicular to the gravity direction. In addition, the yaw angle is the accumulated angle change since the sensor was reset or connected. Assuming that the heading direction has been calibrated relative to the true north direction, the vehicle velocity can be calculated by the following equations:

$$\begin{bmatrix} V_{north-east} \\ 0 \\ V_{altitude} \end{bmatrix} = \begin{bmatrix} \cos\omega & 0 & \sin\omega \\ 0 & 1 & 0 \\ -\sin\omega & 0 & \cos\omega \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\lambda & -\sin\lambda \\ 0 & \sin\lambda & \cos\lambda \end{bmatrix} \begin{bmatrix} V_{CAN} \\ 0 \\ 0 \end{bmatrix} \qquad (3)$$

$$\begin{bmatrix} V_{north} \\ V_{east} \\ V_{altitude} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} V_{north-east} \\ 0 \\ V_{altitude} \end{bmatrix} \qquad (4)$$

where λ and ω are the roll and pitch angles of the IMU, respectively, and θ is the heading direction from the true north direction. The vehicle velocity $V_{CAN}$ is represented by $V_{north-east}$ and $V_{altitude}$ in Equation (3). The speed $V_{north-east}$ is further decomposed by Equation (4). Because we conducted the experiments in a flat area, a simplified North-East plane is adopted for the positioning. The GNSS positioning result at the first epoch is defined as the origin of the North-East plane. The movement of the vehicle can be calculated by Equation (5):

$$X_k = \begin{bmatrix} x_{north,k} \\ x_{east,k} \end{bmatrix} = \begin{bmatrix} x_{north,k-1} \\ x_{east,k-1} \end{bmatrix} + \Delta t \begin{bmatrix} V_{north,k-1} \\ V_{east,k-1} \end{bmatrix} = \begin{bmatrix} x_{north,k-1} \\ x_{east,k-1} \end{bmatrix} + \Delta t \begin{bmatrix} V_{north-east,k-1} \cdot \cos\theta_{k-1} \\ V_{north-east,k-1} \cdot \sin\theta_{k-1} \end{bmatrix} \qquad (5)$$

where $(x_{north,k}, x_{east,k})$ denotes the vehicle position at time k. Figure 8 shows the trajectory generated from Equation (5). The trajectory is indicated by blue points and the ground truth is shown by the cyan line. It is clear that the blue points are parallel to the ground truth at the beginning, but the trajectory drifts as time increases.
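
A compact sketch of the propagation in Equations (3)–(5) is given below, assuming angles in radians and discarding the altitude component, since the experiments use the simplified North-East plane; it is an illustration, not the authors' code.

```python
import numpy as np

def propagate(x_prev, v_can, roll, pitch, heading, dt):
    """One dead-reckoning step following Equations (3)-(5) (sketch).

    x_prev: previous (north, east) position in meters.
    v_can: speedometer speed from the CAN bus (m/s).
    roll, pitch, heading: IMU roll, pitch and heading from true north (rad).
    dt: time step (s). Returns the new (north, east) position.
    """
    # Equation (3): rotate the body-frame speed by roll and pitch.
    r_pitch = np.array([[np.cos(pitch), 0, np.sin(pitch)],
                        [0, 1, 0],
                        [-np.sin(pitch), 0, np.cos(pitch)]])
    r_roll = np.array([[1, 0, 0],
                       [0, np.cos(roll), -np.sin(roll)],
                       [0, np.sin(roll), np.cos(roll)]])
    v_ne = (r_pitch @ r_roll @ np.array([v_can, 0.0, 0.0]))[0]
    # Equations (4) and (5): decompose by heading and integrate.
    north = x_prev[0] + dt * v_ne * np.cos(heading)
    east = x_prev[1] + dt * v_ne * np.sin(heading)
    return np.array([north, east])
```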


Figure 8. Drift effect of the IMU sensor.

This paper proposes to use the vision-based lane detection to control the drift effect. When the lane keeping manner is detected, the vehicle heading direction is rectified to the direction of the occupied lane, which can be estimated from the 2D lane marking map. Otherwise, the heading direction is accumulated from the yaw angle rate provided by the IMU sensor. This idea is formulated as follows:

$$\theta_k = \begin{cases} \theta_k^{lane}, & \text{lane keeping} \\ \theta_{k-1} + \Delta\theta_k^{IMU}, & \text{otherwise} \end{cases} \qquad (6)$$

where $\theta_k$ is the heading direction at time k, $\theta_k^{lane}$ is the lane direction obtained from the 2D lane marking map, and $\Delta\theta_k^{IMU}$ denotes the angle change obtained from the IMU sensor.

5. Integrated Localization of 3D-GNSS, Inertial Sensor and Lane Detection

In this research, multiple passive sensors are simultaneously adopted for positioning. 3D-GNSS, the inertial sensor and the lane detection are integrated in a particle filter. The particle filter is a nonparametric version of the Bayes filter, and has been widely applied. The particle filter represents a posterior using a set of particles $\{x_k^i = (x_{k,north}^i, x_{k,east}^i)\}_{i=1}^{n}$, where $x_k^i$ is the two-dimensional position, $i$ is the particle index, $k$ is the time, and $n$ is the number of particles. Each particle has an importance weight $w_k^i$. The vehicle position is recursively estimated by the following steps [51]:

1. Prediction: create the particles $\{x_k^i, w_k^i\}_{i=1}^{n}$ for time k based on the previous set of particles $\{x_{k-1}^i, w_{k-1}^i\}_{i=1}^{n}$ and the control value $u_k$, using the propagation model in Equation (5).
2. Correction: the weight of each particle in $\{x_k^i, w_k^i\}_{i=1}^{n}$ is evaluated with the new observation $z_k$, according to the observation model $w_k^i = p(z_k | x_k^i)$. In this paper, $z_k$ includes two observations: $G_k$ for the 3D-GNSS positioning result and $V_k$ for the lane detection result.
3. Resampling: the particle set $\{x_k^i, w_k^i\}_{i=1}^{n}$ is resampled based on the importance weights.

Figure 9 demonstrates the process of the weight evaluation for one particle. The yellow point is the particle i. The distance between the GNSS positioning result and the particle i consists of the longitudinal distance $D_{GNSS,longitudinal}^{i,k}$ and the lateral distance $D_{GNSS,lateral}^{i,k}$, defined by referring to the direction of the occupied lane. Therefore, the probability computed from the 3D-GNSS measurement $p(G_k | x_k^i)$ is represented as follows:

$$p(G_k | x_k^i) = \exp\left(\frac{-\left(D_{GNSS,lateral}^{i,k}\right)^2 - \left(D_{GNSS,longitudinal}^{i,k}\right)^2}{\sigma_{GNSS}^2}\right) \qquad (7)$$

$$p(G_k | x_k^i) = p(G_{k,lateral} | x_k^i) \cdot p(G_{k,longitudinal} | x_k^i) \qquad (8)$$

$$p(G_{k,lateral} | x_k^i) = \exp\left(\frac{-\left(D_{GNSS,lateral}^{i,k}\right)^2}{\sigma_{GNSS}^2}\right) \qquad (9)$$

$$p(G_{k,longitudinal} | x_k^i) = \exp\left(\frac{-\left(D_{GNSS,longitudinal}^{i,k}\right)^2}{\sigma_{GNSS}^2}\right) \qquad (10)$$

Figure 9. Distance estimation for a particle.

This paper sets the value of $\sigma_{GNSS}^2$ to nine square meters, which is tuned empirically. Figure 10a demonstrates the probability of the particles estimated from the GNSS measurement. The particles around the GNSS position have high weighting. This research optimizes the probability of the particles using the vision-based lane detection.

The lane detection provides the distances from the vehicle center to the right and left lines, $(D_{right}^k, D_{left}^k)$. In addition, the distances from each particle to the right and left lines, $(D_{lane,right}^{i,k}, D_{lane,left}^{i,k})$, can be estimated from the prepared 2D lane marking map, as shown in Figure 9. The probability computed from the lane detection measurement $p(V_k | x_k^i)$ can be calculated as follows:

$$p(V_k | x_k^i) = \frac{1}{2}\left(p(V_{k,left} | x_k^i) + p(V_{k,right} | x_k^i)\right) \qquad (11)$$

$$p(V_{k,j} | x_k^i) = \exp\left(\frac{-\left(D_j^k - D_{lane,j}^{i,k}\right)^2}{\sigma_{lane}^2}\right), \quad j \in \{right, left\} \qquad (12)$$

where the probabilities $p(V_{k,right} | x_k^i)$ and $p(V_{k,left} | x_k^i)$ correspond to the right line and the left line, respectively. This paper empirically sets the variance $\sigma_{lane}^2$ to 0.25 m². Figure 10b visualizes the probability $p(V_k | x_k^i)$ estimated from the measurement of the lane detection. When lane keeping is detected, the particles outside the occupied lane of the previous result are excluded from the calculation, and are visualized as black dots in Figure 10b. The particles around the lane center have higher probability, because the vehicle runs along the lane center in the experiments. It is important to note that the lane detection can sense the lateral position; it cannot perceive the position difference along the longitudinal direction. Therefore, we propose to integrate $p(V_k | x_k^i)$ into $p(G_{k,lateral} | x_k^i)$. Thus, the integrated probability is represented as:

$$p(G_k, V_k | x_k^i) = \left((1 - \gamma) \cdot p(G_{k,lateral} | x_k^i) + \gamma \cdot p(V_k | x_k^i)\right) \cdot p(G_{k,longitudinal} | x_k^i) \qquad (13)$$

where γ is the importance weight of the lane detection measurement, which is set to 0.5 empirically. Figure 10c visualizes the integrated probability of all particles using different colors. Compared to Figure 10a, the high-weighting particles are not around the GNSS positioning result, but appear in the correct lane. The term $p(V_k | x_k^i)$ in Equation (13) leads to this improvement. In addition, when the system detects lane changing, the operation for the particle exclusion follows the lane changing direction. Figure 10d demonstrates the valid particles corresponding to the lane detection measurement in the lane changing case.
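
The correction step of the particle filter can be summarized in one small function. The sketch below evaluates Equations (9)–(13) for a single particle, assuming the lateral/longitudinal and line distances have already been computed; the particle-exclusion logic for lane keeping and lane changing is omitted, and the defaults simply restate the values reported in the text.

```python
import numpy as np

def particle_weight(d_gnss_lat, d_gnss_lon,
                    d_lane_left, d_lane_right,
                    d_meas_left, d_meas_right,
                    sigma_gnss2=9.0, sigma_lane2=0.25, gamma=0.5):
    """Observation weight of one particle, Equations (9)-(13) (sketch).

    d_gnss_lat / d_gnss_lon: lateral and longitudinal distances (m) from
    the particle to the 3D-GNSS fix, referred to the lane direction.
    d_lane_left / d_lane_right: particle-to-boundary distances (m) from
    the 2D lane marking map; d_meas_left / d_meas_right: the distances
    measured by the lane detector (m).
    """
    p_lat = np.exp(-d_gnss_lat ** 2 / sigma_gnss2)               # Eq. (9)
    p_lon = np.exp(-d_gnss_lon ** 2 / sigma_gnss2)               # Eq. (10)
    p_left = np.exp(-(d_meas_left - d_lane_left) ** 2 / sigma_lane2)
    p_right = np.exp(-(d_meas_right - d_lane_right) ** 2 / sigma_lane2)
    p_lane = 0.5 * (p_left + p_right)                            # Eqs. (11)-(12)
    # Lane detection only constrains the lateral direction, Eq. (13).
    return ((1.0 - gamma) * p_lat + gamma * p_lane) * p_lon
```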

Figure 10. Particle probability evaluation: (a) Probability for GNSS; (b) Probability for lane detection; (c) Probability for integration; (d) Probability for lane detection in the lane changing process.

6. Experiments

We chose Hitotsubashi area area in in Tokyo Tokyo for experiments due due to to the the tall tall building building density. density. We chose the the Hitotsubashi for experiments Figure 11 We used used two two Figure 11 shows shows the the developed developed 3D 3D map map for for 3D-GNSS 3D-GNSS and and 2D 2D map map for for integration. integration. We kinds of data to construct the 3D map. The first one is the 2-dimensional building footprint, which kinds of data to construct the 3D map. The first one is the 2-dimensional building footprint, which is is provided JapanGeospatial GeospatialInformation InformationAuthority. Authority.The Theother otherone oneisis the the Digital Digital Surface Surface Model Model provided byby Japan (DSM) data, data, obtained obtained from from Aero The DSM DSM data data includes includes the the height height information information (DSM) Aero Asahi Asahi Corporation. Corporation. The of the building [20]. The 2D map is generated from high resolution aerial images provided by of the building [20]. The 2D map is generated from high resolution aerial images provided by NTT-geospace. NTT-geospace.

Figure 11. 2D and 3D maps used in this research: (a) The wide view of the 2D and 3D map; (b) The view from a close distance.

In the experiment, a u-blox EVK-M8 GNSS module, a commercial-level receiver, was adopted. It is produced by the u-blox Company (Thalwil, Switzerland), and can receive signals from multiple GNSS constellations (GPS, GLONASS, and QZSS). We placed the u-blox receiver on the top of our vehicle to collect pseudorange measurements. In addition, an IMU sensor (AMU-3002A Lite, Silicon Sensing, Amagasaki-shi, Hyogo Prefecture, Japan) and a speedometer recorder were used to measure the attitude angles and the vehicle velocity, respectively. Moreover, an onboard camera was installed in the vehicle, which captured the front-view images while driving. These images are the input of the lane detection algorithm. In addition, we manually identified the ground truth trajectory of our vehicle from these images, and manually determined the occupied lane for the result evaluation. The driving distance is approximately 1500 m in each test.

In the experiment, a u-blox EVK-M8 GNSS model, a commercial level receiver was adopted. It is produced by u-blox Company (Thalwil, Switzerland), and can receive signals from multiple-GNSS (GPS, GLONASS, and QZSS). We placed the u-blox receiver on the top of our vehicle to collect pseudorange measurements. In addition, an IMU sensor (AMU-3002A Lite, Silicon Sensing, Sensors 2015, 15, 30199–30220 Amagasaki-shi, Hyogo Prefecture, Japan) and speedometer recorder were used to measure the angle attitude and the vehicle velocity, respectively. Moreover, an onboard camera was installed in the vehicle, which These images are are the input of the vehicle, which captured captured the thefront frontview viewimages imageswhen whendriving. driving. These images the input oflane the detection algorithm. In addition, we manually distinguished the ground truthtruth trajectory of our lane detection algorithm. In addition, we manually distinguished the ground trajectory of vehicle from these images, and manually decided the occupied lane for the result evaluation. The our vehicle from these images, and manually decided the occupied lane for the result evaluation. driving distance is approximate 15001500 m inm each test. test. The driving distance is approximate in each In the the vehicle vehicle self-localization, self-localization, it it is is more more important important to to distinguish distinguish which which lane lane the the vehicle vehicle is is in in In compared to the positioning accuracy. Therefore, both the lateral error and the correct lane rate are compared to the positioning accuracy. Therefore, both the lateral error and the correct lane rate are employed to toestimate estimatethe theperformance performanceofofthe the localization system. Figure shows definition of employed localization system. Figure 12 12 shows thethe definition of the P the lateral andheading the heading direction The lateral errorP is is the minimum Error lateral errorerror and the direction error. error. The lateral error Error thek minimum distancedistance from P k

k

Pk to the from ground truth. The heading direction the direction difference from Errork is difference to the ground truth. The heading direction error Errorkθ error is the direction from the estimated trajectory to the ground truth trajectory. The heading direction error is used for the evaluation of the estimated trajectory to the ground truth trajectory. The heading direction error is used for the heading direction rectification. evaluation of the heading direction rectification.

Figure 12. Definition positioning error error and and heading heading direction direction error. error. Figure 12. Definition of of positioning
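
As a small illustration of the lateral-error metric defined above, the sketch below computes the minimum distance from an estimated position to a ground-truth polyline; the heading direction error would be obtained analogously from the direction difference of consecutive trajectory segments. The polyline representation is an assumption made for illustration.

```python
import numpy as np

def lateral_error(p_est, truth):
    """Minimum distance from an estimated position to the ground-truth
    trajectory, i.e. the lateral error Error_k^P (sketch).

    p_est: (2,) estimated position; truth: (N, 2) ordered ground-truth
    points (consecutive points must not coincide).
    """
    a, b = truth[:-1], truth[1:]
    ab = b - a
    # Project p_est onto every segment and clamp to the segment ends.
    t = np.clip(np.sum((p_est - a) * ab, axis=1)
                / np.sum(ab * ab, axis=1), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return float(np.min(np.linalg.norm(closest - p_est, axis=1)))
```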

6.1. Evaluation for Lane Detection

This section evaluates the lane detection by comparing the estimated lanes with the hand-labeled ground truth. The cases of correct detection, partial non-detection, complete non-detection and false detection are shown in Figures 13–16, respectively.

Figure 13. Examples of correct detections: (a) column shows images captured from the camera; (b) column is the IPM image and indicates the ground truth using yellow lines; (c) column shows the lane detection result, where the blue rectangle is the ROI for lane detection and the red line is the detection result.


Figure 14. Examples of non-detection for one of the lane boundaries: (a) column shows images captured from the camera; (b) column is the IPM image and indicates the ground truth using the yellow line; (c) column shows the lane detection result, where the blue rectangle is the ROI for lane detection and the red line is the detection result.

Figure 15. Examples of non-detection of both boundaries: (a) column shows images captured from the camera; (b) column is the IPM image and indicates the ground truth using the yellow line; (c) column shows the lane detection result, where the blue rectangle is the ROI for lane detection.

Figure 16. An example of false detection: (a) shows the image captured from the camera; (b) is the IPM image and indicates the ground truth using the yellow line; (c) shows the lane detection result, where the blue rectangle is the region of interest (ROI) for lane detection and the red line is the detection result.

As shown in Figure 13, the developed lane detection can locate the lane boundaries in different scenarios. The first row of Figure 13 shows a lane changing case, where the crossed line is detected accurately. The second and the third rows demonstrate that there are many other vehicles around our vehicle in the experiments. Because the occupied lane is visible, the lane detection still works for finding the occupied lane. However, this kind of urban environment is challenging for detecting other lanes, because of the occlusions.


Figure 14 shows two examples of partial non-detection. In both examples, the right lane boundary is not detected while the left boundary is correctly detected. The reasons are the occlusion by other vehicles in the first example and the degradation of the lane markings in the second example. This case is not serious for the localization system: in the case of partial non-detection, the detected lane boundary can still be used for integration. In the quantitative evaluation, the cases of Figures 13 and 14 are considered as correct detections.

Complete non-detection means that neither the left nor the right boundary can be detected, as shown in Figure 15. In the first example, the reason for the non-detection is the unclear lane marking on the road surface. In the second example, the left boundary is lost because the detection algorithm excludes lines that do not cross the bottom of the ROI, and the right boundary is lost because there are too few pixels of the line inside the ROI. In the case of complete non-detection, the localization system uses the GNSS positioning result for integration because of the lack of lane detection results. In addition, Figure 16 shows an example of false detection. A part of the right lane boundary is occluded by the front vehicle; the body of that vehicle presents a line feature, which causes the false detection.

The quantitative evaluation of the lane detection is summarized in Table 1. The total number of frames including lanes is 5920. The developed lane detection algorithm detects lanes in 95.3% of the frames. About 4.7% of the frames are complete non-detections, which occur around curved road areas and on roads with unclear lane markings, as in Figure 15. Among the detections, 99.6% of the frames are correct, as in the cases of Figures 13 and 14; only 0.4% are false detections, as in the case of Figure 16. Table 1 indicates that the lane detection is highly reliable.

Table 1. Quantitative evaluation of lane detection.

Category                                        Number of Frames    Rate
Frames including lane (F)                       5920                /
Detection (D) (Figures 13, 14 and 16)           5642                95.3% (D/F)
Non-detection (ND) (Figure 15)                  278                 4.7% (ND/F)
Correct detection (CD) (Figures 13 and 14)      5619                99.6% (CD/D)
False detection (FD) (Figure 16)                23                  0.4% (FD/D)
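For reference, the rates in Table 1 follow directly from the reported frame counts; the short script below recomputes them (the counts are taken from Table 1, and rounding to one decimal place is an assumption about how the percentages were reported).

```python
# Recompute the lane detection rates of Table 1 from the reported frame counts.
frames_with_lane = 5920      # F: frames that contain a lane
detections = 5642            # D: frames with at least one detected boundary
non_detections = 278         # ND: frames where both boundaries are missed
correct_detections = 5619    # CD: detections judged correct (Figures 13 and 14)
false_detections = 23        # FD: detections judged incorrect (Figure 16)

assert detections + non_detections == frames_with_lane
assert correct_detections + false_detections == detections

print(f"Detection rate (D/F):      {100 * detections / frames_with_lane:.1f}%")      # 95.3%
print(f"Non-detection rate (ND/F): {100 * non_detections / frames_with_lane:.1f}%")  # 4.7%
print(f"Correct detection (CD/D):  {100 * correct_detections / detections:.1f}%")    # 99.6%
print(f"False detection (FD/D):    {100 * false_detections / detections:.1f}%")      # 0.4%
```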

6.2. Evaluation for Heading Direction Rectification

As presented in Section 4, the IMU sensor suffers from the drift problem. The drift generates errors in the propagation of particles. This paper proposes to rectify the heading direction when lane keeping happens. Two types of heading direction errors in one experiment are plotted in Figure 17. The green line indicates the error of the IMU sensor, and the yellow line shows the error after the proposed rectification. A positive value means a difference in the counterclockwise direction. It is clear that the heading direction of the IMU sensor has drifted by about 5° by the end of the experiment, whereas the error of the rectified heading direction stays around zero. This result shows that the vision-based lane detection can solve the drift problem. The experiment demonstrated in Figure 17 corresponds to the one discussed in Figure 8. In order to make the effectiveness of the heading rectification easier to appreciate, we plot the inertial trajectory using the rectified heading direction in Figure 18. After the rectification, the inertial trajectory is parallel to the ground truth trajectory. In Figure 17, the areas denoted 1 to 6 are the turning areas, which are also indicated in Figure 18. The large errors there are caused by the less reliable ground truth trajectory, because it is difficult to manually describe the accurate curve of a turning trajectory.
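The rectification itself is simple in principle: whenever the lane detection reports lane keeping, the heading can be pulled toward the known direction of the occupied road link, which bounds the accumulated gyro drift. The following is a minimal sketch of this idea, assuming the road-link direction is available from the map; the blending gain `alpha`, the `road_heading` input and the lane-keeping flag are illustrative names, not the paper's exact formulation.

```python
import math

def wrap_angle(a: float) -> float:
    """Wrap an angle to the range (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def rectify_heading(imu_heading: float,
                    road_heading: float,
                    lane_keeping: bool,
                    alpha: float = 0.2) -> float:
    """Reduce IMU heading drift using the lane detection result.

    imu_heading:  heading propagated from the gyro (rad), subject to drift
    road_heading: direction of the currently occupied road link from the map (rad)
    lane_keeping: True when the lane detection reports lane keeping
    alpha:        blending gain; larger values trust the road direction more
    """
    if not lane_keeping:
        # During lane changes or turns the vehicle heading legitimately
        # deviates from the road direction, so no correction is applied.
        return imu_heading
    error = wrap_angle(road_heading - imu_heading)
    return wrap_angle(imu_heading + alpha * error)
```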

Figure 17. Effectiveness of the heading direction rectification.

Figure 18. Inertial trajectory after heading rectification in the experiment of Figure 8.

6.3. Evaluation for Localization

In this section, we focus on the evaluation of the positioning performance. In order to understand the benefit of the proposed 3D-GNSS in urban canyon environments, this paper compares the Weighted Least Square (WLS) GNSS-based integration and the proposed 3D-GNSS-based integration. Figure 19a shows the positioning results of WLS-GNSS and the WLS-GNSS-based integration using purple dots and green dots, respectively. The 3D-GNSS and 3D-GNSS-based integration in the same test are shown in Figure 19b, using red dots and yellow dots. The ground truth is represented by the cyan line. In Figure 19a, the purple dots corresponding to the WLS-GNSS result are randomly spread over a wide area. In contrast, the 3D-GNSS is much more accurate than the WLS-GNSS. Moreover, the 3D-GNSS-based integration also shows better performance than the WLS-GNSS-based integration, which can be seen by comparing the green dots and yellow dots in Figure 19c. Apart from the GNSS positioning method, the two integration systems use the same algorithm. This proves that the more accurate the GNSS method is, the higher the performance of the integrated system.
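For context, the WLS baseline estimates the receiver position by iteratively linearizing the pseudorange equations and solving a weighted least-squares problem. One Gauss-Newton iteration is sketched below; this is the standard textbook formulation, not necessarily the exact receiver implementation, and the weights would typically come from the reported C/N0 or satellite elevation angles.

```python
import numpy as np

def wls_iteration(x0, clock_bias, sat_pos, pseudoranges, weights):
    """One Gauss-Newton iteration of weighted least-squares GNSS positioning.

    x0:           current position estimate, ECEF (3,)
    clock_bias:   current receiver clock-bias estimate (meters)
    sat_pos:      satellite positions, ECEF (n, 3)
    pseudoranges: measured pseudoranges (n,)
    weights:      measurement weights, e.g. from C/N0 or elevation (n,)
    """
    ranges = np.linalg.norm(sat_pos - x0, axis=1)            # geometric ranges
    residuals = pseudoranges - (ranges + clock_bias)          # prefit residuals
    # Geometry matrix: unit line-of-sight vectors plus the clock-bias column.
    H = np.hstack([-(sat_pos - x0) / ranges[:, None], np.ones((len(ranges), 1))])
    W = np.diag(weights)
    dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ residuals)    # normal equations
    return x0 + dx[:3], clock_bias + dx[3]
```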


(a) WLS-GNSS and WLS-GNSS based integration

(b) 3D-GNSS and 3D-GNSS based integration

(c) WLS-GNSS based integration and 3D-GNSS based integration

Figure 19. Positioning results of WLS-GNSS, 3D-GNSS and the integration systems.

To explain the reason for the improvement of the 3D-GNSS method, the satellite conditions in this experiment are illustrated in Figure 20. In the experiment, the GNSS receiver receives signals from nine satellites on average. About five of the satellites are in LOS condition, and the other satellites are NLOS. In the conventional WLS, the NLOS signal is used directly; thus, the pseudorange error of the NLOS signal leads to a positioning error. In the proposed 3D-GNSS positioning method, the 3D map information is adopted for distinguishing NLOS and multipath effects and for correcting the pseudorange errors. Therefore, the 3D-GNSS achieves better performance. It also provides an accurate basis for the integrated localization system.
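The core of the 3D-map aid can be summarized as a visibility test: for a candidate position, each satellite's elevation is compared against the building boundary (the elevation of the highest building edge) in the satellite's azimuth direction. The sketch below illustrates this simplified test only, assuming such a building-boundary profile has already been extracted from the 3D map for the candidate position; the correction of NLOS pseudoranges by tracing the reflected path is omitted and the helper names are illustrative.

```python
def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two azimuths in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def classify_satellites(building_boundary, satellites):
    """Classify satellites as LOS or NLOS at a candidate position.

    building_boundary: list of (azimuth_deg, boundary_elevation_deg) pairs
                       extracted from the 3D building map (assumed precomputed)
    satellites:        list of (prn, azimuth_deg, elevation_deg) tuples
    Returns a dict mapping PRN to 'LOS' or 'NLOS'.
    """
    result = {}
    for prn, az, el in satellites:
        # Boundary elevation in the azimuth bin closest to the satellite azimuth.
        _, boundary_el = min(building_boundary,
                             key=lambda b: angular_distance(b[0], az))
        result[prn] = 'LOS' if el > boundary_el else 'NLOS'
    return result

# Example: with a 30-degree building boundary to the east, a low satellite at
# azimuth ~90 degrees is flagged as NLOS, while a high one remains LOS.
boundary = [(0, 5), (90, 30), (180, 10), (270, 25)]
print(classify_satellites(boundary, [(12, 92, 18), (25, 91, 55)]))
```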

Figure 20. Condition of satellites in the experiment of Figure 19. Purple points indicate the number of LOS satellites from the ground truth position. Orange points are the number of satellites using 3D-GNSS (LOS + NLOS).

We repeated the test three times along the driving route. Table 2 shows the quantitative comparison based on the multiple tests. The comparison includes WLS-GNSS, WLS-GNSS&IMU&Speedometer integration, WLS-GNSS&IMU&Speedometer&Lane detection integration, 3D-GNSS, 3D-GNSS&IMU&Speedometer integration, and 3D-GNSS&IMU&Speedometer&Lane detection integration. The WLS-GNSS&IMU&Speedometer integration and the 3D-GNSS&IMU&Speedometer integration exclude the lane detection function. As demonstrated in Table 2, the 3D-GNSS method shows much better performance than WLS: about 63.2% of its results are in the correct lane. Although 3D-GNSS cannot be used directly for vehicle self-localization, it has potential as the main source in the integration. After integrating with the IMU and speedometer, the correct lane rate increases and the mean error improves to 1.2 m. Table 2 indicates that the integration of GNSS, IMU, speedometer and lane detection achieves a 93% correct lane rate and sub-meter accuracy.

Table 2. Comparison of different positioning methods.

Positioning Method                                     Positioning Error Mean (m)   Positioning Error Std. Dev. (m)   Correct Lane Recognition Rate
WLS-GNSS                                               7.53                         10.06                             12.9%
WLS-GNSS&IMU&Speedometer integration                   5.72                         5.34                              16.1%
WLS-GNSS&IMU&Speedometer&Lane detection integration    3.16                         2.56                              37.4%
3D-GNSS                                                1.48                         1.12                              63.2%
3D-GNSS&IMU&Speedometer integration                    1.17                         0.84                              79.1%
3D-GNSS&IMU&Speedometer&Lane detection integration     0.75                         0.76                              93.0%

To demonstrate the performance of the 3D-GNSS based integration, Figure 21 shows the positioning error of the three types of localization methods in the experiment that has been discussed in Figure 19. Obviously, the 3D-GNSS&IMU&Speedometer&Lane detection integration has the best performance. Most of the time, the positioning error is maintained under 1.5 m, but there are some areas where the positioning error is larger than 1.5 m. Those areas are indicated in Figures 21 and 19b, correspondingly. Areas 1, 2, 3, 5, 6 and 7 in Figure 21 denote the intersections. In the intersections, the lane detection function does not work because of the lack of lane markings. Therefore, the positioning error cannot be reduced compared to the 3D-GNSS&IMU&Speedometer integration. This result indicates that it is necessary to develop other road marking recognition functions for intersection areas, which is expected to reduce the positioning error in those areas. Area 4 is in a road link. This area is enlarged and shown in Figure 22. The vehicle enters this road link at epoch 196554. From epoch 196554 to epoch 196557, the 3D-GNSS results are in the incorrect lane (the lane to the left of the ground truth). At the same time, the camera detects lane keeping behavior. Therefore, the integrated localization system gives an incorrect positioning result, as indicated by the yellow dots. From epoch 196558, the 3D-GNSS appears in the correct lane, and the result of the integration gradually becomes correct because of the effect of the 3D-GNSS. The epochs from 196554 to 196557 correspond to area 4 in Figures 21 and 19b. This case study explains the reason for the 7% failure in the occupied lane recognition. It is important to note that this case also demonstrates that the positioning result can be corrected from an incorrect lane assignment by the 3D-GNSS after several epochs.
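The statistics reported in Table 2 and Figure 21 can be reproduced from per-epoch results in a few lines. The sketch below assumes that the horizontal error and the signed lateral offset from the ground-truth lane center are available for every epoch, and that an epoch counts as "correct lane" when the estimate falls within half a lane width of the ground-truth lane center; the 3.0 m lane width and this criterion are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def summarize(errors_m, lateral_offsets_m, lane_width_m=3.0):
    """Summarize per-epoch positioning results as in Table 2.

    errors_m:          horizontal positioning errors per epoch (meters)
    lateral_offsets_m: signed lateral offsets from the ground-truth lane center (meters)
    lane_width_m:      assumed lane width used for the correct-lane test
    """
    errors = np.asarray(errors_m, dtype=float)
    lateral = np.asarray(lateral_offsets_m, dtype=float)
    in_correct_lane = np.abs(lateral) < lane_width_m / 2.0
    return {
        "mean_error_m": float(errors.mean()),
        "std_error_m": float(errors.std()),
        "correct_lane_rate": float(in_correct_lane.mean()),
    }
```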

Figure 21. Positioning error of 3D-GNSS and the 3D-GNSS-based integrations.

Figure 22. Demonstration of the effect of the 3D-GNSS in the proposed integration.

Figure 23 shows another case, which demonstrates the benefit of the lane detection. At epoch 196619 and epoch 196623, the 3D-GNSS&IMU&Speedometer integration (green dots) appears in the incorrect lane because of the 3D-GNSS error. The lane detection increases the probability of the particles being in the correct lane, as explained in Figure 10. The lane detection reduces the lateral positioning error by correctly adjusting the probability of the particles, which is the reason for the improvements indicated in Table 2 and Figure 21.

Figure 23. Demonstration of the effect of the lane detection in the proposed integration.
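The mechanism visible in Figure 23 is a re-weighting of the particles: when the camera reports the lateral position of the vehicle within the occupied lane, particles whose own lateral offset agrees with that measurement receive a higher likelihood, as described for Figure 10. A minimal sketch of such a weight update follows; the Gaussian likelihood, the map helper `lateral_offset_in_lane` and the noise value `sigma` are illustrative assumptions, not the exact formulation of the paper.

```python
import numpy as np

def update_weights(particles, weights, measured_offset,
                   lateral_offset_in_lane, sigma=0.3):
    """Re-weight particles using the camera-measured lateral offset in the lane.

    particles:              (n, 2) array of particle positions (x, y)
    weights:                (n,) current particle weights (floats)
    measured_offset:        lateral offset from the lane center reported by the
                            lane detection (meters)
    lateral_offset_in_lane: function mapping a position to its lateral offset
                            from the center of the lane it falls in (lane-level
                            map lookup, assumed available)
    sigma:                  assumed standard deviation of the camera measurement
    """
    offsets = np.array([lateral_offset_in_lane(p) for p in particles])
    likelihood = np.exp(-0.5 * ((offsets - measured_offset) / sigma) ** 2)
    new_weights = weights * likelihood
    total = new_weights.sum()
    if total == 0.0:                      # all particles far from the measurement
        return np.full_like(weights, 1.0 / len(weights))
    return new_weights / total            # normalized weights
```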

7. Conclusions

This research extended our previous 3D-GNSS work for vehicle positioning in city environments. As indicated in the experimental results, the innovative 3D-GNSS positioning technique achieved a 1.5 m mean positioning error and a 63% lane recognition rate by reducing the multipath and NLOS effects. However, it is still difficult to satisfy the sub-meter requirement using only 3D-GNSS. This paper proposed the integration of multiple on-board sensors, namely 3D-GNSS, inertial sensors and an on-board monocular camera, for improving the accuracy of vehicle positioning. In the integration system, a lane detection algorithm is developed in order to recognize the lane keeping/changing behavior. Finally, the particle filter integrates the three sources, 3D-GNSS, vehicle motion and lane detection, for localization. In the integration, the drift problem of the inertial sensor is effectively controlled by using the lane detection. Moreover, the lane detection function provides an additional measurement to optimize the positioning result along the lateral direction. The experiments conducted in the urban traffic environment demonstrated that the proposed system achieved a 93% correct lane recognition rate and sub-meter accuracy. In the near future, sensing technology for the intersection scenario will be considered to improve the localization accuracy. In addition, more difficult scenarios will be discussed, such as under bridges or in tunnels, where the GNSS signal is in outage conditions.

Acknowledgments: The authors acknowledge the Grant-in-Aid for the Japan Society for the Promotion of Science (JSPS) Postdoctoral Fellowship for Overseas Researchers.

Author Contributions: Yanlei Gu implemented the lane detection algorithm. He proposed and implemented the integrated localization algorithm. He also conducted the experiments and the performance analysis. Li-Ta Hsu implemented the algorithm of the 3D-GNSS method. He also participated in the discussion of the integration algorithm development. Shunsuke Kamijo is the research leader of this project on GNSS urban localization. He proposed the idea of the 3D-GNSS method and the application for autonomous driving. He participated in all the discussions associated with this project.

Conflicts of Interest: The authors declare no conflict of interest.
