Onboard Vision System for Bus Lane Monitoring

David Fernández-López, Antonio S. Montemayor, Juan José Pantrigo, María Luisa Delgado, and R. Cabido
Universidad Rey Juan Carlos, C/Tulipán s/n, 28933 Móstoles, Spain

Abstract. Improving mobility is one of the most important challenges cities face. The coexistence of public and private vehicles sometimes forces city governments to designate reserved lanes for bus use only. However, not all private drivers respect these reserved lanes, so a surveillance mechanism is necessary. This work presents a vision system devoted to the automatic surveillance of a bus lane. The proposed system consists of a heuristic combination of filtered images of the road for bus lane change detection. We show how to refine the strategy to reduce false positives as well as to improve its computational performance. The resulting system runs in real time on an Intel Atom platform without the use of any programming optimization technique.

Keywords: Mobility, Bus Lane, Automatic Surveillance System, On-board Vision System.

1 Introduction

The Smart City concept has been growing during the last few years. City governments have perceived the importance of this philosophy, and many of them have elaborated plans to become smart cities. The concept emerges from the search for a sustainable city where the citizen is the most important part [5]. The six main axes which define a smart city (i.e., people, economy, living, mobility, environment and governance) are represented in Fig. 1. These areas are used as indicators of a city's progress, and different rankings exist based on the score obtained by the cities in each of those indicators (see, for example, [5]).

In this work we mainly focus on the mobility axis. It depends on different indicators such as sustainable transport or intelligent traffic control. These indicators are directly associated with traffic problems such as traffic jams, vehicle crashes, atmospheric and noise pollution, and the time lost to inefficient transport. In the opinion of various experts in mobility, the solution to all these problems relies on the correct management of public transport [9][11].

Urban buses are the public transport mode that most affects city traffic, since they share the road with private vehicles. With the aim of improving the public transport service, exclusive bus lanes have been introduced. However, in many cities these lanes are indiscriminately occupied by private vehicles. This not only makes public transport inefficient but also affects overall mobility, as buses are forced to use the common lanes. In recent years, several attempts to avoid the illegal


Fig. 1. Main axes of the smart city concept (adapted from [5])

use of the bus lane have been carried out. Some of the measures are only dissuasive, while others physically separate the bus lane from the common road. Although these separators improve the bus lane traffic flow, they are dangerous and cause vehicle crashes.

In this work, we present an on-board automatic system able to detect vehicles parked in the bus lane. The system has a strong dissuasive effect because it can be installed on every bus serving the bus lane network. The use of this technology would contribute to improving urban traffic. Furthermore, physical separators would no longer be required, improving the general mobility and road safety of the cities. Our system hardware consists of a camera used to detect the lane separator, a GPS receiver and an on-board computer.

The remainder of the paper is organized as follows. Section 2 presents a system overview. In Section 3, the lane change detection algorithms are detailed. Section 4 presents the results obtained in tests under real conditions. Finally, Section 5 summarizes the most important conclusions extracted from this work.

2 System Overview

This section presents an overview of the system hardware and software. The hardware view is depicted in Fig. 2 and consists of two main subsystems: the on-board system and the control center. The former is the focus of this paper, as the control center differs from city to city. The on-board system consists of a camera, a GPS receiver and an on-board computer installed in each bus. As shown in Fig. 2, each bus sends information concerning the offending vehicles through a wireless network to the central server. This information is forwarded to the control center, where a human operator performs a manual validation. For this task, the operator uses a front-end in which the sequence of the offense can be visualized, helping him or her decide whether or not to file a complaint.

Fig. 3 presents the software view of the on-board system. It describes the relationship among the four main modules: setup, GPS data analyzer, lane change detection and ALPR.


Fig. 2. Hardware System Overview

Fig. 3. Software System Overview


The setup module reads the scene configuration and the bus-lane configuration. The GPS data analyzer parses the GPS data, allowing the system to categorize the different lanes in the city. Using the camera images, the lane change detection module distinguishes the lane in which the bus is located. Finally, the ALPR module provides the license plate information of the offending vehicles.
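As a rough illustration of how these four modules might interact on the on-board computer (the paper does not specify this interface, so all names and the wiring below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    plate: str
    lat: float
    lon: float

# Hypothetical stubs standing in for the modules described above.
def categorize_lane(lat, lon):        # GPS data analyzer: map-match the fix
    return "bus_lane"

def bus_is_changing_lane(frame):      # lane change detection (Section 3)
    return False

def read_plate(frame):                # ALPR module
    return "0000XXX"

def process_frame(frame, lat, lon):
    """One iteration of the hypothetical on-board loop."""
    if categorize_lane(lat, lon) != "bus_lane":
        return None                   # only monitored lanes are processed
    if bus_is_changing_lane(frame):
        return None                   # skip frames while the bus changes lane
    plate = read_plate(frame)         # plate of the offending vehicle, if any
    return Detection(plate, lat, lon) # sent to the control center for review
```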

3 Bus Lane Change Detector

A main requirement of our system is real-time execution on a commodity PC, so it is necessary to avoid processing the whole image. To this aim, we define a region of interest (ROI) covering the width of the lane and enough distance to evaluate the lane delimiters, as in [1]. The lane segmentation and the lane change detection are applied only within this ROI. An example of this ROI can be seen in Fig. 4.a.

Fig. 4. a) Example of a ROI. b) Image division to compute the horizontal projection.
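As a minimal sketch of this step, the ROI can be cropped with a simple array slice; the concrete coordinates, file name and function names below are illustrative assumptions, since the real bounds come from the scene configuration read by the setup module:

```python
import cv2

# Illustrative ROI bounds for a 320x240 frame (not the calibrated ones).
ROI_TOP, ROI_BOTTOM = 120, 240   # lower part of the image
ROI_LEFT, ROI_RIGHT = 40, 280    # covers the width of the lane

def extract_roi(frame):
    """Crop the region of interest so later filters never touch the full image."""
    return frame[ROI_TOP:ROI_BOTTOM, ROI_LEFT:ROI_RIGHT]

cap = cv2.VideoCapture("bus_sequence.avi")  # hypothetical input video
ok, frame = cap.read()
if ok:
    roi = extract_roi(frame)
```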

3.1 Image Filtering Framework and Lane Change Heuristic

We apply the following heuristic to detect a lane change: "if the onboard camera is centered in the vehicle, the lane delimiter will cross the image from one side to the other when the vehicle performs a lane change". Based on this assumption, the segmented image is split into three regions ([0], [1] and [2] in Fig. 4.b) corresponding to the lower half of the left, central and right zones, respectively. To decide where the lane delimiters are, we count the labelled pixels in each region. Once these values are obtained, we determine that a lane change occurs when most of the significant pixels are located in the central part of the image (i.e., when the line marker is in front of the vehicle). Additionally, to avoid false positives produced by other road markings, the method compares the number of labelled pixels in the left and right zones to those labelled in the central zone, as sketched below.
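A minimal sketch of this heuristic follows; the three-way split of the lower half and the comparison margin are illustrative assumptions:

```python
import numpy as np

def lane_change_candidate(mask, margin=1.5):
    """Apply the three-region heuristic to a binary mask of lane-marker pixels.

    mask: 2D array where nonzero pixels are labelled as marker pixels.
    Returns True when most significant pixels sit in the central region.
    """
    h, w = mask.shape
    lower = mask[h // 2:, :]                    # only the lower half is used
    third = w // 3
    left = np.count_nonzero(lower[:, :third])             # region [0]
    center = np.count_nonzero(lower[:, third:2 * third])  # region [1]
    right = np.count_nonzero(lower[:, 2 * third:])        # region [2]
    # Lane delimiter in front of the vehicle: center dominates both sides.
    return center > margin * left and center > margin * right
```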


3.2 Decision Fusion Approach

In order to refine the decision process, we adopt a fusion approach: it evaluates different visual features and combines them to decide whether a lane change is taking place. Specifically, we consider four different features to extract the lane delimiters:

– Adaptive color segmentation: this filter is based on the road properties presented in [8], which define the valid road colors as a gray with very similar RGB components. During the execution we update the maximum and minimum gray values using a 3D color histogram, as in [6].
– Fixed thresholding: the lane delimiters in the city of Madrid are white. We experimentally determine the threshold to segment the white road markers, as in [15], [13]. Note that this approach detects not only lane delimiters but all road markings.
– Horizontal motion detection using image difference: when a lane change happens, the lane delimiter moves horizontally, so this measure can segment it.
– Background subtraction: this measure is the result of subtracting an adaptive background, computed as described in [7], from the current frame.

Once these measures have been evaluated, the system decides that a lane change is taking place if at least three of the four measures are positive.

3.3 RANSAC-Based Line Extraction Approach

The decision fusion approach produces many false positives and does not run in real time on a low-cost PC unless optimized SIMD instructions are used. For these reasons, we need a more efficient strategy. This new approach reduces the number of necessary measures to improve the execution performance and includes a RANSAC method [4] to decide whether there is a lane change or not. The proposed method is based on two measures of the previous approach: adaptive color segmentation and thresholding. We select these filters because they give fewer false positives and, at the same time, offer good performance. The adaptive color segmentation is the same as in the previous approach; this measure has the best ratio between true positives and false positives (see Section 4).

The thresholding method is improved using the Otsu method [10], which computes the threshold that minimizes the intraclass variance between two classes in a histogram. Otsu thresholding has been used in the literature to separate the gray part of the road from the white lane delimiters [3][14]. It is important that the ROI image contains lane delimiters before applying the Otsu method; otherwise, the method separates two gray levels, resulting in an incorrect segmentation. To avoid this, we first perform an image thresholding with a high threshold value and compute the percentage of pixels that survive this threshold. If this percentage is sufficient, the Otsu method is applied; otherwise, we apply a thresholding with a fixed predefined value. Figure 5 shows the difference among a fixed thresholding, the original Otsu method and our improved Otsu method. Note that the original Otsu method fails in the third and fourth examples, resulting in a wrong segmentation (Fig. 5.c), while our improved method obtains very accurate results (Fig. 5.d).
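The improved Otsu procedure can be sketched as follows; the pre-threshold, the survivor percentage and the fixed fallback value are illustrative parameters, not the calibrated ones used in the paper:

```python
import cv2
import numpy as np

def improved_otsu(gray, pre_thresh=200, min_survivors=0.01, fixed_thresh=180):
    """Threshold a grayscale (uint8) ROI, falling back to a fixed value when
    the ROI is unlikely to contain lane delimiters (illustrative parameters)."""
    # Pre-check: how many pixels survive a deliberately high threshold?
    survivors = np.count_nonzero(gray >= pre_thresh) / gray.size
    if survivors >= min_survivors:
        # Enough bright pixels: Otsu can separate gray road from white markers.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    else:
        # Otherwise Otsu would split two gray levels; use a fixed threshold.
        _, binary = cv2.threshold(gray, fixed_thresh, 255, cv2.THRESH_BINARY)
    return binary
```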


Fig. 5. a) Original image, b) fixed threshold, c) original Otsu method, and d) improved Otsu method

Fig. 6. Example of fitted lines using the RANSAC method

Fig. 7. Supported line angles for each zone

We fit a line in the segmented image using the RANSAC method, selecting candidate points belonging to the line at a predefined set of heights. RANdom SAmple Consensus (RANSAC) [4] is a method for fitting a model to a point cloud with outliers. We use a RANSAC approach instead of other methods (for example, linear regression or the Hough transform) for two main reasons. The first is that we expect, at most, one lane delimiter in each part of the ROI and we need to fit a line for each one. The second is the performance of the RANSAC method, especially when compared to the Hough transform [12]. Figure 6 shows some examples of line fitting using the RANSAC method after the candidate point selection. Once the line which best fits the centroids is obtained, we compute the angle between the line and the horizontal. We then consider the fitted line to be a lane separator if its angle belongs to a range of predefined valid angles. Figure 7 depicts the valid angles considered for each region.
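The line fitting step can be sketched as follows, written directly rather than with a library; the iteration count, the inlier tolerance and the candidate-point representation are illustrative assumptions:

```python
import math
import random

def ransac_line(points, iters=100, tol=2.0):
    """Fit a line to candidate points (x, y) while tolerating outliers.

    Returns the best point pair and its angle (degrees) w.r.t. the horizontal,
    or None when no line can be fitted.
    """
    if len(points) < 2:
        return None
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        dx, dy = x2 - x1, y2 - y1
        norm = math.hypot(dx, dy)
        if norm == 0:
            continue  # degenerate sample: both points coincide
        # Count points within tol of the candidate line.
        inliers = sum(1 for (px, py) in points
                      if abs(dy * (px - x1) - dx * (py - y1)) / norm <= tol)
        if inliers > best_inliers:
            best_inliers, best = inliers, ((x1, y1), (x2, y2))
    if best is None:
        return None
    (x1, y1), (x2, y2) = best
    angle = math.degrees(math.atan2(abs(y2 - y1), abs(x2 - x1)))
    return best, angle

# The fitted line is accepted as a lane delimiter only when its angle falls
# inside the valid range predefined for that ROI region (Fig. 7).
```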


Fig. 8. State diagram for the lane change detection method

To decide whether a lane change is taking place, we use the state diagram depicted in Figure 8, where h[x] (x = 1..5) stores a history of the last five consecutive video frames. The mentioned figure shows the following two conditional states (see the sketch after this list):

– "Potential lane change in f?": in this state, the algorithm evaluates whether a potential lane change is taking place. This event is characterized by the presence of a line in the central section and, at the same time, no lines in the left and right regions of the ROI.
– "Is there a lane change?": in this state, the algorithm confirms the previous potential lane change. To this aim, the algorithm considers several previous frames stored in the recent history buffer and determines the position of the line in the center zone of each frame. Then, if these positions show a continuous lateral shift, we consider that a lane change is taking place.
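The confirmation logic can be sketched as follows; the five-frame buffer follows the paper's h[1..5] history, while the pixel shift threshold and the strict monotonicity test are illustrative assumptions:

```python
from collections import deque

HISTORY = 5          # h[1..5]: last five consecutive frames
MIN_SHIFT = 3        # minimal lateral shift in pixels (illustrative)

history = deque(maxlen=HISTORY)  # x-positions of the central line, or None

def update(center_line_x):
    """Feed the line position found in the central zone (None when absent)
    and report whether a lane change is confirmed."""
    history.append(center_line_x)
    if len(history) < HISTORY or any(x is None for x in history):
        return False  # no sustained potential lane change yet
    # Confirmed when positions show a continuous shift in one direction.
    pts = list(history)
    deltas = [b - a for a, b in zip(pts, pts[1:])]
    rightward = all(d >= MIN_SHIFT for d in deltas)
    leftward = all(d <= -MIN_SHIFT for d in deltas)
    return rightward or leftward
```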

4 Results

We tested our algorithms on a 57-minute video sequence at 25 fps taken from a real urban bus using a Logitech consumer camera configured at a video resolution of 320×240 pixels. In this sequence 44 lane changes occur (from the bus lane to a normal lane or vice versa), 16 of them from right to left and 28 from left to right. Additionally, to test the performance of each individual filter, we used a subsequence of 5130 frames (about 4 minutes at 25 fps) in which seven bus lane changes occur.

4.1 Lane Change Detection Results

Our experimental design includes a preliminary experiment and a final experiment. The objective of the preliminary experiment is to identify the best filter method to be coupled with the RANSAC method. To this aim, we tested the performance of each filter presented in Section 3 on the image subsequence described above. Figure 9 illustrates the results obtained. As can be seen from the figure, the background subtraction and our proposed motion detection are the worst methods. The improved Otsu method performs


Fig. 9. Quality estimation of individual filters applied to the test sequence with a ground truth of 7 lane changes. True positives (TP) are correct lane change detections and false positives (FP, or f_p in Eq. 1) are incorrect ones.

reasonably well, as it is able to significantly reduce the number of false positives. We therefore conclude that the RANSAC-based approach should use the Otsu thresholding and the adaptive color filters.

Table 1 summarizes the results of our three approaches for the considered video sequence. It shows the number of real bus lane changes detected (from right to left and from left to right), the number of false positives (f_p) and an error measure ε that combines the difference between real and detected lane changes with the false positives, given by Eq. 1:

ε = t_n + f_p    (1)

where t_n denotes the number of real lane changes that are not detected. This error measure thus combines two error sources; however, in practical situations false positives are more inconvenient than missing real lane changes, as false alarms must be minimized whenever possible. Table 1 shows that the RANSAC-based approach obtains the best results (the lower the value of ε, the better the method), followed by the decision fusion strategy. For example, the RANSAC approach misses 8 + 13 = 21 real changes and produces 2 false positives, giving ε = 23. Finally, the single improved Otsu thresholding produces the highest number of false detections among the considered methods. Figure 10 depicts typical cases of missed detections or false positives. The most challenging situations are given by roads in bad condition with patches or cracks.

Table 1. Results of our three approaches

                  Left changes   Right changes   f_p    ε
Ground truth           16             28          –     –
Improved Otsu          14             24         55    61
Fusion approach        11             16         34    51
RANSAC approach         8             15          2    23


Fig. 10. Views of different frames responsible for false positives

5 Conclusions

In this work, we have presented a vision system devoted to the automatic surveillance of a bus lane. The proposed system consists of a heuristic combination of filtered images of the road for bus lane change detection. We have shown how to refine the strategy to reduce false positives as well as to improve its computational performance. Experimental results show that the system performance is adequate for this task, as it detects the lane changes of the buses with a low rate of false positives. The resulting system is computationally efficient, running in real time on an Intel Atom platform without the use of any programming optimization technique.

Acknowledgments. This work has been supported by the Cátedra de Ecotransporte, Tecnología y Movilidad between Universidad Rey Juan Carlos and the Empresa Municipal de Transportes de Madrid (EMT) through the BusVigia project, by the Spanish Ministry of Economy and Competitiveness grant TIN2011-28151 and by the Government of the Community of Madrid grant ref. S2009/TIC-1542.

References

1. Aly, M.: Real time detection of lane markers in urban streets. In: Intelligent Vehicles Symposium, pp. 7–12 (2008)
2. Caragliu, A., Del Bo, C., Nijkamp, P.: Smart cities in Europe. VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics (2009)
3. D'Cruz, C., Zou, J.J.: Lane detection for driver assistance and intelligent vehicle applications. In: International Symposium on Communications and Information Technologies, pp. 1291–1296 (2007)
4. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)
5. Giffinger, R.: Smart cities: Ranking of European medium-sized cities (2007), http://www.smart-cities.eu
6. Cheng, H.-Y., Jeng, B.-S., Tseng, P.-T., Fan, K.-C.: Lane detection with moving vehicles in the traffic scenes. IEEE Transactions on Intelligent Transportation Systems 7(4), 571–582 (2006)
7. Wang, J.-M., Chung, Y.-C., Chang, S.-L., Chen, S.-W.: Lane marks detection using steerable filters. In: IPPR Conference on Computer Vision (2003)
8. Kuo-Yu, C., Sheng-Fuu, L.: Lane detection using color-based segmentation. In: Intelligent Vehicles Symposium, pp. 706–711 (2005)
9. Odeck, J.: Congestion, ownership, region of operation, and scale: their impact on bus operator performance in Norway. Socio-Economic Planning Sciences 40(1), 52–69 (2006)


10. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man and Cybernetics 9(1), 62–66 (1979)
11. Santos, G., Behrendt, H., Teytelboym, A.: Part II: Policy instruments for sustainable road transport. Research in Transportation Economics 28, 46–91 (2010)
12. Se, S., Lowe, D., Little, J.: Global localization using distinctive visual features. In: International Conference on Intelligent Robots and Systems, vol. 1, pp. 226–231 (2002)
13. Su, C.-Y., Fan, G.-H.: An effective and fast lane detection algorithm. In: Bebis, G., et al. (eds.) ISVC 2008, Part II. LNCS, vol. 5359, pp. 942–948. Springer, Heidelberg (2008)
14. Yanqing, W., Deyun, C., Chaoxia, S., Peidong, W.: Vision-based road detection by Monte Carlo method. Information Technology Journal 9, 481–487 (2010)
15. Zhou, X., Huang, X.-Y.: Multi lane line reconstruction for highway application with a single view. In: Third International Conference on Image and Graphics, pp. 35–38 (2004)