
2013 21st Mediterranean Conference on Control & Automation (MED) Platanias-Chania, Crete, Greece, June 25-28, 2013

FPGA Implementation of the V-disparity Based Obstacles Detection Approach

Zohir IRKI, Hamza BENDAOUDI
LSN/Ecole Militaire Polytechnique (EMP)
Bordj El Bahri, Algiers, Algeria
[email protected]

Michel DEVY (1, 2)
1: CNRS, LAAS, 7 Av Col Roch, F-31400 Toulouse, France
2: Univ de Toulouse, LAAS, F-31400 Toulouse, France
[email protected]

Abdelhakim KHOUAS
University M'hamed Bougara
Boumèrdes, Algeria

Abstract— In this paper we present an implementation of the whole V-disparity obstacles detection approach on an FPGA component. The approach is based on the use of stereoscopic images to build an image called the V-disparity image, from which obstacles can easily be extracted using a particular Hough transform. An FPGA is a good platform for using the approach in an embedded system. Implementing the approach on an FPGA component requires parallelizing all of its steps: the stereoscopic matching, the V-disparity image construction and the obstacles extraction using a unidirectional Hough transform. These steps have been described in VHDL using the Xilinx ISE 9.2 software. Finally, the entire approach has been implemented on a Virtex-II XC2V1000 FG456-4 placed on an RC200 board.

Keywords— FPGA; CENSUS stereoscopic matching; V-disparity; obstacles detection; Hough transform.

I. INTRODUCTION

In many applications, obstacles detection is an essential task that must be performed in real time, robustly and accurately [1]. Obstacle detection systems typically compute the position of obstacles relative to a mobile agent by using range information, which may be obtained from laser ranging, sonar ranging or vision based techniques [2]. Vision approaches have the advantages of low cost, low power consumption and a high degree of mechanical reliability, and the speed and accuracy of vision algorithms scale with faster computing platforms. Stereovision is a vision technique often used in obstacles detection, but the underlying algorithms are sometimes not precise or fast enough to be used efficiently [1, 3]. Many obstacles detection approaches using stereovision have been presented in the literature, and most of them assume a planar road [4]. The approach proposed in [1] makes use of a stereo algorithm based on the construction and subsequent processing of the V-disparity image. This image can be obtained from a pair of stereoscopic images, after the computation of a disparity map. From the V-disparity image, it is straightforward to detect specific planes or cylindrical surfaces in the scene in a robust, fast and effective way. A good representation of the 3D geometric content of the scene is thus obtained, and these data are used as the input of the obstacles detection system.

In order to adapt this approach to a real-time treatment, an architecture was presented in [5]. In this architecture, the input stereoscopic images are first rectified, and a filtering method is applied to reject wrong matches when computing the disparity map, which increases the algorithm performance. The matching of the stereoscopic images is carried out using the Zero mean Sum of Squared Differences (ZSSD) correlation criterion, and an estimation of the motion of the objects is done before carrying out the obstacles detection task. The method is of high complexity and cannot run in real time on standard processors, so the application is accelerated using special purpose hardware: the authors proposed to use 5 FPGAs, 4 DSPs, 6 memories and an interface manager (ARM9). This architecture can thus be classified as a high cost architecture; indeed, it requires a lot of computation since it makes use of the ZSSD criterion. In this paper, we propose an architecture which essentially uses comparators and accumulators. We use a particular Hough transform in order to detect obstacles in the V-disparity image, in which obstacles are represented by vertical lines while the road is represented by an oblique line; this particular Hough transform allows us to extract only the vertical lines. Combining the V-disparity approach with this Hough transform defines a detection approach which can easily be implemented on an FPGA component, so that real-time detection can be carried out. Our V-disparity based obstacles detection approach consists of three major steps: the stereoscopic matching, the V-disparity image construction and the obstacles extraction. The stereoscopic matching is an important step of this approach; the disparity map is obtained using the CENSUS criterion because of its simplicity. In order to implement the approach on an FPGA component, a main architecture has been proposed, composed of three sub-architectures, each one ensuring a specific task. In this paper, we briefly describe the stereoscopic matching and how the V-disparity image is built, we describe how the Hough transform is used to characterize obstacles, and at the end of the paper we describe the architecture proposed to implement the approach on an FPGA component.

II. STEREOSCOPIC MATCHING

Stereo matching is a problem that has been studied for several decades in computer vision, and many researchers have worked on solving it. The proposed approaches can be broadly classified into correlation (area) based and feature based approaches [6]. Area based stereo designates algorithms which use image-domain similarity metrics in the correspondence process; area based algorithms can be further divided into the following categories:
- cross-correlation based;
- least-squares region growing;
- simulated annealing based.

Feature based stereo designates algorithms which perform stereo matching on higher-level parameterizations called image features. These algorithms can be classified by the type of features used in the matching process:
- edge-string based;
- corner based;
- texture region based.
Stereo matching methods can also be classified as local or global. Local methods include block matching, feature matching and gradient based optimization, while global methods include dynamic programming, graph cuts and belief propagation. Thanks to a rectification process, the search range of the stereo matching can be restricted to one dimension, and local methods can then be very efficient compared with global methods [6]. The rectification process can be carried out using pre-calculated tables [7].

Figure 1. Correlation based stereo matching principle.

The census based correlation, which is a local method, is used here because of its robustness to random noise within a window and its bit-oriented cost computation [8]. The census algorithm, described by Zabih and Woodfill [9], computes the disparity in two steps. First, the so-called Census transform is computed for each pixel in both images; it describes the relationship between that pixel and its surrounding neighborhood: neighbor pixels with an intensity smaller than the center pixel produce a 'zero' in the Census vector, and a 'one' otherwise. The Census transform can be carried out using a 3x3, 5x5, 7x7 or 9x9 sliding window.

Figure 2. CENSUS Transformation.

The second step computes the displacement (disparity) by summing up the Hamming distances over a small window in the left and in the right image; the minimum along each horizontal line defines the disparity. Figure 3 shows a pair of stereoscopic images, while figure 4 shows the disparity images computed using the Census criterion.

Figure 3. Stereoscopic images.

Figure 4. CENSUS Stereomatching results.
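The first step of the census algorithm described above (the transform itself) can be summarized by a small software reference model; the matching step is sketched at the end of Section III. The C code below is only illustrative and is not the VHDL design of this paper: the row-major 8-bit image buffers, the function name and the skipping of border pixels are assumptions. The neighbour-comparison convention follows the description above (a '1' when the neighbour is not smaller than the centre).

#include <stdint.h>

/* Illustrative software model of the 3x3 CENSUS transform (not the VHDL design).
 * Each pixel is replaced by an 8-bit code: one bit per neighbour, set to '1'
 * when the neighbour is not smaller than the centre pixel, '0' otherwise. */
void census3x3(const uint8_t *img, uint8_t *code, int width, int height)
{
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            uint8_t centre = img[y * width + x];
            uint8_t c = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0)
                        continue;                         /* skip the centre pixel */
                    c <<= 1;                              /* next bit of the code  */
                    if (img[(y + dy) * width + (x + dx)] >= centre)
                        c |= 1;                           /* neighbour >= centre   */
                }
            }
            code[y * width + x] = c;                      /* 8-bit CENSUS signature */
        }
    }
}

Border pixels are simply skipped in this sketch; the hardware has the same behaviour, since it only produces codes once a full window is available in its shift register.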

III. STEREOSCOPIC MATCHING ARCHITECTURE

Many previous works have studied the implementation of stereoscopic matching on an FPGA component [8, 10, 11]. The purpose of this work is not to propose a novel architecture for the stereoscopic matching itself, but for a stereovision based obstacles detection approach. In order to implement the CENSUS based stereoscopic matching on an FPGA component, we have adopted the main scheme illustrated in figure 5. In the proposed architecture, which is similar to the one proposed in [13], the disparity image is computed in two main steps:
- the CENSUS transformation of each stereoscopic image (left and right);
- the computation of the best correlation score.


Figure 5. The main scheme for the stereoscopic matching.

In the proposed architecture, we use a 3x3 correlation window and we suppose that the maximum disparity is 40. The CENSUS transformation thus consists in computing, for each image, a new image in which each pixel's gray level is substituted by an 8-bit code. The computation of the best score then consists in extracting the minimum Hamming distance among 40 values; in order to carry out this task in real time, a simple extraction algorithm is adopted.

A. CENSUS transformation

The CENSUS transformation cannot start immediately after the acquisition of the first pixel: a whole window must first be memorized. For this, we use a shift register in which all the pixels needed later are stored. The CENSUS transformation starts after the acquisition of 2 lines and 3 columns of each image. The shift register must have 9 outputs, representing the pixels of a 3x3 sliding window whose core is the pixel P22.

Figure 6. Shift register for CENSUS Transformation.

The core of the sliding window (P22) is compared with the 8 other outputs. We thus obtain an 8-bit code which represents, for each pixel, its CENSUS transformation.
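The shift register of figure 6 can be modelled in software as follows. This is only a sketch under stated assumptions: the image width W, the structure and function names are illustrative, the pixels are assumed to arrive one per clock in raster order, and the few columns where the window wraps around a line boundary are not handled (the hardware likewise ignores them).

#include <stdint.h>
#include <string.h>

#define W 280                      /* assumed image width (one line)          */
#define TAPS (2 * W + 3)           /* pixels held: 2 full lines plus 3 pixels */

/* Software analogue of the shift register of figure 6: once 2 lines and 3
 * columns have been shifted in, the nine taps of the 3x3 window become
 * available together (the centre tap is the pixel P22). */
typedef struct {
    uint8_t reg[TAPS];
    long    count;                 /* number of pixels shifted in so far */
} window_sr;

static void sr_init(window_sr *sr) { memset(sr, 0, sizeof *sr); }

/* Shift one pixel in; returns 1 when the 3x3 window 'win' is complete
 * (centre pixel in win[4]), 0 while the register is still filling. */
static int sr_push(window_sr *sr, uint8_t pixel, uint8_t win[9])
{
    memmove(sr->reg + 1, sr->reg, TAPS - 1);    /* shift every stage by one     */
    sr->reg[0] = pixel;                          /* newest pixel enters stage 0  */
    sr->count++;
    if (sr->count < TAPS)
        return 0;                                /* window not yet complete      */
    for (int row = 0; row < 3; row++)            /* three consecutive taps taken */
        for (int col = 0; col < 3; col++)        /* from each of the three lines */
            win[row * 3 + col] = sr->reg[row * W + col];
    return 1;
}

In hardware the "shift" costs nothing (each stage is a register); in software a circular buffer would avoid the per-pixel memmove, but the straight shift keeps the sketch close to figure 6.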

B. Disparity Computation

The CENSUS code of a pixel of the left image is compared with the codes of its 40 right candidate pixels in order to determine, for each candidate, a similarity level: two pixels, left and right, which correspond to each other must have a high level of similarity. This comparison is carried out in two steps. In the first, we apply 'XOR' operators between the left code and all the right candidates' codes. The second step is a summation: we compute, for each right candidate, how many bits are set to '1'; the best candidate is the one having the smallest number of bits set to '1'. The 'XOR' and summation steps can easily be carried out in real time using a set of comparators and adders. The most difficult part is to determine the minimum and its position, since the position of the minimum represents the disparity value. Many sorting algorithms could be used to determine the minimum, but for a real-time treatment a parallelized algorithm must be implemented. In our work, the determination of the disparity value is done in two steps. In the first, we determine the minimum value: for each candidate value, we count how many of the other values are smaller than it; if two values A and B are equal, we consider that A < B and B > A, so that only one of them is counted as the smaller. In this step, we use only comparators and adders: 780 comparators (one per pair of candidates, 40x39/2) and 40 adders. At the end of this step, each candidate is associated with a value from '0' to '39', and the minimum is the candidate associated with '0'.

Figure 7. Minima extraction approach.

The second step is a decision step: the candidates are examined in order and the rank (position) of the minimum is taken as the disparity value. Figure 7 illustrates the extraction of the minimum among 3 values. Our stereoscopic matching architecture has been simulated using Xilinx ISE 9.2; it provides a usable disparity map, which is sufficient since the next step (the V-disparity approach) does not need a high-quality disparity map [1].
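The disparity computation of this subsection can likewise be summarized by a software reference model. The sketch below assumes 8-bit census codes, a maximum disparity of 40 and the rank-counting minimum selection described above, with ties broken in favour of the first candidate; it is an illustrative C model, not the comparator-and-adder network of the FPGA design.

#include <stdint.h>

#define DMAX 40   /* maximum disparity assumed in the proposed architecture */

/* Hamming distance between two 8-bit CENSUS codes: XOR, then count the '1' bits. */
static int hamming8(uint8_t a, uint8_t b)
{
    uint8_t x = a ^ b;
    int n = 0;
    while (x) { n += x & 1; x >>= 1; }
    return n;
}

/* Rank-based minimum selection, mimicking the comparator network: for each of
 * the 40 scores, count how many other scores beat it (strictly smaller, or
 * equal but earlier). The candidate whose count is 0 is the minimum, and its
 * index is the disparity. */
static int disparity_for_pixel(uint8_t left_code, const uint8_t right_codes[DMAX])
{
    int score[DMAX];
    for (int d = 0; d < DMAX; d++)
        score[d] = hamming8(left_code, right_codes[d]);

    for (int d = 0; d < DMAX; d++) {
        int rank = 0;                               /* number of better scores      */
        for (int k = 0; k < DMAX; k++) {
            if (k == d) continue;
            if (score[k] < score[d] || (score[k] == score[d] && k < d))
                rank++;                             /* k beats d (tie: first wins)  */
        }
        if (rank == 0)
            return d;                               /* rank 0 -> minimum -> disparity */
    }
    return 0;                                       /* not reached                    */
}

In hardware the two nested loops become the 780 parallel comparators feeding 40 adders, so all the ranks are available simultaneously instead of sequentially.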

IV. V-DISPARITY APPROACH AND ARCHITECTURE

The V-disparity approach is a fast and robust vision based obstacle detection algorithm proposed by Labayrade and Aubert [1]. The V-disparity map is used to detect obstacles in front of a road vehicle; the system proved capable of warning the driver about obstacles and of controlling the longitudinal position of a low-speed automated vehicle [14]. The V-disparity algorithm supposes that a disparity map I∆ has been computed from a stereo image pair.

Once I∆ has been computed, the V-disparity image Iv∆ is built by accumulating the pixels of same disparity in I∆ along the v axis [15]. The V-disparity image has the same number of rows as the disparity map, but its number of columns is equal to the maximum disparity. Extracting straight lines or curves from the V-disparity image leads to the extraction of 3D global surfaces in the scene. These global surfaces correspond either to the road surface or to obstacles: in the V-disparity image, the road surface is represented by an oblique straight line while the obstacles are represented by vertical lines. Any robust 2D processing, like the Hough transform, can be used for the extraction of these lines. For obstacles detection, only the V-disparity image is needed, but the additional construction of a second image, called the U-disparity image, allows knowing the obstacle dimensions after its detection. The U-disparity image is built in the same way as the V-disparity image: its number of columns is equal to that of the disparity map, while its number of rows is equal to the maximum disparity. In the U-disparity image, the road profile is not visible, while obstacles are represented by horizontal lines.

Figure 8. A disparity map and the V-disparity image.

The V-disparity approach, being robust, does not require a disparity map of high quality. For this reason, we have used the disparity map resulting from the architecture proposed above to build the V-disparity image. In our work, we have proposed an architecture which ensures the real-time construction of both the U and V-disparity images. In this architecture, we have used accumulators and a control signal generator; the disparity image is substituted by a Read Only Memory (ROM).

Figure 9. The V-disparity architecture.

The accumulator used is composed of a RAM followed by a system which increments the value memorized at a targeted address. In this architecture, we first read disparity data from the Disp_ROM. From the ROM address we can easily compute the image row and column, and we use this information to compute two addresses: the first for RAM_U_disp, intended to memorize the U-disparity image, and the second for RAM_V_disp, intended to memorize the V-disparity image. When a value is read from RAM_V_disp (or RAM_U_disp), it is incremented in the inc_val block before being written back to the RAM.

Figure 10. The U and V-disparity from our architecture.

For simulation purposes, blocks which memorize the simulation results in text files have been added to the architecture. Processing the resulting files shows that the proposed architecture guarantees the construction of the same U and V-disparity images as a software program. Figure 10 illustrates the U and V-disparity images built from the disparity image produced by the proposed architecture.
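The accumulation performed by the RAM-based architecture of figure 9 can be summarized by the following software reference model. It is only a sketch: the image size, the uint16_t accumulator type and the function name are assumptions, not the VHDL entities of the design.

#include <stdint.h>
#include <string.h>

#define WIDTH  280   /* assumed disparity-map width  */
#define HEIGHT 180   /* assumed disparity-map height */
#define DMAX    40   /* maximum disparity            */

/* Software model of the accumulators of figure 9: for every pixel of the
 * disparity map, one cell of the V-disparity image (indexed by row and
 * disparity) and one cell of the U-disparity image (indexed by disparity and
 * column) are incremented. */
void build_uv_disparity(const uint8_t disp[HEIGHT][WIDTH],
                        uint16_t v_disp[HEIGHT][DMAX],
                        uint16_t u_disp[DMAX][WIDTH])
{
    memset(v_disp, 0, sizeof(uint16_t) * HEIGHT * DMAX);   /* clear the accumulators */
    memset(u_disp, 0, sizeof(uint16_t) * DMAX * WIDTH);

    for (int v = 0; v < HEIGHT; v++) {           /* image row    */
        for (int u = 0; u < WIDTH; u++) {        /* image column */
            uint8_t d = disp[v][u];
            if (d >= DMAX)
                continue;                        /* ignore out-of-range disparities */
            v_disp[v][d]++;                      /* same row, column = disparity    */
            u_disp[d][u]++;                      /* row = disparity, same column    */
        }
    }
}

The dimensions follow the definitions above: the V-disparity image keeps the rows of the disparity map with one column per disparity, and the U-disparity image keeps its columns with one row per disparity.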

V. OBSTACLES DETECTION ARCHITECTURE

As mentioned before, the Hough transform (HT) can be used in order to extract straight lines or curves from the V-disparity image. In this section, we describe an approach which extracts only the vertical lines (representing obstacles) from the V-disparity image. This approach is inspired by the Hough transform, which can only be applied to binary images [16]. In figure 12, we considered a perfect V-disparity image: we first applied a classical HT to that image in order to extract all the lines present in the scene, and we then applied the HT-inspired detection approach to extract only the vertical lines. Restricting the extraction to vertical lines makes the approach easy to implement on an FPGA component, since the HT then reduces to comparison operations. Note that the V-disparity image used here is generated, not computed from a disparity map.

Figure 11. Obstacles extraction from the V-disparity image.

In reality, a perfect V-disparity image does not exist. In order to validate the proposed approach on a real case, we used the V-disparity image issued from the architecture of Section IV. The application of the approach to this V-disparity image (figure 11) leads to the result represented in figure 12.

Figure 12. Obstacle extraction from a real V-disparity image.

From this result, we cannot deduce any information: the V-disparity image in its original state is not directly usable, and some processing must be applied first. In our work, this processing consists of background suppression and binarization. The background suppression eliminates the part of the image that can be considered too far away. In the disparity map, far objects have in general a small disparity value, so in the V-disparity image these objects are placed in the columns of small index; in our work, we have considered all objects having a disparity lower than 10 as far. The suppression of the background is realized by constructing a new image in which all pixels belonging to the columns whose index is less than this pre-defined disparity are set to '0'. The binarization of the V-disparity image consists of comparing the value of each of its pixels with a pre-fixed threshold (the edge): if the pixel's value is higher than the edge, the pixel is set to '255', and to '0' otherwise. In the binarization step, the selection of the edge is very important; in order to select the best value, we have run a set of tests, whose result is illustrated in figure 13.

Figure 13. Edge influence on the binarization step.

Figure 13 illustrates that the best edge can be taken between 0.055 and 0.07, which implies that the pixel's value must be compared with 14 or 18. The result of applying the HT-inspired approach to a treated image, with an edge equal to 16, is represented in figure 14.

Figure 14. Obstacles extraction from a treated V-disparity image.

In this figure, we can distinguish 4 obstacles, whose positions are given by the disparities 11, 12, 18 and 19. These positions are so close that we can consider the obstacles to be at disparities 11 and 18; the number of detected obstacles is thus 2 instead of 4. In figure 3, these obstacles are the chair and the box. For the implementation of the approach on an FPGA component, we have proposed a first architecture in which the binarized image is built via a comparison process. The HT is carried out using accumulators, again composed of a RAM followed by an incrementing system. When the value of an accumulator exceeds a pre-fixed Hough edge, we consider that an obstacle exists at the considered disparity, and the obstacle is memorized. In a real scene, obstacles can be classified into three categories:
- far obstacles: such obstacles do not represent any danger and can be neglected; they are situated on the left side of the V-disparity image;
- close obstacles: such obstacles should already have been detected earlier; they are situated on the right side of the V-disparity image;
- obstacles in a median situation: such obstacles are neither too far nor too close; they are the obstacles of interest and must be detected.
Figure 15 represents the classification of obstacles according to their situation in the V-disparity map. The green area represents the zone of interest: if an obstacle is detected in this area, an alarm should be raised.

Figure 15. Interesting obstacle's area.

Obstacles classification allows reducing the area in which obstacles are searched for, and so limits the memory requirement. Another modification also reduces the memory requirement: it consists of changing the internal composition of an accumulator. The accumulators then become simple addressed counters, whose number is equal to the number of interesting disparities (those where an eventual obstacle can represent a danger).
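Putting the treatments of this section together, the vertical-line extraction can be prototyped as below. This is only an illustrative C sketch of the comparison-based scheme, not the RAM/accumulator blocks of the FPGA design; the background disparity 10 and the binarization edge 16 come from the discussion above, while the Hough edge and the upper bound of the zone of interest are plain assumptions.

#include <stdint.h>

#define HEIGHT     180  /* assumed V-disparity height (image rows)          */
#define DMAX        40  /* V-disparity width = number of disparities        */
#define D_FAR       10  /* disparities below this are background (too far)  */
#define BIN_EDGE    16  /* binarization edge, as selected from figure 13    */
#define HOUGH_EDGE  50  /* Hough edge (example value, see Section VI)       */
#define D_NEAR      30  /* assumed upper bound of the zone of interest      */

/* HT-inspired vertical-line extraction: after background suppression and
 * binarization, each column (disparity) of the V-disparity image acts as a
 * Hough accumulator; a column whose count of set pixels exceeds the Hough
 * edge is declared an obstacle. Returns the number of obstacles and fills
 * 'obstacle_disp' with their disparities. */
int detect_obstacles(const uint16_t v_disp[HEIGHT][DMAX], uint8_t obstacle_disp[DMAX])
{
    int n = 0;
    for (int d = D_FAR; d < D_NEAR; d++) {         /* zone of interest only       */
        int acc = 0;                                /* Hough accumulator for d     */
        for (int v = 0; v < HEIGHT; v++) {
            int bin = (v_disp[v][d] > BIN_EDGE);    /* 1 where the paper binarizes to '255' */
            acc += bin;                             /* count set pixels in column  */
        }
        if (acc > HOUGH_EDGE)                       /* vertical line found         */
            obstacle_disp[n++] = (uint8_t)d;
    }
    return n;
}

Restricting the loop to the interesting disparities is the software counterpart of the obstacle classification above, and is what allows the hardware accumulators to be reduced to a small set of addressed counters.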

VI. GLOBAL REAL-TIME OBSTACLES DETECTION ARCHITECTURE

The global obstacles detection architecture ensures the extraction of eventual obstacles in a perceived scene, starting from a pair of stereoscopic images acquired by cameras. It therefore carries out the essential tasks of:
- stereoscopic matching;
- V-disparity image construction;
- obstacles extraction.
Another task can also be carried out if the stereoscopic images are not rectified: image rectification, which provides aligned and undistorted images. In this work, we assume that the images are rectified. The architecture is qualified as a real-time architecture if it carries out the tasks listed above in addition to the video acquisition. Input images are taken from two stereoscopic cameras.

Figure 16. Obstacles detection architecture.

In the architecture described in figure 16, when the V-disparity value is higher than the binarization edge, a pulse is produced. This pulse selects one counter among a set of counters; the selected counter is incremented, and when the value of a counter exceeds the HT edge, an obstacle is detected at the position targeted by that counter. The HT edge has an influence on the number of detected obstacles. In this work we did not study how to choose this edge; we simply tested a set of edges in order to observe the influence of this variation. Figure 17 illustrates that the number of detected obstacles is 3 when the Hough edge is 50, while it is 0 when the Hough edge is 80. The selection of the Hough edge is empirical and cannot be precisely fixed.

Figure 18. Global obstacles detection architecture.
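The counter-based scheme described for figure 16 can be written, step by step, as the following software analogue: each V-disparity value arriving from the previous stage produces a pulse when it exceeds the binarization edge, the pulse increments the counter selected by the current disparity, and a counter crossing the Hough edge flags an obstacle. The threshold values and the bounds of the "interesting" disparity window repeat the assumptions of the previous sketch; this is an illustration, not the VHDL description.

#include <stdint.h>
#include <string.h>

#define DMAX       40
#define D_FAR      10   /* first interesting disparity (assumption) */
#define D_NEAR     30   /* last interesting disparity (assumption)  */
#define BIN_EDGE   16
#define HOUGH_EDGE 50

typedef struct {
    uint16_t counter[DMAX];    /* one addressed counter per interesting disparity */
    uint8_t  obstacle[DMAX];   /* set to 1 once the Hough edge is crossed         */
} detector;

static void detector_reset(detector *det) { memset(det, 0, sizeof *det); }

/* One step: 'd' is the disparity (column) being accumulated and 'value' the
 * V-disparity count arriving for it. A pulse is produced when the value
 * exceeds the binarization edge; the pulse increments the selected counter,
 * and crossing the Hough edge marks an obstacle at that disparity. */
static void detector_step(detector *det, int d, uint16_t value)
{
    if (d < D_FAR || d >= D_NEAR)
        return;                               /* outside the zone of interest     */
    if (value > BIN_EDGE) {                   /* binarization pulse               */
        det->counter[d]++;                    /* increment the selected counter   */
        if (det->counter[d] > HOUGH_EDGE)
            det->obstacle[d] = 1;             /* obstacle detected at disparity d */
    }
}

Unlike the batch sketch of Section V, this version never stores a binarized image: it consumes the V-disparity values as a stream, which is what makes the counter-based architecture suitable for a pipelined FPGA implementation.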

The entire architecture is implemented on a Virtex-II FPGA, type XC2V1000 FG456-4, placed on an RC200 board. This board offers several analog video inputs (figure 20), but when one input is used the others are invalidated; this is due to the fact that the RC200 board contains a video processor (SAA7113H) which selects the input to be used.

Figure 17. Influence of the Hough edge on the number of detected obstacles.


Figure 19. The RC200 board.

In order to achieve a real-time treatment, it is important to provide an architecture for image acquisition. In the image acquisition architecture, the SAA7113H processor must first be configured (validated) through the I2C bus; its configuration is stored in a ROM. After its configuration, the SAA7113H receives analog data from the chosen video input (CVBS, S-Video or camera) and converts it into digital data exploitable by the FPGA. The DUAL RAM in the architecture of figure 20 ensures the synchronization between the left and right images.

Figure 20. Acquisition architecture.

Since the RC200 allows the use of only one camera, we have used two RC200 boards: the second ensures the acquisition of the right image and transmits it, via the Expansion ATA connector, to the first board, where all the other tasks are done. The stereoscopic matching is the first task after image acquisition. In this part, we do not need to memorize the disparity image: only the disparity values are computed, and an addressing block provides all the data necessary for the construction of the V-disparity image (line indices). The stereoscopic matching has been implemented using several criteria (CENSUS, SAD, SSD and RANK) for stereoscopic images of size 320x240 and different correlation window sizes (3x3, 5x5 and 7x7).

The comparison made in figure 21 shows that the CENSUS algorithm requires fewer resources than the other algorithms. Table 1 summarizes the device utilization when the stereoscopic matching is implemented using the CENSUS algorithm with images of size 280x180 and a 7x7 correlation window.

TABLE 1. DEVICE UTILIZATION FOR STEREOSCOPIC MATCHING WITH CENSUS

Resources     Used    Total    %
CLB Slices    3224    5120     62%
Slice FFs     1718    10240    16%
LUTs          5937    10240    56%
RAM           39      40       97%
Maximum frequency: 29.536 MHz

The output of the stereoscopic matching architecture, which is a computed disparity, is used as input of the second step. The V-disparity architecture described in figure 9 is unsuitable for a real-time treatment, because it uses a single RAM which must be cleared through a sequential access. For a real-time treatment, we propose to use two RAMs: while one is used as an accumulator, the second is in write mode so that it can be cleared. Since two RAMs are used in the modified architecture, a multiplexer must be added in order to select the mode of each RAM. The inputs of the modified V-disparity architecture are a clock signal, a line index and the computed disparity; its output is a signal which is used for the obstacles extraction. Table 2 summarizes the device utilization for the obstacles extraction architecture using images of size 280x180 and a 7x7 correlation window.

TABLE 2. DEVICE UTILIZATION FOR OBSTACLES EXTRACTION

Resources     Used    Total    %
CLB Slices    4756    5120     92%
Slice FFs     2439    10240    23%
LUTs          7183    10240    70%
RAM           39      40       97%
Maximum frequency: 28.229 MHz
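The two-RAM (ping-pong) scheme used in the modified V-disparity architecture can be illustrated by a minimal software analogue. The structure and function names below are assumptions made for illustration only; in the actual hardware the idle RAM is cleared word by word while the other one accumulates, which the software model approximates by clearing it at the frame boundary.

#include <stdint.h>
#include <string.h>

#define HEIGHT 180
#define DMAX    40

/* Ping-pong accumulation: while one RAM accumulates the V-disparity image of
 * the current frame, the other one is available to be cleared, so no time is
 * lost in sequential clearing between frames. The 'active' index plays the
 * role of the multiplexer selecting the mode of each RAM. */
typedef struct {
    uint16_t ram[2][HEIGHT][DMAX];
    int      active;               /* RAM currently used as accumulator */
} vdisp_pingpong;

/* Frame boundary: returns the index of the RAM holding the completed
 * V-disparity image (to be consumed by the obstacles extraction stage) and
 * switches accumulation to the other RAM, cleared here in one call. */
static int vdisp_new_frame(vdisp_pingpong *b)
{
    int done = b->active;
    b->active ^= 1;                                   /* swap the accumulator     */
    memset(b->ram[b->active], 0, sizeof b->ram[0]);   /* model of the clearing    */
    return done;
}

static void vdisp_accumulate(vdisp_pingpong *b, int row, int disparity)
{
    b->ram[b->active][row][disparity]++;              /* read-increment-write     */
}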

Table 2 shows that the maximum frequency (for the CENSUS based detection) is 28.229 MHz for images of size 280x180 and a 7x7 correlation window. This implies that about 560 images can be treated each second (28.229x10^6 clock cycles per second divided by 280x180 pixels ≈ 560, i.e., one pixel treated per clock cycle). For the detection of an obstacle, we need to treat at least one image (all its pixels plus the latency time); this time is evaluated at 1.85 ms, so the real-time requirement is met. The implementation of the architecture on the targeted FPGA gives the results shown in figure 22. In image (a), where there is no obstacle, the board displays 00; in this case we could display "HH", for example, in order to distinguish it from the case of a very close obstacle. In image (b), an obstacle is detected at 1.9 meters. The same obstacle is detected in images (c) and (d) at different distances.

Figure 21. Resources consumption for stereoscopic matching: comparison between several criteria.


Figure 22. Implementation results.

At the end of this part, we note that, for practical reasons, the stereoscopic bench used was not calibrated.

VII. CONCLUSION AND FUTURE WORK

In this work, we have presented an implementation of a real-time stereovision based obstacles detection approach on a Virtex-II FPGA. The approach uses a pair of rectified stereoscopic images to build a disparity map, which is then used to construct an image called the V-disparity image; in this image the road profile and the obstacles can easily be extracted, since the road profile is represented by an oblique straight line while the obstacles are represented by vertical lines. The Hough transform can be used to extract all the lines of the V-disparity image; in this work, we have proposed a Hough-inspired approach which extracts only the vertical lines representing obstacles.

We first proposed three separate architectures: one for the stereoscopic matching, one for the construction of the U and V-disparity images and one for the extraction of obstacles. These separate architectures were then modified and grouped into a global obstacles detection architecture. When this global architecture is associated with a stereoscopic image acquisition architecture, real-time obstacle detection can be carried out. The implementation results show that the approach can be implemented on an FPGA component. The targeted FPGA, a Virtex-II XC2V1000 FG456-4 placed on an RC200 board, is not very powerful compared with recent FPGAs; the fact that the whole approach could be implemented on it leads us to conclude that an implementation on a more powerful FPGA (such as a Virtex-7) is clearly possible.

Future work consists of testing the design on a mobile platform: the stereoscopic bench can be embedded on a mobile robot, and the detection approach implemented in this work can be combined with an avoidance algorithm. It will also be useful to use a more powerful FPGA and another type of cameras.

REFERENCES

[1] R. Labayrade, D. Aubert, J.-P. Tarel, "Real Time Obstacle Detection on Non Flat Road Geometry through V-Disparity Representation", IEEE Intelligent Vehicles Symposium, Versailles, June 2002.
[2] S. Badal, S. Ravela, B. Draper, A. Hanson, "A Practical Obstacle Detection and Avoidance System", Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 97-104, 1994.
[3] V. Lemonde, M. Devy, "Obstacle Detection With Stereovision", Mechatronics & Robotics 2004, Vol. 3, pp. 919-924, Aachen, Germany, 13-15 September 2004.
[4] M. Zayed, J. Boonaert, "Obstacles detection from disparity properties in a particular stereo vision system configuration", Proceedings of the 2003 IEEE Intelligent Transportation Systems Conference, pp. 311-316, 2003.
[5] N. Ventroux, R. Schmit, F. Pasquet, P.E. Viel, S. Guyetant, "Stereovision-based 3D obstacle detection for automotive safety driving assistance", Proceedings of the 12th International IEEE Conference on Intelligent Transportation Systems, pp. 394-399, St. Louis, MO, USA, October 3-7, 2009.
[6] U. Dhond, J. Aggarwal, "Structure from Stereo - A Review", IEEE Transactions on Systems, Man and Cybernetics, Vol. 19, pp. 1489-1510, 1989.
[7] A. Naoulou, J.L. Boizard, M. Devy, J.Y. Fourniols, "An alternative to sequential architectures to improve the processing time of passive stereovision algorithms", 16th International Conference on Field Programmable Logic and Applications (FPL 2006), pp. 821-824, Madrid, 28-30 August 2006.
[8] Z. Irki, M. Devy, "A Parallel Architecture for the Real Time Correction of Stereoscopic Images", International Journal of Computer and Information Science and Engineering, pp. 185-191, 2008.
[9] R. Zabih, J. Woodfill, "Non-parametric Local Transforms for Computing Visual Correspondence", 3rd European Conference on Computer Vision, pp. 151-158, Stockholm, Sweden, May 1994.
[10] S. Jin, J. Cho, X. Pham, K. Lee, S. Park, M. Kim, J. Jeon, "FPGA Design and Implementation of a Real-Time Stereo Vision System", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 20, No. 1, pp. 15-26, January 2010.
[11] N.Y. Chang, T.H. Tsai, B.H. Hsu, Y.C. Chen, T.S. Chang, "Algorithm and Architecture of Disparity Estimation With Mini-Census Adaptive Support Weight", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 20, No. 6, pp. 792-805, June 2010.
[12] C. Georgoulas, I. Andreadis, "FPGA based disparity map computation with vergence control", Microprocessors and Microsystems, Vol. 34, pp. 259-273, 2010.
[13] A. Naoulou, "Architecture pour la stéréovision passive dense temps réel : application à la stéréo-endoscopie", PhD Thesis, LAAS/CNRS, 2006.
[14] L. Pollini, F. Greco, R. Mati, M. Innocenti, "Stereo Vision Obstacle Detection based on Scale Invariant Feature Transform Algorithm", AIAA Guidance, Navigation and Control Conference and Exhibit, Hilton Head, South Carolina, 20-23 August 2007.
[15] R. Labayrade, D. Aubert, "In-vehicle obstacles detection and characterization by stereovision", Proceedings of the 1st International Workshop on In-Vehicle Cognitive Computer Vision Systems, pp. 13-19, Austria, 3 April 2003.
[16] P.V.C. Hough, "Method and means for recognizing complex patterns", U.S. Patent 3,069,654, December 1962.