A simple and efficient Road Detection Algorithm for Real Time Autonomous Navigation based on Monocular Vision

A. Miranda Neto and L. Rittner

Abstract- Navigation of a mobile robot is based on its interaction with the environment, through information acquired by sensors. By incorporating several kinds of sensors in autonomous vehicles, it is possible to improve their autonomy and "intelligence", especially regarding mobile robot navigation in unknown environments. The type and number of sensors determine the data volume necessary for the processing and composition of the image of the environment. Nevertheless, an excess of information imposes a great computational cost on data processing. Currently many applications for the control of autonomous vehicles are being developed, for example for the Grand Challenge, a world-wide championship organized by DARPA (Defense Advanced Research Projects Agency). Machine vision is, together with a set of other sensors, one of the tools used in this championship. In our work, we propose mobile robot navigation software based on monocular vision. Our system is organized in hierarchic and independent layers, with distributed processing. After reading an image, the system processes, segments and identifies the navigation area (road detection), which will be used as input by a motion generator system. The proposed system is based on images acquired by a single camera and uses simple filtering and thresholding techniques, which make it fast and efficient for real time navigation.

Index Terms- Mobile Robots, Autonomous Vehicles, Navigation, Machine Vision, Image Segmentation.

Manuscript received August 7, 2006. This work was supported in part by CNPq. Arthur de Miranda Neto is with UNICAMP, Campinas - SP, 13083-852, Brazil, phone: 55-19-3521-3823 (e-mail: arthur@fem.unicamp.br). Leticia Rittner is with UNICAMP, Campinas - SP, 13083-852, Brazil (e-mail: lrittner@dca.fee.unicamp.br).

I. INTRODUCTION

Lately, several applications for the control of autonomous vehicles have been developed, and in most cases machine vision is an important part of the set of sensors used for navigation. Perhaps this happens because we want to reproduce human vision in machines. According to Gonzalez and Woods in [1], the acquisition of values sampled by a camera (vision) can be compared with the image processing performed by human vision (the role played by the eye jointly with the brain). In a few words, human vision allows the light of an object outside the eye to become an image on the retina (the innermost membrane of the eye). The perception of this image happens due to the excitation of light receptors, which transform the radiating energy into electric pulses that are later decoded by the brain.

On the other hand, it is known that human vision without assistance from the brain is not capable of allowing people to move about (navigate) in an efficient way. This is also true for computer systems, which, in order to navigate based on images, should contain software intelligent enough to manage the mechanical structure through navigation. Although extremely complex and highly demanding, thanks to the great deal of information it can deliver (it has been estimated that humans perceive visually about 90% of the environment information required for driving), machine vision is a powerful means for sensing the environment and has been widely employed to deal with a large number of tasks in the automotive field [2].

With the purpose of studying machine vision techniques applied to the navigation of autonomous vehicles, the world-wide championship organized by DARPA (Defense Advanced Research Projects Agency) [3], [4], known as the Grand Challenge, was one of our motivations for carrying out this work. In general, the participant vehicles of this competition are equipped with all kinds of sensors to allow the interface between the navigation system and the environment to be explored. The vision system usually used in these vehicles is composed of two or more cameras, as we see in [5] and [6]. In our work, a single sensor (monocular vision) will be considered, that is, a camera will acquire images (samples) of the environment and these will feed the computer system.

What is trivial for the human system, that is, to construct three-dimensional scenes from two-dimensional images captured by the vision system and to use them in the decision process for navigation, is not necessarily trivial for computer systems. Different from the human system, complex computational vision systems can be impaired by their processing time. Thinking about the relation between a real time decision system and an image reading system that operates at a specific acquiring/reading rate, that is, an amount of images read per second, one can ask: how many acquired images must be discarded by the image processing system to guarantee an acceptable real time navigation of an autonomous vehicle? Therefore, the decision for a more complex machine vision system possibly leads to an excessively slow system for an independent real time application. Additionally, the automatic


discarding of information (images) by the system, needed for it to become fast enough, can result in the loss of important information. Although the system could maintain a database of acquired images and submit it, for example, to a neural network for decision making, a great amount of information would not necessarily lead to better decisions and could also harm the performance of the system, overloading it.

Due to its general applicability, the problem of navigation of mobile robots is usually dealt with using more complex techniques [2]. The most common ones are based on the processing of two or more images, such as the analysis of the optical flow field and the processing of non-monocular images. In the first case more than one image is acquired by the same sensor at different time instants, while in the second one multiple cameras acquire images simultaneously, but from different points of view. Besides their intrinsically higher computational complexity, caused by a significant increment in the amount of data to be processed, these techniques must also be robust enough to tolerate noise caused by vehicle movements and drifts in the calibration of the multiple-camera setup. The optical flow-based technique requires the analysis of a sequence of two or more images: a two-dimensional vector is computed in the image domain, encoding the horizontal and vertical components of the velocity of each pixel. The result can be used to compute ego-motion, which in some systems is directly extracted from odometry; obstacles can be detected by analyzing the difference between the expected and real velocity fields.

Aware that in the majority of autonomous navigation systems the machine vision system works together with other sensors, the purpose of this work is to present a method to process and analyze images acquired by a single camera (monocular vision). The main goal is the identification of the navigation area in the images. Because it uses simple techniques and fast algorithms, the system is capable of presenting a good performance in real time, where the trade-off between processing time and image acquisition is fundamental.

In section 2 we present an overview of the Grand Challenge, as context for the studied problem. Section 3 presents the structure of a navigation system. In section 4 the stages of processing the acquired images for identification of the navigation area are described. The obtained results are presented in section 5 and the conclusions and possible extensions can be found in section 6.

II. GRAND CHALLENGE

A. Overview

DARPA (Defense Advanced Research Projects Agency) is an agency that aims to improve the defense of the U.S.A. in the medium and long term. With the intention of stimulating the development of autonomous off-road vehicles, it decided to organize the first DARPA Grand Challenge in 2004. One of the reasons for the creation of the Grand Challenge was to move toward the U.S. military's intention to

have one third of its fleet composed of autonomous vehicles by 2015. This competition is open to U.S. high schools and to universities and companies inside and outside the U.S.A. Vehicles of all kinds are allowed. In 2004 there were 106 registered competitors. The race took place on a desert course stretching from Barstow, California, to Primm, Nevada, but did not produce a finisher. Carnegie Mellon's Red Team traveled the farthest distance, completing 7.4 miles of the course. In 2005, the route to be followed by the robots was supplied to the teams two hours before the start. Once the race had started, the robots were not allowed to contact humans in any way. By the end, 18 robots had been disabled and five robots finished the course. The winner of the 2005 DARPA Grand Challenge was Stanley, with a course time of 6 hours 53 minutes and 8 seconds (6:53:08) and an average speed of 19.1 MPH (30.7 km/h).

B. Basic Rules

* Vehicles must be entirely autonomous, using only GPS and the information they detect with their sensors.
* Vehicles will complete the route by driving between specified checkpoints.
* Vehicles must operate in rain and fog, with GPS blocked.
* Vehicles must avoid collision with vehicles and other objects such as carts, bicycles, traffic barrels, and objects in the environment such as utility poles.

C. Future of the Championship

The 2007 Grand Challenge, also known as the DARPA Urban Challenge, will take place on November 3, 2007. It will involve a 60-mile course through a mock urban area, to be completed in fewer than 6 hours. Rules will include obeying traffic laws while negotiating other traffic and obstacles and merging into traffic.

III. NAVIGATION SYSTEM

In a general way, the navigation system must be capable of managing the mechanical structure using the information received from all sensors. In our work, we chose a distributed architecture, considering that the processing of the specific sensor signals is performed by dedicated computers. The navigation system abstracted by us is presented in Fig. 1. After the image acquisition by a physical device (camera), the machine vision layer (in red) fulfills its role through sub-layers (reading and pattern recognition - segmentation). The generated information is then made available to the next layer, the navigation layer (in blue), which converts it into movement commands for the vehicle.



It is important to point out that our application (software) was developed based on the object oriented paradigm and organized in layers (multithreading), where each layer is treated as a service (thread). When a layer is composed of sub-layers, they are also structured as services (threads). According to [7] and [8], the use of the object oriented paradigm facilitates the migration of the software project to the codification, and the layered structure benefits applications running in multiprocessor computers. This work is focused on our machine vision layer implementation.
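The sketch below illustrates this service-per-thread organization: a vision layer and a navigation layer running as independent threads and exchanging segmented frames through a bounded queue. The class names (VisionLayer, NavigationLayer) and the queue-based hand-off are illustrative assumptions, not the authors' actual implementation.

// Minimal sketch of the layered, service-per-thread organization described above.
// Names and the queue hand-off are illustrative assumptions.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class NavigationSystem {

    /** Machine vision layer: reads frames and segments the navigation area. */
    static class VisionLayer implements Runnable {
        private final BlockingQueue<int[][]> segmentedFrames;
        VisionLayer(BlockingQueue<int[][]> out) { this.segmentedFrames = out; }
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                int[][] frame = acquireFrame();   // reading sub-layer (placeholder)
                int[][] mask  = segment(frame);   // pattern recognition sub-layer (placeholder)
                // offer() drops the frame if the queue is full, which matches the idea of
                // discarding images the navigation layer cannot consume in time
                segmentedFrames.offer(mask);
            }
        }
        private int[][] acquireFrame() { return new int[120][160]; } // placeholder
        private int[][] segment(int[][] f) { return f; }             // placeholder
    }

    /** Navigation layer: converts segmented images into movement commands. */
    static class NavigationLayer implements Runnable {
        private final BlockingQueue<int[][]> segmentedFrames;
        NavigationLayer(BlockingQueue<int[][]> in) { this.segmentedFrames = in; }
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    int[][] mask = segmentedFrames.take();
                    // ... compute movement command from the segmented mask ...
                } catch (InterruptedException e) { return; }
            }
        }
    }

    public static void main(String[] args) {
        BlockingQueue<int[][]> queue = new ArrayBlockingQueue<>(4);
        new Thread(new VisionLayer(queue), "vision").start();
        new Thread(new NavigationLayer(queue), "navigation").start();
    }
}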

IV. MACHINE VISION

In general, the processes conducted by this layer (machine vision) have as their goal, beyond obtaining the visual information, to process image data for machine perception. Its algorithms carry out operations on images, with the purpose of reducing noise and segmenting them. Several methods can perform pattern recognition in images. These methods, when applied to image processing, are generally known as segmentation methods. This segmentation, according to Gonzalez and Woods in [1], can be considered as the partition of digital images into sets of pixels, considering the application and previously defined criteria. The purpose of segmentation is to distinguish objects in an image [9], which can be extremely sophisticated and complex. Results can be very satisfactory with the use of well elaborated filters. However, these results (high quality segmentation) come at a higher price, that is, robust segmentation algorithms normally present great complexity.

One way to perform segmentation of an image is to use thresholds. This type of segmentation technique, called thresholding, is very simple and computationally fast; however, the identification of the ideal threshold can be quite complicated. The best thing to do in this case is to use techniques and algorithms that search for the thresholds automatically.

Thresholding methods are divided into two groups: global and local. The global ones divide the image using only one threshold, and the local ones are those that divide the image into sub-images, defining a threshold for each one of them. In Sahoo et al. [10] the local thresholds are defined as multilevel thresholds. Summarizing, from the definition of global and/or multilevel thresholds, the use of a global threshold in a two-dimensional image I(x, y) with N = [0, 255] intensity levels consists of determining a single threshold T that separates the pixels into two distinct classes: object and background. The threshold T is normally applied to the histogram of an image, h(N), which can be seen as a description of the distribution of pixel intensities in the image. Multilevel thresholding is generally more laborious than global thresholding, because it is more complicated to find multiple thresholds that determine the regions of interest effectively, especially when there are several groups of objects in the image. On the other hand, according to Gonzalez and Woods [1], global thresholding only reaches good results when the illumination of the image is relatively uniform and the regions of interest, represented by objects, possess a significant difference of intensity from the background (contrast), which is probably not the case for the great majority of environments defined for the DARPA Grand Challenge. What we propose in this work is a global thresholding method, which seeks not the ideal threshold for the whole image, but an ideal threshold associated with the portion of the image that interests us for the identification of the navigation area. Details of this search are described below.

A. Image pre-processing

Most research groups face this problem using highly sophisticated image filtering algorithms. For the most part, gray-level images are used, but in some cases color images are used: this is the case of the MOSFET (Michigan Off-road Sensor Fusing Experimental Testbed) autonomous vehicle, which uses a color segmentation algorithm that maximizes the contrast between lane markings and road [2]. In this work we use gray-level images and smooth them using a very simple low-pass filter, as sketched below.

B. Image Segmentation

In this work, the purpose of segmentation is the identification of the navigation area in the images, that is, the classification of the image into two types of objects: navigation area and obstacles. Right after the image pre-processing, we begin the search for an ideal threshold using a segmentation technique proposed by Otsu in [11].
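The pre-processing step of subsection A can be summarized by the sketch below: a gray-level conversion followed by a 3x3 mean (low-pass) filter. It is a minimal illustration of the kind of filter described; the simple channel average and the border handling are assumptions, not the authors' exact implementation.

// Pre-processing sketch: gray-level conversion and 3x3 mean smoothing.
import java.awt.image.BufferedImage;

public class PreProcessing {

    /** Converts an RGB image to a gray-level matrix (values 0-255). */
    static int[][] toGray(BufferedImage img) {
        int h = img.getHeight(), w = img.getWidth();
        int[][] gray = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                gray[y][x] = (r + g + b) / 3;   // simple channel average (assumption)
            }
        }
        return gray;
    }

    /** Smooths the gray-level image with a 3x3 mean filter (borders kept unchanged). */
    static int[][] smooth(int[][] gray) {
        int h = gray.length, w = gray[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (y == 0 || x == 0 || y == h - 1 || x == w - 1) {
                    out[y][x] = gray[y][x];
                    continue;
                }
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += gray[y + dy][x + dx];
                out[y][x] = sum / 9;   // mean of the 3x3 neighborhood
            }
        }
        return out;
    }
}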

The main characteristic of this method is the maximization of the variance between the classes of the image, that is, the maximization of the separation between object and background. The thresholding process is seen as the partitioning of the pixels of an image into two classes: C1 (object) and C2 (background). The method is recursive and searches for the maximization over the cases C1 = {0, 1, ..., T} and C2 = {T + 1, T + 2, ..., N - 1}, where T is the chosen threshold and N is the number of intensity levels of the image.
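A sketch of this search is given below: for each candidate threshold T the between-class variance of C1 and C2 is computed from the histogram h(N), and the T that maximizes it is kept. This is the standard formulation of Otsu's criterion, written here only as an illustration; it is not the authors' code.

// Otsu threshold sketch: maximize the between-class variance over all candidate thresholds.
public class OtsuThreshold {

    /** Builds the 256-bin histogram h(N) of a gray-level image. */
    static int[] histogram(int[][] gray) {
        int[] h = new int[256];
        for (int[] row : gray)
            for (int v : row)
                h[v]++;
        return h;
    }

    /** Returns the threshold T that maximizes the between-class variance. */
    static int otsu(int[] hist) {
        long total = 0, sumAll = 0;
        for (int i = 0; i < 256; i++) { total += hist[i]; sumAll += (long) i * hist[i]; }

        long w1 = 0, sum1 = 0;
        double bestVar = -1.0;
        int bestT = 0;
        for (int t = 0; t < 256; t++) {
            w1 += hist[t];                 // pixels in class C1 = {0..t}
            if (w1 == 0) continue;
            long w2 = total - w1;          // pixels in class C2 = {t+1..255}
            if (w2 == 0) break;
            sum1 += (long) t * hist[t];
            double mean1 = (double) sum1 / w1;
            double mean2 = (double) (sumAll - sum1) / w2;
            double betweenVar = (double) w1 * w2 * (mean1 - mean2) * (mean1 - mean2);
            if (betweenVar > bestVar) { bestVar = betweenVar; bestT = t; }
        }
        return bestT;
    }

    /** Applies the threshold: pixels above T become white (navigation area), the rest black. */
    static boolean[][] binarize(int[][] gray, int t) {
        boolean[][] white = new boolean[gray.length][gray[0].length];
        for (int y = 0; y < gray.length; y++)
            for (int x = 0; x < gray[0].length; x++)
                white[y][x] = gray[y][x] > t;
        return white;
    }
}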


One of the great advantages of this method is that it does not restrict itself to the type of histogram of the image, that is, it can be applied to unimodal, bimodal or multimodal histograms, although it presents better performance on images with larger intensity variance. Its main disadvantage is its sensitivity to noise in the image, which can be reduced with the application of a smoothing filter. On the other hand, Sahoo et al. in [10] and Lee et al. in [12] consider this method one of the best choices for real time applications in machine vision. Although the Otsu method is an excellent method for choosing an ideal threshold, in all cases it considers the information of the image as a whole (global information). Fig. 2 illustrates that, for images that contain the horizon in their composition, the algorithm probably does not work well (referred to in this work as the "problem of the horizon").

To cope with this problem, some systems have been designed to investigate only a small portion of the road ahead of the vehicle, where the absence of other vehicles can be assumed. As an example, the LAKE and SAVE autonomous vehicles rely on the processing of the image portion corresponding to the nearest 12 m of road ahead of the vehicle, and it has been demonstrated that this approach is able to safely maneuver the vehicle on highways and even on belt ways or ramps with a bending radius down to 50 m [2]. The RALPH (Rapidly Adapting Lateral Position Handler) system, instead, reduces the portion of the image to be processed according to the result of a radar-based obstacle detection module [2]. In other systems the area in which lane markings are to be looked for is determined first. The research group of the Laboratoire Regional des Ponts-et-Chaussees de Strasbourg exploits the assumption that there should always be a chromatic contrast between road and off-road (or obstacles), at least in one color component; to separate the components, the concept of chromatic saturation is used [2].

In the proposed method, called from now on "TH Finder" (Threshold and Horizon Finder) and based on the Otsu method, we decided to divide the image in two parts. The division is not necessarily into equal parts, but into two complementary sub-images. The explanation for the division is related to the fact that, for immediate displacements, humans use the information that is close to them, and for future decisions, the information that is on their horizon. In addition to that, smaller parts of an image can produce better segmentation results. Fig. 3 shows an example of this division: Up (above) and Down (below), respectively the horizon vision (which contributes to future decisions) and the close vision (which supplies information for immediate displacements). Once convinced that the division of the image is a good option, we still have to answer the question: which would be the ideal percentage to attribute to each part (Up and Down) of the image? Although some previously cited works manage this problem by fixing a specific distance at which to cut the image, this approach makes them dependent on calibration parameters and/or different sensors to get this information. To avoid this dependence, the "TH Finder" algorithm uses only information from the images and from the segmentation results to define this cut point.

Fig. 2 - Original image, Otsu segmentation and histogram of the pre-processed image.

Fig. 3 - Image cut (Up and Down).

Initially, we create cuts that divide the image into ten parts (slices) of equal heights. The algorithm then starts by analyzing the slice closest to the vehicle (the lowest slice of the image, going from the bottom edge of the image to the first imaginary cut). This first sub-image has, therefore, 10% of the total height of the original image. The second sub-image to be analyzed is the one that goes from the bottom edge of the original image to the second cut, totaling 20% of the height of the original image. In summary, as illustrated in Fig. 4, with the original image having a height of 100%, the first sub-image will have 10%, the second will have 20%, and so on, until the last sub-image contains 100% of the original image height. All sub-images are then submitted to the segmentation algorithm, and the output of this process is a vector (vector of percentages) with the values of the percentages of navigation points (white points) found after the analysis of each sub-image. The purpose is to analyze how much the inclusion of each superior slice in the segmentation process contributes to the increase or reduction (within acceptable values) of the navigation points of the first sub-image (10% of the original image). In other words, since we use a global segmentation method, the analysis of a bigger portion of the original image does not always contribute to a better result in the most critical region (the region closer to the vehicle), where obstacles should be detected and avoided as fast as possible. On the contrary, when discarding the superior portion of the original image (horizon vision), we are capable of obtaining a more efficient


segmentation and of distinguishing with greater precision the obstacles from the navigation area.
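The construction of this percentage vector can be sketched as below, reusing the Otsu sketch given earlier: each of the ten cumulative bottom crops is segmented and the percentage of white (navigation) points is stored. Computing the percentage over the whole sub-image, rather than only over its lowest slice, is an assumption about a detail the text leaves open.

// Sketch of the "TH Finder" percentage-vector construction (names are assumptions).
public class PercentageVector {

    /** Returns a 10-position vector with the percentage of white points per sub-image. */
    static double[] build(int[][] gray) {
        int h = gray.length, w = gray[0].length;
        double[] percentages = new double[10];
        for (int slice = 1; slice <= 10; slice++) {
            int subHeight = (int) Math.round(h * slice / 10.0);
            int top = h - subHeight;                  // sub-image grows from the bottom edge
            int[][] sub = new int[subHeight][w];
            for (int y = 0; y < subHeight; y++)
                System.arraycopy(gray[top + y], 0, sub[y], 0, w);

            int t = OtsuThreshold.otsu(OtsuThreshold.histogram(sub));
            boolean[][] white = OtsuThreshold.binarize(sub, t);

            int whiteCount = 0;
            for (boolean[] row : white)
                for (boolean p : row)
                    if (p) whiteCount++;
            percentages[slice - 1] = 100.0 * whiteCount / (subHeight * w);
        }
        return percentages;
    }
}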

Fig. 4 - Segmentation process overview: (a) final result after cut definition and segmentation; (b) sub-images analysis representation with the percentage vector; (c), (d) and (e) sub-images with, respectively, 10%, 60% and 100% of the original image height, already segmented by the Otsu method.

Fig. 5 - Result from the developed system: image after cutting (Up and Down).

After creating the percentage vector from the sub-image analyses, the next stage uses these values to decide where the cut of the image (Up and Down) should be. This is done using the standard deviation of the percentage vector, defined in Eq. (1):

\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(v_i - \bar{v})^2}     (1)

where v_i is the i-th position of the percentage vector, \bar{v} is its mean value and n is the number of sub-images.

To find the cut point of the image through the analysis of the percentage vector, we subtract the standard deviation from the percentage value of the first sub-image (contained in the first position of the percentage vector). After that, we search the percentage vector for the position that contains the last value bigger than the calculated difference. Once this index is found, we shift it by one position, that is, we select the next index. Doing so, it is possible to obtain two images: the image of the bottom part (Down), using the percentage equivalent to the selected index, and the image of the upper part (Up), which is what remains of the image. For example, in Fig. 4 (a) the resulting cut value is 6, which means that the bottom part (Down) will contain 60% of the height of the original image and the upper part (Up) will contain the rest of the image (40% of the height of the original image). Fig. 5 also shows the result of the division of an image performed by the developed system.
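A sketch of this cut-point search is given below, using the sample standard deviation of Eq. (1). Since the text leaves the exact off-by-one adjustment of the found index open, the shift used here is an assumption; the names are illustrative.

// Cut-point search sketch over the percentage vector.
public class CutPointFinder {

    /** Sample standard deviation of the percentage vector, as in Eq. (1). */
    static double stdDev(double[] v) {
        double mean = 0.0;
        for (double x : v) mean += x;
        mean /= v.length;
        double sum = 0.0;
        for (double x : v) sum += (x - mean) * (x - mean);
        return Math.sqrt(sum / (v.length - 1));
    }

    /** Returns the cut index (1..10): the Down part keeps (index * 10%) of the image height. */
    static int findCut(double[] percentages) {
        double reference = percentages[0] - stdDev(percentages);
        int last = 0;                                      // last position with value > reference
        for (int i = 0; i < percentages.length; i++)
            if (percentages[i] > reference) last = i;
        // shift by one position (next sub-image); the exact adjustment is an assumption
        int cut = Math.min(last + 2, percentages.length);
        return cut;                                        // e.g. cut = 6 -> Down = 60% of height
    }
}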

To successfully conclude the segmentation, a last analysis of the obtained results still has to be made. Using the same procedure previously presented (reading, processing and segmentation), the algorithm verifies the percentage of navigation points in the first layer of the image, and if this value is less than 30%, the image is inverted (negated). In Fig. 6 we have an example where, due to the illumination and to the colors of the road and the horizon, the navigation area is represented (after segmentation) by black points and the obstacles by white points. In this case, the image has to be negated. This always occurs when the navigation area (road) is darker than the obstacles and could lead to a wrong decision; therefore the presented method always considers the lighter pixels as navigation area and the darker pixels as obstacles.
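This final check can be sketched as follows; interpreting the "first layer" as the bottom 10% slice of the segmented image is an assumption.

// Negation check sketch: guarantee that the navigation area is represented by white pixels.
public class NegationCheck {

    static boolean[][] ensureRoadIsWhite(boolean[][] mask) {
        int h = mask.length, w = mask[0].length;
        int firstSliceTop = h - h / 10;              // bottom 10% of the segmented image (assumption)
        int white = 0, total = 0;
        for (int y = firstSliceTop; y < h; y++)
            for (int x = 0; x < w; x++) { if (mask[y][x]) white++; total++; }
        if (100.0 * white / total < 30.0)            // navigation area came out dark: negate
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    mask[y][x] = !mask[y][x];
        return mask;
    }
}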

C. Obstacles identification

It is important to note that the goal of the segmentation and of the identification of the navigation area is not to create algorithms with perfect results, but to find the largest amount of white pixels (in a safe proportion), so that the rules system (in the next layer of the navigation system) can indicate the necessary route corrections. It is known that a secure navigation cannot rely on the machine vision model as the only source of information, unless it takes place in a controlled environment.

Fig. 6 - Original image, negated image (in which the navigation area (road) is lighter than the obstacles) and segmentation results.


In order to deviate from the existing obstacles it is therefore necessary to find the biggest number of white points, which represent, in our case, the navigation area. Taking into account only the image of the bottom part (Down), as shown in Fig. 7, a maximization matrix is created with the same dimensions as this sub-image. For each point of the sub-image, the value attributed to the maximization matrix is calculated based on the maximization function (Eq. (2)). The objective of this function is to define the influence of the obstacle points in the analyzed image.

f(x_i, y_j) = \begin{cases} 1, & \text{if } (x_i, y_j) \text{ is black (obstacle)} \\ 0, & \text{otherwise} \end{cases}     (2)

where f(x_i, y_j) is the value stored at position (x_i, y_j) of the maximization matrix m.

At this point, a maximization vector is created, and each position of this vector is filled with the sum of all points of the corresponding column of the maximization matrix. This step is described by Eq. (3), where n is the number of rows of the sub-image. As an example, the first position of the maximization vector will receive the sum of all values of the first column of the maximization matrix.

v_i = \sum_{j=1}^{n} m(x_i, y_j)     (3)

In the end, each position of the maximization vector will contain an integer that represents the influence of the obstacle points (in black) on each column of the image. The maximization vector will be used later by the navigation layer, which will be able to decide on one specific route, depending on the obstacles pointed out by this vector. As an example, Fig. 8 shows a proposal of a direction generator with 180 degrees of possible directions. The analysis of the values of the maximization vector indicates the direction that the vehicle must follow.
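The sketch below illustrates Eqs. (2) and (3) and a simple direction choice over the 180-degree range of Fig. 8. The indicator form of Eq. (2), reconstructed from a garbled printing, and the rule of steering toward the column with the smallest obstacle influence are assumptions drawn from the description above, not the authors' exact navigation rule.

// Obstacle-influence sketch: maximization matrix (Eq. 2), vector (Eq. 3) and a direction choice.
public class ObstacleInfluence {

    /** Eq. (2), reconstructed form: 1 for black (obstacle) points, 0 for white (navigation) points. */
    static int[][] maximizationMatrix(boolean[][] whiteMask) {
        int h = whiteMask.length, w = whiteMask[0].length;
        int[][] m = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                m[y][x] = whiteMask[y][x] ? 0 : 1;
        return m;
    }

    /** Eq. (3): each vector position receives the sum of the corresponding column of the matrix. */
    static int[] maximizationVector(int[][] m) {
        int h = m.length, w = m[0].length;
        int[] v = new int[w];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                v[x] += m[y][x];
        return v;
    }

    /** Maps the column least influenced by obstacles to an angle in the 180-degree range of Fig. 8.
        A real system would aggregate neighboring columns instead of picking a single one. */
    static double steeringAngle(int[] v) {
        int best = 0;
        for (int x = 1; x < v.length; x++)
            if (v[x] < v[best]) best = x;
        return 180.0 * best / (v.length - 1);
    }
}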


V. TESTS AND RESULTS

A. Tests

It is important to point out that our hardware and software were not embedded in a vehicle or robot. Our results were obtained by submitting images (extracted from video clips) to our navigation software, running on a laptop computer (Intel Core Duo T2300E processor, 2 MB L2 cache, 667 MHz, 512 MB of RAM, Windows XP Professional operating system). The videos used were either made available by DARPA or generated by us. While transforming the videos into images (frames), the tool we used generated 320x240 images. An adopted strategy, which resulted in computational gain, was to reduce the image size by fifty percent. In other words, we submitted to the system images with dimension 160x120. This operation did not cause losses in the results of the machine vision algorithm; moreover, if the reading system generates images directly at this dimension, the scale operation can be omitted and the processing time decreases. Because it supports image treatment and multithreaded development, not to mention other advantages, we used the JAVA platform (Java 2 Platform) for the software development. As support, we used the image processing function packages provided by the APIs JAI (Java Advanced Imaging) and IMAGEJ (Image Processing and Analysis in Java) [13], [14].
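The 50% reduction can be done with a few lines of standard Java, as in the sketch below; the authors used JAI/ImageJ, so this java.awt version is only an illustrative equivalent.

// Downscale sketch: 320x240 -> 160x120 using the standard java.awt API.
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class Downscale {
    static BufferedImage half(BufferedImage src) {
        int w = src.getWidth() / 2, h = src.getHeight() / 2;
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, w, h, null);   // scales the source into the half-size target
        g.dispose();
        return dst;
    }
}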

B. Results

Before presenting our results, we show in Fig. 9 results of the work developed by Thrun et al. in [6], from Stanford University in partnership with companies. It is part of the machine vision system used in the 2005 Grand Challenge; however, it was assisted by a laser (an excellent tool for the detection of cracks in the navigation area). Fig. 10 shows our results, obtained using the same original image used by Stanford. Fig. 10 (b) shows the conversion of the original image to a gray-level image. In Fig. 10 (c) we show the result of the filtering operation (convolution between the filter and the gray-level image). Finally, Fig. 10 (d) presents the result of the segmentation algorithm. The segmentation result presented in Fig. 10 (d) shows that the "TH Finder" algorithm has a good performance, taking into account that the original image is from an unstructured road (with no marks or signs). Although presented in previous sections (and not in this results section), figures 4, 5 and 6 were also obtained using the "TH Finder", and confirm the good performance of the proposed method. Other results can be found in Fig. 11, where occlusion (Fig. 11 (a) and (b)) and texture (Fig. 11 (c) and (d)) issues were also satisfactorily solved.

Fig. 8 - Motion angle


Fig. 9 - Processing stages of the machine vision system: (a) original image; (b) processed image with the laser quadrilateral and a pixel classification; (c) pixel classification before thresholding; (d) horizon detection for sky removal [6].

Fig. 10 - Results from our algorithm: (a) original image; (b) gray-level image; (c) filtered image (smoothed); (d) navigation area identified after segmentation.

VI. CONCLUSION

The machine vision research area is still evolving. The challenge of constructing robust methods for image processing and analysis is far from being met. This can be observed in the great number of publications that, in many cases, are developed with the intention of solving specific problems. In this work we presented a simple solution that solves a specific machine vision problem of a navigation system, suitable for vehicles and robots. Its purpose was to present a machine vision algorithm capable of identifying the navigation area in an image captured by a single camera. This algorithm is intended to be used in a mobile robot, together with other sensors, in order to supply information for the navigation system to make decisions on routes. It is important to notice that our algorithm is not based on

previous knowledge of the environment (lane shape, geometric inference, etc.) nor of the image acquisition system (camera calibration). It does not depend on information from signs or marks on the road, which makes it robust and well suited also for unstructured roads. As a dynamic threshold search method, it is not affected by illumination changes and does not need any contrast adjustments. Our results are satisfactory and comparable with those of more complex systems, which stimulates the continuity of our work.

ACKNOWLEDGMENT

The authors wish to thank Prof. Dr. Douglas Eduardo Zampieri, Prof. Dr. Roberto de Alencar Lotufo, Prof. Dr. Clesio Luis Tozzi, Prof. Dr. Andre Mendeleck, Prof. Dr. Carlos Juiti Watanabe and Judson Benevolo Xavier Junior (Cap QEM - Brazilian Army) for their kind attention to our work and for the useful discussions. This work was supported in part by CNPq.

Fig. 11 - Results from our algorithm: (a) original image 1 (occlusion example); (b) segmentation result; (c) original image 2 (texture example); (d) segmentation result

REFERENCES

[1] R. C. Gonzalez and R. E. Woods, "Processamento de Imagens Digitais", Ed. Edgard Blücher, São Paulo, Brazil, 2000.
[2] M. Bertozzi, A. Broggi and A. Fascioli, "Vision-based intelligent vehicles: state of the art and perspectives", Robotics and Autonomous Systems, vol. 32, pp. 1-16, 2000.
[3] DARPA, "DARPA Grand Challenge Rulebook", 2004.
[4] Stanford Racing Team's Entry in the 2005 DARPA Grand Challenge, (2006, June 10), http://www.stanfordracing.org
[5] H. Dahlkamp, A. Kaehler, D. Stavens, S. Thrun, and G. Bradski, "Self-Supervised Monocular Road Detection in Desert Terrain", in Proceedings of Robotics: Science and Systems 2006 (RSS06), Philadelphia, USA. [Oral Presentation.]
[6] S. Thrun, M. Montemerlo, D. Stavens, H. Dahlkamp, et al., "Stanford Racing Team's Entry in the 2005 DARPA Grand Challenge", Technical Report, DARPA Grand Challenge 2005.
[7] R. S. Pressman, "Engenharia de Software", Ed. MAKRON Books, 1995.
[8] W. Boggs and M. Boggs, "Mastering UML com Rational Rose 2002", Ed. ALTA Books, 2002.
[9] A. S. Abutaleb, "Automatic Thresholding of Gray-Level Pictures Using Two-Dimensional Entropy", Computer Vision, Graphics, and Image Processing, 1989.
[10] P. K. Sahoo, S. Soltani, and A. K. C. Wong, "A survey of thresholding techniques", Computer Vision, Graphics, and Image Processing, vol. 41, pp. 233-260, 1988.
[11] N. Otsu, "A threshold selection method from gray-level histograms", IEEE Transactions on Systems, Man, and Cybernetics, 1979.
[12] S. U. Lee, S. Y. Chung and R. H. Park, "A Comparative Performance Study of Several Global Thresholding Techniques for Segmentation", Computer Vision, Graphics, and Image Processing, 1990.
[13] JAI (2006, May 15), "Java Advanced Imaging", http://java.sun.com/products/java-media/jai/
[14] IMAGEJ (2006, May 10), "Image Processing and Analysis in Java", http://rsb.info.nih.gov/ij/
