Genetic Algorithms for Automated Texture Classification

Dan Ashlock
Mathematics Department
Iowa State University
Ames, IA 50010
[email protected]

Jennifer Davidson
Department of Computer and Electrical Engineering
Iowa State University
Ames, IA 50010
[email protected]

Abstract

In this paper we demonstrate that a genetic algorithm can be used to produce collections of pixel locations, termed foot patterns, useful for distinguishing between different types of texture images. The genetic algorithm minimizes the entropy of empirical samples taken with a particular foot pattern on a training image. The resulting low entropy foot patterns for several texture types are then used to classify test images. To classify a given image, foot patterns for several texture types are applied to the image to obtain entropy scores; the ten lowest-scoring foot patterns then vote, with the majority among them taken as the classification. On the original test set of sixty images, twelve each from five image types, the resulting classification was 98.3% accurate (one image was not classified). When a sixth texture type, picked specifically to confound the classification technique, was added to the texture types in the original test, the technique misclassified several images of the two similar types. This latter experiment helps explain how and why the texture classification technique works. We discuss potential methods for overcoming limitations of the texture classification technique.

1 Introduction

Textures play an important part in imaging. The forward problem for textures is the generation or synthesis of textures, used for replicating a desired image texture. Instances where synthesis occurs include scene generation such as that found in virtual reality, synthesis of parts of an image to simulate a known texture in that part of the image, and generating texture for parts of an image to recreate an image transmitted over a communications medium. See [3, 15, 14]. The inverse problem for textures is the classification of a given texture. This latter problem is typically much more difficult than the forward problem of synthesis. Classification of textures is used for segmenting an image [Bouman, Lakshmanan], identifying scenes in an image [17, 23], and compressing an image for storage or transmission [16].

Solving the inverse problem is complex because there are many methods from which to choose. One method is to model the texture as a stochastic process and identify parameters of that process which are unique to the texture. Once the parameters are found, a "goodness of fit" measure must be used to determine how close the estimated parameters are to the ideal parameters, if those are known. Examples of this method can be found in [20, 5, 3, 10, 19]. However, this method has disadvantages, such as the assumption that the texture has an underlying statistical process associated with it. Also, the computation of parameters can be extremely time consuming, and defining "goodness of fit" for parameters is not at all straightforward, or may not even be possible. Often a visual inspection of textures generated using the estimated parameters is the best way to determine closeness of solution.


Clouds    Mines    Pipes    Slanted Trend    Layers    Filaments

Figure 1: Images in the training set.

We present in this paper an empirical approach for classifying a texture that does not use a statistical model and does not depend on regenerating a texture to determine goodness of fit. A genetic algorithm was used on a single image to determine solutions of a particular type (explained below) that best described that single image. Then these solutions were applied to other similar and dissimilar images to determine a goodness of fit with respect to those solutions. The goodness of fit for this genetic algorithm was the entropy of a distribution. As discussed below, this method gives very good results for an initial experiment. We could find no other methods listed in the literature that use a similar approach, and would welcome knowledge of any. Please contact the authors at the addresses on the first page.

The data we used was binary image data on a 64 × 64 array. A total of 72 images were used, and each image fell into one of six image types. Five of the six types of images were generated using a stochastic model called partially ordered Markov models (POMMs) [4, 2]. The POMM model uses parameters plus a one-pass procedure to generate an image from a distribution. The POMMs we chose to use had small local neighborhoods. For each of the five POMM models of clouds, pipes, slanted trend, layers, and filaments, twelve images were generated using different initial conditions determined randomly by the computer program, for a total of 60 different images. The 12 images of each type were similar in appearance but not exactly the same. The sixth type of image was a collection of circular objects to which a projective transformation was applied; it simulates a minefield as viewed from an aerial position. This last type of image shows that the technique developed here can be applied to images that have "large"-grain textures, such as images of objects. See Figure 1 for the six images, one of each type, used as training images for the genetic algorithm. See Figure 2 for examples of the remaining 66 images.


Clouds    Mines    Pipes    Slanted Trend    Layers    Filaments

Figure 2: Additional images from the test set.

We chose to represent a texture in the following way. Pick a square window of size n × n that will include the basic "texel," that is, texture element [9], and choose a number k that represents the number of pixel locations inside that window that will represent the texel. In general, of course, the texel is unknown, and the number of pixels needed to represent it is also unknown. (More about this selection is discussed in Section 6, on future work.) For our purposes, we chose n = 7, a 7 × 7 window, and k = 7 pixel locations inside the window. These numbers were chosen to make the computations needed for the genetic algorithm reasonable while leaving room for a large number of configurations. There are of course C(n^2, k) ways to place k pixel locations in the n × n window; one specific choice is called an n-k foot pattern. An example of a 7-7 foot pattern is given in Figure 3.
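As an illustration (this is not the authors' implementation, which is available by e-mail from the first author), the following Python sketch shows one way to store a foot pattern, namely as k distinct (row, column) locations inside the n × n window, and confirms that the number of possible n-k foot patterns is the binomial coefficient C(n^2, k).

```python
import math
import random

def random_foot_pattern(n=7, k=7):
    """Choose k distinct pixel locations, as (row, col) pairs, inside an n x n window."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    return random.sample(cells, k)

# The number of possible n-k foot patterns is C(n^2, k); for n = k = 7 this is 85,900,584.
print(math.comb(7 * 7, 7))      # 85900584
print(random_foot_pattern())    # e.g. [(0, 2), (3, 5), (6, 1), ...]
```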

Figure 3: A foot pattern for the Layers image series.

The goodness of fit for a particular foot pattern was chosen to be the entropy of that foot pattern's histogram of 0s and 1s. For any foot pattern with k locations and binary data, there are 2^k possible patterns of 0s and 1s within that fixed foot pattern. We call a specific assignment of 0s and 1s to the locations within a foot pattern a "footprint." Counting the number of times each footprint φ appears in the given image data, then normalizing by the total number of footprints sampled, gives us the normalized histogram, or empirical probability distribution, p_φ for that foot pattern with respect to that image. The entropy of a foot pattern C relative to the image I is then taken to be the entropy E(C, I) of this empirical distribution according to the standard formula

    E(C, I) = − Σ_φ p_φ · log2(p_φ),    (1)

with the logarithm base two chosen to make the unit of entropy bits. The working hypothesis that leads us to select entropy minimization as a way of finding foot patterns tied to an image type is that low entropy foot patterns for images exist, are rare, and are different for different images. In previous work [7, 6, 24] we experimentally verified this hypothesis to some degree.
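To make the entropy computation of equation (1) concrete, the following Python sketch tallies footprints and computes E(C, I) for a binary image stored as a list of rows of 0s and 1s. It is not the authors' code; in particular, the assumption that the window is slid over every placement that fits entirely inside the image is ours, since the paper does not spell out how window placements are sampled.

```python
from collections import Counter
from math import log2

def foot_pattern_entropy(image, feet, n=7):
    """Entropy E(C, I) of the footprint distribution of foot pattern `feet`,
    given as (row, col) locations inside an n x n window, over binary `image`."""
    rows, cols = len(image), len(image[0])
    counts = Counter()
    # Slide the n x n window over every placement that fits entirely in the image.
    for r in range(rows - n + 1):
        for c in range(cols - n + 1):
            # The footprint is the tuple of 0/1 values seen under the k feet.
            footprint = tuple(image[r + dr][c + dc] for dr, dc in feet)
            counts[footprint] += 1
    total = sum(counts.values())
    # Equation (1): E(C, I) = - sum over observed footprints of p * log2(p).
    return -sum((m / total) * log2(m / total) for m in counts.values())
```

Lower values of E(C, I) indicate that the foot pattern sees only a few distinct footprints in the image, which is exactly the property the genetic algorithm rewards.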

2 The Genetic Algorithm

Genetic algorithms were invented by John Holland [11] and are described quite well in David Goldberg's foundational text Genetic Algorithms in Search, Optimization, and Machine Learning [8]. They are an algorithmic technique derived from Darwin's theory of evolution. Genetic algorithms use some form of the algorithm shown in Figure 4.

1. Create an initial population of structures.
2. Test each structure for quality with a heuristic (fitness function).
3. Select pairs of structures with a quality bias.
4. Blend each selected pair with crossover to produce new structures.
5. Make small probabilistic modifications to the new structures, a process termed mutation.
6. Place the new structures into the population, replacing existing structures.
7. If no sufficiently high quality structure yet exists, go to 2.
8. Report results and stop.

Figure 4: The steps of a generic genetic algorithm.

There is an enormous variety of different types of genetic algorithms. Following rules of thumb presented at the 1996 Conference on Evolutionary Algorithms at the Institute for Mathematics and its Applications, we chose to use a steady-state genetic algorithm similar to that described in [18]. Such an algorithm is thought to be a good choice when the genetic algorithm operates with a heuristic (fitness function) that always reports the same quality for the same structure. Examples of situations in which the fitness function gives different quality measures for the same structure at different times can be found in [21] and [1].

The structures our genetic algorithm operates on are the foot patterns described above. These are stored as lists of pairs of displacements from the center pixel of the window of the foot pattern. The list for the foot pattern given in Figure 3 is shown in Figure 5 together with the presentation of the list used by the genetic algorithm during crossover. The initial population of 400 structures is chosen at random. A random foot pattern is created by choosing a set of seven distinct locations in the 7 × 7 window uniformly at random. The fitness function for a foot pattern is its entropy, computed relative to the fixed image the genetic algorithm is currently processing. For two foot patterns, our quality heuristic judges the one with lower entropy to be better.

{(0,2), (0,1), (1,0), (2,0), (-1,2), (3,-1), (1,1)}
(0,0,1,2,-1,3,1,2,1,0,0,2,-1,1)

Figure 5: The foot pattern from Figure 3 as a set of displacements from the center pixel of the window and transformed into a gene for use in crossover.

Steps 3-6, following Figure 4, are performed with tournament selection [22]. A group of four foot patterns is selected, uniformly at random, from the population. The two with the best fitness are blended with crossover to produce two new foot patterns that replace the two with the worst fitness.

These new creatures are then modified by a dependent two-point mutation. Crossover of two foot patterns is accomplished by transforming them into two arrays of integers and then exchanging middle segments, selected uniformly at random, of those arrays. Such a transformed foot pattern is shown in Figure 5. These arrays consist of the x-coordinates of the feet followed by the y-coordinates of the feet. The dependent two-point mutation replaces the x- and y-coordinates of a foot selected uniformly at random with coordinates not occupied by some other foot of the foot pattern. This process of selection, crossover, mutation, and replacement is termed a mating event.

As with many genetic algorithms, our termination condition is trivial. The genetic algorithm is run for 40,000 mating events and whatever foot pattern has the lowest entropy is retained as the result of the genetic algorithm. The algorithm often finds the same foot pattern more than once for a particular training image. This indicates that some form of convergence to at least a local optimum has taken place. Throughout the genetic algorithm the choice of the "best" of two equal foot patterns is made uniformly at random. The source code for the genetic algorithm is available via e-mail from the first author.
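The following Python sketch of a single mating event follows the description above: a tournament of four foot patterns, crossover on the gene encoding of Figure 5 (x-coordinates followed by y-coordinates), and the dependent mutation that moves one foot to an unoccupied cell. It is a sketch rather than the authors' code; in particular, how duplicate feet arising from crossover are handled, and how ties in fitness are broken, are not implemented here.

```python
import random

# Foot patterns are lists of (dx, dy) displacements from the center pixel of the
# 7 x 7 window, so each coordinate ranges over -3..3 (see Figure 5).

def to_gene(feet):
    """Encode a foot pattern as its x-coordinates followed by its y-coordinates."""
    return [dx for dx, _ in feet] + [dy for _, dy in feet]

def from_gene(gene):
    k = len(gene) // 2
    return list(zip(gene[:k], gene[k:]))

def crossover(gene_a, gene_b):
    """Exchange a middle segment, chosen uniformly at random, between two genes."""
    i, j = sorted(random.sample(range(len(gene_a) + 1), 2))
    return (gene_a[:i] + gene_b[i:j] + gene_a[j:],
            gene_b[:i] + gene_a[i:j] + gene_b[j:])

def mutate(feet, half=3):
    """Dependent mutation: move one foot, chosen at random, to a cell not
    occupied by another foot of the pattern."""
    i = random.randrange(len(feet))
    occupied = set(feet[:i] + feet[i + 1:])
    while True:
        new_foot = (random.randint(-half, half), random.randint(-half, half))
        if new_foot not in occupied:
            feet[i] = new_foot
            return feet

def mating_event(population, entropy_of):
    """One steady-state mating event: tournament of four, crossover of the two best,
    mutation of the children, and replacement of the two worst tournament members."""
    idx = random.sample(range(len(population)), 4)
    idx.sort(key=lambda i: entropy_of(population[i]))   # lower entropy is better
    g1, g2 = crossover(to_gene(population[idx[0]]), to_gene(population[idx[1]]))
    population[idx[2]] = mutate(from_gene(g1))
    population[idx[3]] = mutate(from_gene(g2))
```

Applying to_gene to the foot pattern of Figure 3, {(0,2), (0,1), (1,0), (2,0), (-1,2), (3,-1), (1,1)}, reproduces the gene (0,0,1,2,-1,3,1,2,1,0,0,2,-1,1) shown in Figure 5.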

3 Design of the Experiments

It is the goal of the research reported here to classify, with a high degree of reliability and entirely automatically, the type of an image using as input only one example of each type. We performed two experiments, identical in procedure, one on a set of five image types and the other on a set of six image types. The second experiment was intended to test the limits of the technique after substantial success in the first experiment.

In the first experiment, the genetic algorithm described in Section 2 was run 20 times on one training image for each of the five image types: clouds, mines, pipes, slanted trend, and layers. The best solution (foot pattern) found in each run was kept, and of these 20, duplicate foot patterns were discarded. The remaining patterns for each of the five types of images are shown in Figure 7. For example, the "mines" training image had the most redundancy of foot patterns, while the "filaments" training image had the least. Also given in Figure 7, in the upper left-hand corner of each foot pattern, is the number of duplications of each pattern in the 20 solutions. This could give an indication of whether the pattern is a local optimum (low number) or at or near a global optimum (high number).

After removing duplicate foot patterns, the 53 distinct foot patterns from the five training images were used to classify all sixty images in the following way. (Note that the training images were also classified, as a check on the method.) Each foot pattern was applied to an unknown image, producing a total of 53 entropy fitness values. The 10 lowest entropy values were polled as to which training image they belonged to. If a majority of the 10 values belonged to one training image, then the unknown image was classified as the same type as that training image. If there was no majority, then the image was said to be unclassifiable by this technique. If there was a tie (5 to 5), then no decision could be made as to the classification type. Results are shown in Figure 6.

The second experiment was performed exactly as the first, except that a sixth image type, filaments, was added. The results are shown in Figure 8. We next discuss the results in more detail.
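A minimal Python sketch of this voting rule, assuming the entropy function from the earlier sketch and a dictionary mapping each training-image type to its list of distinct foot patterns (both names are ours, not the authors'):

```python
from collections import Counter

def classify(image, patterns_by_type, entropy_fn, votes=10):
    """Classify `image` by majority vote among the `votes` lowest-entropy foot patterns.
    `patterns_by_type` maps each image type to its distinct foot patterns, and
    `entropy_fn(image, feet)` scores a foot pattern, e.g. foot_pattern_entropy above."""
    scored = [(entropy_fn(image, feet), label)
              for label, feet_list in patterns_by_type.items()
              for feet in feet_list]
    scored.sort(key=lambda pair: pair[0])
    tally = Counter(label for _, label in scored[:votes])
    label, count = tally.most_common(1)[0]
    # A strict majority of the ten lowest scores is required; a 5-to-5 tie or a
    # split with no majority leaves the image unclassified (returned as None).
    return label if count > votes // 2 else None
```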


4 Experimental Results

The results for the first experiment are summarized in Figure 6. The first image in each series is the training image; unsurprisingly, these were all correctly classified. Images 2-12 in each series (for each type) were drawn from the same distribution of images as the first, but no other information about those images was available to the training process. Only one image, number eleven in the layers series, was not correctly classified. The failure was not a misclassification but a failure to classify; the vote as to its identity was a tie between layers (the correct classification) and slanted trend, the image type, other than layers, that looked most like it.

Image No.   Clouds(1)   Mines(2)   Pipes(3)   Slanted Trend(4)   Layers(5)
1           1 (5)       2 (4)      3 (5)      4 (3)              5 (1)
2           1 (5)       2 (4)      3 (5)      4 (5)              5 (4)
3           1 (2)       2 (4)      3 (1)      4 (5)              5 (4)
4           1 (5)       2 (4)      3 (1)      4 (2)              5 (4)
5           1 (2)       2 (4)      3 (1)      4 (2)              5 (4)
6           1 (2)       2 (1)      3 (1)      4 (1)              5 (4)
7           1 (3)       2 (5)      3 (1)      4 (3)              5 (4)
8           1 (3)       2 (1)      3 (4)      4 (5)              5 (3)
9           1 (5)       2 (1)      3 (4)      4 (3)              5 (4)
10          1 (2)       2 (4)      3 (4)      4 (3)              5 (2)
11          1 (5)       2 (4)      3 (4)      4 (5)              5 (4)*
12          1 (5)       2 (4)      3 (2)      4 (2)              5 (4)
Errors      0           0          0          0                  1

*tie vote

Figure 6: Results of the first experiment. The table displays the majority result and the second most common classification according to the numbers given after the image types.

The foot patterns located by the genetic algorithm for the various image types are given in Figure 7. Notice that the foot patterns for the layers and slanted trend image types are similar. An even greater degree of similarity occurs in the foot patterns for clouds and filaments (filaments were not used in the first experiment). This similarity of foot patterns for clouds and filaments leads to the degraded performance of our technique in the second experiment.

The Second Experiment: Adding a Similar Image

The second experiment was done exactly as the first save that a sixth image type was added: filaments. Examining the foot patterns derived from this image type, shown in Figure 7, it is easy to see that many of the foot patterns for filaments are simple translations of those found for clouds. It is intuitive to conjecture that, as the size of the training image is increased, the entropy score of foot patterns should become more nearly translation invariant. This in turn suggests that the foot patterns shown for clouds and filaments differ only in their ability to detect edge effects of the images they were trained on.

The results of the second experiment, given in Figure 8, show near total confusion of the images from the clouds and filaments series. Twelve of twenty-four images of those two types are not correctly classified; there are ten misclassifications and two failures to classify. This is additional evidence that the foot patterns differed only as the result of special features of the particular image used in training.

clouds    mines    pipes    slanted trend    layers    filaments

Figure 7: Foot patterns for all six image series displayed, together with the number, in the upper left corner, of genetic algorithm runs for which the given foot pattern was the result.


In the section on future work we outline several methods for attempting to deal with this problem. On the bright side, performance on the other four image types remained unchanged.

Image No.   Clouds(1)   Mines(2)   Pipes(3)   Slanted Trend(4)   Layers(5)   Filaments(6)
1           1 (6)       2 (4)      3 (6)      4 (5)              5 (4)       6 (1)
2           6 (1)       2 (4)      3 (5)      4 (2)              5 (4)       6 (1)
3           6 (1)       2 (4)      3 (1)      4 (5)              5 (6)       1 (4)
4           1 (6)       2 (4)      3 (1)      4 (6)              5 (3)       1 (6)
5           1 (6)       2 (4)      3 (1)      4 (6)              5 (4)       6 (1)*
6           1 (6)       2 (1)      3 (4)      4 (2)              5 (4)       1 (6)
7           1 (6)       2 (5)      3 (1)      4 (5)              5 (2)       1 (6)
8           1 (6)       2 (1)      3 (4)      4 (2)              5 (1)       6 (1)
9           6 (5)       2 (1)      3 (1)      4 (1)              5 (6)       6 (1)*
10          6 (1)       2 (4)      3 (5)      4 (5)              5 (4)       1 (6)
11          1 (6)       2 (4)      3 (4)      4 (3)              5 (4)*      6 (1)
12          6 (1)       2 (4)      3 (5)      4 (6)              5 (2)       6 (1)
Errors      5           0          0          0                  1           7

*tie vote

Figure 8: Results of the second experiment. The table displays the majority result and the second most common classification according to the numbers given after the image types.

5 Conclusions

The technique outlined in this research shows great promise. It achieves very good results on the simple test images used in this research. The foot patterns generated bear obvious geometric relations to the images they were trained on. The foot patterns found for the mines series, for example, detect the direction in which the black circles are slanted, while those for clouds and filaments both home in on the checkerboarded parts of their respective images. While this geometric plausibility is likely at least partly an artifact of the simple images the foot patterns were trained upon and the small size of the foot patterns used, we expect some geometrical relationship to be detectable even on more complex or broader image types. This gives our technique one advantage over such black-box techniques as neural nets: it is possible for the researcher to gain an intuitive understanding of how the trained, automated classifiers are doing their work and to detect, perhaps more easily, problems in the system.

An automated classifier that is trained on a single example must be closely examined. We will try to explain why our technique worked as well as it did when the automated classifiers were trained on single examples. In work on natural images such single-example training would clearly be a poor idea. The images used in this research were generated by entirely local processes (partially ordered Markov models, or POMMs) in the case of all the image series except mines. The mines image series was generated by random placement of small, similar objects and hence had a very "local" character as well. This means that one image consists of a large number of samples of the underlying local process used to generate the image.

We took care to make the process of classification in this research entirely automatic. The image types used were chosen for their diversity (save for filaments, which was chosen for sharing large checkerboard regions with clouds) before any foot patterns were derived with the genetic algorithm.

The classification technique was fixed before any images were actually classified with it. This was done with the idea of demonstrating that the technique given was, to some degree, general purpose. We avoided fiddling with the technique to improve its performance and attempted to demonstrate the power of the technique in the absence of fine-tuning. Fine-tuning and exploration of the technique are discussed in the section on future work.

It is mathematically obvious that the accuracy of the classification process given in this paper is a non-increasing function of the number of image types included. Deciding between only two image types would ensure that there were only two types of votes and so there would be fewer failure modes. Empirically, we observe that low entropy foot patterns relative to a particular image (so long as that image was not produced by flipping a coin to color each pixel) are rare in the collection of all foot patterns. This gives some hope that inclusion of additional image types need not decrease the accuracy of the technique too quickly. In addition, the results obtained by adding a similar image suggest that simply checking the pairwise maximum coincidence of foot patterns (the maximum number of overlapping feet under all translations of one foot pattern relative to the other) might give an indication of the potential for misclassification.
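A brief Python sketch of this maximum pairwise coincidence check, under our assumption that foot patterns are given as (dx, dy) displacements and that all integer translations are allowed:

```python
def max_coincidence(feet_a, feet_b):
    """Maximum number of feet of pattern b that land on feet of pattern a,
    taken over all integer translations of b relative to a."""
    set_a = set(feet_a)
    best = 0
    # Any translation with a nonzero overlap maps some foot of b onto some foot
    # of a, so it suffices to try exactly those translations.
    for ax, ay in feet_a:
        for bx, by in feet_b:
            dx, dy = ax - bx, ay - by
            overlap = sum((x + dx, y + dy) in set_a for x, y in feet_b)
            best = max(best, overlap)
    return best
```

For 7-7 foot patterns this value ranges from 0 to 7; a high value for patterns trained on two different image types would flag the kind of clouds/filaments confusion seen in the second experiment.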

6 Future Work

The central motivation given in this paper for future work is the experimenters' ability to confound their own classification technique. The filaments image series was selected by looking at the foot patterns that the genetic algorithm produced for the clouds series. We noted that the feature of the clouds series that seems to have produced the foot patterns was the alternating checkerboard pattern of pixels. Routine mathematical investigation shows that an entropy minimizer of the sort embedded in our genetic algorithm experiences a strong pressure to place its feet on exactly one color of the checkerboard for any given window placement. We then generated randomly selected POMM-based textures until one appeared that (i) looked quite different from the clouds texture and (ii) had large regions filled with checkerboard pixels. As suspected, the image type so selected badly confused our classification technique.

It is a trivial matter to distinguish filaments and clouds from one another: examine the fraction of black pixels. Were we to augment our technique to include such first-order statistics it would have no trouble correcting its classification. Other such a priori information could also be used as part of an extended classifier for specific classification problems. However, statistics based on the density of black or white pixels (or grey scales) can be confounded by simple changes such as increased illumination, and this type of issue must be considered when designing automated classifiers for natural scenes.

One technique that may improve performance is to use more feet in a foot pattern. A foot pattern with a large number of feet would not, for example, gain as much of an advantage from staying on one color of a checkerboard as the ones presented in the current research. Use of more feet would require some modification of the software. At present there are 2^7 = 128 possible patterns of black and white pixels that may appear under the feet of one of our foot patterns. It is practical (in terms of storage space) simply to store the event count histogram in a one-eighth kilobyte array. If the number of feet were increased to half the cells in the current 7 × 7 window, then there would be 2^24 = 16,777,216 possible events, requiring a roughly seventeen megabyte array. Larger windows simply make the situation even worse. Since large numbers of feet (or an increase in the number of grey levels, which causes similar problems) are desirable in future research, it will be necessary to store the event counts in a sparse structure of some sort. This follows from the fact that, on an image of even marginally reasonable size, almost all the events in the sample space will never occur.


A second technique we think may improve performance is the creation of special tie-breaker foot patterns when (automatic) examination of the foot patterns for a pair of image types shows considerable maximum pairwise coincidence. These tie-breaker foot patterns would be created by modifying the genetic algorithm to use a fitness function involving the entropy for both training images. The genetic algorithm would minimize the entropy for one image type inversely modified by (e.g., divided by) the entropy for the other image type. This would produce foot patterns that achieved somewhat low entropy scores for one image type while obtaining somewhat high scores for the other.

Generalization of the classification technique presented in this research to natural images (or any type of image not generated by a local process) will require expansion of the training set. It seems that one way to do this is simply to run the window of the foot pattern over the multiple images in the training set and minimize the entropy thus obtained.

Finally, we note that increasing the number of feet or the window size would enlarge our search space from its current fairly manageable C(49, 7) = 85,900,584 possible foot patterns to search spaces of astronomical size. Relative to the demonstration problems presented in this paper, the genetic algorithm is not the best search heuristic if we wish to locate the optimum (low entropy) foot patterns for a given image type. Genetic algorithms are, however, tolerant (show relatively modest performance degradation) as search space size increases [12, 13]. With this in mind we have intentionally designed our technique to work with a sampling of local optima and voting among the resulting possibly locally optimal foot patterns rather than to attempt to find the "golden" foot pattern for each image type. This gives us better hope of a technique that scales well to more complex image types.
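As a minimal sketch of the sparse event-count storage suggested earlier in this section (our illustration, not the authors' software), a dictionary keyed by the bit pattern observed under the feet stores only footprints that actually occur, so memory grows with the number of window placements rather than with the 2^k possible footprints:

```python
from collections import defaultdict
from math import log2

def sparse_footprint_entropy(image, feet, n):
    """Entropy of a foot pattern with many feet; `feet` are (row, col) locations
    inside the n x n window and `image` is a list of rows of 0s and 1s."""
    rows, cols = len(image), len(image[0])
    counts = defaultdict(int)
    for r in range(rows - n + 1):
        for c in range(cols - n + 1):
            # Pack the k binary values under the feet into an integer key;
            # only keys that actually occur are ever stored.
            key = 0
            for dr, dc in feet:
                key = (key << 1) | image[r + dr][c + dc]
            counts[key] += 1
    total = sum(counts.values())
    return -sum((m / total) * log2(m / total) for m in counts.values())
```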

References

[1] Dan Ashlock, Mark D. Smucker, E. Ann Stanley, and Leigh Tesfatsion. Preferential partner selection in an evolutionary study of prisoner's dilemma. BioSystems, 37:99–125, 1996.

[2] N.A.C. Cressie and J.L. Davidson. Image analysis with Partially Ordered Markov Models. Preprint No. 94-15, Statistical Laboratory, Iowa State Univ., Ames, IA, 1994.

[3] G.R. Cross and A.K. Jain. Markov random field texture models. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5(1):25–39, Jan. 1983.

[4] J. Davidson, A. Talukder, and N. Cressie. Texture analysis using partially ordered Markov models. In Proceedings, IEEE International Conference on Image Processing-94, pages 402–406, Austin, TX, 1994.

[5] J. L. Davidson, Xia Hua, and D. Ashlock. A comparison of genetic algorithm, regression, and Newton's method for parameter estimation of texture models. In Proceedings, IEEE Southwest Symposium on Image Analysis and Interpretation, pages 201–206, San Antonio, TX, 1996.

[6] C. Engebretson, J. Davidson, and D. Ashlock. Genetic algorithms and Metropolis algorithm for model selection. Technical Report, Dept. of Electrical and Computer Engineering, Iowa State Univ., Ames, IA, 1996.

[7] C. Engebretson, J. Davidson, and D. Ashlock. Genetic algorithms for texture identification and synthesis. In E. R. Dougherty, F. Prêteux, and J. L. Davidson, editors, Proceedings of the SPIE, International Symposium on Statistical and Stochastic Methods for Image Processing, volume 2823, pages 21–30, Denver, CO, Aug. 1996.

[8] David E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Publishing Company, Inc., Reading, MA, 1989.

[9] R.C. Gonzalez and R.E. Woods. Digital Image Processing. Addison-Wesley, New York, 1992.

[10] J.K. Goutsias. Mutually compatible Gibbs random fields. IEEE Transactions on Information Theory, 35(6):1233–1249, Nov. 1989.

[11] John H. Holland. Adaptation in Natural and Artificial Systems. The MIT Press, Cambridge, MA, 1992.

[12] John R. Koza. Genetic Programming. The MIT Press, Cambridge, MA, 1992.

[13] Timothy Trent Maifeld. Genetic-Based Unit Commitment Algorithm. PhD thesis, Iowa State University, Ames, IA, 1995.

[14] L. Onural. Generating connected textured fractal patterns using Markov random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(8):819–825, Aug. 1991.

[15] W. Qian and D.M. Titterington. Multidimensional Markov chain models for image textures. Journal of the Royal Statistical Society B, 53(3):661–674, 1991.

[16] M. Rebial, H. Pavie, F. Pinson, and A. Smolarz. Simulation of a texture synthesizer for HDTV production. In International Broadcasting Convention - Proceedings, pages 395–399, Amsterdam, Jul. 1992. Institution of Electrical Engineers.

[17] E. Sali and H. Wolfson. Texture classification in aerial photographs and satellite data. International Journal of Remote Sensing, 13(18):3395–3408, Dec. 1993.

[18] Gilbert Syswerda. A study of reproduction in generational and steady state genetic algorithms. In Foundations of Genetic Algorithms, pages 94–101. Morgan Kaufmann, 1991.

[19] A. Talukder. Partially ordered Markov models for texture synthesis and classification. Master's thesis, Dept. of Electrical and Computer Engineering, Iowa State Univ., Ames, 1994.

[20] A. Talukder and J. Davidson. Model selection and texture segmentation using partially ordered Markov models. In Proceedings, IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 2527–2530, Detroit, MI, 1995.

[21] Astro Teller. The evolution of mental models. In Kenneth Kinnear, editor, Advances in Genetic Programming, chapter 9. The MIT Press, 1994.

[22] Darrell Whitley. The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. In Proceedings of the 3rd ICGA, pages 116–121. Morgan Kaufmann, 1989.

[23] C.M. Wu and Y.C. Chen. Multi-threshold dimension vector for texture analysis and its application to liver tissue classification. Pattern Recognition, 26(1):137–144, Jan. 1993.

[24] F. Zhang. Applying genetic algorithms to binary mine data. Master's thesis, Dept. of Mathematics, Iowa State Univ., Ames, 1997.
