Chick feather pattern recognition


Y. Tao, Z. Chen and C.L. Griffis

Abstract: A crescent model is proposed for chick wing image processing and feather pattern recognition, thereby implementing chick sex separation by machine vision technology. The crescent shape delineates the region of interest in a wing image by an arc of large radius and an arc of small radius at two off-centred circles. Wing feathers are divergently distributed in the crescent region, manifesting as an oriented stripe pattern. Male chick feathers gradually change in length from short to long and then to short in accordance with the crescent envelope. Female chick feathers alternate the stripe lengths, following a long-short-long stripe pattern. Based on this knowledge, a chick feather pattern can be numerically characterised by a stripe length sequence and a stripe endpoint sequence. For pattern classification, the first-order differences of these two sequences are used. The mean value of the stripe endpoint difference sequence is the most efficient feature in male-female chick classification. Experimental results justified the model and feature selection strategy, and showed the feasibility of automatic chick sex separation.

1

Introduction

Chick wing feather patterns, which differ between the sexes as a result of genetic encoding, have been widely used for chick sex identification [1-3]. Research indicates that the two sexes of broiler chicks have different nutritional requirements [1], and sex segregation of baby chicks provides significant benefits to the poultry industry. Commonly, sex separation is accomplished by manually opening the wings of each chick and inspecting the feather patterns to determine the sex. A female chick's wing consists of alternating long and short feathers with an uneven distribution of feather endpoints; in contrast, the feather lengths in a male chick's wing gradually change from short to long and then to short, from one side to the other [1, 4]. Overall, the feathers are confined in a fan-like region in the wing image, modelled as a crescent shape in this paper. In terms of image recognition, a feather pattern is an oriented pattern [5], which bears some resemblance to a fingerprint [6], a vasculature [7], or a texture pattern [8]. However, the feather pattern has its peculiarities: (1) its region of interest (ROI) is confined to a crescent region; (2) its orientations are divergently distributed. Consequently, feather pattern recognition cannot be efficiently achieved by existing techniques commonly used for fingerprint identification and texture analysis [5-8]. In this work, we report a crescent model dedicated to chick feather pattern recognition, and thereby show the feasibility of automatic chick sex separation by machine vision technology.

© IEE, 2004
IEE Proceedings online no. 20040730
doi: 10.1049/ip-vis:20040730
Paper first received 6th January and in revised form 29th April 2004
Y. Tao is with the Department of Biological Resource Engineering, University of Maryland, College Park, MD 20742, USA
Z. Chen was with the University of Arkansas and is now with the Department of Radiology, University of Rochester, Box 648, Rochester, NY 14642, USA
C.L. Griffis is with the Department of Biological and Agricultural Engineering, University of Arkansas, 203 Engineering Hall, Fayetteville, AR 72701, USA
IEE Proc.-Vis. Image Signal Process., Vol. 151, No. 5, October 2004

Automatic feather-based sex separation involves chick wing image acquisition and feather recognition. Evans [1] and Jones et al. [2] used visible light to acquire wing feather images. Tao and Walker [4] proposed using near-ultraviolet light for wing image acquisition. Following image acquisition are the tasks of image processing and pattern recognition. Since a local region in a divergent stripe pattern can be considered approximately as a set of parallel stripes, directional filtering [6] can be used for image enhancement. For feature extraction, we calculate feather lengths and feather endpoints and collect these data in two sequences, hence numerically representing the feather-based chick sex separation knowledge that is used by human visual inspection. Based on the numerical features, chicks can be automatically classified into male and female classes.

2

Crescent model

The diagram for automatic chick sex separation is depicted in Fig. 1. It consists of an electro-optic imaging system for wing image acquisition. Baby chicks on the conveyer are illuminated with an ultraviolet light source, and their wing images are captured using a CCD camera that senses the near-ultraviolet spectrum. Since the feathers in a wing image acquired under ultraviolet lighting have strong contrast with respect to the background or surroundings, the feather patterns can easily be segmented using a simple thresholding operation. For pattern recognition, the feathers in a wing image form an oriented pattern [5] because of the local orientation of feather stripes. Therefore, directional filtering may enhance the image, as will be reported later. The simplest way to segment feather stripes in a wing image is through thresholding. In the binary image, feather stripes are characterised in terms of locations, orientations, and lengths. To characterise the partial fan-like region and the divergent feather stripes, we propose a crescent model. This model defines a crescent region that is enclosed by two off-centred arcs: an outer arc associated with a small radius (r) and an inner arc associated with a large radius (R), as shown in Fig. 2. In this figure, Fig. 2a and 2b illustrate the feather patterns for male and female chicks, respectively. The length of a feather stripe is confined by the crescent region, and the stripe orientation is divergently distributed

Fig. 1 Diagram of feather-based chick sex separation system

with respect to a common point (O) on the line connecting the two arc centres (O_r and O_R). The crescent model is very convenient for chick wing image processing and feather pattern recognition. First, it serves as a guide to locate the region of interest (ROI) in a wing image. The ROI can be localised quickly using image decimation and multiresolution techniques [9, 10]. For efficient digital image analysis, it is convenient to crop the ROI in a bounding box big enough to contain all the feathers with ample margins. Second, the crescent model describes, by a crescent region, the divergent orientation of feathers resulting from the interception of radial lines. Specifically, feather lengths and locations are confined by the two arcs of the crescent envelope, and feather orientation is determined by the location of the emanating centre. In general, this model captures the conspicuous features of chick feathers. In practice, the chick's wing presentation to the camera cannot guarantee consistent and precise alignment, owing to a living chick's instinctive movements on the moving conveyer. Thus, image rotation is needed during wing image processing. Figure 2b, for example, shows the crescent shape of Fig. 2a rotated by θ. With this rotation, the line connecting the two arc centres deviates from the vertical direction. Besides confinement of the crescent region, the model also portrays the gradual change of feather orientation when sweeping from one side to the other in a wing image.
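As a concrete reading of the model, membership of a point in the crescent region can be tested against the two off-centred circles: the point must lie inside the small-radius circle whose arc forms the outer boundary, and outside the large-radius circle whose arc forms the inner boundary. The centres and radii below are hypothetical placeholders, not values from the paper; a minimal sketch:

```python
import math

def in_crescent(p, outer_centre, r, inner_centre, R):
    """Crescent-region membership test: inside the small-radius
    circle (centre O_r, radius r) whose arc is the outer boundary,
    and outside the large-radius circle (centre O_R, radius R)
    whose arc is the inner boundary."""
    return math.dist(p, outer_centre) <= r and math.dist(p, inner_centre) >= R

# Hypothetical geometry: outer circle of radius 5 at the origin,
# inner circle of radius 6 centred 4 units below it.
O_r, r = (0.0, 0.0), 5.0
O_R, R = (0.0, -4.0), 6.0
```

With this geometry, a point near the top of the small circle, e.g. (0, 4), falls inside the crescent, while the shared central region does not.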

Fig. 2

Each feather is a line segment within the crescent region, defined by an upside endpoint and a downside endpoint. The upside endpoint can be used as the reference point for measuring feather location and feather length, and is referred to as the feather endpoint. To extract conspicuous features for feather pattern recognition, we use only two feather parameters: feather length and feather endpoint coordinates. Figure 3a illustrates a wing image consisting of eight feathers with lengths {l_i, i = 1, 2, ..., 8} and feather endpoint coordinates {(x_i, y_i), i = 1, 2, ..., 8}. The simplest way to designate the feather sequence is based on the x coordinates of the feather endpoints. However, the feather sequence may be out of order due to image rotation resulting from chick movement. One case is illustrated in Fig. 3b, where the feather sequence appears to be {l_1, l_3, l_2, l_4, ..., l_8} since x_3 < x_2. If the image is appropriately rotated back, the correct order {l_1, l_2, l_3, l_4, ..., l_8} may be obtained. It would be time-consuming to find the rotation angle θ if we strictly followed the crescent model in Fig. 2b. In practice, the image rotation angle can be determined by a technique illustrated in Fig. 3b, which seeks a chord line of the crescent shape from the wing image. The image rotation is carried out as follows. First, we find the down-endpoint coordinates of the outermost feathers on the left and right sides, for example (x_1, y_1) and (x_8, y_8) in Fig. 3b, which form a chord on the arcs. The image rotation angle φ is defined as the angle formed by the chord line and the x axis, i.e.

φ = arctan[(y_8 − y_1)/(x_8 − x_1)]    (1)

Then, we define the rotation centre (O′) as the middle point of the chord. Finally, we rotate the image around the rotation centre by the angle φ, a counteraction to remove the obliqueness of the wing image. In the rotated image, the feathers can be labelled by a sequence in the order of the feather tip coordinates. Concerning the crescent model for chick feather pattern recognition, we should point out that we do not need to determine the arcs and emanating centre for the crescent

Fig. 2 Crescent model for chick feather patterns
a Male chick feather pattern
b Female chick feather pattern


Fig. 3 Wing image processing

Since the feather pattern is an oriented pattern, it can be enhanced by directional filtering or directional smoothing [5, 6], which can be performed in either the space domain or the frequency domain. In the space domain, the filter is a digital window, and the filtering process is described by convolution between the wing image and the window. In accordance with the divergence of the feather stripes, we design directional filters as shown in Fig. 5a. Specifically, a bank of nine filters, {h_k, k = 1, 2, ..., 9}, is used to accommodate the divergent orientation. We illustrate their digital implementation in a 7 × 7 window in Fig. 5b, where each digital filter is a 7 × 7 window containing a digital line. The pixels on a digital line are assigned the same non-zero value, or the pixel in question (at the centre) may be overweighted with a larger value. To maintain image energy after convolution, it is necessary to normalise the filter, i.e. to ensure that the sum of its entries is 1. In digital geometry, the pixels constituting a digital line may populate the window in a 'jagged' manner, such as h_3 (the filled pixels) in Fig. 5b. Directional filtering with the directional filter bank is expressed by

a Illustration of feather endpoint coordinates {(x_i, y_i), i = 1, 2, ..., 8} and feather lengths {l_i, i = 1, 2, ..., 8}
b Determination of wing image rotation angle φ and rotation centre O′
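The rotation correction of eq. (1) — determining φ from the outermost down-endpoints and de-rotating about the chord midpoint O′ — can be sketched as follows. `derotate_points` is a hypothetical helper that operates on endpoint coordinates rather than on pixels, and uses `atan2` for numerical robustness; the sign of the rotation depends on the image coordinate convention:

```python
import math

def derotate_points(points, left_end, right_end):
    """Correct wing-image obliqueness following eq. (1): phi is the
    angle of the chord joining the outermost feathers' down-endpoints;
    the points are rotated by -phi about the chord midpoint O'."""
    (x1, y1), (x8, y8) = left_end, right_end
    phi = math.atan2(y8 - y1, x8 - x1)          # eq. (1), atan2 form
    cx, cy = (x1 + x8) / 2.0, (y1 + y8) / 2.0   # rotation centre O'
    c, s = math.cos(-phi), math.sin(-phi)       # counteracting rotation
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + c * dx - s * dy, cy + s * dx + c * dy))
    return phi, out
```

For a chord from (0, 0) to (10, 10), φ is 45° and the de-rotated chord becomes horizontal, so the x-ordering of the endpoints is restored.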

g_i(x, y) = f(x, y) * h_i(x, y),   i = 1, 2, ..., 9    (2)
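The filter bank of eq. (2), combined with the pixelwise maximum of eq. (3) below, can be sketched in numpy alone. The 7 × 7 digital-line kernels and the bank of nine filters follow the paper; the particular angle spacing and the plain-loop convolution are illustrative choices, not the authors' implementation:

```python
import numpy as np

def line_kernel(theta, size=7):
    """A size x size directional filter: a digital line through the
    centre at angle theta, normalised so the entries sum to 1
    (preserving image energy after convolution)."""
    k = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(-c, c, 4 * size):
        i = int(round(c + t * np.sin(theta)))
        j = int(round(c + t * np.cos(theta)))
        k[i, j] = 1.0
    return k / k.sum()

def conv2_same(img, k):
    """Plain 'same'-size 2-D convolution with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    kf = k[::-1, ::-1]  # flip the kernel for true convolution
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * kf)
    return out

def directional_filter(img, n_filters=9):
    """Eqs. (2)-(3): convolve with each filter in the bank, then take
    the pixelwise maximum over the bank."""
    # 0..180 degrees inclusive (0 and 180 coincide; harmless here)
    thetas = np.linspace(0.0, np.pi, n_filters)
    bank = [conv2_same(img, line_kernel(th)) for th in thetas]
    return np.max(np.stack(bank), axis=0)
```

A vertical bright stripe survives the filtering at full strength (the vertically oriented kernel integrates along the stripe), while off-stripe pixels stay near zero, which is the enhancement effect sought for the oriented feather pattern.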

model from a feather pattern. The usefulness of the crescent model is that it suggests an ROI shape and an oriented stripe pattern.

3

Feather image processing

With the digital image acquired by the electro-optical system in Fig. 1, our next task is to perform wing image processing and feather pattern recognition following the flowchart shown in Fig. 4. The raw image from the CCD camera is large because the CCD captures the wing as only part of its field of view. With a large raw image, the immediate task is to locate the ROI and crop it. Since the feathers have high-contrast intensities against their surroundings, it is easy to define the ROI by simple thresholding, with the threshold determined from the intensity histogram. In the binary image, the ROI of chick feathers assumes a crescent region, which can be cropped from the raw image using a bounding box. Fast ROI localisation can be implemented using image decimation and the 'from presence to classification' model [9, 10]. It is convenient to define a bounding box that is large enough to enclose the ROI with an ample margin. With the ROI, we can determine the rotation angle using (1) and then perform the image rotation. In the rotated image, we update the ROI and then crop it. Henceforth, the cropped ROI represents the wing image that will be used for feather pattern recognition.
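The threshold-and-crop step just described can be sketched as below. In practice the threshold would come from the intensity histogram of the UV image; here it is simply passed in, and the margin value is an illustrative choice:

```python
import numpy as np

def crop_roi(img, threshold, margin=5):
    """Locate the feather ROI by thresholding (feathers are bright
    against the background under UV lighting) and crop a bounding
    box around the foreground with an ample margin, clipped to the
    image borders."""
    ys, xs = np.nonzero(img > threshold)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, img.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, img.shape[1])
    return img[y0:y1, x0:x1]
```

For a 10 × 30-pixel bright region and a margin of 5, the crop is a 20 × 40 window containing the whole region.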

f̂(x, y) = max{g_i(x, y), i = 1, 2, ..., 9}    (3)

where '*' represents convolution and max{·} denotes selecting the maximum pixel value. Since directional filtering produces a bank of images in (2), it costs time and memory in image processing. In the pursuit of speed, we should reduce the number of filters and the window size. Following directional filtering comes the thresholding operation, which produces a binary feather pattern. The threshold can easily be determined from the histogram of

Fig. 4 Flowchart of wing image processing and recognition

Fig. 5 Directional filters for wing image enhancement
a Orientations of the filter bank {h_i, i = 1, 2, ..., 9}
b Digital implementations in a 7 × 7 window

the wing image. For the sake of stripe computation, we extract stripe skeletons from the binary image by a thinning algorithm [8]. Owing to interference from the soft fluffy feathers known as ‘down’, and other clutter during wing image acquisition, breaks and bridges exist in the resultant

Fig. 6 Typical example of feather image processing applied to a female chick wing
a Wing image cropped from raw image after image rotation
b Binary image resulting from thresholding
c Skeletons generated by thinning
d Trimmed skeletons and feather endpoints (marked by '+')

binary feather image. Therefore, a trimming procedure [7] is used to remove spurs and to bridge the breaks in the skeletons. Figure 6 shows the image processing applied to a typical female chick wing image, and Fig. 7 its application to a male chick wing image.

Fig. 7 Typical example of male chick wing image processing
a Wing image cropped from raw image after image rotation
b Binary image resulting from thresholding
c Skeletons generated by thinning
d Trimmed skeletons and feather endpoints (marked by '+')
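Each feather's length is later measured as a path length along its trimmed skeleton, in pixel units. Assuming the thinning and trimming above have produced a clean one-pixel-wide skeleton for a single stripe, the path length can be approximated by counting 8-connected steps: 1 for an axial neighbour, √2 for a diagonal. This helper is an illustrative sketch, not the paper's exact procedure:

```python
import numpy as np

def skeleton_path_length(skel):
    """Approximate path length of a single one-pixel-wide skeleton:
    each 8-adjacency between skeleton pixels is counted once, with
    weight 1 for axial steps and sqrt(2) for diagonal steps."""
    ys, xs = np.nonzero(skel)
    pts = set(zip(ys.tolist(), xs.tolist()))
    length = 0.0
    for (y, x) in pts:
        # scan only half the neighbourhood so each edge counts once
        for dy, dx, w in ((0, 1, 1.0), (1, 0, 1.0),
                          (1, 1, 2 ** 0.5), (1, -1, 2 ** 0.5)):
            if (y + dy, x + dx) in pts:
                length += w
    return length
```

A five-pixel diagonal skeleton thus measures 4√2 ≈ 5.66 pixels, and a five-pixel vertical skeleton measures 4 pixels.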


4

Feather feature extraction

As suggested by the knowledge used in human visual chick sex separation, we choose feather endpoints and feather lengths to characterise a feather pattern. A feather length sequence, l = {l_1, l_2, ..., l_N}, can be constructed from a feather endpoint sequence, {[x_1, y_1], [x_2, y_2], ..., [x_N, y_N]}, in the order of x-coordinates from one side to the other, i.e. x_1 < x_2 < ... < x_N, where N denotes the number of feathers and l_i denotes the length of the ith feather. Each feather length is calculated as the path length along a skeleton, in units of pixels. From the endpoint sequence, we use the sequence of y-coordinate components, y = {y_1, y_2, ..., y_N}, to describe the uneven distribution of the feather endpoints. As a result, we use these two sequences, y and l, to describe a feather pattern under the crescent model. For each sequence we can calculate the mean and variance. Specifically, the mean m(l) and variance var(l) of l, for example, are given by

m(l) = (1/N) Σ_{n=1}^{N} l_n    (4)

var(l) = (1/N) Σ_{n=1}^{N} [l_n − m(l)]²    (5)

In a similar way, we calculate the mean m(y) and variance var(y) for y. From the crescent model in Fig. 2, one can observe that the changes in feather endpoints or in feather lengths between adjacent feathers in a female chick wing are larger than those in a male chick wing, but the magnitude of change is relatively constant for each sex. This observation reveals that the feather pattern can be more efficiently characterised by the difference sequences Δy and Δl than by the primary sequences themselves (y and l). The difference sequences Δy and Δl are calculated by

Δy = {|Δy_1|, |Δy_2|, ..., |Δy_{N−1}|},   Δy_i = y_{i+1} − y_i    (6)

Δl = {|Δl_1|, |Δl_2|, ..., |Δl_{N−1}|},   Δl_i = l_{i+1} − l_i    (7)

where i = 1, 2, ..., N − 1. Using (4) and (5), one can obtain the means and variances for Δy and Δl, i.e. m(Δy), var(Δy), m(Δl) and var(Δl). According to the crescent model, a female chick is expected to yield larger values of m(Δy) and m(Δl) than a male chick. Although var(Δy) and var(Δl) differ between the two sexes, they describe the relatively constant characteristics of Δy and Δl for each sex. Treating these statistics as features, one can perform chick feather classification. The appropriateness of a single feature can be estimated from its classification performance on the two-class (male and female) problem. In principle, pattern classification can be improved as more features are utilised. However, more features lead to high-dimensional problems, i.e. the feature space is spanned by high-dimensional basis vectors. For feather pattern recognition, the feature space could be spanned by four features: m(Δy), var(Δy), m(Δl) and var(Δl). In practice, by choosing the best feature, we can accomplish feather pattern recognition using a single feature, m(Δy), as reported in the experimental results. To show the features extracted from feather patterns, Fig. 8 provides one example, where the numerical features y, l, Δy and Δl are extracted from the female chick feather image in Fig. 6. Figure 9 shows another example, using the male chick feather image of Fig. 7. The values of [m(Δy), var(Δy), m(Δl), var(Δl)] corresponding to the female chick (Fig. 8) and the male chick (Fig. 9) were

Fig. 8 Numerical feather features of the wing image in Fig. 6
a Feather tip sequence
b Difference version of a
c Feather length sequence
d Difference version of c


Fig. 9 Numerical feather features of the wing image in Fig. 7
a Feather tip sequence
b Difference version of a
c Feather length sequence
d Difference version of c

[28.77, 11.52, 28.92, 14.70] and [4.91, 3.91, 4.46, 4.81], respectively.

5

Experimental results

Experiments were conducted at Tyson Foods, Incorporated. A total of 88 chick samples were randomly divided into two groups: a training group of 20 samples and a test group of 68 samples. Figures 10 and 11 show some experimental images of the chick samples, which were cropped to an ROI of 180 × 120 pixels. During the training process, various features of the wing images were investigated for chick sex separation. The best feature for chick sex separation determined in the training stage was used to classify the test group. In both training and test

Fig. 10

stages, sex separation results were confirmed by human visual inspection of the chick wings. The feather classification results are shown in Fig. 12, which allows the classification performance of each feature to be evaluated easily. The feature m(Δy) is the most appropriate feature for chick sex separation; with it, the two classes can be separated by a linear partition associated with a threshold value. For the training set, the m(Δy) threshold for chick sex separation was determined to be 15.4, as indicated by the partition line in Fig. 12f. The next most appropriate feature is m(Δl), as shown in Fig. 12c; however, it cannot dichotomise the feature space with a straight boundary line. The worst cases are associated with the features m(y), var(Δy), m(l) and var(Δl); for these, no linear

Experimental images of female chick wings (cropped to a 180 × 120 ROI)

Fig. 11 Experimental images of male chick wings (cropped to a 180 × 120 ROI)

Fig. 12 Results of chick feather pattern recognition generated from a training set of 20 samples (11 female and 9 male) and a test set of 68 samples (38 female and 30 male)
The numerical features plotted are: a m(l); b var(l); c m(Δl); d var(Δl); e m(y); f m(Δy); g var(Δy); h [var(l), m(l)]; i [var(Δl), m(Δl)]; j [var(y), m(y)]; k [var(Δy), m(Δy)]
The lines in f and k represent the classification boundaries. The last plot, l, shows the classification result for the 68 test samples using the m(Δy) feature. In these plots, one marker type represents male chicks and the other female chicks, and the abscissa values in a-g and l are chick label numbers
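The feature computation of eqs. (4)-(7) and the linear partition on m(Δy) can be sketched as below. The endpoint and length sequences here are synthetic illustrations, not measured data; 15.4 is the training-set threshold reported above:

```python
import numpy as np

def delta_features(y, l):
    """Features of eqs. (6)-(7): absolute first differences of the
    endpoint sequence y and the length sequence l, then their means
    and variances (population form, as in eqs. (4)-(5))."""
    dy = np.abs(np.diff(np.asarray(y, dtype=float)))
    dl = np.abs(np.diff(np.asarray(l, dtype=float)))
    return {'m_dy': dy.mean(), 'var_dy': dy.var(),
            'm_dl': dl.mean(), 'var_dl': dl.var()}

def classify(m_dy, threshold=15.4):
    """Linear partition on the single feature m(dy); females yield
    larger endpoint differences, per the crescent model."""
    return 'female' if m_dy > threshold else 'male'

# Synthetic sequences: a female-like wing alternates long-short-long,
# a male-like wing changes gradually from one side to the other.
female_y, female_l = [10, 40, 12, 42, 11, 41, 13, 43], [60, 30, 62, 28, 61, 29, 63, 31]
male_y, male_l = [30, 25, 22, 20, 21, 24, 28, 33], [30, 40, 50, 60, 58, 48, 38, 30]
```

The alternating pattern produces a large m(Δy), landing on the female side of the partition, while the gradual pattern lands on the male side.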


partition for chick wing recognition is possible. Figure 12f shows that the 20 training images, from 11 female and 9 male chicks, are reliably separated by the feature m(Δy); this conclusion was verified by human visual inspection. Figures 12h-12k show some two-dimensional feature spaces for chick feather classification. Once again, Fig. 12k shows that m(Δy) is the best feature, able to yield a linear partition. In the test stage, the m(Δy) feature was extracted to classify 68 images; the results are shown in Fig. 12l. With the threshold value of 15.4 obtained in the training stage, the 68 test images were classified into 38 female and 30 male chicks. One of the female chicks was so close to the decision line that ambiguity was possible; this was due to feather underdevelopment and down interference. Again, the classifications of Fig. 12l were confirmed by human visual inspection. In general, the crescent model appropriately describes the feather pattern. In practice, feather overlapping, feather underdevelopment, and down interference may cause feature fluctuations in feature space. During wing image processing, feather stripe segmentation plays the key role in automatic implementation. In this paper we have adopted a thresholding-based technique; more robust feather segmentation may be achieved by a ridgeline (or valley-course) tracking technique [7]. For feature extraction, the appropriateness of a feature can be evaluated in terms of classification confidence or scores with reference to manual classification results. The experiments justified the crescent model for feather pattern recognition and showed the importance of the modelling and feature extraction strategy in pattern recognition.

6

Summary

Baby chicks can be separated according to sex by wing feather patterns. Chick sex separation is then a pattern recognition problem, which is essentially a special case of oriented pattern recognition. The feather pattern in a chick


wing can be appropriately modelled by a crescent model, which delineates the wing shape by a crescent region in which the feathers manifest as divergently oriented stripes. Each feather is characterised by its upside endpoint and its stripe length. The endpoint sequence and the length sequence provide the primary data to characterise a feather pattern. For feather-based sex discrimination, it was found that the difference versions of the two data sequences are more appropriate than the primary sequences themselves for feather pattern classification. Chick sex separation may thus be reduced to a simple linear classification problem by using the mean values of the difference sequences, demonstrating that an appropriate model and feature selection strategy may simplify the nonlinear transformations and high dimensionality in pattern classification. Experiments justified the feather pattern model and the feature selection strategy.

7

References

1 Evans, M.D.: 'Feather sexing of broiler chicks by machine vision'. Proc. American Society of Agricultural Engineering and Canadian Society of Agricultural Engineering Meeting, Paper no. 90-3008, 1989, pp. 25-28
2 Jones, P.T., Shearer, S.A., and Gates, R.S.: 'Edge extraction for feather sexing poultry chicks', Trans. ASAE, 1991, 34, pp. 635-640
3 Swatland, H.J., and Leeson, S.: 'Reflectance of chicken feathers in relation to sex-linked coloration', Poultry Sci., 1988, 67, pp. 1680-1683
4 Tao, Y., and Walker, J.: 'Automatic feather sexing of poultry baby chicks'. U.S. patent, file no. 60/076, 1997, p. 342
5 Kass, M., and Witkin, A.: 'Analyzing oriented patterns', Comput. Vis. Graph. Image Process., 1987, 37, pp. 362-385
6 Sherlock, B.G., Monro, D.M., and Millard, K.: 'Fingerprint enhancement by directional Fourier filtering', IEE Proc., Vis. Image Signal Process., 1994, 141, pp. 87-94
7 Chen, Z., and Sabee, M.: 'Multiresolution vessel tracking in angiographic images using valley courses', Opt. Eng., 2003, 42, (6), pp. 1673-1682
8 Russ, J.C.: 'The image processing handbook' (CRC Press, 2002, 4th edn.)
9 Chen, Z., Karim, M., and Hayat, M.: 'Locating target at high speed using image decimation decomposition processing', Pattern Recognit., 2001, 34, (3), pp. 685-694
10 Chen, Z., and Tao, Y.: 'Food safety inspection using "from presence to classification" object-recognition model', Pattern Recognit., 2001, 34, pp. 2331-2338
