FLEXIBLE NEURAL NETWORK CLASSIFIER FOR THE AUTOMATED DETECTION OF BONES IN CHICKEN BREAST MEAT

Catalin Amza*, Peter Innocent*, Mark Graves**, Jeffrey Knight*
* De Montfort University, Faculty of Computer Science and Engineering, The Gateway, Leicester LE1 9BH, United Kingdom; [email protected]
** Intelligent Manufacturing Systems, 45 Roman Way, Coleshill, Birmingham B46 1JT, United Kingdom; [email protected]
Abstract: This paper discusses the use of neural networks for the classification of potential defects found in x-ray images of chicken breast meat after the de-boning process. The chicken meat is passed under a solid-state x-ray sensor, which acquires a two-dimensional image of the chicken breast. A series of image processing operations is applied to the acquired image, identifying certain pixel groupings (blobs) as potentially containing a bone. The image processing task is a difficult one: the resulting segmented blobs represent not only correctly identified bones but also regions caused by overlapping muscle in the meat, which appear very similar to bones in the resulting x-ray image. A number of image processing measurements were made on each blob, and these features were used as the input to a neural network classifier whose function was to differentiate between bone and non-bone segmented regions. A standard Single Multi-Layer Perceptron (SMLP) network was used as the initial neural network architecture. Although it performed reasonably well in the classification task, it lacked flexibility in the sense that all bones were treated equally. A second classification scheme used a Two Stage Neural Network (TSNN) classifier, which is shown to be far more flexible with regard to its ability to be optimised for different bone classes.

Keywords: neural networks, x-ray detection, foreign body detection, pattern recognition.

1. Introduction.

In the production of meat, the single largest complaint is the presence of bones left in the final, supposedly bone-free product. Although this is a problem for all meat production, it is greater in poultry production than in beef, pork and lamb production. The reason is that the chicken carcass is much softer than that of a cow, so a bone is more likely to break off during the de-boning process.
The presence of bones is considered a far greater problem in products such as chicken sandwiches and ready meals, where, because the consumer eats the product with little or no further preparation, the expectation that the product should be bone-free is greater. Although the occurrence of bones in supermarket tray-packed chicken breast portions is a concern, it does not as yet attract the same concern as bones in ready meals or fast-food takeaway meals. The occurrence of bones is far greater (up to ten times for impacted wish bones) in those processing plants that have automated chicken de-boning equipment rather than the traditional but labour-intensive method of hand de-boning. As more processing plants become automated, the occurrence of bones is set to increase. Until recently, the only inspection technology available was the manual inspection of every chicken fillet for bone. In almost every chicken processing plant in the world, dozens of people stand at the end of the production line literally feeling every chicken piece by hand for the presence of bone. Such a monotonous task, coupled with the fact that after only a few minutes the inspector's hands become extremely cold, means that many bones which should be detected are allowed to pass through. In addition, there are many bones fully impacted in the meat which are impossible to detect by hand, even for a well-trained and observant inspector. It is therefore not surprising that there has recently been a great deal of interest in developing technology to automate the inspection task.
2. X-ray machine vision systems for automated bone detection.

To date, the only systems that have been used for the inspection of bones in meat have all been based on the use of x-rays. Systems based on ultra-violet and visual-wavelength cameras suffer from the fact that they can only detect bones on the surface of the meat; the penetration depth of ultra-violet systems is only a few mm. Camera systems have been tried which use high-powered illumination to pass light through the meat, but this has the undesirable effect of heating the meat, as well as being expensive to run due to the constant replacement of the lighting elements. Other techniques have been suggested but have been found unsuitable for reliable on-line detection in a poultry-processing environment; these include mechanical detection, laser irradiation and ultrasonic imaging. One of the authors has previously carried out a detailed review of alternative inspection technologies for the automatic detection of foreign bodies in food. X-ray transmission systems rely on x-rays passing through the product under inspection and impinging on a sensor which is responsive to x-rays. The system used for our experiments was the IMS BoneScan system. This system consists of a linear array of 1046 photodiode elements covered with an x-ray scintillating material. The linear array is sampled as the meat passes over the sensor; in this way a complete two-dimensional real-time x-ray image is acquired. An example of an x-ray image acquired in this way is shown in Figure 1. This image shows a piece of chicken breast meat containing a bone. The image is made more complex by the fact that part of the meat has folded back on itself; this is typical in real poultry plants, where the demands of high-volume throughput mean that it is impossible to align pieces of meat perfectly uniformly, as very often happens in laboratory-based demonstrations.
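The line-scan acquisition described above can be sketched as follows. This is a minimal illustration, not the BoneScan implementation: `sample_line` is a hypothetical stand-in for the sensor read-out, and the synthetic data merely has the right shape (1046 photodiode values per line).

```python
import numpy as np

def acquire_linescan_image(sample_line, n_lines):
    """Assemble a 2-D x-ray image by stacking successive reads of a
    linear sensor array (1046 elements in the BoneScan case).
    `sample_line` is a callable returning one 1-D array per belt step."""
    rows = [sample_line(i) for i in range(n_lines)]
    return np.vstack(rows)

# Hypothetical stand-in for the sensor read-out: synthetic grey levels.
rng = np.random.default_rng(0)
image = acquire_linescan_image(lambda i: rng.integers(0, 256, 1046), 400)
```

In a real system the number of lines stacked per image would be driven by the belt speed and the product length rather than a fixed count.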
Most commercial systems currently rely on simple thresholding of the image in order to segment the bones. This is only possible in applications where the meat is of uniform and constant thickness, such as the inspection of chicken nuggets after the nugget-forming stage.

Fig. 1 Chicken breast meat containing a bone

Ideally it is much better to inspect the meat whilst it is still a fillet, since at this stage a whole bone can be easily removed, whereas a single bone would result in many bone fragments at the nugget stage.
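The simple thresholding used by such systems can be sketched as below. The image, the grey levels and the threshold value are synthetic and purely illustrative; the premise is that in a transmission image bones absorb more x-rays and so appear darker than meat of uniform thickness.

```python
import numpy as np

def threshold_segment(image, level):
    """Global thresholding: with meat of uniform thickness, bone pixels
    transmit fewer x-rays and appear darker, so pixels below `level`
    are flagged as bone candidates. Returns a boolean mask."""
    return image < level

# Synthetic example: uniform 'meat' at grey level 180, darker 'bone'.
img = np.full((50, 50), 180, dtype=np.uint8)
img[20:25, 10:30] = 90          # simulated bone fragment
mask = threshold_segment(img, 120)
```

The weakness noted in the text follows directly: when the thickness varies (folded fillets), thick meat can be as dark as bone and a single global `level` no longer separates the two.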
Fig. 2 and 3 Morphological image processing transformations

In the IMS BoneScan system the acquired image undergoes a series of greyscale morphological image processing transformations, resulting in segmentation of the bone as well as of other potential bone regions. Two such transformations are shown in Figures 2 and 3, resulting in differing levels of bone and non-bone region segmentation. Combinations of such filters are applied, and features based on shape, intensity and local contrast are extracted for each blob. The purpose of this paper is to investigate different neural network approaches to classifying these blobs.

3. Neural Network Architectures.

Taking data from the system outlined in section 2, an investigation was made of the classification process, including both pre-processing of the raw data and selection of the training and testing sets. The detection problem follows the general pattern recognition paradigm. A general pattern recognition system comprises a preprocessing unit, a feature extraction unit, a classification unit and a context processor. The preprocessing unit separates the image into objects (blobs). From these objects the feature extraction unit extracts information in order to facilitate recognition. The classification unit identifies the class of the object. The context processor increases recognition accuracy by providing relevant information about the environment surrounding the object. In our application such context information could be, for example, that wish bones lie at the top thick-edge part of the breast fillet whereas fan bones lie towards the central inside edge of the fillet. A simplified neural-network-based pattern recognition system is presented in Figure 4: an image acquisition system, data preprocessing (normalising the data and extracting training and test sets), neural network training, neural network testing, and optional context post-processing.
Fig. 4 Neural network based pattern recognition system

A straightforward architecture, implemented in order to establish a baseline for future comparisons, was a feed-forward neural network, or single multi-layer perceptron (SMLP), trained to classify the available data. The problem with this architecture is its lack of flexibility. Different meat processing plants will have different types and levels of bones, depending on a number of factors such as the method of de-boning, the method of manual checking, the breed of bird, the age of the bird at slaughter, the diet of the bird prior to slaughter, and so on. Clearly the optimum set-up for a production plant manually de-boning meat and leaving mainly surface fan bones behind will be very different from that for a plant with automatic de-boning equipment leaving high levels of impacted wish bone behind. In order to allow greater flexibility, another architecture, a Two Stage Neural Network (TSNN), was proposed and tested. The idea for the latter was taken from McKenna et al., where a two-stage artificial neural network was used to detect abnormal cervical cells from their frequency domain image. The two stages were independent in the sense that the first stage completed its learning phase prior to the commencement of stage-two learning. This is a decomposition of the original problem in the traditional manner, following the old Roman principle "divide et impera" (divide and conquer). The first stage was a series of SMLP modules, one for each specialised type of bone introduced. The second stage was a voter taking the outputs of each of the SMLP modules as its input. Thus the TSNN architecture has two independent stages: the first comprises training the bone-specialised modules, and the second comprises training the specialised flexibility module according to a flexibility parameter which can be set by the final user at run-time.
Every module consists of a single MLP, as shown in Figure 5, trained with non-bone data and bone data from its own category. Training each module to also reject bone data from the other classes would have made the process more complex and difficult, with worse results.
After training, all weights are frozen for both stages and the prediction phase can commence. The two stages of learning can take place in parallel. After training, the same input (test) vector is fed into all bone-specialised modules, and their results are fed into the flexibility module along with the flexibility parameter. The output of the system is the output of the flexibility module.
Fig. 5 TSNN architecture
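A simplified sketch of the TSNN prediction path is given below. The stage-1 modules are reduced here to single logistic units, and the trained stage-2 voter is replaced by a hand-coded rule (flag a blob as bone if any bone class selected through the flexibility input scores above 0.5); both are illustrative assumptions, not the trained networks used in the paper.

```python
import numpy as np

def stage1_scores(modules, x):
    """Run one blob feature vector through every bone-specialised module.
    Each module is sketched as a single logistic unit (w, b); the real
    system uses a full MLP per bone class."""
    return np.array([1.0 / (1.0 + np.exp(-(w @ x + b))) for w, b in modules])

def flexibility_voter(scores, active):
    """Hand-coded stand-in for the trained stage-2 module: the blob is
    reported as bone if any *active* bone class (selected through the
    flexibility input) scores above 0.5."""
    return bool((scores[active] > 0.5).any())

# Hypothetical toy setup: 7 bone modules over 3 blob features.
rng = np.random.default_rng(1)
modules = [(rng.normal(size=3), 0.0) for _ in range(7)]
x = rng.normal(size=3)
scores = stage1_scores(modules, x)
decision = flexibility_voter(scores, active=np.array([3, 4, 5, 6]))  # 'HALF'
```

The point of the structure is visible even in this toy form: the stage-1 modules never change when the end-user changes the flexibility class; only the voting rule over their outputs does.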
Our data has seven different categories of bones (B1-B7; see Table 1), depending on their size and thickness. As some factories might be interested in only a few of these categories, the possibility of choosing at run-time the desired types of bones to be rejected was introduced. This information has to be set by the final user (the factory) prior to the learning phase, and it is transformed by the system into the input parameter for the flexibility module. Random sub-sampling at different rates was performed in order to find the most representative training and testing samples. Overall, the sub-samples were representative of the original sample with a confidence of between 85% and 95%. The best results were obtained for a sub-sampling rate of 80%, so this rate was chosen to generate training sets (80% of the available data) and test sets (the remaining 20%) from the different samples. The sample distributions also remained almost the same as the original sample distribution. This was important because it showed that the sub-samples are representative of the whole samples and that the sub-sampling process would not introduce unnecessary errors later in the training process.

Table 1  Samples used in the experiments

Sample        Data                                  No. of cases
ALL           All the data provided                 3258
ALL_BONES     Data provided for bone features       1101
ALL_NO_BONES  Data provided for non-bone features   2157
BONE1         Bones of importance 1                 77
BONE2         Bones of importance 2                 99
BONE3         Bones of importance 3                 236
BONE4         Bones of importance 4                 228
BONE5         Bones of importance 5                 293
BONE6         Bones of importance 6                 100
BONE7         Bones of importance 7                 68
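The 80%/20% random sub-sampling can be sketched as follows; the seed and the use of Python's `random` module are incidental choices, not details from the paper.

```python
import random

def subsample_split(cases, rate=0.8, seed=42):
    """Random sub-sampling at a given rate: `rate` of the cases form
    the training set and the remainder the test set (80%/20% here,
    the rate that gave the most representative sub-samples)."""
    rng = random.Random(seed)
    shuffled = cases[:]          # leave the original sample untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * rate)
    return shuffled[:cut], shuffled[cut:]

# 3258 cases in the ALL sample (Table 1).
train, test = subsample_split(list(range(3258)))
```

Checking that the class distribution of the sub-sample matches the original, as the paper does, would be an additional step on top of this split.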
4. Training.

As mentioned above, in order to address flexibility issues, a so-called flexibility node or flexibility input (FI) was introduced in the stage 2 module of the TSNN architecture. It is assumed that different factories will be interested in rejecting different types of bones, depending on who the final user of the food product is. An arbitrary grouping of certain categories was therefore made as an example (Table 2). In the present case only 9 classes are defined, but more classes can be defined at design time, in any combinations that may be useful to different food factories. The flexibility input was then computed as a value between 0 and 1 according to the number of flexibility classes defined.

Table 2  Flexibility input node and associated flexibility classes

No  FI     Description of flexibility class
1   0      ALL - B1, B2, B3, B4, B5, B6, B7
2   0.125  HALF - B4, B5, B6, B7
3   0.250  B1
4   0.375  B2
5   0.500  B3
6   0.625  B4
7   0.750  B5
8   0.875  B6
9   1      B7

(1 = this category of bones has to be rejected; 0 = this category of bones is not important for the rejection process)
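Under the even spacing shown in Table 2 (a step of 0.125 for the 9 classes defined), the flexibility input can be computed from the class number as:

```python
def flexibility_input(class_no, n_classes=9):
    """Map a flexibility class number (1..n_classes) to the flexibility
    input value in [0, 1], evenly spaced as in Table 2 (step 0.125 for
    the 9 classes defined in the paper)."""
    if not 1 <= class_no <= n_classes:
        raise ValueError("unknown flexibility class")
    return (class_no - 1) / (n_classes - 1)
```

Defining more classes at design time only changes `n_classes`; the spacing of the FI values adjusts accordingly.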
All modules of TSNN stage 1 were trained using the standard back-propagation (BP) learning algorithm with a dynamically changing learning rate and momentum. The learning parameters were chosen experimentally, and the best values were used for the final results. The same learning algorithm was used to train the SMLP architecture. The raw data was first normalised in order to avoid saturation of the architectures. Generalisation is a very important issue, which means that the NN must not be over-trained. Training the stage 2 module within the TSNN architecture is slightly different, in the sense that here we were only interested in obtaining the best possible solution and actually expected over-training: in this case the network has to learn all the training patterns, and any other patterns are irrelevant. This network was therefore trained until the best number of recognised training patterns was obtained. The stage 2 module was intended to behave like a voter. Its vote depends on the outputs of the stage 1 modules and on the flexibility input chosen at run-time by the end-user; the stage 2 output, computed from the outputs of the stage 1 modules, is the global output of the TSNN architecture. In the ideal case the best performance of the network on the training patterns is 100%, meaning that the network has learned all the presented patterns. The BP learning algorithm does not guarantee a solution to the problem, so a performance of 100% is not realistic (the best rate obtained was around 96.3%).

5. Testing.

Both architectures were tested by computing their percentage of correctly recognised test patterns (the sum of true positives and true negatives), their false rejection rate (false positives) and their miss rate (false negatives). Both architectures were tested using the same test file, in order to make a valid comparison between their performances.
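The stage-1 training rule described in section 4, standard back-propagation with momentum and a dynamically changing learning rate, can be sketched as a weight update plus a learning-rate adaptation step. The grow/shrink factors below are illustrative assumptions; the paper does not state the exact adaptation rule used.

```python
import numpy as np

def bp_update(w, grad, velocity, lr, momentum=0.9):
    """One back-propagation weight update with a momentum term:
    v <- momentum * v - lr * grad;  w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def adapt_lr(lr, err, prev_err, grow=1.05, shrink=0.7):
    """Dynamically change the learning rate: increase it slightly while
    the training error keeps falling, cut it back when the error rises.
    (Illustrative factors; not the values used in the paper.)"""
    return lr * grow if err < prev_err else lr * shrink

# One update step on a single weight, starting from zero velocity.
w, v = bp_update(np.array([1.0]), np.array([0.5]), np.zeros(1), lr=0.1)
```

The momentum term is what lets the update build speed along consistent gradient directions, while the adaptation step guards against a learning rate that has grown too large.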
For the SMLP there are no flexibility issues to be taken into account, as opposed to the TSNN.

6. Results.

The results from both architectures are summarised in Table 3. Three classes are highlighted for which the performance drops below 90%: B1, B2 and B4. All other flexibility categories proved to have a performance over 90%, comparable with the SMLP. The results showed a slight improvement for the TSNN in the 'ALL' flexibility class, the only class common to both architectures. The only problems with the TSNN are flexibility classes B1 and B2, where the percentage of correctly recognised patterns is quite low (73% and 80%). This can be explained by the fact that even though the modules for B1 and B2 (stage 1 of the TSNN) work properly, with a performance over 96%, the second stage does not work as planned for those flexibility classes.

Table 3  Correctly classified patterns for both architectures
Flexibility class performance [%]

Stage 1 TSNN, correctly classified patterns:
  B3: 95.40   B4: 96.86   B5: 92.46   B6: 99.11   B7: 99.10

Class   False rejection rate   Misses rate   Correctly classified patterns
ALL     0.93                   25.64         94.53
HALF    0.46                   19.08         88.00
B1      1.62                    3.38         94.37
B2      0.93                   12.59         87.45
B3      0.46                    8.97         87.85
B4      0.23                    6.91         90.02
B5      0                       0            95.00
B6      0.93                    1.34         92.3
B7      7.64                    0.36         90.33
Nevertheless, overall the TSNN shows an improvement over the SMLP. Even considering only the run-time flexibility, which is not present in the SMLP, the advantage lies with the TSNN. Looking at the stage 1 results, it can be seen that overall the performance is over 95%. For the majority of the modules this performance is reduced when all modules work in parallel; only modules B6 and B7 show an improvement. The explanation can only be that the stage 2 module introduces larger errors than expected. A comparison between the results is shown in Figure 6.
Fig. 6 Comparison between performance results
Regarding the false rejection rate (false positives), the results are quite different from the correctly recognised pattern results. All false rejection rates are larger for the TSNN architecture than for the SMLP. Although the difference is not very large, this is one of the shortcomings of the TSNN approach. The results are displayed in Figure 7. Quite different results were obtained for the miss rates (Figure 8): the TSNN architecture seems to have specialised in recognising bone data rather than non-bone data. These results were not expected. Our expectation was that the false rejection rates would be almost the same for both architectures, but it seems that the second stage of the TSNN led to a certain increase in those values. This can only be explained by some large errors introduced by stage 2 of the TSNN architecture, which materialised as false rejections. It seems that stage 2 actually reverses the recognition effect of stage 1: non-bone data recognised by stage 1 is then reported as bone by stage 2 due to internal errors.
Fig. 7 Comparison between false rejection rates
Fig. 8 Comparison between misses rates
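The three rates compared in Figures 6-8 can be computed from the confusion counts as follows; the counts in the example are made up purely for illustration.

```python
def classification_rates(tp, tn, fp, fn):
    """The three rates reported in section 5, as percentages of all
    test patterns: correctly classified (TP + TN), false rejections
    (FP: good meat flagged as bone) and misses (FN: bones passed)."""
    total = tp + tn + fp + fn
    return (100.0 * (tp + tn) / total,
            100.0 * fp / total,
            100.0 * fn / total)

# Illustrative counts only, not figures from the paper.
correct, false_rejection, misses = classification_rates(tp=95, tn=850,
                                                        fp=30, fn=25)
```

Note that the three rates do not sum to 100% in general; the remainder is split between the two error types, which is exactly the detection vs. wastage trade-off discussed in the conclusions.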
7. Conclusions.

Overall, for the problem in question, the neural network approach proved quite satisfactory. Although the performance obtained was not very high (around 90%), it appears acceptable to the food industry, and it is a step forward from the traditional way of detecting foreign bodies currently used in that industry (manual detection of bones has a performance of around 80%). IMS first used a lookup table as the classifier, and then an SMLP architecture. The lookup-table approach had the disadvantage that it does not tolerate noise in the input data, and generalisation is not possible. This is not the case with our approach, where the system inherits the characteristics of a neural network, such as graceful degradation in the presence of noise, and generalisation.
Fig. 9 Comparison between TSNN and SMLP detection/false rejection for the ALL flexibility category

The results showed another interesting aspect: there is a correspondence between the false rejection rate and the total number of correctly recognised patterns (detection). As shown in Figure 9, increasing the false rejection rate by increasing the corresponding error leads to an increase in detection. There is therefore a trade-off between minimal meat wastage and high rejection of bone-contaminated meat. Every end-user should be able to adjust the level of detection (the errors involved in recognition, or false rejections) so that the optimal trade-off is achieved. An important caveat, however, is that all experiments reported here were done under laboratory conditions. Even though IMS currently uses an SMLP-based system for real-time x-ray inspection, all sorts of factors currently under investigation may affect the performance of the TSNN architecture.

The performance of the TSNN can be considered better than that of the SMLP from the detection point of view, and the TSNN also provides the end-user with the option of choosing at run-time the flexibility class or classes of importance. When the TSNN is set to perform total rejection of bones (just like the SMLP behaviour), the performance obtained is higher than that of the SMLP. In the present work only 9 flexibility categories were defined, but the number may be increased according to the needs of the end-users. Even so, the performance is quite similar to that of the SMLP, the only exceptions being flexibility classes B1 and B2. By increasing the performance of the stage 2 module (the voter), this problem can be solved, or at least reduced, for example by choosing a better learning algorithm and optimal parameters for that algorithm.

The architectures' performance can be improved in several ways. One is to look more carefully at the raw data: in the present work the presence of redundancies in the input data was not addressed, and a better normalisation procedure could improve the results, as could a better generation of the training set for the TSNN stage 2 module. A "self-teaching" controller could be added to address further flexibility issues, possibly in an adaptive way (reinforcement learning); such a controller could learn from the input data presented to the NN architecture which kinds of bones appear frequently at a given industrial location, and try to improve the performance for those types. A new module could also be inserted in TSNN stage 1 to take care of non-bone data: it would be trained only to recognise non-bone data and would act as a safety guard for all the other modules. If all the other modules report the input data as bone but this module reports it as non-bone, only its response would be taken into consideration by stage 2. Finally, the process of choosing the optimal network parameters could be improved with the aid of genetic programming. It is therefore our strong opinion that the TSNN architecture is a viable solution for the detection problem, and that with the improvements suggested above its performance could be enhanced to satisfy even the highest demands of the food industry.

References

1. Graves, M., X-ray Machine Vision for on-line Quality Control in Food Processing, University of Cardiff, 1999
2. Chen, C., Statistical Pattern Recognition, Hayden, Washington, 1973
3. Faruq, A., Closed pack contents inspection without x-rays, Sensor Review, vol. 11, 1991
4. Friddell, System for radiographically inspecting an object using backscattered radiation and related method, US Patent no. 4,974,247, 1990
5. Graves, M. et al., Approaches to foreign body detection in foods, Trends in Food Science and Technology, vol. 9, January 1998
6. Heiland et al., Automated excision of undesirable material and production of starting material for restructured meat, US Patent no. 5,256,102, 1993
7. Koch, J. et al., Bone detector, US Patent no. 5,847,382, 1998
8. Lawson, S.W., Defect Detection in Industrial Radiographic and Ultrasonic Images, PhD thesis, University of Surrey, Guildford, UK
9. McKenna, S.J. et al., Using a two-stage artificial neural network to detect abnormal cervical cells from their frequency domain image, Proc. of Neural Computing Research and Applications: Part Two, Queen's University of Belfast, Northern Ireland, 25-26 June 1992
10. Meyn, C., Method and apparatus for examining food products by means of irradiation, US Patent no. 5,026,983, 1991
11. Papanocolopoulos, C.D. et al., X-ray monitoring system, US Patent no. 5,428,657, 1995
12. Ramsay, J.D. et al., Method and apparatus for the detection of foreign material in food substances, US Patent no. 3,995,164, 1976
13. Rumelhart, D. et al., Learning internal representations by error propagation, in Parallel Distributed Processing, vol. 1, MIT Press, Cambridge, MA, pp. 318-362, 1986
14. Smith et al., Apparatus for detecting the presence of hard solid particles in a body of softer solid substance, US Patent no. 3,736,583, 1971
15. Smith et al., Apparatus for inspecting bodies for the presence of hard particles, US Patent no. 3,777,886, 1972