Neural Comput & Applic (2009) 18:135–140 DOI 10.1007/s00521-007-0166-y

ORIGINAL ARTICLE

A genetic algorithm-based artificial neural network model for the optimization of machining processes

D. Venkatesan · K. Kannan · R. Saravanan

Received: 5 September 2006 / Accepted: 20 December 2007 / Published online: 15 January 2008 © Springer-Verlag London Limited 2008

Abstract Artificial intelligence tools like genetic algorithms, artificial neural networks (ANNs) and fuzzy logic are found to be extremely useful in modeling reliable processes in the field of computer-integrated manufacturing (for example, selecting optimal parameters during process planning, design, and implementation of adaptive control systems). When knowledge about the relationships among the various parameters of manufacturing is lacking, ANNs are used as process models, because they can handle strong nonlinearities, a large number of parameters and missing information. When the dependencies between parameters become non-invertible, the input and output configurations used in the ANN strongly influence the accuracy. However, running a neural network is time consuming. If genetic algorithm-based ANNs are used to construct models, they can provide more accurate results in less time. This article proposes a genetic algorithm-based ANN model for the turning process in the manufacturing industry. The model is found to be time saving while satisfying all the accuracy requirements.

D. Venkatesan
Department of Computer Science, Shanmugha Arts Science and Technology Research Academy (SASTRA), Thanjavur 613402, Tamilnadu, India
e-mail: [email protected]

K. Kannan
Department of Mathematics, Shanmugha Arts Science and Technology Research Academy (SASTRA), Thanjavur 613402, Tamilnadu, India
e-mail: [email protected]

R. Saravanan (corresponding author)
Department of Mechatronics Engineering, Kumaraguru College of Technology, Coimbatore 641006, Tamilnadu, India
e-mail: [email protected]

Keywords Genetic algorithm · Turning process · Neural networks · Machining parameters · Turning operations

Abbreviations
GA   Genetic algorithm
ANN  Artificial neural network
BPN  Back-propagation network

1 Introduction

Modeling methods can be used in several fields of production engineering, e.g., planning, optimization and control. The difficulties in modeling manufacturing processes are manifold: to name a few, the great number of different machining operations; the multidimensional, nonlinear, stochastic nature of machining; partially understood relations between parameters; and a lack of reliable data. One way to overcome such difficulties is to implement fundamental models based on the principles of machining science. However, in spite of the progress made in fundamental process modeling, accurate models are not yet available for manufacturing processes. Heuristic models are usually based on rules of thumb gained from experience and are used for qualitative evaluation of decisions. Empirical models derived from experimental data still play a major role in manufacturing process modeling. Artificial neural networks (ANNs) can be used as operation models, because they can handle a high level of nonlinearity, a large number of parameters and missing information. Based on their inherent learning capabilities, ANNs can adapt themselves to changes in the production


environment and can also be used when there is no exact knowledge about the relationships among the various parameters of manufacturing. There have been numerous theoretical and experimental studies of manufacturing processes, and some process models are extremely important in different fields of computer-integrated manufacturing. Most authors have constructed ANN-based models. Genetic algorithms (GAs) are a class of search algorithms modeled on the process of natural evolution. They have been shown in practice to be very effective at function optimization, efficiently searching large and complex (multimodal, discontinuous, etc.) spaces to find nearly global optima. The search space associated with a neural network weight-selection problem is also large and complex in nature. This article proposes a GA-based ANN model for the turning process in manufacturing industries.

This article is arranged as follows. In Sect. 2, an overview of ANNs and GAs is presented. A brief sketch of the turning process in machining is given in Sect. 3. In Sect. 4, a brief literature survey of ANN-based process control models and neural network weight selection by GA is presented. The proposed GA-based ANN model for the turning process is explained in Sect. 5. In Sect. 6, pseudocode for the algorithm is given. Results and performance of the algorithm are discussed in Sect. 7.

2 An overview of neural networks and genetic algorithm

2.1 Artificial neural networks

Neural networks generally consist of five components:

1. A directed graph, known as the network topology, whose nodes represent the neurodes (also called processing elements) and whose arcs represent connections.
2. A state variable associated with each neurode.
3. A real-valued weight associated with each connection.
4. A real-valued bias associated with each neurode.
5. The state of each neurode, f[Σ_i w_i x_i − b], where f is the transfer function, b is the bias of the neurode, w_i are the weights on the incoming connections, and x_i are the states of the neurodes at the other end of those connections.

Mathematically, a three-layer neural network with i input nodes, j hidden nodes and k output nodes is expressed as

O_pk = f1 [ Σ_{j=1}^{L} W^o_jk f2 ( Σ_{i=1}^{n} W^h_ij x_pi ) ]        (1)

where O_pk is the output from the kth node of the output layer of the network for the pth vector (data point); x_pi are the inputs to the network for the pth vector (data point); W^o_jk is the connection weight between the jth node of the hidden layer and the kth node of the output layer; W^h_ij is the connection weight between the ith node of the input layer and the jth node of the hidden layer; and f1 and f2 are activation functions.

Thus, ANNs are highly parallel systems that process information through many interconnected units that respond to inputs through modifiable weights, thresholds and mathematical transfer functions. The basic procedure for training a network is embodied in the following steps:

1. Apply an input vector to the neural network and calculate the corresponding output value.
2. Compare the actual output with the correct output and determine the measure of error.
3. Determine in which direction (positive or negative) each weight should change to reduce the error.
4. Determine the amount by which each weight should change.
5. Apply the corrections to the weights.
6. Repeat steps 1–5 with all training vectors in the training set until the error is reduced to an acceptable value.
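As a concrete illustration of Eq. (1), a minimal forward pass can be sketched in Python with NumPy. The 5-5-4 layout matches the BPN used later in the article, and the tan sigmoid (tanh) activation is taken from Table 1; bias terms are omitted here as a simplifying assumption, and the random weights are purely illustrative:

```python
import numpy as np

def forward(x, W_h, W_o, f1=np.tanh, f2=np.tanh):
    """Three-layer network of Eq. (1): O_pk = f1(sum_j W_o[j,k] * f2(sum_i W_h[i,j] * x_i))."""
    hidden = f2(x @ W_h)      # inner sum over input nodes i
    return f1(hidden @ W_o)   # outer sum over hidden nodes j

rng = np.random.default_rng(0)
W_h = rng.uniform(-1.0, 1.0, size=(5, 5))   # input-to-hidden weights W^h_ij
W_o = rng.uniform(-1.0, 1.0, size=(5, 4))   # hidden-to-output weights W^o_jk
y = forward(rng.uniform(size=5), W_h, W_o)
print(y.shape)  # (4,)
```

Training (steps 1–6 above) would adjust `W_h` and `W_o` to reduce the output error; in this article that adjustment is delegated to a GA rather than to gradient descent alone.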

1. A way of coding solutions to the problem on chromosomes.
2. An evaluation function that returns a rating for each chromosome given to it.
3. A way of initializing the population of chromosomes.
4. Operators that may be applied to parents when they reproduce to alter their genetic composition; the standard operators are mutation and crossover.
5. Parameter settings for the algorithm, the operators, and so forth.

Given these five components, a GA operates according to the following steps:

1. Initialize the population using the initialization procedure and evaluate each member of the initial population.
2. Reproduce until a stopping condition is met; reproduction consists of iterations of the following steps:
   (a) Choose one or more parents to reproduce; selection is stochastic, but the individuals with the highest evaluations are favored in the selection.
   (b) Choose a genetic operator and apply it to the parents.
   (c) Evaluate the children and accumulate them into a generation. After accumulating enough individuals, insert them into the population, replacing the worst current members of the population.

When the components of the GA are chosen appropriately, the reproduction process continually improves the population, converging finally on solutions close to a global optimum. GAs can efficiently search large and complex (i.e., possessing many local optima) spaces to find nearly global optima.
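The five components and the reproduce–evaluate–replace loop above can be sketched on a toy problem. This is a minimal illustration, not the paper's GA: the encoding (a real number in [0, 10]), the evaluation function, the averaging crossover and the Gaussian mutation are all our own placeholder choices.

```python
import random

def evaluate(x):                      # component 2: evaluation function
    return -(x - 3.0) ** 2            # maximized at x = 3

def init_population(n):               # component 3: initialization
    return [random.uniform(0, 10) for _ in range(n)]

def select(pop):                      # step (a): stochastic, favors high evaluations
    a, b = random.sample(pop, 2)
    return a if evaluate(a) > evaluate(b) else b

def reproduce(p1, p2):                # step (b): crossover plus occasional mutation
    child = (p1 + p2) / 2.0
    if random.random() < 0.1:
        child += random.gauss(0, 0.5)
    return child

random.seed(1)
pop = init_population(20)
for _ in range(100):                  # step 2: reproduce until the stopping condition
    children = [reproduce(select(pop), select(pop)) for _ in range(20)]
    pop = sorted(pop + children, key=evaluate)[-20:]  # step (c): replace worst members
best = max(pop, key=evaluate)
print(round(best, 2))
```

With these settings the population converges near the optimum x = 3, illustrating how selection pressure plus crossover and mutation improve the population generation by generation.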

3 Turning process in machining for computer-integrated manufacturing

The turning process is described by the following parameters [1]:

1. The setting of the machine is handled through the following three machining parameters:
   • Depth of cut: a (mm)
   • Feed: f (mm/rev)
   • Speed: v (m/min)
2. The tool is represented by the following three parameters:
   • Cutting edge angle: x (rad)
   • Corner radius: rc (mm)
   • Tool life: T (min)
3. The following two monitoring parameters can be used for the turning operation:
   • Force: Fa (N) (main force component)
   • Power: P (kW)

4. The customer demand is determined by the required
   • Roughness: Ra (mm)

For simulating the turning process, the task is the estimation of the cutting conditions once the tool and machining parameters are selected: the produced roughness, the monitoring parameters and the tool life have to be estimated.

4 Survey of ANN-based process control models and neural network weight selection by GA

Satisfying various requirements in different levels and stages of machining using one general ANN-based process model was presented by Monostori et al. [1]. An interesting example was presented by Knapp and Wang [2], who used an ANN in process planning; the goal of that research was to generate the operation order. Cutting tool selection was realized by Dini [3]: the inputs of the ANN are the machining type, cutting conditions, clamping type, workpiece material and slenderness; the outputs are five parameters identifying the cutting tool. To generate an optimum set of process parameters at the design stage of injection molding, Choi et al. [4] used an ANN model with inputs of filling time, melt temperature, holding time, coolant temperature and packing pressure, and with outputs of melt temperature difference, mold temperature difference, over-packed element, sink index, and average and variance of linear shrinkage. The compensation of thermal distortion was the goal of Hatamura et al. [5]; parameters from deformation sensors were on the input side of the ANN used to decide whether cooling, heating or no intervention is necessary. Monostori [6] described models to estimate and classify tool wear; the paper presents several variable input–output configurations of ANN models according to the various tasks. A model for creep-feed grinding of aluminium with diamond wheels was presented by Liao and Chen [7]; in their ANN model, bond type, mesh size, concentration, work speed and depth of cut are used as the inputs, while surface finish, normal grinding force per unit width and grinding power per unit width are used as the outputs. Automatic input–output configuration and generation of ANN-based process models and its application in machining were presented by Viharos and Monostori [8]. Montana [9] described a procedure for neural network weight selection using GAs. Seiffert [10] presented a procedure for multiple-layer perceptron training using GAs. Training feedforward neural networks with a modified GA was presented by Abu-Al-Nadi [11].

5 Proposed simulated model for turning process

In a practical implementation, sensors, machine controllers and computers would provide part of an ANN operation model. For simulating the machining process, all information is generated through theoretical models. It should be stressed that, in a practical implementation, theoretical models are not necessary; they are used in the present case only to provide simulated samples for training and testing purposes. Four equations are used to create the data vectors. The validity of the equations is determined by the minimum and maximum boundaries of the parameters [8].


Fa = 1,560 f^0.76 a^0.98 [sin(x)]^0.22        (2)

P = 0.039 f^0.79 a v                          (3)

T = (1.85 × 10^10) f^−0.7 a^−0.42 v^−3.85     (4)

Ra = 8.5 f^1.8 a^0.08 v^−0.9 rc^−0.5          (5)

The ranges of the variables in (2)–(5) are as follows:

f: 0.1–0.4 (mm/rev)
a: 1–4 (mm)
x: 1.3–1.66 (rad)
v: 75–200 (m/min)
rc: 0.4–1.2 (mm)
T: 5–60 (min)
Fa: 800–3,000 (N)
P: 3.8–13.5 (kW)
Ra: 0.0015–0.023 (mm)

With the help of these strongly nonlinear equations, values for tool life, force, power and roughness can be calculated from the tool and machining parameters. To create parameter sets for learning and testing, one hundred random values were determined separately in the allowed range of the input and output variables. First, a back-propagation network (BPN) is used to train and inference the data. Since five input variables and four output variables are used in the problem, we have used a BPN with five input nodes, one hidden layer with five nodes, and four output nodes. The number of iterations and the error that occurred were noted down. Next, a GA-based BPN is used to inference the data. Here, the GA was used to determine the optimum weights of the BPN; after obtaining the optimized weights by the GA, they were applied to the BPN algorithm to inference the data. Again, the number of iterations and the error were noted down. The parameter set used in our algorithm is listed in Table 1.

Table 1 Standard parameter set used for training and inferencing

Parameter                          Value
Transfer function of the neurons   Tan sigmoid
Momentum factor                    0.5
Learning coefficient               0.05
Threshold value                    0.0
Sigmoidal gain                     1.0
Encoding                           Real (decimal)
Chromosome length                  225
Population size                    225
Weight initialization routine      Random
Stopping criterion                 Max iterations = 3,000 or error = 0.005
Fitness normalization              Rank
Selection operation                Rank
Crossover                          Two-point, Pc = 0.9
Mutation                           Pm = 0.01
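The sample-generation step described above can be sketched as follows. Note a caveat: the minus signs on the exponents in Eqs. (4) and (5) are reconstructions (they were lost in the scanned text); negative exponents on f, a and v in Eq. (4) and on v and rc in Eq. (5) are assumed here because they are what the stated output ranges for T and Ra imply.

```python
import math
import random

def turning_outputs(f, a, v, x, rc):
    """Simulated responses from Eqs. (2)-(5); exponent signs as reconstructed above."""
    Fa = 1560 * f**0.76 * a**0.98 * math.sin(x)**0.22   # main force (N), Eq. (2)
    P  = 0.039 * f**0.79 * a * v                        # power (kW), Eq. (3)
    T  = 1.85e10 * f**-0.7 * a**-0.42 * v**-3.85        # tool life (min), Eq. (4)
    Ra = 8.5 * f**1.8 * a**0.08 * v**-0.9 * rc**-0.5    # roughness (mm), Eq. (5)
    return Fa, P, T, Ra

random.seed(0)
samples = []
for _ in range(100):                 # one hundred random vectors, as in the text
    f  = random.uniform(0.1, 0.4)    # feed (mm/rev)
    a  = random.uniform(1.0, 4.0)    # depth of cut (mm)
    v  = random.uniform(75.0, 200.0) # speed (m/min)
    x  = random.uniform(1.3, 1.66)   # cutting edge angle (rad)
    rc = random.uniform(0.4, 1.2)    # corner radius (mm)
    samples.append(((f, a, v, x, rc), turning_outputs(f, a, v, x, rc)))

Fa, P, T, Ra = samples[0][1]
print(len(samples))  # 100
```

These 100 input–output pairs play the role of the training and testing sets; in the paper's setup, the five inputs (f, a, v, x, rc) feed the BPN and the four outputs (Fa, P, T, Ra) are its targets.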

5.1 GA-based weight calculation operations

5.1.1 Encoding

Here, we have used a real (decimal) encoding method for representing the chromosomes. Since we have five input nodes, five hidden nodes and four output nodes, the total number of weights to be calculated is 45, each represented by five digits; so we have a chromosome length of 225. An initial population of 225 chromosomes was generated randomly. For example, a single chromosome is as follows:

626286089550031829732163420101134186014144348001480458464312313813001770952928127219912646118754197603166642952183092405616595327301361139091106004241829742382012397033222005461725013439096741741935137032153842333479269924331

5.1.2 Weight extraction

Weights are extracted with the help of the following equation:

W_k = + (X_{kd+2} 10^{d−2} + X_{kd+3} 10^{d−3} + ... + X_{(k+1)d}) / 10^{d−2},  if 5 ≤ X_{kd+1} ≤ 9
W_k = − (X_{kd+2} 10^{d−2} + X_{kd+3} 10^{d−3} + ... + X_{(k+1)d}) / 10^{d−2},  if 0 ≤ X_{kd+1} < 5        (6)

For example, suppose gene 0 is 84321, with k = 0 and d = 5; then

W0 = +[(4 × 10^3) + (3 × 10^2) + (2 × 10) + 1] / 10^3 = +4.321

Suppose gene 1 is 46234, with k = 1 and d = 5; then

W1 = −[(6 × 10^3) + (2 × 10^2) + (3 × 10) + 4] / 10^3 = −6.234

In this way, weight values in the range −9 to +9 are generated.

5.1.3 Fitness generation

A fitness value for each chromosome is calculated using the root mean square of the errors:

E = sqrt( Σ Ei / N )        (7)

The fitness value Fi for each individual string of the population is then calculated as

Fi = 1 / E        (8)

5.1.4 Reproduction operator

Reproduction selects good strings in a population and forms a mating pool; the reproduction operator is therefore also called a selection operator. In this work, rank-order selection is used. A lower-ranked string has a lower fitness value (that is, a higher objective function value) and vice versa. The probability of selecting a string is calculated using (9):

Expected value of probability = Min + (Max − Min)(rank(i, t) − 1)/(N − 1)        (9)

where N is the sample size, Min = 0.9 and Max = 1.1.

5.1.5 Crossover

We have used a two-point crossover operation with a probability of 0.9. Two parent chromosomes are selected randomly, and another random number is generated; if it is less than 0.9, the crossover operation is applied to generate the new offspring. For the crossover, two random positions are selected and the substrings between the two positions are exchanged between the two parents to produce the new offspring.

5.1.6 Mutation

With a mutation probability of 0.01, a selected digit in the string is replaced by its next digit. A whole new population of possible solutions to the problem is generated by selecting the best (highest-fitness) individuals from the current generation. This new generation contains characteristics that are better than those of its ancestors. Progressing in this way, after many generations, owing to the mixing and exchange of good characteristics, the entire population inherits the best characteristics and therefore turns into a set of fit solutions to the problem. If the GA has been designed well, the most promising areas of the search space are explored, and the population converges to an optimal solution to the problem.

6 The algorithm of the GA-based weight calculation method

1. i ← 0
2. Generate the initial population Pi of real-coded chromosomes Ci^j, each representing a weight set for the BPN.
3. While the current population Pi has not converged:
   begin
   3.1 Generate a fitness value Fi^j for each Ci^j as follows:
       begin
       Extract the weights Wi from Ci^j with the help of equation (6);
       Keeping Wi as a fixed weight setting, train the BPN for the N input instances;
       Calculate the error Ei for each input instance using

           Ei = Σ_j (Tji − Oji)^2        (10)

       where Oi is the output vector calculated by the BPN and Ti is the target vector;
       Find the root mean square E of the errors Ei, i = 1, 2, ..., N:

           E = sqrt( Σ Ei / N )        (11)

       Calculate the fitness value Fi for each individual string of the population as

           Fi = 1 / E        (12)
       end
   3.2 Get the mating pool ready by eliminating the worst-fit individuals and duplicating the high-fit individuals;
   3.3 Using the crossover and mutation operations, reproduce offspring from the parent chromosomes;
   3.4 i ← i + 1;
   3.5 Call the current population Pi;
   3.6 Calculate fitness values Fi^j for each Ci^j
   end
4. Extract the weights from Pi to be used by the BPN.
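The weight-calculation operations of Sect. 5.1 can be sketched in Python as follows. This is a sketch under stated assumptions: the 5-5-4 forward pass (`predict`) is our own placeholder BPN without biases, the demonstration data are illustrative, and only the operators themselves are shown, not the full convergence loop of Sect. 6.

```python
import math
import random

D, N_WEIGHTS = 5, 45                 # 5 digits per weight; 5*5 + 5*4 = 45 weights
CHROM_LEN = D * N_WEIGHTS            # 225, matching Table 1

def extract_weights(chrom):
    """Eq. (6): digit 1 of each gene gives the sign, digits 2..5 the magnitude."""
    ws = []
    for k in range(N_WEIGHTS):
        gene = chrom[k * D:(k + 1) * D]
        mag = int(gene[1:]) / 10 ** (D - 2)        # e.g. '84321' -> 4.321
        ws.append(mag if int(gene[0]) >= 5 else -mag)
    return ws

def predict(w, x):
    """Placeholder 5-5-4 tanh forward pass standing in for the BPN."""
    h = [math.tanh(sum(x[i] * w[i * 5 + j] for i in range(5))) for j in range(5)]
    return [math.tanh(sum(h[j] * w[25 + j * 4 + k] for j in range(5))) for k in range(4)]

def fitness(chrom, data):
    """Eqs. (7)-(8) / (10)-(12): F = 1/E, with E the RMS of the squared errors."""
    w = extract_weights(chrom)
    errs = [sum((t - o) ** 2 for t, o in zip(targets, predict(w, x)))
            for x, targets in data]
    return 1.0 / math.sqrt(sum(errs) / len(errs))

def selection_probability(rank, n, lo=0.9, hi=1.1):
    """Eq. (9): expected selection value for the string of the given rank."""
    return lo + (hi - lo) * (rank - 1) / (n - 1)

def two_point_crossover(p1, p2):
    """Sect. 5.1.5: swap the substrings between two random cut points."""
    i, j = sorted(random.sample(range(CHROM_LEN), 2))
    return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

def mutate(chrom, pm=0.01):
    """Sect. 5.1.6: with probability pm, replace a digit by its next digit."""
    return ''.join(str((int(c) + 1) % 10) if random.random() < pm else c
                   for c in chrom)

random.seed(0)
print(extract_weights('84321' + '46234' + '0' * (CHROM_LEN - 10))[:2])  # [4.321, -6.234]
```

The final line reproduces the two worked examples of Sect. 5.1.2 (genes 84321 and 46234 decode to +4.321 and −6.234). Wiring these operators into the generational loop of Sect. 6 yields the GA-optimized weight set that is then handed to the BPN.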


Fig. 1 Comparison of the error values for a fixed number of iterations

Fig. 2 Comparison of the number of iterations for a fixed error value

7 Discussion of the results

Of all the networks constructed with one hidden layer, various numbers of hidden nodes and various sets of activation functions, it is found that five hidden nodes with the tan sigmoid activation function and weight optimization by the GA saves the most time and iterations. First, the simulated model is used for 100 random inputs within the boundaries to find the actual outputs. For the same set of inputs, the GA-based BPN and the BPN are used to calculate the outputs, and the errors are plotted in Figs. 1 and 2. It is observed that the GA-based BPN method provides more accurate results in a smaller number of iterations.

8 Conclusions

There are a number of real-world problems that require soft computing techniques like GAs to provide good results, whether owing to intractability of the mathematical functions or to a high degree of nonlinearity. In such instances, a GA is a good initiator for some other soft computing tool to capture such nonlinearities. Here, the GA is used as a complementary tool for weight optimization, which in turn enables the ANN to perform better and obtain accurate results, though only a marginal amount of time saving is achieved. Work on using advanced genetic operators for further optimization is ongoing.


References

1. Monostori L, Viharos ZsJ, Markos S (2000) Satisfying various requirements in different levels and stages of machining using one general ANN-based process model. J Mater Process Technol 107:228–235
2. Knapp GM, Hsu-Pin W (1992) Acquiring, storing and utilizing process planning knowledge using neural networks. J Intell Manuf 3(5):333–344
3. Dini G (1995) A neural approach to the automated selection of tools in turning. In: Proceedings of the second AITEM conference, Padova, Italy, 18–20 September 1995, pp 1–10
4. Choi GH, Lee KD, Chang N, Kim SG (1994) Optimization of the process parameters of injection molding with neural network application in a process simulation environment. CIRP Ann 43(1):449–452
5. Hatamura Y, Nagao T, Kato KI, Taguchi S, Okumura T, Nakagawa G, Sugishita H (1993) Development of an intelligent machining center incorporating active compensation for thermal distortion. CIRP Ann 42(1):549–552
6. Monostori L (1993) A step towards intelligent manufacturing: modeling and monitoring of manufacturing processes through artificial neural networks. CIRP Ann 42(1):485–488
7. Liao TW, Chen LJ (1994) A neural network approach for grinding processes: modeling and optimization. Int J Mach Tools Manuf 34(7):919–937
8. Viharos ZsJ, Monostori L (1999) Automatic input–output configuration and generation of ANN-based process models and its application in machining. In: Imam I, Kodratoff Y, El-Dessouki A, Ali M (eds) Proceedings of the XIIth international conference on industrial and engineering applications of artificial intelligence and expert systems, IEA/AIE-99, Cairo, Egypt, 1999. Springer, New York, pp 659–668
9. Montana DJ. Neural network weight selection using genetic algorithms. http://www.vishnu.bbn.com/papers/hybrid.com
10. Seiffert U (2001) Multiple layer perceptron training using genetic algorithms. In: Proceedings of the 9th European symposium on artificial neural networks (ESANN 2001), Bruges, Belgium, 25–27 April 2001. D-Facto, Evere, Belgium, pp 25–27
11. Abu-Al-Nadi DI. Training feedforward neural networks with a modified genetic algorithm. http://www.ines-conf.org/ines-conf/2004list.htm
D-Facto, Evere, Belgium, pp 25–27 11. Abu-Al-Nadi DI Training feedforward neural networks with a modified genetic algorithm. http://www.ines-conf.org/ines-conf/ 2004list.htm