
J Intell Manuf (2008) 19:383–396 DOI 10.1007/s10845-008-0090-8

Design of neural network-based estimator for tool wear modeling in hard turning

Xiaoyu Wang · Wen Wang · Yong Huang · Nhan Nguyen · Kalmanje Krishnakumar

Received: 1 January 2007 / Accepted: 1 September 2007 / Published online: 25 January 2008 © Springer Science+Business Media, LLC 2008

Abstract Hard turning with cubic boron nitride (CBN) tools has been proven to be more effective and efficient than traditional grinding operations in machining hardened steels. However, rapid tool wear is still one of the major hurdles affecting the wide implementation of hard turning in industry. Better prediction of the CBN tool wear progression helps to optimize cutting conditions and/or tool geometry to reduce tool wear, which further helps to make hard turning a viable technology. The objective of this study is to design a novel but simple neural network-based generalized optimal estimator for CBN tool wear prediction in hard turning. The proposed estimator is based on a fully forward connected neural network with cutting conditions and machining time as the inputs and tool flank wear as the output. The extended Kalman filter algorithm is utilized as the network training algorithm to speed up the learning convergence. The network neuron connectivity is optimized using a destructive optimization algorithm. Besides performance comparisons with the CBN tool wear measurements in hard turning, the proposed tool wear estimator is also evaluated against a multilayer perceptron neural network modeling approach and/or an analytical modeling approach, and it has been proven to be faster, more accurate, and more robust. Although this neural network-based estimator is designed for CBN tool wear modeling in this study, it is expected to be applicable to other tool wear modeling applications.

Keywords Tool wear · Hard turning · Neural network · Extended Kalman filter · Connectivity optimization

X. Wang · Y. Huang (B)
Department of Mechanical Engineering, Clemson University, Clemson, SC 29634-0921, USA
e-mail: [email protected]

W. Wang
College of Mechanical and Energy Engineering, Zhejiang University, Hangzhou 310027, P.R. China

N. Nguyen · K. Krishnakumar
Intelligent Systems Division, NASA Ames Research Center, Moffett Field, CA 94035, USA

Introduction

The hard turning process is defined as the single-point turning of materials with hardness higher than 50 HRC under small feed and fine depth of cut conditions. It offers possible benefits over grinding in terms of lower equipment costs, shorter setup time, fewer process steps, greater part geometry flexibility, and the elimination of cutting fluid use (König et al. 1984; Tönshoff et al. 2000). Among the available tool materials, cubic boron nitride (CBN), second only to diamond in hardness and inert to steel, has been recommended as the best hard turning tool material and is widely used for hard turning operations. However, one of the major hurdles affecting the wide implementation of hard turning in industry is the severe wear of CBN tools. The cost of hard turning tools and the tool change downtime due to rapid tool wear can impact the economic viability of precision hard turning. For a given tool and workpiece combination, the ability to estimate the tool wear as a function of the cutting conditions, which include cutting speed, feed rate, and depth of cut, is critical to the overall optimization of a hard turning process.

Ideally, CBN tool wear in hard turning would be modeled using a model-driven approach. However, CBN tool wear mechanisms in hard turning are still not well understood (Huang et al. 2007), and as a result, the model-driven approach lacks robustness and accuracy.



As an alternative, a data-driven approach is usually favored to capture the tool wear progression. For example, artificial neural networks (ANNs) are widely implemented to model nonlinear dynamic systems, including the tool wear progression (Chryssoluouris and Guillot 1990; Das et al. 1996; Dimla and Lister 2000; Haber and Alique 2003). However, the modeling performance of an ANN-based estimator is usually not satisfactory if the network architecture is not properly selected, and how to design an efficient and effective ANN for tool wear modeling is still an open research topic. Even for the most widely implemented multilayer perceptron neural network (MLP), there are still no general rules for specifying the number of hidden layers, the number of neurons in each layer, and the network connectivity needed to achieve an optimized modeling effect. If an ANN is selected as the tool wear modeling approach, such challenges must be carefully addressed.

The objective of this study is to design a novel but easily implemented ANN-based generalized optimal estimator for CBN tool wear modeling in hard turning. The estimator is designed based on a fully forward connected neural network (FFCNN) and trained using the extended Kalman filter (EKF) algorithm, and the network connectivity is optimized using a destructive approach. The paper first introduces the theoretical background of the proposed estimator design and then evaluates the estimator's modeling performance based on two hard turning studies. The learning convergence speed, generalization capability, sensitivity, and training cost-benefit of the designed optimized fully forward connected neural network (optimized FFCNN) estimator are further discussed. Finally, some conclusions are drawn regarding the proposed estimator. Although the proposed estimator is developed for CBN tool wear modeling, the approach is expected to be applicable to other tool wear modeling applications as well.

Background

CBN tool wear in CBN hard turning

The cutting edge of a tool insert in machining is subject to a combination of high stresses, high temperatures, and possibly chemical reactions, which cause tool wear through one or several mechanisms. These mechanisms depend on the tool and workpiece material combination, the cutting geometry, the environment, and the mechanical and thermal loadings encountered. Different classifications of tool wear processes have been addressed in the literature. Basically, five wear mechanisms, or any combination of them, are involved in the tool wear progression: abrasion, adhesion, fatigue, dissolution/diffusion, and tribochemical processes. It is well accepted that tool wear in machining involves more than one mechanism, and it is difficult to predict the relative importance of any one of them (Huang et al. 2007).



Fig. 1 Typical tool wear picture in CBN hard turning (rake face and flank face shown; flank wear length VB indicated on the flank face; scale bar 100 µm)

Crater wear and flank wear are the most frequently reported wear patterns in machining, including hard turning. Crater wear is mainly caused by physical, chemical, and/or thermomechanical interactions between the rake face of the insert and the hot metal chip, while flank wear occurs primarily when the flank face rubs against the workpiece surface. The CBN tool flank wear length, or wearland (VB), as shown in Fig. 1, which is drawn based on a typical CBN tool wear observation (Dawson 2002), is generally regarded as the tool life criterion or as an important index for evaluating tool performance in hard turning (Takatsu et al. 1983; Abrao et al. 1995; Dewes and Aspinwall 1996), and the CBN tool flank wear is of interest in this study. The tool wear rate is assumed uniform across the width of cut, as shown in Fig. 1. The main wear mechanisms in CBN turning of hardened steels are generally considered to be a combination of abrasion, adhesion, and diffusion, and the contribution of each wear mechanism is related to the cutting conditions, the tool geometry, and the material properties of the tool and the workpiece as follows (Huang and Liang 2004a):

\frac{dVB}{dt} = \frac{(\cot\gamma + \tan\alpha)\,R}{VB\,(R - VB\tan\gamma)}\left[0.0295\,K\,\frac{P_a^{\,n-1}}{P_t^{\,n}}\,V_c\,VB\,\bar{\sigma} + 1.4761\times 10^{-14}\,e^{9.0313\times 10^{-4}T}\,V_c\,\bar{\sigma} + 5.7204\times 10^{6}\,V_c\,VB\,e^{-20460/(T+273)}\right] \quad (1)

where VB is the flank wear length, γ is the clearance angle, α is the rake/chamfer angle, R is the tool nose radius, P_a and P_t are the hardnesses of the abrasive particle and the tool respectively, K and n are known functions of P_t/P_a, V_c is the cutting speed, σ̄ is the average normal stress, and T is the temperature at the tool–workpiece interface.
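For reference, a direct transcription of Eq. (1) into code is sketched below. The stress σ̄ and temperature T would have to be supplied by separate stress and temperature models, so no representative numerical values are asserted here; all inputs are placeholders and the function is purely structural.

```python
import numpy as np

def flank_wear_rate(VB, gamma, alpha, R, K, n, Pa, Pt, Vc, sigma_bar, T):
    """dVB/dt as in Eq. (1): abrasion + adhesion + diffusion terms, each scaled
    by the tool-geometry factor.  sigma_bar and T must come from accompanying
    stress/temperature models; units follow the source equation."""
    geometry = (1.0 / np.tan(gamma) + np.tan(alpha)) * R / (VB * (R - VB * np.tan(gamma)))
    abrasion = 0.0295 * K * (Pa ** (n - 1) / Pt ** n) * Vc * VB * sigma_bar
    adhesion = 1.4761e-14 * np.exp(9.0313e-4 * T) * Vc * sigma_bar
    diffusion = 5.7204e6 * Vc * VB * np.exp(-20460.0 / (T + 273.0))
    return geometry * (abrasion + adhesion + diffusion)
```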


Tool wear modeling in machining

Tool wear modeling has been studied by numerous researchers and can be classified into four main categories: the analytical model-based approach, which models tool wear as a function of cutting conditions, machining environment, tool geometry and properties, and/or workpiece properties (Usui et al. 1978; Kramer 1986; Huang 2002); the computational method-based approach, which applies the finite element method (FEM) to model the wear development process (Yen et al. 2004; Xie et al. 2005); the artificial intelligence (AI)-based approach, which includes ANN (Wang and Dornfeld 1992), fuzzy logic (Kuo and Cohen 1998), and support vector machines (Sun et al. 2004); and the parametric model-based approach, including the Taylor tool life model (Poulachon et al. 2001) and regression models (Ozel and Karpat 2005). With respect to CBN tool wear modeling in hard turning, the main endeavors include the analytical model-based approach (Huang and Liang 2004a,b; Huang and Dawson 2005), the AI-based approach (Ozel and Nadgir 2002; Ozel and Karpat 2005), and the Taylor tool life equation-based approach (Poulachon et al. 2001; Dawson 2002).

Although the analytical models help to provide better insight into the underlying physical wear mechanisms in hard turning, they are usually less satisfactory in modeling wear progression because of model simplifications and assumptions (Scheffer et al. 2003; Huang et al. 2007). The FEM approach also provides insight into the tool wear process; however, it is too computationally demanding and not suitable for optimization with current computing technologies. The time series (Boyd et al. 1996) and regression models (Ozel and Karpat 2005) are typically less accurate than the AI-based approaches. If both accuracy and speed are of interest rather than the underlying wear physics, the AI-based modeling approaches are generally favored for real applications. Among the AI-based approaches, ANN is a viable, reliable, and attractive approach for tool wear modeling (Chryssoluouris and Guillot 1990; Das et al. 1996; Dimla and Lister 2000; Haber and Alique 2003) for the following reasons: (1) ANN is capable of modeling non-linear processes, which makes it suitable for modeling the tool wear process; (2) the data-driven nature of ANN makes it powerful in parallel computing and capable of handling large amounts of data; and (3) ANN has good fault tolerance and adaptability, which is valuable for modeling tool wear in machining, a process that is always subject to noisy environments. Both sensorless and sensor-based approaches have been studied as ANN inputs in tool wear modeling/monitoring (Sick 2002).

ANN-based tool wear modeling

Different ANN architectures have been researched or applied to the tool wear modeling problem (Dimla et al. 1997; Sick 2002).


These include MLP (Liu and Altintas 1999), radial basis function ANN (Elanayar and Shin 1999; Kuo and Cohen 1999), self-organizing maps (SOM) (Kamarthi et al. 1991; Scheffer et al. 2003), neuro-fuzzy ANN (Chungchoo and Saini 2002), time delay ANN (Sick 1998), and ART2 ANN (Obikawa and Shinozuka 2004), to name a few. Some studies (Lin and Ting 1995; Sick 1998) have even tested several different network architectures to find the best one. However, there is still a need for a systematic way to determine the optimal network architecture for tool wear modeling applications. The modeling performance of ANN-based estimators is usually undermined if the network architecture is not properly selected. For most ANN architectures, some critical questions should be addressed first in order to implement an ANN-based estimator well: (1) how many hidden layers should be selected; (2) how many neurons should be assigned to each hidden layer; and (3) how to determine the connectivity relationship between each neuron pair. If an ANN is selected as the tool wear modeling approach, these concerns should be carefully addressed.

As a simple and easy implementation, the back propagation (BP) algorithm has typically been chosen as the learning algorithm for most ANN-based tool wear modeling applications, but its learning convergence speed, efficiency, and accuracy are not satisfactory for tool wear modeling/monitoring (Sarkar 1995). Other advanced ANN architectures and/or training/optimization algorithms have been pioneered with encouraging results, but they are difficult to implement widely because of their complexity. Among the studied ANN architectures, the MLP-based ANN has been widely implemented due to its simplicity and sufficient effectiveness in modeling the tool wear progression (Chryssoluouris and Guillot 1990; Rangwala and Donfeld 1990; Monostori 1993; Lin and Ting 1995; Das et al. 1996; Kuo and Cohen 1998; Sick 1998; Dimla and Lister 2000; Ozel and Nadgir 2002; Haber and Alique 2003; Sivarao 2005; Panda et al. 2006), and the BP algorithm has been commonly used to train such MLP-based ANNs.

Based on the universally accepted MLP structure, this study aims to develop a novel but easily implemented fully forward connected neural network-based generalized optimal estimator for CBN tool wear modeling in hard turning (i.e., optimized FFCNN); this estimator is trained using the EKF algorithm and optimized using a destructive approach. There are three main advantages of the proposed approach over other existing ANN-based approaches:

(1) The structure of the proposed ANN is much more generalized. For the proposed fully forward connected neural network, only the number of hidden neurons needs to be predetermined, rather than the number of hidden layers and the number of neurons in each hidden layer, which makes it a much more generalized modeling approach. The hidden neurons are organized into equivalent layers by the proposed training and optimization algorithm.



(2) The network structure is optimized given the numbers of input, hidden, and output neurons. Through connectivity optimization, unnecessary network connections are removed to form an optimized and concise network structure, leading to increased network robustness (KrishnaKumar and Nishta 1999).

(3) The network training convergence performance is improved by using the EKF algorithm. The training convergence speed and training accuracy of the EKF algorithm are much better than those of the BP algorithm (Li 2001).


Theoretical background of proposed modeling approach

As detailed in the following, the proposed CBN tool wear estimator is designed based on a generalized fully forward connected neural network, which is trained by the extended Kalman filter algorithm and optimized using a destructive approach.

Fully forward connected neural network (FFCNN)

An ANN is an emulation of the structure of the human brain, where the nodes correspond to neurons and the weights correspond to synaptic connections. Its universal input-output mapping approximation property is also mathematically guaranteed (Haykin 1999). This paper proposes an optimized FFCNN as an ANN-based estimator to model the CBN tool wear progression. As shown in Fig. 2, the FFCNN architecture proposed by Werbos (1990) is adopted as the backbone for this study since it is more general than the MLP approach proposed by Rumelhart and McClelland (1986). An FFCNN is composed of three sections, namely the input neuron section, the hidden neuron section, and the output neuron section, and every neuron takes connections from every neuron to its left. FFCNN can therefore be viewed as a generalized version of MLP (KrishnaKumar 1993). However, once the number of hidden neurons is given, there is no need to specify the number of hidden layers for FFCNN, in contrast to MLP. The FFCNN hidden neurons are fully forward connected, unlike the layer-wise connections of most MLPs. The FFCNN learning process includes two passes: the forward pass, which calculates the network outputs, and the backward pass, which updates the weights of the network connections. In the forward pass, the activation (output) of a particular neuron depends on the activations (outputs) of the neurons to its left (Werbos 1990; KrishnaKumar 1993).

Fig. 2 Architecture of a fully forward connected neural network (input neurons 1, …, n_i; hidden neurons 1, …, n_h; output neurons 1, …, n_o; feedforward connections W_ij shown as solid lines)

Forward pass computation

For each forward pass, the net input to the neuron i is computed as

net_i = \sum_{j=1}^{i-1} W_{ij}X_j, \quad 1 \le i \le n_i + n_h + n_o \quad (2)

and the output of the neuron i is computed using an activation function as

X_i = F_i(net_i), \quad 1 \le i \le n_i + n_h + n_o \quad (3)

where n_i, n_h, and n_o represent the numbers of input, hidden, and output neurons respectively, net_i is the net input to the neuron i, W_ij is the weight connecting the neuron j to the neuron i, X_i is the output of the neuron i, and F_i(·) is the activation function of the neuron i. For neurons in the hidden section, a unipolar sigmoid activation function is used:

F_i(net_i) = \frac{1}{1 + e^{-net_i}}, \quad n_i < i \le n_i + n_h \quad (4)

For neurons not in the hidden section, a linear activation function is used:

F_i(net_i) = net_i, \quad 1 \le i \le n_i \ \text{or} \ n_i + n_h < i \le n_i + n_h + n_o \quad (5)

The neural network outputs are then calculated as

Y_i = F_{i+n_i+n_h}(net_{i+n_i+n_h}) = X_{i+n_i+n_h}, \quad 1 \le i \le n_o \quad (6)

where Y_i represents the output of the output neuron i.
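To make the forward pass concrete, the following sketch implements Eqs. (2)–(6) for an arbitrary FFCNN. The 5-7-1 topology in the usage example matches the structure used later in the paper, but the random weights and input values are placeholders.

```python
import numpy as np

def ffcnn_forward(weights, x, n_i, n_h, n_o):
    """Forward pass of a fully forward connected network (Eqs. 2-6).

    weights[i, j] is W_ij, the weight from neuron j to neuron i; only the
    strictly lower-triangular part (j < i) is used, so every neuron feeds all
    neurons to its right.  x holds the n_i input values (the paper also feeds
    a constant bias of 1 as one of the inputs)."""
    n = n_i + n_h + n_o
    X = np.zeros(n)
    X[:n_i] = x                                # input neurons pass their values through (Eq. 5)
    for i in range(n_i, n):
        net = weights[i, :i] @ X[:i]           # Eq. (2): sum over all neurons to the left
        if i < n_i + n_h:
            X[i] = 1.0 / (1.0 + np.exp(-net))  # Eq. (4): unipolar sigmoid for hidden neurons
        else:
            X[i] = net                         # Eq. (5): linear output neurons
    return X[n_i + n_h:], X                    # Eq. (6): network outputs Y, plus all activations

# Usage example with a 5-7-1 topology and placeholder weights/inputs
rng = np.random.default_rng(0)
n_i, n_h, n_o = 5, 7, 1
W = rng.normal(scale=0.1, size=(n_i + n_h + n_o, n_i + n_h + n_o))
y, _ = ffcnn_forward(W, np.array([0.5, 0.3, 0.2, 0.1, 1.0]), n_i, n_h, n_o)
print(y)
```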

Backward pass computation

The EKF algorithm was first introduced to train neural networks by Singhal and Wu (1989). In the EKF approach, the network weights are viewed as the states of the non-linear stochastic process that the ANN describes. Compared to the BP algorithm for training the network weights, the EKF learning algorithm has the following advantages and drawbacks. The main advantages are: (1) the EKF algorithm reaches the training steady state much faster than the BP algorithm for non-stationary processes (Zhang 2005); (2) the EKF algorithm outperforms the BP algorithm when the training data are limited (Puskorius and Feldkamp 1994); and (3) the adjustment of coefficients in the EKF algorithm is based on physical characteristics of the described process, such as the process noise variance Q and the measurement noise variance R (Alessandri 2002), whereas for the BP algorithm the learning coefficients, such as the learning rate and momentum, are tuned by trial and error. On the other hand, the computational expense of the EKF learning algorithm is higher than that of BP (Alessandri 2002), and EKF also requires higher computational precision (Bierman 1977; Lary and Mussa 2004). Fortunately, these computation-related drawbacks have been offset by significant advances in computing technology. Hence the EKF learning algorithm (Puskorius and Feldkamp 1994; Haykin 1999) is favored here, and the EKF-based network updating approach is introduced as follows.

For the backward pass, the connection weights are updated by minimizing the error E between the neural network outputs Y_i and the desired outputs D_i:

E = \frac{1}{2}\sum_{i=1}^{n_o}(Y_i - D_i)^2 \quad (7)

The ordered derivatives of the output vector Y with respect to the weights are computed as

\frac{\partial^{+}Y}{\partial W_{ij}} = \frac{\partial^{+}Y}{\partial X_i}\frac{\partial X_i}{\partial net_i}X_j = \left(\sum_{j=i+1}^{n_i+n_h+n_o}\frac{\partial^{+}Y}{\partial X_j}\frac{\partial X_j}{\partial net_j}W_{ji}\right)\frac{\partial X_i}{\partial net_i}X_j \quad (8)

The network trainable weights W_ij are arranged into an M-dimensional vector W, and the elements of the ordered derivatives \partial^{+}Y/\partial W_{ij} are arranged into an M × n_o matrix H (Haykin 1999) at the mth step, where M is the number of trainable weights. The trainable weights are then updated using the EKF algorithm. The Kalman filter gain matrix K at the mth step is computed as

K_m = P_{m-1}H_m\left[R_m + H_m^{T}P_{m-1}H_m\right]^{-1} \quad (9)

The network weight vector W_m, which is the vector W at the mth step, is updated as

\hat{W}_m = \hat{W}_{m-1} + K_m(d_m - \hat{y}_m) \quad (10)

Once \hat{W}_m is updated, it is restored to form the weight matrix [W_{ij}]_{(n_i+n_h+n_o)\times(n_i+n_h+n_o)}. The error covariance matrix P_m is further computed as

P_m = P_{m-1} - K_m H_m^{T}P_{m-1} + Q_m \quad (11)

where the subscript m denotes the mth step, K_m is the Kalman gain matrix, d_m is the target vector, \hat{y}_m is the output vector of the network, W_m is the weight vector, \hat{W}_m and \hat{W}_{m-1} are the estimates of the weight vectors W_m and W_{m-1} respectively, H_m is the matrix of derivatives of the network outputs with respect to the trainable weights, R_m is the covariance matrix of the measurement noise, Q_m is the covariance matrix of the process noise, and P_m is normally initialized as a diagonal matrix with large diagonal elements, such as 100, at m = 0 (Puskorius and Feldkamp 1994).
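A minimal sketch of one EKF update (Eqs. 9–11) is given below, reusing ffcnn_forward from the earlier sketch. For brevity the derivative matrix H is approximated by finite differences instead of the ordered-derivative recursion of Eq. (8); this substitution, and the flattening of the weight matrix, are implementation choices, not details from the paper.

```python
import numpy as np

def ekf_step(W, x, d, P, R, Q, n_i, n_h, n_o, eps=1e-6):
    """One EKF weight update for the FFCNN; d is the target output vector."""
    n = n_i + n_h + n_o
    idx = [(i, j) for i in range(n_i, n) for j in range(i)]   # trainable weights, j < i
    w = np.array([W[i, j] for i, j in idx])                   # flatten into an M-vector

    def predict(wvec):
        Wm = W.copy()
        for k, (i, j) in enumerate(idx):
            Wm[i, j] = wvec[k]
        return ffcnn_forward(Wm, x, n_i, n_h, n_o)[0]

    y = predict(w)
    H = np.zeros((len(w), n_o))                               # M x n_o derivative matrix
    for k in range(len(w)):
        wp = w.copy(); wp[k] += eps
        H[k] = (predict(wp) - y) / eps                        # finite-difference stand-in for Eq. (8)

    S = R + H.T @ P @ H                                       # innovation covariance
    K = P @ H @ np.linalg.inv(S)                              # Eq. (9): Kalman gain
    w = w + K @ (d - y)                                       # Eq. (10): weight update
    P = P - K @ H.T @ P + Q                                   # Eq. (11): covariance update

    for k, (i, j) in enumerate(idx):                          # restore the weight matrix
        W[i, j] = w[k]
    return W, P
```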

Network optimization

The ANN-based tool wear modeling approach has some common issues, such as the under-training problem, the convergence problem, the overfitting problem, and the topology optimization problem (Danaher et al. 2004). While the first two can be mitigated by carefully selecting stopping criteria, the latter two are addressed here through an optimization approach, which optimizes the network topology and reduces the risk of overfitting. In this study, a destructive topology optimization approach is utilized to optimize the FFCNN estimator. First, the number of hidden neurons is chosen based on a trial-and-error approach (Schalkoff 1997), and then the network topology is optimized by disconnecting some weights among the network neurons using a method proposed by KrishnaKumar (1993). Such a pruned and optimized network has been proven to be simpler, more accurate, and more robust (KrishnaKumar and Nishta 1999). An example of the optimization result is illustrated in Fig. 3, where an FFCNN originally of structure 1-3-1 has two connections (C_31 and C_42) disconnected after optimization. The detailed optimization algorithm can be found in KrishnaKumar (1993).

Fig. 3 An example of the connectivity optimization effect (a 1-3-1 FFCNN with neurons 1–5 before and after optimization; the connections C_31 and C_42 are removed)

Forward pass for optimization

To optimize the network connectivity, a function g(C_ij) is introduced in Eq. (12) to represent the connection for each neuron pair.




If g(C_ij) = 1.0, there is a connection between the ith and jth neurons; if g(C_ij) = 0, there is no connection:

g(C_{ij}) = \frac{1}{1 + e^{-C_{ij}}} \quad (12)

where C_ij is the connection coefficient from the neuron j to the neuron i. With g(C_ij), the forward pass computation corresponding to Eq. (2) is rewritten as

net_i = \sum_{j=1}^{i-1} W_{ij}\,g(C_{ij})\,X_j, \quad 1 \le i \le n_i + n_h + n_o \quad (13)

The other forward pass calculations are the same as in section "Forward pass computation".

Backward pass for optimization

With g(C_ij) embedded inside the network, the ordered derivatives of the output vector Y with respect to the weights in Eq. (8) become

\frac{\partial^{+}Y}{\partial W_{ij}} = \frac{\partial^{+}Y}{\partial X_i}\frac{\partial X_i}{\partial net_i}\,g(C_{ij})\,X_j \quad (14)

and the ordered derivatives of the output vector with respect to the connection coefficients are

\frac{\partial^{+}Y}{\partial C_{ij}} = \frac{\partial^{+}Y}{\partial X_i}\frac{\partial X_i}{\partial net_i}\,W_{ij}\,X_j\,\frac{\partial g(C_{ij})}{\partial C_{ij}} \quad (15)

Both the weights and the connection coefficients are then updated using the EKF algorithm (Eqs. 9–11, with different H matrices) as discussed in section "Backward pass computation". At the beginning of optimization, each C_ij is set to 0. When the training stopping criteria have been met, the connections with C_ij < 0 are disconnected by setting g(C_ij) = 0, and the others stay connected by setting g(C_ij) = 1.
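The sketch below shows how the connection gate of Eqs. (12)–(13) and the pruning rule just described could be layered on the earlier forward pass. The function name and the hard flag are illustrative, not from the paper.

```python
import numpy as np

def gated_forward(W, C, x, n_i, n_h, n_o, hard=False):
    """Forward pass with connectivity gating; C holds the coefficients C_ij."""
    g = 1.0 / (1.0 + np.exp(-C))           # Eq. (12): soft connection gate g(C_ij)
    if hard:                                # after training: prune connections with C_ij < 0
        g = (C >= 0).astype(float)          # g = 0 removes the connection, g = 1 keeps it
    return ffcnn_forward(W * g, x, n_i, n_h, n_o)   # Eq. (13): W_ij replaced by W_ij * g(C_ij)

# During optimization every C_ij starts at 0 (so g = 0.5) and is updated by the
# same EKF recursion as the weights; after the stopping criteria are met the
# hard gate fixes the pruned topology.
```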

Training procedure

The training process of optimized FFCNN includes two steps: the network connectivity is first optimized, and then the weights of the optimized network are further refined using the same training data.

Input normalization

To avoid the saturation problem in training, all inputs are normalized before they are fed into the network. The normalization is performed using a linear function:

X_N = (X - X_{min})\,\frac{X_{N\,max} - X_{N\,min}}{X_{max} - X_{min}} + X_{N\,min} \quad (16)

where X_N is the normalized input, X is the original input value, X_{N max} and X_{N min} are the maximum and minimum values of the normalized inputs, and X_{max} and X_{min} are the maximum and minimum values of the inputs before normalization.

Network training and testing

Network training stopping criteria are vital to the performance of trained FFCNN estimators. If the stopping criteria are too strict, they cause an over-training problem: the network is trained to map not only the patterns of interest but also the noise features of the training data, and the resulting model cannot fit the testing data well. On the contrary, if the stopping criteria are too loose, the training process ends prematurely, which often results in an under-training problem. The stopping criteria of the studied FFCNN and optimized FFCNN are determined by trial and error as follows: (1) the training process stops after 4,500 cycles if no other stopping criterion is met before; or (2) if the error is less than 0.03 and the difference between the current error and the error 50 epochs earlier is less than 0.0004, the training process stops. During training, the trainable parameters are determined to minimize the error E. Once the stopping criteria are met, the training process is terminated and both the structure and the weights of the ANNs are fixed. During testing, the testing data are fed into the trained ANN following the forward pass computation in section "Forward pass computation".
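A small sketch of the linear normalization of Eq. (16) and the stopping rule just described follows; the [0, 1] normalization target range used as the default is an assumption.

```python
def normalize(X, x_min, x_max, xn_min=0.0, xn_max=1.0):
    """Linear min-max scaling, Eq. (16)."""
    return (X - x_min) * (xn_max - xn_min) / (x_max - x_min) + xn_min

def should_stop(errors, max_cycles=4500, tol=0.03, window=50, min_improve=4e-4):
    """errors: training error recorded once per cycle (criteria quoted above)."""
    if len(errors) >= max_cycles:
        return True
    if len(errors) > window and errors[-1] < tol:
        return abs(errors[-1 - window] - errors[-1]) < min_improve
    return False
```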

Experimental validation

Validation with Ozel and Nadgir's experimental results

Hard turning experiment setup and tool wear estimator design

In the work by Ozel and Nadgir (2002), a hardened H-13 steel tube workpiece (55 HRC) was turned using two types of CBN tools: a chamfered tool with a 0.1 mm chamfer length and a 25° chamfer angle, and a honed tool with a 0.02 mm edge radius. The tool holder had a negative five-degree rake angle. Different settings of cutting velocity (200, 250, and 300 m/min) and feed rate (0.05 and 0.1 mm/rev) were used in the experiments. Besides their experimental measurements, Ozel and Nadgir (2002) also modeled the tool wear using a three-layer neural network (MLP). The topology of the neural network was determined as 5-30-8 based on a trial-and-error method, and the network was trained using the BP algorithm.



The neural network inputs included cutting velocity, feed rate, cutting force ratio, and depth of cut, which was constant (2.5 mm). The network output was the coded tool flank wear depth: the output layer consisted of 8 neurons representing 8 binary values of flank wear. Twenty-five data sets of the chamfered tool and 18 data sets of the honed tool were used as the training data, respectively.

For comparison, the FFCNN and optimized FFCNN estimators are applied to model this tool wear progression (Ozel and Nadgir 2002). The topologies of both estimator architectures are 5-7-1. The five inputs are the cutting speed, feed rate, depth of cut (DOC), machining time, and a constant bias of 1; no force ratio information is required as an input, in contrast to Ozel and Nadgir (2002). The output is the tool flank wear estimate. The number of hidden neurons is chosen by trial and error: based on the recommendation that an ANN with 2n_i + 1 hidden neurons is enough for satisfactory modeling accuracy (Schalkoff 1997), 12 hidden neurons are first chosen, and the number of hidden neurons is then gradually reduced until a better performance is achieved. This process results in a structure of 7 hidden neurons, and the overall network architecture is shown in Fig. 4. The training data used are the same as those of Ozel and Nadgir (2002).

The EKF algorithm is applied to train both the FFCNN and optimized FFCNN estimators. The diagonal elements of the process noise covariance matrix Q are initialized as 0.01, and this value descends linearly within 10,000 training cycles until Q reaches a minimum limit of 0.000001. The diagonal elements of the measurement noise covariance matrix R are initialized as 100 and also descend linearly until R reaches a minimum boundary of two. Both R and Q help the training process to converge toward a global minimum.

Fig. 4 Input and output features for the proposed FFCNN and optimized FFCNN (inputs: depth of cut, cutting speed, feed rate, machining time, bias; output: flank wear)

Performance comparison

The modeling performance of FFCNN and optimized FFCNN is compared with the MLP approach of Ozel and Nadgir (2002).
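The linearly descending noise covariances described above might be coded as follows; the linear-ramp form is an interpretation of the text, and the function name is illustrative.

```python
import numpy as np

def noise_schedule(cycle, n_weights, n_outputs,
                   q0=0.01, q_min=1e-6, r0=100.0, r_min=2.0, ramp=10_000):
    """Return (Q, R) diagonal covariances for a given training cycle."""
    frac = min(cycle / ramp, 1.0)
    q = q0 + (q_min - q0) * frac          # Q diagonal: 0.01 -> 1e-6 over 10,000 cycles
    r = r0 + (r_min - r0) * frac          # R diagonal: 100 -> 2
    return q * np.eye(n_weights), r * np.eye(n_outputs)
```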

The comparisons are based on the data from both the chamfered and honed tools. In modeling the chamfered tool wear progression, the connections are reduced from 68 for FFCNN to 37 for optimized FFCNN; in modeling the honed tool wear progression, they are reduced from 68 to 35. Two typical training results are shown in Fig. 5: the training error is 0.92% for FFCNN versus 0.45% for optimized FFCNN in chamfered tool cutting, and 0.60% versus 0.37% in honed tool cutting. Figure 6 shows the predicted flank wear progressions from MLP (Ozel and Nadgir 2002), FFCNN, and optimized FFCNN for two representative testing cases. For both testing cases, the performance of optimized FFCNN is slightly better than that of FFCNN. From Fig. 6a, the accuracy of optimized FFCNN is slightly better and more consistent than that of MLP, while for the other testing case (Fig. 6b) the modeling accuracy of optimized FFCNN is much better than that of MLP. The detailed testing error comparisons are shown in Table 1. The error is defined as

\mathrm{Error} = \sqrt{\frac{\sum_i (X_i - \hat{X}_i)^2}{\sum_i X_i^2}} \times 100\%

where X_i and \hat{X}_i are the actual measurements and the ANN estimator outputs, respectively.

Fig. 5 Training performance comparison for (a) the chamfered tool at cutting speed = 200 m/min and feed = 0.1 mm/rev and (b) the honed tool at cutting speed = 200 m/min and feed = 0.05 mm/rev (flank wear (mm) versus time (min); desired outputs, FFCNN outputs, and optimized FFCNN outputs)

Fig. 6 Testing performance comparison for (a) the chamfered CBN tool and (b) the honed CBN tool at cutting speed = 250 m/min and feed = 0.05 mm/rev (flank wear (mm) versus time (min); desired outputs, MLP outputs, FFCNN outputs, and optimized FFCNN outputs)

Table 1 Testing error comparison based on Ozel and Nadgir's data (Ozel and Nadgir 2002)

                   MLP (%)    FFCNN (%)   Optimized FFCNN (%)
Test 1 (Fig. 6a)   10.17      7.60        7.39
Test 2 (Fig. 6b)   59.14      5.08        4.83
Average error      34.66      6.34        6.11
Error variance     1199.03    3.18        3.28
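For reference, the percentage errors reported in Tables 1 and 3 can be computed as in the sketch below; the square-root (relative root-sum-of-squares) form follows the error definition reconstructed above and is an assumption recovered from the garbled expression.

```python
import numpy as np

def percent_error(measured, estimated):
    """Relative error metric used for the testing comparisons."""
    measured, estimated = np.asarray(measured), np.asarray(estimated)
    return 100.0 * np.sqrt(np.sum((measured - estimated) ** 2) / np.sum(measured ** 2))
```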

used for testing. For MLP, the learning rate and momentum are set to be 0.1 and 0.8 respectively, and the limit for training epochs is set to be 20,000. Other network configurations are the same as those in section “Validation with Ozel and Nadgir’s experimental results”. Two typical training results are shown in Fig. 7. All training results of the three investigated ANNs are very similar and all of them model the tool wear progression very well as in section “Validation with Ozel and Nadgir’s experimental results”. However, the differences in modeling generalization capability can be seen from the five testing cases as shown in Fig. 8 and Table 3. Optimized FFCNN, which has 35 connections for this case, excels all other three approaches under Conditions 3, 4, and b, but be second to MLP under Condition 2 and the analytical approach under Condition 8. Overall, it can be seen that the optimized FFCNN-based approach has the least modeling error and error variance as shown in Table 3. For all the testing cases, optimized FFCNN excels FFCNN, which confirms the effectiveness of the connectivity optimization algorithm. It should be noticed that


Table 2 Experimental cutting conditions (Conditions 4, 7, and 11 were identical in this experimental design)

Condition index   Speed (m/s)   Feed (mm/rev)   Depth of cut (mm)
1                 3.05          0.152           0.203
2                 1.52          0.152           0.203
3                 3.05          0.076           0.203
4                 2.29          0.114           0.203
5                 1.52          0.076           0.203
6                 3.36          0.114           0.203
7                 2.29          0.114           0.203
8                 2.29          0.061           0.203
9                 2.29          0.168           0.203
10                1.21          0.114           0.203
11                2.29          0.114           0.203
a                 1.52          0.076           0.102
b                 1.52          0.076           0.152

Fig. 7 Training performance comparison under (a) Condition 5 and (b) Condition 10 (flank wear (µm) versus time (min); desired outputs, analytical predictions, MLP outputs, FFCNN outputs, and optimized FFCNN outputs)

It should be noticed that the proposed optimized FFCNN estimator delivers its worst testing performance under Condition 3, which is attributed to the rapid and stochastic tool wear under these aggressive cutting conditions.

Discussion

The experimental data from section "Validation in CBN hard turning of hardened steel" are further used to evaluate the proposed optimized FFCNN estimator in terms of learning convergence speed, generalization capability, model sensitivity, network training cost-benefit, and the effect of cutting conditions on tool wear.

Learning convergence speed

Figure 9a provides the learning convergence speed comparison between the BP and EKF learning algorithms for FFCNN. FFCNN trained by the EKF algorithm clearly converges much faster than with BP, which agrees with previous findings (Zhang 2005). Figure 9b compares the learning convergence speed of FFCNN and optimized FFCNN, both trained by EKF; the learning convergence speed of optimized FFCNN is even faster than that of FFCNN. Note that for this comparison some connections of optimized FFCNN have already been removed, and the convergence study is based on the network refining phase discussed in section "Training procedure".

Generalization capability

Once the proposed optimized FFCNN estimator is trained using the training data, its generalization capability in modeling tool wear is further studied using the testing data. Based on the comparisons of the different ANN-based approaches in Table 3, the optimized FFCNN-based tool wear estimator is the most accurate and effective of the three ANN-based modeling approaches. Furthermore, the variance of the errors of optimized FFCNN is the smallest (10.04) compared with those of the other approaches (184.01, 175.56, and 65.54) over all the testing cases. These observations indicate that the generalization ability of optimized FFCNN is the best among the three investigated ANNs for CBN tool wear modeling in hard turning.

Fig. 8 Testing performance comparison under Conditions 2, 3, 4, 8, and b (flank wear (µm) versus time (min); desired outputs, analytical predictions, MLP outputs, FFCNN outputs, and optimized FFCNN outputs)

Table 3 Testing error comparison in CBN hard turning

                        Analytical model (%)   MLP (%)   FFCNN (%)   Optimized FFCNN (%)
Condition 2             25.63                   8.37      10.65       10.29
Condition 3             14.49                  39.45      26.32       12.77
Condition 4, 7, or 11   11.96                  17.94      22.03       11.51
Condition 8              6.27                  15.57      15.27       13.71
Condition b             40.52                   5.96       6.50        5.60
Average error           19.77                  17.46      16.15       10.78
Error variance         184.01                 175.56      65.54       10.04

Fig. 9 Learning convergence comparison between (a) MLP and FFCNN, and (b) optimized FFCNN and FFCNN

Modeling sensitivity to network structure variation

The structure 5-7-1 has been selected based on a trial-and-error approach, and satisfactory modeling accuracy has been achieved. However, it is of interest to investigate how the estimating capability of MLP, FFCNN, and optimized FFCNN deteriorates when the network structure is modified. If the network outputs are less sensitive to structure changes, the network is more robust, since its performance varies less after any unnecessary structure alteration. To study the structure sensitivity of the investigated ANNs, two more experiments are conducted by altering the number of hidden neurons from seven to five and to nine, respectively. Table 4 lists the statistical performance of the investigated ANNs under this structure variation. As expected, the modeling performance degrades after altering the ANN structure, and FFCNN is the most sensitive to the network structure change, as shown in Table 4. Among the cases investigated, the average estimating errors of FFCNN are the largest, while optimized FFCNN excels in modeling performance and has much smaller deviation values. To further compare the relative performance degradation of the three ANNs, the changes in the average and the variance of the estimating errors due to the hidden neuron number alteration are also listed in Table 4. When the hidden neuron number decreases from seven to five, the incremental average error of MLP is slightly smaller than that of optimized FFCNN; however, optimized FFCNN excels both MLP and FFCNN for all the other changes. The results show that optimized FFCNN is the least sensitive to structure alteration among the three ANNs investigated.

Table 4 ANNs' performance degradation due to hidden neuron number alteration

                                                MLP (%)   FFCNN (%)   Optimized FFCNN (%)
With 5 hidden neurons    Avg. error             19.35     28.01       12.81
                         Avg. deviation         80.09     284.57      18.88
With 9 hidden neurons    Avg. error             22.63     27.12       12.41
                         Avg. deviation         239.45    516.84      25.52
7 to 5 hidden neurons    Avg. error change      1.89      11.86       2.03
                         Avg. deviation change  95.47     219.03      8.84
7 to 9 hidden neurons    Avg. error change      5.17      10.97       1.63
                         Avg. deviation change  63.89     451.30      15.48

Cost-benefit analysis of ANN training

The proposed optimized FFCNN estimator excels in modeling performance but requires more computational time in training due to the EKF training algorithm and the additional structure optimization process. A cost-benefit analysis is conducted to better appreciate the proposed estimator. MLP and optimized FFCNN are trained to model the CBN hard turning tool wear progression of section "Validation in CBN hard turning of hardened steel". When the networks are trained for 1,200 iterations, training MLP costs 3.2 s and results in a 17.53% training error, while training optimized FFCNN costs 483.2 s and results in a 1.94% training error.

This shows that the proposed estimator requires much more time for each training epoch; however, it has an increased training convergence speed, as discussed in section "Learning convergence speed", and higher modeling accuracy and efficiency, as discussed in section "Generalization capability". Considering the significant advances in computational power, this weakness in computational cost will become less pronounced, and the proposed estimator is preferred for most tool wear modeling cases.

Effect of cutting conditions on tool wear using optimized FFCNN

CBN tool performance in terms of tool life is further evaluated as a function of the cutting conditions (cutting speed, feed rate, and depth of cut) based on the developed optimized FFCNN estimator. For comparison, the cutting conditions and tool life criteria are selected based on a previous study (Huang and Liang 2004c). The estimator predictions are also compared with the experimental measurements and the theoretical predictions (Huang and Liang 2004a), as shown in Figs. 10–12.

The effect of cutting speed on tool wear is investigated first. For this case, both the feed rate and the depth of cut are fixed at representative values, 0.114 mm/rev and 0.203 mm respectively, and the tool life is investigated by varying the cutting speed from 1.20 to 3.02 m/s. The tool life criterion here is selected as 150 µm, as in Huang and Liang (2004c). As shown in Fig. 10, the tool life curves from the analytical model and optimized FFCNN match each other closely, and both are close to the experimental measurements.
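As an illustration of how tool life curves such as those in Figs. 10–12 could be generated from a trained estimator, the sketch below marches machining time forward until the predicted flank wear reaches the chosen criterion. Here predict_vb is a hypothetical wrapper around the trained optimized FFCNN (including the Eq. 16 input normalization), and the millimeter-to-micrometer conversion is an assumption about its output units.

```python
def tool_life(predict_vb, speed, feed, doc, criterion_um=150.0,
              dt_min=0.1, t_max_min=60.0):
    """Return the machining time (min) at which predicted flank wear first
    reaches the tool life criterion, or None if it is never reached."""
    t = 0.0
    while t < t_max_min:
        if predict_vb(speed, feed, doc, t) * 1000.0 >= criterion_um:  # mm -> micrometers
            return t
        t += dt_min
    return None
```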

The effect of feed rate is also investigated. For this case, the cutting speed and depth of cut are set as 2.29 m/s and 0.203 mm respectively, and the tool life is investigated by varying the feed rate from 0.076 to 0.168 mm/rev. The tool life criterion here is specified as 110 µm (Huang and Liang 2004c). As seen from Fig. 11, the optimized FFCNN estimator provides a better estimate of the tool wear progression than the analytical model of Huang and Liang (2004a).

The effect of depth of cut is investigated as well. For this case, the cutting speed and feed rate are set as 1.52 m/s and 0.076 mm/rev respectively, and the tool life is investigated by varying the depth of cut from 0.102 to 0.203 mm. The tool life criterion here is specified as 125 µm (Huang and Liang 2004c). As seen from Fig. 12, the optimized FFCNN estimator again provides a better estimate of the tool wear progression than the analytical model of Huang and Liang (2004a). Overall, the tool life predictions of the optimized FFCNN estimator are closer to the experimental measurements than those of the analytical model (Huang and Liang 2004a). As seen from these figures, the cutting speed is the most significant factor in determining the tool life, and the depth of cut is the least significant factor.

Fig. 10 Effect of cutting speed on tool life (feed = 0.114 mm/rev and depth of cut = 0.203 mm)

Fig. 11 Effect of feed rate on tool life (cutting speed = 2.29 m/s and depth of cut = 0.203 mm)

Fig. 12 Effect of depth of cut on tool life (cutting speed = 1.52 m/s and feed = 0.076 mm/rev)

Conclusions

An FFCNN-based generalized optimal estimator is proposed to model CBN tool wear in hard turning, and this estimator has the following advantages:


(1) It is easily implemented as a generalized perceptron-based neural network: it is not necessary to specify the number of hidden layers and the number of neurons in each hidden layer once the total number of hidden neurons is given.

(2) The neuron connections are optimized automatically to achieve increased network robustness.

(3) The network training convergence performance is improved by using the EKF algorithm.

That is, such an estimator for tool wear modeling can be designed automatically to achieve better and more robust estimating performance once the numbers of inputs, outputs, and hidden neurons are specified. The modeling performance of the proposed neural network-based estimator has been evaluated against experimental measurements and compared with other common ANN-based approaches, and the comparisons show that the optimized FFCNN estimator excels in modeling CBN tool wear in hard turning. Furthermore, the optimized FFCNN estimator has the fastest learning convergence speed and the best generalization capability, and it is the least sensitive to network structure alteration.


It is believed that this modeling approach is also applicable to other machining tool wear modeling studies. In conclusion, the optimized FFCNN tool wear estimator has been proven to be faster, more accurate, and more robust than the other approaches investigated, and it will help to better optimize the cutting conditions for hard turning as well as for other machining processes. Although the tool geometry effect is not explored in this study, it can also be studied using the proposed estimator. Furthermore, future work may integrate a recurrent approach to better capture the non-stationary nature of tool wear progression in machining.

Acknowledgments The financial support from the South Carolina Space Grant Consortium and NASA Ames Research Center is highly appreciated.

References Abrao, A. M., Wise, M. L. H., & Aspinwall, D. K. (1995). Tool life and workpiece surface integrity evaluations when machining hardened AISI 52,100 steels with conventional ceramic and PCBN tool materials. SME Technical Paper, MR95-159, 1–9. Alessandri, A. (2002). Optimization-based learning with bounded error for feedforward neural networks. IEEE Transactions on Neural Networks, 13(2), 261–273. Bierman, G. (1977). Factorization methods for discrete sequential estimation. New York: Academic Press. Boyd, M., Kaastra, I., Kermanshahi, B., & Kohzadi, N. (1996). A comparison of artificial neural network and time series models for forecasting commodity prices. Neurocomputing, 10, 169–181. Chryssoluouris, G., & Guillot, M. (1990). A comparison of statistical and AI approaches to the selection of process parameters in intelligent machining. ASME Journal of Engineering for Industry, 112, 122–131. Chungchoo, C., & Saini, D. (2002). On-line tool wear estimation in CNC turning operations using fuzzy neural network model. International Journal of Machine Tools & Manufacture, 42, 29–40. Danaher, S., Datta, S., Waddle, I., & Hackney, P. (2004). Erosion modeling using Bayesian regulated artificial neural networks. Wear, 256, 879–888. Das, S., Chattopadhyay, A. B., & Murthy, A. S. R. (1996). Force parameters for on-line tool wear estimation: A neural network approach. Neural Networks, 9, 1639–1645. Dawson, T. (2002). Machining hardened steel with polycrystalline cubic boron nitride cutting tools. Ph.D. Thesis, Georgia Institute of Technology, Atlanta, GA. Dewes, R. C., & Aspinwall, D. K. (1996). The use of high speed machining for the manufacture of hardened steel dies. Transactions of NAMRI, 24, 21–26. Dimla, D. E. Sr., Lister, P. M., & Leighton, N. J. (1997). Neural network solutions to the tool condition monitoring problem in metal cutting— a critical review of methods. International Journal of Machine Tools & Manufacture, 37, 1219–1241. Dimla, D. E. Sr., & Lister, P. M. (2000). On-line metal cutting tool condition monitoring. II: Tool-state classification using multi-layer perceptron neural networks. International Journal of Machine Tools & Manufacture, 40, 769–781. Elanayar, S. V. T., & Shin, Y. C. (1999). Robust tool wear monitoring using radial basis function neural network. ASME

395 Journal of Dynamic Systems, Measurement and Control, 117, 459–467. Haber, R. E., & Alique, A. (2003). Intelligent process supervision for predicting tool wear in machining processes. Mechatronics, 13, 825–849. Haykin, S. (1999). Neural networks: A comprehensive foundation (2nd ed.). Upper Saddle River, NJ: Prentice-Hall. Huang, Y. (2002). Predictive modeling of tool wear rate with application to CBN hard turning. Ph.D. Thesis, Georgia Institute of Technology, Atlanta, GA. Huang, Y., & Dawson, T. G. (2005). Tool crater wear depth modeling in CBN hard turning. Wear, 258(9), 1455–1461. Huang, Y., & Liang, S. Y. (2004a). Modeling of CBN tool flank wear progression in finish hard turning. ASME Journal of Manufacturing Science and Engineering, 126, 98–106. Huang, Y., & Liang, S. Y. (2004b). Modeling of CBN tool crater wear in finish hard turning. Internation Journal of Advanced Manufacturing Technology, 24(9–10), 632–639. Huang, Y., & Liang, S. Y. (2004c). Effect of cutting conditions on tool performance in hard turning. Transactions of NAMRI/SME, 32, 511–518. Huang, Y., Chou, Y. K., & Liang, S. Y. (2007). CBN tool wear in hard turning: A survey on research progresses. International Journal of Advanced Manufacturing Technology, 35(5–6), 443–453. Kamarthi, S. V., Sankar, G. S., Cohen, P. H., & Kumara, S. R. T. (1991). On-line tool wear monitoring using a Kohonen’s feature map. In: Proceeding of the First Artificial Neural Networks in Engineering Conference, St. Louis, pp. 639–644. König, W., Hochschule, T., Komanduri, R., & Tönshoff, D. H. K. (1984). Machining of hard materials. Annals of CIRP, 33(2), 417–427. Kramer, B. M. (1986). Predicted wear resistances of binary carbide coatings. Journal of Vacuum Science & Technology, A4(6), 2870– 2873. KrishnaKumar, K. (1993). Optimization of the neural net connectivity pattern using a backpropagation algorithm. Neurocomputing, 5, 273–286. KrishnaKumar, K., & Nishta, K. (1999). Robustness analysis of neural networks with an application to system identification. Journal of Guidance, Control, and Dynamics, 22, 695–701. Kuo, R. J., & Cohen, P. H. (1998). Intelligent tool wear estimation system through artificial neural networks and fuzzy modeling. Artificial Intelligence in Engineering, 12, 229–242. Kuo, R. J., & Cohen, P. H. (1999). Multi-sensor integration for on-line tool wear estimation through radial basis function networks and fuzzy neural network. Neural Networks, 12, 355–370. Lary, D. J., & Mussa, H. Y. (2004). Using an extended Kalman filter learning algorithm for feed-forward neural networks to describe tracer correlations. Atmospheric Chemistry and Physics Discussion, 4, 3653–3667. Li, S. (2001). Comparative analysis of backpropagation and extended Kalman filter in pattern and batch forms for training neural networks. Neural Networks, Proceedings IJCNN ’01. International Joint Conference, 1, 144–149. Lin, S. C., & Ting, C. J. (1995). Drill wear monitoring using neural networks. International Journal of Machine Tools & Manufacture, 36(4), 465–475. Liu, Q., & Altintas, Y. (1999). On-line monitoring of flank wear in turning with multilayered feed-forward neural network. International Journal of Machine Tools & Manufacture, 39, 1945–1959. Monostori, L. (1993). A step towards intelligent manufacturing: Modelling and monitoring of manufacturing processes through artificial neural networks. Annals of the CIRP, 42(1), 485–488. Obikawa, T., & Shinozuka, J. (2004). 
Monitoring of flank wear of coated tools in high speed machining with a neural network ART2. International Journal of Machine Tools & Manufacture, 44, 1311–1318.


396 Ozel, T., & Nadgir, A. (2002). Prediction of flank wear by using back propagation neural network modeling when cutting hardened H-13 steel with chamfered and honed tools. International Journal of Machine Tools & Manufacture, 42, 287–297. Ozel, T., & Karpat, Y. (2005). Predictive modeling of surface roughness and tool wear in hard turning using regression and neural networks. International Journal of Machine Tools & Manufacture, 45, 467–479. Panda, S. S., Singh, A. K., Chakraborty, D., & Pal, S. K. (2006). Drill wear monitoring using back propagation neural network. Journal of Materials Processing Technology, 172, 283–290. Poulachon, G., Moisan, A., & Jawahir, I. S. (2001). Tool-wear mechanisms in hard turning with polycrystalline cubic boron nitride tools. Wear, 250, 576–586. Puskorius, G. V., & Feldkamp, L. A., (1994). Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks. IEEE Transactions on Neural Networks, 5, 279–297. Rangwala, S., & Donfeld, D. (1990). Sensor integration using neural networks for intelligent tool condition monitoring. Journal of Engineering for Industry Transactions ASME, 112, 219–228. Rumelhart, D. E., & McClelland, J. L. (1986). The PDP research group, parallel distributed processing: Explorations in the microstructure of cognition. Cambridge, MA: MIT Press. Sarkar, D. (1995). DILIP methods to speed up error back-propagation learning algorithm. ACM Computing Surveys, 27(4), 519–542. Schalkoff, R. J. (1997). Artificial neural networks. New York: McGrawHill Inc. Scheffer, C., Kratz, H., Heyns, P. S., & Klocke, F. (2003). Development of a tool wear-monitoring system for hard turning. International Journal of Machine Tools & Manufacture, 43, 973–985. Sick, B. (1998). Online tool wear monitoring in turning using timedelay neural networks. In: Proceedings of the 1998 International Conference on Acoustics, Speech, and Signal Processing, 1, Seattle, May, 1998, pp. 445–448. Sick, B. (2002). On-line and indirect tool wear monitoring in turning with artificial neural networks: A review of more than a decade of research. Mechanical Systems and Signal Processing, 16, 487–546.


J Intell Manuf (2008) 19:383–396 Singhal, S., & Wu, L. (1989). Training feed forward networks with extended Kalman filter algorithm. Proceedings – ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, Glasgow, Scotland, 1187–1190. Sivarao, P. S. (2005). Expert system suitability in modeling and analysis of tool wear in drilling. Proceedings of 2005 International Conference on MEMS, NANO and Smart Systems, pp. 473–476. Sun, J., Rahman, M., Wong, Y. S., & Hong, G. S. (2004). Multiclassification of tool wear with support vector machine by manufacturing loss consideration. International Journal of Machine Tools & Manufacture, 44, 1179–1187. Takatsu, S., Shimoda, H., & Otani, K. (1983). Effect of CBN content on the cutting performance of polycrystalline CBN tools. International Journal of Refractory Metals & Hard Materials, 2(4), 175–178. Tönshoff, H. K., Arendt, C., & Amor, R. B. (2000). Cutting of hardened steel. Annals of CIRP, 49(2), 547–566. Usui, E., Shirakashi, T., & Kitagawa, T. (1978). Analytical prediction of three dimensional cutting process, part 3: Cutting temperature and crater wear of carbide tool. Journal of Engineering for Industry, 100, 236–243. Wang, Z., & Dornfeld, D. A. (1992). In-process monitoring using neural networks. Proceedings of the 1992 Japan—USA Symposium on Flexible Automation Part 1 (of 2), San Francisco, CA (13–15th July, 1992), pp. 263–270. Werbos, P. J. (1990). Back propagation through time: what is does and how to do it. Proceedings of the IEEE, 78(10), 1550–1560. Xie, L. J., Schmidt, J., Schmidt, C., & Biesinger, F. (2005). 2D FEM estimate of tool wear in turning operation. Wear, 258, 1479– 1490. Yen, Y. C., Söhner, J., Lilly, B., & Altan, T. (2004). Estimation of tool wear in orthogonal cutting using the finite element analysis. Journal of Materials Processing Technology, 146, 82–91. Zhang, L. (2005). Neural network-based market clearing price prediction and confidence interval estimation with an improved extended Kalman filter method. IEEE Transactions on Power Systems, 20(1), 59–66.