Int J Adv Manuf Technol DOI 10.1007/s00170-016-8703-4

ORIGINAL ARTICLE

Multivariate global index and multivariate mean square error optimization of AISI 1045 end milling

Robson Bruno Dutra Pereira 1 & Carlos Andrés Arango Hincapie 2 & Paulo Henrique da Silva Campos 2 & Anderson Paulo de Paiva 2 & João Roberto Ferreira 2

Received: 12 January 2016 / Accepted: 27 March 2016 © Springer-Verlag London 2016

Abstract Surface roughness is used as a product quality index and as a technical requirement of machined parts; consequently, it is important to know the relationship between process parameters and roughness outcomes. Since there are several outcomes for measuring roughness, the present work addresses the modelling and optimization of surface roughness based on principal component analysis, using the multivariate mean square error and multivariate global index methods. The paper presents a sequential methodology for multivariate surface roughness modelling and optimization. First, a full factorial design with centre points was used, taking the hardness of the machined surfaces into account as a covariate. Second, axial points were added to complete the central composite design and fit a second-order model. Principal component analysis was applied to represent the set of roughness responses and to fit a single response surface regression model in terms of the cutting data. The different non-linear programming methods were applied and compared through the global percentage error criterion, considering the outcome targets. Confirmation runs at the multivariate global index optimal point achieved original response means within the confidence interval of the optimal point and very near the fitted values.

* Robson Bruno Dutra Pereira [email protected]

1 Department of Mechanical Engineering, Federal University of São João del-Rei, 170 Frei Orlando Square, São João del-Rei, MG 36880000, Brazil

2 Institute of Industrial Engineering and Management, Federal University of Itajubá, 1303 BPS Avenue, Itajubá, MG 37500-903, Brazil

Keywords Surface roughness · Principal component analysis · Analysis of covariance · Response surface methodology · Multivariate mean square error · Multivariate global index

1 Introduction

Engineering surfaces generated by machining processes exhibit characteristic topographies which play a fundamental role in functional performance by affecting lubrication, friction, etc. Surface roughness is widely used as a product quality index and as a technical requirement for mechanical products [1, 2]. A surface obtained by machining consists of inherent irregularities left by the tool, commonly defined as surface roughness [3]. Each type of cutting tool leaves unique marks on the machined surface. The direction of the dominating surface pattern, the lay, is affected by a number of factors related to the cutting tool, such as stability, overhang, cutting geometry and tool wear; to the machinery, such as the machining environment, coolant application, machine condition, power and rigidity; and to the workpiece, such as material structure, quality, design, clamping, previous machining processes and other factors [4]. There are several theoretical models relating cutting parameters to surface roughness. According to Grzesik [3], who summarized these models for turning operations, mathematical models that can predict approximately the magnitude of the surface roughness under given cutting conditions are of practical interest. These models consider mainly the cutting tool geometry and cutting conditions such as the feed per tooth fz. In milling operations, a number of additional factors affect the surface roughness due to differences in tooling construction and process kinematics [4]. It is a consensus that feed per tooth


is the most influential cutting condition on surface roughness. However, Petropoulos et al. [5] highlight that experimental values of surface roughness are higher than the theoretical ones, probably due to the chip formation mode (built-up edge, discontinuous chip, thermal variations, shear zone expansion into the workpiece subsurface, etc.), besides chatter in the machine tool system, defects in the processed material, cutting tool wear, irregularities in the feed mechanism, tool runout and others. Experimental models show, in several cases, that other process parameters, such as the cutting speed vc and the axial and radial depths of cut (ap and ae), also present statistical significance for surface roughness. Several studies have presented experimental models to predict roughness outcomes under different cutting conditions. Ehmann [1] developed a computational model named the surface-shaping system, considering deterministic and non-deterministic cutting tool parameters such as tool runout, machine deformation and vibration, etc. Abouelatta and Ma [6] presented a model to predict the surface roughness in turning based on both cutting parameters and machine tool vibrations; the authors used the MATLAB and SPSS software without, however, detailing the methods. Benardos and Vosniakos [7] presented a model for the surface roughness Ra in face milling using a neural network and Taguchi robust design. The same authors, in another work [2], made a bibliographic review focused on surface roughness modelling in machining processes, separating the models into three approaches: models based on machining theory, experimental models and models based on designed experiments (DOE). In this last approach, works were cited that applied response surface methodology, Taguchi methods, neural networks, fuzzy logic, genetic algorithms, etc. Pontes et al. [8] reviewed surface roughness modelling through neural networks, comparing different works using a set of criteria, and assessed how the research findings were validated. Pontes et al. [9] used Taguchi's orthogonal arrays for parameter design and radial basis function (RBF) neural networks to predict the mean roughness (Ra) in turning of ABNT/AISI 52100 hardened steel; this methodology made it possible to identify network configurations with a high degree of accuracy and reduced variability in the proposed task. Lopes et al. [10] carried out a study modelling surface roughness through principal component analysis, considering the multivariate uncertainty as the weighting matrix for the principal components; the main objective was to maximize the predicted R² to improve the model's explanatory power and the optimization results. The studied process was turning of AISI 52100 hardened steel with a Wiper insert. Simunovic et al. [11] presented surface roughness modelling and optimization of a face-milled aluminium alloy using response surface methodology and the propagation of error (POE) approach, achieving good prediction results. Bhardwaj [12]

used the Box-Cox transformation to improve the response surface model of surface roughness in end milling of EN 353 steel, achieving better prediction results. Kumar et al. [13] used Taguchi methods and an artificial neural network to predict the surface roughness of electric discharge machining of a titanium alloy. Günay et al. [14] used the Taguchi method to determine the optimal surface roughness in turning of high-alloy white cast iron, considering hardness as a noise factor. Shanmughasundaram and Subramanian [15] also applied the Taguchi method to optimize the surface roughness of a turned Al-fly ash/graphite hybrid composite. Chandrasekaran et al. [16] carried out an online optimization of surface roughness in turning using a fuzzy set-based optimization strategy. Al-Zubaidi et al. [17] used the gravitational search algorithm (GSA) to minimize the surface roughness in end milling of Ti6Al4V alloy. Sahoo et al. [18] used response surface methodology and artificial neural networks to predict and optimize the roughness of AISI 1040 steel machined in a dry environment, concluding that the artificial neural network model gave better roughness predictions. Meenu and Sehgal [19] optimized the surface roughness in end milling of ferritic-pearlitic ductile iron grade 80-55-06 using response surface methodology and particle swarm optimization. Zhao et al. [20] applied response surface methodology to the modelling and optimization of the grinding and polishing process of integrally bladed rotors (IBRs); the authors also used the signal-to-noise method to find the optimal combination of parameters, and IBRs processed with the optimal parameters obtained by response surface optimization presented better surface quality. Çiçek et al. [21] applied response surface methodology and the Taguchi method to predict surface roughness in dry drilling of AISI 304 stainless steel, finding that feed rate and cutting velocity were the most significant parameters for surface roughness, respectively. Jagadish et al. [22] used fuzzy logic to predict surface roughness in abrasive water jet machining of green composites. Experiments were performed on a Taguchi (L27) orthogonal array, and an integrated expert system containing a Takagi-Sugeno-Kang fuzzy model with subtractive clustering was developed; a linear regression model generated the training data required for the fuzzy logic, and the percentage error of the predictions with respect to the experimental data corroborated the proposed methodology. Sarkheyli et al. [23] presented a methodology based on an adaptive network-based fuzzy inference system and a modified genetic algorithm (MGA) to model surface roughness in wire electrical discharge machining, proposing a new type of population as the training algorithm in the MGA to optimize the modelling parameters. The methodology was compared with artificial neural networks and an adaptive network-based fuzzy inference


system with a traditional genetic algorithm, concluding statistically that the new methodology presented improved results in terms of accuracy of the optimal solution and coverage rate. Different approaches have thus been applied to the modelling and optimization of surface roughness, but some works are based on only one surface roughness outcome, generally the mean surface roughness Ra. Nevertheless, there are different outcomes for describing the surface roughness of a part. Moreover, these outcomes should be analysed with multivariate methods, since they will certainly present a high level of correlation: when a roughness measurement is taken, all the different roughness outcomes are calculated from the same evaluated profile. The present study addresses the modelling and optimization of surface roughness based on principal component analysis (PCA) using the multivariate mean square error (MMSE) and multivariate global index (MGI) methods. Initially, a full factorial design with centre points was used, taking the hardness of the machined surfaces into account as a covariate. In the second part, axial points were added to complete the central composite design (CCD) and fit a second-order model. Principal component analysis is used to represent the set of roughness responses and to fit a single response surface regression model in terms of the cutting data. The different non-linear programming methods were applied and compared through the global percentage error criterion. Finally, confirmation runs endorse the effectiveness of the methodology. The proposed approach offers a way of summarizing different roughness outcomes in a fitted model of the PCA scores and of optimizing it, achieving a small global percentage error relative to the roughness outcome targets.

2 Sequential strategy of multivariate optimization

Machining processes are multivariate, since functional relations can be established for multiple outcomes (responses) from the same data set. According to Mays [24], the use of design of experiments (DOE) methodology on a data set consisting of correlated outcomes is very common, and the existence of correlation among the responses exerts a strong influence on the transfer functions used to represent quality characteristics. PCA is a multivariate statistical technique created by Hotelling [25], dedicated to explaining the variance-covariance structure of a data set using linear combinations of the original variables. According to some authors [26, 27], its objectives are (1) dimensionality reduction and (2) data analysis and interpretation. Although p components are necessary to reproduce the total variability of a system, generally the largest part of this variability can be represented by k principal components: there is almost the same amount of information in k principal components as in the p original variables. The general

idea of PCA is that k principal components can substitute, without loss of important information, for the p original variables. According to Rencher [27], PCA generally reveals relationships that would not be identified from the original data set, resulting in a broader interpretation of the phenomena. Johnson and Wichern [26] stress that PCA serves as an intermediate step in data analysis.

2.1 PCA

Assume that f1(x), f2(x), …, fp(x) are correlated and can be written in terms of a random vector Y^T = [Y1, Y2, …, Yp], and let Σ be the variance-covariance matrix associated with this vector, with eigenvalues λ1 ≥ λ2 ≥ … ≥ λp ≥ 0 and standardized eigenvectors e1, e2, …, ep. The standardized eigenvectors ei satisfy: (1) e_i^T e_j = 0, i ≠ j; (2) e_i^T e_i = 1, i = 1, 2, …, p; and (3) Σe_i = λ_i e_i, i = 1, 2, …, p. The ith principal component can be estimated as

$$\widehat{PC}_i = \hat{\mathbf{e}}_i^T \mathbf{Y} = \hat{e}_{1i} Y_1 + \hat{e}_{2i} Y_2 + \ldots + \hat{e}_{pi} Y_p, \quad i = 1, 2, \ldots, p,$$

and is obtained by maximizing the variance of this linear combination. Generally, the parameters Σ and ρ are unknown; the sample variance-covariance matrix S then replaces Σ and is defined as

$$\mathbf{S} = \begin{bmatrix} s_{11} & s_{12} & \cdots & s_{1p} \\ s_{12} & s_{22} & \cdots & s_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ s_{1p} & s_{2p} & \cdots & s_{pp} \end{bmatrix}, \quad \text{with } s_{ii} = \frac{1}{n}\sum_{j=1}^{n}\left(y_{ij}-\bar{y}_i\right)^2 \tag{1}$$

which satisfies $\sum_{i=1}^{p} s_{ii} = \hat{\lambda}_1 + \hat{\lambda}_2 + \ldots + \hat{\lambda}_p$.

The elements of the sample correlation matrix, which replaces ρ, can be obtained as in Eq. (2):

$$r_{\hat{y}_i, x_k} = \frac{Cov\left(x_k, \hat{y}_i\right)}{\sqrt{Var\left(\hat{y}_i\right) \cdot Var\left(x_k\right)}} = \frac{\hat{e}_{ki}\sqrt{\hat{\lambda}_i}}{\sqrt{s_{kk}}}, \quad i, k = 1, 2, \ldots, p \tag{2}$$

It is convenient to write the linear combinations in terms of principal component scores, defined by Johnson and Wichern [26] as in Eq. (3):

$$\mathbf{PC} = \mathbf{Z}\,\mathbf{e} = \begin{bmatrix} \dfrac{x_{11}-\bar{x}_1}{\sqrt{s_{11}}} & \dfrac{x_{21}-\bar{x}_2}{\sqrt{s_{22}}} & \cdots & \dfrac{x_{p1}-\bar{x}_p}{\sqrt{s_{pp}}} \\ \dfrac{x_{12}-\bar{x}_1}{\sqrt{s_{11}}} & \dfrac{x_{22}-\bar{x}_2}{\sqrt{s_{22}}} & \cdots & \dfrac{x_{p2}-\bar{x}_p}{\sqrt{s_{pp}}} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{x_{1n}-\bar{x}_1}{\sqrt{s_{11}}} & \dfrac{x_{2n}-\bar{x}_2}{\sqrt{s_{22}}} & \cdots & \dfrac{x_{pn}-\bar{x}_p}{\sqrt{s_{pp}}} \end{bmatrix} \cdot \begin{bmatrix} e_{11} & e_{12} & \cdots & e_{1p} \\ e_{21} & e_{22} & \cdots & e_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ e_{p1} & e_{p2} & \cdots & e_{pp} \end{bmatrix} \tag{3}$$


Following these statements, the first principal component accounts for the greatest share of the variability in the data, followed by the second component and so on. Normally, the first k < p components carry the leading information, reducing the problem dimension. A variety of stopping rules exist to estimate the adequate number of non-trivial PCA axes (the PC scores) to be adopted to represent the data set. Kaiser's criterion states that only the principal components whose eigenvalues are greater than one should be kept to represent the original data set [27, 28]. Moreover, the cumulative explained variance should be greater than 80 %.
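The extraction of PC scores and the two stopping rules quoted above can be sketched numerically. The snippet below is an illustrative sketch and not the authors' code; the function name `select_components` and the use of the correlation matrix are our assumptions for the illustration.

```python
import numpy as np

def select_components(Y):
    """Y: (n runs, p responses). Returns the eigenvalues of the sample
    correlation matrix, the PC scores and the number k of retained
    components."""
    # Standardize each response column (as in the score matrix of Eq. 3)
    Z = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=1)
    R = np.corrcoef(Y, rowvar=False)          # sample correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]         # sort eigenpairs descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    scores = Z @ eigvecs                      # principal component scores
    cum_var = np.cumsum(eigvals) / eigvals.sum()
    # Kaiser's criterion (eigenvalue > 1) and the 80 % cumulative rule
    k = max(int((eigvals > 1.0).sum()),
            int(np.searchsorted(cum_var, 0.80)) + 1)
    return eigvals, scores, k
```

For a set of highly correlated roughness responses, as in Table 3, the first eigenvalue dominates and a single component is retained.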

2.2 Analysis of covariance, ANCOVA

According to Montgomery [29], a nuisance factor is a factor that probably has an effect on the outcomes but whose effect the experimenter is not interested in. Nuisance factors can be treated with different techniques according to their nature. If the factor is unknown and uncontrollable, randomization can be used to guarantee that the error distribution is independent. If it is known but uncontrollable, the analysis of covariance can compensate for its effect. Finally, if it is known and controllable, blocking can eliminate its influence on the considered outcomes. A variable x that is correlated with the outcome of interest y and is uncontrollable but measurable is called a covariate or concomitant variable. The analysis of covariance (ANCOVA) adjusts the observed response variable for the effect of the concomitant one; if this adjustment is not performed, the concomitant variable can inflate the mean square error (MSE) and make true differences in the response due to treatments harder to detect. The procedure is a combination of analysis of variance and regression analysis [29]. Assuming a linear correlation between the outcome y and the covariate x, the appropriate statistical model is given in Eq. (4):

$$y_{ij} = \mu + \tau_i + \beta\left(x_{ij} - \bar{x}_{..}\right) + \varepsilon_{ij}, \quad i = 1, 2, \ldots, a; \; j = 1, 2, \ldots, n \tag{4}$$

where y_ij is the jth observation on the response variable taken under the ith treatment or level of the single factor, x_ij is the measurement made on the covariate corresponding to y_ij, x̄.. is the mean of the x_ij values, μ is the overall mean, τ_i is the effect of the ith treatment, β is the linear regression coefficient of y_ij on x_ij and ε_ij is the random error component. It is assumed that the errors are NID(0, σ²), that β ≠ 0, that the relationship between y_ij and x_ij is linear, that the regression coefficients for each treatment are identical, that the treatment effects sum to zero ($\sum_{i=1}^{a}\tau_i = 0$) and that the covariate x_ij is not affected by the treatments. The notation in Eqs. (5) to (13) assists in describing the analysis:

$$S_{yy} = \sum_{i=1}^{a}\sum_{j=1}^{n}\left(y_{ij}-\bar{y}_{..}\right)^2 = \sum_{i=1}^{a}\sum_{j=1}^{n}y_{ij}^2 - \frac{y_{..}^2}{an} \tag{5}$$

$$S_{xx} = \sum_{i=1}^{a}\sum_{j=1}^{n}\left(x_{ij}-\bar{x}_{..}\right)^2 = \sum_{i=1}^{a}\sum_{j=1}^{n}x_{ij}^2 - \frac{x_{..}^2}{an} \tag{6}$$

$$S_{xy} = \sum_{i=1}^{a}\sum_{j=1}^{n}\left(x_{ij}-\bar{x}_{..}\right)\left(y_{ij}-\bar{y}_{..}\right) = \sum_{i=1}^{a}\sum_{j=1}^{n}x_{ij}y_{ij} - \frac{\left(x_{..}\right)\left(y_{..}\right)}{an} \tag{7}$$

$$T_{yy} = n\sum_{i=1}^{a}\left(\bar{y}_{i.}-\bar{y}_{..}\right)^2 = \frac{1}{n}\sum_{i=1}^{a}y_{i.}^2 - \frac{y_{..}^2}{an} \tag{8}$$

$$T_{xx} = n\sum_{i=1}^{a}\left(\bar{x}_{i.}-\bar{x}_{..}\right)^2 = \frac{1}{n}\sum_{i=1}^{a}x_{i.}^2 - \frac{x_{..}^2}{an} \tag{9}$$

$$T_{xy} = n\sum_{i=1}^{a}\left(\bar{x}_{i.}-\bar{x}_{..}\right)\left(\bar{y}_{i.}-\bar{y}_{..}\right) = \frac{1}{n}\sum_{i=1}^{a}x_{i.}y_{i.} - \frac{\left(x_{..}\right)\left(y_{..}\right)}{an} \tag{10}$$

$$E_{yy} = \sum_{i=1}^{a}\sum_{j=1}^{n}\left(y_{ij}-\bar{y}_{i.}\right)^2 = S_{yy} - T_{yy} \tag{11}$$

$$E_{xx} = \sum_{i=1}^{a}\sum_{j=1}^{n}\left(x_{ij}-\bar{x}_{i.}\right)^2 = S_{xx} - T_{xx} \tag{12}$$

$$E_{xy} = \sum_{i=1}^{a}\sum_{j=1}^{n}\left(x_{ij}-\bar{x}_{i.}\right)\left(y_{ij}-\bar{y}_{i.}\right) = S_{xy} - T_{xy} \tag{13}$$

The symbols S, T and E denote the sums of squares and cross-products for total, treatments and error, respectively, with S = T + E. The sums of squares for x and y must be non-negative; the sums of cross-products (xy), however, may be negative. The ANCOVA adjusts the response variable for the effect of the covariate. Considering the full model of Eq. (4), the least squares estimators of μ, τ_i and β are $\hat{\mu} = \bar{y}_{..}$, $\hat{\tau}_i = \bar{y}_{i.} - \bar{y}_{..} - \hat{\beta}\left(\bar{x}_{i.} - \bar{x}_{..}\right)$ and $\hat{\beta} = E_{xy}/E_{xx}$. The error sum of squares is $SS_E = E_{yy} - \left(E_{xy}\right)^2/E_{xx}$, with a(n − 1) − 1 degrees of freedom. Equation (14) estimates the experimental error variance:

$$MS_E = \frac{SS_E}{a\left(n-1\right)-1} \tag{14}$$

If there is no treatment effect, the model of Eq. (4) can be replaced by Eq. (15), where the least squares estimators of μ and β are $\hat{\mu} = \bar{y}_{..}$ and $\hat{\beta} = S_{xy}/S_{xx}$:

$$y_{ij} = \mu + \beta\left(x_{ij} - \bar{x}_{..}\right) + \varepsilon_{ij} \tag{15}$$

The error sum of squares in this case is given in Eq. (16), with an − 2 degrees of freedom:

$$SS_E^{\prime} = S_{yy} - \left(S_{xy}\right)^2/S_{xx} \tag{16}$$

SS′_E − SS_E provides a sum of squares with a − 1 degrees of freedom due to the τ_i, for testing the hypothesis of no treatment effects. Equation (17) tests the hypothesis H0: τ_i = 0:

$$F_0 = \frac{\left(SS_E^{\prime} - SS_E\right)/\left(a-1\right)}{MS_E} \tag{17}$$

The ANCOVA can also be developed using the general regression significance test, estimating the parameters by the least squares method. Factorial designs can similarly include concomitant variables. In the 2^k case, assuming that the covariate affects the response identically for all treatment combinations, the only difference in the procedure is the treatment sum of squares (T_yy). For a 2² factorial with n replicates, the treatment sum of squares for factors A, B and AB is given in Eq. (18) and must be partitioned into the individual effect components SS_A, SS_B and SS_AB. For more details, see Montgomery [29].

$$T_{yy} = \frac{1}{n}\sum_{i=1}^{2}\sum_{j=1}^{2}y_{ij.}^2 - \frac{y_{...}^2}{\left(2\right)\left(2\right)n} \tag{18}$$
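The single-factor ANCOVA quantities of Eqs. (5) to (17) can be computed directly. The following is a minimal numerical sketch, not the authors' code; the function name `ancova_f_test` and the toy data in the usage are ours.

```python
import numpy as np

def ancova_f_test(y, x):
    """y, x: (a, n) arrays of responses and covariate values for a
    treatments with n replicates. Returns (F0, MSE) of Eqs. (17), (14)."""
    a, n = y.shape
    Syy = np.sum((y - y.mean())**2)                   # Eq. (5)
    Sxx = np.sum((x - x.mean())**2)                   # Eq. (6)
    Sxy = np.sum((x - x.mean()) * (y - y.mean()))     # Eq. (7)
    Tyy = n * np.sum((y.mean(axis=1) - y.mean())**2)  # Eq. (8)
    Txx = n * np.sum((x.mean(axis=1) - x.mean())**2)  # Eq. (9)
    Txy = n * np.sum((x.mean(axis=1) - x.mean())
                     * (y.mean(axis=1) - y.mean()))   # Eq. (10)
    Eyy, Exx, Exy = Syy - Tyy, Sxx - Txx, Sxy - Txy   # Eqs. (11)-(13)
    SSE = Eyy - Exy**2 / Exx                          # full model
    MSE = SSE / (a * (n - 1) - 1)                     # Eq. (14)
    SSE_reduced = Syy - Sxy**2 / Sxx                  # Eq. (16)
    F0 = ((SSE_reduced - SSE) / (a - 1)) / MSE        # Eq. (17)
    return F0, MSE
```

With a strong treatment effect and a mild covariate, F0 far exceeds the critical F value, as expected.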

2.3 Blocking

In the machining context, if the workpiece dimensions are not large enough to run more than a half fraction of all factorial combinations, then two workpieces must be used to machine all the different combinations and treatments. To safeguard against possible metallurgical differences between the two workpieces, they should be blocked. In this case, the block effect is confounded with the interaction between the k control factors, in other words, with the highest-order interaction. This scheme is used for blocking 2^k factorial designs. Montgomery [29] stresses that the centre points in these designs should be allocated equally among the blocks.

Fig. 2 Roughness measurement setup

Considering a 2^k factorial design, SS_blocks can be computed as in Eq. (19), where B_i is the total of the ith block and each block contains 2^k n/n_blocks observations; the alias structure in this case includes the block effect confounded with the k-order interaction:

$$SS_{blocks} = \sum_{i=1}^{n_{blocks}} \frac{B_i^2}{2^k n / n_{blocks}} - \frac{y_{...}^2}{2^k n} \tag{19}$$
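As a small numerical sketch of Eq. (19) (illustrative only; the function name and data are ours): each block total is squared, divided by the block size, and the usual correction term is subtracted.

```python
import numpy as np

def ss_blocks(y, block):
    """y: 1-D array of all 2^k * n observations; block: same-length
    array of block labels. Returns the block sum of squares, Eq. (19)."""
    N = y.size
    correction = y.sum()**2 / N          # y...^2 / (2^k n)
    ss = sum(y[block == b].sum()**2 / np.sum(block == b)
             for b in np.unique(block)) - correction
    return ss
```

For y = [1, 2, 3, 4] split into two blocks of two, the block totals are 3 and 7, giving SS_blocks = 9/2 + 49/2 − 100/4 = 4.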

2.4 Response surface methodology

Response surface methodology (RSM) is a collection of mathematical and statistical tools applied to the modelling and optimization of problems whose responses of interest are influenced by many variables and whose relationship between dependent and independent variables is unknown [29]. A reasonable approximation of the real relationship between a response (y) and the set of independent variables x can be obtained using a first-order polynomial in some region of interest. When curvature is present in the region, however, the approximating function must be a polynomial of higher order, such as the second-order model of Eq. (20):

$$Y = \beta_0 + \sum_{i=1}^{k}\beta_i x_i + \sum_{i=1}^{k}\beta_{ii} x_i^2 + \mathop{\sum\sum}_{i<j}\beta_{ij} x_i x_j + \varepsilon = b_0 + \left[\nabla f\left(\mathbf{x}\right)^T + \frac{1}{2}\,\mathbf{x}^T \nabla^2 f\left(\mathbf{x}\right)\right]\mathbf{x} \tag{20}$$

In Eq. (20), βi, βii and βij are the polynomial coefficients, k is the number of factors and ε is the error term; x is the vector of parameters.

Fig. 1 (a) Experimental setup; (b) tool detail

Table 1 Control factors

Control factor  Unit      −2     −1    0      +1    +2
fz              mm/tooth  0.05   0.10  0.15   0.20  0.25
ap              mm        0.375  0.75  1.125  1.50  1.875
vc              m/min     275    300   325    350   375
ae              mm        13.5   15    16.5   18    19.5

Table 2 Factorial design and results

Std. order  Block  fz   ap     vc   ae  Ra     Ry     Rz     Rq     Rt      Hardness  PC1
1           1      0.2  0.750  300  15  1.980  9.710  8.863  2.337  10.043  86        3.797
2           1      0.1  1.500  300  15  1.170  5.007  4.763  1.357  5.143   90        −2.735
3           1      0.1  0.750  350  15  1.147  4.750  4.440  1.307  4.757   85        −3.125
4           1      0.2  1.500  350  15  1.687  8.047  7.387  1.997  8.300   90        1.473
5           1      0.1  0.750  300  18  1.043  5.777  4.777  1.210  5.847   85        −2.657
6           1      0.2  1.500  300  18  1.820  9.133  8.183  2.190  9.270   86        2.767
7           1      0.2  0.750  350  18  1.987  8.860  8.320  2.353  8.877   85        3.054
8           1      0.1  1.500  350  18  1.183  5.127  4.790  1.360  5.350   87        −2.607
9           1      0.2  1.125  325  17  1.570  6.903  6.480  1.820  7.097   84        0.102
10          1      0.2  1.125  325  17  1.630  7.223  6.813  1.903  7.387   85        0.578
11          1      0.2  1.125  325  17  1.663  7.347  6.897  1.940  7.433   88        0.749
12          1      0.2  1.125  325  17  1.573  7.040  6.737  1.847  7.273   87        0.315
13          2      0.1  0.750  300  15  1.177  5.527  5.163  1.367  5.627   85        −2.286
14          2      0.2  1.500  300  15  1.483  7.890  7.000  1.820  7.943   89        0.686
15          2      0.2  0.750  350  15  2.077  9.267  8.743  2.433  9.483   84        3.719
16          2      0.1  1.500  350  15  1.097  5.057  4.623  1.250  5.163   86        −2.995
17          2      0.2  0.750  300  18  1.673  8.030  7.650  2.057  8.173   87        1.570
18          2      0.1  1.500  300  18  1.163  5.223  4.947  1.343  5.383   86        −2.566
19          2      0.1  0.750  350  18  1.193  5.330  5.130  1.370  5.493   88        −2.366
20          2      0.2  1.500  350  18  1.740  8.517  7.480  2.067  8.737   87        1.932
21          2      0.2  1.125  325  17  1.593  7.250  6.590  1.830  7.390   86        0.372
22          2      0.2  1.125  325  17  1.557  6.757  6.290  1.790  7.017   87        −0.083
23          2      0.2  1.125  325  17  1.623  6.733  6.513  1.857  7.023   83        0.163
24          2      0.2  1.125  325  17  1.613  6.810  6.473  1.850  7.007   83        0.145

In Eq. (20), b0 is the regression constant term, ∇f(x)^T is the gradient of the objective function, corresponding to the first-order regression coefficients, and ∇²f(x) is the Hessian matrix, formed by the quadratic and interaction terms of the estimated model of Y. The ordinary least squares (OLS) estimator is generally used to obtain the coefficients β of each term in the experimental model from the factor matrix X which, in matrix form, can be written as in Eq. (21). Details about error estimation, the curvature test and goodness-of-fit statistics are provided in the Appendix.

$$\hat{\boldsymbol{\beta}} = \left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\mathbf{Y} \tag{21}$$

There are several difficulties when dealing with multivariate responses. Modelling each response variable independently takes no account of the relationships or correlations among the variables. According to Montgomery [29], special care is necessary in analysing multi-response data in order to avoid misleading interpretations. The basic problem is associated with fitting multi-response models while ignoring (i) dependence among the errors, (ii) linear dependencies among the expected values of the responses and (iii) linear dependencies in the original data.

To overcome these difficulties, a hybrid strategy based on multivariate statistics can be employed, using PCA to summarize and reduce the dimensionality of the data. Since the PCA factorizes the multivariate data into a number of independent factors which take into account the variances and correlations among the original variables, a natural formulation of the multi-response problem is to change the

Table 3 Correlation between responses for the factorial design (entries are Pearson correlations with p values in parentheses)

      Ra              Ry              Rz              Rq
Ry    0.945 (0.000)
Rz    0.975 (0.000)   0.988 (0.000)
Rq    0.996 (0.000)   0.963 (0.000)   0.988 (0.000)
Rt    0.948 (0.000)   0.999 (0.000)   0.989 (0.000)   0.965 (0.000)


original response variables by a principal component score equation, modelled through the OLS estimator as shown in Eq. (22):

$$PC_1 = \beta_0 + \sum_{i=1}^{k}\beta_i x_i + \sum_{i=1}^{k}\beta_{ii} x_i^2 + \mathop{\sum\sum}_{i<j}\beta_{ij} x_i x_j \tag{22}$$
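The OLS fit of Eqs. (21) and (22) can be sketched as below. This is an illustrative sketch with hypothetical helper names (`quadratic_design`, `ols`) and toy data; `numpy.linalg.lstsq` is used instead of forming (XᵀX)⁻¹ explicitly because it is numerically safer while giving the same estimator.

```python
import numpy as np
from itertools import combinations

def quadratic_design(X):
    """Build the second-order design matrix of Eq. (22): intercept,
    linear, squared and two-factor interaction columns."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i]**2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

def ols(X, y):
    """OLS estimator of Eq. (21), beta = (X^T X)^{-1} X^T y."""
    D = quadratic_design(X)
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return beta
```

Fitting the PC1 scores of Table 2 against the coded cutting conditions would use exactly this shape of design matrix.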

2.5 Multivariate mean square error optimization

Multivariate mean square error (MMSE) is an improvement of the MSE optimization developed by Vining and Myers [30] and applied in an optimization context by Box et al. [31], among others. In this approach, the multivariate dual response surface is obtained by replacing the estimated mean ŷ with an estimated principal component score regression PC_i and the variance σ̂² with the respective eigenvalue λ_i. Taking ζ_PCi as the target for the ith principal component, a multivariate mean square error formulation can be defined as in Eq. (23):

$$MMSE_i = \left(PC_i - \zeta_{PC_i}\right)^2 + \lambda_i \tag{23}$$

Using a response surface design, PC_i is the fitted quadratic model obtained as described in Eqs. (21) and (22). ζ_PCi is the target value of the ith principal component, which must keep a straightforward relation with the targets of the original data set. To establish this relationship, it is possible to use the transformation of Eq. (24), as proposed by [22] to obtain an alternative multivariate capability index. The general form of ζ_PCi can therefore be written as:

$$\zeta_{PC_i} = \mathbf{e}_i^T\left[Z\left(Y_p \mid \zeta_{Y_p}\right)\right] = \sum_{j=1}^{q} e_{ij}\left[Z\left(Y_p \mid \zeta_{Y_p}\right)\right], \quad i = 1, 2, \ldots, p; \; j = 1, 2, \ldots, q \tag{24}$$

Fig. 3 Analysis of covariate and block presence on (a) PRESS and (b) R²adj


In most part of manufacturing processes, one or two principal component equations are enough to represent the original system of p objective functions since the responses have some degree of correlation. Therefore, considering the optimization routine formed by the MMSE functions whose eigenvalues are equal or greater than the unity, it is possible to write: "

(

#ð 1k Þ

k

Minimize MMSET ¼ ∏ ðMMSEi jλi ≥ 1Þ

¼

i¼1

k

∏ i¼1

) 1 h i ðk Þ 2 PC i −ζ PCi þ λi jλi ≥ 1

i ¼ 1; 2; …; k;

Subject to : xT x ≤ρ2

k ≤p

ð25Þ where k is the number of MMSET functions considered according to the significant principal components, and the restriction is related to the experimental region. Although k may be mathematically equal to p, this equality rarely occurs whereas the use of PCA generally reduces the problem dimension according to the strength of variance-covariance structure among the responses. Eventually, it is possible to consider principal component equations with λ < 1 since the Lawley’s multivariate hypothesis test reveals its adequacy. The multiobjective optimum can be generally found locating the stationary point of Eq. (24), or if the optimization direction is different of the convexity direction of the fitted

Table 4

ANOVA for PC1

Source

DF SS

Blocks

1

Main effects fz ap vc ae 2-Way Interactions fzap apae 3-Way Interactions fzapae apvcae Curvature Residual error Lack of fit

4 1 1 1 1 2 1 1 2 1 1 1 13 7

Pure error Total

6 23

Adj SS

0.488 0.488

Adj MS MS

P

Fig. 4 Pareto chart of the standardized effects

model, the optimum is forced to lie within the experimental region. Other constraints may be added according to experimenter necessities. 2.6 Multivariate global index The multivariate global index (MGI) considers the eigenvalues of the correlation matrix as respective weights of the most representative PC scores. It can be mathematically expressed by the sum of products of significant components scores multiplied by its respective eigenvalues, considering additionally that the correlations among original responses, principal components or MGI are all positives. In this way, it is possible to achieve an optimization direction compatible to the original direction of each response. Considering a response surface modelled for PCI, a constrained non-linear programming strategy can be applied, as described in Eq. (26).

Minimize  MGI = Σ_{i=1}^{m} λ_i (PC_i)
subject to:  xᵀx ≤ ρ²                                              (26)

Table 4  ANOVA for PC1. Model summary: s = 0.311057, PRESS = 8.15816, R² = 98.88 %, R²pred = 92.76 %, R²adj = 98.03 %

Table 5  Axial points to complete the CCD

Std. order   Control factors                    Responses
             fz      ap      vc     ae          Ra      Ry      Rz      Rq      Rt
25           0.05    1.125   325    16.5        0.373   2.677   2.000   0.453   2.727
26           0.25    1.125   325    16.5        1.863   9.123   8.713   2.260   9.277
27           0.15    0.375   325    16.5        1.540   6.690   5.990   1.753   6.843
28           0.15    1.875   325    16.5        1.160   6.127   5.467   1.393   6.200
29           0.15    1.125   275    16.5        1.547   7.177   6.540   1.797   7.340
30           0.15    1.125   375    16.5        1.610   7.247   6.827   1.867   7.523
31           0.15    1.125   325    13.5        1.557   6.867   6.520   1.810   7.073
32           0.15    1.125   325    19.5        1.597   7.093   6.720   1.850   7.490
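The constrained formulation of Eq. (26) can be sketched with a general-purpose solver. The second-order PC1 coefficients and the spherical radius ρ = 2 below are illustrative assumptions, not the paper's fitted model:

```python
# Sketch of the MGI formulation of Eq. (26) with m = 1 significant component.
# The PC1 model coefficients below are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

lam1 = 4.911          # eigenvalue (weight) of PC1
rho = 2.0             # assumed radius of the spherical experimental region

def pc1_hat(x):
    """Hypothetical second-order model for the PC1 score (coded units)."""
    b0 = 0.4
    b_lin = np.array([2.35, -0.34, 0.04, 0.05])
    b_quad = np.array([-0.47, -0.24, 0.10, 0.07])
    return b0 + b_lin @ x + b_quad @ x**2

def mgi(x):
    # MGI = sum_i lambda_i * PC_i(x); here only PC1 is significant
    return lam1 * pc1_hat(x)

# inequality constraint rho^2 - x'x >= 0 keeps the optimum in the region
ball = {"type": "ineq", "fun": lambda x: rho**2 - x @ x}
res = minimize(mgi, x0=np.zeros(4), method="SLSQP", constraints=[ball])
print(res.x, res.fun)
```

The optimum lands on the boundary of the region, driven mainly by the dominant linear term of the first factor.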

Int J Adv Manuf Technol

2.7 Global percentage error function

The global percentage error (GPE) function computes the sum of the percentage errors of the Pareto-optimal solutions obtained with the multivariate optimization, relative to the targets of the univariate responses. It can be used to compare the different PCA-based optimization methods. Equation (27) presents the GPE calculation, where y*_i are the Pareto-optimal solutions obtained with the PCA-based optimization method and T_i are the outcome targets obtained through univariate optimization of each original function [32].

GPE = Σ_{i=1}^{m} | y*_i / T_i − 1 |                               (27)
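With the rounded Pareto-optimal values and targets reported later for the MGI method (Table 12), Eq. (27) is a one-liner:

```python
# GPE of Eq. (27): sum of |y*_i/T_i - 1| over the m responses.
# Values are the MGI Pareto-optimal responses and targets of Table 12.
y_star = [0.474, 2.723, 2.327, 0.554, 2.738]
targets = [0.471, 2.704, 2.325, 0.550, 2.718]

def gpe(y_star, targets):
    return sum(abs(y / t - 1.0) for y, t in zip(y_star, targets))

print(round(gpe(y_star, targets), 4))  # ~0.029, matching the reported 0.0292 up to rounding
```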

2.8 Confidence interval for optimal results

Given an optimal result, a confidence interval for the mean response can be constructed: the range in which the estimated mean response, for a given set of predictor values, is expected to fall with the desired confidence level. The lower and upper bounds of the interval are calculated with Eq. (28), where ŷ₀ is the predicted value at the optimal vector x₀, t(1 − α/2; n − p) is the t value for confidence level 1 − α/2 and the MSE degrees of freedom n − p, σ̂² is estimated by MSE, and σ̂²(XᵀX)⁻¹ is the covariance matrix of the coefficients.

CI[100(1 − α) %] = ŷ₀ ± t(1 − α/2; n − p) · [σ̂² x₀ᵀ(XᵀX)⁻¹x₀]^{1/2}      (28)
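A minimal sketch of Eq. (28) on a toy least-squares fit; the design matrix, responses and prediction point below are illustrative, not the paper's data:

```python
# Confidence interval of Eq. (28) for the mean response at a point x0.
import numpy as np
from scipy import stats

X = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # model matrix
y = np.array([1.1, 1.9, 3.2, 3.9])                               # toy responses
n, p = X.shape

beta = np.linalg.solve(X.T @ X, X.T @ y)      # least-squares coefficients
resid = y - X @ beta
mse = resid @ resid / (n - p)                 # sigma^2 estimate (MSE)

x0 = np.array([1.0, 0.5])                     # prediction point
y0_hat = x0 @ beta
se_fit = np.sqrt(mse * x0 @ np.linalg.solve(X.T @ X, x0))
t_val = stats.t.ppf(1 - 0.05 / 2, df=n - p)   # t(1 - alpha/2; n - p)
ci = (y0_hat - t_val * se_fit, y0_hat + t_val * se_fit)
print(y0_hat, ci)
```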

3 Experimental procedure

Finishing end milling tests were performed on a FADAL CNC vertical machining centre, model VMC 15, with a maximum spindle speed of 7500 rpm and 15 kW of power. Two workpieces of AISI 1045 steel with square sections of 100 × 100 mm and lengths of 300 mm were machined. The tool was a positive end mill, code R390-025A25-11M, with a 25-mm diameter and entering angle χr = 90°, carrying three inserts, code R390-11T308M-PM GC 1025, from Sandvik Coromant. Figure 1 shows the experimental setup. The cutting direction was down milling. The roughness parameters Ra, Ry, Rz, Rq and Rt, in μm, were measured with a Mitutoyo Surftest SJ-201 using a cutoff length of 0.25 mm. Three measurements were made for each cutting condition, and the averages were used in the analysis. The roughness measurement setup is presented in Fig. 2. The control factors and their levels are summarized in Table 1. A central composite design (CCD) was adopted.

Table 6  Correlation between responses for CCD (Pearson correlations; all p values = 0.000)

        Ra      Ry      Rz      Rq
Ry    0.950
Rz    0.974   0.989
Rq    0.996   0.968   0.988
Rt    0.954   0.999   0.990   0.970

The experiments were conducted sequentially. In the first part, a complete 2⁴ factorial design was run without replication at the corner points and with replication at the centre points (nc = 4 per block). Replicated centre points in 2^k designs provide an independent estimate of the experimental error and allow testing for curvature in the experimental region. After curvature was confirmed, the 2k = 2 × 4 = 8 axial points were run to complete the CCD, allowing a full quadratic model to be fitted.

In the factorial part of the experiments, to account for possible material discontinuity and anisotropy, the two bars used in the tests were treated as blocks (b = 2). Therefore, considering the factorial points, centre points and blocks, 24 experiments were carried out (nc × b + 2^k = 4 × 2 + 16 = 24). The hardness of the machined surface of each test was measured with a Brinell hardness tester and considered as a covariate, in order to remove its effect on surface roughness.

Since strong correlations between the responses were detected, PCA was used to express the roughness parameters as linear combinations, i.e. principal component scores, which were screened using Kaiser's criterion. After fitting a linear model for PC1 and confirming its lack of fit, the data from the axial points were used to fit a second-order model. This model was also based on PCA, since the outcomes of the CCD were again strongly correlated. After obtaining the second-order model in terms of the PC scores, optimization was carried out with the MMSET and MGI methods, which were compared using the GPE criterion. Finally, confirmation runs were conducted at the optimal point to endorse the methodology and the accuracy of the proposed models.
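The sequential assembly of the design described above can be sketched as follows; the axial distance α = 2 and the coded levels are assumptions consistent with a rotatable 2⁴ CCD:

```python
# Sketch of the sequential CCD: 2^4 factorial corners plus centre points
# (4 per block, 2 blocks), later augmented with 2k axial points at alpha = 2.
from itertools import product
import numpy as np

k, n_center, alpha = 4, 8, 2.0          # 8 centre runs = 4 per block x 2 blocks

factorial = np.array(list(product([-1.0, 1.0], repeat=k)))   # 16 corner runs
centers = np.zeros((n_center, k))                            # error/curvature check
axial = np.vstack([sign * alpha * np.eye(k)[i]               # 2k = 8 axial runs
                   for i in range(k) for sign in (-1.0, 1.0)])

ccd = np.vstack([factorial, centers, axial])                 # full design, coded units
print(ccd.shape)  # → (32, 4)
```

The 32 coded rows match the 32 standard-order runs of Tables 2, 5 and 8.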

Table 7  Eigen analysis of the correlation matrix

              PC1     PC2     PC3     PC4     PC5
Eigenvalue   4.911   0.079   0.007   0.002   0.001
Proportion   0.982   0.016   0.001   0.000   0.000
Cumulative   0.982   0.998   1.000   1.000   1.000
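The eigen analysis of Table 7 can be reproduced, up to rounding, directly from the correlation matrix of Table 6:

```python
# Eigen analysis of the Table 6 correlation matrix and Kaiser screening:
# only PC1 should have an eigenvalue greater than 1.
import numpy as np

R = np.array([
    [1.000, 0.950, 0.974, 0.996, 0.954],   # Ra
    [0.950, 1.000, 0.989, 0.968, 0.999],   # Ry
    [0.974, 0.989, 1.000, 0.988, 0.990],   # Rz
    [0.996, 0.968, 0.988, 1.000, 0.970],   # Rq
    [0.954, 0.999, 0.990, 0.970, 1.000],   # Rt
])

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending eigenvalues
proportion = eigvals / eigvals.sum()             # explained variability
kaiser = eigvals > 1.0                           # Kaiser criterion
print(eigvals.round(3), proportion.round(3), kaiser)
```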

Table 8  PC1 scores for CCD

Std. order   PC1      Std. order   PC1      Std. order   PC1
 1          −1.964    12            2.715   23           0.202
 2           3.666    13           −2.042   24           0.538
 3          −2.387    14            2.959   25           0.235
 4           0.799    15           −2.266   26           0.675
 5          −2.750    16            1.942   27           0.831
 6           3.575    17           −6.565   28           0.434
 7          −2.619    18            3.005   29           0.490
 8           1.513    19           −0.155   30           0.064
 9          −2.288    20           −1.533   31           0.284
10           1.599    21            0.344   32           0.269
11          −2.226    22            0.659

All machining tests and measurements of roughness and hardness were conducted in random order. All statistical analyses were conducted in Minitab 16 with a significance level of α = 0.05. All non-linear programming problems were solved with the generalized reduced gradient (GRG) algorithm of the MS Excel® Solver®.

4 Results and discussion

4.1 Part 1: factorial design with covariate using PCA scores

Table 2 presents the factorial design corresponding to the first part of the experiments, the experimental results for Ra, Ry, Rz, Rq and Rt, the covariate measurements and the PC1 scores. The correlations between the responses are given in Table 3; the strong correlations justify the application of principal component analysis.

Applying PCA to describe the correlated outcomes as linear combinations, the eigenvalues of the principal components, which are equal to the estimated variances of the PC scores, are respectively λ̂₁ = 4.9020, λ̂₂ = 0.0892, λ̂₃ = 0.0062, λ̂₄ = 0.0019 and λ̂₅ = 0.0007. The total variability explained by the first principal component is λ̂₁ / Σ_{i=1}^{p} λ̂ᵢ = 98.00 %. This means that most of the information about the roughness measures is contained in the PC1 scores; therefore, PC1 was used in the analysis, representing the five roughness outcomes.

To compare different linear models for PC1 in terms of the covariate, blocking and removal of model terms, a sensitivity analysis was performed. The models were first compared in terms of normality; since the residuals of the complete models for PC1, with or without covariate and blocking, were not normal, the reduced models were compared in terms of error, adjustment and presence of curvature. Figure 3 confirms that blocking improved the model adjustment, measured by the adjusted coefficient of multiple determination R²adj, and decreased the prediction error sum of squares (PRESS). Analysing the main effects plots, the linear model that best describes the data is the one with blocks and without the covariate. Based on the sensitivity analysis for the covariate hardness, the ANCOVA was replaced by an ANOVA. Table 4 presents the ANOVA for PC1. Among the main effects, only fz and ap were significant, considering α = 0.05.

Table 9  Response surface regression for PC1

Term       Coef.    SE Coef.     T        P
Constant   0.410    0.182       2.252    0.038
fz         2.352    0.105      22.379    0.000
ap        −0.335    0.105      −3.188    0.005
vc         0.043    0.105       0.407    0.689
ae         0.051    0.105       0.489    0.631
fz²       −0.472    0.095      −4.982    0.000
ap²       −0.238    0.095      −2.514    0.022
vc²        0.098    0.095       1.036    0.315
ae²        0.065    0.095       0.690    0.500
fz·ap     −0.274    0.129      −2.125    0.049
fz·vc      0.126    0.129       0.982    0.340
fz·ae     −0.077    0.129      −0.600    0.557
ap·vc     −0.066    0.129      −0.515    0.613
ap·ae      0.322    0.129       2.505    0.023
vc·ae      0.074    0.129       0.577    0.572

s = 0.514897, PRESS = 23.9966, R² = 97.04 %, R²pred = 84.24 %, R²adj = 94.60 %

Fig. 5 Surface response for PC1 in terms of fz and ap

Table 10  Response surface coefficients for Ra, Ry, Rz, Rq and Rt

Coefficient     Ra        Ry        Rz        Rq        Rt
β0             1.603     7.008     6.599     1.855     7.203
β1             0.344     1.690     1.601     0.429     1.715
β2            −0.071    −0.182    −0.207    −0.074    −0.179
β3             0.030    −0.050     0.006     0.025    −0.038
β4             0.003     0.050     0.029     0.007     0.063
β11           −0.109    −0.222    −0.262    −0.109    −0.252
β22           −0.051    −0.095    −0.169    −0.055    −0.122
β33            0.006     0.106     0.070     0.009     0.105
β44            0.005     0.048     0.054     0.009     0.068
β12           −0.065    −0.082    −0.196    −0.073    −0.103
β13            0.029     0.075     0.056     0.027     0.075
β14            0.000    −0.093    −0.063     0.005    −0.131
β23           −0.029     0.021    −0.050    −0.033     0.056
β24            0.060     0.204     0.185     0.062     0.232
β34            0.013     0.043     0.048     0.015     0.052
R²adj         94.06 %   92.48 %   94.15 %   92.15 %   92.93 %

The interactions fz·ap, ap·ae, fz·ap·ae and ap·vc·ae were also significant. The ANOVA attested significance for the block, concluding that there are differences between the two workpieces, which may arise from metallurgical aspects. The curvature test presented p value = 0.006 < 0.05 = α; accordingly, the linear model does not describe the data well, so new tests were performed to fit a second-order model. Nevertheless, the coefficients of determination are R² = 98.88 % and R²adj = 98.03 %, so the model explains 98.03 % of the data variability, and R²pred = 92.76 % means that the linear model explains about 92.76 % of the variability in predicting new observations.

Figure 4 presents the Pareto chart of the standardized effects, confirming the significant terms of the PC1 ANOVA. The feed per tooth is significant and has the largest effect on surface roughness. However, the effect of the axial depth of cut ap is also important, and the interactions among the factors confirm that not only geometric effects of the cutting phenomenon determine the roughness of machined surfaces. The linear regression model, in coded units, relating PC1 to the control factors is given in Eq. (29).

Table 12  MGI Pareto-optimal responses, targets and error

          Ra      Ry      Rz      Rq      Rt
y*i      0.474   2.723   2.327   0.554   2.738
Ti       0.471   2.704   2.325   0.550   2.718
error    0.006   0.007   0.001   0.008   0.008

The model presents main effects, two-way and three-way interaction coefficients, plus a block coefficient that removes the effect of the differences between the two workpieces from the data. To estimate the centre points (0, 0, 0, 0), the constant 0.4388 must be added to the result. Nevertheless, the significant result of the curvature test (p value = 0.002 < 0.05 = α) indicates that a second-order model is needed to better describe the data.

P̂C₁ = −0.1463 + 0.1425 Block + 2.5210 fz − 0.3594 ap + 0.0319 vc + 0.0371 ae − 0.3008 fz·ap + 0.3500 ap·ae + 0.3288 fz·ap·ae − 0.2558 ap·vc·ae      (29)

4.2 Part 2: response surface based on PCA and optimization

Table 5 presents the axial points that complete the CCD, together with the factorial and centre points of Table 2, for fitting a response surface model. Using the complete CCD data (Tables 2 and 5), strong correlations between the responses are again attested, as shown in Table 6. Once more, PCA was used to represent the set of roughness responses through PC scores. Table 7 summarizes the eigen analysis of the correlation matrix. Using Kaiser's criterion, only PC1 was retained, since its eigenvalue λ̂₁ = 4.911 is greater than 1 and

Table 11  MMSET Pareto-optimal responses, targets and error

          Ra      Ry      Rz      Rq      Rt
y*i      0.474   2.722   2.328   0.554   2.737
Ti       0.471   2.704   2.325   0.550   2.718
error    0.006   0.007   0.001   0.008   0.007

Fig. 6 Overlaid contour plot with MGI optimization results

Table 13  Pay-off matrix and GPE

Optimization results y*i (columns: objective function minimized)

          Ra       Ry       Rz       Rq       Rt       MMSET    MGI      Ti
Ra      0.4708   0.4855   0.4732   0.4709   0.4866   0.4739   0.4736   0.4708
Ry      2.7608   2.7043   2.7343   2.7683   2.7064   2.7219   2.7233   2.7043
Rz      2.3339   2.3523   2.3253   2.3364   2.3509   2.3276   2.3273   2.3253
Rq      0.5500   0.5690   0.5534   0.5499   0.5702   0.5545   0.5541   0.5499
Rt      2.7796   2.7198   2.7461   2.7876   2.7176   2.7372   2.7385   2.7176

Percentage errors |y*i/Ti − 1|

          Ra       Ry       Rz       Rq       Rt       MMSET    MGI
Ra      0.0000   0.0311   0.0050   0.0001   0.0334   0.0065   0.0059
Ry      0.0209   0.0000   0.0111   0.0237   0.0008   0.0065   0.0070
Rz      0.0037   0.0116   0.0000   0.0048   0.0110   0.0010   0.0009
Rq      0.0001   0.0347   0.0063   0.0000   0.0369   0.0083   0.0077
Rt      0.0228   0.0008   0.0105   0.0258   0.0000   0.0072   0.0077
GPE     0.0475   0.0782   0.0329   0.0544   0.0821   0.0295   0.0292

explains 98.2 % of the data variability. Thus, the PC1 scores were used to represent the set of five roughness outcomes and to fit the response surface in terms of the machining conditions. Table 8 presents the PC1 score vector for the five roughness outcomes. Using these scores in a univariate analysis, the response surface regression results are presented in Table 9 and in Eq. (30).

P̂C₁ = 0.410 + 2.352 fz − 0.335 ap + 0.043 vc + 0.051 ae − 0.472 fz² − 0.238 ap² + 0.098 vc² + 0.065 ae² − 0.274 fz·ap + 0.126 fz·vc − 0.077 fz·ae − 0.066 ap·vc + 0.322 ap·ae + 0.074 vc·ae      (30)
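As a quick consistency check, Eq. (30) can be evaluated at the MGI optimum reported later; the centre/step coding below is inferred from the CCD levels and is an assumption of this sketch:

```python
# Evaluate the fitted PC1 model of Eq. (30) at the reported MGI optimum.
# Coding assumption: coded value = (uncoded - centre) / step, with the
# centres and steps inferred from the design levels of Tables 1 and 5.
import numpy as np

center = np.array([0.15, 1.125, 325.0, 16.5])    # fz, ap, vc, ae centres
step = np.array([0.05, 0.375, 25.0, 1.5])        # one coded unit per factor
x_opt = (np.array([0.050, 1.085, 327.219, 16.383]) - center) / step

fz, ap, vc, ae = x_opt
pc1 = (0.410 + 2.352*fz - 0.335*ap + 0.043*vc + 0.051*ae
       - 0.472*fz**2 - 0.238*ap**2 + 0.098*vc**2 + 0.065*ae**2
       - 0.274*fz*ap + 0.126*fz*vc - 0.077*fz*ae - 0.066*ap*vc
       + 0.322*ap*ae + 0.074*vc*ae)
print(round(pc1, 3))  # close to the reported fit of -6.2112 (rounded coefficients)
```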

The second-order model for PC1 explains 94.60 % of the data variability. The Anderson-Darling normality test for the PC1 residuals presented a p value of 0.067, so the hypothesis of normality of the residuals cannot be rejected. According to the regression analysis, the factors with significant main effects were fz and ap; the interactions fz·ap and ap·ae and the quadratic terms fz² and ap² were also significant, considering α = 0.05. Thus, besides the recognized effect of the feed on surface roughness, considered in several geometric roughness models, the effects of ap and ae need to be taken into account to explain the surface quality of a machined part. Moreover, mechanistic models generally do not include interaction terms. Figure 5 presents the response surface for PC1 in terms of the significant factors fz and ap.

Table 14  Confidence interval for MGI optimal point

Fit        SE fit    95 % CI
−6.2112    0.3933    (−7.0409; −5.3815)

The complete response surface models for the five roughness outcomes are summarized in Table 10, where the coefficients with italic emphasis in the original presented statistical significance. All models presented an excellent explanation of the data variability in terms of R²adj. These models were optimized individually to obtain the target Ti of each response. As only PC1 was significant under Kaiser's criterion, the MMSET and MGI optimization methods were applied to the fitted PC1 model of Eq. (30). For the MMSET formulation, the target for PC1, ζPC1 = −6.21, was obtained by solving the univariate optimization of PC1. The levels obtained for fz, ap, vc and ae were, respectively, [0.050, 1.087, 327.302, 16.379]. Transforming these results into the five roughness outcomes, the Pareto-optimal responses y*i, the targets Ti and the percentage errors are summarized in Table 11. Adding up the percentage errors gives GPE = 0.0295.

Fig. 7 Power curve for number of confirmation runs determination


The GPE of 0.0295 confirms that the Pareto-optimal responses were very near the targets of the individual outcomes.

The MGI method was also applied to optimize PC1. Note that, with m = 1 in the MGI formulation, the problem reduces to the univariate optimization of PC1; in other words, the weight λ̂₁ has no effect on the optimal x*. Solving the MGI non-linear programming problem, the levels obtained for fz, ap, vc and ae, in uncoded units, were [0.050, 1.085, 327.219, 16.383]. Applying these results to the five roughness models of Table 10, the Pareto-optimal responses y*i, the targets Ti and the percentage errors are summarized in Table 12. Adding up the percentage errors gives GPE = 0.0292, confirming that the MGI, based on PC1, presented a slightly better result than the MMSET.

Figure 6 shows the overlaid contour plot of the MGI results, with the feasible region defined by upper and lower bounds on the five correlated roughness outcomes. Although the roughness outcomes share the same optimization direction, they represent different geometric features of the machined surface; hence, optimizing one roughness outcome separately does not guarantee the optimization of all of them. Given the variance-covariance structure of the responses, the PC1 score represents all the outcomes and may be optimized, approximating all the individual targets with a small error. Table 13 presents the pay-off matrix and the GPE for the individual, MMSET and MGI optimizations, with each column corresponding to one objective function; the diagonal entries of the individual columns are the target values Ti. In all univariate cases, the GPE obtained was worse than those of the MMSET and MGI optimizations.

4.3 Confirmation runs

Considering the pay-off matrix and the MGI GPE of 0.0292, a 95 % confidence interval for the MGI optimal point was obtained with Eq. (28), using the optimal vector x₀ = [0.050, 1.085, 327.219, 16.383]; it is given in Table 14. To guarantee the ability to detect a difference equal to t(1 − α/2; n − p)·[σ̂² x₀ᵀ(XᵀX)⁻¹x₀]^{1/2} = 0.8297, as determined graphically by the power curve of Fig. 7, eight confirmation runs were conducted. It is also important to define a confidence interval for the optimal point of each original roughness outcome; considering the same optimal vector x₀, these intervals are given in Table 15.

Table 15  Confidence interval for original responses on MGI optimal point

Response (μm)   Fit      SE fit    95 % CI
Ra             0.4736   0.0641    (0.3383; 0.6090)
Ry             2.7233   0.3275    (2.0323; 3.4143)
Rz             2.3273   0.2761    (1.7447; 2.9098)
Rq             0.5542   0.0690    (0.4085; 0.6998)
Rt             2.7385   0.3242    (2.0545; 3.4224)

Table 16 presents the confirmation run results and the mean of each outcome. The limits calculated by Eq. (28) define the range in which the estimated mean response, for the given predictor values, is expected to fall. Comparing the confirmation-run means of Table 16 with the limits of Table 15, all mean responses fell within the confidence intervals and close to the fitted values, endorsing the expected results.

Table 16  Confirmation runs

Run    Ra (μm)  Ry (μm)  Rz (μm)  Rq (μm)  Rt (μm)
1      0.500    3.057    2.597    0.610    3.263
2      0.523    3.113    2.617    0.633    3.177
3      0.503    2.650    2.450    0.610    2.650
4      0.497    2.600    2.423    0.603    2.690
5      0.493    3.077    2.787    0.600    3.260
6      0.400    2.217    1.930    0.467    2.327
7      0.423    2.723    2.093    0.507    2.943
8      0.497    3.383    2.390    0.593    3.583
Mean   0.480    2.853    2.411    0.578    2.987
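The comparison of Sect. 4.3 can be verified directly from the reported intervals and confirmation means (Tables 15 and 16):

```python
# Check that each confirmation-run mean (Table 16) falls inside the 95 % CI
# of the corresponding response at the MGI optimal point (Table 15).
ci = {   # response: (lower, upper) bounds from Table 15
    "Ra": (0.3383, 0.6090), "Ry": (2.0323, 3.4143), "Rz": (1.7447, 2.9098),
    "Rq": (0.4085, 0.6998), "Rt": (2.0545, 3.4224),
}
means = {"Ra": 0.480, "Ry": 2.853, "Rz": 2.411, "Rq": 0.578, "Rt": 2.987}

inside = {r: ci[r][0] <= m <= ci[r][1] for r, m in means.items()}
print(inside)  # → all True
```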

5 Conclusions

This paper proposes a methodology to study correlated machining outcomes related to the finishing of machined surfaces. With a CCD, a small number of tests was performed while still achieving a good estimate of the experimental error. Starting from a factorial design, the curvature test proved the presence of curvature and the need for axial points to estimate a second-order model.

The univariate ANCOVA for PC1, using the concomitant variable hardness and blocking by material, was carried out to account for possible material discontinuity and anisotropy. The model with blocks presented a significant block effect, better adjustment and the smallest error; however, the covariate presented neither a significant effect nor an improvement of the model, so the ANCOVA was replaced by an ANOVA.

The most important factor for PC1, and consequently for surface roughness, was fz, confirming the literature review,


which states that the feed per tooth is the most important parameter for surface roughness. The factor ap, the interactions fz·ap and ap·ae and the quadratic terms fz² and ap² were also significant, confirming that mechanistic surface roughness models which consider only the feed per tooth may produce inconsistent results.

Considering the correlation between the outcomes and their variance-covariance structure, PCA was used to represent the roughness data set. Kaiser's criterion classified only PC1 as statistically significant; consequently, a response surface model was fitted to the PC1 scores, explaining 94.6 % of the data variability. The optimization method that best approximated the targets of the roughness responses was the MGI, with parameter levels for fz, ap, vc and ae equal to x₀ = [0.050, 1.085, 327.219, 16.383] and a global percentage error of 2.92 %. The confirmation runs endorsed these results, since all mean responses fell within the confidence intervals and close to the fitted values. The proposed methodology can be employed for the modelling and optimization of analogous manufacturing processes.

Acknowledgments  The authors gratefully acknowledge Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) and Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG) for supporting this research.

Appendix

In 2^k factorial designs, centre points allow an independent estimate of the error and a test for curvature in the experimental region. Considering nC centre points, a single-degree-of-freedom sum of squares for curvature can be calculated as in Eq. (31), and the mean square error MSE as in Eq. (32). The null hypothesis of the curvature test is H₀: Σ_{j=1}^{k} βjj = 0.

SS_curvature = nF nC (ȳF − ȳC)² / (nF + nC)      (31)

MSE = SSE / (nC − 1)      (32)

The coefficient of multiple determination R² represents the percentage of the data variability explained by the model, expressed mathematically in Eq. (33). R² may increase when unnecessary terms are added to the model; therefore, it is possible to obtain poor predictions with models that present a large R². The adjusted R²adj, calculated with Eq. (34), is more appropriate for comparing models with different numbers of terms: when unnecessary terms are added, the R²adj value decreases. Moreover, when R² and R²adj differ dramatically, non-significant terms have probably been included in the model.

R² = SSR / SSyy = 1 − SSE / SSyy      (33)

R²adj = 1 − [(n − 1)/(n − p)] (1 − R²)      (34)

The prediction error sum of squares (PRESS) is a prediction-oriented measure based on scaled residuals. To calculate PRESS, select an observation i and fit the regression model to the remaining n − 1 observations; using this tentative model to predict the removed observation, denoted ŷ(i), the prediction error is e(i) = yi − ŷ(i). Repeating the procedure for each observation i = 1, …, n, the PRESS statistic is given by Eq. (35).

PRESS = Σ_{i=1}^{n} e(i)² = Σ_{i=1}^{n} [yi − ŷ(i)]²      (35)

PRESS can also be computed with Eq. (36), without fitting n regression models, where hii are the diagonal elements of the hat matrix H = X(XᵀX)⁻¹Xᵀ.

PRESS = Σ_{i=1}^{n} [ei / (1 − hii)]²      (36)
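The equivalence of Eqs. (35) and (36) can be checked numerically on synthetic data:

```python
# PRESS computed two ways: by leave-one-out refitting (Eq. 35) and via the
# hat-matrix shortcut (Eq. 36); the two must agree.
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(10), rng.normal(size=10)])   # toy model matrix
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=10)

# Eq. (36): ordinary residuals scaled by (1 - h_ii)
H = X @ np.linalg.solve(X.T @ X, X.T)                     # hat matrix
e = y - H @ y
press_hat = np.sum((e / (1 - np.diag(H))) ** 2)

# Eq. (35): refit n times, each time leaving one observation out
press_loo = 0.0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    press_loo += (y[i] - X[i] @ b) ** 2

print(np.isclose(press_hat, press_loo))  # → True
```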

References

1. Ehmann KF, Hong MS (1994) A generalized model of the surface generation process in metal cutting. CIRP Ann-Manuf Technol. doi:10.1016/S0007-8506(07)62258-6
2. Benardos PG, Vosniakos G-C (2003) Predicting surface roughness in machining: a review. Int J Mach Tools Manuf. doi:10.1016/S0890-6955(03)00059-2
3. Grzesik W (1996) A revised model for predicting surface roughness in turning. Wear. doi:10.1016/0043-1648(95)06825-2
4. Grzesik W, Bogdan K, Adam R (2010) Surface integrity of machined surfaces. In: Davim JP (ed) Surface integrity in machining. Springer, London, pp 143–179. doi:10.1007/978-1-84882-874-2_5
5. Petropoulos GP, Pandazaras CN, Davim JP (2010) Surface texture characterization and evaluation related to machining. In: Davim JP (ed) Surface integrity in machining. Springer, London, pp 37–66. doi:10.1007/978-1-84882-874-2_2
6. Abouelatta OB, Madl J (2001) Surface roughness prediction based on cutting parameters and tool vibrations in turning operations. J Mater Process Technol. doi:10.1016/S0924-0136(01)00959-1
7. Benardos P, Vosniakos G (2002) Prediction of surface roughness in CNC face milling using neural networks and Taguchi's design of experiments. Robot Comput Integr Manuf. doi:10.1016/S0736-5845(02)00005-4
8. Pontes FJ, Ferreira JR, Silva MB, Paiva AP, Balestrassi PP (2010) Artificial neural networks for machining processes surface roughness modeling. Int J Adv Manuf Technol. doi:10.1007/s00170-009-2456-2
9. Pontes FJ, de Paiva AP, Balestrassi PP, Ferreira JR, da Silva MB (2012) Optimization of radial basis function neural network employed for prediction of surface roughness in hard turning process using Taguchi's orthogonal arrays. Expert Syst Appl. doi:10.1016/j.eswa.2012.01.058
10. Lopes LGD, Gomes JHF, de Paiva AP, Barca LF, Ferreira JR, Balestrassi PP (2013) A multivariate surface roughness modeling and optimization under conditions of uncertainty. Measurement. doi:10.1016/j.measurement.2013.04.031
11. Simunovic K, Simunovic G, Saric T (2013) Predicting the surface quality of face milled aluminium alloy using a multiple regression model and numerical optimization. Meas Sci Rev. doi:10.2478/msr-2013-0039
12. Bhardwaj B, Kumar R, Singh PK (2014) An improved surface roughness prediction model using Box-Cox transformation with RSM in end milling of EN 353. J Mech Sci Technol. doi:10.1007/s12206-014-0837-4
13. Kumar S, Batish A, Singh R, Singh TP (2014) A hybrid Taguchi-artificial neural network approach to predict surface roughness during electric discharge machining of titanium alloys. J Mech Sci Technol. doi:10.1007/s12206-014-0637-x
14. Günay M, Yücel E (2012) Application of Taguchi method for determining optimum surface roughness in turning of high-alloy white cast iron. Measurement. doi:10.1016/j.measurement.2012.10.013
15. Shanmughasundaram P, Subramanian R (2013) Influence of graphite and machining parameters on the surface roughness of Al-fly ash/graphite hybrid composite: a Taguchi approach. J Mech Sci Technol. doi:10.1007/s12206-013-0630-9
16. Chandrasekaran M, Muralidhar M, Dixit US (2014) Online optimization of a finish turning process: strategy and experimental validation. Int J Adv Manuf Technol. doi:10.1007/s00170-014-6171-2
17. Al-Zubaidi S, Ghani JA, Haron CHC (2013) Optimization of cutting conditions for end milling of Ti6Al4V alloy by using a Gravitational Search Algorithm (GSA). Meccanica. doi:10.1007/s11012-013-9702-2
18. Sahoo AK, Rout AK, Das DK (2015) Response surface and artificial neural network prediction model and optimization for surface roughness in machining. Int J Ind Eng Comput. doi:10.5267/j.ijiec.2014.11.001
19. Sehgal AK (2013) Surface roughness optimization by response surface methodology and particle swarm optimization. Int J Eng Sci Technol. doi:10.1007/s00170-014-6020-3
20. Zhao T, Shi Y, Lin X, Duan J, Sun P, Zhang J (2014) Surface roughness prediction and parameters optimization in grinding and polishing process for IBR of aero-engine. Int J Adv Manuf Technol. doi:10.1007/s00170-014-6020-3
21. Çiçek A, Kivak T, Ekici E (2013) Optimization of drilling parameters using Taguchi technique and response surface methodology (RSM) in drilling of AISI 304 steel with cryogenically treated HSS drills. J Intell Manuf. doi:10.1007/s10845-013-0783-5
22. Jagadish BS, Ray A (2015) Prediction of surface roughness quality of green abrasive water jet machining: a soft computing approach. J Intell Manuf. doi:10.1007/s10845-015-1169-7
23. Sarkheyli A, Zain AM, Sharif S (2015) A multi-performance prediction model based on ANFIS and new modified-GA for machining processes. J Intell Manuf. doi:10.1007/s10845-013-0828-9
24. Mays DP (2001) The impact of correlated responses and dispersion effects on optimal three level factorial designs. Commun Stat Simul Comput. doi:10.1081/SAC-100001866
25. Hotelling H (1933) Analysis of a complex of statistical variables into principal components. J Educ Psychol. doi:10.1037/h0071325
26. Johnson RA, Wichern DW (2007) Applied multivariate statistical analysis. Pearson Prentice Hall, Upper Saddle River
27. Rencher AC (2002) Methods of multivariate analysis. Wiley, New York
28. Jackson DA (1993) Stopping rules in principal components analysis: a comparison of heuristical and statistical approaches. Ecology 74:2204–2214
29. Montgomery DC (2007) Design and analysis of experiments. Wiley, New York
30. Myers RH, Khuri AI, Vining G (1992) Response surface alternatives to the Taguchi robust parameter design approach. Am Stat. doi:10.2307/2684183
31. Box GEP, Hunter WG, MacGregor JF, Erjavec J (1973) Some problems associated with the analysis of multiresponse models. Technometrics. doi:10.2307/1266823
32. de Freitas Gomes JH, Júnior ARS, de Paiva AP, Ferreira JR, da Costa SC, Balestrassi PP (2012) Global criterion method based on principal components to the optimization of manufacturing processes with multiple responses. Strojniški Vestn-J Mech Eng. doi:10.5545/sv-jme.2011.136