
Large-Scale Reliability Based Design Optimization For Vehicle Crash Safety

L. Gu and R. J. Yang
Vehicle Safety Research Department, Scientific Research Laboratory, Ford Motor Company
2101 Village Road, Dearborn, Michigan 48124

KEY WORDS: CRASHWORTHINESS, OPTIMIZATION, ROBUST DESIGN, FINITE ELEMENT METHODS.

INTRODUCTION
The focus of this paper is a large-scale reliability based design optimization of a car body structure for crash safety. The application of multidisciplinary design optimization (MDO) to automotive vehicle structure design has been a topic of interest over the past several years (Yang et al. 1994, Schramm et al. 1999). Sobieski et al. (2000) and Kodiyalam et al. (2001) reported a method with a very significant reduction in computing time for such large-scale MDO problems. Finite element based full vehicle structural crash simulation is a design tool commonly used throughout the automotive industry to evaluate vehicle crash performance. In most applications, optimization and robustness analysis are applied to surrogate models (or response surface models). After evaluating several response surface models for crashworthiness, Yang et al. (2000) recommended second order polynomial regression models and moving least squares regression for crash safety optimization. Gu (2001) proposed a sequential replacement algorithm for polynomial based regression models and evaluated polynomial models of different orders. With a limited number of simulations, it was found that a second order polynomial response surface is adequate for many crash safety applications. Another key element outlined in this paper is strategic sampling of the full-scale model, which is invoked only sparingly to save effort. The full-scale model is still used as the principal source of information about the system, so the computational cost per evaluation does not change. This strategy is effective and well suited to parallel computing.

VEHICLE CRASH SIMULATION
Over the past ten years, finite element based crash analysis codes have been successfully used to simulate almost all real world vehicle crash events. Computer analysis of vehicle crashworthiness has become a powerful and effective tool to reduce the cost and time to bring new vehicles to market that meet both corporate and government

crash safety requirements. This increasing reliance on simulation has been due to the development of theoretically sound and robust nonlinear Finite Element (FE) methods coupled with the availability of affordable but powerful computers. Major automotive companies routinely use finite element simulation to analyze and design vehicles to meet federal and corporate safety guidelines for frontal impact, side impact, roof crush, side pole impact, interior head impact, rear impact and rollover. Since vehicle crash events are highly nonlinear, involving both geometric and material nonlinearity, crash simulation is fundamentally computation intensive and requires fast and powerful supercomputers to achieve reasonable turnaround time for the analyses. The finite element formulations used for crash simulation apply to large deformations and nonlinear materials; they are limited only by the element's capability to deal with large distortions. The updated Lagrangian formulations are widely used in explicit finite element methods. The weak form of the updated Lagrangian formulation (principle of virtual power) can be written as (Belytschko et al., 2001):
$$\int_{\Omega} \frac{\partial(\delta v_i)}{\partial x_j}\,\sigma_{ij}\,d\Omega - \int_{\Omega} \delta v_i\,\rho b_i\,d\Omega - \sum_{j}\int_{\Gamma_t} \delta v_j\,t_j\,d\Gamma + \int_{\Omega} \delta v_i\,\rho \dot{v}_i\,d\Omega = 0 \quad (1)$$
In the finite element method, the displacement field is approximated by
$$u_i(X, t) = N_I(X)\,u_{iI}(t) \quad (2)$$
The velocities are obtained by taking the material time derivative of the displacements,
$$\dot{u}_i(X, t) = N_I(X)\,\dot{u}_{iI}(t) \quad (3)$$
The accelerations are similarly given by
$$\ddot{u}_i(X, t) = N_I(X)\,\ddot{u}_{iI}(t) \quad (4)$$
The test function in (1), or variation, is not a function of time and can be approximated as
$$\delta v_i(X) = N_I(X)\,\delta v_{iI} \quad (5)$$
To construct the discrete finite element equation, we substitute (2)-(5) into the principle of virtual power, giving
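As a minimal illustration of the shape-function interpolation in (2)-(4), the sketch below evaluates the displacement, velocity, and acceleration fields of a single two-node (linear) one-dimensional element from nodal values; the element length and nodal histories are hypothetical and serve only to show the summation over shape functions.

```python
import numpy as np

def shape_functions_1d(x, x_nodes):
    """Linear shape functions N_I(x) for a two-node 1D element."""
    x1, x2 = x_nodes
    length = x2 - x1
    return np.array([(x2 - x) / length, (x - x1) / length])

# Hypothetical nodal data for one element at a single time t
x_nodes = np.array([0.0, 10.0])     # nodal coordinates
u_nodes = np.array([0.0, 1.2])      # nodal displacements u_iI(t), eq. (2)
v_nodes = np.array([0.0, 3.5e3])    # nodal velocities, eq. (3)
a_nodes = np.array([0.0, -2.0e6])   # nodal accelerations, eq. (4)

x = 4.0                             # evaluation point inside the element
N = shape_functions_1d(x, x_nodes)

# Eqs. (2)-(4): field value = sum over nodes of N_I(x) times the nodal value
u = N @ u_nodes
v = N @ v_nodes
a = N @ a_nodes
print(u, v, a)
```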


$$\int_{\Omega} \frac{\partial N_I}{\partial x_j}\,\sigma_{ij}\,d\Omega - \int_{\Omega} N_I\,\rho b_i\,d\Omega - \sum\int_{\Gamma_t} N_I\,t_i\,d\Gamma + \int_{\Omega} N_I\,\rho \dot{v}_i\,d\Omega = 0 \quad (6)$$
Equation (6) can be rewritten as
$$M\ddot{u} = f^{ext} - f^{int} \quad (7)$$
where
$$M = \int_{\Omega} \rho\,N N^{T}\,d\Omega$$
is the consistent mass matrix, $\ddot{u}$ is the nodal acceleration vector, and
$$f^{ext} = \int_{\Omega} N_I\,\rho b_i\,d\Omega + \sum\int_{\Gamma_t} N_I\,t_i\,d\Gamma, \qquad f^{int} = \int_{\Omega} \frac{\partial N_I}{\partial x_j}\,\sigma_{ij}\,d\Omega$$
are respectively the external and internal force vectors. The central difference method is employed for the solution of eq. (7). The velocity and the acceleration at any time step can be written in the form:
$$\dot{u}_{n+1/2} = \frac{u_{n+1} - u_n}{\Delta t_n}, \qquad \ddot{u}_n = \frac{\dot{u}_{n+1/2} - \dot{u}_{n-1/2}}{\Delta t_n} \quad (8)$$
Velocities and positions at any time step can be updated using the following equations:
$$\dot{u}_{n+1/2} = \dot{u}_{n-1/2} + \Delta t_n\,M^{-1}\left(f_n^{ext} - f_n^{int}\right) \quad (9)$$
$$u_{n+1} = u_n + \Delta t_n\,\dot{u}_{n+1/2} \quad (10)$$
At any time step n, the displacements $u_n$ are known. The nodal forces $f_n$ can be determined by sequentially evaluating the strain-displacement equations, the constitutive equation and the nodal forces. Thus, the entire right-hand side of (9) can be evaluated, and the displacement field can then be determined by (10). The update of the displacement and velocity fields can be accomplished without solving any equations, provided that the mass matrix M is diagonal. The diagonal mass matrix M can be obtained by lumping the consistent mass matrix (Belytschko et al., 2001). A typical FEA model for any crash mode may contain in excess of 150,000 shell elements. Simulation times range from ten to twenty hours depending on the size of the problem, the computer hardware and the number of processors used to perform the simulation. Efforts to parallelize commercial crash analysis software have met with limited success when applied to realistic simulation problems. Single simulations can rarely make effective use of more than eight processors. Even with the increase in processor speed and the availability of less expensive high-speed computers, performing an optimization or robustness assessment directly, using a full vehicle FE model, could take weeks or months and incur significant computational cost due to the large number of simulations required.
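To make the update scheme in (8)-(10) concrete, the following sketch advances a lumped-mass system one step at a time with the explicit central difference rule; the mass vector, force expressions, and time step are placeholders, not values from the vehicle model.

```python
import numpy as np

def central_difference_step(u_n, v_half_prev, m_lumped, f_ext, f_int, dt):
    """One explicit step of eqs. (9)-(10) with a diagonal (lumped) mass matrix.

    u_n         : nodal displacements at step n
    v_half_prev : nodal velocities at step n - 1/2
    m_lumped    : diagonal entries of the lumped mass matrix
    f_ext, f_int: external / internal nodal force vectors at step n
    dt          : time step size
    """
    # Eq. (9): velocity update; M^-1 is trivial because M is diagonal
    v_half = v_half_prev + dt * (f_ext - f_int) / m_lumped
    # Eq. (10): displacement update
    u_next = u_n + dt * v_half
    return u_next, v_half

# Toy two-DOF example with placeholder forces
m = np.array([1.0, 2.0])
u = np.zeros(2)
v = np.zeros(2)
dt = 1.0e-6
for n in range(1000):
    f_ext = np.array([100.0, 0.0])   # placeholder external load
    f_int = 50.0 * u                 # placeholder linear internal force
    u, v = central_difference_step(u, v, m, f_ext, f_int, dt)
```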

UNIFORM SAMPLING METHOD
One key element in the method is that it uses strategic sampling of the full-scale model. Sampling methods for computer analysis have been studied by many authors, and two major sampling techniques have been widely used for computer experimental design. McKay et al. (1979) proposed a method of generating a set of experimental points Pn = {x1, x2, ..., xn} called Latin hypercube sampling (LHS). LHS provides a more efficient estimate of the overall mean of the response than an estimate based on simple random sampling. Fang et al. (2000), on the other hand, proposed the uniform design concept, which allocates experimental points uniformly in the domain. Uniform design (Fang et al.) seeks sampling points that are uniformly scattered over the domain. The fundamental idea in uniform design is to use the Lp discrepancy as the criterion to measure uniformity; the method then seeks a set of sampling points that maximizes uniformity. Suppose there are s factors of interest over a standard domain Cs. The goal is to choose a set of n points Pn = {x1, ..., xn} such that these points are uniformly scattered on Cs. Let M(Pn) be a measure of the uniformity of Pn; we seek a set Pn that maximizes the uniformity over all possible sets of n points on Cs. Several uniform design criteria can be used for this purpose. The centered L2 discrepancy criterion given by Hickernell (1998) was used in our study:

$$M(P_n) = \left(\frac{13}{12}\right)^{s} - \frac{2}{n}\sum_{k=1}^{n}\prod_{i=1}^{s}\left(1 + \frac{1}{2}\left|x_{ki} - 0.5\right| - \frac{1}{2}\left|x_{ki} - 0.5\right|^{2}\right) + \frac{1}{n^{2}}\sum_{k=1}^{n}\sum_{j=1}^{n}\prod_{i=1}^{s}\left[1 + \frac{1}{2}\left|x_{ki} - 0.5\right| + \frac{1}{2}\left|x_{ji} - 0.5\right| - \frac{1}{2}\left|x_{ji} - x_{ki}\right|\right] \quad (11)$$

The L2 discrepancy criterion proposed by Warnock (1972) can also be used for this purpose:

$$M(P_n) = 3^{-s} - \frac{2^{1-s}}{n}\sum_{k=1}^{n}\prod_{l=1}^{s}\left(1 - x_{kl}^{2}\right) + \frac{1}{n^{2}}\sum_{k=1}^{n}\sum_{j=1}^{n}\prod_{i=1}^{s}\left[1 - \max(x_{ki}, x_{ji})\right] \quad (12)$$

A global optimization algorithm, the threshold-accepting algorithm, is used to generate uniform samplings (Fang et al. 2000). The number of CAE simulations in uniform Latin hypercube sampling is determined by the total number of


variables, including control variables and noise variables, in the optimization problem. To construct a reasonably accurate response surface, a minimum of 3N sampling points, where N is the total number of design variables, is generally required. It is therefore recommended that the number of initial simulations be between 3N and 4N for best results. Note that the number of levels for each variable is flexible.
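As an illustration of this sampling step, the sketch below draws several candidate Latin hypercube designs of size 3N with scipy's quasi-Monte Carlo module and keeps the one with the smallest centered L2 discrepancy of eq. (11). It does not reproduce the threshold-accepting search of Fang et al. (2000), and the variable count is a placeholder.

```python
import numpy as np
from scipy.stats import qmc

def centered_l2_discrepancy(x):
    """Centered L2 discrepancy of eq. (11) for n points x in [0, 1]^s (n x s array)."""
    n, s = x.shape
    d = np.abs(x - 0.5)
    term1 = (13.0 / 12.0) ** s
    term2 = (2.0 / n) * np.sum(np.prod(1.0 + 0.5 * d - 0.5 * d ** 2, axis=1))
    # Pairwise product term over all point pairs (k, j)
    prod = np.ones((n, n))
    for i in range(s):
        prod *= (1.0 + 0.5 * d[:, i][:, None] + 0.5 * d[:, i][None, :]
                 - 0.5 * np.abs(x[:, i][:, None] - x[:, i][None, :]))
    term3 = np.sum(prod) / n ** 2
    return term1 - term2 + term3

N = 10                    # placeholder number of control + noise variables
n_points = 3 * N          # 3N initial simulations, as recommended above
sampler = qmc.LatinHypercube(d=N, seed=0)

# Keep the most uniform of a handful of random LHS candidates
candidates = [sampler.random(n_points) for _ in range(20)]
best = min(candidates, key=centered_l2_discrepancy)
print(centered_l2_discrepancy(best))
```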

RESPONSE SURFACE MODEL
Vehicle impact, which is the application focus of this paper, is a nonlinear event in terms of the structural and dummy responses. With the increasing fidelity of the vehicle and dummy models, now over a million degrees of freedom, a single crash analysis can require several hours of computing time on a state-of-the-art computer server with multiple processors. Even with multiple processors for each crash simulation, the computational cost of these analyses, along with the iterative nature of design optimization procedures, prohibits rigorous optimization and robustness studies. In addition, crash analysis can be unstable, since some runs corresponding to design perturbations may fail due to modeling and element penetration issues. Hence it is critical that surrogate models (i.e., approximations) be constructed a priori from the results of a number of actual crash simulations for use in crashworthiness optimization. Polynomial-based regression models have been successfully used for vehicle safety analysis (Yang et al. 2002). Gu (2001) proposed a sequential replacement algorithm in conjunction with a Residual Sum of Squares (RSS) checking criterion to obtain the best fitting polynomial model. The basic idea of the sequential replacement algorithm is that once two or more terms have been selected, a check is made to see whether any of those terms can be replaced with another term that gives a smaller RSS (Miller, 1990). The procedure must converge because each replacement reduces the RSS, which is bounded below. Define the polynomial spaces Gl as:
$$G_0 = \{1\}$$
$$G_1 = G_0 \cup \{x_1, \ldots, x_n\}$$
$$G_2 = G_1 \cup \{x_1 x_1, \ldots, x_1 x_n, \ldots, x_n x_n\}$$
where G1 and G2 are the linear and quadratic spaces, respectively, and n is the total number of design variables. In this study, the design variables include gauges, material yield stress, vehicle impact speed and vehicle position. The linear regression model $\hat{y}(x)$ for y(x) is given by
$$\hat{y}(x) = \sum_{i=1}^{p} a_i\,g_i(x) \quad (13)$$

where $g_i(x) \in G_l$ (l = 1, 2) and p is the number of fitting terms in the fitted equation. For the linear model given by (13), define the merit function:
$$J^2 = \sum_{i=1}^{m}\left[y_i - \sum_{j=1}^{p} a_j\,g_j(x_i)\right]^2$$

In the least squares procedure, the best parameters $a_i$ are picked to minimize $J^2$. Many of the criteria that have been suggested in the literature to identify the "best" subset are monotone functions of RSS for subsets with the same number of independent variables (Myers, 1990). Hence, the problem of finding the "best" polynomial can often be reduced to the problem of finding those polynomials of size p with minimum RSS. The RSS is given by:
$$RSS = \sum_{i=1}^{l} \left(y_i - \hat{y}_i\right)^2 \quad (14)$$
where l is the total number of sampling points (l > p). Most procedures that produce statistical inference depend heavily on the relation between the total and regression sums of squares. In the case of regression by a linear polynomial, the relationship can be simplified as follows:
$$\sum_{i=1}^{l} \left(y_i - \bar{y}\right)^2 = \sum_{i=1}^{l} \left(\hat{y}_i - \bar{y}\right)^2 + \sum_{i=1}^{l} \left(y_i - \hat{y}_i\right)^2 \quad (15)$$

where $\bar{y}$ is the mean of the $y_i$ and $\hat{y}_i = \hat{y}(x_i)$. Equation (15) represents the following conceptual identity (Myers, 1990): (Total variability) = (Variability explained) + (Variability unexplained). The coefficient of determination, often referred to symbolically as R2, is related to (15) and is a much used measure of the fit of the regression line. R2 is given by:
$$R^2 = 1 - \frac{\sum \left(y_i - \hat{y}_i\right)^2}{\sum \left(y_i - \bar{y}\right)^2} \quad (16)$$
R2 is certainly a measure of the model's capability to fit the sampling data, and the insertion of any new regressor into a model cannot decrease R2. Although there are rules and algorithms that use it to select a best model, the statistic itself is not conceptually prediction oriented, and it is not recommended as the sole criterion for choosing the best prediction model from a set of candidates. The adjusted R2, denoted by $\bar{R}^2$, is used by many authors as a criterion for identifying the best prediction models in subset selection. $\bar{R}^2$ can guard against "overfitting", i.e., including marginally important model terms at the expense of error degrees of freedom. The adjusted R2 is given by:
$$\bar{R}^2 = 1 - \frac{\sum \left(y_i - \hat{y}_i\right)^2}{\sum \left(y_i - \bar{y}\right)^2} \times \frac{l - 1}{l - p} \quad (17)$$
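The sketch below fits a quadratic polynomial surrogate of the form (13) by least squares and reports RSS, R2, and adjusted R2 as in (14), (16), and (17); the sample data and the single design variable are hypothetical stand-ins for crash simulation results.

```python
import numpy as np

# Hypothetical sampling data: l responses y at points x (one design variable)
x = np.linspace(0.0, 1.0, 12)
y = 3.0 + 2.0 * x - 1.5 * x**2 + 0.05 * np.random.default_rng(0).normal(size=x.size)

# Quadratic basis g_i(x) from G2: {1, x, x^2}; least squares picks the a_i of eq. (13)
G = np.column_stack([np.ones_like(x), x, x**2])
a, *_ = np.linalg.lstsq(G, y, rcond=None)
y_hat = G @ a

l, p = len(y), G.shape[1]
rss = np.sum((y - y_hat) ** 2)                     # eq. (14)
tss = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - rss / tss                               # eq. (16)
r2_adj = 1.0 - (rss / tss) * (l - 1) / (l - p)     # eq. (17)
print(a, rss, r2, r2_adj)
```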

The procedure must converge because each replacement reduces the RSS, which is bounded below; in practice, it usually converges very rapidly. The sequential replacement algorithm is normally used in conjunction with stepwise selection: it is obtained by taking stepwise selection and applying a replacement procedure after each new term is added, as sketched below. Results of sequential replacement are usually better than those from stepwise selection alone. Sequential replacement requires more computational time than stepwise selection, but it is feasible to apply to problems with several hundred candidate terms when a subset of 20-30 terms is required. Another advantage of the sequential replacement algorithm is that it has no artificial parameter to be tuned.
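The following is a minimal sketch of the subset-selection idea described above: terms are added greedily and, after each addition, a replacement pass swaps any selected term for an unselected one whenever that lowers the RSS. It illustrates the general scheme rather than the specific implementation of Gu (2001); the candidate basis and data are placeholders.

```python
import numpy as np

def rss_of(cols, G, y):
    """RSS of the least-squares fit using the candidate columns in `cols`."""
    a, *_ = np.linalg.lstsq(G[:, cols], y, rcond=None)
    r = y - G[:, cols] @ a
    return float(r @ r)

def sequential_replacement(G, y, p):
    """Pick p columns of the basis matrix G by stepwise addition plus replacement."""
    selected, remaining = [], list(range(G.shape[1]))
    while len(selected) < p:
        # Stepwise step: add the term that most reduces the RSS
        best = min(remaining, key=lambda j: rss_of(selected + [j], G, y))
        selected.append(best)
        remaining.remove(best)
        # Replacement pass: swap a selected term whenever a swap lowers the RSS
        improved = True
        while improved:
            improved = False
            for i, s in enumerate(selected):
                for j in remaining:
                    trial = selected[:i] + [j] + selected[i + 1:]
                    if rss_of(trial, G, y) < rss_of(selected, G, y):
                        selected[i] = j
                        remaining.remove(j)
                        remaining.append(s)
                        improved = True
                        break
                if improved:
                    break
    return selected

# Placeholder data: 40 samples, 15 candidate polynomial terms, keep the best 5
rng = np.random.default_rng(1)
G = rng.normal(size=(40, 15))
y = G[:, [0, 3, 7]] @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=40)
print(sequential_replacement(G, y, p=5))
```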

OPTIMIZATION AND ROBUSTNESS ASSESSMENT
The safety optimization and robustness (SOAR) method can be used for multi-objective optimization of vehicle crash safety. The optimization problem can be described as:
$$\begin{aligned} \text{Minimize } \ & f(x) \\ \text{Subject to } \ & g_j(x) \le 0, \quad j = 1, 2, \ldots, m \\ & x^{l} \le x \le x^{u} \end{aligned} \quad (18)$$
where m is the total number of constraints. Based on the surrogate models, the deterministic optimization was performed using sequential quadratic programming (SQP) with mixed types of design variables. After obtaining a deterministic optimum, a robustness assessment is made by modeling the variables with uncertainties. These uncertainties result from many sources, e.g., gauges, materials, test variability, etc. Due to the uncertainties, the design variables need to be treated as random variables. The robustness assessment is performed by constructing an empirical cumulative distribution function (CDF) of the response variable from Monte Carlo simulation. The sequential replacement response surface can be used as the surrogate model for both the deterministic optimization and the robustness assessment. Because the response surface model is represented by a small system of equations, it can be evaluated quickly at very little computational expense compared to the full finite element simulation. Thousands of samples may be required to obtain a good representation of the response probability distribution when Monte Carlo simulation is used.
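As a rough illustration of this two-step workflow, the sketch below minimizes a surrogate objective under a surrogate constraint with scipy's SLSQP implementation of SQP and then builds an empirical CDF of the constrained response around the optimum by Monte Carlo sampling; the surrogate functions, bounds, and uncertainty levels are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder quadratic surrogates standing in for fitted response surfaces
def f(x):          # objective, e.g., a structural mass surrogate
    return x[0] ** 2 + 2.0 * x[1] ** 2 + x[0]

def g(x):          # constraint response, feasible when g(x) <= 0
    return 1.0 - x[0] - x[1]

# Step 1: deterministic optimization of the surrogate with SQP (SLSQP)
res = minimize(
    f,
    x0=np.array([0.5, 0.5]),
    method="SLSQP",
    bounds=[(0.0, 2.0), (0.0, 2.0)],
    constraints=[{"type": "ineq", "fun": lambda x: -g(x)}],  # -g(x) >= 0
)
x_opt = res.x

# Step 2: robustness assessment via Monte Carlo on the cheap surrogate
rng = np.random.default_rng(0)
sigma = np.array([0.05, 0.05])               # placeholder variable uncertainties
samples = rng.normal(loc=x_opt, scale=sigma, size=(10_000, 2))
g_vals = np.array([g(s) for s in samples])

# Empirical CDF of the constraint response
g_sorted = np.sort(g_vals)
cdf = np.arange(1, len(g_sorted) + 1) / len(g_sorted)
print("P(g <= 0) approx.", np.mean(g_vals <= 0.0))
```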

RELIABILITY-BASED DESIGN OPTIMIZATION (RBDO)
In most safety simulations, if optimization is used at all, it is usually only deterministic. The variability of design parameters is considered only through the use of safety factors. It is important to incorporate uncertainty into engineering design optimization and to develop computational techniques that enable engineers to make efficient and reliable decisions. Methods available in the literature toward this goal include reliability-based design optimization (Choi, 1996, Du and Chen, 2002) and robust design (Torng and Yang, 1994, Lin et al. 1999 and Koch et al. 2002).


The reliability-based design optimization problem can be generally formulated as (Choi et al. 1996):
$$\begin{aligned} \text{Minimize } \ & f(\mu_x) \\ \text{Subject to } \ & R_j \le P\left(g_j(x) \le 0\right), \quad j = 1, \ldots, m \\ & \mu_x^{l} \le \mu_x \le \mu_x^{u} \end{aligned} \quad (19)$$
where f and $g_j$ are the objective and constraint functions, respectively, x is the random design vector with mean $\mu_x$, m is the number of probabilistic constraints, $R_j$ is the desired reliability, and the probabilistic constraints are described by the performance functions $g_j(x)$, with $g_j(x) \le 0$ defining the safe region.
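To make the formulation in (19) concrete, here is a minimal sketch that optimizes the design means subject to a target reliability on a single limit state. The limit state is taken linear so that the reliability has a closed form under normal uncertainty; a general nonlinear limit state would instead be evaluated with FORM/SORM or Monte Carlo as in the methods cited above. The limit state, the variable scatter, the bounds, and the target reliability are all placeholder values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

SIGMA = np.array([0.05, 0.05])   # assumed standard deviations of the random design variables
R_TARGET = 0.99                  # desired reliability R_j

def f(mu):
    # Placeholder objective in terms of the design means, e.g., a mass surrogate
    return mu[0] + 2.0 * mu[1]

def reliability(mu):
    """P(g(x) <= 0) for the linear limit state g(x) = 1.5 - x1 - x2.

    For a linear g and normal variables the reliability is available in closed
    form; this stands in for a FORM/SORM or Monte Carlo evaluation in general.
    """
    mean_g = 1.5 - mu[0] - mu[1]
    std_g = float(np.sqrt(np.sum(SIGMA ** 2)))
    return float(norm.cdf(-mean_g / std_g))   # P(g <= 0)

# RBDO of eq. (19): minimize f(mu) subject to P(g(x) <= 0) >= R_TARGET
res = minimize(
    f,
    x0=np.array([1.0, 1.0]),
    method="SLSQP",
    bounds=[(0.5, 2.0), (0.5, 2.0)],
    constraints=[{"type": "ineq", "fun": lambda mu: reliability(mu) - R_TARGET}],
)
print(res.x, reliability(res.x))
```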