Multi-Variable Optimization (Updated)

MULTI-VARIABLE OPTIMIZATION

Min f(x1, x2, x3, ..., xn)
UNIDIRECTIONAL SEARCH - CONSIDER A DIRECTION s

x(α) = x + α s   (x and s are vectors)

- REDUCE TO Min f(α)
- SOLVE AS A SINGLE-VARIABLE PROBLEM

[Figure: unidirectional search along the direction s toward the minimum point]

Unidirectional search (example): Min f(x1, x2) = (x1 - 10)^2 + (x2 - 10)^2; s = (2, 5) (search direction); x = (2, 1) (initial guess)
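A minimal sketch of this example, assuming a golden-section search as the single-variable solver; the solver choice and the α bracket [0, 10] are assumptions, not part of the slide:

```python
def f(x1, x2):
    # Objective from the example: f(x1, x2) = (x1 - 10)^2 + (x2 - 10)^2
    return (x1 - 10) ** 2 + (x2 - 10) ** 2

def golden_section(phi, a, b, tol=1e-6):
    """Minimize a single-variable function phi on the interval [a, b]."""
    inv_gr = (5 ** 0.5 - 1) / 2                 # 1 / golden ratio ~ 0.618
    c, d = b - inv_gr * (b - a), a + inv_gr * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - inv_gr * (b - a)
        else:
            a, c = c, d
            d = a + inv_gr * (b - a)
    return (a + b) / 2

x = (2.0, 1.0)                                  # initial guess x
s = (2.0, 5.0)                                  # search direction s

# The objective restricted to the line x(alpha) = x + alpha * s
phi = lambda alpha: f(x[0] + alpha * s[0], x[1] + alpha * s[1])

alpha_star = golden_section(phi, 0.0, 10.0)     # assumed bracket for alpha
x_star = (x[0] + alpha_star * s[0], x[1] + alpha_star * s[1])
print(alpha_star, x_star)                       # alpha* ~ 2.103, x* ~ (6.21, 11.52)
```

Any single-variable method could replace the golden-section solver; the point is that the n-variable problem collapses to one variable, α, along s.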

DIRECT SEARCH METHODS
- SEARCH THROUGH MANY DIRECTIONS
- FOR N VARIABLES, 2^N DIRECTIONS
- Obtained by altering each of the n values and taking all combinations

EVOLUTIONARY OPTIMIZATION METHOD
- COMPARE ALL 2^N + 1 POINTS & CHOOSE THE BEST
- CONTINUE AS LONG AS THERE IS AN IMPROVEMENT
- ELSE DECREASE THE INCREMENT
STEP 1: x0 = INITIAL POINT,

Δi = STEP REDUCTION PARAMETER FOR EACH VARIABLE, ε = TERMINATION PARAMETER

STEP 2: IF ||Δ|| < ε, TERMINATE; ELSE CREATE 2^N POINTS BY ADDING ±Δi/2 TO EACH VARIABLE OF x0, COMPARE ALL 2^N + 1 POINTS, AND CONTINUE FROM THE BEST POINT (HALVE Δ IF x0 REMAINS THE BEST)
(a Python sketch of these steps follows)
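A minimal sketch of the method, assuming the 2^N corner points sit at ±Δi/2 around the current point as described above; the quadratic test function reuses the earlier example and is only illustrative:

```python
from itertools import product

def evolutionary_opt(f, x0, delta, eps=1e-3):
    """Evolutionary optimization: hypercube search with step halving."""
    x = list(x0)
    delta = list(delta)
    while max(delta) > eps:                      # Step 2: terminate when steps are small
        # Create the 2^N corner points by adding +/- delta_i / 2 to each variable
        corners = [[xi + sign * di / 2 for xi, sign, di in zip(x, signs, delta)]
                   for signs in product((-1, 1), repeat=len(x))]
        best = min(corners, key=f)               # best of the 2^N new points
        if f(best) < f(x):                       # improvement: move to the best point
            x = best
        else:                                    # no improvement: decrease the increment
            delta = [di / 2 for di in delta]
    return x

# Illustrative run on the earlier example f(x1, x2) = (x1 - 10)^2 + (x2 - 10)^2
f = lambda p: (p[0] - 10) ** 2 + (p[1] - 10) ** 2
print(evolutionary_opt(f, x0=[2.0, 1.0], delta=[2.0, 2.0]))   # -> near [10, 10]
```

Comparing the best corner against the current point implements "compare all 2^N + 1 points & choose the best"; halving Δ when no corner improves is the "else decrease the increment" rule.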

[Figure: feasible and infeasible regions in the (x1, x2) plane, with the minimum point marked]

Process
1. Choose ε1, ε2, R, Ω.
2. Form the modified objective function P(x^k, R^k) = f(x^k) + Ω(R^k, g(x^k), h(x^k)).
3. Start with x^k. Find x^(k+1) so as to minimize P (use ε1).
4. If |P(x^(k+1), R^k) - P(x^k, R^(k-1))| < ε2, terminate.
5. Else set R^(k+1) = cR^k, k = k + 1, and go to step 2.

• At any stage, minimize P(x, R) = f(x) + Ω(R, g(x), h(x))
- R = set of penalty parameters
- Ω = penalty function
(a Python sketch of this sequential process follows)
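A minimal sketch of the process, assuming SciPy's Nelder-Mead as the inner unconstrained minimizer for step 3 and the bracket-operator penalty (defined later in this section) as Ω; the objective, constraint, R = 0.1, and c = 10 are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                          # illustrative objective (assumed)
    return (x[0] - 3) ** 2 + (x[1] - 2) ** 2

def g(x):                          # inequality constraint g(x) >= 0 (assumed)
    return 4 - x[0] - x[1]         # i.e. x1 + x2 <= 4

def omega(x, R):                   # bracket-operator penalty R<g(x)>^2
    return R * min(g(x), 0.0) ** 2

def P(x, R):                       # step 2: penalized objective P = f + Omega
    return f(x) + omega(x, R)

def penalty_method(x0, R=0.1, c=10.0, eps2=1e-6, max_outer=20):
    x, prev = np.asarray(x0, float), None
    for _ in range(max_outer):
        res = minimize(lambda z: P(z, R), x, method="Nelder-Mead")   # step 3
        x = res.x
        if prev is not None and abs(res.fun - prev) < eps2:          # step 4
            break
        prev, R = res.fun, c * R                                     # step 5: R <- cR
    return x

print(penalty_method([0.0, 0.0]))  # -> close to (2.5, 1.5) on the boundary x1 + x2 = 4
```

Nelder-Mead is only one possible inner solver; the process above leaves that choice open (ε1 would be its convergence tolerance).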

Types of penalty functions
• Parabolic penalty: Ω = R{h(x)}^2

- for equality constraints
- only for infeasible points
• Interior penalty functions - penalize feasible points
• Exterior penalty functions - penalize infeasible points
• Mixed penalty functions - combination of both

• Infinite barrier penalty: Ω = R Σ_j |g_j(x)|, summed over the violated constraints

- inequality constraints
- R is very large
- exterior
• Log penalty: Ω = -R ln[g(x)]
- inequality constraints
- for feasible points
- interior
- initially large R
- larger penalty close to the border

• Inverse penalty: Ω = R [1/g(x)]

- interior
- larger penalty close to the border
- initially large R
• Bracket operator penalty: Ω = R<g(x)>^2, where <A> = A if A < 0 and 0 otherwise
- exterior: penalizes only infeasible points
(the penalty terms above are sketched in Python below)
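The penalty terms listed above, written as small Python functions; the sign conventions g(x) ≥ 0 for inequalities and h(x) = 0 for equalities, and the restriction of the infinite-barrier sum to violated constraints, are assumptions of this sketch:

```python
import math

def parabolic(R, h):           # equality constraints, exterior (infeasible points only)
    return R * h ** 2

def infinite_barrier(R, gs):   # inequality constraints, exterior, very large R
    return R * sum(abs(gj) for gj in gs if gj < 0)   # only violated constraints count

def log_penalty(R, g):         # inequality constraints, interior (needs feasible g > 0)
    return -R * math.log(g)    # grows without bound close to the border g -> 0

def inverse_penalty(R, g):     # inequality constraints, interior (needs feasible g > 0)
    return R / g               # also grows without bound close to the border

def bracket_operator(R, g):    # inequality constraints, exterior: R<g>^2, <A> = A if A < 0 else 0
    return R * min(g, 0.0) ** 2
```

The interior terms are defined only for strictly feasible points (g(x) > 0), which is why the notes prescribe an initially large R for them.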

Complex Search Algo
• S3: Check the new point x^m
- If x^m is feasible and f(x^m) > F_max, retract half the distance to the centroid, and continue till f(x^m) < F_max
- If x^m is feasible and f(x^m) < F_max, go to S5
- If x^m is infeasible, go to S4

Complex Search Algo (contd)
• S4: Check for feasibility of the solution
- For all i, reset violated variable bounds:
  • if x_i^m < x_i^L, set x_i^m = x_i^L
  • if x_i^m > x_i^U, set x_i^m = x_i^U

- If the resulting x^m is still infeasible, retract half the distance to the centroid; repeat till x^m is feasible

Complex Search Algo (contd)
• S5: Replace x^R by x^m, check for termination
- f_mean = mean of f(x^p), x_mean = mean of x^p
- Terminate when Σ_p (f(x^p) - f_mean)^2 ≤ ε and Σ_p ||x^p - x_mean||^2 ≤ δ
(a Python sketch of steps S3-S5 follows)
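A minimal sketch of steps S3-S5 above. The parts of the algorithm not shown in this extract are filled in with the usual Box-complex choices, both of which are assumptions: an initial complex of 2n random feasible points, and reflection of the worst point through the centroid of the others with factor alpha = 1.3. The example problem at the end is also illustrative:

```python
import numpy as np

def complex_search(f, feasible, lo, hi, x0, alpha=1.3,
                   eps=1e-8, delta=1e-8, max_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x0)
    # Assumed S1: initial complex of 2n feasible points (x0 plus random feasible points)
    pts = [np.asarray(x0, float)]
    while len(pts) < 2 * n:
        cand = lo + rng.random(n) * (hi - lo)
        if feasible(cand):
            pts.append(cand)
    pts = np.array(pts)

    fvals = np.array([f(p) for p in pts])
    for _ in range(max_iter):
        worst = int(np.argmax(fvals))
        f_max = fvals[worst]
        centroid = (pts.sum(axis=0) - pts[worst]) / (len(pts) - 1)
        xm = centroid + alpha * (centroid - pts[worst])     # assumed S2: reflection

        xm = np.clip(xm, lo, hi)                            # S4: reset violated bounds
        while not feasible(xm):                             # S4: retract until feasible
            xm = (xm + centroid) / 2
        for _ in range(50):                                 # S3: retract while f(xm) > F_max
            if f(xm) < f_max:
                break
            xm = (xm + centroid) / 2

        pts[worst], fvals[worst] = xm, f(xm)                # S5: replace the worst point
        f_mean, x_mean = fvals.mean(), pts.mean(axis=0)
        if (np.sum((fvals - f_mean) ** 2) <= eps and        # S5: termination criteria
                np.sum(np.sum((pts - x_mean) ** 2, axis=1)) <= delta):
            break
    return pts[int(np.argmin(fvals))]

# Illustrative problem: minimum (2, 2) well inside the feasible region x1 + x2 <= 6
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
feasible = lambda x: x[0] + x[1] <= 6
print(complex_search(f, feasible, lo=np.zeros(2), hi=np.full(2, 5.0), x0=[0.5, 0.5]))
```

For simplicity the sketch makes x^m feasible first (S4) and then retracts on the function value (S3), folding the S3/S4 hand-off of the steps above into one pass per iteration.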


Characteristics of complex search
• Suited to problems with a complex feasible region
• If the optimum is well inside the search space, the algorithm is efficient
• Not so good if the search space is narrow, or the optimum is close to the constraint boundary