Genetic Search Algorithms to Fuzzy Multiobjective Games: a Mathematica Implementation

ANDRE A. KELLER
CNRS UMR 8019, Université de Lille 1 Sciences et Technologies
Cité Scientifique, 59655 Villeneuve d'Ascq Cedex, FRANCE
[email protected]

Abstract: Genetic stochastic search algorithms (GAs) have demonstrated their helpful contribution to finding solutions of complex real-life optimization problems. These algorithms have been applied extensively to solving Nash equilibria of fuzzy bimatrix games with a single objective. Experience shows the ability of GAs to find solutions of equivalent nonlinear programming problems without an exhaustive search and without computing gradients. This paper is an attempt to handle the complexity of real-life situations in which the decision makers face multiple objectives in a fuzzy environment. The hybridization of GAs with classical local optimization techniques is also suggested. The software MATHEMATICA 7.0.1 and the GA-based optimization package GENOCOP III are used to implement these techniques within a high-performance computing environment.

Key-Words: fuzzy matrix games, multiple objectives, genetic algorithm, Nelder-Mead algorithm, Mathematica.

1 Evolutionary Optimization

Evolutionary methods have proved their helpful assistance with complex real-life problems, such as nonlinear bounded optimization problems and decentralized planning systems (see footnote 1) [1, 7, 11, 21, 22].

1.1 Bounded Optimization Problems

Let a nonlinear bounded programming problem be min f(x), x ∈ R^n, subject to x ∈ [x_l, x_u]. A multimodal example with bounds may be a weighted combination of two sinc-type functions

$$ g(x, y) = 3 f(x + 10, y + 10) + 2 f(x - 5, y + 5), \quad x, y \in [-20, 10], $$

where

$$ f(x, y) = 50\,\frac{\sin\sqrt{x^2 + y^2}}{\sqrt{x^2 + y^2}} - \sqrt{x^2 + y^2}. $$

The hybridization of a GA with classical local optimization techniques is proposed [5, 8, 16] to find the global optimum: the GA is used first with a small number of generations, and then Newton's method for local optimization computes the solution from this neighborhood. The real-valued GA consists of Mathematica routines: a population of chromosomes is created randomly, and the genetic processes of selection, crossover and mutation are then applied at each iteration (see footnotes 2 and 3).

Figure 1: GA’s first 10 generations

Footnote 1: The multilevel optimization problems in large hierarchical organizations [1, 23] are not presented in this preliminary study.


The GA is stopped after an arbitrarily small number of 10 generations. The best result obtained is (x̂, ŷ) = (−9.17265, −9.71843). The exact solution of the global optimization problem being (x*, y*) = (−9.898, −9.966), the error of the 10-generation GA approximation is (x* − x̂, y* − ŷ) = (−.052682, −.247571). The exact solution is then reached (see footnote 4) by using the Mathematica primitive for local maximization FindMaximum[f[x, y], {{x, x̂}, {y, ŷ}}, Method → "Gradient"]. Using the Mathematica add-on package Optimization`UnconstrainedProblems`, the primitive FindMinimumPlot[−f[x, y], {{x, x̂}, {y, ŷ}}, Method → "Newton"] shows a four-step path between the best and the exact solutions (see Fig. 2). The initial state and the first 10 generations are shown in Fig. 1.

Footnote 2: The Mathematica notebook consists of modules such as those in Appendix A: 1) the module createPopulation[nSize] randomly constructs a population of nSize chromosomes and returns separately the values of (x, y) in initialPopulation and the fitness in fitList; 2) rankPopulation[initialPopulation, fitList, pSize] sorts the chromosomes according to their fitness; 3) the selection process uses selectPopulation[..., keepRate], rankWeighting[...] and selectPairing[...]; 4) the crossover process uses crossOver[...] to get two offspring for each mating pair; and 5) the mutation process uses mutatePopulation[..., mutationRate], fitMatingPopulation[...] and fitMutatedPopulation[...].

Footnote 3: The stochastic GA is based on the paradigm of noncooperative games.

Footnote 4: For this example, the Mathematica primitive for a global optimization using the Nelder-Mead simplex algorithm is NMaximize[{f[x, y], x_l ≤ x ≤ x_u, y_l ≤ y ≤ y_u}, {x, y}, Method → "NelderMead"].
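As a rough illustration of the hybrid scheme, the following Mathematica sketch defines the weighted sinc surface from the text and refines the GA's reported best point with a local Newton search; the symbol names f, g and gaBest are illustrative and not taken from the original notebook.

f[x_, y_] := 50 Sin[Sqrt[x^2 + y^2]]/Sqrt[x^2 + y^2] - Sqrt[x^2 + y^2]
g[x_, y_] := 3 f[x + 10, y + 10] + 2 f[x - 5, y + 5]

(* best point reported after 10 GA generations *)
gaBest = {-9.17265, -9.71843};

(* local refinement step of the hybrid approach *)
FindMaximum[g[x, y], {{x, gaBest[[1]]}, {y, gaBest[[2]]}}, Method -> "Newton"]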

Figure 2: Iterative gradient method

1.2 Constrained Optimization Problems

Let the standard nonlinear programming problem with n bounds, p inequality constraints, and m − p equality constraints be [3, 10, 17]

$$ \min_{x} f(x), \quad x \in \mathbb{R}^n $$

subject to:

$$ g_i(x) \le 0, \quad i = 1, \dots, p, $$
$$ h_i(x) = 0, \quad i = p + 1, \dots, m, $$
$$ x \in [x_l, x_u], $$

where f, g_i, h_i : R^n → R are respectively the cost function, the inequality constraints and the equality constraints, and where x is the vector of n optimization variables. The search space S ⊆ R^n is defined by the lower bounds x_l and upper bounds x_u. It is represented by the q-dimensional rectangle S = ∏_{i=1}^{q} [x_i^l, x_i^u], q ≤ n. For this problem, the set F ⊆ S of feasible points is defined by the m constraints, such that

$$ \operatorname{dom} F = \bigcap_{i=1}^{p} \operatorname{dom} g_i \;\cap\; \bigcap_{i=p+1}^{m} \operatorname{dom} h_i. $$

It is included in the search space S defined above; we have x ∈ F ⊆ S ⊆ R^n.

GAs may introduce a penalty function which penalizes infeasible solutions [17], such as

$$ f_p(x) = f(x) + \sum_{i=1}^{m} C_i\, d_i^{\kappa}, $$

with

$$ d_i = \begin{cases} \delta_i\, g_i(x), & i = 1, \dots, p, \\ |h_i(x)|, & i = p + 1, \dots, m, \end{cases} $$

where f_p(x) denotes the penalized objective function, C_i a nonzero constant for the violation of constraint i, d_i the distance metric of constraint i, and κ a user-defined parameter.
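As a small illustration (not part of the paper's code), the following Mathematica sketch builds such a penalized objective for a toy problem min x^2 + y^2 with one inequality constraint 1 − x ≤ 0 and one equality constraint x + y − 3 = 0; the constants C = 100 and κ = 2 and all symbol names are arbitrary choices for this sketch.

obj[x_, y_] := x^2 + y^2
gIneq[x_, y_] := 1 - x            (* inequality constraint, required <= 0 *)
hEq[x_, y_] := x + y - 3          (* equality constraint, required == 0  *)

(* penalized objective: active only where constraints are violated *)
penalizedObj[x_, y_] := obj[x, y] + 100 (Max[gIneq[x, y], 0]^2 + Abs[hEq[x, y]]^2)

(* the penalized surrogate can then be minimized without explicit constraints *)
NMinimize[penalizedObj[x, y], {x, y}]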

The Lagrangian for this problem is defined by

$$ L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{p} \lambda_i\, g_i(x) + \sum_{i=p+1}^{m} \mu_i\, h_i(x), $$

where λ, μ denote the dual variables associated with the constraints. The GA-based GENOCOP III package (see Appendix B) is used for solving the following numerical nonlinear example

$$ \min_{x, y} f(x, y) = x^2 + 9y^2, \quad x, y \in \mathbb{R} $$

subject to:

$$ g_1(x, y) \equiv -2x - y + 1 \le 0, $$
$$ g_2(x, y) \equiv -x - 3y + 1 \le 0, $$
$$ g_3(x, y) \equiv (x + 3)^2 + 3(y + 1)^2 - 25 \le 0, $$
$$ x \in [-1, 2], \quad y \in [-.5, 1.5]. $$

The best solution (x̂, ŷ) = (.500091, .166636) with f(x̂, ŷ) = .5 obtained by the GA after 100 generations is close to the exact optimum (x*, y*) = (.5, .16666) (see the illustration in Appendix B).
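For comparison with the GENOCOP III run, this constrained example can also be handed directly to Mathematica's built-in NMinimize; a minimal sketch (a cross-check, not the GENOCOP implementation) is:

NMinimize[{x^2 + 9 y^2,
   -2 x - y + 1 <= 0,
   -x - 3 y + 1 <= 0,
   (x + 3)^2 + 3 (y + 1)^2 - 25 <= 0,
   -1 <= x <= 2, -1/2 <= y <= 3/2},
  {x, y}]
(* expected to return a value close to 0.5 near {x -> 0.5, y -> 1/6} *)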

2 Multiple Objectives Fuzzy Matrix Games Using Genetic Algorithms

Multiobjective optimization problems involve the simultaneous optimization of several objectives. Pareto-optimal solutions are associated with such problems [24]. The Nishizaki-Sakawa model [20] is used for this presentation.

2.1 Bimatrix Games Definitions

A single-objective bimatrix game (a non-zero-sum game with two players) is evaluated in a fuzzy environment where the objectives are uncertain [14, 15]. A bimatrix game is represented by G = (S^m, S^n, A, B). Let e_m be an m-dimensional vector of ones, e_n having dimension n. The players' strategy spaces are the convex polytopes S^m = {x ∈ R^m_{≥0} : x′e_m = 1} and S^n = {y ∈ R^n_{≥0} : y′e_n = 1}. The list of the r payoff matrices for Player I is represented by A_k = (a_ij^k)_{m×n}, k ∈ K = {1, ..., r}. The list of the s payoff matrices for Player II is represented by B_l = (b_ij^l)_{m×n}, l ∈ L = {1, ..., s}. The players' pure strategies correspond to the rows and columns of each matrix: if Player I chooses a pure strategy i ∈ I = {1, ..., m} and Player II a pure strategy j ∈ J = {1, ..., n}, Player I obtains the payoff vector (a_ij^1, ..., a_ij^r) and Player II the payoff vector (b_ij^1, ..., b_ij^s). Mixed strategies are defined by the probabilities x ∈ X = {x ∈ R^m | e_m′x = 1, x ≥ 0} for Player I and y ∈ Y = {y ∈ R^n | e_n′y = 1, y ≥ 0} for Player II. For any pair of mixed strategies (x, y), Player I's kth expected payoff is x′A_k y and Player II's lth expected payoff is x′B_l y. The objectives of Players I and II are the programming problems [max_x x′Ay subject to x′e_m = 1, x ≥ 0] and [max_y x′By subject to y′e_n = 1, y ≥ 0], respectively.

In this study, we assume that the two players have only fuzzy goals with certain payoffs.
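To make the expected-payoff notation concrete, here is a tiny Mathematica sketch computing x′A_k y for a mixed-strategy pair; the matrix is the first Player-I payoff matrix of Section 2.3, while the two mixed strategies are purely illustrative.

A1 = {{2, 6, 5, 7}, {2, 0, 5, 4}, {4, 7, 6, 9}};   (* Player I's first payoff matrix (Section 2.3) *)
xI = {1/2, 1/4, 1/4};                               (* illustrative mixed strategy for Player I  *)
yII = {1/4, 1/4, 1/4, 1/4};                         (* illustrative mixed strategy for Player II *)
xI . A1 . yII                                       (* Player I's expected payoff x'A1y *)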

2.2 Nishizaki-Sakawa's Model

Definition 1 Let the fuzzy goals for Players I and II be denoted by p_1 = (p_1^1, ..., p_1^r) ∈ D_1 ⊆ R^r and p_2 = (p_2^1, ..., p_2^s) ∈ D_2 ⊆ R^s. Player I's kth fuzzy goal G_1^k is a fuzzy set characterized by the membership function (MF) μ_1^k : D_1^k → [0, 1]. Similarly, Player II's lth fuzzy goal G_2^l is a fuzzy set characterized by the MF μ_2^l : D_2^l → [0, 1].

A fuzzy goal expresses the player's degree of satisfaction with the corresponding payoff [20]. Players are assumed to specify intervals for their degree of satisfaction, such as a ≤ p ≤ ā for Player I. For p < a, Player I's kth MF is μ^k(p) = 0; for p > ā we have μ^k(p) = 1; and for a ≤ p ≤ ā, μ^k(p) is assumed continuous and strictly increasing. If Player I's MF of the fuzzy goal μ^k(x A_k y) is a linear function, we then have, for any pair of mixed strategies (x, y),

$$ \mu^k(x A_k y) = \begin{cases} 1, & x A_k y \ge \bar a^k, \\ 1 - \dfrac{\bar a^k - x A_k y}{\bar a^k - a^k}, & a^k \le x A_k y \le \bar a^k, \\ 0, & x A_k y \le a^k, \end{cases} $$

where the payoff a^k gives the worst degree of satisfaction for Player I w.r.t. the kth objective and is computed as

$$ a^k = \min_{x \in X} \min_{y \in Y} x A_k y = \min_{i \in I} \min_{j \in J} a_{ij}^k. $$

On the contrary, the payoff ā^k gives the best degree of satisfaction for Player I w.r.t. the kth objective and is computed as

$$ \bar a^k = \max_{x \in X} \max_{y \in Y} x A_k y = \max_{i \in I} \max_{j \in J} a_{ij}^k. $$

The fuzzy decision rule of Bellman and Zadeh (see footnote 5) may then be used to aggregate the goals, such that

$$ \mu(x, y) = \min_{k \in K} \mu^k(x A_k y) = \min_{k \in K} \left( 1 - \frac{\bar a^k - x A_k y}{\bar a^k - a^k} \right). $$

Footnote 5: R.E. Bellman and L.A. Zadeh, Decision-making in a fuzzy environment, Management Sci., 17, 141-164, 1970. Another method for aggregating multiple fuzzy goals is weighting the objectives.
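As an illustration of the linear MF and of the min-aggregation rule, here is a short Mathematica sketch written under the linearity assumption above; it is not code from the paper, and the function names are chosen for this sketch only.

(* linear membership of an expected payoff v for a goal with worst/best payoffs aLo and aHi *)
mu[v_, aLo_, aHi_] := Clip[1 - (aHi - v)/(aHi - aLo), {0, 1}]

(* aggregated degree of attainment over all of Player I's goals (Bellman-Zadeh min rule),   *)
(* where each matrix's worst/best payoffs are its smallest and largest entries              *)
aggregate[x_, y_, As_List] := Min @@ (mu[x . # . y, Min[#], Max[#]] & /@ As)

(* usage: aggregate[xI, yII, {A1, A2, A3}] with the Section 2.3 payoff matrices *)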


We also have (see Nishizaki and Sakawa [20], p. 47)

$$ \mu(x, y) = \min_{k \in K} \left( \sum_{i=1}^{m} \sum_{j=1}^{n} \hat a_{ij}^k\, x_i y_j + c^k \right), $$

where

$$ \hat a_{ij}^k = \frac{a_{ij}^k}{\bar a^k - a^k}, \qquad c^k = -\,\frac{a^k}{\bar a^k - a^k}. $$

Theorem 2 (Equilibrium solution) An equilibrium solution w.r.t. the degree of attainment of the aggregated fuzzy goal is the optimal solution of the nonlinear programming problem

$$ \max_{x, y, \sigma_1, \sigma_2, p, q} \; \sigma_1 + \sigma_2 - p - q $$

subject to:

$$ x \hat A^k y + c_1^k \ge \sigma_1, \quad k \in \mathbb{N}_r, $$
$$ x \hat B^l y + c_2^l \ge \sigma_2, \quad l \in \mathbb{N}_s, $$
$$ \hat A^k y + c_1^k e_m \le p\, e_m, \quad \exists k \in \mathbb{N}_r, $$
$$ \hat B^{l\prime} x + c_2^l e_n \le q\, e_n, \quad \exists l \in \mathbb{N}_s, $$
$$ x' e_m = 1, \quad y' e_n = 1, \quad x \ge 0, \quad y \ge 0. $$

The optimal solutions are obtained by solving r × s problems, each including only one inequality from each of the two classes of r and s inequalities (≤). The optimal value of the selected problem must be zero (see proof by Nishizaki and Sakawa [20], p. 93).

2.3 Numerical example

In the following example, adapted from Nishizaki and Sakawa's example (p. 94 [20]), Players I and II have respectively three and four pure strategies, and three different objectives. The goals of the two players are fuzzy. The payoff matrices for Players I and II, respectively, are

$$ A_1 = \begin{pmatrix} 2 & 6 & 5 & 7 \\ 2 & 0 & 5 & 4 \\ 4 & 7 & 6 & 9 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 3 & 6 & 8 & 2 \\ 6 & 2 & 0 & 8 \\ 2 & 9 & 7 & 4 \end{pmatrix}, \quad A_3 = \begin{pmatrix} 1 & 4 & 7 & 2 \\ 3 & 6 & 1 & 8 \\ 2 & 5 & 3 & 9 \end{pmatrix}, $$

$$ B_1 = \begin{pmatrix} 1 & 6 & 1 & 7 \\ 8 & 2 & 3 & 4 \\ 4 & 9 & 3 & 5 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 8 & 2 & 0 & 8 \\ 1 & 9 & 7 & 6 \\ 5 & 2 & 8 & 5 \end{pmatrix}, \quad B_3 = \begin{pmatrix} 5 & 1 & 2 & 4 \\ 3 & 4 & 8 & 3 \\ 1 & 8 & 1 & 2 \end{pmatrix}. $$

The nonlinear programming problem is

$$ \max_{x, y, \sigma_1, \sigma_2, p, q} \; \sigma_1 + \sigma_2 - p - q $$

subject to:

$$ y_1(2x_1 + 2x_2 + 4x_3)/9 + y_2(6x_1 + 7x_3)/9 + y_3(5x_1 + 5x_2 + 6x_3)/9 + y_4(7x_1 + 4x_2 + 9x_3)/9 \ge \sigma_1, $$
$$ y_1(3x_1 + 6x_2 + 2x_3)/9 + y_2(6x_1 + 2x_2 + 9x_3)/9 + y_3(8x_1 + 7x_3)/9 + y_4(2x_1 + 8x_2 + 4x_3)/9 \ge \sigma_1, $$
$$ y_1(x_1 + 3x_2 + 2x_3)/8 + y_2(4x_1 + 6x_2 + 5x_3)/8 + y_3(7x_1 + x_2 + 3x_3)/8 + y_4(2x_1 + 8x_2 + 9x_3)/8 \ge \sigma_1, $$
$$ y_1(x_1 + 8x_2 + 4x_3)/8 + y_2(6x_1 + 2x_2 + 9x_3)/8 + y_3(7x_1 + 3x_2 + 3x_3)/8 + y_4(x_1 + 4x_2 + 5x_3)/8 - 1/8 \ge \sigma_2, $$
$$ y_1(8x_1 + x_2 + 5x_3)/9 + y_2(2x_1 + 9x_2 + 2x_3)/9 + y_3(7x_2 + 8x_3)/9 + y_4(8x_1 + 6x_2 + 5x_3)/9 \ge \sigma_2, $$
$$ y_1(5x_1 + 3x_2 + x_3)/7 + y_2(x_1 + 4x_2 + 8x_3)/7 + y_3(2x_1 + 8x_2 + x_3)/7 + y_4(4x_1 + 3x_2 + 2x_3)/7 - 1/7 \ge \sigma_2, $$
$$ (4y_1 + 7y_2 + 6y_3 + 9y_4)/9 \le p, $$
$$ (6x_1 + 2x_2 + 9x_3)/8 - 1/8 \le q, $$
$$ x_1 + x_2 + x_3 = 1, \quad y_1 + y_2 + y_3 + y_4 = 1, \quad x_1, x_2, x_3, y_1, y_2, y_3, y_4, p, q \ge 0. $$

The solutions of the 3 × 3 possible problems are shown in Fig. 3. The optimal value of the selected problem P8 is close to zero.

Figure 3: Optimal solutions

354

ISBN: 978-960-474-231-8

SELECTED TOPICS in APPLIED COMPUTER SCIENCE

Fig. 4 shows the best solutions obtained by using GENOCOP III for different numbers of generations, from 100 to 1,000,000. The optimal solutions for Player I are x_1* = x_3* = 0 and x_2* = 1, with a degree of attainment of the goal of 64.9 per cent. The optimal solutions for Player II are y_1* = .408, y_2* = .1196, y_3* = .2935 and y_4* = .1791, with a degree of attainment of the goal of 12.5 per cent.

Figure 4: Best solutions by using GA

3 Conclusion

This study has tried to explore some major difficulties of real-life problems: nonlinearities and discontinuities, multiplicity of constraints, uncertainty (the fuzzy environment), and interacting agents (game theory) with multiple objectives. Fuzzy bimatrix games are equivalent to nonlinear programming problems, whose resolution is performed by using a hybridized approach combining a genetic algorithm and a classical local optimization technique. The computations used a combination of the Mathematica software and the GA package GENOCOP III. Numerical examples have been presented to give more precision.

A Simple Genetic Algorithm with Mathematica

Evolutionary computations with Mathematica have notably been introduced by Bengtsson [2], Freeman [9] and Jacob [13].

Principles and Pseudo-code. Genetic algorithms (GAs) are stochastic search techniques whose procedures are inspired by the genetic processes of biological organisms, using encodings and reproduction mechanisms [12]. These principles may be adapted to real-world optimization problems (OPs) for which the traditional gradient methods may not be adequate. Let a simple OP be [max f(x) subject to x_l ≤ x ≤ x_u], where x, x_l, x_u ∈ R^n. In binary-coded GAs, each parameter value is encoded as a gene (binary string), and the genes are concatenated into a chromosome (a vector of parameter values). The problem is then translated into a combinatorial problem, the points of which are corners of a high-dimensional cube. Let P(t) be a population of potential solutions at generation t and C(t) the new individuals (offspring); the pseudo-code is shown as Algorithm A.1 [11].

Algorithm A.1: simple genetic algorithm
begin /* initial random population */
  t := 0;
  generate initial P(t);
  evaluate fitness of P(t);
  while (NOT finished) do
  begin /* new generation */
    for populationSize/2 do
    begin /* reproductive cycle */
      select two individuals for mating;
      recombine P(t) to yield offspring C(t);
      evaluate offspring's fitness;
      select P(t+1) from P(t) and C(t);
      t := t + 1;
    end
    if population has converged then finished := TRUE
  end
end

An initial population of individuals (chromosomes) is generated at random and will evolve over successive, improved generations towards the global optimum. The individuals evolve through successive generations t (iterations) by means of genetic operators. More precisely, a new population P(t + 1) is formed by selecting the fitter individuals, whose members undergo reproduction by means of crossover and mutation. Usually, a gene has converged when 95% of the population has the same value, and the population has converged when all the genes have converged.



Binary Encoding and Fitness. Let the OP simply be the scalar function [max f(x, y), x, y ∈ R]; each variable may be represented by a 5-bit binary number (see footnote 8). An individual (or chromosome) then contains two parameters (or genes) and consists of 10 binary digits, such as (x, y) = (8, 10)_{10} ≡ (01000|01010)_2. Let a simple OP with bounds be max f(x) = x sin(10πx) + 1, x ∈ [−1, 2]. Suppose that the required precision is six decimals. The range of x is 2 − (−1) = 3. The domain should be divided into at least 3 × 10^6 equal-size subranges. Since 2097152 = 2^21 ≤ 3 × 10^6 ≤ 2^22 = 4194304, 22 bits are required for the binary vector. The lower and upper bounds of x correspond to the 22-bit strings (000...000) and (111...111) respectively. The mapping of a 22-bit string (⟨b_21, b_20, ..., b_0⟩)_2 to a real number x in [−1, 2] requires two steps: first, convert the binary string to base 10,

$$ x' = \big(\langle b_{21}, b_{20}, \dots, b_0 \rangle\big)_2 = \Big( \sum_{i=0}^{21} b_i\, 2^i \Big)_{10}, $$

and second, find the corresponding real number

$$ x = -1 + x' \cdot \frac{3}{2^{22} - 1}. $$

For example, the string (1000101110110101000111)_2 represents 0.637197, since x' = 2288967 and x = −1 + 2288967 × 3/4194303 = .637197. The fitness of individuals depends on the performance of the corresponding phenotypes (objective function). The Mathematica module for decoding the chromosome is shown in Fig. A.1.

Footnote 8: Real-number encoding has been proposed to prevent some drawbacks of binary encoding [6]: in industrial engineering optimization problems, 100 variables in the range [−500, 500] with a six-digit precision would produce a binary solution vector of length 3000 and generate a search space of 10^1000.

Figure A.1: Coding procedure
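The decoding step can be sketched in a few lines of Mathematica (an illustrative reconstruction, not the module of Fig. A.1):

(* decode a 22-bit chromosome, given as a list of bits b21, ..., b0, into x in [-1, 2] *)
decode[bits_List] := -1 + FromDigits[bits, 2]*3/(2^22 - 1) // N

decode[{1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1}]   (* -> 0.637197 *)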

Genetic Operators. There are three types of operators for the reproduction phase: the selection operator, which picks the fitter individuals; the crossover operator, which creates new individuals by combining parts of the strings of two individuals; and the mutation operator, which makes one or more changes in a single individual string. Each gene (string) is selected with a probability proportional to its fitness value. The biased roulette-wheel mechanism consists of a wheel with N divisions whose sizes are proportional to the fitness values (see Fig. A.2). The wheel is spun N times, each time choosing the individual indicated by the pointer.

Figure A.2: Biased roulette-wheel

At a single crossover point, the chromosomes of two well-performing individuals (parents) are cut at some random position. The tail segments are then swapped over to create two new chromosomes (see Fig. A.3). The Mathematica primitives (see footnote 9) are shown in Figs. A.4-A.5. The mutation operator alters one or more genes of the offspring. The crossover and mutation probabilities, denoted by p_c and p_m respectively, are key control parameters besides the population size N.

Figure A.3: Single point crossover
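A compact Mathematica sketch of these three operators on bit-string chromosomes follows; it is an illustrative reconstruction, not the primitives of Figs. A.4-A.5, and it assumes positive fitness values for the roulette-wheel step.

(* fitness-proportional (roulette-wheel) selection of one chromosome *)
select[pop_List, fitness_List] := RandomChoice[fitness -> pop]

(* single-point crossover: swap tail segments of two parents *)
crossover[p1_List, p2_List] := Module[{cut = RandomInteger[{1, Length[p1] - 1}]},
  {Join[Take[p1, cut], Drop[p2, cut]], Join[Take[p2, cut], Drop[p1, cut]]}]

(* bit-flip mutation with probability pm per gene *)
mutate[chrom_List, pm_] := If[RandomReal[] < pm, 1 - #, #] & /@ chrom

(* example: two 6-bit parents *)
crossover[{1, 0, 1, 1, 0, 0}, {0, 1, 0, 0, 1, 1}]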

Footnote 9: The Mathematica primitives are adapted from Bengtsson [2] and Freeman [9].


Example. Let the OP be max f(x) = 1 + cos(πx)(3x mod 1), x ∈ [0, 1]. The exact solution is x* = .2739 with f(x*) = 1.5358. The application uses the simple Mathematica notebook due to Bengtsson [2]. The population size is 32, the string length is 6 and the mutation rate is .002. Fig. A.6 shows the initial and the final (eleventh) generations of chromosomes.
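A quick check of the stated optimum, assuming the objective as written above (not part of Bengtsson's notebook):

fEx[x_] := 1 + Cos[Pi x] Mod[3 x, 1]
NMaximize[{fEx[x], 0 <= x <= 1}, x, Method -> "DifferentialEvolution"]
(* approximately {1.5358, {x -> 0.2739}} *)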

Figure A.4: Simple GA crossover

Figure A.5: GA mutation

Figure A.6: Simple GA application

B GA-based Optimization Package GENOCOP III


The GENOCOP (GEnetic algorithm for Numerical Optimization of COnstrained Problems) system [4, 17, 18, 19] retains a floating-point representation: for a problem with n variables, the ith chromosome of a permissible solution is coded as an n-dimensional vector. The GENOCOP system initially creates a population of potential solutions. Two sub-populations are considered: the first population P_s consists of search points satisfying only the linear constraints, and the second population P_r consists of reference points satisfying all the constraints. The development in one population influences the evaluation of individuals in the other: infeasible search points are "repaired" for evaluation by means of the reference points, and the feasibility of the points in P_s is maintained through specific operators (see footnote 10). Linear equations must be eliminated at the beginning to prevent instabilities: the number of variables is reduced by substitution and supplementary inequality conditions are added. Each nonlinear equation h_k(x) = 0, k = q + 1, ..., m, is replaced by a pair of inequalities −ε ≤ h_k(x) ≤ ε, where the parameter ε defines the precision of the system. GENOCOP is available from the University of North Carolina at Charlotte, College of Engineering (USA): ftp.uncc.edu/coe/evol (file "genocopIII.tar.Z") (see footnote 11). Different data and control parameters are defined in the input file, such as: the linear inequalities and ranges for the variables, the population size, the number of generations, a "0" for a minimization problem or a "1" for a maximization objective, a "1" for a start from a single point (identical individuals) or a "0" for a start from a random population, the probability of replacement, etc. Figs. B.1 to B.3 illustrate and give more details about the example of Section 1.2. Fig. B.1 shows the feasible region F. From the convexity of the feasible region, it follows that for each point a ∈ F there exist feasible ranges, such as [x_a, x̄_a] for a fixed value y(a) and [y_a, ȳ_a] for a fixed x(a). The listing of the output file is shown in Fig. B.2. The solutions and errors for different numbers of generations are shown in Fig. B.3. The exact solution is practically obtained after 100 iterations of the GA approach.

Figure B.1: Feasible region and solution
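The z = a s + (1 − a) r sampling idea described in footnote 10 below can be illustrated in Mathematica as follows; this is a hedged reconstruction with an assumed feasibility predicate and assumed sample points, not GENOCOP's actual code.

(* repair an infeasible search point s using a feasible reference point r:     *)
(* sample z = a s + (1 - a) r, a random in (0, 1), until z is fully feasible   *)
repair[s_List, r_List, feasibleQ_] := Module[{a, z},
  While[True,
    a = RandomReal[];
    z = a s + (1 - a) r;
    If[feasibleQ[z], Return[z]]]]

(* usage with the feasible set of the Section 1.2 example and an interior reference point *)
feasibleQ[{x_, y_}] := -2 x - y + 1 <= 0 && -x - 3 y + 1 <= 0 &&
   (x + 3)^2 + 3 (y + 1)^2 - 25 <= 0 && -1 <= x <= 2 && -1/2 <= y <= 3/2

repair[{-1, -1/2}, {1, 1/2}, feasibleQ]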

Footnote 10: Given a search point s ∈ P_s, if s is fully feasible (s ∈ F), then eval(s) = f(s). Otherwise the system selects one of the reference points r in P_r and creates random points z such that z = a s + (1 − a) r, a being a random number. For a fully feasible z, we have eval(s) = eval(z) = f(z). If f(z) is better than f(r), then z replaces r as a new reference point (see [18] for more technical details).

Footnote 11: The implementation of the package has been adapted for this study by using the ACC 1.4 compiler from Absoft C/C++.

Figure B.2: Listing of the output file

Figure B.3: Best and exact solutions

References:
[1] J.F. Bard. Practical Bilevel Optimization: Algorithms and Applications. Kluwer Academic Publishers, Boston–Dordrecht–London, 1998.
[2] M.G. Bengtsson. Genetic algorithms. Mathematica 2.2 notebook, National Defense Research Establishment, Linkoping, Sweden, 1993. http://library.wolfram.com/.../MathSource/569/.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK–New York–Melbourne–Cape Town, 2004.
[4] D. Bunnag and M. Sun. Genetic algorithm for constrained global optimization. Appl. Math. Comput., 171:604–636, 2005.
[5] R. Chelouah and P. Siarry. Genetic and Nelder-Mead algorithms hybridized for a more accurate global optimization of continuous multiminima functions. European J. Oper. Res., 148:335–348, 2003.
[6] Y.-W. Chen. An alternative approach to the matrix non-cooperative game with fuzzy multiple objectives. J. Chin. Inst. Ind. Eng., 19(5):9–16, 2002.
[7] K. Deb. Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons, Chichester–New York–Weinheim, 2001.
[8] N. Durand and J.-M. Alliot. A combined Nelder-Mead simplex and genetic algorithm. Genetic and Evolutionary Computation Conference, Orlando, Florida, 1999. Available from: http://www.recherche.enac.fr/opti/papers/articles.
[9] J.A. Freeman. Simulating Neural Networks with Mathematica. Addison-Wesley, Reading, Mass.–Menlo Park, Cal.–New York, 1994.
[10] C. Geiger and C. Kanzow. Theorie und Numerik restringierter Optimierungsaufgaben. Springer Verlag, Berlin–Heidelberg–New York, 2000.
[11] M. Gen and R. Cheng. Genetic Algorithms & Engineering Optimization. John Wiley & Sons, New York–Chichester–Toronto, 2000.
[12] D.E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Publishing Co, Reading, MA–Menlo Park, Cal.–Sydney–Tokyo, 1989.
[13] C. Jacob. Illustrating Evolutionary Computation with Mathematica. Academic Press, San Diego, CA, 2001.
[14] A.A. Keller. Fuzzy multi-objective optimization modeling with Mathematica. WSEAS Trans. Syst., 8(3):368–378, 2009.
[15] A.A. Keller. Fuzzy multiobjective bimatrix game: introduction to the computational techniques. In N.E. Mastorakis, M. Demiralp, V. Mladenov, and Z. Boikovic, editors, Recent Advances in System Theory & Scientific Computation, Mathematics and Computers in Science and Engineering, pages 148–156, Moscow, Russia, August 2009. WSEAS, WSEAS Press.
[16] N.E. Mastorakis. Solving non-linear equations via genetic algorithms. In Proceedings of the 6th WSEAS Inter. Conf. on Evolutionary Computing, pages 24–28, Lisbon, Portugal, June 16-18, 2005.
[17] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer, Berlin–Heidelberg–New York, 1999.
[18] Z. Michalewicz and D.B. Fogel. How to Solve It: Modern Heuristics, 2nd edition. Springer, Berlin–Heidelberg–New York, 2004.
[19] Z. Michalewicz and M. Schoenauer. Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation, 1(4):1–32, 1996.
[20] I. Nishizaki and M. Sakawa. Fuzzy and Multiobjective Games for Conflict Resolution. Physica-Verlag, Springer, Heidelberg, 2001.
[21] M. Sakawa. Large Scale Interactive Fuzzy Multiobjective Programming. Physica-Verlag, Springer, Heidelberg–New York, 2000.
[22] M. Sakawa. Genetic Algorithms and Fuzzy Multiobjective Optimization. Kluwer Academic Publishers, Boston–Dordrecht–London, 2002.
[23] M. Sakawa and I. Nishizaki. Cooperative and Noncooperative Multi-Level Programming. Operations Research/Computer Science Interfaces. Springer Science+Business Media, Dordrecht–Heidelberg–London–New York, 2009.
[24] K.-B. Sim and J.-Y. Kim. Solution of multiobjective optimization problems: coevolutionary algorithm based on evolutionary game theory. Artif. Life Robotics, 8:174–185, 2004.