Cahit Perkgoz et al./Asia Pacific Management Review (2005) 10(1), 29-35

An Interactive Fuzzy Satisficing Method for Multiobjective Stochastic Integer Programming Problems through a Probability Maximization Model

Cahit Perkgoz a,*, Masatoshi Sakawa a, Kosuke Kato a and Hideki Katagiri a

a Department of Artificial Complex Systems Engineering, Graduate School of Engineering, Hiroshima University

Accepted in October 2004. Available online.

Abstract

In this paper, we deal with multiobjective integer programming problems involving random variable coefficients in the objective functions and/or constraints. After reformulating these problems on the basis of a probability maximization model for chance constrained programming and incorporating the decision maker's fuzzy goals for the objective functions, we propose an interactive fuzzy satisficing method, a fusion of stochastic programming and fuzzy programming, that derives a satisficing solution for the decision maker using genetic algorithms.

Keywords: Multiobjective stochastic integer programming; Fuzzy programming; Probability maximization model; Genetic algorithms

1. Introduction

In the real world, we often encounter situations in which we have to make decisions under uncertainty because it is difficult to obtain all the information needed for decision making. For such decision making problems involving uncertainty, there are two typical approaches: stochastic programming and fuzzy programming. Stochastic programming techniques have been developed in various ways as optimization methods based on probability theory. In particular, for multiobjective linear programming problems involving random variable coefficients, Stancu-Minasian (1990) handled the minimum risk approach, while Leclercq (1982) and Teghem et al. (1986) presented interactive methods. On the other hand, fuzzy mathematical programming, which represents the ambiguity in a decision making situation by fuzzy concepts, has attracted the attention of many researchers. Fuzzy multiobjective linear programming techniques have been developed by numerous researchers, and many successful applications have appeared (Sakawa, 1993). As a hybrid of the stochastic and fuzzy approaches, Mohan and Nguyen (2001) proposed an interactive satisficing method, but it requires a large amount of work from the decision maker during the interaction procedure. In particular, Sakawa et al. (2000) proposed an interactive fuzzy satisficing method based on the expectation optimization model for multiobjective stochastic linear programming problems. Furthermore, they extended it to the simple recourse model (Sakawa et al., 2001) and the variance minimization model (Sakawa et al., 2002). However, these studies dealt only with the case in which the decision variables are continuous.

In this paper, we focus on multiobjective stochastic integer programming problems and present an interactive fuzzy satisficing method based on a probability maximization model. To handle the nonlinearity of the problems solved in the interactive fuzzy satisficing method and to cope with large-scale problems, we adopt genetic algorithms as the solution method.

2. Multiobjective Stochastic Integer Programming Problems

In this paper, we focus on the following multiobjective stochastic integer programming problem, in which the parameters in the objective functions and the right-hand sides of the constraints are random variables:

  minimize  z_l(x, ω) = c_l(ω)x,  l = 1, ..., k
  subject to  Ax ≤ b(ω)    (1)
              x_j ∈ {0, 1, ..., ν_j},  j = 1, ..., n

where x is an n-dimensional integer decision variable column vector for the decision maker (DM) and A is an m × n coefficient matrix. The c_l(ω), l = 1, ..., k, are assumed to be n-dimensional Gaussian random variable row vectors with mean vectors c̄_l and covariance matrices V_l = (v^l_jh), v^l_jh = Cov{c_lj(ω), c_lh(ω)}, j = 1, ..., n, h = 1, ..., n. The right-hand sides b_i(ω), i = 1, ..., m, are also assumed to be Gaussian random variables N(α_i, σ_i²), with means α_i and variances σ_i², independent of each other.

* Corresponding Author: Department of Artificial Complex Systems Engineering, Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, 739-8527 JAPAN, Tel: 0824-24-7693, Fax: 0824-22-7195, E-mail: [email protected]
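As a rough illustration of the stochastic model above, one can sample a realization ω of the random coefficients with the standard library alone. All numbers below are invented for this sketch, and for simplicity each c_lj(ω) is drawn independently using only the diagonal of V_l (a full sample would use a Cholesky factor of V_l):

```python
import random

# Hypothetical 2-objective, 2-variable, 1-constraint instance (all data invented).
c_mean = [[3.0, 5.0], [2.0, 1.0]]          # mean row vectors c̄_l, l = 1, 2
V = [[[0.5, 0.1], [0.1, 0.3]],             # covariance matrices V_l
     [[0.2, 0.0], [0.0, 0.4]]]
alpha, sigma = [10.0], [1.5]               # b_1(ω) ~ N(α_1, σ_1²)

def sample_scenario(rng):
    """Draw one realization ω of the random objective coefficients and RHS."""
    # Independent sampling with variances v^l_jj only (correlations ignored
    # in this sketch).
    c = [[rng.gauss(c_mean[l][j], V[l][j][j] ** 0.5) for j in range(2)]
         for l in range(2)]
    b = [rng.gauss(alpha[i], sigma[i]) for i in range(1)]
    return c, b

rng = random.Random(0)
c, b = sample_scenario(rng)
print(c, b)
```

Repeated calls with fresh randomness yield the different scenarios over which the chance constraints and probability objectives of the following sections are defined.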

3. Chance Constrained Programming

Since (1) contains random variable coefficients, solution methods for ordinary mathematical programming problems cannot be applied directly. We therefore treat the constraints in (1) as chance constrained conditions; that is, each constraint must be satisfied with at least a certain probability (satisficing level). Replacing the constraints in (1) by chance constrained conditions with satisficing levels β_i, i = 1, ..., m, the problem is converted to:

  minimize  z_l(x, ω) = c_l(ω)x,  l = 1, ..., k
  subject to  Pr{a_i x ≤ b_i(ω)} ≥ β_i,  i = 1, ..., m    (2)
              x_j ∈ {0, 1, ..., ν_j},  j = 1, ..., n

where a_i is the ith row vector of A and b_i(ω) is the ith element of b(ω). Using the distribution functions F_i(r) = Pr{b_i(ω) ≤ r} of the random variables b_i(ω), i = 1, ..., m, the ith constraint in (2) can be rewritten as:

  Pr{a_i x ≤ b_i(ω)} ≥ β_i ⇔ 1 − Pr{b_i(ω) ≤ a_i x} ≥ β_i ⇔ a_i x ≤ F_i⁻¹(1 − β_i)

Letting b̂_i = F_i⁻¹(1 − β_i) in the above equation, (2) can be transformed into the following problem:

  minimize  z_l(x, ω) = c_l(ω)x,  l = 1, ..., k
  subject to  Ax ≤ b̂    (3)
              x_j ∈ {0, 1, ..., ν_j},  j = 1, ..., n

where b̂ = (b̂_1, b̂_2, ..., b̂_m)^T. We denote the feasible region of (3) by X.

4. Probability Maximization Model

Substituting the maximization of the probability that each objective function z_l(x, ω) is less than or equal to a certain permissible level f_l for the minimization of the objective functions z_l(x, ω) = c_l(ω)x, l = 1, ..., k, in (3), the problem is converted to:

  maximize  p_l(x) = Pr{z_l(x, ω) ≤ f_l},  l = 1, ..., k
  subject to  Ax ≤ b̂    (4)
              x_j ∈ {0, 1, ..., ν_j},  j = 1, ..., n

Since all elements of c_l(ω) are assumed to be Gaussian random variables, c_l(ω)x is also a Gaussian random variable with mean c̄_l x and variance x^T V_l x. Then (c_l(ω)x − c̄_l x)/√(x^T V_l x) is a random variable following the standard Gaussian distribution with mean 0 and variance 1. Thereby, each objective function in (4) is rewritten as:

  p_l(x) = Pr{z_l(x, ω) ≤ f_l} = Pr{c_l(ω)x ≤ f_l}
         = Pr{ (c_l(ω)x − c̄_l x)/√(x^T V_l x) ≤ (f_l − c̄_l x)/√(x^T V_l x) }
         = Φ( (f_l − c̄_l x)/√(x^T V_l x) )

where Φ(·) is the distribution function of the standard Gaussian distribution. Therefore, (4) is converted to the following multiobjective integer programming problem:

  maximize  p_l(x) = Φ( (f_l − c̄_l x)/√(x^T V_l x) ),  l = 1, ..., k
  subject to  Ax ≤ b̂    (5)
              x_j ∈ {0, 1, ..., ν_j},  j = 1, ..., n

5. An Interactive Fuzzy Satisficing Method

In order to take into account the imprecise nature of the decision maker's judgments for each objective function in (5), we introduce fuzzy goals such as "p_l(x) should be substantially greater than or equal to a certain value". Then (5) can be rewritten as:

  maximize  μ_l(p_l(x)),  l = 1, ..., k
  subject to  Ax ≤ b̂    (6)
              x_j ∈ {0, 1, ..., ν_j},  j = 1, ..., n

where μ_l(·) is a membership function quantifying the fuzzy goal for the lth objective function in (5). To be more specific, if the decision maker feels that p_l(x) should be greater than or equal to at least p_l,0 and that p_l(x) ≥ p_l,1 (> p_l,0) is fully satisfactory, a typical membership function takes the shape shown in Figure 1. Here p_l,0 and p_l,1 are the points where the corresponding membership function μ_l(p_l(x)) is equal to zero and one, respectively.

Figure 1. Shape of a Membership Function

Since problem (6) is regarded as a fuzzy multiobjective decision making problem, there rarely exists a complete optimal solution that simultaneously optimizes all objective functions. As a reasonable solution concept for the fuzzy multiobjective decision making problem, Sakawa (2001) defined M-Pareto optimality on the basis of membership function values by directly extending Pareto optimality in the ordinary multiobjective programming problem.
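Under the Gaussian assumptions, the objective p_l(x) of (5) and a piecewise-linear membership function of the kind shown in Figure 1 are straightforward to evaluate. The following sketch uses Python's standard-library `NormalDist` as Φ; all numerical data are invented for illustration:

```python
from statistics import NormalDist

Phi = NormalDist().cdf  # standard Gaussian distribution function Φ(·)

def p_l(x, c_bar, V, f):
    """p_l(x) = Φ((f_l − c̄_l x) / sqrt(xᵀ V_l x)) for one objective l."""
    mean = sum(cj * xj for cj, xj in zip(c_bar, x))
    var = sum(x[j] * V[j][h] * x[h]
              for j in range(len(x)) for h in range(len(x)))
    return Phi((f - mean) / var ** 0.5)

def mu(p, p0, p1):
    """Piecewise-linear membership: 0 below p0, 1 above p1, linear between."""
    if p <= p0:
        return 0.0
    if p >= p1:
        return 1.0
    return (p - p0) / (p1 - p0)

# Invented data: integer x, c̄ = (3, 5), V = diag(0.5, 0.3), f = 20.
x = [2, 1]
prob = p_l(x, [3.0, 5.0], [[0.5, 0.0], [0.0, 0.3]], 20.0)
print(prob, mu(prob, 0.5, 0.9))
```

Here p0 and p1 play the roles of p_l,0 and p_l,1; composing `mu` with `p_l` gives exactly the objective μ_l(p_l(x)) of problem (6).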



By introducing an aggregation function μ_D(·) for the k membership functions in (6), the problem can be rewritten as:

  maximize  μ_D(x)
  subject to  Ax ≤ b̂    (7)
              x_j ∈ {0, 1, ..., ν_j},  j = 1, ..., n

where the aggregation function μ_D(·) represents the degree of satisfaction or preference of the decision maker for the k fuzzy goals. In the conventional fuzzy approaches, the minimum operator of Bellman and Zadeh (1970) and the product operator of Zimmermann (1978) are often adopted as aggregation functions. It should be emphasized, however, that such approaches are preferable only when the decision maker feels that the minimum operator or the product operator is appropriate; in general decision situations the decision maker does not always combine the fuzzy goals with either operator. Probably the most crucial problem with (6) is the identification of an appropriate aggregation function that adequately represents the decision maker's fuzzy preferences. If μ_D(·) can be explicitly identified, (6) reduces to a standard mathematical programming problem. However, this rarely happens, and as an alternative, interaction with the decision maker is necessary to find a satisficing solution of (6).

In an interactive fuzzy satisficing method, to generate a candidate for the satisficing solution that is also Pareto optimal, the decision maker is asked to specify aspiration levels of achievement for all membership functions, called the reference membership levels (Sakawa, 1993). For the decision maker's reference membership levels μ̄_l, l = 1, ..., k, the corresponding M-Pareto optimal solution, which is nearest to the requirements in the minimax sense or better than them if the reference membership levels are attainable, is obtained by solving the following augmented minimax problem:

  minimize  max_{l=1,...,k} { (μ̄_l − μ_l(p_l(x))) + ρ Σ_{l=1}^{k} (μ̄_l − μ_l(p_l(x))) }
  subject to  Ax ≤ b̂    (8)
              x_j ∈ {0, 1, ..., ν_j},  j = 1, ..., n

We can now construct the interactive algorithm to derive a satisficing solution for the decision maker from the M-Pareto optimal solution set. The steps marked with an asterisk involve interaction with the decision maker.

Interactive fuzzy satisficing method

Step 1*: Ask the decision maker to specify the satisficing levels β_i, i = 1, ..., m, for each of the constraints in (1).

Step 2*: Calculate the individual minimum z_l^min and maximum z_l^max of z̄_l(x) = E{z_l(x, ω)}, l = 1, ..., k, under the chance constrained conditions with satisficing levels β_i, i = 1, ..., m, by solving the following problems:

  minimize_{x ∈ X}  z̄_l(x),  l = 1, ..., k    (9)
  maximize_{x ∈ X}  z̄_l(x),  l = 1, ..., k    (10)

through a genetic algorithm with double strings using linear programming relaxation based on reference solution updating (GADSLPRRSU), a direct extension of the genetic algorithm with double strings based on reference solution updating (GADSRSU) for linear 0-1 programming problems (Sakawa, 2001). Here E{z_l(x, ω)}, l = 1, ..., k, are the expectations of the objective functions, and E{z_l(x, ω)} = c̄_l x, l = 1, ..., k, where c̄_l are the mean row vectors of the objective functions. Then ask the decision maker to specify the permissible levels f_l, l = 1, ..., k, for the objective functions with reference to z_l^min and z_l^max.

Step 3*: Calculate the individual minimum p_l,min and maximum p_l,max of p_l(x), l = 1, ..., k, in the multiobjective probability maximization problem (5) by solving the following problems through GADSCRRSU:

  minimize_{x ∈ X}  p_l(x),  l = 1, ..., k    (11)
  maximize_{x ∈ X}  p_l(x),  l = 1, ..., k    (12)

Then ask the decision maker to determine the membership functions μ_l(p_l(x)) for the objective functions in (5).

Step 4*: Ask the decision maker to set the initial reference membership levels μ̄_l, l = 1, ..., k.

Step 5: Solve the augmented minimax problem (8) corresponding to the reference membership levels μ̄_l, l = 1, ..., k, using GADSCRRSU.

Step 6*: Supply the decision maker with the optimal solution x* to (8) and the corresponding membership function and objective function values. If the decision maker is satisfied with the current solution, stop. Otherwise, ask the decision maker to update the reference membership levels μ̄_l, l = 1, ..., k, in view of the current membership function values μ_l(p_l(x*)), and return to step 5. Here it should be stressed to the decision maker that any improvement of one membership function can be achieved only at the expense of at least one of the other membership functions.

6. Genetic Algorithm with Double Strings Using Continuous Relaxation Based on Reference Solution Updating (GADSCRRSU)

In this section, we discuss GADSCRRSU, a general solution method for integer programming problems of the form:

  minimize  f(x)
  subject to  Ax ≤ b    (13)
              x_j ∈ {0, 1, ..., ν_j},  j = 1, ..., n
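Before turning to GADSCRRSU itself, the scalarized objective of the augmented minimax problem (8), which the algorithm attacks in step 5, can be illustrated by brute-force enumeration on a tiny instance. The enumeration is only a stand-in for the genetic algorithm, and all data, the membership functions, and the value of ρ are invented for this sketch:

```python
from itertools import product

RHO = 1e-4  # augmentation parameter ρ (a small positive constant)

def minimax_value(mu_vals, mu_ref, rho=RHO):
    """Objective of (8): max_l (μ̄_l − μ_l(p_l(x))) + ρ Σ_l (μ̄_l − μ_l(p_l(x)))."""
    devs = [r - m for r, m in zip(mu_ref, mu_vals)]
    return max(devs) + rho * sum(devs)

def solve_by_enumeration(nu, feasible, memberships, mu_ref):
    """Brute-force stand-in for GADSCRRSU on a tiny instance: enumerate
    every integer vector x with x_j ∈ {0, ..., ν_j} and keep the best."""
    best_x, best_v = None, float("inf")
    for x in product(*(range(v + 1) for v in nu)):
        if not feasible(x):
            continue
        v = minimax_value([m(x) for m in memberships], mu_ref)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

# Invented example: two fuzzy goals μ_l = min(1, x_l / 3), one knapsack-like
# constraint x_1 + x_2 ≤ 4, reference membership levels μ̄ = (1, 1).
best_x, best_v = solve_by_enumeration(
    nu=[3, 3],
    feasible=lambda x: x[0] + x[1] <= 4,
    memberships=[lambda x: min(1.0, x[0] / 3), lambda x: min(1.0, x[1] / 3)],
    mu_ref=[1.0, 1.0],
)
print(best_x, best_v)
```

With equal reference levels the minimax term balances the two shortfalls, so the enumeration settles on the balanced point (2, 2) rather than favoring one goal.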



In (13), x is an n-dimensional integer decision variable vector.

Figure 2. Double String Representation

6.1 Individual Representation

The individual representation by double strings shown in Figure 2 is adopted in GADSCRRSU. In the figure, each s(j), j = 1, ..., n, is the index of an element in a solution vector, and each g_s(j) ∈ {0, 1, ..., ν_j}, j = 1, ..., n, is the value of that element.

6.2 Decoding Algorithm

As in Sakawa (2001), a decoding algorithm of double strings for the integer programming problem (13) is constructed. In the algorithm, a feasible solution x*, called a reference solution, is used as the origin of decoding. Because solutions obtained by the decoding algorithm using a reference solution tend to concentrate around the reference solution, a reference solution updating procedure is adopted (Sakawa, 2001).

6.3 Usage of Continuous Relaxation

In order to find an approximate optimal solution with high accuracy in reasonable time, we need schemes such as restricting the search space to a promising region and generating individuals near the optimal solution. From this point of view, information about an optimal solution to the corresponding continuous relaxation problem

  minimize  f(x)
  subject to  Ax ≤ b    (14)
              0 ≤ x_j ≤ ν_j,  j = 1, ..., n

is used in the generation of the initial population and in mutation. When this problem is convex, we can obtain a global optimal solution by a convex programming technique, e.g., sequential quadratic programming. When it is non-convex, because it is difficult to find a global optimal solution, we search for an approximate optimal solution by approximate solution methods such as genetic algorithms or simulated annealing.

6.4 Fitness of an Individual and Scaling

Individuals with high fitness values will, on average, reproduce more often than those with low fitness values. In minimization problems such as the minimization of a cost function f(x), by introducing C_max satisfying C_max − f(x) ≥ 0, it is natural to define the fitness of an individual x as F̃(x) = C_max − f(x). However, the value of C_max is not known in advance; C_max may be taken as the largest f(x) value observed so far, or as the largest f(x) value in the current population. As a result, in minimization problems the fitness of an individual is defined as

  F̃(x) = C_max − f(x),  if f(x) < C_max
  F̃(x) = 0,  otherwise    (15)

In a reproduction operator based on the ratio of the fitness of each individual to the total fitness, such as the expected value model, it is frequently pointed out that the probability of selection depends on the relative ratio of the fitness of each individual. Thus, several scaling mechanisms have been introduced (Goldberg, 1989; Michalewicz, 1996). Here, linear scaling (Goldberg, 1989) is adopted. In linear scaling, the fitness F̃_i of an individual is transformed into F̃'_i according to F̃'_i = a·F̃_i + b, where the coefficients a and b are determined so that the mean fitness F̃_mean of the population is a fixed point and the maximal fitness F̃_max of the population becomes equal to c_mult·F̃_mean. The constant c_mult, usually set so that 1.2 ≤ c_mult ≤ 2.0, is the expected number of copies of the best individual in the current generation surviving into the next generation. In order that F̃'_i be nonnegative for all i, Goldberg (1989) proposed the following algorithm for linear scaling.

Algorithm for linear scaling

Step 1: Calculate the mean fitness F̃_mean, the maximal fitness F̃_max and the minimal fitness F̃_min of the population.

Step 2: If F̃_min > (c_mult·F̃_mean − F̃_max) / (c_mult − 1.0), go to step 3. Otherwise, go to step 4.

Step 3: Set a = (c_mult − 1.0)·F̃_mean / (F̃_max − F̃_mean) and b = F̃_mean·(F̃_max − c_mult·F̃_mean) / (F̃_max − F̃_mean), and go to step 5.

Step 4: Set a = F̃_mean / (F̃_mean − F̃_min) and b = −F̃_min·F̃_mean / (F̃_mean − F̃_min), and go to step 5.

Step 5: Calculate F̃'_i = a·F̃_i + b for i = 1, 2, ..., N.

6.5 Termination Conditions

When applying genetic algorithms to multiobjective integer programming problems, an appropriately precise solution must be obtained in reasonable time. For this reason, two parameters I_min and I_max, which denote the minimum and maximum numbers of generations to be searched, are introduced. The following termination conditions are imposed:
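The fitness definition (15) and the linear scaling algorithm above translate directly into code. A minimal sketch follows, with c_mult = 1.8 chosen from the stated range and a guard added for the degenerate all-equal population that the steps above leave implicit:

```python
def raw_fitness(f_x, c_max):
    """Fitness (15) for minimization: C_max − f(x) when positive, else 0."""
    return c_max - f_x if f_x < c_max else 0.0

def linear_scaling(F, c_mult=1.8):
    """Goldberg's linear scaling F' = a·F + b, keeping the mean fitness fixed
    and mapping the maximum toward c_mult times the mean (steps 1-5 above)."""
    f_mean = sum(F) / len(F)                       # step 1
    f_max, f_min = max(F), min(F)
    if f_max == f_mean:                            # degenerate population guard
        return list(F)
    # Step 2: use the steeper slope only when it keeps every F' nonnegative.
    if f_min > (c_mult * f_mean - f_max) / (c_mult - 1.0):
        a = (c_mult - 1.0) * f_mean / (f_max - f_mean)            # step 3
        b = f_mean * (f_max - c_mult * f_mean) / (f_max - f_mean)
    else:
        a = f_mean / (f_mean - f_min)                             # step 4
        b = -f_min * f_mean / (f_mean - f_min)                    # maps min to 0
    return [a * f + b for f in F]                  # step 5
```

For example, `linear_scaling([1.0, 2.0, 3.0])` keeps the mean at 2.0 while stretching the best fitness to 1.8 × 2.0 = 3.6, whereas a population with an outlying low fitness falls through to step 4, which pins the minimum at zero instead.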



• If t > I_min and (F̃_max − F̃_mean) / F̃_mean < ε, stop. Here F̃_mean and F̃_max are the mean fitness and the maximal fitness of the population, respectively, and ε is the convergence criterion.
• If t > I_max, stop.
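The two stopping rules above can be collected into a small predicate; the following sketch assumes a generation counter t and an invented default tolerance, with a guard for a nonpositive mean fitness that the conditions leave implicit:

```python
def should_stop(t, fitnesses, i_min, i_max, eps=1e-3):
    """Termination test of section 6.5: stop once the population has converged
    after at least I_min generations, or unconditionally after I_max."""
    if t > i_max:
        return True
    f_mean = sum(fitnesses) / len(fitnesses)
    if f_mean <= 0:                  # guard added for this sketch
        return False
    return t > i_min and (max(fitnesses) - f_mean) / f_mean < eps
```

A caller would evaluate this once per generation, so the search always runs at least I_min generations and never more than I_max.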

Step 3: Choose two crossover points h, k (h ≠ k) from {1, 2, ..., n} at random. Then set l := h. First, perform the operations in steps 4 - 6 for X' and Y.

Step 4: Let j := ((l − 1) % n) + 1 (p % q is defined as the remainder when an integer p is divided by an integer q). After finding j' such that s_Y(j) = s_X'(j'), interchange (s_X'(j), g_s_X'(j)) with (s_X'(j'), g_s_X'(j')). Then set l := l + 1 and go to step 5.

Step 5: (1) If h < k and l > k, go to step 6; if h < k and l ≤ k, return to step 4. (2) If h > k and l > (k + n), go to step 6; if h > k and l ≤ (k + n), return to step 4.

Step 6: (1) If h < k, let g_s_X'(j) := g_s_Y(j) for all j such that h ≤ j ≤ k, and go to step 7. (2) If h > k, let g_s_X'(j) := g_s_Y(j) for all j such that 1 ≤ j ≤ k or h ≤ j ≤ n, and go to step 7.

Step 7: Carry out the same operations as in steps 4 - 6 for Y' and X.

Step 8: Preserve X' and Y' as the offspring of X and Y.

Step 9: If r