New Hybrid Matheuristics for Solving the Multidimensional Knapsack Problem

Saïd Hanafi¹,²,³, Jasmina Lazić⁴,⁵, Nenad Mladenović⁴,⁵, Christophe Wilbaut¹,²,³, and Igor Crévits¹,²,³

¹ Univ Lille Nord de France, F-59000 Lille, France
² UVHC, LAMIH, F-59313 Valenciennes, France
³ CNRS, FRE 3304, F-59313 Valenciennes, France
{said.hanafi,christophe.wilbaut,igor.crevits}@univ-valenciennes.fr
⁴ Brunel University, West London UB8 3PH, UK
⁵ Mathematical Institute, Serbian Academy of Sciences and Arts, Kneza Mihaila 36, 11000 Belgrade, Serbia
{Jasmina.Lazic,Nenad.Mladenovic}@brunel.ac.uk

Abstract. In this paper we propose new hybrid methods for solving the multidimensional knapsack problem. They can be viewed as matheuristics that combine mathematical programming with the variable neighbourhood decomposition search heuristic. In each iteration a relaxation of the problem is solved to guide the generation of the neighbourhoods. The problem is then enriched with a pseudo-cut to produce a sequence of not only lower but also upper bounds, so that the integrality gap is reduced. The results obtained on two sets of large scale multidimensional knapsack problem instances are comparable with those of the current state-of-the-art heuristics. Moreover, a few new best known results are reported for some large, long-studied instances.

Keywords: Integer Programming, Multidimensional Knapsack Problem, Decomposition, Matheuristics, Variable Neighbourhood Search.

1 Introduction

M.J. Blesa et al. (Eds.): HM 2010, LNCS 6373, pp. 118–132, 2010. © Springer-Verlag Berlin Heidelberg 2010

The 0-1 Multidimensional Knapsack Problem (MKP) is a resource allocation problem which can be formulated as follows:

\[
(P)\quad
\begin{array}{lll}
\max & \displaystyle\sum_{j=1}^{n} c_j x_j & \\
\text{subject to} & \displaystyle\sum_{j=1}^{n} a_{ij} x_j \le b_i & \forall i \in M = \{1, 2, \dots, m\} \\
& x_j \in \{0,1\} & \forall j \in N = \{1, 2, \dots, n\}
\end{array}
\]

Here, n is the number of items and m is the number of knapsack constraints. The right hand side b_i (i ∈ M) represents the capacity of knapsack i, A = [a_ij] is the weight matrix, whose element a_ij represents the resource consumption of the item j ∈ N


in the knapsack i ∈ M, and c_j (j ∈ N) is the profit of item j. The optimal objective function value of problem (P) is denoted by ν(P).

Like other knapsack problems, the MKP has a simple formulation and a significant number of practical applications. The MKP is known to be NP-hard [11] and is often used as a benchmark model for testing general purpose combinatorial optimization methods. There have thus been numerous contributions over several decades to the development of both exact and heuristic solution methods. Efficient exact methods for the cases m = 1 and m > 1 are proposed in [23,25] and [2,9], respectively. For the general case with m > 1, several efficient heuristic approaches can also be found in [3,15,30]. Among existing applications we can cite capital budgeting problems [21] and cutting stock problems [13]. For reviews of recent developments and applications of the MKP, the reader is referred to [7,31].

It is not easy to classify the existing optimization methods. Beyond the classical separation between exact and heuristic methods, several papers are devoted to the taxonomy of hybrid methods. Hybrid (or cooperative) methods are not new in the operational research community. This class of approaches includes several subclasses, among which techniques combining metaheuristics and exact algorithms have a dominating place. For instance, in [26] Puchinger and Raidl distinguish collaborative combinations, in which the algorithms exchange information but are not part of each other, from integrative combinations, in which one technique is a subordinate embedded component of another. Metaheuristic approaches and mathematical programming techniques are two highly successful streams, so it is not surprising that the community tries to exploit and combine the advantages of both. A new subclass of methods has recently appeared under the term matheuristic.
Matheuristics combine metaheuristics with approaches relying on mathematical programming problem formulations, and a number of methods for solving optimisation problems which can be considered as matheuristics have emerged over the last decades. Often, an exact optimisation method is used as a subroutine of the metaheuristic for solving a smaller subproblem [22]. Neighbourhood search metaheuristics, such as Variable Neighbourhood Search (VNS) [24], have proved to be very effective when combined with optimization techniques based on mathematical programming formulations [16]. Several methods combining the temporary fixation of some explicit or implicit variables to particular values with the complete exploration of (small) neighbourhoods have emerged in recent years, such as local branching (LB) [6], variable neighbourhood branching (VNB) [16], relaxation induced neighbourhood search (RINS) [4], and distance induced neighbourhood search (DINS) [12]. Following the ideas of LB, VNB and RINS, another method for solving mixed integer programming problems, based on the principles of Variable Neighbourhood Decomposition Search (VNDS) [17], was proposed in [20]. This method uses the solution of the linear relaxation of the initial problem to define subproblems to be solved within the VNDS framework. Hanafi and Wilbaut proposed in [18,30] some extensions and improvements of a convergent algorithm for 0-1 mixed integer programming (denoted as

LPA for Linear Programming-based Algorithm). LPA [27] solves a series of small subproblems generated by exploiting information obtained through a series of linear programming relaxations.

In this paper we propose new hybrid methods, which combine LPA with VNDS, for solving the MKP. Our method dynamically improves lower and upper bounds on the optimal value. Different heuristics are derived by choosing a particular strategy for updating the bounds, and thus defining different schemes for generating a series of subproblems. The results obtained on two sets of available and correlated instances show that our approach is efficient and effective:

– our proposed algorithms are comparable with the state-of-the-art heuristics;
– a few new best known lower bound values are obtained.

The remainder of the paper is organized as follows. In Section 2 we present the new hybrid matheuristics based on LPA and VNDS. In Section 3, computational results are presented and discussed. In Section 4, some conclusions are provided.
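To make the model from the introduction concrete, the following minimal sketch (our own illustration; the function name evaluate_mkp and the sample data are assumptions, not taken from the paper) evaluates a candidate 0-1 vector against an MKP instance:

```cpp
#include <cstddef>
#include <vector>

// Sketch: evaluate a 0-1 MKP solution x for the formulation (P):
// maximize c^T x subject to A x <= b, x binary.
// Returns the objective value, or -1 if a knapsack constraint is violated.
long long evaluate_mkp(const std::vector<std::vector<long long>>& A,
                       const std::vector<long long>& b,
                       const std::vector<long long>& c,
                       const std::vector<int>& x) {
    const std::size_t m = A.size(), n = c.size();
    for (std::size_t i = 0; i < m; ++i) {        // check each knapsack i
        long long load = 0;
        for (std::size_t j = 0; j < n; ++j) load += A[i][j] * x[j];
        if (load > b[i]) return -1;              // capacity b_i exceeded
    }
    long long profit = 0;                        // objective c^T x
    for (std::size_t j = 0; j < n; ++j) profit += c[j] * x[j];
    return profit;
}
```

Such a routine is the basic building block any MKP heuristic needs for scoring candidate solutions.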

2 New Hybrid VNDS Based Heuristics

Notation. As mentioned previously, both LPA and VNDS deal with reduced problems obtained by fixing some of the variables. Given an arbitrary binary solution x⁰ and an arbitrary subset of variables J ⊆ N, the problem reduced from the original problem P and associated with x⁰ and J can be defined as:

\[
P(x^0, J)\quad
\begin{array}{ll}
\max & c^T x \\
\text{s.t.} & Ax \le b \\
& x_j = x^0_j \quad \forall j \in J \\
& x_j \in \{0,1\} \quad \forall j \in N
\end{array}
\]

We further define the sub-vector associated with the set of indices J and solution x⁰ as x⁰(J) = (x⁰_j)_{j∈J}, the set of indices of variables with integer values as B(x) = {j ∈ N | x_j ∈ {0,1}}, and the set of indices of variables with value v ∈ {0,1} as Bᵛ(x) = {j ∈ N | x_j = v}. We also use the short form notation P(x⁰) for the reduced problem P(x⁰, B(x⁰)). If C is a set of constraints, we denote by (P | C) the problem obtained by adding all constraints in C to the problem P. To define the neighbourhood of a solution, we introduce the following partial distance between two arbitrary binary solutions x and y of the problem, relative to J ⊆ N (note that, when J = N, this distance is the Hamming distance):

\[
\delta(J, x, y) = \sum_{j \in J} |x_j - y_j|.
\]
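For binary vectors the partial distance δ(J, x, y) can be sketched as follows (partial_distance is a hypothetical helper of ours, not the authors' code):

```cpp
#include <vector>

// Sketch: partial Hamming distance delta(J, x, y) = sum_{j in J} |x_j - y_j|
// for binary vectors x, y; with J = {0, ..., n-1} this is the Hamming distance.
int partial_distance(const std::vector<int>& J,
                     const std::vector<int>& x,
                     const std::vector<int>& y) {
    int d = 0;
    for (int j : J) d += (x[j] != y[j]);  // |x_j - y_j| for 0-1 values
    return d;
}
```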

Let X be the solution space of the problem P considered. The neighbourhood structures {N_k | k = k_min, ..., k_max}, 1 ≤ k_min ≤ k_max ≤ |N|, can be defined


knowing the distance δ(N, x, y) between any two solutions x, y ∈ X. The set of all solutions in the kth neighbourhood of x ∈ X is denoted by N_k(x), where N_k(x) = {y ∈ X | δ(N, x, y) = k}.

Related work. LPA consists in generating two sequences of upper and lower bounds until the completion of an optimal solution of the problem is proved [18]. This is achieved by exactly solving a series of subproblems obtained from a series of relaxations (for instance the linear programming relaxation), adding in each iteration a pseudo-cut which guarantees that subproblems already explored are not revisited. It can be proved that, under certain conditions, LPA converges to an optimal solution of the problem considered, or indicates that the problem is infeasible, in a finite number of iterations [18,30]. In practice, the reduced problems within LPA can themselves be very complex, so LPA is normally used as a heuristic limited by a total number of iterations or a running time. For a detailed algorithmic description of LPA, the reader is referred to [18].

VNDS is a two-level VNS scheme for solving optimisation problems, based upon a decomposition of the problem [17]. Our new heuristics are based on a new variant of VNDS for solving 0-1 MIP problems, called VNDS-MIP [20]. It combines VNS with a general-purpose CPLEX MIP solver. A systematic hard variable fixing (or diving) is performed, following the variable neighbourhood search steps. Instead of solving just a single reduced problem P(x̄), as in the case of LPA, a series of subproblems P(x, J_k) is solved. The sets J_k ⊆ N, with J_0 = ∅, are chosen according to the distances δ(J_k, x, x̄), until an improvement of the incumbent objective value is reached. If there is an improvement, variable neighbourhood descent branching (denoted as VND-MIP in this paper) is performed as the local search in the whole solution space, and the whole process is reiterated with respect to the new incumbent solution.
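The decomposition step described above — rank the variables by their distance to the LP solution and fix the k variables that agree with it most — can be sketched as follows. This is our reading of the scheme, not the authors' code; choose_Jk is a hypothetical helper, and the incumbent x* and LP solution x̄ are passed in as plain vectors:

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Sketch: rank variables by |x*_j - xbar_j| between the incumbent x_star and
// the LP solution x_bar, and return the indices of the k variables closest to
// the LP solution. These form J_k, the set of variables fixed to their
// incumbent values in the reduced problem P(x*, J_k). Assumes 0 <= k <= n.
std::vector<int> choose_Jk(const std::vector<int>& x_star,
                           const std::vector<double>& x_bar, int k) {
    const int n = static_cast<int>(x_star.size());
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);          // 0, 1, ..., n-1
    std::stable_sort(idx.begin(), idx.end(), [&](int a, int b) {
        return std::fabs(x_star[a] - x_bar[a]) < std::fabs(x_star[b] - x_bar[b]);
    });
    idx.resize(k);                                 // keep the k closest indices
    return idx;
}
```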
A detailed algorithmic description of VNDS for 0-1 MIPs, denoted as VNDS-MIP, can be found in [20].

Combining LPA with VNDS. In the basic VNDS-MIP the search space is not reduced during the solution process (except for temporarily fixing the values of some variables). This means that the same solution vector may be examined many times, which may affect the efficiency of the solution process. In this section we propose to restrict the search space by introducing pseudo-cuts, as in LPA, in order to avoid multiple explorations of the same areas. Another objective is to strengthen the upper bound of the problem and to reorient the search by changing the LP solution. In VNDS-MIP, decomposition is always performed with respect to the solution of the linear relaxation LP(P) of the original problem P. In other words, the upper bound (in the context of maximisation) is computed only once and only the lower bound is updated during the search process. This way, the solution process ends as soon as all subproblems P(x, J_k) have been examined or the maximum time limit is reached. In order to introduce further diversification into the search process, pseudo-cuts δ(J, x̄, x) ≥ k, for some subset J ⊆ B(x̄) and a certain integer k ≥ 1, are added whenever subproblems P(x̄, J) are explored, completely or partially, by exact

VNDS-MIP-PC1(P, d, x*, k_vnd)
 1  Choose stopping criteria (set proceed1 = proceed2 = true);
 2  LB = c^T x*; P = (P | c^T x > LB);  // Add objective cut.
 3  while (proceed1) do
 4      Find an optimal solution x̄ of LP(P); set UB = ν(LP(P));
 5      if (B(x̄) = N) then LB = UB; goto 22;
 6      δ_j = |x*_j − x̄_j|; reorder the x_j so that δ_j ≤ δ_{j+1}, j = 1, ..., p − 1;
 7      Set n_d = |{j ∈ N | δ_j = 0}|, k_step = [n_d/d], k = p − k_step;
 8      while (proceed2 and k ≥ 0) do
 9          J_k = {1, ..., k}; x′ = MIPSOLVE(P(x*, J_k), x*);
10          if (c^T x′ > c^T x*) then
11              LB = c^T x′; P = (P | c^T x > LB);  // Update objective cut.
12              x* = VND-MIP(P, k_vnd, x′); LB = c^T x*; goto 17;
13          else
14              if (k − k_step > p − n_d) then k_step = max{[k/2], 1};
15              Set k = k − k_step;
16          endif
17          Update proceed2;
18      endwhile
19      Add pseudo-cut to P: P = (P | δ(B(x̄), x̄, x) ≥ 1);
20      Update proceed1;
21  endwhile
22  return LB, UB, x*.

Fig. 1. VNDS-MIP with pseudo-cuts

or heuristic approaches. The addition of these pseudo-cuts guarantees a change of the LP solution x̄ and also updates the current upper bound on the optimal value of the original problem. This way, even if there is no improvement when decomposition is applied with respect to the current LP solution, the search process continues with an updated LP solution. Finally, one obvious way to narrow the search space is to add the objective cut c^T x > c^T x*, where x* is the current incumbent solution, each time the objective function value is improved. This updates the current lower bound on the optimal objective value and reduces the new feasible region to only those solutions which are better (regarding the objective function value) than the current incumbent.

The pseudo-code of the new procedure, called VNDS-MIP-PC1, is presented in Figure 1. The input parameters of the VNDS-MIP-PC1 algorithm are an instance P of the MKP, a parameter d which defines the number of variables to be released in each iteration, an initial feasible solution x* of P, and the maximum size k_vnd of a neighbourhood explored within VND-MIP. The algorithm returns the best solution found within the stopping criteria defined by the variables proceed1 and proceed2.

Proposition 1. The total number of iterations of VNDS-MIP-PC1 is bounded by (3^n − 2^n)(d + log₂ n).

Proof. The number of iterations in the outer loop of VNDS-MIP-PC1 is limited by the number of all possible LP solutions which contain integer components.


There are \(\binom{n}{k} 2^k\) possible solutions with k integer components, so there are

\[
\sum_{k=0}^{n} \binom{n}{k} 2^k = 3^n
\]

possible LP solutions having integer components. However, the algorithm stops if the LP solution x̄ is integer feasible, so the 2^n solution vectors whose components are all integer do not contribute to the total number of outer loop iterations in VNDS-MIP-PC1. As a result, the total number of outer loop iterations in VNDS-MIP-PC1 is at most 3^n − 2^n. Since at most d + log₂ n reduced problems are examined within the inner loop of each outer iteration, the proposition follows. □

The optimal objective function value ν(P) of the current problem P is either the optimal value of (P | δ(B(x̄), x̄, x) ≥ 1) or the optimal value of (P | δ(B(x̄), x̄, x) = 0), i.e.,

ν(P) = max{ν(P | δ(B(x̄), x̄, x) ≥ 1), ν(P | δ(B(x̄), x̄, x) = 0)}.

If an improvement of the objective value is reached by solving subproblem P(x*, J_k), but the optimal value of P is ν(P | δ(B(x̄), x̄, x) = 0), then the solution process continues by exploring the solution space of (P | δ(B(x̄), x̄, x) ≥ 1) and fails to reach the optimum of P. Therefore, VNDS-MIP-PC1, if used as an exact method, provides a feasible solution of the initial input problem P in a finite number of steps, but does not guarantee the optimality of that solution. However, one can observe that if the subproblem P(x̄) is solved exactly before adding the pseudo-cut δ(B(x̄), x̄, x) ≥ 1 to P, then the algorithm converges to an optimal solution. In practice, when used as a heuristic with a time limit as the stopping criterion, VNDS-MIP-PC1 performs well (see Section 3).

In the previous algorithm, the solution space of P(x*, J_ℓ) is a subset of the solution space of P(x*, J_k), for k < ℓ, k, ℓ ∈ N. This means that, in each iteration of VNDS-MIP-PC1, when exploring the search space of the current subproblem P(x*, J_{k−k_step}), the search space of the previous subproblem P(x*, J_k) gets revisited.
In order to avoid this repetition, and possibly allow more time for the exploration of those areas of the P(x*, J_{k−k_step}) search space which were not examined before, we can discard the search space of P(x*, J_k) by adding the cut δ(J_k, x*, x) ≥ 1 to the current subproblem. The corresponding pseudo-code of this variant, called VNDS-MIP-PC2(P, d, x*, k_vnd), is obtained from VNDS-MIP-PC1(P, d, x*, k_vnd) (see Figure 1) by replacing line 9 with the following line 9′:

9′: J_k = {1, ..., k}; x′ = MIPSOLVE((P(x*, J_k) | δ(J_k, x*, x) ≥ 1), x*); P = (P | δ(J_k, x*, x) ≥ 1);

and by dropping line 19 (the pseudo-cut δ(B(x̄), x̄, x) ≥ 1 is not used in this heuristic). The following obvious properties of the pseudo-cut δ(J_k, x*, x) ≥ 1 are given in the next proposition.

Proposition 2. (i) The pseudo-cut δ(J_k, x*, x) ≥ 1 does not necessarily change the LP solution, but ensures that the current subproblem P(x*, J_k) is not examined again later in the search process; (ii) the pseudo-cut δ(J_k, x*, x) ≥ 1 does not discard the original optimal solution from the reduced search space.


Proof. Statement (i) is obvious, and (ii) holds since ν(P) = max{ν(P | δ(J_k, x*, x) ≥ 1), ν(P | δ(J_k, x*, x) = 0)}. □

The next proposition is obvious, so its proof is omitted.

Proposition 3. The VNDS-MIP-PC2 method finishes in a finite number of steps and either returns an optimal solution x* of the original problem (if LB = UB), or proves the infeasibility of the original problem (if LB > UB).
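The pseudo-cuts δ(J, x⁰, x) ≥ 1 used throughout this section admit a standard linear form when the reference point x⁰ is binary: |x_j − x⁰_j| equals x_j when x⁰_j = 0 and 1 − x_j when x⁰_j = 1, so the cut becomes Σ_{j∈J, x⁰_j=0} x_j − Σ_{j∈J, x⁰_j=1} x_j ≥ 1 − |{j ∈ J | x⁰_j = 1}|. A minimal sketch (pseudo_cut is a hypothetical helper of ours, not from the paper):

```cpp
#include <utility>
#include <vector>

// Sketch: linearize the pseudo-cut delta(J, x0, x) >= 1 for a binary
// reference point x0. Returns the coefficient vector over all n variables
// and the right-hand side of a ">=" constraint.
std::pair<std::vector<int>, int>
pseudo_cut(const std::vector<int>& J, const std::vector<int>& x0, int n) {
    std::vector<int> coef(n, 0);
    int ones = 0;                         // |{j in J : x0_j = 1}|
    for (int j : J) {
        if (x0[j] == 1) { coef[j] = -1; ++ones; }  // contributes (1 - x_j)
        else            { coef[j] = +1; }          // contributes x_j
    }
    return {coef, 1 - ones};              // sum_j coef_j * x_j >= 1 - ones
}
```

By construction the reference point x⁰ itself violates the cut, so it is removed from the feasible region, while every solution differing from x⁰ on J remains feasible.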

3 Computational Results

Hardware and software. All results presented were obtained on a Pentium 4 computer with a 3.4 GHz processor and 4 GB RAM, using the general purpose MIP solver CPLEX 11.1. Our algorithms are coded in C++ and compiled with g++ using the -O2 option.

Test bed. We validate our heuristics on two sets of available and correlated MKP instances. The first set is composed of 90 instances from the OR-Library, with n = 500 and m = 5, 10, 30 (denoted as 5.500, 10.500 and 30.500, respectively). Although the OR-Library also contains instances with n = 100 and n = 250, the larger instances with n = 500 are known to be difficult. In particular, the optimal solutions of the instances with m = 30 are not known, whereas the running time needed to prove the optimality of the solutions for the instances with m = 10 is in general very significant [2]. The second set is composed of 18 MKP problems generated by Glover and Kochenberger (GK) [14], with a number of items n between 100 and 2500 and a number of knapsack constraints m between 15 and 100. We selected these problems because they are known to be very hard to solve by branch-and-bound techniques.

Methods compared. As the VNDS-MIP algorithm is valid for general 0-1 MIPs, we can use it directly for solving the MKP. In this section we therefore compare three algorithms: VNDS-MIP, VNDS-MIP-PC1 and VNDS-MIP-PC2. Each algorithm is run once for each instance; due to the deterministic behaviour of the CPLEX solver, the objective function value obtained by a given algorithm on a particular instance is the same across runs.

Initial solution. All our variants start with the same initial solution, obtained by solving the input MKP instance with the CPLEX MIP solver with the parameter CPX_PARAM_INTSOLLIM set to 2 (the setting CPX_PARAM_INTSOLLIM = 1 yields the trivial solution vector whose components are all equal to 0). The CPU time required by CPLEX for this step is less than 1 second in all cases.
CPLEX parameters. The CPLEX MIP solver is used in each of the compared methods. We set CPX_PARAM_MIPEMPHASIS to FEASIBILITY until the first feasible solution is found, and then switch back to the default BALANCED setting.


Table 1. Average results on the OR-Library with 1 hour total running time limit

                   VNDS-MIP         VNDS-MIP-PC1     VNDS-MIP-PC2
class     α        Avg. Gap  #opt   Avg. Gap  #opt   Avg. Gap  #opt
5.500     0.25     0         10     0.002     6      0.002     7
          0.5      0         10
10.500
30.500
Global