Computational Optimization and Applications, 34, 63–83, 2006. © 2005 Springer Science + Business Media, Inc. Manufactured in The Netherlands. DOI: 10.1007/s10589-005-3076-x

Solving Convex MINLP Optimization Problems Using a Sequential Cutting Plane Algorithm

CLAUS STILL [email protected]
Department of Mathematics, Åbo Akademi University, Fänriksgatan 3B, FIN-20500, Åbo, Finland

TAPIO WESTERLUND [email protected]
Process Design Laboratory, Åbo Akademi University, Biskopsgatan 8, FIN-20500, Åbo, Finland

Received December 12, 2003; Revised April 6, 2005
Published online: 18 October 2005

Abstract. In this article we look at a new algorithm for solving convex mixed integer nonlinear programming problems. The algorithm uses an integrated approach, where a branch and bound strategy is combined with solving nonlinear programming problems at each node of the tree. The nonlinear programming problems at each node are not solved to optimality; rather, one iteration step is taken at each node and then branching is applied. A Sequential Cutting Plane (SCP) algorithm is used for solving the nonlinear programming problems by solving a sequence of linear programming problems. The proposed algorithm generates explicit lower bounds for the nodes in the branch and bound tree, which is a significant improvement over previous algorithms based on QP techniques. Initial numerical results indicate that the described algorithm is a competitive alternative to other existing algorithms for these types of problems.

Keywords: convex programming, branch and bound, cutting plane algorithms, mixed integer nonlinear programming

1. Introduction

This paper considers solving convex mixed integer nonlinear programming (MINLP) problems of the form

$$\begin{aligned} \min \; & f(x, y) \\ \text{s.t.} \; & g(x, y) \le 0, \\ & x \in X, \; y \in Y, \end{aligned} \tag{P}$$

where $X = \{ x \in \mathbb{R}^{n_r} \mid x^{LB} \le x \le x^{UB} \}$ is a bounded, box-constrained set and $Y = \{ y \in \mathbb{Z}^{n_z} \mid y^{LB} \le y \le y^{UB} \}$ is a finite, bounded set. We assume the functions $f$ and $g$ are convex and continuously differentiable.

MINLP problems are important in a number of applications, such as chemical process synthesis [16], product marketing [18], capital budgeting [27], portfolio optimization [9] and trim-loss optimization [21, 22]. Additionally, new methods have been developed for solving non-convex problems containing posynomials; these problems are solved by solving a sequence of convex MINLP problems [5]. Efficient algorithms for solving convex MINLP problems are, therefore, also important in a global optimization context.

2. Background

Early MINLP methods, such as Outer Approximation [13], solve these types of problems by decomposing them into a sequence of mixed integer linear programming (MILP) and nonlinear programming (NLP) problems. More information about Outer Approximation and other MINLP algorithms can be found in, for instance, [16] and [17].

The Extended Cutting Plane (α-ECP) algorithm, introduced in [35], uses another approach. Rather than solving a sequence of MILP and NLP problems, it solves a sequence of MILP problems only. In each iteration, one or several cutting planes are constructed based on the current iterate. These cutting planes, together with the cutting planes generated in previous iterations, are used as constraints in an MILP problem. The MILP problem is solved and the current iterate is updated to be the solution of the MILP problem. The procedure is repeated until the current iterate is feasible, in which case the iterate is an optimal solution to the original MINLP problem (a minimal sketch of this loop is given at the end of this section). The original α-ECP algorithm has since been extended to pseudo-convex MINLP optimization problems [28, 33, 36, 38]. It may be observed here that not all intermediate MILP problems need to be solved to optimality; any feasible integer point of the MILP problem may be used to generate new cutting planes, as long as the last MILP problem is solved to optimality. Solving all MILP problems to optimality is, however, a good default strategy for smaller problems.

The nonlinear branch and bound algorithm [10, 20] takes the opposite approach to the α-ECP algorithm. Here a single branch and bound tree is solved such that each node in the tree is an NLP problem. Thus, rather than solving multiple MILP problems, where each MILP problem is solved by a branch and bound tree with linear subproblems, the algorithm solves a single branch and bound tree with nonlinear subproblems. The nonlinear branch and bound algorithm first creates the root node by relaxing the integer requirements of the MINLP problem. New nodes are created in the tree by solving the NLP subproblem in some existing leaf of the tree and then branching on any integer variable taking a non-integer value in the solution point of the NLP subproblem. Once integer solutions are found, they provide upper bounds on the optimal value of the original MINLP problem. The optimal value of an NLP subproblem, again, provides a lower bound on the optimal value for that particular branch of the tree. The tree is searched until no unexamined nodes remain with potentially better solutions than the currently obtained best solution. The package BARON described in [31] solves MINLP problems by branch and bound.

In a more recent development, based on ideas from [6], the algorithm described in [26] branches early, before solving the NLP problem to optimality. The NLP problems are solved by a Sequential Quadratic Programming (SQP) method that is interrupted before an optimal solution is found. Typically, only one quadratic programming (QP) problem will be solved in each node of the tree. Thus, the total number of QP problems solved is typically smaller compared to solving each node to optimality. The algorithm in [26] was compared to MINLP-BB, a standard nonlinear branch and bound algorithm, and numerical results in the paper indicate an improvement over nonlinear branch and bound by a factor of up to three. Another algorithm based on solving one branch and bound tree is described in [29]. Here LP problems are solved at non-integer nodes and NLP problems at integer nodes.
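To make the α-ECP loop described above concrete, here is a minimal Python sketch under our own assumptions: the objective is linear with coefficient vector c (a nonlinear objective is handled in α-ECP via an additional epigraph-style constraint), and solve_milp is a hypothetical helper that minimizes c subject to the integrality requirements, the linear constraints of the problem and the accumulated cuts. This is an illustration of the idea only, not the α-ECP implementation [37].

```python
import numpy as np

def ecp_loop(c, g, jac_g, solve_milp, z0, eps_g=1e-3, max_iter=1000):
    """Illustrative sketch of the extended cutting plane idea [35]: linearize
    the violated nonlinear constraints at the current MILP solution, add the
    cuts to the master MILP and re-solve until the iterate is feasible."""
    z, cuts = np.asarray(z0, dtype=float), []
    for _ in range(max_iter):
        gz = g(z)
        if np.max(gz) <= eps_g:
            return z                      # feasible, hence optimal (convex case)
        J = jac_g(z)                      # Jacobian of the nonlinear constraints
        for j in np.where(gz > eps_g)[0]:
            # cut: g_j(z) + grad g_j(z)^T (w - z) <= 0, i.e. J[j]·w <= J[j]·z - g_j(z)
            cuts.append((J[j], J[j] @ z - gz[j]))
        z = solve_milp(c, cuts)           # cuts accumulate between iterations
    raise RuntimeError("iteration limit reached without a feasible iterate")
```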


In this article we introduce a new algorithm for solving MINLP problems, the Sequential Cutting Plane (SCP) algorithm. The algorithm integrates cutting plane techniques with branching techniques. Rather than solving a linearized MILP problem to feasibility or optimality, we apply cutting planes in each node of the branch and bound tree. The technique differs from the α-ECP method, where the generation of cutting planes is separated from the branching process. For the NLP subproblems, we solve a sequence of linear programming (LP) problems utilizing ideas from [34], in contrast to the algorithm in [26], which is based on the filterSQP method [14, 15]. The filterSQP method solves a sequence of QP problems in order to solve the NLP subproblems. Note that the SCP algorithm could also be considered a form of Successive Linear Programming (SLP), as the version described in this paper does not accumulate the cutting planes between the iterations. However, a more general version of the SCP algorithm could, if desired, also retain the cutting planes between the LP subiterations.

The new algorithm has several interesting features. In the algorithm, we solve only one branch and bound tree. Thus, we need not solve a new MILP problem from scratch each time we apply new cutting planes. We also avoid having to solve each NLP subproblem to optimality by branching early. The proposed algorithm also generates explicit lower bounds for each node in the branch and bound tree, which is a significant improvement compared to [26]. In that algorithm, you only obtain implicit bounds on the nodes and have to solve a QP problem in order to determine whether to drop a node or not. In the proposed algorithm, you obtain explicit lower bounds on the nodes when performing NLP iterations in the nodes. The first LP subiteration within an NLP iteration provides a lower bound on the node. When branching, the child nodes inherit the lower bound of the parent node. Whenever the current upper bound is improved, you may drop any node with a lower bound greater than or equal to the current upper bound. You may, therefore, in some cases drop nodes in the tree without solving any additional LP problems for those nodes. Explicit lower bounds have a significant impact on the convergence speed, as they mean fewer subproblems solved.

Contrary to [26], we solve a sequence of LP problems rather than a sequence of QP problems. Numerical experiments indicate that the NLP part of the SCP algorithm has good convergence properties when the iterate is far from an optimal solution. We therefore get good estimates of the optimal value of each NLP subproblem in the branch and bound tree already after applying one NLP iteration, which in turn makes the tree smaller. The numerical results support this claim.

3. SCP Algorithm

3.1. Overview

The algorithm builds a branch and bound tree where each node represents a relaxed NLP subproblem of the original problem (P). Each NLP subproblem is solved using a sequence of LP problems, but the NLP subproblem is not solved to optimality. We then choose an integer variable with a non-integral value in the current iterate and branch on this variable, generating two new NLP subproblems. The first LP problem in an NLP iteration provides a lower bound for the optimal value of the NLP subproblem. The lower bounds of the nodes can be used for removing nodes from the tree any time we improve the currently best known solution for the original problem (P).

We summarize the SCP algorithm in pseudo-code in Algorithm 1. The tolerances for the algorithm used in the numerical experiments are given in Section 4.2. Next we take a closer look at the different parts of the algorithm.
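Algorithm 1 itself is not reproduced here, so the following Python sketch is our reconstruction of the main loop from this section and Sections 3.2–3.7; it is not the authors' pseudo-code. The node type and the helpers one_nlp_iteration (one SCP iteration, Section 3.3) and branch (Section 3.2) are assumptions, and a plain stack gives the depth-first default of Section 3.5.

```python
import math

def scp_branch_and_bound(root, one_nlp_iteration, branch, eps_int=1e-6):
    """Sketch of the SCP main loop (our reconstruction, not Algorithm 1).
    Nodes carry bounds on y plus an inherited explicit lower bound `lower`;
    `one_nlp_iteration` returns None if LP(1) was infeasible, otherwise
    (x, y, f_val, feasible, lp1_bound)."""
    upper, incumbent = math.inf, None
    stack = [root]                          # stack = depth-first (Section 3.5)
    while stack:
        node = stack.pop()
        if node.lower >= upper:             # fathom on the inherited bound
            continue
        res = one_nlp_iteration(node)       # a single NLP iteration, then branch
        if res is None:                     # infeasible node (Section 3.3.2)
            continue
        x, y, f_val, feasible, lp1_bound = res
        node.lower = max(node.lower, lp1_bound)   # explicit bound (3)
        if node.lower >= upper:
            continue
        if all(abs(v - round(v)) <= eps_int for v in y):
            # integer iterate: in the full algorithm such a node is iterated
            # to optimality (Section 3.9); here it simply updates the incumbent
            if feasible and f_val < upper:
                upper, incumbent = f_val, (x, y)
            continue
        for child in branch(node, y):       # split on a fractional y_l (3.2/3.4)
            stack.append(child)             # children inherit node.lower
    return incumbent, upper
```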

3.2. Branch and Bound

We solve the original problem (P) by solving a sequence of relaxed NLP subproblems

$$\begin{aligned} \min \; & f(x, y) \\ \text{s.t.} \; & g(x, y) \le 0, \\ & x \in X, \; y \in Y^k, \end{aligned} \tag{\mathrm{NLP}^k}$$

where the integrality requirements have been dropped for $Y^k$ and additional upper and lower bounds have been added for the variables $y_i$ during the branching process.

For each subproblem (NLP$^k$) we record the current iterate for the problem, say $(x^k, y^k)$. When branching on the current iterate, we select a branching variable $y_l^k$, which may be any variable not having an integer value. We then create two new NLP problems (NLP$^L$) and (NLP$^R$), where $Y^L$ has the same bounds as $Y^k$ and an additional upper bound $y_l \le \lfloor y_l^k \rfloor$, where $\lfloor y_l^k \rfloor$ is the integer value closest to $y_l^k$ from below. Similarly, $Y^R$ has the same bounds as $Y^k$ and an additional lower bound $y_l \ge \lceil y_l^k \rceil$, where $\lceil y_l^k \rceil$ is the integer value closest to $y_l^k$ from above. We refer to (NLP$^L$) as the left node and (NLP$^R$) as the right node of the parent node (NLP$^k$). A sketch of this bound update is given below.

By successively repeating the branching procedure, the integer variables are eventually forced to take integer values. Also, since we assumed $Y$ contains a finite number of integer values, the branch and bound tree will be finite.
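As a small illustration (the helper name and the list representation of the bounds are ours), creating the bound sets of the two child nodes:

```python
import math

def branch_bounds(y_lb, y_ub, l, y_l_k):
    """Create the bound sets of the two child nodes when branching on y_l:
    the left node (NLP^L) adds y_l <= floor(y_l^k), the right node (NLP^R)
    adds y_l >= ceil(y_l^k); all other bounds are inherited unchanged."""
    left_lb, left_ub = list(y_lb), list(y_ub)
    right_lb, right_ub = list(y_lb), list(y_ub)
    left_ub[l] = math.floor(y_l_k)
    right_lb[l] = math.ceil(y_l_k)
    return (left_lb, left_ub), (right_lb, right_ub)
```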

3.3. Solving NLP subproblems

Similar to the algorithms described in [6] and [26], we do not solve the NLP subproblems (NLP$^k$) to optimality. We may interrupt the NLP procedure before an optimal point has been found in order to make the branch and bound algorithm faster. If the current iterate is converging to a non-integral point, we may branch early on any variable in $y^k$ that has a non-integral value, rather than waste effort on finding an optimal, non-integer solution for the current subproblem.

Contrary to [6] and [26], which both use an SQP solver for solving the NLP subproblems, we use an NLP version of the Sequential Cutting Plane algorithm, described in [34], to solve the NLP subproblems. The difference from the SQP approach is that we solve a sequence of LP problems rather than QP problems in order to find a solution to the NLP subproblem. Numerical experience with the SCP algorithm indicates that the algorithm has good convergence properties also when iterates are far from an optimal solution. The algorithm is competitive when compared to algorithms based on an SQP approach, as the importance of quadratic convergence properties close to an optimal solution is smaller in a branch and bound environment. Especially for early branching, it is important that a single iteration in each subproblem produces a fairly good estimate of the optimal value of that subproblem.

3.3.1. Overview of the SCP algorithm for NLP problems. The NLP version of the SCP algorithm [34] solves the NLP problem (NLP$^k$) to optimality by performing a sequence of NLP iterations. In each NLP iteration, a sequence of LP subiterations is performed. In each subiteration $(i)$ within the NLP iteration, an LP problem (LP$^{(i)}$) is generated at the current iterate $(x^k, y^k)$. The LP problem is of the form

$$\begin{aligned} \min \; & \nabla f^{(k)T} d \\ \text{s.t.} \; & g^{(k)} + \nabla g^{(k)T} d \le 0, \\ & (d^{(r)})^T H^{(i)} d = 0, \quad r = 1, \ldots, i-1; \; i > 1, \\ & x^k + d_x \in X, \; y^k + d_y \in Y^k, \end{aligned} \tag{\mathrm{LP}^{(i)}}$$


where $\nabla f^{(k)} = \nabla f(x^k, y^k)$, $\nabla g^{(k)} = \nabla g(x^k, y^k)$ and $g^{(k)} = g(x^k, y^k)$. Furthermore, $d = (d_x, d_y)$ and $d^{(r)}$, $r = 1, \ldots, i-1$, are the previously obtained search directions within the NLP iteration. Also, $H^{(i)}$ is the current estimate of the Hessian of the Lagrangian

$$L(x, y, \lambda) = f(x, y) + \sum_{j=1}^{m} \lambda_j g_j(x, y).$$

The Hessian estimate may be obtained, for instance, using the BFGS update formula, see [19]. The BFGS update formula was used in our implementation of the algorithm. Note that exact Hessians could also be used, if they are known for the objective $f$ and the constraint functions $g_j$. The dual optimal solution to the LP problem (LP$^{(i)}$) provides a Lagrange multiplier estimate, which can further be used for estimating the Hessian of the Lagrangian. Thus, the Hessian approximation can be updated in each LP subiteration.

The solution to each LP problem (LP$^{(i)}$) provides a search direction $d^{(i)} = (d_x^{(i)}, d_y^{(i)})$. A line search is then performed in the obtained search direction, minimizing a modified function based on the Lagrangian of (NLP$^k$),

$$\tilde{L}(x, y, \lambda) = f(x, y) + \sum_{j=1}^{m} \lambda_j g_j(x, y)^+ + \rho \sum_{j=1}^{m} \left( g_j(x, y)^+ \right)^2,$$

where $g_j(x, y)^+ = \max(g_j(x, y), 0)$ and $\rho > 0$ is a penalty parameter. The current iterate $(x^k, y^k)$ is then updated, $(x^k, y^k) := (x^k, y^k) + \alpha^{(i)} (d_x^{(i)}, d_y^{(i)})$, where $\alpha^{(i)}$ is the step length found in the line search.

A new LP problem is then constructed at the updated iterate $(x^k, y^k)$. The new LP problem is constructed in a similar way as the previous one, with equality constraints requiring the new search direction to be conjugate to the previously obtained search directions with respect to the current estimate of the Hessian of the Lagrangian. Hence $d$ is computed from problem (LP$^{(i)}$) as a conjugate direction to the old directions $d^{(r)}$ by using the linear equality constraints

$$(d^{(r)})^T H^{(i)} d = 0, \quad r = 1, \ldots, i-1. \tag{1}$$

The new LP problem is then solved, a new line search is performed and the iterate is updated again. The procedure is repeated until the LP problem becomes infeasible, a sufficient number of steps have been taken, or the current solution to the LP problem is sufficiently close to zero. For NLP problems, the NLP iteration would then be repeated until we find an optimal point. However, for MINLP problems we perform only one NLP iteration in each node of the branch and bound tree and branch early in order to improve the performance of the algorithm. Convergence properties of the NLP algorithm have been analyzed in [34].
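The following is a hedged SciPy sketch of one such NLP iteration, under simplifying assumptions of ours: the Hessian estimate H is held fixed within the iteration instead of being BFGS-updated from each LP dual solution, the multiplier estimate lam is taken as given, and a simple bounded line search stands in for the one in [34].

```python
import numpy as np
from scipy.optimize import linprog, minimize_scalar

def nlp_iteration(z, lam, f, grad_f, g, jac_g, lb, ub, H, rho=0.1, max_sub=5):
    """One SCP NLP iteration (sketch of Section 3.3.1). Returns the updated
    iterate, the explicit lower bound (3) from LP(1), and a flag that is True
    only when LP(1) itself was infeasible (Section 3.3.2)."""
    def merit(w):                          # modified Lagrangian L-tilde
        gp = np.maximum(g(w), 0.0)
        return f(w) + lam @ gp + rho * gp @ gp
    dirs, lp1_bound = [], None
    for i in range(max_sub):
        c = grad_f(z)                      # LP(i) is built at the current iterate
        A_eq = np.array([d @ H for d in dirs]) if dirs else None
        b_eq = np.zeros(len(dirs)) if dirs else None
        res = linprog(c, A_ub=jac_g(z), b_ub=-g(z), A_eq=A_eq, b_eq=b_eq,
                      bounds=list(zip(lb - z, ub - z)))  # z + d stays in the box
        if not res.success:
            return z, lp1_bound, i == 0    # infeasible LP(1) => infeasible node
        d = res.x
        if i == 0:
            lp1_bound = f(z) + c @ d       # lower bound (3)
        if np.linalg.norm(d) < 1e-8:       # solution sufficiently close to zero
            break
        step = minimize_scalar(lambda t: merit(z + t * d),
                               bounds=(0.0, 1.0), method="bounded")
        z = z + step.x * d                 # line search update
        dirs.append(d)                     # enforce conjugacy in later LPs
    return z, lp1_bound, False
```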

3.3.2. Infeasible LP problems. Since the NLP problems are convex, the linearizations of the constraints underestimate the NLP problem. Therefore, if the first LP problem LP$^{(1)}$ solved in the NLP iteration is infeasible, then the corresponding NLP problem is infeasible as well, and the node may be fathomed due to the convexity of the NLP problems. Note that this is not true if some of the problems LP$^{(i)}$ are infeasible for $i > 1$. In these cases, the LP problems contain additional linear equality constraints forcing the solution to be a conjugate direction to the previously obtained search directions within the NLP iteration, see Eq. (1). Thus the LP problems LP$^{(i)}$ are no longer necessarily outer approximations of the NLP problem when $i > 1$. In these cases, the NLP iteration is terminated and we continue Algorithm 1 from Step 4 with the current iterate $(x^k, y^k)$, i.e. the iterate that was used to generate the infeasible LP problem.

3.4. Selecting the branching variable

The most commonly used branch selection criterion in mixed integer optimization has been to select the branching variable using the most fractional variable rule [24, 20]. Here the idea is to select the variable from the solution of the current node that is the furthest away from an integer value. If the user has prior knowledge of the structure of the problem and the importance of the integer variables, the branching process can also be accelerated by using branching priorities on the variables [3, 20]. Analysis of the effect of each integer variable on the objective has also been proposed [3, 20]. Pseudo-costs are associated with the integer variables y describing the effect of fixing the variables to the nearest discrete values. The variable with the greatest pseudo-cost is selected as the branching variable, since we may expect that this variable will have the greatest effect on the lower bound of the problem. When implementing the algorithm, we tried selecting the branching variable using pseudo-costs calculated based on the estimate of the Lagrangian. The numerical results we obtained were poor, so we opted to use the most fractional variable rule in our implementation. Other selection criteria may be studied in the future.
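A minimal sketch of the most fractional variable rule (the function name is ours):

```python
def most_fractional(y, eps=1e-6):
    """Most fractional variable rule: return the index of the integer variable
    whose relaxed value is furthest from an integer, or None if the point is
    integral within eps."""
    dist = [abs(v - round(v)) for v in y]
    l = max(range(len(y)), key=dist.__getitem__)
    return l if dist[l] > eps else None
```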

3.5. Branching strategies

The order in which we select the nodes in the branch and bound tree has a great impact on the optimization speed. We implemented five commonly used branching strategies in our algorithm: depth-first, best-first, breadth-first, depth-first-then-breadth and depth-first-then-best. In depth-first, each time we branch we select one of the newly created child nodes as the next node to optimize. Here a solution is found relatively fast although, on the other hand, there may be long search paths deep into the tree. In the best-first strategy, we select the node with the smallest estimated lower bound on the optimal solution as the next node to optimize. Although it may take longer to search the entire tree, the first solution found is usually close to the optimal solution of the problem. In breadth-first, again, we examine a certain level in the tree entirely before proceeding to the next level, in order to get shorter search paths into the tree. On the other hand, a large number of nodes may be examined. Finally, depth-first-then-breadth and depth-first-then-best are hybrids of the above methods. In these methods, a depth-first search is performed until the first solution candidate for the problem is found. The algorithm then switches to a breadth-first or best-first search strategy, respectively.

3.6. Selecting the branching direction

Although using the Lagrangian was not successful for selecting the branching variable, we did obtain promising results using the Lagrangian to select the branching direction in a depth-first context. When branching, we create two new nodes, and the question is which of these nodes to select as the next NLP subproblem to solve. Typically, the node is selected based on the branching variable and its distance to the nearest integer: if $y_l^k$ is closer to $\lfloor y_l^k \rfloor$ than to $\lceil y_l^k \rceil$, then the left node is chosen, otherwise the right node is chosen.

In our algorithm, we used a new strategy where the estimate of the Lagrangian of the current NLP subproblem is used to evaluate which node to select. We update the current iterate $(x^k, y^k)$ to be within the updated upper and lower bounds. Let $(x^L, y^L)$ be the updated iterate for the left node, $(x^R, y^R)$ the updated iterate for the right node, and $\lambda^k$ the current Lagrange multiplier estimate. We choose the node with the smaller value of the Lagrangian, i.e. we choose the left node if $L(x^L, y^L, \lambda^k) < L(x^R, y^R, \lambda^k)$ and the right node otherwise. We obtained encouraging numerical results with the new selection strategy, indicating that it can, in some cases, improve the convergence speed significantly. See Section 4.6 for more information on the numerical results. Note that in our implementation, we always choose the right node if $L(x^L, y^L, \lambda^k) = L(x^R, y^R, \lambda^k)$. A more advanced selection method could be considered in the future. However, when solving the ten test problems considered here, the Lagrangian estimates were never equal for any of the nodes.
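The direction rule can be sketched in a few lines; f, g and the multiplier estimate lam are assumed given, and left/right are the updated child iterates (x, y):

```python
import numpy as np

def lagrangian(f, g, x, y, lam):
    """Lagrangian estimate L(x, y, lam) = f(x, y) + lam^T g(x, y)."""
    return f(x, y) + np.dot(lam, g(x, y))

def choose_child(f, g, left, right, lam):
    """Section 3.6 direction rule (sketch): descend into the child whose
    updated iterate gives the smaller Lagrangian value; ties go to the right
    node, as in the implementation described above."""
    x_l, y_l = left
    x_r, y_r = right
    if lagrangian(f, g, x_l, y_l, lam) < lagrangian(f, g, x_r, y_r, lam):
        return "left"
    return "right"
```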

3.7. Obtaining lower bounds

It is important to be able to estimate lower bounds for the NLP subproblems, as they allow us to drop nodes from consideration once the lower bound of the subproblem of a node is greater than the current upper bound for the MINLP problem. Any solution that is feasible in the original MINLP problem provides an upper bound on the optimal value. Thus, the upper bound is usually taken to be the minimum of the objective function values for the obtained feasible solutions of the MINLP problem.

A lower bound for the current subproblem may be obtained from the solution of the first LP problem in the NLP iteration. The first LP problem we are solving for the NLP subproblem (NLP$^k$) is

$$\begin{aligned} \min \; & \nabla f^{(k)T} d \\ \text{s.t.} \; & g^{(k)} + \nabla g^{(k)T} d \le 0, \\ & x^k + d_x \in X, \; y^k + d_y \in Y^k, \end{aligned} \tag{2}$$

and the solution $d^{(1)}$ to this problem can be used to obtain a lower bound for the NLP subproblem. We can show that the optimal value of the NLP subproblem has the lower bound

$$L^k = f^{(k)} + \nabla f^{(k)T} d^{(1)}. \tag{3}$$

Theorem 1. Assume $d^{(1)}$ is an optimal solution to (2). Then $L^k = f^{(k)} + \nabla f^{(k)T} d^{(1)}$ is a lower bound for the optimal value of (NLP$^k$).

Proof: Assume there is an optimal solution $(\bar{x}, \bar{y}) = (x^k + \bar{d}_x, y^k + \bar{d}_y)$ to (NLP$^k$) such that $f(\bar{x}, \bar{y}) < L^k$. Since $g$ is convex,

$$g(x, y) \ge g^{(k)} + \nabla g^{(k)T} (x - x^k, y - y^k),$$

and, as $(\bar{x}, \bar{y})$ is feasible in (NLP$^k$),

$$0 \ge g(\bar{x}, \bar{y}) \ge g^{(k)} + \nabla g^{(k)T} \bar{d}.$$

Therefore, $\bar{d} = (\bar{d}_x, \bar{d}_y)$ is feasible in problem (2). Furthermore, since $f$ is convex and we assumed $f(\bar{x}, \bar{y}) < L^k$,

$$f^{(k)} + \nabla f^{(k)T} \bar{d} \le f(\bar{x}, \bar{y}) < L^k = f^{(k)} + \nabla f^{(k)T} d^{(1)},$$

and thus

$$\nabla f^{(k)T} \bar{d} < \nabla f^{(k)T} d^{(1)},$$

contradicting the fact that $d^{(1)}$ was optimal for (2). □

In the branch and bound algorithm, we use (3) to calculate lower bounds for the nodes. Any node whose lower bound is greater than or equal to the current upper bound of (P) can be fathomed.

The described method for calculating lower bounds differs from earlier MINLP algorithms based on nonlinear branch and bound. In [6], Lagrangian duality is used to obtain the lower bounds. To obtain a lower bound, the following nonlinear programming problem is solved:

$$\begin{aligned} \min \; & f(x, y) + \lambda^T g(x, y) \\ \text{s.t.} \; & x \in X, \; y \in Y^k \end{aligned} \tag{4}$$

for a given set of Lagrange multipliers $\lambda$. The optimal value of this problem is used as a lower bound for the given NLP subproblem. The drawback with this method is that solving (4) implies solving an additional NLP problem to get an estimate of the lower bound, which is clearly undesirable. Another drawback is that the quality of the solution will depend on the quality of the Lagrange multipliers chosen. For multipliers close to an optimal solution, the solution to (4) is close to an optimal solution of (P). If, however, the multipliers are not well chosen, the optimal value of (4) may not provide a good lower bound.

In [26], lower bounds are not explicitly calculated. Instead, surrogate constraints

$$f^{(k)} + \nabla f^{(k)T} d \le U - \epsilon$$

are added to each QP problem solved. Here $U$ is an upper bound on the optimal value of the original problem. The advantage is that we do not need to solve a separate NLP problem as in [6]. However, the lower bounding only applies to the current subproblem being solved. We have to solve a QP problem in each node before knowing whether to drop the node or not. In contrast, with the explicit lower bounds offered by the SCP algorithm, we can drop nodes in any branch of the tree each time we obtain a new improved upper bound during the branching process, without having to solve any additional LP problems for the dropped nodes.
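The pruning this enables is a one-line filter over the open nodes; a sketch assuming each node stores its inherited bound from (3) in an attribute `lower` (our naming):

```python
def drop_fathomed(open_nodes, upper):
    """On every improvement of the upper bound, drop all open nodes whose
    stored explicit lower bound (3) already meets it; no LP is solved for
    the dropped nodes, unlike the per-node surrogate test of [26]."""
    return [node for node in open_nodes if node.lower < upper]
```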

3.8. Minimizing the number of LP subiterations

Some further optimization can be done in order to minimize the number of LP subiterations performed. Assume the current iterate for a node in the branch and bound tree is optimal in the node and we branch on some integer variable with a non-integer value. We then modify the iterate after branching according to the additional bound added in the branching operation. If the modified iterate is feasible in the new node and the value of the objective is unchanged, then the iterate is optimal in the new node, and we therefore do not need to perform any NLP iteration in the newly created node. We summarize this procedure in the following theorem for one branching direction and note that the proof for the other branching direction is similar.

Theorem 2. Assume that the current iterate $(x^k, y^k)$ in node (NLP$^k$) is optimal for the node, and assume that the variable $y_l^k$ has a non-integer value and we branch on this variable. Assume further that we create a new node (NLP$^L$) by modifying the upper bound of $y$ such that $y_l \le \lfloor y_l^k \rfloor$, and we let the iterate $(x^L, y^L)$ for (NLP$^L$) be equal to $(x^k, y^k)$, except for the modification $y_l^L = \lfloor y_l^k \rfloor$. Then, if $g(x^L, y^L) \le 0$ and $f(x^k, y^k) = f(x^L, y^L)$, the iterate $(x^L, y^L)$ is optimal for (NLP$^L$).

Proof: Assume $(x^L, y^L)$ is not optimal. Then there must be an $(\bar{x}, \bar{y})$ for (NLP$^L$) such that $f(\bar{x}, \bar{y}) < f(x^L, y^L)$ and $g(\bar{x}, \bar{y}) \le 0$. Since the feasible set for (NLP$^L$) is a subset of the feasible set for (NLP$^k$), $(\bar{x}, \bar{y})$ is feasible in (NLP$^k$). But then $f(\bar{x}, \bar{y}) < f(x^k, y^k)$, contradicting the fact that $(x^k, y^k)$ was optimal for (NLP$^k$). □
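A sketch of the resulting shortcut check for the left branch (the right branch is symmetric); the function name and the NumPy representation are ours:

```python
import numpy as np

def reuse_parent_iterate(f, g, x_k, y_k, l, new_value, eps_g=1e-3):
    """Theorem 2 shortcut (sketch): clip the branched variable to its new
    bound and keep the parent's optimal iterate if it remains feasible with
    an unchanged objective value; then no NLP iteration is needed."""
    y_child = np.array(y_k, dtype=float)
    y_child[l] = new_value                  # the modification y_l^L = floor(y_l^k)
    feasible = np.all(g(x_k, y_child) <= eps_g)
    unchanged = np.isclose(f(x_k, y_k), f(x_k, y_child))
    return (x_k, y_child) if (feasible and unchanged) else None
```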

3.9. Convergence to an optimal solution

Convergence to an optimal solution for the MINLP problem can be ensured by the fact that the branch and bound tree is finite and by the fact, shown earlier in [34], that the NLP part of the algorithm converges to an optimal solution.


We must further assume that the NLP part of the algorithm terminates after a finite number of NLP iterations for a particular NLP subproblem. If the algorithm finds a non-optimal integer solution in the branch and bound tree, it will solve the NLP subproblem to optimality. The solution may not be an optimal solution to (P), and thus the algorithm may get stuck in an NLP subproblem before finding an optimal solution, unless we can guarantee that the solution to the NLP subproblem is found in finite time. In practice, we may ensure termination in finite time since we define tolerances $\epsilon_g > 0$ for the constraint violation and $\epsilon_L > 0$ for determining whether the norm of the gradient of the Lagrangian is zero or not. Thus, in practice, the algorithm finds a solution within the defined tolerances in finite time.

Theorem 3. Assume that the NLP subproblems are solved within finite precision. Then the SCP algorithm will converge to an optimal solution of problem (P) within the precision tolerances.

Proof: As noted earlier, the branch and bound tree is finite, as there is a finite number of elements in $Y$. Also, the branching process does not cut away any feasible solution of (P). Node fathoming only drops nodes where lower bounding shows they cannot have solutions better than the currently obtained candidate solution to (P), or when the node is infeasible. Therefore, an optimal solution has either been found already, or it is part of some of the unexamined nodes in the tree. Since the NLP subproblems are solved within finite precision, the solution will be found in finite time for these subproblems. As the tree is finite and we may find the NLP solution to any of the nodes in finite time, we will eventually have examined the entire tree in finite time, including nodes containing an optimal solution. Thus, we always find an optimal solution to (P) within the precision tolerances. □

4. Numerical experiments

The SCP algorithm has been implemented in MATLAB, using the implementation of the NLP version described in [34] as the base for solving the NLP problems in each node. SOS1 branching, introduced in [2], was implemented in the algorithm, as well as several heuristics to improve convergence. Among these, bound tightening [32] in each node proved very successful for some of the problems.

The performance of the algorithm was compared to the α-ECP solver [37], the nonlinear branch and bound solver MINLP-BB [26] available at the NEOS server [8, 11], and a nonlinear branch and bound solver (SCP-BB) based on the same branching rules and NLP solver as the presented SCP algorithm. The only difference between the SCP-BB and SCP algorithms is that in SCP-BB each node is solved to optimality before branching is applied. For α-ECP, the intermediate MILP problems were solved to optimality.

We used the number of nodes generated in the branch and bound tree, as well as the number of quadratic, mixed integer linear or linear programming problems solved, to compare the performance of the algorithms. The number of problems solved gives an indication of the relative performance of the algorithms, although we must take into account that the effort to solve each type of problem (LP, QP or MILP) may vary quite significantly. In addition, we used the number of simplex iterations needed to solve the problems for comparing the α-ECP and SCP algorithms. Since both algorithms can use the CPLEX library for solving the LP and MILP problems, this criterion is a good indicator of the relative performance of these algorithms, as most of the work in both is carried out in the simplex iterations.

Unfortunately, the SCP algorithm uses the CPLEX library only to solve LP problems. Thus, it may not take advantage of the various advanced procedures used in the CPLEX library to solve MILP problems, such as Gomory cuts and various node heuristics [4]. Indeed, if we compare the average number of nodes needed to solve one MILP problem for the α-ECP algorithm with the number of nodes needed in the branch and bound tree for the SCP algorithm, we see that the average number of nodes required for α-ECP is considerably smaller. Thus, it is reasonable to assume that the performance of the SCP algorithm could be further improved by adding such functionality to the algorithm.

4.1. Test problems

Ten convex MINLP test problems were chosen from the literature. The test problems come from various applications. Problems SYNTHES1–SYNTHES3 are process synthesis problems [13] and OPTPRLOC considers the positioning of products in a multiattribute space [18, 13]. Problems BATCH and BATCHDES are batch plant design problems [23]. TRIMLOSS2 is a trim-loss minimization problem from the paper industry [21] and GBD is a small test problem from [30]. Finally, ALAN and MEANVARX are portfolio risk minimization models from [1] and [9]. Problem formulations were taken from the problem collections MacMINLP [25] and MINLPLib [7].

Test problem characteristics are listed in Table 2. In the table, the numbers of real and integer variables are listed, as well as the numbers of linear inequality, linear equality and nonlinear inequality constraints. Descriptions of the headers in all tables are listed in Table 1. The problems range in size from 4 to 46 variables and 4 to 73 constraints. Mostly, the constraints in the problems are linear and only a few constraints are nonlinear. The only problem containing a significant number of nonlinear constraints is OPTPRLOC.

4.2. Algorithm tolerances

For the SCP algorithm, the tolerances we used in the numerical tests were $\epsilon_g = 10^{-3}$ for the constraint violation and $\epsilon_L = 10^{-3}$ for testing the norm of the gradient of the Lagrangian. We used up to five initial NLP iterations on the continuous relaxation of (P) and a penalty parameter $\rho = 10^{-1}$. Similarly, the tolerance used in the α-ECP algorithm was $\epsilon_g = 10^{-3}$ and in MINLP-BB $\epsilon = 10^{-3}$ (i.e. the option "eps=1E-3" for the MINLP-BB version on the NEOS server). Intermediate MILP problems were solved to optimality for α-ECP.

Table 1. Description of the headers used in the tables.

Header            Description
#LP               Number of LP problems solved in order to solve the problem.
#MILP             Number of MILP problems solved in order to solve the problem.
#Nds/#Nodes       Number of nodes in the branch and bound tree. For α-ECP: number of nodes summed from all MILP problems solved.
#QP               Number of QP problems solved in order to solve the problem.
#Rows             Number of constraint rows in the largest LP problem solved.
#Splx             Number of simplex iterations required to solve the problem.
α-ECP(g > ε)      α-ECP algorithm such that linearizations are added in each iteration for all violated constraints.
α-ECP(max(g))     α-ECP algorithm such that one linearization is added in each iteration for the constraint with maximal constraint violation.
m                 Total number of constraints of the problem.
me                Number of linear equality constraints of the problem.
mi                Number of linear inequality constraints of the problem.
mn                Number of nonlinear inequality constraints of the problem.
MINLP-BB          MINLP-BB algorithm (filterSQP-based NLP branch and bound).
n                 Total number of variables of the problem.
nr                Number of real variables of the problem.
nz                Number of integer variables of the problem.
Objective         Type of objective function (linear/nonlinear).
OptLP             Number of LP problems solved during branch and bound before finding an optimal solution in the tree.
Problem           Name of the optimization problem.
SCP               SCP algorithm for MINLP problems as described in this paper.
SCP-BB            SCP algorithm-based NLP branch and bound (SCP algorithm modified such that each node is solved to optimality).
SCP(best)         Best-first search.
SCP(bfs)          Breadth-first search.
SCP(dfs)          Depth-first search.
SCP(dftb)         Depth-first-then-breadth search.
SCP(dftbest)      Depth-first-then-best search.
SCP(Intdist)      Integer distance-based branch direction selection strategy.
SCP(Lagrange)     Lagrangian-based branch direction selection strategy.

4.3. Test results

Test results for the SCP algorithm (SCP) and the branch and bound-based version (SCP-BB) are compared to results for the α-ECP algorithm and the filterSQP-based branch and bound solver (MINLP-BB) in Table 3, where the number of nodes in the branch and bound tree and the number of LP, QP or MILP problems needed to solve the test problems are compared.

Table 2. Test problem characteristics.

Problem      n   nr   nz    m   mi   me   mn   Objective
ALAN         8    4    4    7    5    2    0   Nonlinear
BATCH       46   22   24   73   60   12    1   Nonlinear
BATCHDES    19   10    9   19   12    6    1   Nonlinear
GBD          4    1    3    4    4    0    0   Nonlinear
MEANVARX    35   21   14   42   34    8    0   Nonlinear
OPTPRLOC    30    5   25   30    5    0   25   Nonlinear
SYNTHES1     6    3    3    6    4    0    2   Nonlinear
SYNTHES2    11    6    5   14   10    1    3   Nonlinear
SYNTHES3    17    9    8   23   17    2    4   Nonlinear
TRIMLOSS2   37    6   31   24   16    6    2   Linear

Table 3. Test results for the test problems.

             SCP           SCP-BB        α-ECP(max(g))  α-ECP(g > ε)  MINLP-BB
Problem      #LP   #Nds    #LP   #Nds    #MILP  #Nds    #MILP  #Nds   #QP    #Nds
ALAN          15      7     15      7       5      5       5      5     20     13
BATCH         56     17    144     15      24    162      12     85    151     35
BATCHDES      18      9     21      9      14     32       7     12     21      5
GBD            1      3      1      3       3      3       3      3      6      3
MEANVARX      17     11     16      9       8      8       8      8     18     11
OPTPRLOC     139     79    251     77      38   1072       7    309    632    103
SYNTHES1      16      5     24      5      10     26       5      9     15      5
SYNTHES2      22     13     32     13      19     26       9     13     47     17
SYNTHES3      34     15     68     15      27    147      16     91     70     25
TRIMLOSS2    169    107    246     93      10    302       9    312   1420    479

The α-ECP algorithm solved the fewest optimization subproblems, but the number of problems solved cannot directly be compared to the SCP and MINLP-BB algorithms, since it requires substantially more work to solve an MILP problem than to solve a QP or LP problem. The α-ECP algorithm shows excellent performance on the TRIMLOSS2 problem, especially compared to the MINLP-BB algorithm. Comparing to Table 4, we see that α-ECP (and SCP) requires approximately the same number of simplex iterations as the number of solved QP problems for MINLP-BB! The TRIMLOSS2 problem contains mainly linear constraints, and most of the integer variables are binary and part of SOS1 sets. Thus, the advanced branch and bound strategy used in CPLEX worked well on this example.

The performance of the SCP algorithm was better than that of the MINLP-BB algorithm, both in terms of subproblems solved and in terms of the size of the branch and bound tree. In fact, for several problems, SCP solved significantly fewer LP problems than MINLP-BB solved QP problems. Furthermore, the results are consistent across all test problems.

Table 4. Simplex iterations and problem sizes for SCP and α-ECP.

             SCP            SCP-BB         α-ECP(max(g))   α-ECP(g > ε)
Problem      #Splx  #Rows   #Splx  #Rows   #Splx   #Rows   #Splx   #Rows
ALAN            33      9      35      9      29      11      29      11
BATCH         1241     78    3519     78    1367      95     705      94
BATCHDES       113     21     134     21     244      31     111      30
GBD              3      4       3      4      10       6      10       6
MEANVARX       106     43     112     43     154      51     154      51
OPTPRLOC      1418     33    3504     33    5596      42    1706      54
SYNTHES1        13      8      24      8     130      13      52      12
SYNTHES2        91     16     144     16     333      29     146      28
SYNTHES3       236     26     542     27    1115      45     650      53
TRIMLOSS2     1432     34    2201     33    1427      31    1501      35

MINLP-BB only solved one problem faster, and for that problem the difference to SCP was only one QP problem. Taking into account that we are comparing the number of solved LP problems to QP problems, and that solving an LP problem typically demands less work than solving a QP problem, these initial numerical experiments show that the new algorithm is promising and competitive with existing commercial algorithms. Obviously, more tests are needed in the future to get a more detailed picture of algorithm performance.

Note the rather interesting results for the SCP algorithm on problem GBD. Here only one LP problem is solved and three nodes are created in the tree. These peculiar results are due to the optimization procedures described in Section 3.8.

As noted, the branch and bound tree was generally smaller for the SCP algorithm compared to MINLP-BB. The lower bounding feature of the SCP algorithm reduces the number of nodes in the tree. Another reason is that convergence is good for the SCP algorithm when iterates are far from an optimal solution. Also, the SCP implementation automatically identifies SOS1 sets in the optimization problems, whereas the MINLP-BB implementation does not. Problems BATCH and TRIMLOSS2 contained SOS1 sets.

A performance profile, see [12], of the test results, comparing the number of LP problems solved by the SCP and SCP-BB algorithms with the number of QP problems solved by the MINLP-BB algorithm, verifies the superiority of the SCP algorithm with respect to the number of subproblems solved, see figure 1. Note that we cannot include the α-ECP algorithm in this comparison, as the α-ECP algorithm solves MILP problems rather than LP or QP problems, and the effort involved in solving an MILP problem is generally significantly larger.
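For reference, performance profiles in the sense of Dolan and Moré [12], as used in figures 1 and 2, are straightforward to compute; a small sketch (ours), taking a matrix of per-problem costs such as the #LP/#QP counts of Table 3:

```python
import numpy as np

def performance_profile(costs):
    """Dolan-More performance profile: `costs` is an (n_problems, n_solvers)
    array of e.g. #LP/#QP counts; returns the ratio grid tau and, per solver,
    the fraction of problems solved within tau times the best solver."""
    costs = np.asarray(costs, dtype=float)
    ratios = costs / costs.min(axis=1, keepdims=True)
    tau = np.unique(ratios)
    profile = (ratios[None, :, :] <= tau[:, None, None]).mean(axis=1)
    return tau, profile
```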

4.4. Comparing the SCP and α-ECP algorithms

A more precise comparison between α-ECP and the two SCP versions was possible, since they can use the CPLEX library for solving the MILP and LP problems respectively.

Figure 1. Performance profile on the number of LP/QP subproblems solved. (x-axis: LP/QP problems solved in proportion to the best solver, from 1 to 10; y-axis: proportion of problems solved; series: SCP, SCP-BB, MINLP-BB.)

For these algorithms, the number of simplex iterations performed was used to provide a more precise comparison between them. The results can be found in Table 4. The results are in line with what we expected from the two algorithms. The α-ECP algorithm generally performs well on problems containing many linear constraints and few nonlinear constraints. Problems ALAN, BATCH and BATCHDES contain only a nonlinear objective and, in addition, one nonlinear constraint each for BATCH and BATCHDES. It also performs well on problems with special structures, where the powerful functionality built into the CPLEX MILP solver can be fully utilized. The combinatorial nature of the BATCH problem may explain the exceptionally good performance of the α-ECP algorithm on this problem.

The SCP algorithm, again, performs well on problems with several nonlinear constraints. As can be seen from the table, the SCP algorithm performed well on problems SYNTHES1–SYNTHES3 and reasonably well on OPTPRLOC, where there are several nonlinear constraints. One must further note that the linear subproblems are generally bigger for the α-ECP algorithm, as old linearizations are retained in subsequent iterations. The size (number of constraint rows) of the largest LP (MILP) problem solved is also listed in Table 4. A summary of the results is given in the form of a performance profile in figure 2.

Figure 2. Performance profile on the number of simplex iterations performed. (x-axis: simplex iterations in proportion to the best solver, from 1 to 11; y-axis: proportion of problems solved; series: SCP, SCP-BB, α-ECP(max(g)), α-ECP(g > ε).)

4.5. Branching strategies

We also investigated how the branching strategy affected the performance of the SCP algorithm. The results are shown in Table 5.

As we can see from the results, there is not much difference between a depth-first-then-best (dftbest) strategy and a depth-first (dfs) branching strategy. The number of LP problems solved is almost the same for both. Also the depth-first-then-breadth (dftb) strategy worked well on the problems, except for problem BATCH, where it had some difficulties. The best-first branching strategy also performed well. It solved significantly fewer LP subproblems for the problem BATCHDES. On the other hand, it had some difficulties with problems BATCH, SYNTHES3 and TRIMLOSS2. The breadth-first (bfs) search was the most disappointing method, solving significantly more LP problems for several test problems.

Table 5. Test results for different branching strategies.

             SCP(dfs)     SCP(bfs)     SCP(best)    SCP(dftb)    SCP(dftbest)
Problem      #LP   #Nds   #LP   #Nds   #LP   #Nds   #LP   #Nds   #LP   #Nds
ALAN          15      7    15      7    15      7    15      7    15      7
BATCH         56     17   125     43    65     25    95     31    56     17
BATCHDES      18      9    17      9    13      9    18      9    18      9
GBD            1      3     1      3     1      3     1      3     1      3
MEANVARX      17     11    28     31    18     19    17     11    17     11
OPTPRLOC     139     79   249    159   146     97   140     83   135     85
SYNTHES1      16      5    15      5    15      5    16      5    16      5
SYNTHES2      22     13    23     15    22     13    22     13    22     13
SYNTHES3      34     15    49     31    41     23    34     15    34     15
TRIMLOSS2    169    107   208    145   197    141   165    107   170    107

Table 6. Test results for different node selection strategies.

             SCP(Lagrange)            SCP(Intdist)
Problem      #LP   #Nodes   OptLP     #LP   #Nodes   OptLP
ALAN          15        7      12      15        7      14
BATCH         56       17      41     150       35     137
BATCHDES      18        9      18      18        9      18
GBD            1        3       1       2        3       2
MEANVARX      17       11      12      19       13      15
OPTPRLOC     139       79      69     238      133     234
SYNTHES1      16        5      16      16        5      16
SYNTHES2      22       13      11      24       15      17
SYNTHES3      34       15      17      53       23      41
TRIMLOSS2    169      107     114     199      129      30

In conclusion, the depth-first and depth-first-then-best strategies performed best on the selected problems. We have used the depth-first strategy as the default in our implementation of the algorithm, as it resulted in a slightly smaller branch and bound tree. Note that the depth-first-then-best strategy would also be a good choice as the default strategy.

4.6. Node selection results

Finally, we look at the results when using different node selection strategies during a depth-first search. We display the results from this comparison in Table 6. For each node selection strategy there are three columns: the number of LP problems solved (#LP), the number of nodes in the branch and bound tree (#Nodes) and the number of LP problems solved before finding an optimal solution (OptLP).

As we can see from the table, node selection based on the value of the Lagrangian was better than, or equal to, node selection based on the distance to the closest integer value for all problems. For BATCH, the performance of the algorithm improved by a factor of more than two. Node selection based on the Lagrangian was used as the default strategy in our implementation of the algorithm.

5. Summary and conclusions

We have described a new branch and bound-based algorithm for solving convex MINLP problems. Compared to previous papers on similar algorithms, the algorithm described here has several new features. The algorithm solves MINLP problems by solving a sequence of linear programming problems, in contrast to other solvers that typically solve a sequence of QP, NLP or MILP problems.

Another new feature is the way lower bounds are calculated. We obtain a lower bound for the current NLP subproblem directly from the solution of the first LP problem in each NLP iteration performed. The method is efficient, as it does not require any additional solutions to Lagrangian duality problems, and it gives an explicit lower bound, which is advantageous for node fathoming.

A third new feature is the branching heuristics used. We introduced a new method for selecting the next child node to solve in a depth-first search strategy, which proved successful for the selected test problems. The heuristics reduced the number of LP problems solved significantly for some of the test problems.

A numerical comparison with existing solvers shows that the performance of the algorithm is competitive with existing algorithms. Although the test set in this paper is limited, we note that the algorithm improved the results significantly compared to the MINLP-BB algorithm for more than half of the test problems and was better in nine out of ten cases. In the remaining test case the performance was more or less equal. These consistent results show that the algorithm has some interesting qualities and is a promising addition to the set of existing MINLP algorithms.

For larger MINLP problems, the performance of the algorithm is still an open question. For the larger MINLP problems in this study, the results were promising, but more numerical tests on considerably larger problems must be performed in order to get a more detailed picture of algorithm performance.

Also note that the good performance of the SCP-BB algorithm indicates that it could be a promising algorithm for approximately solving non-convex MINLP problems, if the underlying NLP solver is extended to solve non-convex problems to stationary points. If global optimality is desired for non-convex MINLP problems, then they would typically be solved to global optimality by solving a sequence of convex MINLP subproblems. In this case the original SCP algorithm, described in this paper, could be used.

Acknowledgments

Financial support from the Research Institute of the Foundation of Åbo Akademi is gratefully acknowledged. We would also like to thank the reviewers for their insightful comments.

References

1. A.S. Manne, "GAMS/MINOS: Three examples," Technical Report, Department of Operations Research, Stanford University, Stanford, California, 1986.
2. E.M.L. Beale and J.A. Tomlin, "Special facilities in a general mathematical programming system for nonconvex problems using ordered sets of variables," in: OR69 Proceedings of the Fifth International Conference on Operational Research, J. Lawrence (Ed.), Tavistock Publications, London, 1970, pp. 447–454.
3. M. Benichou, J.M. Gauthier, P. Girodet, G. Hentges, G. Ribiere, and O. Vincent, "Experiments in mixed-integer linear programming," Mathematical Programming, vol. 1, pp. 76–94, 1971.
4. R.E. Bixby, M. Fenelon, Z. Gu, E. Rothberg, and R. Wunderling, "MIP: Theory and practice—closing the gap," in: System Modelling and Optimization: Methods, Theory and Applications, M.J.D. Powell and S. Scholtes (Eds.), Kluwer Academic Publishers, Dordrecht, 2000, pp. 19–49.
5. K.-M. Björk, P.O. Lindberg, and T. Westerlund, "Some convexifications in global optimization of problems containing signomial terms," Computers and Chemical Engineering, vol. 27, pp. 669–679, 2003.
6. B. Borchers and J.E. Mitchell, "An improved branch and bound algorithm for mixed integer nonlinear programs," Computers and Operations Research, vol. 21, pp. 359–367, 1994.
7. M.R. Bussieck, A.S. Drud, and A. Meeraus, "MINLPLib—A collection of test models for mixed-integer nonlinear programming," GAMS Development Corporation, Washington DC, 2001. Web page at <http://www.gamsworld.org/minlp/minlplib.htm>.


8. J. Czyzyk, M.P. Mesnier, and J.J. Moré, "The NEOS Server," IEEE Journal on Computational Science and Engineering, vol. 5, pp. 68–75, 1998.
9. H. Dahl, A. Meeraus, and S.A. Zenios, "Some financial optimization models: I risk management," in: Financial Optimization, S.A. Zenios (Ed.), Cambridge University Press, Cambridge, 1993, pp. 3–36.
10. R.J. Dakin, "A tree-search algorithm for mixed integer programming problems," Computer Journal, vol. 8, pp. 250–255, 1965.
11. E.D. Dolan, "The NEOS Server 4.0 administrative guide," Technical Memorandum ANL/MCS-TM-250, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL, 2001.
12. E.D. Dolan and J.J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, vol. 91, pp. 201–213, 2002.
13. M.A. Duran and I.E. Grossmann, "An outer-approximation algorithm for a class of mixed-integer nonlinear programs," Mathematical Programming, vol. 36, pp. 307–339, 1986.
14. R. Fletcher and S. Leyffer, "User manual for filterSQP," Numerical Analysis Report NA/181, Department of Mathematics, University of Dundee, Dundee, 1998.
15. R. Fletcher and S. Leyffer, "Nonlinear programming without a penalty function," Mathematical Programming, vol. 91, pp. 239–269, 2002.
16. C.A. Floudas, Nonlinear and Mixed-Integer Optimization: Fundamentals and Applications, Topics in Chemical Engineering, Oxford University Press, New York, 1995.
17. C.A. Floudas and P.M. Pardalos, Encyclopedia of Optimization, Kluwer Academic Publishers, Dordrecht, 2001.
18. B. Gavish, D. Horsky, and K. Srikanth, "An approach to the optimal positioning of a new product," Management Science, vol. 29, pp. 1277–1297, 1983.
19. P.E. Gill, W. Murray, and M.H. Wright, Practical Optimization, Academic Press, London, 1981.
20. O.K. Gupta and A. Ravindran, "Branch and bound experiments in convex nonlinear integer programming," Management Science, vol. 31, pp. 1533–1546, 1985.
21. I. Harjunkoski, T. Westerlund, R. Pörn, and H. Skrifvars, "Different transformations for solving non-convex trim-loss problems by MINLP," European Journal of Operational Research, vol. 105, pp. 594–603, 1998.
22. I. Harjunkoski, T. Westerlund, and R. Pörn, "Numerical and environmental considerations on a complex industrial mixed integer non-linear programming (MINLP) problem," Computers and Chemical Engineering, vol. 23, pp. 1545–1561, 1999.
23. G.R. Kocis and I.E. Grossmann, "Global optimization of nonconvex mixed-integer nonlinear programming (MINLP) problems in process synthesis," Industrial and Engineering Chemical Research, vol. 27, pp. 1407–1421, 1988.
24. A.H. Land and A.G. Doig, "An automatic method of solving discrete programming problems," Econometrica, vol. 28, pp. 497–520, 1960.
25. S. Leyffer, "MacMINLP: AMPL collection of mixed integer nonlinear programs," University of Dundee, Dundee, 2000. Available from <http://www.maths.dundee.ac.uk/~sleyffer/MacMINLP/>.
26. S. Leyffer, "Integrating SQP and branch-and-bound for mixed integer nonlinear programming," Computational Optimization and Applications, vol. 18, pp. 295–309, 2001.
27. J.C.T. Mao and B.A. Wallingford, "An extension of Lawler and Bell's method of discrete optimization with examples from capital budgeting," Management Science, vol. 15, pp. 51–60, 1968.
28. R. Pörn and T. Westerlund, "A cutting plane method for minimizing pseudo-convex functions in the mixed integer case," Computers and Chemical Engineering, vol. 24, pp. 2655–2665, 2000.
29. I. Quesada and I.E. Grossmann, "An LP/NLP based branch and bound algorithm for convex MINLP optimization problems," Computers and Chemical Engineering, vol. 16, pp. 937–947, 1992.
30. N.V. Sahinidis and I.E. Grossmann, "Convergence properties of generalized Benders decomposition," Computers and Chemical Engineering, vol. 15, pp. 481–491, 1991.
31. N.V. Sahinidis, "BARON: A general purpose global optimization software package," Journal of Global Optimization, vol. 8, pp. 201–205, 1996.
32. M.W.P. Savelsbergh, "Preprocessing and probing techniques for mixed integer programming problems," ORSA Journal on Computing, vol. 6, pp. 445–454, 1994.
33. C. Still and T. Westerlund, "Extended cutting plane algorithm," in: Encyclopedia of Optimization, C.A. Floudas and P.M. Pardalos (Eds.), Kluwer Academic Publishers, Dordrecht, 2001, vol. 2, pp. 53–61.
34. C. Still and T. Westerlund, "A sequential cutting plane algorithm for solving convex NLP problems," European Journal of Operational Research, 2005 (accepted).


35. T. Westerlund and F. Pettersson, "An extended cutting plane method for solving convex MINLP problems," Computers and Chemical Engineering, vol. 19 (Suppl.), pp. S131–S136, 1995.
36. T. Westerlund, H. Skrifvars, I. Harjunkoski, and R. Pörn, "An extended cutting plane method for a class of non-convex MINLP problems," Computers and Chemical Engineering, vol. 22, pp. 357–365, 1998.
37. T. Westerlund and K. Lundqvist, "Alpha-ECP Version 5.01: An interactive MINLP-solver based on the extended cutting plane method," Report 01-178-A, Process Design Laboratory, Åbo Akademi University, Åbo, 2001.
38. T. Westerlund and R. Pörn, "Solving pseudo-convex mixed integer optimization problems by cutting plane techniques," Optimization and Engineering, vol. 3, pp. 253–280, 2002.