Optimal Process Control

Miroslav Fikar
Slovak University of Technology in Bratislava
Bratislava, Slovakia
http://kirp.chtf.stuba.sk/~fikar

Karol Kostúr
Technical University in Košice
Košice, Slovakia
e-mail: [email protected]

Abstract—The purpose of this paper is to review the basic concepts of optimal process control. We define the formulation and basic elements of optimal process control: models of physical processes, constraints, the objective function and functional, the history of optimization, and practical approaches to optimal control implementation. The classification comprises basic theory, methods, and techniques. The theory is complemented by concrete examples from practice. Finally, the structures for optimal process control (feedforward and feedback optimal control) are given in the paper.
Index Terms—Concepts of optimal process control, optimal control of well-defined processes, dynamic optimization of processes, structures of optimal process control, applications

1) Optimal criterion: The optimality criterion expresses the aim of optimization. The objective function is the mathematical expression of the process operation to be optimized. The function f is called an objective function, a cost function (minimization), a utility function (maximization), or, in certain fields, an energy function, a functional (energy functional), or a performance index. The principle of uniqueness states that one and only one objective function should be maximized or minimized. In situations where two objective functions f1 and f2 are to be optimized, they may be combined into a single objective function by means of a linear combination. The composite objective function is

I. INTRODUCTION TO OPTIMIZATION PROBLEMS
Process optimization is the discipline of adjusting a process so as to optimize some specified set of parameters, with or without constraints. The most common goals are cost minimization, throughput maximization, and/or efficiency. It is one of the major quantitative tools in industrial decision making. When optimizing a process, the goal is to maximize one or more of the process specifications while keeping all others within their constraints. Fundamentally, there are three components or procedures that can be adjusted to affect optimal performance:
• Technological equipment optimization
• Technological procedures
• Process control.
There are hundreds or even thousands of control loops in process units. Each control loop is responsible for controlling one part of the process, such as maintaining a pressure, a volume flow, a temperature, or a level. If the control system and its setpoints are not properly designed, the process runs in non-optimal conditions. Therefore, the solution of optimization tasks is needed in process control.
A. Formulation of the Optimization Problem

fc = ψ1 f1 + ψ2 f2    (1)

where ψ1, ψ2 are weighting constants. If f1 is to be maximized and f2 is to be minimized, then f3 may be substituted for f2, with f3 equal to the inverse of f2, so that the maximum of f3 occurs at the same point as the minimum of f2. The composite objective function to be maximized is then

fc = ψ1 f1 + ψ3 f3    (2)
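The weighted combination (1)–(2) can be illustrated with a minimal sketch; the two objectives f1 and f2 below are hypothetical, chosen only so that both criteria agree on the same optimum:

```python
import numpy as np

# Hypothetical objectives: f1 (to be maximized) and f2 (to be minimized).
def f1(x):
    return -(x - 2.0) ** 2 + 4.0   # peaks at x = 2

def f2(x):
    return (x - 2.0) ** 2 + 1.0    # strictly positive, minimal at x = 2

# Substitute f3 = 1/f2, so maximizing f3 matches minimizing f2, as in (2).
def fc(x, psi1=1.0, psi3=1.0):
    return psi1 * f1(x) + psi3 * (1.0 / f2(x))

xs = np.linspace(-2.0, 6.0, 8001)
x_star = xs[np.argmax(fc(xs))]
print(round(x_star, 3))            # the composite optimum lies at x = 2
```

Note that the substitution f3 = 1/f2 only preserves the location of the optimum when f2 is strictly positive, as it is here.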

Other principles for defining the objective function are the principle of accountability, the principle of profit orientation, and the principle of optimal control of equation states. The profit objective function expresses the profit contribution that can be obtained from operating the process. It is often defined as the difference between gross revenue and variable cost

f = Σi ci Oi − Σj cj Rj    (3)

where c denotes prices and costs, O is the output volume, and R is the raw material volume. The cost objective function (4) expresses the costs associated with operating the process. Usually only variable costs that are controllable are included in the cost objective function. The objective function to be minimized is then given as

f = Σi ci Pi    (4)

Main elements of the optimization problem are: • The choice of optimal criteria. • The mathematical model. • Constraints on state and control variables. • The choice of a suitable method for solution of optimization problem.

where ci is the cost per unit of production volume from the i-th process. A dynamic optimization problem has an objective function which is an integral of the cost-rate function over time or another changing

978-1-4577-1868-7/12/$26.00 ©2012 IEEE




variable. It is called a functional. Very often the functional expresses one of the following types of dynamic optimization problems:
• the minimum time,
• the minimum cost (the maximum profit),
• the optimal continuous operation.
The aim of a dynamic optimization problem can be expressed, e.g., by a functional which is minimized

J = ∫(t0 to tf) F(x1, x2, ..., xm, u1, u2, ..., un, t) dt    (5)
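For a candidate control trajectory, the functional (5) can be evaluated numerically by quadrature; a minimal sketch, with a hypothetical cost rate F(x, u, t) = x² + u² and one state and one control:

```python
import numpy as np

# Hypothetical cost rate F(x, u, t) = x^2 + u^2 for one state, one control.
t = np.linspace(0.0, 1.0, 1001)      # time grid on [t0, tf] = [0, 1]
u = np.ones_like(t)                  # a candidate control trajectory
x = t                                # state response for dx/dt = u, x(0) = 0
F = x**2 + u**2                      # integrand of the functional (5)

# Trapezoidal quadrature of J = integral of F dt over [t0, tf].
J = np.sum(0.5 * (F[:-1] + F[1:]) * np.diff(t))
print(round(J, 4))                   # exact value is 1/3 + 1 = 4/3
```

Comparing J for different candidate trajectories is the elementary operation behind direct numerical methods for dynamic optimization.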

This is called an optimal trajectory problem because the problem is to select a path or trajectory u* of the control variables between two boundary conditions which minimizes J over the period tf − t0, where t is time.
2) Mathematical model: A mathematical model is a mathematical representation of the studied process (physical, economic, production process). Each process is a transformation of inputs to outputs. This transformation can be described, for example, by linear or nonlinear functions for static (6) and dynamic processes (7)

xj = fj(u)    (6)

in continuous-time form

dxj/dt = f̄j(x, u, t)    (7)

including boundary conditions, or in discrete-time form

xj(t0 + Δt) = Tj(x(0), u(0), t0)    (8)

where u is the n-dimensional vector of control variables, x is the m-dimensional vector of state variables, f, f̄ are process model functions, and T is a function which represents the effect of the control variables at past intervals t0. Note that (8) is a recurrence relationship, i.e., the same equation holds for all time intervals t0, t0 + Δt, t0 + 2Δt, ....
3) Constraints on state and control variables: Constraints are always present in the optimization of real processes. In a sense, constraints on state and control variables may be thought of as extensions of the physical process model, because constraints define the relationships between state and control variables or the allowed region of process operation. Also, the effect of constraints is generally to degrade the quality of optimal process operation. The different types of constraints can be divided into five groups:
• Hard constraints on the limits of control variables.
• Soft constraints on the limits of control variables.
• Constraints on state and control variables expressed by functions.
• Constraints on state and control variables.
• Initial and terminal boundary values for dynamic optimization.
Hard constraints on the limits of control variables prescribe specific upper and/or lower limits for control variables, usually by inequalities

CiL ≤ ui ≤ CiU    (9)
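The recurrence (8) together with the hard bounds (9) can be sketched in a few lines; the scalar model below and its coefficients are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical scalar discrete-time model x(t + dt) = 0.9 x + 0.5 u,
# an instance of recurrence (8), with the hard control bounds of (9).
a, b = 0.9, 0.5
u_lo, u_hi = -1.0, 1.0                     # C_i^L and C_i^U

x = 0.0
u_requested = 3.0                          # controller asks for more than allowed
for _ in range(50):
    u = np.clip(u_requested, u_lo, u_hi)   # enforce C^L <= u <= C^U
    x = a * x + b * u                      # same equation at every interval

# With u saturated at the upper bound, x converges toward b*u/(1-a) = 5.
print(round(x, 3))
```

The hard constraint is enforced simply by clipping the requested control before it enters the recurrence, so the state can never be driven by an inadmissible input.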


By introducing positive slack variables, the constraints may be expressed as equalities

ui − ZiL = CiL,   ui + ZiU = CiU    (10)

where CiL, CiU are the lower/upper constraint limits for the i-th control variable and ZiL, ZiU are slack variables. Soft constraints on the limits of control variables do not prohibit the control variables from exceeding the constraint limit. However, the quality of process operation will deteriorate rapidly if the constraint limit is exceeded to any significant extent. Soft constraints may be implemented indirectly, i.e., by modifying the objective function to reflect the penalty imposed by a significant deviation of a control variable beyond the constraint limit. A possible penalty function to be added to the objective function is

P = Σi Ki (ui / CiU)^Mi    (11)

where Ki is a positive constant and Mi is a positive integer. The modified objective function to be minimized is then F = f + P. Constraints on state and control variables have the general form (12), (13)

Sj(x, u) ≥ SjL    (12)

Sj(x, u) ≤ SjU    (13)

for j = 1, ..., r, where SjU, SjL are the constraint limits. Again, positive slack variables Zj can be introduced and the inequalities (12), (13) transformed into

Gj(x, u, Zj) = 0    (14)

To express constraints on state and control variables as functions of control variables only, it may be possible to substitute the process model (6) into (14) to eliminate the state variables. The general form of the constraint on state and control variables expressed only as a function of control variables is then

gj(u, Zj) = 0    (15)

When the state variables cannot be eliminated, the objective function f to be optimized may be modified by the Lagrange multiplier technique. If λ is an r-dimensional vector of Lagrange multipliers, the modified objective function subject to the constraints (14) or (15) is

F = f + Σ(j=1 to r) λj Gj(x, u)    (16)
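The stationarity of the multiplier-augmented objective (16) can be worked through symbolically; a minimal SymPy sketch, with a hypothetical objective f = x² + u² and a single equality constraint G = x + u − 2 = 0:

```python
import sympy as sp

# Hypothetical problem: minimize f = x^2 + u^2 subject to G = x + u - 2 = 0,
# using the augmented objective F = f + lam*G as in (16).
x, u, lam = sp.symbols('x u lam', real=True)
f = x**2 + u**2
G = x + u - 2
F = f + lam * G

# Stationarity of F in x, u and lam yields the candidate optimum.
sol = sp.solve([sp.diff(F, x), sp.diff(F, u), sp.diff(F, lam)],
               [x, u, lam], dict=True)[0]
print(sol)   # x = 1, u = 1, lam = -2
```

Setting the partial derivative with respect to λ to zero recovers the constraint itself, so the stationary point automatically satisfies G = 0.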

Initial and terminal boundary values for dynamic optimization for the specified state variables are given by

x(t0) = x0,   x(tf) = xf    (17)

where t0 and tf are the initial and final times, and x0 and xf are constants.

2012 13th International Carpathian Control Conference (ICCC)

4) The choice of a suitable method for solution of optimization problems: The choice of a suitable method to solve an optimization problem is a difficult problem because it depends on multiple factors. If the optimality criterion, mathematical model, and constraints are known, the choice of optimization method is easier. Generally, optimization problems can be divided into:
• static optimization problems,
• dynamic optimization problems.
If the variables in the objective function, models, and constraints do not depend on time, then the static optimization problem can be solved by the following static optimization methods:
• classical mathematical analysis,
• methods of linear programming,
• non-linear programming.
If the objective function and constraints are linear functions, the problem is suited to linear programming. If some function or constraint is not linear, the problem can be solved by methods from the group of nonlinear programming. If some variables in the objective function (functional), models, or constraints depend on time, then the problem can be solved by the following dynamic optimization methods or techniques:
• Calculus of variations
• Pontryagin's principle
• Dynamic programming.
Depending on the way optimization problems are solved, the optimization methods are divided into:
• analytical methods,
• numerical or iterative methods.
Analytical methods are rarely used for solving optimization tasks with constraints and additional conditions. In these cases, numerical methods dominate, supported by a large set of specialized programs (software). Managers at various levels of processes are interested in correct decisions. At the least, they should have an interest in optimal decisions. Linear and nonlinear programming are used for optimal production planning, optimal transport, or optimal distribution of products from producer to consumer.
Similarly, optimal source division, the search for optimal strategies, optimal control of equipment, optimal control of communication processes, etc., are often solved as dynamic optimization problems – see Fig. 1. Optimal control implementation is the application of optimization problem solutions to the control of processes, as in [1].
II. HISTORY OF OPTIMIZATION
An optimization problem can be represented in the following way. Given: a function f : A → R from some set A to the real numbers. Sought: an element x* in A such that f(x*) ≤ f(x) for all x in A (minimization) or such that f(x*) ≥ f(x) for all x in A (maximization).

Fig. 1. Main fields of use of optimization problems, depending on the level of management or process control.

Typically, A is some subset of the Euclidean space Rn, often specified by a set of constraints, equalities, or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the set of feasible solutions. A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution. By convention, the standard form of an optimization problem is stated in terms of minimization. Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima, where a local minimum x* is defined as a point for which there exists some δ > 0 so that for all x such that

||x − x*|| ≤ δ    (18)

f(x*) ≤ f(x)    (19)

the expression holds; that is to say, in some region around x* all of the function values are greater than or equal to the value at that point. Local maxima are defined similarly.
A. Conditions of optimality for an objective function without constraints
A large number of algorithms proposed for solving non-convex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions and will treat the former as actual solutions to the original problem. The branch of applied mathematics and numerical analysis concerned with the development of deterministic algorithms capable of guaranteeing convergence in finite time to the actual optimal solution of a non-convex problem is called global optimization.
1) Necessary conditions for optimality: One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero. More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at



an interior optimum is called a 'first-order condition' or a set of first-order conditions.

Theorem 1 (Fermat). Let f : (a, b) → R be a function and suppose that

x* ∈ (a, b)    (20)

is a local extremum of f(x). If f(x) is differentiable at x*, then

f'(x*) = 0.    (21)

Similarly, if the vector x ∈ Rn and f(x) is differentiable at a local extremum x*, then the gradient at the extremum is zero,

fx(x*) = 0.    (22)

2) Sufficient conditions for optimality: Let Fxx be the Hessian of a twice differentiable function f(x),

Fxx = [∂²f(x) / (∂xi ∂xj)]    (23)

The sufficient condition for a minimum states that this matrix must be positive definite, i.e., have only positive eigenvalues. While the first derivative test identifies points that might be optima, it does not distinguish a point which is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints (called the bordered Hessian) in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions'. If a candidate solution satisfies the first-order conditions, then satisfaction of the second-order conditions as well is sufficient to establish at least local optimality. Further, critical points can be classified using the definiteness of the Hessian matrix: if the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point.

B. Method of Lagrange multipliers

Constrained problems can often be transformed into unconstrained problems using the method of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems. In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange, 1736 – 1813) provides a strategy for finding maxima and minima of a function subject to constraints. For instance, consider the optimization problem to maximize the objective function f(x, u) subject to the constraint

g(x, u) = c.    (24)

Let us introduce a new variable λ called a Lagrange multiplier, and study the Lagrange function defined by

L(x, u, λ) = f(x, u) + λ (g(x, u) − c),    (25)

where the λ term may be either added or subtracted. If f(x, u) is a maximum of the original constrained problem, then there exists λ such that (x, u, λ) is a stationary point of the Lagrange function (stationary points are those points where the partial derivatives of L are zero). However, not all stationary points yield a solution of the original problem. Thus, the method of Lagrange multipliers yields a necessary condition for optimality in constrained problems [2], [3], [4], [5].

C. Linear programming

Linear programming is a mathematical method for determining a way to achieve the best outcome (such as maximum profit or lowest cost) in a given mathematical model for some list of requirements represented as linear relationships. Linear programming arose from mathematical models developed during World War II to plan expenditures and returns in order to reduce costs to the army and increase losses to the enemy. It was kept secret until 1947. After that, many industries found its use in their daily planning. The founders of this subject are Leonid Kantorovich, a Russian mathematician who developed linear programming problems in 1939, George Bernard Dantzig (1914 – 2005), who published the simplex method in 1947 and later works [6], [7], [8], and John von Neumann, who developed the theory of duality in the same year. Linear programs are problems that can be expressed in canonical form:

max(x) cT x    (26a)
s.t. Ax ≤ b,    (26b)
x ≥ 0.    (26c)

with the corresponding symmetric dual problem

min(y) bT y    (27a)
s.t. AT y ≥ c,    (27b)
y ≥ 0.    (27c)

where x represents the vector of variables (to be determined), b, c are vectors of (known) coefficients, and A is a (known) matrix of coefficients. The expression to be maximized or minimized is called the objective function (cT x in this case). The inequalities Ax ≤ b are the constraints which specify a convex polytope over which the objective function is to be optimized. (In this context, two vectors are comparable when every entry in one is less than or equal to the corresponding entry in the other; otherwise, they are incomparable.) The basic procedures for computing the optimal variables x* are given by the so-called simplex algorithm or by interior-point methods [9], [10], [11].
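A linear program in the canonical form (26) can be solved directly with SciPy's `linprog`; since `linprog` minimizes, the objective vector is negated to perform maximization. The data below are a hypothetical two-variable example:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical LP in canonical form (26): max c^T x, s.t. Ax <= b, x >= 0.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so pass -c; the default bounds already enforce x >= 0.
res = linprog(-c, A_ub=A, b_ub=b, method='highs')
print(res.x, -res.fun)   # optimal point (2, 6) and maximal objective 36
```

The `'highs'` method is an interior-point/simplex solver family, matching the two basic procedures mentioned above.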


TABLE I
SURVEY OF OFTEN-USED NLP METHODS

Group: One-dimensional problems
  Subgroup: Passive finding of extreme
    • Passive finding of extreme
    • Extended passive method
  Subgroup: Direct methods for finding of extreme
    • Fibonacci method
    • Golden cut method
    • Quadratic interpolation
    • Newton–Raphson method

Group: Unconstrained multidimensional problems
  Subgroup: Nonderivative direct methods
    • Probe method
    • Rosenbrock method
    • Simplex method
  Subgroup: First-order gradient methods
    • Simple gradient method
    • Relaxed gradient method
    • PARTAN method
    • Modified gradient methods
  Subgroup: Second-order gradient methods
    • Newton method
    • Fletcher–Powell method
    • Zoutendijk method
    • Goldfarb method
    • Broyden method

Group: Constrained multidimensional problems
  Subgroup: Equality constraints
    • Applied gradient method
    • Applied Newton method
  Subgroup: Inequality constraints
    • Penalty function
    • Method of linear approximation
    • Gradient method for inequalities
    • Carroll method
    • Rosenbrock method
    • Quadratic programming

Linear programming can be applied to various fields of study. It is used most extensively in business and economics, but can also be utilized for some engineering problems. Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design [12].

D. Nonlinear programming

In mathematics, nonlinear programming (NLP) is the process of solving a system of equalities and inequalities, collectively termed constraints, over a set of unknown real variables, along with an objective function to be maximized or minimized, where some of the constraints or the objective function are nonlinear [13], [14], [15]. Many iterative methods have been used in this field. They can be divided into groups and subgroups – see Tab. I [10].

E. Multi-objective optimization

Multi-objective optimization (or multi-objective programming) [12], [15], [16], also known as multi-criteria or multi-attribute optimization, aims to optimize simultaneously two or more conflicting objectives subject to certain constraints.
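One common way to obtain a single compromise between conflicting objectives is weighted-sum scalarization, which reduces the multi-objective problem to an ordinary nonlinear program; a minimal SciPy sketch, with two hypothetical objectives of one design variable:

```python
import numpy as np
from scipy.optimize import minimize

# Two hypothetical conflicting objectives of a scalar design variable x:
mu1 = lambda x: (x[0] - 1.0) ** 2       # best at x = 1
mu2 = lambda x: (x[0] - 3.0) ** 2       # best at x = 3

# Weighted-sum scalarization: each weight pair yields one compromise point.
def pareto_point(w1, w2):
    res = minimize(lambda x: w1 * mu1(x) + w2 * mu2(x), x0=[0.0])
    return res.x[0]

for w in (0.9, 0.5, 0.1):
    print(round(pareto_point(w, 1.0 - w), 3))   # moves from 1 toward 3
```

Sweeping the weights traces out a set of trade-off solutions, which is one (though not the only) way to approximate the Pareto set defined below.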

Multi-objective optimization problems can be found in various fields: product and process design, finance, aircraft design, the oil and gas industry, automobile design, or wherever optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Maximizing profit and minimizing the cost of a product, maximizing performance and minimizing fuel consumption of a vehicle, and minimizing weight while maximizing the strength of a particular component are examples of multi-objective optimization problems. For nontrivial multi-objective problems, one cannot identify a single solution that simultaneously optimizes each objective. While searching for solutions, one reaches points such that, when attempting to improve an objective further, other objectives get worse as a result. A tentative solution is called non-dominated, Pareto optimal, or Pareto efficient if it cannot be eliminated from consideration by replacing it with another solution which improves an objective without worsening another one. Finding such non-dominated solutions and quantifying the trade-offs in satisfying the different objectives is the goal when setting up and solving a multi-objective optimization problem. In mathematical terms, the multi-objective problem can be written as:

min(x) [μ1(x), μ2(x), ..., μn(x)]T    (28a)
s.t. g(x) ≤ 0,    (28b)
h(x) = 0,    (28c)
xl ≤ x ≤ xu    (28d)

where μi(x) is the i-th objective function, g and h are the inequality and equality constraints, respectively, and x is the vector of optimization or decision variables. The solution to the above problem is a set of Pareto points. Thus, instead of being unique, the solution to a multi-objective problem is a possibly infinite set of Pareto points. A design point in objective space μ is termed Pareto optimal if there does not exist another feasible design objective vector μ* such that μ*i ≤ μi for all i, and μ*i < μi for at least one index i.

F. Calculus of variations

Calculus of variations is a field of mathematics that deals with extremizing functionals, as opposed to ordinary calculus, which deals with functions. A functional is usually a mapping from a set of functions to the real numbers. Functionals are often formed as definite integrals involving unknown functions and their derivatives. The interest is in extremal functions that make the functional attain a maximum or minimum value, or in stationary functions, those where the rate of change of the functional is precisely zero. In the calculus of variations, the Euler–Lagrange equation, Euler's equation [17], or Lagrange's equation, is a differential equation whose solutions are the functions for which a given functional is stationary. Euler's (1707 – 1783) contributions [17] began in 1733, and his Elementa Calculi Variationum gave the science its name.



Lagrange contributed extensively to the theory, and Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima.
1) The Euler–Lagrange equation: Consider the functional

J[f] = ∫(x1 to x2) L(x, f, f') dx.    (29)

The function f(x) should have at least one derivative for the functional to be well defined. Further, if the functional J[f] attains a local minimum at f*, then the optimal trajectory is given by the solution of the Euler–Lagrange equation

∂L/∂f − (d/dx)(∂L/∂f') = 0.    (30)

In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal f. The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum. A sufficient condition can be expressed with the help of Legendre's condition [18]

Lf'f'(x, f, f') ≥ 0    (31)

if f* is a trajectory that minimizes (for maximization, the opposite inequality) some measure of performance (functional) within prescribed constraint boundaries. Lev Pontryagin, Ralph Rockafellar, and Clarke developed new mathematical tools for optimal control theory, a generalization of the calculus of variations [19].

G. Optimal control

Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition, also known as Pontryagin's minimum principle or simply Pontryagin's principle [20]) or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition). Some important milestones in the development of optimal control in the 20th century include the formulation of dynamic programming by Richard Bellman (1920 – 1984) in the 1950s, the development of the minimum principle by Lev Pontryagin (1908 – 1988) and co-workers also in the 1950s, and the formulation of the linear quadratic regulator and the Kalman filter by Rudolf Kalman (b. 1930) in the 1960s [12].
1) Optimal control using the variational approach: Find the control vector trajectory u that minimizes the performance index

min(u) J = φ(x(tf)) + ∫(t0 to tf) L(x, u, t) dt    (32a)
s.t. dx/dt = f(x, u),    (32b)
x(t0) = x0    (32c)

where t0, tf define the time interval of interest, x is the state vector, φ is a terminal cost function, L is an intermediate cost function, and f is a vector field. Note that equations (32b) and (32c) represent the dynamics of the system and its initial state condition, respectively. If L(x, u, t) = 0, the problem is known as the Mayer problem; if φ(x(tf)) = 0, it is known as the Lagrange problem. Note that the performance index J = J(u) is a functional, i.e., a rule of correspondence that assigns a real value to each function u in a class, and it is the tool used in this section to derive necessary optimality conditions for the minimization of J(u). Let us adjoin the constraints to the performance index with a time-varying Lagrange multiplier vector function λ(t):

J = φ(x(tf)) + ∫(t0 to tf) [L(x, u, t) + λT (f(x, u, t) − ẋ)] dt    (33)

Define the Hamiltonian function H as follows:

H(x, u, λ, t) = L(x, u, t) + λT f(x, u, t)    (34)

Since the Lagrange multipliers are arbitrary, they can be selected to make the coefficients of δx(t) and δx(tf) equal to zero, as follows:

dλ/dt = −∂H/∂x    (35)

λ(tf) = ∂φ(tf)/∂x    (36)

This choice of λ(t) results in the following expression for the variation of J, assuming that the initial state is fixed, so that δx(t0) = 0:

δJ = ∫(t0 to tf) (Hu δu) dt    (37)

For a minimum, it is necessary that δJ = 0. This gives the stationarity condition

∂H/∂u = 0.    (38)

Equations (32b), (35), (38) are the first-order necessary conditions for a minimum of J. Equation (35) is known as the costate (or adjoint) equation. Equation (36) and the initial state condition (32c) represent the boundary (or transversality) conditions. These necessary optimality conditions, which define a two-point boundary value problem, are very useful, as they allow finding analytical solutions to special types of optimal control problems and defining numerical algorithms to search for solutions in general cases. Moreover, they are useful for checking the extremality of solutions found by computational methods [21].
2) Linear quadratic control: A special case of the general nonlinear optimal control problem given in the previous section is the linear quadratic (LQ) optimal control problem. The LQ problem is stated as follows. Minimize the quadratic


continuous-time cost functional

min(u) J = (1/2) xT(tf) Sf x(tf) + (1/2) ∫(t0 to tf) [xT(t) Q(t) x(t) + uT(t) R(t) u(t)] dt    (39a)
s.t. dx/dt = A(t) x(t) + B(t) u(t),    (39b)
x(t0) = x0    (39c)

A particular form of the LQ problem that arises in many control system problems is that of the linear quadratic regulator (LQR), where all of the matrices (A, B, Q, R) are constant, the initial time is arbitrarily set to zero, and the terminal time is taken as infinity (this last assumption is what is known as the infinite horizon). The LQR problem is stated as follows. Minimize the infinite-horizon quadratic continuous-time cost functional

min(u) J = (1/2) ∫(0 to ∞) [xT(t) Q x(t) + uT(t) R u(t)] dt    (40a)
s.t. dx/dt = A x(t) + B u(t),    (40b)
x(t0) = x0    (40c)

In the finite-horizon case the matrices are restricted in that Q and R are positive semi-definite and positive definite, respectively. In the infinite-horizon case, the matrices Q and R are not only positive semi-definite and positive definite, respectively, but are also constant. These additional restrictions on Q and R in the infinite-horizon case are enforced to ensure that the cost functional remains positive. Furthermore, in order to ensure that the cost functional is bounded, an additional restriction is imposed: the pair (A, B) must be controllable. Note that the LQ or LQR cost functional can be thought of physically as attempting to minimize the control energy (measured as a quadratic form) [21]. The infinite-horizon problem (i.e., LQR) may seem overly restrictive and essentially useless because it assumes that the operator is driving the system to the zero state and hence driving the output of the system to zero. This is indeed correct. However, the problem of driving the output to a desired nonzero level can be solved after the zero-output problem. In fact, it can be proved that this secondary LQR problem can be solved in a very straightforward manner.
It has been shown in classical optimal control theory that the LQ (or LQR) optimal control has the feedback form

u(t) = −K(t) x(t)    (41)

where K(t) is a properly dimensioned matrix given as

K(t) = R−1 BT S(t)    (42)

and S(t) is the solution of the differential Riccati equation

Ṡ(t) = −S(t)A − AT S(t) + S(t)BR−1 BT S(t) − Q    (43)

For the finite-horizon LQ problem, the Riccati equation is integrated backward in time using the terminal boundary condition

S(tf) = Sf    (44)

For the infinite-horizon LQR problem, the differential Riccati equation is replaced with the algebraic Riccati equation (ARE) given as

SA + AT S − SBR−1 BT S + Q = 0    (45)
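The ARE (45) and the gain (42) can be computed with SciPy's `solve_continuous_are`; a minimal sketch for a double-integrator plant (A, B, Q, R below are illustrative choices, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR for a double integrator; the weights Q, R are chosen for illustration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the ARE (45): S A + A^T S - S B R^{-1} B^T S + Q = 0.
S = solve_continuous_are(A, B, Q, R)

# Feedback gain (42): K = R^{-1} B^T S, giving the control law (41) u = -K x.
K = np.linalg.solve(R, B.T @ S)
print(np.round(K, 4))                 # for this plant K = [1, sqrt(3)]

# The closed-loop matrix A - B K should be Hurwitz (all eigenvalues stable).
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))
```

For this plant the ARE can also be solved by hand, giving S = [[√3, 1], [1, √3]], which matches the numerical solution.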

Moreover, if the pair (A, C) is observable, where CT C = Q, then the closed-loop system is asymptotically stable. This is an important result, as the linear quadratic regulator provides a way of stabilizing any linear system that is stabilizable. Of course, it is possible to solve the simplest cases without the Riccati equation, or by using LMIs. Since the ARE [12], [22] arises from the infinite-horizon problem, the matrices A, B, Q, R are all constant. There are in general two solutions to the algebraic Riccati equation, and the positive definite (or positive semi-definite) solution is the one used to compute the feedback gain. It is noted that the LQ (LQR) problem was elegantly solved by Rudolf Kalman [23]. Besides predictive control, the LQ problem has been used in robust control [24], [25], [26], diagnostic systems [27], [28], etc.

H. Pontryagin's principle

Pontryagin (1908 – 1988) and his co-workers worked in optimal control theory. His maximum principle is fundamental to the modern theory of optimization. He also introduced the idea of bang-bang control to describe situations where either the maximum action should be applied to a system, or none. While the calculus of variations finds the optimal control u inside the constraints, Pontryagin's principle makes it possible to find it on the boundaries of the allowed values. Let a dynamical system be described by the differential equations

ẋ = f(x, u)    (46)

with initial and terminal conditions

x(t0) = x0,   x(tf) = xf    (47)

where function f is defined for all x(t) ∈ Rn and u(t) ∈ Rr . Let the control variable is within a set u∈U

(48)

Then from all possible control u ∈ U which steer dynamical system (46) from x0 to xf , u must be chosen such that functional  J(u) =

tf

t0

f0 (x(t), u(t))dt

(49)

attains its extreme. If the system (46) is extended with a new state

dx0/dt = dJ/dt ≡ f0(x, u)  (50)

then the original system (46) has dimension n + 1 and the initial and terminal conditions are as follows

x0^T = (0, x0^T),  xf^T = (J, xf^T)  (51)

2012 13th International Carpathian Control Conference (ICCC)

After defining the Hamiltonian function H and using the calculus of variations with Lagrange multipliers, the adjoint system equations take the following form

λ̇ = −∂H/∂x  (52)
ẋ = ∂H/∂λ  (53)

together with the stationarity condition

∂H/∂u = 0  (54)

The extremum of the Hamiltonian function H depends on u ∈ U only for fixed λ, x. For fixed λ, x, the maximum of the Hamiltonian function H, denoted by M(λ, x), attains the supremum

M(λ, x) = sup_{u(t)∈U} H(λ, x, u)  (55)

It was shown by Pontryagin and co-workers that in this case, the necessary conditions (46), (52) and (53) still hold (see Section II-G1), but the stationarity condition (54) has to be replaced by (55). In other words, the optimal control must be chosen such that H attains its extremum during the period tf − t0 [29]. The principle was first known as Pontryagin's maximum principle and its proof is historically based on maximizing the Hamiltonian. The initial application of this principle was to the maximization of the terminal velocity of a rocket. However, as it was subsequently mostly used for minimization of a performance index, it is also referred to as the minimum principle. Pontryagin's book solved the problem of minimizing a performance index [30]. The principle states informally that the Hamiltonian must be minimized over U, the set of all permissible controls. If u*(t) ∈ U is the optimal control for the problem, then the principle states that

H(x*(t), u*(t), λ*(t), t) ≤ H(x*(t), u(t), λ*(t), t)  (56)

for t ∈ [t0, tf] and ∀u ∈ U. Here, x*(t) is the optimal state trajectory and λ*(t) is the optimal co-state trajectory. The result was first successfully applied to minimum-time problems where the input control is constrained, but it can also be useful in studying state-constrained problems. Special conditions for the Hamiltonian can also be derived. When the final time tf is fixed and the Hamiltonian does not depend explicitly on time,

∂H/∂t ≡ 0,  (57)

then

H(x*(t), u*(t), λ*(t), t) ≡ constant  (58)

and if the final time is free, then

H(x*(t), u*(t), λ*(t)) ≡ 0  (59)
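The pointwise minimization in (56) explains the bang-bang behaviour when H is linear in a bounded input: the minimizer sits on a boundary of U. A minimal numeric sketch (our own toy, assuming a scalar input with |u| ≤ u_max):

```python
import numpy as np

u_max = 1.0

def u_star(s):
    # Pointwise minimizer of the u-dependent part H(u) = s*u over |u| <= u_max,
    # where s plays the role of the switching function (e.g. B^T lambda)
    return -u_max * np.sign(s)

# Check the minimum principle (56) against a dense grid of admissible controls
grid = np.linspace(-u_max, u_max, 201)
rng = np.random.default_rng(0)
for s in rng.normal(size=20):      # random values of the switching function
    assert s * u_star(s) <= (s * grid).min() + 1e-12
```

When s = 0 the sign function gives the singular value u* = 0, mirroring the singular case handled separately in the theory.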

When satisfied along a trajectory, Pontryagin's minimum principle is a necessary condition for an optimum. The Hamilton–Jacobi–Bellman equation provides sufficient conditions for an optimum, but this condition must be satisfied over the whole of the state space.

I. Dynamic programming

The key idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems) and then combine the solutions of the subproblems to reach an overall solution. Often, many of these subproblems are exactly the same. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations. This is especially useful when the number of repeated subproblems is exponentially large. Top-down dynamic programming simply means storing the results of certain calculations which are later used again, since the completed calculation is a subproblem of a larger calculation. Bottom-up dynamic programming involves formulating a complex calculation as a recursive series of simpler calculations. This breaks a dynamic optimization problem into simpler subproblems, as prescribed by Bellman's Principle of Optimality (R. E. Bellman, 1920 – 1984). The Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision [31]. The Bellman equation was first applied to engineering control theory and to other topics in applied mathematics, and subsequently became an important tool in economic theory. Almost any problem that can be solved using optimal control theory can also be solved by analyzing the appropriate Bellman equation. However, the term "Bellman equation" usually refers to the dynamic programming equation associated with discrete-time optimization problems.
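As an illustrative sketch (our own toy problem, not from the paper), the backward recursion prescribed by the Principle of Optimality for a small discrete-time control problem looks as follows:

```python
# Toy discrete-time problem (our own example): minimize the sum of stage costs
# for x_{k+1} = x_k + u_k on a small state grid, by Bellman's backward recursion.
N = 5                        # horizon
states = range(-3, 4)        # admissible states
controls = (-1, 0, 1)        # admissible controls

def stage_cost(x, u):
    return x * x + u * u     # quadratic stage cost

# Backward sweep: J[k][x] = min_u [ stage_cost(x, u) + J[k+1][x+u] ]
J = {N: {x: x * x for x in states}}    # terminal cost
policy = {}
for k in range(N - 1, -1, -1):
    J[k], policy[k] = {}, {}
    for x in states:
        J[k][x], policy[k][x] = min(
            (stage_cost(x, u) + J[k + 1][x + u], u)
            for u in controls if x + u in states)

# Applying the stored optimal policy from x0 = 3 drives the state to the origin
x, traj = 3, [3]
for k in range(N):
    x += policy[k][x]
    traj.append(x)
```

Each state–stage pair is solved exactly once and reused, which is the computational saving the text describes.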
In continuous-time optimization problems, the analogous equation is a partial differential equation, usually called the Hamilton–Jacobi–Bellman (HJB) equation

−∂J*/∂t = min_{u∈U} [ f0(x*, u) + (∂J*/∂x*)^T f(x*, u) ]  (60)

with an appropriate boundary condition. The link between Pontryagin's approach and the HJB equation is that the adjoint variables are the sensitivities of the cost function with respect to the states:

λ = ∂J*/∂x  (61)

The term to be minimised in (60) is the Hamiltonian H. Thus, the partial differential equation (60) represents the time evolution of the adjoints:

λ̇ = d/dt (∂J*/∂x) = ∂/∂x (∂J*/∂t) = −∂H_min/∂x  (62)

where H_min is the minimum value of the Hamiltonian.
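As a worked sketch connecting (60) back to Section II-G (our own derivation, assuming the standard LQ data ẋ = Ax + Bu, f0 = x^T Q x + u^T R u and symmetric S), substituting a quadratic value function ansatz into the HJB equation recovers the Riccati equation (43):

```latex
% Ansatz: J^*(x,t) = x^T S(t)\,x, \quad \partial J^*/\partial t = x^T \dot S\, x,
%         \quad \partial J^*/\partial x = 2 S x
-\,x^T \dot S\, x = \min_u \left[\, x^T Q x + u^T R u + 2 x^T S (A x + B u) \,\right]
% Inner minimization: 2 R u + 2 B^T S x = 0 \;\Rightarrow\; u^* = -R^{-1} B^T S x
% (cf. the gain (42)). Substituting u^* back:
-\,x^T \dot S\, x = x^T \left( Q + S A + A^T S - S B R^{-1} B^T S \right) x
% Holding for all x, this is exactly (43):
\dot S = -S A - A^T S + S B R^{-1} B^T S - Q
```

The inner minimization reproduces the feedback law u* = −Kx, so the HJB route and the Riccati route of Section II-G agree term by term.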


III. OPTIMAL CONTROL OF WELL-DEFINED PROCESSES

A. Steady state optimal control

Steady-state processes are approximations to the real world in which time is assumed not to be a parameter. Steady-state physical process models also assume that the processes are always in equilibrium. Steady-state approximations of process operations may be used for slow processes or processes that normally operate around a fixed set of conditions. A perturbation is a change of the process state that requires a new computation of the optimal conditions. A perturbation can be continuous or discontinuous. The time period Tp between jumps of the input variables is very important. If Ttt is the transition time of the process after a jump and the following inequality holds

Ttt ≪ Tp  (63)

then it is possible to consider the process to be at steady state, and it is convenient to use a static model and static optimization methods for the solution of optimal control.

1) Classification of optimal process control systems: By definition, the four characteristics of the objective function, physical process model, and constraints of a well-defined process are: steady state, continuous-valued, deterministic, and well behaved. There are various structures and characteristics of optimal control systems. From the standpoint of optimal process control, we can divide them into analytical and numerical methods. The main advantage of an analytical solution is that the control law for a process is expressed in explicit form. Unfortunately, an analytical solution is possible only in simple cases. Usually, a complex system of non-linear equations requires iterative numerical methods. The second reason for numerical design is the fact that the method or theory of optimal control is based on iterative procedures. Another possible division of the methods is as follows:
• Control without constraints
• Control with linear constraints and a linear objective function
• Control with linear constraints and a non-linear objective function, or combinations thereof.
In general, constraints reduce the free space for the optimum. On the contrary, in many cases a finite optimum is impossible to find without constraints, e.g., if the objective function is linear. The optimal control approach can be divided into:
• Programmed optimal control
• Feed forward optimal control
• Feed forward optimal control with updated model
• Feedback optimal control without model
• Feedback optimal control with incremental model
• Combined feed forward and feedback optimal control.
On a hierarchical basis, optimal control systems can be divided into:
• Optimal control with one level
• Optimal control with more levels.
Optimal control with more levels solves a defined optimization task on each level. Usually, the lower optimization level

solves the optimal control equations for technological equipment and the higher level optimizes a group of equipments, etc. [32]. Other classifications of optimal control depend on the methods and principles used. Below, selected elements will be described.

2) Optimal process control with nonlinear objective function without constraints: The physical process model is given by

x = f(u)  (64)

The objective function is

F = F(wu, wx, u, x)  (65)

where u is the vector of control variables, wu, wx are the desired operating points, and x is the vector of state variables. Substituting the values of x from (64) into (65), the objective function expressed in terms of u only, in the form G, is

G = G(wu, wx, u)  (66)

The optimal control equation is given by the necessary condition (22), which means solving the following system of equations

∂G/∂u = 0.  (67)

Usually, iterative techniques must be used to solve the nonlinear equations (67). Otherwise, it is possible to obtain the optimal control law in analytical form

u = Kbc  (68)

where the matrix K(n, n + m) and the vectors b^T = (wx^T, wu^T), c are constant. The sufficient conditions (23) must be investigated to determine whether the optimum is a maximum or a minimum.

3) Optimal process control with nonlinear objective function with constraints: The constraint may be the physical process model, which is not in a form that can be directly substituted into the objective function. The constraint may also be an equation relating some of the control variables to the other control variables. In this case, it is convenient to use the Lagrange multiplier method [10]. The problem is to optimize the objective function

F = F(u, x)  (69)

subject to the following constraints

gk(u, x) = 0,  k = 1, . . . , p < n.  (70)

Then the Lagrange function is

L = F + Σ_{k=1}^{p} λk gk  (71)

and the optimal control conditions are given by the following partial derivative equations

∂L/∂ui = 0,  i = 1, . . ., n  (72)
∂L/∂λk = 0,  k = 1, . . ., p.  (73)

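A toy numeric sketch of conditions (72)–(73) (the example function and numbers are ours, not from the paper): minimize F = (u1 − 1)² + (u2 − 2)² subject to g = u1 + u2 − 1 = 0. Here the stationarity conditions of L are linear and can be solved directly:

```python
import numpy as np

# Stationarity of L = F + lam*g for F = (u1-1)^2 + (u2-2)^2, g = u1 + u2 - 1:
#   dL/du1  = 2(u1 - 1) + lam = 0
#   dL/du2  = 2(u2 - 2) + lam = 0
#   dL/dlam = u1 + u2 - 1     = 0
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([2.0, 4.0, 1.0])
u1, u2, lam = np.linalg.solve(A, b)
# The optimum (u1, u2) = (0, 1) lies on the constraint line, shifted from
# the unconstrained minimum (1, 2).
```

For a general nonlinear F and g, the same conditions produce a nonlinear system that must be solved iteratively, as the text notes.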

Feedforward optimal control requires the main elements (optimization criterion, mathematical model of the process, constraints) introduced in the Introduction and is particularly suited to well-defined steady-state processes. Fig. 2 shows the feedforward optimal control approach.

Fig. 2. Feedforward optimal control.

4) Evolutionary optimization of poorly defined and well-defined processes: steady state: The evolutionary optimization approach (EVOP) is particularly applicable to steady-state optimization of poorly defined processes. EVOP uses an iterative procedure, which adjusts the control variables in successive moves to arrive at the optimum of the objective function. The objective function and the process may be linear, nonlinear, or even undefined. EVOP can be used for feedback optimal control and for combined feedforward–feedback optimal control (see Fig. 3). In this case, feedback measurements of the state and control variables are used to determine the operating point. For example, if the objective function is linear, the EVOP approach is applied to the objective function and the simulated physical process model to estimate the coefficients ci of the linear objective function and the matrix A. A linear programming (LP) technique is then used to solve the task

Au ≤ b  (74)

to optimize

c^T u  (75)

In order to make the linearization assumption valid, the solution should be limited by the constraints (b) to the neighborhood of the operating point. The results of the LP solution are applied in a feedforward approach to adjust the control variables. This process is repeated over and over again to maintain or to improve the optimum of the objective function [1].

Fig. 3. Combined feedforward and feedback approach.

5) Optimization system with simulation model: The existence of a simulation model allows direct optimization, the so-called principle of optimization by model. The simulation model represents a complex mathematical model of a physical process. This model can be developed as static or dynamic. Such a system is known as optimization with a separate model. The procedure can be characterized as universal and flexible. It has all the advantages which the classic optimization techniques do not possess, because those techniques are closed and often applicable only to highly idealized processes. Methods for dynamic process optimization are flexible and their main advantage is their computational procedures; however, these restrictions reduce universal usage. Therefore, the principle of optimization by model appears as a compromise. It is suitable for slow processes. Its principle [10], [33], [34], [35], [36], [37], [38], [39] is shown in Fig. 4. The optimization algorithm may be based on convenient optimization methods or heuristic methods [40]. Its aim is to control the course of the simulation in such a manner as to ensure that the optimality criterion reaches its extreme and the constraints are fulfilled during the repeated cycles.

Fig. 4. The principle of optimisation by model.

B. Dynamic optimal control

Numerical optimization methods for dynamic problems transform the original dynamic problem into a static NLP formulation which can be solved via common NLP strategies.

1) Sequential approach: The sequential approach is also referred to as Control Vector Parametrization (CVP) in the literature [41], [42], [43] and can be found in a variety of chemical process applications [44], [45], [46], [47], [48]. The main idea behind it is to parametrize the continuous controls using a finite set of decision variables. Typically, a piecewise constant approximation over equally spaced time intervals is chosen for the inputs [44], [45]. Consequently, a general NLP solver iteratively optimizes the objective function by choosing the control variables and by respecting the algebraic constraints. The sequential method is a feasible-path type method, i.e. in every iteration, the solution of the differential equations remains feasible while the performance index is optimized. This leads to a robust solution procedure if feasible initial conditions for all variables are provided. The gradient of the cost function and of the constraints with respect to all optimized variables is estimated by one of the following two approaches: (i) by sensitivity equations of the system, which are integrated together with the process equations, or (ii) by adjoint variables that have to be integrated backwards. The sensitivity equations are found by differentiating the right-hand sides of the process differential equations with respect to the time-invariant parameters and the variables from the discretized inputs [49], [50]. The obvious advantage of the sensitivity approach is that it leads to a very efficient computation of the gradient. The gradient computed by adjoint variables is less accurate than the gradient expressed directly by the sensitivity equations, because the states are approximated during the backward integration. On the other hand, with an increasing number of discretized intervals, the advantage lies on the side of the adjoints because of their computational advantage over the sensitivity equations: (i) in the case of sensitivity equations, a large number of ODEs needs to be solved, as every discretized interval adds one differential equation to the process ODEs; (ii) in the case of adjoints, the number of integrated differential equations does not depend on the number of discretized intervals; it is simply twice the number of process equations plus an additional equation per optimized variable and per constraint, as reported in [50]. Several efficient optimization algorithms for sequential methods can be found in [51], [52], [49].

2) Simultaneous approach: Although sequential methods guarantee an optimal solution by following a feasible path, they can be prohibitively expensive because they tend to converge slowly and require the solution of the differential equations at each iteration.
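The CVP parametrization of the sequential approach can be sketched on a toy problem (our own example: scalar dynamics ẋ = −x + u integrated by explicit Euler, with a derivative-free NLP solver standing in for the gradient-based machinery):

```python
import numpy as np
from scipy.optimize import minimize

# Toy CVP sketch (our own example): scalar system xdot = -x + u, x(0) = 0;
# drive x(tf) to 1 with a small input-effort penalty, u piecewise constant
# on N equally spaced intervals (the CVP parametrization).
N, tf, x0 = 5, 1.0, 0.0
dt, sub = tf / N, 20                     # Euler sub-steps per interval

def simulate(u):
    x = x0
    for uk in u:                         # piecewise-constant input
        for _ in range(sub):
            x += (dt / sub) * (-x + uk)
    return x

def cost(u):
    # terminal tracking + small regularization on the parametrized input
    return (simulate(u) - 1.0) ** 2 + 1e-3 * dt * float(np.sum(np.square(u)))

# Every cost evaluation re-integrates the ODE, so each iterate is feasible
# with respect to the dynamics -- the feasible-path property described above.
res = minimize(cost, np.zeros(N), method="Powell")
u_opt = res.x
```

In a production CVP code the finite-difference/derivative-free search would be replaced by gradients from sensitivity or adjoint equations, as discussed above.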
In the simultaneous approach, the state profiles are approximated in addition to the control profiles; thus the dynamic optimization problem becomes a pure NLP problem expressed by a set of algebraic equations. This NLP problem then converges to the optimum even from an infeasible starting guess. In [53], the idea of orthogonal collocation coupled with a Quasi-Newton method was used to perform simultaneous parameter estimation and integration of non-linear system dynamics. The sensitivity equations for the dependent variables with respect to the parameters along the system dynamics were also replaced by an approximated set of algebraic equations, so the optimization was performed in the subspace of the parameters. A low order polynomial approximation was found to give good accuracy while keeping the dimension of the NLP low. This method proved to be superior in computational efficiency to the other existing parameter estimation algorithms. A similar approach is discussed by Biegler [54], in which orthogonal collocation is also applied to the system of differential equations. The control and state profiles are transformed into a set of algebraic equations, and the optimization strategy then solves the transformed problem. The main improvement of Biegler's simultaneous algorithm over the Hertzberg and Asbjornsen algorithm is a different approximation of the time-varying independent variables: Biegler approximated them by Lagrange polynomials instead of the constant independent variables used in [53]. A generalization of these two collocation methods is presented in [55]. The major difference in this approach is the application of the collocation procedure to convert the ODEs into an approximating set of algebraic equations. This method is also labeled as collocation on finite elements [56], [57]. The continuous independent variables are specified as piecewise constant functions, and the algorithm can specify the number and location of the spline points. This makes the algorithm of [55] slightly more complex and dimensionally larger than Biegler's method. The simultaneous algorithms introduce an approximation of the dynamic system equations in order to avoid the explicit integration process. Hence, the optimization is carried out in the full space of approximated inputs and states. In general, the ODEs are satisfied only at the solution of the optimization problem [44], so this method is called the infeasible-path approach. The approach can be found in several batch applications [58], [59], [60], [61].

3) Software tools: There are numerous software packages (commercial or free) for solving dynamic optimization problems, implemented in various programming environments. MATLAB packages such as the orthogonal collocation based Dynopt [62] or the CVP based DOTcvp [63] are among those available freely.

IV. PROCESS CONTROL EXAMPLES

A. Feed forward optimal control with updated model for a group of technological equipments

The aim of the static optimization is to minimize the consumption of fuel in four parallel manufacturing equipments (tunnel furnaces), which during the period t ∈ (0, tf) must produce a material mass Q. Fig. 5 shows the functional model of n parallel manufacturing processes.

Fig. 5. Functional model of E1, E2, . . . , En manufacturing equipments.

Individual equipments are of various construction and different age, and their efficiency differs as well. Therefore, the i-th equipment has, for performance xi, a specific consumption of energy yi. The dependence of the specific consumption of energy on performance is given in Tables II, III, IV, and V. The technological processes are defined by the following convex functions interpolated from the tables

yi = ai0 + ai1 xi + ai2 xi²,  for i = 1, . . ., n (n = 4).  (76)

TABLE II
SPECIFIC CONSUMPTION OF NATURAL GAS FOR 1st TUNNEL FURNACE
x1 [t.h−1] | 2.53 | 2.76 | 3.68 | 4.6 | 5.06 | 5.52 | 5.98 | 6.44 | 6.9
y1 [m3.t−1] | 139.5 | 133.1 | 117.09 | 116.21 | 107 | 99.4 | 92.9 | 90.9 | 95.7

TABLE III
SPECIFIC CONSUMPTION OF NATURAL GAS FOR 2nd TUNNEL FURNACE
x2 [t.h−1] | 2.53 | 2.76 | 3.76 | 4.73 | 5.24 | 5.69 | 6.15 | 7.52
y2 [m3.t−1] | 144.4 | 131.2 | 118.6 | 111.69 | 106.58 | 104.8 | 102.1 | 112.28

TABLE IV
SPECIFIC CONSUMPTION OF NATURAL GAS FOR 3rd TUNNEL FURNACE
x3 [t.h−1] | 3.6 | 3.97 | 4.04 | 5.12 | 6.19 | 6.43
y3 [m3.t−1] | 129.4 | 118.17 | 115.34 | 110.3 | 108.5 | 107.73

TABLE V
SPECIFIC CONSUMPTION OF NATURAL GAS FOR 4th TUNNEL FURNACE
x4 [t.h−1] | 4.073 | 5.067 | 5.35 | 6.24 | 6.706
y4 [m3.t−1] | 128.38 | 115.86 | 113.46 | 109.35 | 105.18

The optimal control of a non-linear objective function with linear and non-linear constraints is given as the optimization problem

f(x) = Σ_{i=1}^{n} (ai0 + ai1 xi + ai2 xi²)  (77)

subject to the linear and non-linear constraints

g1 ≡ Σ_{i=1}^{n} xi ti − Q = 0  (78)
g2 ≡ ti + Zi^U − tf = 0  (79)
g3 ≡ ti − Zi^L = 0  (80)

The problem of optimal control formulated in this way is convenient to solve by the Lagrange multiplier method. The Lagrange function is

L = f(x) + Σ_{k=1}^{3} λk gk.  (81)

The optimal solution of the task (77)–(80) is then given by the following non-linear equations

∂L/∂xi = 0  (82)
∂L/∂ti = 0  (83)
∂L/∂λk = 0.  (84)

In the simplest case, it is possible to get the optimal control in analytical form if the vector t = (tf, tf, tf, tf) is constant for n = 4. Then the Lagrange function (81) takes the form

L = f(x) + λ(x^T t^T − Q).  (85)

The optimal solution of the optimization problem is given by the solution of the following linear equations

∂L/∂xi = 0,  ∂L/∂λ = 0  (86)

in the form B u* = b, where u = (x1, x2, x3, x4, λ)^T is the vector of control variables and the matrix B and the vector b = (a11, a21, a31, a41, tf) consist of constant elements. The optimal control equation is then

u* = B^{-1} b.  (87)

The solution of the optimal control equation (87) makes it possible to reach the minimal consumption of energy while Q tons of material are produced after time tf. Based on the data from Tables II–V, the optimal control was determined from (87) as u* = (6.483, 5.016, 4.398, 5.995) for t = 30 days (720 h) and the desired production Q = 15763.032 tons of fired material. This way of control minimizes the consumption of fuel, given as y_min = (438 006.53, 387 452.80, 361 798.39, 470 785.25). Comparison with real data from the plant during the same period of one month gave the fuel consumption y = (438 534.60, 468 669.45, 430 316.39, 467 483.46) m3 of gas. The fuel reduction is thus 146 960.90 m3 of gas over the considered period of one month. The optimal process control system can be used to improve the accuracy by updating the physical process model coefficients in real time using measurements. Fig. 6 shows that the control and state variables are measured to provide data for the update of the process model. If the allowable error is exceeded, a new set of optimal control equations can be developed by using the actual values of the coefficients in the matrix A [32].

Fig. 6. Adapting feedforward optimal control by updating models.
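The quadratic coefficients ai0, ai1, ai2 in (76) can be obtained from the tabulated data by least squares; a sketch using the Table II data (the fitted coefficient values are computed here, not quoted from the paper):

```python
import numpy as np

# Table II data: performance x1 [t/h] vs. specific gas consumption y1 [m3/t]
x1 = np.array([2.53, 2.76, 3.68, 4.6, 5.06, 5.52, 5.98, 6.44, 6.9])
y1 = np.array([139.5, 133.1, 117.09, 116.21, 107.0, 99.4, 92.9, 90.9, 95.7])

# Least-squares fit of y1 = a10 + a11*x1 + a12*x1^2, cf. (76);
# np.polyfit returns the highest power first
a12, a11, a10 = np.polyfit(x1, y1, 2)
```

A positive leading coefficient confirms the convexity assumed in the paper for each furnace characteristic.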

B. Feedforward optimal process control using model optimization

The principle of static optimization by model was used to develop a control system for heating slabs in a pushing furnace [33], [35], [36], [37]. The pushing furnace belongs to the group of industrial furnaces with a high energy consumption. Fig. 7 shows a longitudinal cut through the furnace. The furnace has a length of about 34 m. There are seven upper burner zones and two lower burner zones, which are controlled at the stabilization level. The furnace is fueled by a mixture of natural gas, coke gas, and blast furnace gas. The slabs, which are steel blocks with dimensions of about 1.2 × 0.2 × 9 m, are pushed through the furnace longitudinally. The furnace is filled with 25 slabs, which form a "slab band". Whenever a slab is pushed into the furnace, the slab band moves forward and another slab leaves the furnace at the discharging end at the required temperature of approximately 1270 °C.

Fig. 7. Scheme of pushing furnace.

1) Mathematical model: The development of the mathematical model was based on the zone structure. The working space of the furnace was divided into volume zones, which are closed by real surface zones (roof, wall, and slab). The main processes in a volume zone are described as follows. Heat transfer by conduction:

cρ ∂t/∂τ = ∂/∂x (λ ∂t/∂x)  (88)

where c is the specific heat capacity [J kg−1 K−1], ρ is the density [kg m−3], x is the coordinate [m], t is the temperature [°C], λ is the thermal conductivity [W m−1 K−1], and τ is the time [s]. Conduction heat transfer is solved for the roof, wall, and slab elements with the initial and boundary conditions

t(x, 0) = f(x)  (89)
−λ dt/dx = q_in  for x = 0  (90)
−λ dt/dx = q_out  for x = h  (91)

where q_in, q_out are the densities of the input and output heat flows in the element, defined by (92), (93), and h is the thickness of the element. Boundary conditions of the fourth type hold at the boundaries of the wall layers. The heat flows on the lining and the slabs are given by convection and radiation

Qc = α(T1 − T2)S  (92)
Qri = σ [ Σ_{j=1}^{n} TAij Tj^4 − Σ_{j=1}^{n} TAji Ti^4 ]  (93)

where Qc is the convective heat flow [W], α is the convective coefficient [W m−2 K−1], S is the surface [m2], T1, T2 are the temperatures of the volume and real zones respectively [K], σ is the Stefan–Boltzmann constant (5.6703·10−8 W m−2 K−4), Qr is the radiation heat flow [W], T is the temperature of the radiating surface [K], and TAij, TAji are the total heat exchange surfaces between zones i and j.

The temperature of the combustion products in each i-th volume zone was determined by solving the non-linear balance equations

Vi^f Hi^f + Vi^air ci^air ti^air + V_{i+1}^c c_{i+1}^c t_{i+1}^c p_{i+1}^in − Qi^slab − Qi^wall − Qi^water − Vi^c ci^c ti^c pi^out = 0  (94)

where H^f is the calorific value of the fuel gas mixture; V^f, V^air, V^c are the volume flows of fuel, combustion air, and combustion products; c is the specific heat capacity; Q^slab, Q^wall are the total heat flows by convection and radiation for the slab and walls; Q^water is the heat flow to the water-cooled skids; and p^in, p^out are the relative parts of the input and output volume flows of combustion products in the volume zone. The system of equations (88)–(94) was solved for all volume zones during every discrete simulation time period. The simulation model makes it possible to take into account different dimensional configurations of the working space of the furnace and of the slabs, including their production range. The basic input parameters, besides the construction parameters and dimensions, are the volume flows of fuel, the pushing period of slabs, the composition of the fuel mixture (natural, coke, and blast furnace gas), and the output of the furnace. The basic output parameters of the simulation are the temperature distribution in the combustion products, slabs, and lining in the cross-section and along the length of the furnace, the composition of the combustion products, the temperature gradient in the slabs, and the mass of metal lost due to scaling. From the standpoint of optimal control, the vector of control variables is defined as

u = (V1^f, V2^f, . . ., V9^f, Δτ, p_cokegas, p_bgas)  (95)

where Vi^f for i = 1, . . ., 7 are the fuel volume flows for the upper controlled zones and i = 8, 9 for the lower zones, p_cokegas, p_bgas are the percentage contents of coke gas and blast furnace gas, and Δτ is the pushing period of slabs.

2) Optimisation criterion: The aim is to find the optimal control variables u and the optimal trajectory t_slab(x, τ) for the minimum of the following functional

J(u) = k1 J1 + k2 J2 + k3 J3  (96)

where k1, k2, k3 are weighting (price) constants. The first term in (96) expresses the quality requirement on the temperature field of the pushed-out slabs in the following quadratic criterion

J1 = ∫_0^h (tr − t(x, τk))² dx  (97)

where h is the thickness of the slab, τk is the time when the first input slab leaves the furnace, and tr is the desired slab temperature at the moment of exit from the furnace. The second term stands for the fuel consumption

J2 = Σ_{i=1}^{9} ∫_τ ui(τ) dτ  (98)

The third term represents the metal loss due to scaling

J3 = ∫_τ f[t(x, τ), τ, %O2(τ)] dτ  for x = h and x = 0  (99)

where %O2 means the concentration of oxygen in the furnace.

3) Constraints: The constraints were given in the following form:
• The produced mass of hot slabs Q = 800 kt.
• Time of production τp ≤ 7500 h.
• Vi^min ≤ Vi^f ≤ Vi^max, where Vi^min, Vi^max are constants.

4) Optimisation technique: In this case, the simulation model given by equations (88)–(94) was used. The existence of the simulation model allowed direct optimization with the model (see Fig. 4). The optimization procedures were based at first on a probe algorithm, and next an algorithm based on a gradient method with constraints was employed. The effectiveness of the second technique was better. Fig. 8 shows the course of J(u) depending on the optimization steps for the first technique. The values of the functionals (96)–(99) have an economical interpretation, e.g. (96) as the total cost of heating.
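To make (97) concrete, a small sketch (our own, with a hypothetical exit temperature profile, not plant data) evaluating J1 by trapezoidal quadrature over the slab thickness:

```python
import numpy as np

# Hypothetical exit temperature profile t(x, tau_k) across the slab thickness
h = 0.2                         # slab thickness [m] (illustrative value)
t_r = 1270.0                    # desired exit temperature [deg C]
x = np.linspace(0.0, h, 101)
t_exit = t_r - 68.0 * (x / h - 0.5) ** 2   # made-up profile, coolest at the faces... 
# actually: largest deviation mid-thickness in this toy profile

# J1 = integral over (0, h) of (t_r - t(x, tau_k))^2 dx, cf. (97),
# via the trapezoidal rule
f = (t_r - t_exit) ** 2
J1 = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
```

In the real optimization, t(x, τk) comes from the zone simulation model (88)–(94) rather than a formula, and J1 enters (96) weighted by k1.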

Fig. 8. Values of the optimised criterion during simulation.

5) Optimal control of the heating process in the pushing furnace: The obtained optimal control variables are given as:
• Volume flow of fuel [m3 s−1]: (u1, . . ., u5) = (0.0, 0.35, 0.68, 0.63, 0.11), (u6, . . ., u9) = (0.05, 0.12, 1.22, 1.1)
• Period of pushing [s]: u10 = 530
• Coke, Blast furnace, Natural gas: u11 = 60%, u12 = 40%, 0%

These control variables provide the heating with the minimal value of the optimization criterion. The temperature gradient of the last slab was 13.6 °C, which represents a very good quality of slab reheating. Fig. 9 shows the optimal trajectories of a slab in cross-section. If the control system consists of two levels (stabilization + optimization), the desired values for the stabilization level will be defined by the temperature trajectory for the upper surfaces of the slabs in the furnace.

Fig. 9. Optimal trajectory for slab.

C. Time optimal membrane filtration

The aim is to study the dynamic operation of a batch membrane filtration process [64]. The process model is described by a set of nonlinear ordinary differential equations that are input affine. The objective is to find the time-optimal control trajectory. We apply Pontryagin's minimum principle to formulate the necessary conditions of optimality. We show that the optimality conditions are sufficient to determine the optimal operation analytically for this simple process. A schematic diagram of a discontinuous membrane filtration process is shown in Figure 10.

Fig. 10. Schematic representation of a generalized batch filtration process (feed tank with diluant inflow u(t), membrane module with retentate recycle and permeate outflow q(t)).

Considering a process liqueur with two solutes, the general purpose of a batch plant can be summarized as to increase the macro-solute concentration from c1,0 to c1,f and to reduce the micro-solute concentration from c2,0 to c2,f. The fractionation is accomplished by performing a so-called diafiltration mode in which the micro-solute is washed out of the process liqueur by introducing fresh buffer (i.e. diluant) into the feed reservoir while simultaneously removing the macro-solute-free permeate.

1) Model: The balance of each solute can be written as

dci/dt = (ci q / V)(Ri − α),  ci(0) = ci0,  i = 1, 2  (100)

where V is the retentate volume at time t. The rejection coefficients are R1 = 1 for the macro-solute (it does not pass through the membrane) and R2 = 0 for the micro-solute. We assume the concentration polarization model with

q(c1) = kA ln(cw/c1)  (101)


where k is the mass transfer coefficient, A is the membrane area, and cw is the macro-solute concentration at the membrane wall. As R1 = 1, the volume balance follows from the material balance of the macro-solute

c_1 V = c_{1,0} V_0 \qquad (102)

where V0 represents the initial tank volume.

2) Minimum Time Problem: The objective of this optimisation task is to find the time-dependent function α(t) which uses minimum time to drive the process from the initial state to a prescribed terminal state. The mathematical formulation of this dynamic optimisation problem is as follows

J_1 = \min_{\alpha(t)} t_f \qquad (103a)

subject to

\dot{c}_1 = \frac{c_1 q}{V}(1 - \alpha), \quad c_1(0) = c_{1,0}, \quad c_1(t_f) = c_{1,f} \qquad (103b)
\dot{c}_2 = -\frac{c_2 q}{V}\alpha, \quad c_2(0) = c_{2,0}, \quad c_2(t_f) = c_{2,f} \qquad (103c)
V = \frac{c_{1,0} V_0}{c_1} \qquad (103d)
\alpha \ge 0 \qquad (103e)

3) Solution: The Hamiltonian of this problem is linear in the control variable α. Therefore, its minimum is attained with α on its boundaries (bang-bang control) whenever its derivative with respect to α is non-zero. If it is zero, the singular case occurs and we inspect time derivatives of the Hamiltonian and require them to be zero. In our case these conditions yield α = 1 if c1 = cw/e. The optimal process will then consist of consecutive operational steps of three basic operational modes in a certain order. These operational modes can be technically characterized as concentration mode (α = 0), constant-volume diafiltration mode (α = 1), and pure dilution (α = ∞). The latter case, α = ∞, corresponds to instantaneous addition of diluant. This theoretical result was confirmed by methods of numeric dynamic optimization as mentioned in Section III-B. The optimal minimum-time operation for this type of membrane can be stated as follows:
1) The first (optional) step is either pure dilution (α = ∞) or pure ultrafiltration (α = 0) until the optimal macro-solute concentration c1 = cw/e is obtained.
2) The second step is CVD (α = 1), maintaining the optimal macro-solute concentration. This step finishes when either the final concentration of the micro-solute or the final ratio of macro-solute to micro-solute concentrations is obtained.
3) Finally, the third (optional) step is again either pure dilution (α = ∞) or pure ultrafiltration (α = 0) until the final concentrations of both components are obtained.

D. Two-stage batch reactor control

We consider connected two-stage batch reactors [65], [44], [45]. The first one is filled with a diluted solution of compound A with initial concentration cA(t0) up to volume V1 and some portion of catalyst. The heating coil is a control variable during the first stage. In the first reactor, the chain reaction

A \xrightarrow{k_1} B \xrightarrow{k_2} C \qquad (104)

takes place till an undetermined time tp. At this instant, the dynamics of the process change. The second batch reactor is filled with the products from the first reaction and an amount S of a diluted solution of compound B with concentration csB is added. Three parallel reactions at isothermal conditions take place in the reactor

B \to D, \quad B \to E, \quad 2B \to F \qquad (105)
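The staged dynamics implied by these reaction schemes, together with the rate constants (107) and the mixing relations (108) given below, can be simulated end to end. The following sketch assumes a constant stage-one temperature (the optimal solution varies T in time) and uses the optimal values of S, tp, and tf reported later in the text; it illustrates the model, and does not reproduce the optimal performance index.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Process data from the example; T is an assumed constant (hypothetical),
# whereas the optimal solution uses a time-varying temperature profile.
V1, cA0, csB = 0.1, 2000.0, 600.0          # m^3, mol/m^3, mol/m^3
S, tp, tf = 0.0702, 105.8, 180.0           # reported optimal values
T = 350.0                                  # K, assumed constant

k1 = lambda T: 0.0444 * np.exp(-2500.0 / T)    # Eq. (107a)
k2 = lambda T: 6889.0 * np.exp(-5000.0 / T)    # Eq. (107b)

def stage1(t, c):
    """Chain reaction A -> B -> C, Eq. (106b); D, E, F untouched."""
    cA, cB, cC, cD, cE, cF = c
    return [-2.0 * k1(T) * cA**2,
            k1(T) * cA**2 - k2(T) * cB,
            k2(T) * cB, 0.0, 0.0, 0.0]

def stage2(t, c):
    """Parallel reactions B -> D, B -> E, 2B -> F, Eq. (106c)."""
    cA, cB, cC, cD, cE, cF = c
    return [0.0,
            -0.02 * cB - 0.05 * cB - 0.00008 * cB**2,
            0.0, 0.02 * cB, 0.05 * cB, 0.00004 * cB**2]

# Stage 1 up to the switching time tp
c = solve_ivp(stage1, (0.0, tp), [cA0, 0, 0, 0, 0, 0], rtol=1e-8).y[:, -1].copy()

# Mixing (108): dilution with the added solution of B, V2 = V1 + S
V2 = V1 + S
c[:3] = (V1 * c[:3] + np.array([0.0, S * csB, 0.0])) / V2

# Stage 2 until the final time tf
c = solve_ivp(stage2, (tp, tf), c, rtol=1e-8).y[:, -1]
print("amount of D at tf: %.2f mol" % (V2 * c[3]))
```

With a suitable temperature profile in stage one, this forward simulation is exactly the model evaluation embedded inside the optimization described next.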

The objective is to maximize the amount of compound D, equal to V2 cD, subject to a minimal desired concentration cwD of compound D at the final time tf and subject to the process equations. The decision variables are the reactor temperature T at the first reaction stage, the switching time tp between the two stages, the final time tf, and the amount S. Overall, the optimization problem is given as follows:

cost function:

\max_{S,\, t_p,\, T[0,t_f]} V_2 c_D(t_f) \qquad (106a)

process constraints:

\dot{c}_A = -2 k_1(T) c_A^2, \quad
\dot{c}_B = k_1(T) c_A^2 - k_2(T) c_B, \quad
\dot{c}_C = k_2(T) c_B, \quad
\dot{c}_D = \dot{c}_E = \dot{c}_F = 0, \quad t \in [0, t_p] \qquad (106b)

\dot{c}_A = 0, \quad
\dot{c}_B = -0.02 c_B - 0.05 c_B - 0.00008 c_B^2, \quad
\dot{c}_C = 0, \quad
\dot{c}_D = 0.02 c_B, \quad
\dot{c}_E = 0.05 c_B, \quad
\dot{c}_F = 0.00004 c_B^2, \quad t \in [t_p, t_f] \qquad (106c)

terminal constraint:

c_D(t_f) - c_D^w \ge 0 \qquad (106d)

with kinetic rate constants defined as

k_1(T) = 0.0444\, e^{-2500/T} \qquad (107a)
k_2(T) = 6889\, e^{-5000/T} \qquad (107b)

and mixing operations at the switching time tp

V_2 c_A(t_p^+) = V_1 c_A(t_p^-) \qquad (108a)
V_2 c_B(t_p^+) = V_1 c_B(t_p^-) + S c_B^s \qquad (108b)
V_2 c_C(t_p^+) = V_1 c_C(t_p^-) \qquad (108c)

where V2 = V1 + S; cA(tp−), cB(tp−), cC(tp−) are the output concentrations of compounds A, B, and C in the first stage;


cA(tp+), cB(tp+), cC(tp+) are the initial input concentrations of compounds A, B, and C in the second stage; and S stands for the amount of the compound B solution added at the switching time tp, with fixed concentration csB.

The values of the process parameters are the following: V1 = 0.1 m3, cA(t0) = 2000 mol m−3, cB−F(t0) = 0 mol m−3, csB = 600 mol m−3, cwD = 150 mol m−3. The final time is constrained by tf ≤ 180 min.

The numerical solution of the optimization problem was obtained by the orthogonal collocation method. Both control and state profiles are parametrized, transforming the infinite-dimensional continuous problem into a finite-dimensional NLP. In the first stage, 4 intervals with 10 discretization points are used for the state variables, and 3 intervals with 4, 5, and 3 collocation points, respectively, are used for the control variable. In the second stage, a single interval with 10 collocation points is used for the state variables. No control variable is present here as the process operates at isothermal conditions.

The optimal value of the performance index is J = 25.56 mol and the value of the addition is S = 0.0702 m3. Both constraints are satisfied and active: the final concentration cD of compound D is equal to the desired 150 mol m−3, and the final time tf coincides with the maximum 180 min. The resulting optimal switching time takes the value tp = 105.8 min. The nominal solutions for the control and state variables are shown in Fig. 11.

Fig. 11. Optimal control of the two-stage reactor. Top: optimal concentration profiles of compounds A−C. Middle: optimal concentration profiles of compounds D−F. Bottom: optimal control profile.

E. Alternating activated sludge process

We study a small-size single-basin wastewater treatment plant. The removal of nitrogen (N) requires two biological processes: nitrification and denitrification. The former takes place under aerobic conditions, whereas the latter requires an anoxic environment. For small-size plants, i.e. less than 20,000 p.e. (population equivalent), the two processes are very often carried out in a single basin using surface turbines. The nitrification process (respectively denitrification process) is realized by simply switching the turbines on (respectively off). The process considered consists of a unique aeration tank equipped with mechanical surface aerators (turbines) which provide oxygen and mix the incoming wastewater with biomass (Fig. 12). The settler is a cylindrical tank where the solids are either recirculated to the aeration tank or extracted from the system. We assume daily variations of both influent flowrate and organic load during dry weather conditions, based on measured data from the plant.

The objective is to determine an optimal sequence of aeration/non-aeration times such that, for a typical diurnal pattern of disturbances, the effluent constraints are respected, the plant remains in a periodic steady state, and energy consumption is minimized [66].

1) Process Model: The model we use is based on the Activated Sludge Model No. 1 (ASM1) by [67]. This is the most popular mathematical description of the biochemical processes in the reactors for nitrogen and chemical oxygen demand (COD) removal. The biodegradation model consists of


11 state variables and 20 parameters and was fully described in [66].

Fig. 12. Typical small-size activated sludge treatment plant.

If we assume Nc cycles of on/off operations during one day, the 11 system differential equations can be given as

\frac{dx}{d\tau} = u(\tau)\, f(x, u_b), \quad 0 \le \tau \le 2N_c \qquad (109)

where ub is a binary sequence switching between 1 and 0, and u(τ) is the piece-wise constant sequence of switching times

u(\tau) = u_i = \Delta t_i, \quad i - 1 \le \tau < i, \quad i = 1, 2, \ldots, 2N_c \qquad (110)

2) Definition of Optimal Operation:

a) Cost Function: About 3/4 of the total cost is related to the energy consumption of the aeration turbines [68]. As these operate in on/off mode, minimizing the time of aeration will decrease the operating costs. Therefore, the dimensionless cost function is defined as

\min_u\; J = \frac{1}{T} \sum_{i=1}^{N_c} u_{2i-1} \qquad (111)

where T represents the time interval of one day.

b) Constraints: According to the new European Union regulations on the effluent of wastewater treatment plants, maximum concentrations in terms of chemical oxygen demand (COD), biological oxygen demand (BOD), suspended solids (SS), and total nitrogen (TN) are to be respected. The most critical is the total nitrogen constraint, since the other constraints are usually satisfied during normal operating conditions for this plant:

\mathrm{TN}_{\max} \le 10\ \mathrm{mg/l} \qquad (112)
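The switching-time formulation (109)–(111) with the nitrogen bound (112) can be prototyped with a generic NLP solver. The sketch below replaces the 11-state ASM1 model by a deliberately crude one-state surrogate with assumed removal rates, keeping only the structural ingredients of the problem: interval durations as decision variables, a periodicity equality, and the nitrogen constraint.

```python
import numpy as np
from scipy.optimize import minimize

# One-state surrogate (assumed numbers, NOT ASM1): total nitrogen N with
# load q and linear removal, faster when the turbines run (k_on > k_off).
q, k_on, k_off = 2.0, 0.5, 0.1     # mg/l/h, 1/h, 1/h
Nc, Nmax, T = 3, 10.0, 12.0        # cycles per period, TN limit, period [h]

def propagate(N, dt, k):
    """Exact solution of dN/dt = q - k*N over an interval of length dt."""
    return q / k + (N - q / k) * np.exp(-k * dt)

def endpoints(z):
    """N at the end of each on/off interval; z = [N0, dt_1 .. dt_2Nc].
    N is monotone within each interval, so endpoint checks cover the peaks."""
    N, Ns = z[0], []
    for i, dt in enumerate(z[1:]):
        N = propagate(N, dt, k_on if i % 2 == 0 else k_off)
        Ns.append(N)
    return np.array(Ns)

cost = lambda z: np.sum(z[1::2]) / T   # aeration fraction, cf. Eq. (111)

cons = [
    # periodic steady state and fixed total period T
    {"type": "eq", "fun": lambda z: np.array(
        [z[0] - endpoints(z)[-1], np.sum(z[1:]) - T])},
    # nitrogen limit, cf. Eq. (112)
    {"type": "ineq", "fun": lambda z: Nmax - endpoints(z)},
]
z0 = np.concatenate([[8.0], np.full(2 * Nc, T / (2 * Nc))])
bnds = [(0.0, Nmax)] + [(0.25, 4.0)] * (2 * Nc)   # min/max interval lengths
res = minimize(cost, z0, method="SLSQP", bounds=bnds, constraints=cons)
print("aeration fraction:", res.fun)
```

The duration bounds play the same role as the minimum and maximum aeration times discussed next; in the full problem the solver works on the ASM1 dynamics instead of the closed-form propagation used here.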

In addition, some limitations are imposed on the aeration times to ensure the feasibility of the computed aeration profiles and to prevent damage to the turbines. The minimum air-on and air-off times are set to 15 minutes to avoid too frequent cycling of the turbines and to ensure that the activated sludge after anoxic periods is sufficiently aerated and mixed in the aeration tank. Maximum times of 120 minutes are also imposed to prevent floc sedimentation in the aeration tank as well as anaerobic conditions, which would modify the degradation performance.

In order to find a stationary regime, the initial conditions of the plant are assumed to be unknown and are subject to optimization. The requirement of a stationary regime then dictates that the states at the final optimization time are the same as the initial states. The final time of optimization T has been chosen as one day, as the disturbances are periodic with this frequency. This results in an equality constraint that the sum of all aeration and non-aeration times be equal to T.

3) Simulation Results and Discussion: To solve the problem, we have applied the package DYNO [69] implemented in FORTRAN. It is based on the CVP method. The number of cycles has been set to Nc = 29 and was not a subject of further optimization, as this would lead to mixed-integer dynamic optimization. The optimization converged to the optimal aeration profile with an average aeration rate of 39.51%, shown in Fig. 13. The aeration rate is defined as the ratio of aeration time to one cycle time, i.e. u2i−1/(u2i−1 + u2i). It can be noticed that the total nitrogen hits the maximum constraint.

Fig. 13. Optimal stationary trajectories for J = 39.51%. Top: nitrogen constraint; bottom: aeration policy.

The optimal trajectories of the nitrate and nitrite nitrogen concentration SNO and the dissolved oxygen concentration SO


show only a limited sensitivity towards the disturbances (inlet flowrates and inlet concentrations). More precisely, switching of the turbines in the obtained optimal stationary regime occurs either when the concentration of SNO falls close to zero or when the concentration of SO is sufficiently high. Moreover, these two states can be measured. A simple feedback control strategy can then be proposed from their behavior:
1) Start aeration when SNO decreases sufficiently close to zero.
2) Stop aeration when SO reaches a certain value.
The application of these simple rules is shown in Fig. 14, where the total nitrogen concentration is shown for two cases. The first one, denoted as nominal, applies the rules with the values SNO(min) = 0.01 mg/l and SO(max) = 0.7 mg/l. In the perturbed case, it is supposed that the third day is rainy, with a 300% increase of the influent flow and a 50% decrease of the influent concentrations for the whole day. This simple feedback control strategy handles the usual daily variations of influent flow and total nitrogen load very well and keeps the effluent total nitrogen within the limit during the first five days. Continuation of the simulation for another 200 days (not shown here) to attain stationary operation gives an average aeration rate approximately the same as that of the optimal control (39.60%), with the peak concentration of the total nitrogen only slightly higher (10.74 mg/l).

Fig. 14. Trajectories for rule-based control. Top: nominal and perturbed nitrogen constraint; bottom: nominal aeration policy.

V. CONCLUSIONS

In this survey paper we have presented basic concepts of optimization and optimal control. These are applied to key technological units in process industries to improve their operation. Static optimization is mainly used in process management, where the dynamical properties of process plants can be neglected. It can be thought of as a tool for managers and as an aid to making correct decisions. Dynamic optimization, or optimal control of processes, is inherently tied to unsteady operation, transient changes, disturbance rejection, and batch process operation. Its targets are often minimum-time or minimum-energy operation. Optimal process control does not require additional costs or special investments in technologies. Against the backdrop of growing global energy consumption, optimal process control competes with alternatives, especially investments in new manufacturing equipment, whose payback periods are relatively long. This makes optimal control methods and techniques increasingly attractive, since their payback period is much shorter. The paper contains selected examples chosen to illustrate theoretical properties as well as practical aspects of optimal process operation.


VI. ACKNOWLEDGMENT This work was supported by the Slovak Research and Development Agency under the contract no. APVV-0582-06 and grants VEGA No. 1/0036/12, and 1/0095/11. R EFERENCES [1] T. H. Lee, G. E. Adams, and W. M. Gaines, Computer Process Control: Modeling and Optimisation. John Wiley & Sons, Inc., 1968. [2] D. P. Bertsekas, Nonlinear Programming. Cambridge, MA.: Athena Scientific, 1999. [3] I. B. Vapnyarskii, “Lagrange multipliers,” in Encyclopaedia of Mathematics, Hazewinkel and Michiel, Eds. Springer, 2001. [Online]. Available: http://www.encyclopediaofmath.org/index.php? title=L/l057190 [4] H. J. Baptiste and C. Lemar´echal, Advanced theory and bundle methods. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], ser. 306. Berlin: Springer-Verlag, 1993, vol. 2, ch. Convex analysis and minimization algorithms, pp. 136–193. [5] L. S. Lasdon, Optimization theory for large systems, ser. Macmillan series in operations research. New York: The Macmillan Company, 1970. [6] G. B. Dantzig, “Programming of interdependent activities: II mathematical mode,” Econometrica, vol. 17, no. 3, pp. 200–211, 1949, doi:10.2307/1905523. [7] ——, Linear inequalities and related systems. Princeton University Press, 1956. [8] ——, Linear programming and extensions. Princeton University Press and the RAND Corporation, 1963.


[9] K. Kost´ur, Optimal control I. (Optim´alne riadenie I.). Koˇsice: Technical University, 1984. [10] ——, Optimization of processes (Optimaliz´acia procesov). Koˇsice: Technical University, 1989. [11] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004. [12] Wikipedia, “Mathematical optimization — Wikipedia, the free encyclopedia.” [Online]. Available: http://en.wikipedia.org/wiki/ Mathematical optimization [13] A. Mordecai, Nonlinear Programming: Analysis and Methods. Dover Publishing, 2003. [14] D. G. Luenberger and Y. Ye, Linear and nonlinear programming, 3rd ed., ser. International Series in Operations Research & Management Science. New York: Springer, 2008. [15] R. E. Steuer, Multiple Criteria Optimization: Theory, Computations, and Application. New York: John Wiley & Sons, Inc., 1986. [16] Y. Sawaragi, H. Nakayama, and T. Tanino, Theory of Multiobjective Optimization. Orlando, FL: Academic Press Inc., 1985, vol. 176 of Mathematics in Science and Engineering. [17] C. Fox, An introduction to the calculus of variations, ser. Dover Books on mathematics. Dover Publications, 1987. [18] L. Lebedev and M. Cloud, The calculus of variations and functional analysis: with optimal control and applications in mechanics, ser. Series on stability, vibration and control of systems: Series A. World Scientific, 2003. [19] J. Ferguson, “Brief survey of the history of the calculus of variations and its applications,” 2004. [Online]. Available: http: //arxiv.org/abs/math/0402357v1 [20] I. M. Ross, A Primer on Pontryagin’s Principle in Optimal Control. Collegiate Publishers, 2009. [21] V. M. Becerra, “Optimal control,” Scholarpedia, vol. 3, no. 1, p. 5354, 2008. [Online]. Available: http://dx.doi.org/10.4249/scholarpedia.5354 [22] J. Mikleˇs and M. Fikar, Process Modelling, Identification, and Control. Berlin: Springer, 2007. [23] R. 
Kalman, “A new approach to linear filtering and prediction problems,” Transactions of the ASME, Journal of Basic Engineering, vol. 82, pp. 34–45, 1960. [24] A. Filasov´a, “Robust control design: An optimal control approach,” in Proc. of the IEEE International Conference on Intelligent Engineering Systems INES’99, Star´a Lesn´a, Slovakia, 1999, pp. 515–518. [25] M. Bakoˇsov´a, D. Puna, P. Dost´al, and J. Z´avack´a, “Robust stabilization of a chemical reactor,” Chemical Papers, vol. 63, no. 5, pp. 527–536, 2009. [26] M. Bakoˇsov´a, D. Puna, J. Z´avack´a, and K. Vanekov´a, “Robust static output feedback control of a mixing unit,” in Proceedings of the European Control Conference 2009. Budapest: EUCA, 2009, pp. 4139– 4144. [27] D. Krokavec and A. Filasov´a, Diagnostics of dynamical system (Diagnostika dynamick´ych syst´emov). Koˇsice: Elfa, 2007. [28] ——, “Fault detection based on linear quadratic control performances,” in Proceedings of the 10th International Science and Technology Conference Diagnostics of Processes and Systems DPS’2011, Zamosc, Poland, 2011, pp. 52–56. [29] K. Kost´ur, Optimal control II. (Optim´alne riadenie II.). Koˇsice: Technical University, 1985. [30] L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, The Mathematical Theory of Optimal Processes, 1962. [31] R. E. Bellman, Dynamic Programming. Princeton, NJ: Princeton University Press, 1957. [32] K. Kost´ur, “Optimal control system for group of technological equipments (Syst´em optim´alneho riadenia skupiny agreg´atov),” Automatizace, vol. 30, pp. 199–203, 1987. [33] ——, “Optimization of heating slabs using a simulation model,” in Transaction of Technical University of Kosice. Riecansky science publishing Co, 1991, vol. 1, pp. 199–202. [34] ——, “Optimization of a tunnel furnace,” in Transaction of Technical University of Kosice. Riecansky science publishing Co, 1994, vol. 4, pp. 189–192. 
[35] ——, “Fuel and metal losses cost minimalization during slab heating in the push heating furnace,” Hutnicke listy, vol. 49, pp. 189–192, 1994. [36] ——, “The analysis of optimal production conditions in a pushing furnace,” Metallurgy, vol. 36, pp. 43–47, 1996.

[37] K. Kost´ur and I. Pokorn´y, “Optimization of heating in pushing furnace,” in Proceedings of international conference on automation in control. Koˇsice: House of technology, 1990, pp. 163–167. [38] K. Kost´ur, “The optimization of burners on tunnel furnace,” Metallurgy, vol. 37, pp. 209–214, 1998. [39] ——, “Simulation model and optimisation of tunnel furnace,” in First Joint Conference of International Simulation Societies Proceedings. ETH Zurich, 1994, pp. 230–233. [40] D. Krokavec, “Convergence of action dependent dual heuristic dynamic programming algorithms in LQ control tasks,” in Intelligent Technologies - Theory and Application, New Trends in Intelligent Technologies. Amsterdam: IOS Press, 2002, pp. 72–80. [41] T. F. Edgar and D. M. Himmelblau, Optimization of Chemical Processes. McGraw-Hill, New York, 1988. [42] C. Guntern, A. Keller, and K. Hungerbuhler, “Economic Optimization of an Industrial Semi-batch Reactor Applying Dynamic Programming,” Industrial and Engineering Chemistry Research, vol. 37, no. 10, pp. 4017–4022, 1998. [43] W. Ray, Advanced Process Control. Mc Graw Hill, New York, 1981. [44] V. S. Vassiliadis, R. W. H. Sargent, and C. C. Pantelides, “Solution of a Class of Multistage Dynamic Optimization Problems. 1. Problems without Path Constraints,” Eng. Chem. Res., vol. 33, no. 9, pp. 2111– 2122, 1994. [45] ——, “Solution of a Class of Multistage Dynamic Optimization Problems. 2. Problems with Path Constraints,” Eng. Chem. Res., vol. 33, no. 9, pp. 2123–2133, 1994. [46] E. Sorensen, S. Macchietto, G. Stuart, and S. Skogestad, “Optimal Control and Online Operation of Reactive Batch Distillation,” Comp. Chem. Eng., vol. 20, no. 12, pp. 1491–1498, 1996. [47] I. M. Mujtaba and S. Macchietto, “Efficient Optimization of Batch Distillation with Chemical Reaction using Polynomial Curve Fitting Techniques,” Ind. Eng. Chem. Res., vol. 36, no. 6, pp. 2287–2295, 1997. [48] T. Ishikawa, Y. Natori, L. Liberis, and C. 
Pantelides, “Modeling and Optimization of Industrial Batch Process for the Production of Dioctyl Phthalte,” Comp. Chem. Eng., vol. 21, pp. 1239–1244, 1997. [49] S. Storen and T. Hertzberg, “The Sequential Quadratic Programming Algorithm for Solving Dynamic Optimization Problems – A Review,” Comp. Chem. Eng., vol. 19, pp. 495–500, 1995. [50] V. Dovi and A. Reverberi, “Optimal Solution of Processes Described by System of Differential Algebraic Equations,” Chemical Engineering Science, vol. 48, no. 14, pp. 2609–2614, 1993. [51] L. T. Biegler and R. Hughes, “Process Optimization: A comparative Case Study,” Comp. Chem. Eng., vol. 7, no. 5, pp. 645–661, 1983. [52] K. Lau and D. Ulrichson, “Effects of Local Constraints on the Convergence Behavior of Sequential Modular Simulators,” Comp. Chem. Eng., vol. 15, no. 9, pp. 887–892, 1992. [53] T. Hertzberg and O. Asbjornsen, Parameter Estimation in Nonlinear Differential Equations: Computer Applications in the Analysis of Data and Plants. Science Press, Princeton, 1977. [54] L. Biegler, “Solution of Dynamic Optimization Problems by Successive Quadratic Programming and Orthogonal Collocation,” Comp. Chem. Eng., vol. 8, no. 3-4, pp. 243–248, 1984. [55] J. Renfro, A. Morshedi, and O. Asbjornsen, “Simultaneous optimization and solution of systems described by differential algebraic equations,” Comp. Chem. Eng., vol. 11, no. 5, pp. 503–517, 1987. [56] G. Carey and F. B.A., “Orthogonal Collocation on Finite Elements,” Chemical Engineering Science, vol. 30, pp. 587–1596, 1975. [57] B. Finlayson, Nonlinear Analysis in Chemical Engineering. McGrawHill, New York, 1980. [58] J. E. Cuthrell and L. T. Biegler, “Simultaneous-optimization and Solution Methods for Batch Reactor Control Profiles,” AIChE J., vol. 13, pp. 49– 62, 1989. [59] J. S. Logsdon and L. T. Biegler, “Accurate Solution of Differential Algebraic Optimization Problems,” Ind. Eng. Chem. Res., vol. 18, no. 11, pp. 1628–1639, 1989. [60] J. W. Eaton and J. B. 
Rawlings, “Feedback-control of Chemical Processes using Online Optimization Techniques,” Comp. Chem. Eng., vol. 14, pp. 469–479, 1990. [61] D. Ruppen, C. Benthack, and D. Bonvin, “Optimization of Batch Reactor Operation under Parametric Uncertainty – Computational Aspects,” Journal of Process Control, vol. 5, no. 4, pp. 235–240, 1995. ˇ zniar, M. Fikar, and M. A. Latifi, “Matlab dynamic optimisation [62] M. Ciˇ code dynopt. user’s guide,” KIRP FCHPT STU Bratislava, Slovak Republic, Technical Report, 2005.


ˇ zniar, M. Fikar, E. Balsa-Canto, and J. R. Banga, [63] T. Hirmajer, M. Ciˇ “Brief introduction to DOTcvp – dynamic optimization toolbox,” in Proceedings of the 8th International Scientific - Technical Conference Process Control, Kouty nad Desnou, Czech Republic, 2008. [64] R. Paulen, G. Foley, M. Fikar, Z. Kov´acs, and P. Czermak, “Minimizing the process time for ultrafiltration/diafiltration under gel polarization conditions,” Journal of Membrane Science, vol. 380, no. 1-2, pp. 148– 154, Aug. 2011. [65] T. Hirmajer and M. Fikar, “Optimal Control of a Two-Stage Reactor System,” Chemical Papers, vol. 60, no. 5, pp. 381–387, 2006. [66] M. Fikar, B. Chachuat, and M. A. Latifi, “Optimal operation of alternating activated sludge processes,” Control Engineering Practice, vol. 13, no. 7, pp. 853–861, 2005. [67] M. Henze, C. P. L. Grady, W. Gujer, G. v. R. Marais, and T. Matsuo, “Activated Sludge Model No. 1,” IAWQ, London, Tech. Rep. 1, 1987. [68] J.-L. Vasel, “Contribution a` l’´etude des transferts d’oxyg`ene en gestion des eaux,” Ph.D. dissertation, Fondation Universitaire Luxemourgeoise, Luxembourg, Arlon, 1988. [69] M. Fikar and M. A. Latifi, “User’s guide for FORTRAN dynamic optimisation code DYNO,” LSGC CNRS, Nancy, France; STU Bratislava, Slovak Republic, Tech. Rep. mf0201, 2002.
