Application of Linear Algebra to Control Systems

Application of Linear Algebra to Control Systems

Emmanuel NIYIGABA ([email protected])

10th June 2016

Abstract

Engineers continuously build devices that incorporate different mathematical techniques from Control Theory; their main purpose is to implement the control of systems. To control a system is to influence its behaviour so as to obtain a desired output. Linear algebra can be used to model control systems and to test their controllability and observability. This report presents some of the mathematical approaches that deal with Control Theory, with emphasis on the application of Linear Algebra to the modelling of a Mass-spring system, an Electromagnetic-ball-suspension system and the Cruise control of a car. The resolvent matrix of each of these three systems is calculated; this matrix is then used to compute the transition matrix and the transfer function. We also use the ranks of two special matrices, the controllability matrix and the observability matrix, to check whether the above-mentioned systems are controllable or observable. It is found that the Mass-spring system and the Electromagnetic-ball-suspension system are both controllable and observable, whereas the Cruise control of a car system is controllable but not observable.

Key words: Resolvent matrix, transition matrix, transfer function, controllability, observability.

Declaration

I, the undersigned, hereby declare that the work contained in this research project is my original work, and that any work done by others or by myself previously has been acknowledged and referenced accordingly.

Emmanuel NIYIGABA, 10th June 2016

Contents

Abstract  i

1 Introduction  1
1.1 History and Background of Linear Algebra and Control Theory  1
1.2 General Introduction  1
1.3 Connection between Linear Algebra and Control Theory  2
1.4 Problem Statement and Objectives of the Project  2

2 Some Notions of Linear Algebra and the Laplace Transform  3
2.1 Rank of Matrices and Solvability of System of Linear Equations  3
2.2 Eigen-system and Characteristic Polynomial of a Square Matrix  3
2.3 Similarity and Jordan form of a Square Matrix  4
2.4 Calculation of Matrix Exponential  6
2.5 Some Notions of the Laplace Transforms  9
2.6 Application of Inverse Laplace Transform in Calculation of e^{At}  11
2.7 Application of Cayley-Hamilton Theorem to Calculate e^{At}  12

3 Understanding the Control Systems  14
3.1 State Space of Linear Continuous Time-invariant Systems  15
3.2 State Space and Linearization of non Linear Continuous Time-invariant Systems  16
3.3 Transfer Function and Transition Matrix of Linear Continuous Time-invariant Systems  17
3.4 Solution of Linear Continuous Time-invariant Systems  18
3.5 Controllability and Observability Matrices of Linear Continuous Time-invariant Systems  19
3.6 Controllability and Observability of Linear Continuous Time-invariant Systems  20

4 Applications: Mass-spring System, Electromagnetic Ball Suspension System and Cruise Control of a Car  23
4.1 Mass-spring Analysis  23
4.2 Electromagnetic Ball Suspension System  24
4.3 Cruise Control of a Car  29

5 Conclusion and Future Works  33

References  36

List of Figures

1.1 Control system: A is the system and B is the feedback (Wikipedia, 2016a).  2
3.1 General example of a control system diagram (Dorf, 2004).  14
3.2 Closed loop and open loop control system diagrams.  14
3.3 Damper spring: K is the spring constant, B is the damping coefficient, x(t) is the displacement and M is the mass of the moving object (Google, 2016c).  15
3.4 Pendulum system (Wikipedia, 2016b).  17
3.5 Simulation of solutions for ε1(t) and ε2(t) with different values of u(t).  19
4.1 Simulation for the Mass-spring system.  24
4.2 Electromagnetic ball suspension system.  25
4.3 Simulation for the displacement δx3 and the current δI2 with the negative equilibrium point.  28
4.4 Simulation for the displacements δx3 and the currents δI2 with the positive equilibrium point.  28
4.5 Cruise control of a car system functioning.  29
4.6 Diagrams for the cruise control of a car system.  29
4.7 Simulations of speed for cruise control of a car with θ = 0.069 radians and reference speed vc = 20 m/sec.  32

1. Introduction

1.1 History and Background of Linear Algebra and Control Theory

Linear algebra is a branch of mathematics that links vector spaces and linear mappings. It underlies both applied and pure mathematics, and it finds many applications in domains such as Engineering, Physics and Computer Science. Linear algebra first arose from the study of determinants by Leibniz in 1693. In 1750, Gabriel Cramer developed a method for solving systems of linear equations, and Gauss later developed Gaussian elimination for the same purpose (Vitulli, 2004). In 1844 Hermann Gunter Grassman published his Theory of Extension (Schulz and William, 2011); even though he failed to convince the mathematicians of his time, this book contained the new topics that we call linear algebra today. The term matrix was introduced by James Joseph Sylvester in 1848. Matrix multiplication and inverses were defined by Arthur Cayley while studying compositions of linear transformations, and he also discovered the connection between determinants and matrices (Vitulli, 2004).

The automatic speed control of the steam engine, developed in 1789 by J. Watt, is one of the earliest successful and famous examples of a control system (www lar.deis.unibo.it, 2016). Later, in the 19th century, J.C. Maxwell and I.A. Vyshegradskii described automatic control by means of differential equations (Fernández-Cara et al., 2003). Human beings have continuously attempted to improve the behaviour of dynamical systems by means of control. The analysis of control systems started with the dynamics of Watt's steam engine centrifugal governor, studied by James Clerk Maxwell in 1868. Edward John Routh extended Maxwell's results to a class of linear systems, and differential equations were used by Adolf Hurwitz to analyse the stability of systems in 1877. As a result of the application of control dynamics, powered flight became possible: the first successful flight test was performed by the Wright brothers on 17th December 1903. Stability analysis theory was developed in 1932 by Nyquist and in 1940 by Bode (Fernández-Cara et al., 2003). During the Second World War, control systems grew rapidly in domains such as electronics, ship stabilizers and fire-control systems (Fernández-Cara et al., 2003).

1.2 General Introduction

This work presents some of the mathematical approaches used in control theory. In this project, we emphasize the applications of linear algebra to the modelling of some control systems. Mathematical control theory is a domain of applied mathematics that deals with the design of control systems (Sontag, 2011). It represents systems in terms of mathematical equations, for example algebraic, differential or difference equations (Sontag, 2011), and it analyses the behaviour of dynamic control systems. To control a system is to influence its behaviour so as to obtain a desired output. We can think of a control system as a mathematical function in which the independent variable plays the role of the input and the dependent variable is the output. Engineers continuously build devices that incorporate the different mathematical techniques related to control theory in order to implement the control of systems; these devices go back to Watt's steam engine governor (Fernández-Cara et al., 2003). There exist two types of control systems: open loop control systems and closed loop control systems (Electrical4u). Linear algebra can be used to model some control systems and to simplify their analysis (Xiaoxin Liao, 2008), which is why we apply linear algebra in this project.



We will emphasize systems described by ordinary differential equations. Different examples are used to understand the formulation of a control system; for instance, the mass-spring, the simple pendulum and the RLC circuit are used to explain the concepts of control system models. The project is organized into five chapters. The second chapter covers some notions of Linear Algebra: the rank of matrices, similarity and the Jordan form, the matrix exponential and some applications. The third chapter is devoted to understanding some important elements of control systems. The fourth chapter is reserved for the analysis of the Mass-spring system and the modelling of the Electromagnetic-ball-suspension and Cruise control of a car systems. Finally, the fifth chapter presents the conclusion and future works.

1.3 Connection between Linear Algebra and Control Theory

Transformations such as the Laplace transform, the Fourier transform and the Z-transform have been used to better analyse dynamical control systems. Modelling control systems gave rise to the use of linear algebra concepts. Linear algebra has helped to solve several problems related to control systems in state space representation. The well-known problems are reachability of the states, controllability of the system, observability of the system, state observer problems, feedback stabilization, eigenvalue problems, frequency response and transfer functions, stability, and matrix equation problems such as the Lyapunov, Sylvester and Riccati equations (Datta, 1994) and (K.J.Astrom, 2016).

1.4 Problem Statement and Objectives of the Project

In general, control systems are represented by diagrams, for instance as given in Figure 1.1, and they are often represented and analysed using mathematics. The Mass-spring system, the Cruise control system and the Magnetic-ball-suspension system are examples of control systems that can be analysed using Linear Algebra. The controllability and the observability of dynamic control systems are among the most useful concepts when dealing with control systems. Therefore, the general objective of this project is to use linear algebra concepts to model the Mass-spring system, the Electromagnetic-ball-suspension system and the Cruise control of a car system. Specifically, the controllability and the observability of these control systems are discussed. We also determine the resolvent matrices, the transition matrices and the transfer functions of the above-mentioned systems. Furthermore, the corresponding simulations are given for some chosen specific parameters/inputs.

Figure 1.1: Control system: A is the system and B is the feedback (Wikipedia, 2016a).

2. Some Notions of Linear Algebra and the Laplace Transform

In this chapter, we introduce the following concepts of Linear Algebra: the rank of a matrix, the Cayley-Hamilton theorem, and their applications. In some calculations, we use the Laplace transform and the inverse Laplace transform.

2.1 Rank of Matrices and Solvability of System of Linear Equations

It is of great importance to represent a system of linear equations in matrix form, because matrices help to find the solution of the system. A system of linear equations can be written in matrix form as follows:

AX = b.   (2.1.1)

Here A is an n × m matrix, with n the number of equations and m the number of unknowns. The matrices X and b are column vectors with m and n components respectively (Noble and Daniel, 1988). As explained by the following theorem, a system of linear equations can have either a unique solution, infinitely many solutions or no solution, depending on the ranks of the matrix A and the augmented matrix [A : b], where A and b are given by Equation (2.1.1).

2.1.1 Theorem (Rank and solvability). Suppose the system of linear equations is given by Equation (2.1.1) and consider the augmented matrix [A : b]. Exactly one of the following possibilities must hold:

1. If rank([A : b]) = rank(A) and this common value is strictly less than the number of unknowns, then Equation (2.1.1) has infinitely many solutions.

2. If rank([A : b]) = rank(A) and this common value is equal to the number of unknowns, then Equation (2.1.1) has a unique solution.

3. If rank([A : b]) > rank(A), then Equation (2.1.1) has no solution.
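As an illustrative sketch (not part of the original text), the rank test of Theorem 2.1.1 can be checked numerically, for instance with NumPy; the matrices below are arbitrary examples chosen only to exercise the different cases.

```python
import numpy as np

def classify_system(A, b):
    """Classify AX = b using the rank test of Theorem 2.1.1."""
    A = np.atleast_2d(A)
    aug = np.hstack([A, b.reshape(-1, 1)])        # augmented matrix [A : b]
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(aug)
    m = A.shape[1]                                 # number of unknowns
    if rAb > rA:
        return "no solution"
    return "unique solution" if rA == m else "infinitely many solutions"

A = np.array([[1.0, 2.0], [2.0, 4.0]])
print(classify_system(A, np.array([3.0, 6.0])))   # rank(A) = rank([A:b]) = 1 < 2 -> infinitely many
print(classify_system(A, np.array([3.0, 7.0])))   # rank([A:b]) = 2 > rank(A) = 1 -> no solution
```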

2.2 Eigen-system and Characteristic Polynomial of a Square Matrix

Let A be a square matrix defined over the complex numbers. The relation AX = λX, where X is a column vector and λ is a scalar, gives rise to the concepts of eigenvalues and eigenvectors.

2.2.1 Definition 1. If X is a non-zero vector, then such a scalar λ in the relation above is called an eigenvalue of A corresponding to X.

2.2.2 Definition 2. Such a non-zero vector X in the relation above is called an eigenvector of A corresponding to the eigenvalue λ.

2.2.3 Definition 3. The collection of eigenvalues and eigenvectors is called the eigen-system of A.

2.2.4 Definition 4. The determinant p(λ) = det(A − λI) is called the characteristic polynomial of the matrix A. Here I is the identity matrix with the same dimension as A (Noble and Daniel, 1988).


2.2.5 Remark.

1. The degree of the characteristic polynomial is exactly the order of the matrix A.

2. All eigenvalues of the matrix A are roots of the characteristic polynomial.

3. The geometric multiplicity of an eigenvalue λ is the dimension of the eigenspace E_λ corresponding to that eigenvalue.

4. The algebraic multiplicity of an eigenvalue is the number of times it is repeated as a root of the characteristic polynomial.
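As a small aside (not from the original text), the eigen-system and the characteristic polynomial can be computed numerically with NumPy; the matrix used here is the one that appears later in Example 2.3.4 and is chosen only for illustration.

```python
import numpy as np

A = np.array([[5.0, 4.0, 2.0, 1.0],
              [0.0, 1.0, -1.0, -1.0],
              [-1.0, -1.0, 3.0, 0.0],
              [1.0, 1.0, -1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # the eigen-system of A
# np.poly returns the monic polynomial det(lambda*I - A), which equals
# (-1)^n det(A - lambda*I), i.e. the characteristic polynomial up to sign.
char_poly = np.poly(A)

print(np.round(eigenvalues, 6))        # approximately 1, 2, 4, 4
print(np.round(char_poly, 6))          # coefficients of lambda^4, lambda^3, ...
```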

2.3 Similarity and Jordan form of a Square Matrix

For some square matrices A and B it is possible to find a non-singular matrix P such that P^{-1}AP = B. The map A ↦ P^{-1}AP is called a similarity transformation (Noble and Daniel, 1988).

• A similarity transformation preserves the determinant, the eigenvalues, the characteristic equation and the trace of a matrix.

• If A and B are similar square matrices, then for any non-negative integer k, A^k is similar to B^k. This means that we can find an invertible matrix P such that P^{-1}A^kP = B^k.

• If p is a polynomial of degree n, that is p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n with a_n ≠ 0, and if for any square matrix X we write p(X) = a_0 I + a_1 X + a_2 X^2 + ... + a_n X^n, then p(A) is similar to p(B). This means that we can find an invertible matrix P such that P^{-1}p(A)P = p(B) (Noble and Daniel, 1988).

2.3.1 Remark. It is not difficult to observe that the more zeros a matrix has, however big it may be, the more easily it can be handled. There exist different approaches for decomposing a matrix into a relation involving matrices with many zeros, for instance the QR decomposition and the LU decomposition. With similarity transformations, the first attempt is to diagonalize a square matrix; however, not all matrices are diagonalizable. Another attempt is similarity triangulation. Even though any square matrix is similar to a triangular matrix, this form is not as simple as we could expect. That is why the Jordan form is needed: it increases the number of zeros and hence simplifies matrix calculations through the similarity properties.

2.3.2 Theorem (Jordan Theorem (Noble and Daniel, 1988)). Every square matrix A is similar to a matrix J called its Jordan form. J has the following characteristics:

1. J is an upper triangular matrix.

2. The entries on the leading diagonal are the eigenvalues of A, repeated according to their algebraic multiplicity.

3. Each element on the first super-diagonal of J is one or zero.

4. Each element on the super-diagonals of J other than the first super-diagonal is zero.


2.3.3 Example of a Jordan Form. The matrix

J = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 4 & 1 \\ 0 & 0 & 0 & 4 \end{pmatrix},   (2.3.1)

is in Jordan form. Here 1, 2 and 4 are eigenvalues with algebraic multiplicity 1, 1 and 2 respectively. Theorem 2.3.2 says that for any square matrix A, we can find a non-singular matrix P such that J = P^{-1}AP, with J having the characteristics mentioned above. The matrix J can be written in block matrix form according to the algebraic multiplicity of each eigenvalue:

J = \begin{pmatrix} J_1 & 0 & \cdots & 0 \\ 0 & J_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & J_k \end{pmatrix}.   (2.3.2)

The number of blocks on the main diagonal is the sum of the geometric multiplicities (Noble and Daniel, 1988). For instance the matrix J in Equation (2.3.1) can be written as J = diag(J(1), J(2), J(4)) with J(1) = (1), J(2) = (2) and J(4) = \begin{pmatrix} 4 & 1 \\ 0 & 4 \end{pmatrix}. From the Jordan theorem it is clear that PJ = AP. This relation is very important in finding the matrix P. If we write P = [p_1 : p_2 : p_3 : ... : p_n], with p_i for i = 1, 2, ..., n the column vectors of P, then

[p_1 : p_2 : p_3 : ... : p_n] J = A [p_1 : p_2 : p_3 : ... : p_n].   (2.3.3)

From Equation (2.3.3) we have

A p_1 = λ_r p_1,   A p_i = λ_r p_i + p_{i−1}   for i = 2, 3, ..., n_r,   (2.3.4)

where n_r is the algebraic multiplicity corresponding to the eigenvalue λ_r.

2.3.4 Example: Similarity with Jordan Form. Let

A = \begin{pmatrix} 5 & 4 & 2 & 1 \\ 0 & 1 & −1 & −1 \\ −1 & −1 & 3 & 0 \\ 1 & 1 & −1 & 2 \end{pmatrix}.

The characteristic equation is given by det(A − λI) = 0, which gives (λ − 1)(λ − 2)(λ − 4)^2 = 0. The eigenvalues of A are 1, 2 and 4.

Let us consider the matrix

J = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 4 & 1 \\ 0 & 0 & 0 & 4 \end{pmatrix}.

Using Equation (2.3.3) we have

(A − I)p_1 = 0   (zero column vector),   (2.3.5)

(A − 2I)p_2 = 0   (zero column vector),   (2.3.6)

(A − 4I)p_3 = 0   (zero column vector),   (2.3.7)

(A − 4I)p_4 = p_3.   (2.3.8)

Solving Equations (2.3.5), (2.3.6), (2.3.7) and (2.3.8) we get

p_1 = \begin{pmatrix} −1 \\ 1 \\ 0 \\ 0 \end{pmatrix},   p_2 = \begin{pmatrix} 1 \\ −1 \\ 0 \\ 1 \end{pmatrix},   p_3 = \begin{pmatrix} 1 \\ 0 \\ −1 \\ 1 \end{pmatrix}   and   p_4 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}.

Thus the matrix P = \begin{pmatrix} −1 & 1 & 1 & 1 \\ 1 & −1 & 0 & 0 \\ 0 & 0 & −1 & 0 \\ 0 & 1 & 1 & 0 \end{pmatrix}. It is easy to verify that J = P^{−1}AP.

2.3.5 Remark. Let J be the Jordan form matrix which is similar to a square matrix A; then J is a root of the characteristic polynomial of A. That is, if p(λ) = det(A − λI) then p(J) = O (the zero matrix) (Noble and Daniel, 1988).

2.3.6 Theorem (Cayley-Hamilton theorem (Di Ruscio and i Telemark, 1996)). If p(λ) is the characteristic polynomial of a matrix A, then p(A) = O (the zero matrix).

Proof. The proof becomes simpler if we consider the similarity of any square matrix A to a Jordan form matrix J, and then use the fact that p(J) = O (the zero matrix) (Noble and Daniel, 1988).

Note: The Jordan form matrix is used to calculate matrix functions and powers of matrices. If there exists a matrix P such that J = P^{−1}AP, then the important properties, as described in (Noble and Daniel, 1988), are

A^k = P J^k P^{−1}   and   e^{At} = P e^{Jt} P^{−1},   with k any integer k > 0 and t any scalar.   (2.3.9)

Even though the Jordan form matrix is helpful, it is not yet clear how the matrix exponentials e^{At} and e^{Jt} would be calculated. In the next section we develop how the exponential of a square matrix A can be calculated.
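As a quick check (an addition, not part of the original text), SymPy can recover P and J for the matrix of Example 2.3.4; note that jordan_form may order the blocks differently from a hand computation.

```python
import sympy as sp

A = sp.Matrix([[5, 4, 2, 1],
               [0, 1, -1, -1],
               [-1, -1, 3, 0],
               [1, 1, -1, 2]])

P, J = A.jordan_form()       # returns P and J such that A = P * J * P**(-1)
sp.pprint(J)                 # Jordan blocks for the eigenvalues 1, 2 and 4
assert P.inv() * A * P == J  # exact rational arithmetic, so equality is exact
```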

2.4 Calculation of Matrix Exponential

In this section we develop different ways of calculating the exponential of a square matrix A. We first need some notions of the matrix norm and of the spectral radius of a square matrix; these are discussed in the following two definitions.

2.4.1 Definition 5: Matrix Norm for Square Matrices. For any N × N matrix A, a matrix norm, denoted by ||A||, is a norm satisfying the following properties:

1. ||A|| ≥ 0,

2. ||A|| = 0 if and only if A = O (the zero matrix),

3. ||αA|| = |α| ||A|| for any scalar α,

4. ||A + B|| ≤ ||A|| + ||B||,

5. ||AB|| ≤ ||A|| ||B|| for sub-multiplicative matrix norms.

There are different types of matrix norms, for instance induced (operator) norms and the Frobenius norm. Here we define the matrix norm ||A||_∞, the maximum absolute row sum of the matrix.


It is given by the following equation:

||A||_∞ = max_{1≤i≤N} Σ_{j=1}^{N} |a_{ij}|.   (2.4.1)

2.4.2 Remark. With a matrix norm we always have

|A_{ij}| ≤ ||A||   for 1 ≤ i, j ≤ N,   (2.4.2)

where |A_{ij}| is the modulus of the (i, j) entry of the matrix A.

2.4.3 Definition 6: Spectral Radius. Let A be a square matrix and λ1 , λ2 , λ3 , ...,λn be the eigenvalues of A. Then the spectral radius usually denoted by ρ(A) is given by the following equation: ρ(A) = max(|λ1 |, |λ2 |, |λ3 |, ..., |λn |).

(2.4.3)

The famous relationships between a matrix norm and the spectral radius of a square matrix are given by the following equations:

ρ(A) ≤ ||A^k||^{1/k}   for any positive integer k,   and   ρ(A) = lim_{k→∞} ||A^k||^{1/k}   (Gelfand's Formula, 1941).   (2.4.4)

Note: Equation (2.4.4) holds for any matrix norm.

2.4.4 Matrix Series. It is known that if x is a real number such that |x| < 1, then we have the following result:

1 + x + x^2 + x^3 + ... = 1/(1 − x).   (2.4.5)

It is easy to see that

(1 + x + x^2 + x^3 + ... + x^n)(1 − x) = 1 − x^{n+1}.   (2.4.6)

Taking the limit as n tends to infinity in Equation (2.4.6), we get Equation (2.4.5). We use the same argument to show that if the spectral radius of a square matrix A, defined in Equation (2.4.3), is less than one, then

the series I + A + A^2 + A^3 + ... converges to (I − A)^{−1}.   (2.4.7)

To see this, let us consider the partial sum given below:

S_n = I + A + A^2 + A^3 + ... + A^n.   (2.4.8)

Multiplying both sides of Equation (2.4.8) by (I − A) we obtain

S_n(I − A) = I − A^{n+1}.   (2.4.9)

Using Equation (2.3.9) and Theorem 2.3.2, we can write Equation (2.4.9) as follows:

S_n(I − A) = I − P J^{n+1} P^{−1}.   (2.4.10)

Taking the limit as n tends to infinity on both sides of Equation (2.4.10) we get

S_∞(I − A) = I − P (lim_{n→∞} J^{n+1}) P^{−1}.   (2.4.11)


Since the spectral radius of the matrix A is less than one, we get lim_{n→∞} J^{n+1} = O (the zero matrix). Thus Equation (2.4.11) becomes

S_∞(I − A) = I.   (2.4.12)

Alternatively we also have

(I − A)S_∞ = I.   (2.4.13)

Comparing Equations (2.4.12) and (2.4.13), we get

(I − A)S_∞ = S_∞(I − A) = I.   (2.4.14)

This shows that the inverse of I − A exists and we can find S_∞ as follows:

S_∞ = (I − A)^{−1}.   (2.4.15)

Therefore

I + A + A^2 + A^3 + ... = (I − A)^{−1}.   (2.4.16)

The same argument can be used to show that if s is greater than the spectral radius of the matrix A, then

(sI − A)^{−1} = (1/s)(I + (1/s)A + (1/s^2)A^2 + (1/s^3)A^3 + ...).   (2.4.17)
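A minimal numerical sketch (not from the original text) of Equations (2.4.16) and (2.4.17), using an arbitrary 2 × 2 matrix whose spectral radius is below one:

```python
import numpy as np

A = np.array([[0.2, 0.1],
              [0.3, 0.4]])              # spectral radius 0.5 < 1
assert max(abs(np.linalg.eigvals(A))) < 1

# Partial sums of the Neumann series I + A + A^2 + ...  (Equation (2.4.16)).
S, term = np.eye(2), np.eye(2)
for _ in range(200):
    term = term @ A
    S += term
print(np.allclose(S, np.linalg.inv(np.eye(2) - A)))    # True

# Equation (2.4.17): (sI - A)^{-1} = (1/s)(I - A/s)^{-1} for s above the spectral radius.
s = 2.0
print(np.allclose(np.linalg.inv(s * np.eye(2) - A),
                  np.linalg.inv(np.eye(2) - A / s) / s))   # True
```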

2.4.5 Theorem (Matrix Exponential Theorem).

1. The infinite series e^A = I + (1/1!)A + (1/2!)A^2 + (1/3!)A^3 + ... converges for every square complex matrix A.

2. For all scalars λ and µ, and for all square matrices A, e^{µA} e^{λA} = e^{(λ+µ)A}.

3. For all square matrices A, A e^A = e^A A.

4. d(e^{At})/dt = A e^{At} for all square matrices A and for any scalar t.

5. If J = P^{−1}AP, then e^{At} = P e^{Jt} P^{−1}, where J is the Jordan form matrix; e^A can then be found by substituting t = 1.

6. e^A is non-singular and (e^A)^{−1} = e^{−A}.

7. The exponential of a zero square matrix is equal to the identity matrix.

Proof. We prove only part (1) of the theorem; the remaining parts follow readily once part (1) holds. Let

e^{tA} = I + (t/1!)A + (t^2/2!)A^2 + (t^3/3!)A^3 + ....   (2.4.18)

Assuming A^0 = I, Equation (2.4.18) can be written as follows:

e^{tA} = Σ_{n=0}^{∞} (t^n/n!) A^n.   (2.4.19)


Equation (2.4.19) implies that

(e^{tA})_{ij} = Σ_{n=0}^{∞} (t^n/n!) (A^n)_{ij}   for all 1 ≤ i, j ≤ N,   (2.4.20)

where (e^{tA})_{ij} and (A^n)_{ij} are the entries of the matrices e^{tA} and A^n respectively. We know from the matrix norm properties that ||AB|| ≤ ||A|| ||B||; by induction we can use this to show that ||A^n|| ≤ ||A||^n. Therefore we have the following:

Σ_{n=0}^{∞} (|t|^n/n!) ||A^n|| ≤ Σ_{n=0}^{∞} (|t|^n/n!) ||A||^n.   (2.4.21)

Since ||A|| is a scalar, from calculus we have the following relation:

Σ_{n=0}^{∞} (|t|^n/n!) ||A^n|| ≤ Σ_{n=0}^{∞} (|t|^n/n!) ||A||^n = e^{|t| ||A||} < ∞.   (2.4.22)

On the other hand, from Remark 2.4.2 and Equation (2.4.2), we have |A_{ij}| ≤ ||A||, which implies that |(A^n)_{ij}| ≤ ||A^n||. This shows that

Σ_{n=0}^{∞} (|t|^n/n!) |(A^n)_{ij}| ≤ Σ_{n=0}^{∞} (|t|^n/n!) ||A^n|| ≤ Σ_{n=0}^{∞} (|t|^n/n!) ||A||^n = e^{|t| ||A||} < ∞.   (2.4.23)

Equation (2.4.23) shows that the right hand side of Equation (2.4.20) is absolutely convergent for all 1 ≤ i, j ≤ N. Since absolute convergence implies convergence, we conclude that the right hand side of Equation (2.4.20) converges for all 1 ≤ i, j ≤ N. Thus we conclude that

e^{tA} = I + (t/1!)A + (t^2/2!)A^2 + (t^3/3!)A^3 + ...   converges for any t in a compact interval.   (2.4.24)

If we choose t = 1 we have that

e^A = I + (1/1!)A + (1/2!)A^2 + (1/3!)A^3 + ...   also converges.   (2.4.25)

2.5 Some Notions of the Laplace Transforms

The Laplace transform is an efficient alternative for solving differential equations. Its particular advantage arises from the fact that the Laplace transform can deal with piecewise functions, periodic functions and impulse functions (Google, 2016b).


2.5.1 Definition 7. Let f(t) be a function defined on the interval 0 ≤ t < ∞. The Laplace transform of f(t), denoted by L(f(t)), is given by the following equation:

L(f(t)) = ∫_0^∞ f(t) e^{−st} dt,   for some scalar s.   (2.5.1)

The integral in Equation (2.5.1) is an improper integral; thus for the Laplace transform of a function f(t) to exist, the integral has to converge, and the result is a function of s. The Laplace transform requires s to be large enough and f(t) to belong to a class of functions known as functions of exponential order, that is, lim_{t→∞} f(t)/e^{αt} = 0 for some real number α, or equivalently, |f(t)| < M e^{αt} for some constants M and α (Google, 2016b). In addition, as described in (Google, 2016b), the function f(t) must be piecewise continuous on each finite sub-interval of 0 ≤ t < ∞.

2.5.2 Theorem (Existence of the Laplace transform). (Google, 2016b) Suppose f(t) is piecewise continuous on each finite sub-interval of 0 ≤ t < ∞ and satisfies |f(t)| ≤ M e^{αt} for some constants M and α. Then the Laplace transform of f(t) exists for s > α and lim_{s→∞} L(f(t)) = 0.

Proof. We have to show that ∫_0^∞ f(t)e^{−st} dt is finite for s > α. By the comparison test we need to show that the integrand is absolutely bounded by an integrable function h(t) (tutorial.math.lamar.edu, 2016). Let us choose h(t) = M e^{−(s−α)t}. Then h(t) is always positive and it is integrable on 0 ≤ t < ∞, with ∫_0^∞ h(t) dt = M/(s − α). Using the inequality |f(t)| ≤ M e^{αt} we have |f(t)e^{−st}| ≤ h(t), and hence |L(f(t))| ≤ M/(s − α). Now lim_{s→∞} M/(s − α) = 0, so lim_{s→∞} |L(f(t))| = 0. This shows that lim_{s→∞} L(f(t)) = 0.

The integral in Equation (2.5.1) is improper integral, thus the Laplace transform of a function f (t) to exist, it has to converge and is a function of s. The Laplace transforms require s to be lager and f (t) to belong to a class of functions known as function of exponential order. That is limt→∞ fe(t) αt = 0, for some real number α or equivalently, for some constants M and α, |f (t)| < M eαt (Google, 2016b). In addition, as it is described in (Google, 2016b), the function f (t) must be piece-wise continuous on each finite sub-intervals of 0 ≤ t < ∞. 2.5.2 Theorem (Existence of the Laplace transform). (Google, 2016b) Suppose f (t) is a piece-wise continuous on each finite sub-intervals of 0 ≤ t < ∞ and satisfy |f (t)| ≤ M eαt for some constants M and α. Then Laplace transform of f (t) exists for s > α and lims→∞ L(f (t)) = 0. R∞ Proof. We have to show that 0 f (t)e−st dt is finite for s > α. By comparison text we need to show that the integrand is absolutely bounded by an integrable function h(t) (tutorial.math.lamar.edu, 2016). M Let chose h(t) = e(s−a)t . Then h(t) is always positive and it is integrable on 0 ≤ t < ∞, because R∞ M αt we have |L(f (t))| ≤ M . Now we have s−α 0 h(t)dt = s−α . Using the inequality |f (t)| ≤ M e lims→∞ |L(f (t))| ≤ 0. This shows that lims→∞ L(f (t)) = 0. 2.5.3 The Laplace Transform of Some Known Functions. Let a and ω be real numbers, n be positive integer, f (t), g(t) and y(t) be piece-wise continuous on 0 ≤ t < ∞ functions of exponential order. Then we have the following: L(1) = L(eat ) = L(cos(ωt)) = L(sin(ωt)) = L(tn ) = L(teat ) = L(αf (t) + βg(t)) =

1 , s 1 , s−a s , 2 s + ω2 ω , 2 s + ω2 n! , n+1 s 1 , (s − a)2 αL(f (t)) + βL(g(t)),

(2.5.2)

L(y(t)) ˙ = sL(y(t)) − y(0). 2.5.4 The Inverse Laplace Transform of Some Known Functions. The inverse Laplace transform denoted L−1 is the inverse operator of the Laplace transform. It is calculated using the notion of ”Complex Analysis”, However in this paper we are not going in deep of the inverse Laplace transforms. We show some intuitive ideas on how some of the inverse Laplace transforms can be found. Using the information given in Equation (2.5.2) we can have the following list


Using the information given in Equation (2.5.2) we have the following list of inverse Laplace transforms of some functions of s:

L^{−1}(1/s) = 1,
L^{−1}(1/(s − a)) = e^{at},
L^{−1}(s/(s^2 + ω^2)) = cos(ωt),
L^{−1}(ω/(s^2 + ω^2)) = sin(ωt),
L^{−1}(n!/s^{n+1}) = t^n,
L^{−1}(1/(s − a)^2) = t e^{at},
L^{−1}(αL(f(t)) + βL(g(t))) = αf(t) + βg(t),
L^{−1}(sL(y(t)) − y(0)) = ẏ(t).   (2.5.3)
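A few entries of the tables (2.5.2) and (2.5.3) can be checked symbolically, for instance with SymPy; the sketch below is an addition for illustration only.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

print(sp.laplace_transform(sp.cos(w * t), t, s, noconds=True))      # s/(omega**2 + s**2)
print(sp.laplace_transform(t * sp.exp(a * t), t, s, noconds=True))  # 1/(s - a)**2, valid for s > a
print(sp.inverse_laplace_transform(1 / (s - a) ** 2, s, t))
# -> t*exp(a*t)*Heaviside(t); the Heaviside factor only records that t >= 0.
```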

2.5.5 Remark. Having some notions of the Laplace transform and the inverse Laplace transform we can proceed to apply these concepts to calculate the matrix eAt .

2.6 Application of Inverse Laplace Transform in Calculation of e^{At}

• We have seen that the matrix e^{At} can be calculated by using the formula given by Equation (2.4.24) as follows:

e^{At} = I + (1/1!)At + (1/2!)(At)^2 + (1/3!)(At)^3 + ...   for any scalar t.   (2.6.1)

• Assuming that Equation (2.4.17) holds and taking the inverse Laplace transform L^{−1} on both sides of it, we get

L^{−1}((sI − A)^{−1}) = I + (1/1!)At + (1/2!)(At)^2 + (1/3!)(At)^3 + ....   (2.6.2)

• The comparison of Equations (2.6.1) and (2.6.2) gives

L^{−1}((sI − A)^{−1}) = e^{At}.   (2.6.3)

Thus e^A is found by letting t = 1. Note: Here we have assumed that s is large enough and is not a root of the characteristic polynomial of the matrix A.

2.6.1 Example of Calculating e^A. Let us calculate e^A for

A = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{pmatrix}.   (2.6.4)

We first find sI − A:

sI − A = \begin{pmatrix} s−3 & −1 & 0 \\ 0 & s−3 & 0 \\ 0 & 0 & s−5 \end{pmatrix}.   (2.6.5)

Then we find (sI − A)^{−1}, assuming that s is not a root of the characteristic polynomial of A:

(sI − A)^{−1} = \begin{pmatrix} \frac{1}{s−3} & \frac{1}{(s−3)^2} & 0 \\ 0 & \frac{1}{s−3} & 0 \\ 0 & 0 & \frac{1}{s−5} \end{pmatrix}.   (2.6.6)

We then apply the inverse Laplace transform to Equation (2.6.6) to get

L^{−1}((sI − A)^{−1}) = \begin{pmatrix} e^{3t} & t e^{3t} & 0 \\ 0 & e^{3t} & 0 \\ 0 & 0 & e^{5t} \end{pmatrix}.   (2.6.7)

Finally we let t = 1 in Equation (2.6.7) to find

e^A = \begin{pmatrix} e^3 & e^3 & 0 \\ 0 & e^3 & 0 \\ 0 & 0 & e^5 \end{pmatrix}.

Note: Equation (2.6.7) is obtained by evaluating the inverse Laplace transform of each entry of the matrix (sI − A)^{−1}. Here we have to be careful because (sI − A)^{−1} depends on the choice of s. To avoid this issue, the Cayley-Hamilton theorem can be used to find e^{At} instead.
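As a numerical cross-check (not from the original text), SciPy's matrix exponential reproduces the matrix obtained in Equation (2.6.7):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
t = 1.0

numeric = expm(A * t)                       # SciPy's matrix exponential e^{At}
e3, e5 = np.exp(3 * t), np.exp(5 * t)
closed_form = np.array([[e3, t * e3, 0.0],  # the matrix of Equation (2.6.7)
                        [0.0, e3, 0.0],
                        [0.0, 0.0, e5]])
print(np.allclose(numeric, closed_form))    # True
```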

2.7 Application of Cayley-Hamilton Theorem to Calculate e^{At}

The Cayley-Hamilton theorem is very important for reducing the degree of polynomials of square matrices, and if a given function is analytic on a region of the complex plane, it is very helpful for evaluating that analytic function of a square matrix. Here we are mainly interested in finding the value of e^{At}. It is known from calculus that

e^{λt} = 1 + (1/1!)λt + (1/2!)(λt)^2 + (1/3!)(λt)^3 + ...,   where λ is any parameter and t any scalar.   (2.7.1)

Equation (2.7.1) means

e^{λt} = lim_{n→∞} Σ_{i=0}^{n} (1/i!)(λt)^i.   (2.7.2)

Assuming A^0 = I, from Equation (2.6.1) and Theorem 2.4.5 we have, for any square matrix A,

e^{At} = lim_{n→∞} Σ_{i=0}^{n} (1/i!)(At)^i.   (2.7.3)

Equations (2.7.3) and (2.7.2) can be written respectively as

e^{At} = lim_{n→∞} Σ_{i=0}^{n} θ_i A^i,   where θ_i for i = 0, 1, 2, ... are functions of t,   (2.7.4)

and

e^{λt} = lim_{n→∞} Σ_{i=0}^{n} θ_i λ^i,   where θ_i for i = 0, 1, 2, ... are functions of t.   (2.7.5)

On the other hand, dividing by the characteristic polynomial of the matrix A and using the long division algorithm, the expression Σ_{i=0}^{n} θ_i λ^i in Equation (2.7.5) can be written as follows:

Σ_{i=0}^{n} θ_i λ^i = p(λ) Σ_{i=0}^{n} α_i λ^i + R(λ).   (2.7.6)

In Equation (2.7.6), p(λ) is the characteristic polynomial, of degree m, R(λ) is the remainder of the long division by p(λ), which has degree at most m − 1, and the α_i are functions of t. Taking the limit as n tends to infinity on both sides of Equation (2.7.6) we get

e^{λt} = p(λ) Σ_{i=0}^{∞} α_i λ^i + R(λ).   (2.7.7)

Using the Cayley-Hamilton theorem, we can evaluate e^{At} by replacing λ by A in Equation (2.7.7) to obtain

e^{At} = R(A).   (2.7.8)

Here R(λ) can be computed as follows:

R(λ) = Σ_{i=0}^{m−1} β_i λ^i.   (2.7.9)

The coefficients β_i are found from Equation (2.7.7) by replacing λ by each eigenvalue of the matrix A. That is,

e^{λ_j t} = Σ_{i=0}^{m−1} β_i λ_j^i,   for j = 1, 2, 3, ..., m, where the λ_j are the eigenvalues of A.   (2.7.10)

Note: In the case of repeated eigenvalues we differentiate both sides of Equation (2.7.10) with respect to λ (after replacing λ_j by λ) and then evaluate as usual.

2.7.1 Example of Calculating e^{At} Using the Cayley-Hamilton Theorem. Consider the matrix A of Example 2.6.1, given in Equation (2.6.4). The eigenvalues of A are 3, 3 and 5; here m = 3, so R(λ) = β_0 + β_1 λ + β_2 λ^2. Substituting the eigenvalues of this matrix in Equation (2.7.10), and differentiating once for the repeated eigenvalue 3, we get the following:

e^{3t} = β_0 + 3β_1 + 9β_2,   t e^{3t} = β_1 + 6β_2   and   e^{5t} = β_0 + 5β_1 + 25β_2.   (2.7.11)

Upon solving System (2.7.11) for β_0, β_1 and β_2 in terms of t we get the following:

β_0 = (5/4)(−6t − 1)e^{3t} + (9/4)e^{5t},   β_1 = (1/2)(8t + 3)e^{3t} − (3/2)e^{5t}   and   β_2 = (1/4)(−2t − 1)e^{3t} + (1/4)e^{5t}.

Thus we find e^{At} = R(A) = β_0 I + β_1 A + β_2 A^2, which yields the required result

e^{At} = \begin{pmatrix} e^{3t} & t e^{3t} & 0 \\ 0 & e^{3t} & 0 \\ 0 & 0 & e^{5t} \end{pmatrix}.
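A small numerical sketch (an addition, not part of the original text) of the same Cayley-Hamilton procedure: solve System (2.7.11) for the β coefficients at a fixed t and compare R(A) with SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
t = 0.7   # an arbitrary test value

# Rows of System (2.7.11): e^{3t} = b0+3b1+9b2, t e^{3t} = b1+6b2, e^{5t} = b0+5b1+25b2.
M = np.array([[1.0, 3.0, 9.0],
              [0.0, 1.0, 6.0],
              [1.0, 5.0, 25.0]])
rhs = np.array([np.exp(3 * t), t * np.exp(3 * t), np.exp(5 * t)])
b0, b1, b2 = np.linalg.solve(M, rhs)

eAt = b0 * np.eye(3) + b1 * A + b2 * (A @ A)   # R(A) as in Equation (2.7.8)
print(np.allclose(eAt, expm(A * t)))            # True
```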

3. Understanding the Control Systems

Today's technology is dominated by the use of control systems, mechanisms that alter the future behaviour of a system. A system is a set of interconnected functional components designed to perform a specific physical task; see Figure 3.1. The "trial and error" technique was originally used to monitor the behaviour of a system, that is, to obtain the desired output from an appropriate input. This method is not efficient, because obtaining the required output can take a long time; to solve this problem, Control Theory was developed. It is a branch of Mathematics which deals with strategies for selecting the appropriate input for the desired output. As explained in (Wikipedia, 2016a) and (Astrom and Murray, 2008), there are two common classes of control systems: open loop control systems and closed loop control systems, see Figure 3.2. In open loop control systems the output is generated based only on the inputs. In closed loop control systems the current output is taken into consideration and corrections are made based on feedback (Wikipedia, 2016a). Mathematical models are used to understand control theory.

Figure 3.1: General example of a control system diagram (Dorf, 2004).

Figure 3.2: Closed loop and open loop control system diagrams. (a) Closed loop control system (Electrical4u). (b) Open loop control system (Dorf, 2004).

Mathematical skills and concepts such as Linear Algebra, Differential Equations and the Laplace transform are used to predict the future behaviour of control systems. In this chapter we discuss the use of linear algebra to interpret different models in terms of controllability and observability. We consider only time-invariant systems, and we use the mass-spring and the simple pendulum as examples to explain the above concepts of control systems.

3.1 State Space of Linear Continuous Time-invariant Systems

It is known that a control system is made of three major components: the input of the system, the system (or plant) and the output of the system (Astrom and Murray, 2008). If, for example, the process is represented by a differential equation of high order, it is possible to represent it by an equivalent system of first order differential equations. As described in (Dorf, 1998), it is convenient to represent the model of the control system as follows:

Ẋ(t) = AX(t) + Bu(t),   with initial condition X(t_0) = X_0,   (3.1.1)

Y(t) = CX(t) + Du(t).   (3.1.2)

Here X(t) ∈ R^n is the vector of state variables, A ∈ R^{n×n} is a square matrix with constant coefficients, u(t) ∈ R^r is the input vector of the system, Y(t) ∈ R^p is the output of the system, C ∈ R^{p×n}, B ∈ R^{n×r} and D ∈ R^{p×r}. System (3.1.1) is the general representation of a time-invariant continuous control system. To identify the components of Equation (3.1.1), let us use the example of a mass-spring with driving force F, as shown in Figure 3.3.

Figure 3.3: Damper spring: K is the spring constant, B is the damping coefficient, x(t) is the displacement and M is the mass of the moving object (Google, 2016c).

Using Hooke's law and Newton's law of motion it is found that this model is given by Equation (3.1.3) below:

ẍ + γẋ + ω^2 x = f.   (3.1.3)

In Equation (3.1.3), γ = B/M, ω^2 = K/M and f = F/M, where M is the mass of the object in motion, B is the damping coefficient and K is the spring constant. The main task here is to write the state space variables of Equation (3.1.3). Let x = x_1 and ẋ = x_2; then

ẋ_1 = x_2,   ẋ_2 = −γx_2 − ω^2 x_1 + f,   x = x_1.

This system can be written in matrix form as follows:

\begin{pmatrix} ẋ_1 \\ ẋ_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ −ω^2 & −γ \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} f,   (3.1.4)

x = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + 0·f.   (3.1.5)

From the comparison of Equations (3.1.4) and (3.1.1), and of Equations (3.1.5) and (3.1.2), we can see that

A = \begin{pmatrix} 0 & 1 \\ −ω^2 & −γ \end{pmatrix},   B = \begin{pmatrix} 0 \\ 1 \end{pmatrix},   C = \begin{pmatrix} 1 & 0 \end{pmatrix},   u(t) = f,   X(t) = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},   and D = 0.

The output of the system here is x, the displacement from the equilibrium position, and the input of the system is the driving force per unit mass f. One can control this system by changing the driving force f to obtain the desired displacement. Later we will discuss the controllability and observability of this system.
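As an illustrative sketch (not from the original text), the state space model (3.1.4)-(3.1.5) can be built and simulated with SciPy; the damping and frequency values below are assumptions chosen only for the example.

```python
import numpy as np
from scipy.signal import StateSpace, lsim

gamma, omega = 0.5, 2.0                 # illustrative damping coefficient and natural frequency
A = np.array([[0.0, 1.0],
              [-omega**2, -gamma]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

sys = StateSpace(A, B, C, D)            # the model (3.1.4)-(3.1.5)
t = np.linspace(0.0, 20.0, 2001)
f = 2.0 * np.cos(2.0 * t)               # driving force per unit mass
_, y, _ = lsim(sys, U=f, T=t, X0=[0.0, 0.0])
print(y[:5])                            # displacement x(t) at the first few time steps
```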

3.2 State Space and Linearization of non Linear Continuous Time-invariant Systems

Most systems representing real-world problems are non-linear. Unfortunately, analyzing non-linear systems is somewhat tedious. However, linearization techniques can provide information around some special points of the system, called equilibrium points. In this section we discuss the concept of linearization. The pendulum in Figure 3.4 is an example of a non-linear system. As described in (Bevivino, 2009), the model representing this system is given directly by Newton's law as follows:

θ̈(t) + (g/L) sin(θ(t)) = f(t).   (3.2.1)

Here the state variables are θ = x_1 and θ̇ = x_2, and the input is f(t), the force per unit mass per unit length. It is not difficult to observe that the state variables are related by the following system:

ẋ_1 = x_2,   ẋ_2 = −(g/L) sin(x_1) + f(t),   θ = x_1.   (3.2.2)

System (3.2.2) is non-linear because of the sine term.

3.2.1 Example: Linearization of a Non-linear Control System. Consider the model given by the following equation with input u(t):

ẍ − 2x − x^2 ẋ − xu = −8.   (3.2.3)

If we let x_1 = x and x_2 = dx/dt, it is easy to see that the state variables are related as follows:

ẋ_1 = x_2,   ẋ_2 = 2x_1 + x_1^2 x_2 + x_1 u − 8.   (3.2.4)

System (3.2.4) is still non-linear because of the products of variables, so we need to linearize it around its equilibrium point. Let the equilibrium point be given by (x_{1e}, x_{2e}, u_e). It is easy to find that the stationary points are given by x_{1e} = 8/(2 + u_e), x_{2e} = 0 and u_e any scalar. Since u_e cannot be −2, for simplicity we can choose u_e = 0. This gives the equilibrium point (4, 0, 0). Evaluating the Jacobian matrix J_p at this equilibrium point we get

J_p = \begin{pmatrix} 0 & 1 & 0 \\ 2 & 16 & 4 \end{pmatrix}.   (3.2.5)


Figure 3.4: Pendulum system (Wikipedia, 2016b). (a) Description of the system. (b) Acting force.

Let ε_1 = x_1 − 4, ε_2 = x_2 − 0 and ε_u = u − 0; then System (3.2.4) is transformed into the following system:

\begin{pmatrix} ε̇_1 \\ ε̇_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 2 & 16 \end{pmatrix} \begin{pmatrix} ε_1 \\ ε_2 \end{pmatrix} + \begin{pmatrix} 0 \\ 4 \end{pmatrix} ε_u,   (3.2.6)

x = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} ε_1 \\ ε_2 \end{pmatrix} + 0·ε_u.

The linearized model around this equilibrium point, in state space representation, is given by System (3.2.6) above.

3.2.2 Remark. The next goal is to find the solution of Equation (3.1.1). For this we need the concept of the transition matrix, which is very important in finding this solution. This concept, together with the transfer function, is the purpose of the following section.
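A short symbolic sketch (an addition, not part of the original text) of the linearization in Example 3.2.1, using SymPy to find the equilibrium and the Jacobian (3.2.5):

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
f = sp.Matrix([x2, 2*x1 + x1**2*x2 + x1*u - 8])     # right-hand side of System (3.2.4)

# Equilibrium with u_e = 0: solve f = 0 for (x1, x2).
eq = sp.solve([f[0].subs(u, 0), f[1].subs(u, 0)], [x1, x2], dict=True)[0]   # {x1: 4, x2: 0}

# Jacobian with respect to (x1, x2, u), evaluated at (4, 0, 0): Equation (3.2.5).
Jp = f.jacobian([x1, x2, u]).subs({x1: eq[x1], x2: eq[x2], u: 0})
sp.pprint(Jp)        # Matrix([[0, 1, 0], [2, 16, 4]])
```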

3.3 Transfer Function and Transition Matrix of Linear Continuous Time-invariant Systems

From Equations (3.1.1) and (3.1.2), we have seen that the general equations representing the state space of a dynamical control system are given by the following:

Ẋ(t) = AX(t) + Bu(t),   with initial condition X(t_0) = X_0,

Y(t) = CX(t) + Du(t).

Here u(t) is the input of the system and Y(t) is the output of the system.

3.3.1 Definition 8: Transfer Function.


The transfer function H(s) of the linear time-invariant control system given by Equations (3.1.1) and (3.1.2) is the ratio of the Laplace transform of the output, Y(s) = L(Y(t)), to the Laplace transform of the corresponding input, u(s) = L(u(t)), with all initial conditions assumed to be zero (Boubiche et al., 2016). This means that the transfer function is given by

H(s) = Y(s)/u(s).   (3.3.1)

If u(t) is a single input, then Equation (3.3.1) can be calculated explicitly as follows:

H(s) = Cϕ(s)B + D,   where ϕ(s) = (sI − A)^{−1} and s is large enough and not an eigenvalue of A.   (3.3.2)

Equation (3.3.2) is obtained by applying the Laplace transform to both sides of Equations (3.1.1) and (3.1.2) and then rearranging to solve for Y(s)/u(s). The expression ϕ(s) = (sI − A)^{−1} is called the resolvent matrix of the matrix A.

3.3.2 Definition 9: State Transition Matrix. The matrix ϕ(t) = L^{−1}((sI − A)^{−1}) is called the state transition matrix. It has been shown that

ϕ(t) = L^{−1}((sI − A)^{−1}) = e^{At}.   (3.3.3)

This matrix transfers (propagates) the initial condition/initial state into the state at time t; that is why it is called the state transition matrix.

3.3.3 Example of Finding the Transition Matrix and Transfer Function. Consider the linearized model in Equation (3.2.6) of Example 3.2.1. The matrices A, B, C and D are

A = \begin{pmatrix} 0 & 1 \\ 2 & 16 \end{pmatrix},   B = \begin{pmatrix} 0 \\ 4 \end{pmatrix},   C = \begin{pmatrix} 1 & 0 \end{pmatrix}   and   D = 0,

respectively. The resolvent matrix ϕ(s) = (sI − A)^{−1} is given below:

(sI − A)^{−1} = \begin{pmatrix} \frac{s−16}{s^2−16s−2} & \frac{1}{s^2−16s−2} \\ \frac{2}{s^2−16s−2} & \frac{s}{s^2−16s−2} \end{pmatrix}.

The state transition matrix ϕ(t) = L^{−1}((sI − A)^{−1}) is given by

ϕ(t) = e^{8t} \begin{pmatrix} \cosh(\sqrt{66}\,t) − \frac{4\sqrt{66}}{33}\sinh(\sqrt{66}\,t) & \frac{\sqrt{66}}{66}\sinh(\sqrt{66}\,t) \\ \frac{\sqrt{66}}{33}\sinh(\sqrt{66}\,t) & \cosh(\sqrt{66}\,t) + \frac{4\sqrt{66}}{33}\sinh(\sqrt{66}\,t) \end{pmatrix}.

The transfer function is H(s) = Cϕ(s)B + D = 4/(s^2 − 16s − 2).
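The transfer function of Example 3.3.3 can also be obtained numerically, for instance with SciPy's state-space to transfer-function conversion; the sketch below is an addition for illustration only.

```python
import numpy as np
from scipy.signal import ss2tf

A = np.array([[0.0, 1.0], [2.0, 16.0]])
B = np.array([[0.0], [4.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)   # H(s) = C (sI - A)^{-1} B + D
print(num)                      # approximately [[0, 0, 4]]      -> numerator 4
print(den)                      # approximately [1, -16, -2]     -> denominator s^2 - 16s - 2
```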

3.4 Solution of Linear Continuous Time-invariant Systems

In this section, the matrix exponential (transition matrix) is used to find the solution of the system in Equation (3.1.1). The solution of Equation (3.1.1) is given by the following theorem.

3.4.1 Theorem (Solution of the state equation (Di Ruscio and i Telemark, 1996)). The solution of the system in Equation (3.1.1) is given by the following equation:

X(t) = ϕ(t − t_0)X_0 + ∫_{t_0}^{t} ϕ(t − τ)Bu(τ) dτ,   with ϕ(t) = L^{−1}((sI − A)^{−1}).   (3.4.1)

Proof. To show this we can proceed as follows: we rewrite Equation (3.1.1) as

Ẋ(t) − AX(t) = Bu(t),   with initial condition X(t_0) = X_0.   (3.4.2)

Multiply both sides of Equation (3.4.2) by e^{−At} and then use part (4) of the Matrix Exponential theorem. Thereafter integrate both sides from t_0 to t (Noble and Daniel, 1988), and then replace e^{At} by ϕ(t). This process yields the required result given by Equation (3.4.1).

3.4.2 Remark. If t_0 = 0, then Equation (3.4.1) becomes

X(t) = ϕ(t)(X_0 + ∫_0^{t} ϕ(−τ)Bu(τ) dτ),   with ϕ(t) = L^{−1}((sI − A)^{−1}).

3.4.3 Example of Finding a Solution. Let us consider again the linearized model in Equation (3.2.6) of Example 3.2.1. If we choose u = 5 and the initial condition to be (0 0)^T, we get the following:

ε_1(t) = \frac{5\sqrt{66}}{66}\left[(\sqrt{66} − 8)e^{(8+\sqrt{66})t} + (\sqrt{66} + 8)e^{(8−\sqrt{66})t} − 2\sqrt{66}\right],

ε_2(t) = \frac{5\sqrt{66}}{33}\left[e^{(8+\sqrt{66})t} − e^{(8−\sqrt{66})t}\right].

The simulations obtained for different values of u are given in Figure 3.5.

Figure 3.5: Simulation of solutions for ε_1(t) and ε_2(t) with different values of u(t).

3.4.4 Remark. Figure 3.5 shows that by changing the input u(t) we obtain different forms of the output ε_1(t). The solutions have been computed using sagemath code.
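A minimal numerical sketch (not from the original text) of the solution in Example 3.4.3 for a constant input, using the closed form X(t) = A^{-1}(e^{At} − I)Bu that follows from Equation (3.4.1) with X_0 = 0:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [2.0, 16.0]])
B = np.array([0.0, 4.0])
u = 5.0                                           # constant input, as in Example 3.4.3

def eps(t):
    """eps(t) = A^{-1} (e^{At} - I) B u, the solution with eps(0) = 0."""
    return np.linalg.solve(A, (expm(A * t) - np.eye(2)) @ B) * u

for t in (0.0, 0.1, 0.2):
    print(t, eps(t))    # values of eps_1(t) and eps_2(t)
```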

3.5 Controllability and Observability Matrices of Linear Continuous Time-invariant Systems

In this section we introduce two kinds of important matrices in control systems. One of these matrices helps us to see whether a given dynamic control system is controllable or not; it is called the controllability matrix. The other helps us to know whether a given state of a dynamic control system is observable or not; it is called the observability matrix. Considering Equations (3.1.1) and (3.1.2), then, as defined in (Astrom and Murray, 2008), the controllability and observability matrices are respectively given by the


following equations:

C = [B : AB : A^2B : ... : A^{n−1}B],   (3.5.1)

O = \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n−1} \end{pmatrix}.   (3.5.2)

Here the matrices A, B and C are given by Equations (3.1.1) and (3.1.2), and n is the number of state variables. Note: The size of the observability matrix depends on the size of the output Y(t) and the matrix A; if Y(t) has p components and A is n × n, then O is np × n. The size of the controllability matrix depends on the size of the input u(t) and the matrix A; if u(t) has r components and A is n × n, then C is n × nr.

3.6 Controllability and Observability of Linear Continuous Time-invariant Systems

In order to be able to do whatever we want with a given dynamic system under a control input, the system must be controllable. Also, in order to be able to see what is going on inside the system under observation, the system must be observable (Google, 2016a). In this section, the ranks of the controllability and observability matrices are used to test the controllability and observability of control systems.

3.6.1 Controllability of Linear Continuous Time-invariant Systems. The linear continuous time-invariant control system given by Equations (3.1.1) and (3.1.2) is said to be controllable if it is possible to transfer the initial state X(t_0) = X_0 to any other state X(t) = X_f in finite time t, that is, 0 < t < ∞. The main problem to answer here is to show how we can find a control input that will transfer our system from any initial state to any final state. To answer this problem we use Equations (3.4.1) and (3.3.3) and the Cayley-Hamilton theorem. After some manipulation, Equations (3.4.1) and (3.3.3) yield the following:

e^{−At}X_f − e^{−At_0}X_0 = ∫_{t_0}^{t} e^{−Aτ}Bu(τ) dτ.   (3.6.1)

The Cayley-Hamilton theorem gives

e^{−Aτ} = Σ_{i=0}^{n−1} α_i(τ)A^i.   (3.6.2)

Here the α_i, for i = 0, 1, 2, 3, ..., n−1, are scalar time functions. If we substitute Equation (3.6.2) into Equation (3.6.1), we have

e^{−At}X_f − e^{−At_0}X_0 = Σ_{i=0}^{n−1} A^iB ∫_{t_0}^{t} α_i(τ)u(τ) dτ.   (3.6.3)


Equation (3.6.3) can be written as follows:

e^{−At}X_f − e^{−At_0}X_0 = [B : AB : A^2B : ... : A^{n−1}B] \begin{pmatrix} ∫_{t_0}^{t} α_0(τ)u(τ)dτ \\ ∫_{t_0}^{t} α_1(τ)u(τ)dτ \\ ∫_{t_0}^{t} α_2(τ)u(τ)dτ \\ \vdots \\ ∫_{t_0}^{t} α_{n−1}(τ)u(τ)dτ \end{pmatrix}.   (3.6.4)

Using Theorem 2.1.1, Equation (3.6.4) has a solution if and only if the controllability matrix C = [B : AB : A^2B : ... : A^{n−1}B] has rank(C) = n. Note: The left-hand side of Equation (3.6.4) is known, and on its right-hand side the functions α_i(τ), for i = 0, 1, 2, ..., n − 1, are determined by using the Cayley-Hamilton theorem and Equation (2.7.10). The only unknown to be determined is the input u(τ) with τ ∈ (t_0, t).

3.6.2 Observability of Linear Continuous Time-invariant Systems. The linear continuous time-invariant control system in Equation (3.1.1) is said to be observable if it is possible to learn everything about the dynamical behaviour of the state space variables of the system by using only information from the output measurement Y(t). It is clear that if the initial state X(t_0) = X_0 is known, Equation (3.4.1) gives us complete knowledge of the state variables of the system at any time t. For simplicity let us consider Equation (3.1.1) as an input-free system, that is, u(t) = 0. Following the above argument, it is convenient to find X_0 from the available measurements Y(t). Let us proceed as follows: taking derivatives of the measurement at t = t_0 we get the following pattern:

Y(t_0) = CX_0,   Ẏ(t_0) = CAX_0,   Ÿ(t_0) = CA^2X_0,   ...,   Y^{(n−1)}(t_0) = CA^{n−1}X_0.   (3.6.5)

Equation (3.6.5) can be written as

\begin{pmatrix} Y(t_0) \\ Ẏ(t_0) \\ Ÿ(t_0) \\ \vdots \\ Y^{(n−1)}(t_0) \end{pmatrix} = \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n−1} \end{pmatrix} X_0,   where   O = \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n−1} \end{pmatrix}   is the observability matrix.   (3.6.6)

Using Theorem 2.1.1, Equation (3.6.6) has a solution if and only if the observability matrix O has full rank n. These concepts of controllability and observability lead us to the following theorems, introduced by R. Kalman in 1960.

3.6.3 Theorem (R. Kalman's Criterion for controllability). The linear continuous time-invariant system is controllable if and only if the controllability matrix C has full rank, that is, rank(C) = n.

3.6.4 Theorem (R. Kalman's Criterion for observability). The linear continuous time-invariant system is observable if and only if the observability matrix O has full rank, that is, rank(O) = n.

Proof. The proofs of the above theorems can be found in the literature, for example (Davis et al., 2009).


3.6.5 Example of Testing Controllability and Observability. Consider the linearized model of Example 3.2.1. We have the matrices A = \begin{pmatrix} 0 & 1 \\ 2 & 16 \end{pmatrix}, B = \begin{pmatrix} 0 \\ 4 \end{pmatrix} and C = \begin{pmatrix} 1 & 0 \end{pmatrix}. The controllability matrix is C = [B : AB] = \begin{pmatrix} 0 & 4 \\ 4 & 64 \end{pmatrix}. The observability matrix is O = \begin{pmatrix} C \\ CA \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. Since the determinants of the matrices C and O are different from zero, we conclude that rank(C) = 2 and rank(O) = 2; hence the system is both controllable and observable.
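A short numerical version of this rank test (an addition, not part of the original text), building the matrices of Equations (3.5.1)-(3.5.2) with NumPy:

```python
import numpy as np

A = np.array([[0.0, 1.0], [2.0, 16.0]])
B = np.array([[0.0], [4.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])   # [B : AB]
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])   # [C ; CA]

print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(obsv))   # 2 2 -> controllable and observable
```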

4. Applications: Mass-spring System, Electromagnetic Ball Suspension System and Cruise Control of a Car

In this chapter we discuss applications of Linear Algebra to control systems. The Mass-spring, the Electromagnetic-ball-suspension and the Cruise control of a car systems are discussed separately. We put more emphasis on the modelling of their corresponding equations, and we determine the controllability and the observability, the transfer function and the state transition matrix for each case. Simulations for some particular values of the parameters are also given.

4.1 Mass-spring Analysis

We recall that the state space form of the Mass-spring system was given by Equations (3.1.4) and (3.1.5) in Chapter 3, where we identified the following matrices:

A = \begin{pmatrix} 0 & 1 \\ −ω^2 & −γ \end{pmatrix},   B = \begin{pmatrix} 0 \\ 1 \end{pmatrix},   C = \begin{pmatrix} 1 & 0 \end{pmatrix}   and   D = 0.

Here we check the controllability and the observability of the Mass-spring system. We also give simulations for different values of a parameter ξ, to be defined later.

• The controllability and the observability matrices are given respectively by the following:

C = [B : AB] = \begin{pmatrix} 0 & 1 \\ 1 & −γ \end{pmatrix}   and   O = \begin{pmatrix} C \\ CA \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

Using Theorems 3.6.3 and 3.6.4 we can deduce the following: since the ranks of the matrices C and O are both 2, the Mass-spring system is both observable and controllable.

• The resolvent matrix and the transfer function are given respectively by the following equations:

ϕ(s) = \begin{pmatrix} \frac{γ+s}{ω^2+γs+s^2} & \frac{1}{ω^2+γs+s^2} \\ \frac{−ω^2}{ω^2+γs+s^2} & \frac{s}{ω^2+γs+s^2} \end{pmatrix},   (4.1.1)

H(s) = \frac{1}{ω^2 + γs + s^2}.   (4.1.2)

The denominator of Equation (4.1.2) can have two distinct real roots, two complex conjugate roots or a double root, depending on the value of D defined by the following equation:

D = \sqrt{γ^2 − 4ω^2}.   (4.1.3)

Equation (4.1.3) can be written as

D = γ\sqrt{1 − ξ^2},   where ξ = 2ω/γ.   (4.1.4)

If we choose f to be the periodic force per unit mass f = 2cos(2t), ω = 2 rad/sec and the initial condition to be (0 0)^T, simulations of the displacement x(t) for different values of ξ are given in Figure 4.1; they are obtained using sagemath.
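A hedged sketch (not from the original text) of how such a simulation could be reproduced in Python for one value of ξ, under the assumption that γ is recovered from ξ via Equation (4.1.4), i.e. γ = 2ω/ξ:

```python
import numpy as np
from scipy.integrate import odeint

omega, xi = 2.0, 4.0 / 5.0          # natural frequency and the parameter xi
gamma = 2.0 * omega / xi             # from xi = 2*omega/gamma

def rhs(x, t):
    # state x = (displacement, velocity), forcing f = 2 cos(2t)
    return [x[1], -gamma * x[1] - omega**2 * x[0] + 2.0 * np.cos(2.0 * t)]

t = np.linspace(0.0, 30.0, 3001)
sol = odeint(rhs, [0.0, 0.0], t)
print(sol[-1])                       # displacement and velocity at the final time
```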

Figure 4.1: Simulation for the Mass-spring system. (a) Plot of the solution for ξ = 4 > 1. (b) ξ = 4000 > 1 but very large. (c) 0 < ξ < 1, particular value ξ = 4/5.

4.1.1 Remark. From Figure 4.1, it is observed that the amplitudes of the solutions increase with respect to t. When ξ = 4 and ξ = 4/5 the amplitudes of the solutions appear to stabilize at a certain value, but they stabilize more quickly for ξ > 1 than for 0 < ξ < 1. In the case ξ = 4000, which is very large, the solution approaches the resonance case, that is, when the frequency of the solution is relatively close to the natural frequency ω.

4.2 Electromagnetic Ball Suspension System

In this section we discuss the electromagnetic-ball-suspension system. It is a mechanism consisting of an electromagnet and a steel ball of mass m, as shown in Figure 4.2. The system functions by regulating the current in the electromagnet such that the steel ball of mass m is suspended at a fixed distance, y, from the end of the electromagnet. Our aim is to build the model representing the electromagnetic-ball-suspension system and to study its controllability and observability by calculating the ranks of the controllability and observability matrices. We also calculate the state transition matrix and the transfer function.

4.2.1 Electromagnetic Ball Suspension System Model. From Figure 4.2 we have the following variables and parameters:

• I = I(t): the input current (Ampere) through the resistor R_1 (Ohm),

• v = v(t): the input voltage (Volt),


Figure 4.2: Electromagnetic ball suspension system.

• I_2 = I_2(t): the current (Ampere) through the resistor R_2 (Ohm) and the inductance L (Henry),

• I_1 = I_1(t): the current (Ampere) through the capacitor C (Farad),

• y = y(t): the ball position (Meter) and

• g: the gravitational acceleration (9.81 m/s^2).

Using Kirchhoff's voltage and current laws and Newton's laws of motion we have the following equations:

v = R_1 I + L (dI_2/dt) + R_2 I_2,   (4.2.1)

v = R_1 I + q/C,   where q is the charge through the capacitor C,   (4.2.2)

I = I_1 + I_2,   (4.2.3)

m (d^2y/dt^2) = −mg + F.   (4.2.4)

Comparing Equations (4.2.1) and (4.2.2), we get the following:

q = LC (dI_2/dt) + R_2 C I_2.   (4.2.5)

Differentiating both sides of Equation (4.2.5) with respect to time t we get the following equation:

dq/dt = LC (d^2I_2/dt^2) + R_2 C (dI_2/dt).   (4.2.6)

Since dq/dt = I_1, if we substitute Equation (4.2.3) into Equation (4.2.6) we get the following:

d^2I_2/dt^2 + (R_2/L)(dI_2/dt) + I_2/(LC) = I/(LC).   (4.2.7)

Page 26

On the other hand, as it is described in (Venkatesh and Balamurugan), it was shown that the force F I2 is proportional to y22 . Thus Equation (4.2.4) can be written as follows: d2 y dt2

= −g +

α I22 m y2

where α is a proportional coefficient.

(4.2.8)

Hence the model representing the electromagnetic -ball- suspension system, is given by Equations (4.2.7) and (4.2.8) as it is given by the following system.  2 I2 I 2  d I22 + RL2 dI dt + LC = LC , dt (4.2.9)  d2 y = −g + α I22 where α is a proportional coefficient. m y2 dt2 4.2.2 State Space Variables and Equations of Electromagnetic Ball Suspension Model. dy(t) 2 Let x1 (t) = I2 , x2 (t) = dI dt , x3 (t) = y(t) and x4 = dt , then we have the state equations represented by the following system.  dx 1  = x2 ,  dt    x1 I  2  dx = − R2Lx2 − CL + CL , dt (4.2.10) dx3  = x4 ,  dt    2   dx4 = −g + αx12 . dt mx 3

System (4.2.10) is non-linear, thus we need to linearize it around its equilibrium points.

4.2.3 Linearized Model of the Electromagnetic Ball Suspension System. We first find the equilibrium points. Let (x_{1e}, x_{2e}, x_{3e}, x_{4e}, I_e) be an equilibrium point of System (4.2.10); then the equilibrium points are given by the following equations:
\[
x_{1e} = -\sqrt{\frac{gm}{\alpha}}\,x_{3e}, \quad x_{2e} = 0, \quad x_{3e} = x_{3e}, \quad x_{4e} = 0 \quad \text{and} \quad I_e = -\frac{\sqrt{mg}\,x_{3e}}{\sqrt{\alpha}}, \tag{4.2.11}
\]
\[
x_{1e} = \sqrt{\frac{gm}{\alpha}}\,x_{3e}, \quad x_{2e} = 0, \quad x_{3e} = x_{3e}, \quad x_{4e} = 0 \quad \text{and} \quad I_e = \frac{\sqrt{mg}\,x_{3e}}{\sqrt{\alpha}}. \tag{4.2.12}
\]
The equilibrium points depend on the position of the steel ball x_{3e}, which has to be chosen as a fixed value y_0. Let us first consider the negative equilibrium given by Equation (4.2.11). The Jacobian matrix J_p with respect to the states and the input I, evaluated at this equilibrium point, is given by the following equation:
\[
J_p =
\begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\[1mm]
-\dfrac{1}{CL} & -\dfrac{R_2}{L} & 0 & 0 & \dfrac{1}{CL} \\[1mm]
0 & 0 & 0 & 1 & 0 \\[1mm]
-\dfrac{2\sqrt{\alpha g m}}{m y_0} & 0 & -\dfrac{2g}{y_0} & 0 & 0
\end{pmatrix}. \tag{4.2.13}
\]
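The linearization above can be reproduced symbolically. The SymPy sketch below is our own illustration (not the SageMath/Octave computation of the thesis); it builds the right-hand side of System (4.2.10), takes its Jacobian with respect to the states and the input, and evaluates it at the negative equilibrium (4.2.11) and at the numerical values used in Subsection 4.2.4.

import sympy as sp

x1, x2, x3, x4, I = sp.symbols('x1 x2 x3 x4 I')
R2, L, C, m, g, alpha, y0 = sp.symbols('R2 L C m g alpha y0', positive=True)

f = sp.Matrix([x2,
               -R2*x2/L - x1/(C*L) + I/(C*L),
               x4,
               -g + alpha*x1**2/(m*x3**2)])

Jp = f.jacobian([x1, x2, x3, x4, I])                          # as in Equation (4.2.13)

eq_neg = {x1: -sp.sqrt(g*m/alpha)*y0, x2: 0, x3: y0, x4: 0}   # negative equilibrium (4.2.11)
Jp_eq = sp.simplify(Jp.subs(eq_neg))
print(Jp_eq)

vals = {m: 1, alpha: 1, y0: sp.Rational(1, 2), g: 9.81, R2: 1, L: 1, C: 1}
print(Jp_eq.subs(vals).evalf(4))   # entry (4,1) is about -12.53 and entry (4,3) is -39.24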


If we set δx_1 = x_1 − x_{1e}, δx_2 = x_2 − 0, δx_3 = x_3 − x_{3e}, δx_4 = x_4 − 0 and δI = I − I_e, the linearized model in state space form is given by the following system:
\[
\begin{pmatrix} \dot{\delta x_1} \\ \dot{\delta x_2} \\ \dot{\delta x_3} \\ \dot{\delta x_4} \end{pmatrix}
=
\begin{pmatrix}
0 & 1 & 0 & 0 \\
-\frac{1}{LC} & -\frac{R_2}{L} & 0 & 0 \\
0 & 0 & 0 & 1 \\
-\frac{2\sqrt{\alpha m g}}{m y_0} & 0 & -\frac{2g}{y_0} & 0
\end{pmatrix}
\begin{pmatrix} \delta x_1 \\ \delta x_2 \\ \delta x_3 \\ \delta x_4 \end{pmatrix}
+
\begin{pmatrix} 0 \\ \frac{1}{LC} \\ 0 \\ 0 \end{pmatrix}\delta I,
\tag{4.2.14}
\]
\[
\delta x_3 = \begin{pmatrix} 0 & 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} \delta x_1 \\ \delta x_2 \\ \delta x_3 \\ \delta x_4 \end{pmatrix} + 0\,\delta I.
\]
Here δI is the input of the system and δx_3 is the output of the system.

4.2.4 Analysis of the Linearized Model for the Electromagnetic Ball Suspension System. For simplicity let us take m = 1 kg, α = 1 N m²/A², y_0 = 0.5 m, g = 9.81 m/s², R_2 = 1 Ω, L = 1 H and C = 1 Farad. Then the matrix A is given by the following equation:
\[
A =
\begin{pmatrix}
0 & 1 & 0 & 0 \\
-1 & -1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
-12.53 & 0 & -39.24 & 0
\end{pmatrix}. \tag{4.2.15}
\]
The resolvent matrix φ(s) and the transfer function H(s) are given respectively by Equations (4.2.16) and (4.2.17) below, where for brevity D(s) = s^4 + s^3 + 40.24 s^2 + 39.24 s + 39.24:
\[
\varphi(s) =
\begin{pmatrix}
\frac{s+1}{s^2+s+1} & \frac{1}{s^2+s+1} & 0 & 0 \\[1mm]
-\frac{1}{s^2+s+1} & \frac{s}{s^2+s+1} & 0 & 0 \\[1mm]
-\frac{12.53\,s + 12.53}{D(s)} & -\frac{12.53}{D(s)} & \frac{s}{s^2+39.24} & \frac{1}{s^2+39.24} \\[1mm]
-\frac{12.53\,s^2 + 12.53\,s}{D(s)} & -\frac{12.53\,s}{D(s)} & -\frac{39.24}{s^2+39.24} & \frac{s}{s^2+39.24}
\end{pmatrix},
\tag{4.2.16}
\]
\[
H(s) = -\frac{12.53}{s^4 + s^3 + 40.24\,s^2 + 39.24\,s + 39.24}. \tag{4.2.17}
\]
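As a cross-check of Equations (4.2.16) and (4.2.17), the resolvent matrix and the transfer function can be recomputed symbolically. The sketch below is an illustrative SymPy computation with the numerical matrix A of Equation (4.2.15); the thesis itself used SageMath and Octave.

import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[0, 1, 0, 0],
               [-1, -1, 0, 0],
               [0, 0, 0, 1],
               [-12.53, 0, -39.24, 0]])
B = sp.Matrix([0, 1, 0, 0])          # input matrix for delta I (1/(LC) = 1 here)
Cout = sp.Matrix([[0, 0, 1, 0]])     # output is delta x_3

phi = (s*sp.eye(4) - A).inv()                 # resolvent matrix phi(s)
H = sp.simplify((Cout*phi*B)[0, 0])           # transfer function H(s) = C phi(s) B
print(H)   # approximately -12.53/(s**4 + s**3 + 40.24*s**2 + 39.24*s + 39.24)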

The controllability and observability matrices are given by the matrices \(\mathcal{C}\) and \(\mathcal{O}\) respectively as follows:
\[
\mathcal{C} =
\begin{pmatrix}
0 & 1 & -1 & 0 \\
1 & -1 & 0 & 1 \\
0 & 0 & 0 & -12.53 \\
0 & 0 & -12.53 & 12.53
\end{pmatrix},
\qquad
\mathcal{O} =
\begin{pmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
-12.53 & 0 & -39.24 & 0 \\
0 & -12.53 & 0 & -39.24
\end{pmatrix}.
\]
Since the ranks of the matrices \(\mathcal{C}\) and \(\mathcal{O}\) are both 4, by the controllability and observability criteria discussed in Chapter 3 we conclude that the electromagnetic ball suspension system is both controllable and observable. This means that we can manipulate the current δI to place the ball at the desired position. Also, given the output measurements, it is possible to learn everything about the dynamics of the state variables of the electromagnetic ball suspension system. The simulations of the positions δx_3(t) and the currents δI_2(t) are given in Figures 4.3 and 4.4. They have been calculated by taking the initial state δx_0 = (0 0 0 0)^T and choosing different values of δI, namely δI = 1 A, δI = 1.5 A, δI = 2.5 A and δI = 0.5 cos(t) A. The desired output position y_0 = 0.5 m is obtained by choosing δI = 1.5 A; see Figures 4.3b and 4.4b.
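The rank test itself is a short computation. The following NumPy sketch (an assumed illustration, not the original code) builds the controllability matrix [B  AB  A^2B  A^3B] and the observability matrix [C; CA; CA^2; CA^3] and confirms that both have rank 4.

import numpy as np

A = np.array([[0, 1, 0, 0],
              [-1, -1, 0, 0],
              [0, 0, 0, 1],
              [-12.53, 0, -39.24, 0]])
B = np.array([[0.0], [1.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0, 0.0]])

ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(4)])

# Both ranks equal 4, so the linearized system is controllable and observable.
print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(obsv))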


Note:
1. In the calculations above we have rounded to two decimal places. We have also used SageMath and Octave for these calculations.
2. The state transition matrix is too large to present here, but we know that it is given by \(\varphi(t) = \mathcal{L}^{-1}\{(sI - A)^{-1}\} = \mathcal{L}^{-1}\{\varphi(s)\}\).
3. Even though the linearized electromagnetic ball suspension system is controllable and observable, as shown in sub-Figures 4.3b and 4.4b the positions of the steel ball oscillate around the fixed output position with respect to time. Thus other techniques, for instance non-linear feedback linearization (Henson and Seborg, 1997), are required to address this instability problem. That is not the objective of this work, though it could be reserved for future work.
4. Around the positive equilibrium given by Equation (4.2.12), the electromagnetic ball suspension system is also controllable and observable. The simulations of the ball positions δx_3 and the currents δI_2 around this equilibrium are given in Figure 4.4.

(a) The current δI2 (t).

(b) Displacement δx3 (t).

Figure 4.3: Simulation of the displacement δx3 and the current δI2 at the negative equilibrium point.

(a) The current δI2 (t).

(b) Displacement δx3 (t).

Figure 4.4: Simulation of the displacements δx3 and the currents δI2 at the positive equilibrium point.


4.3 Cruise Control of a Car

The cruise control system of a car is a closed loop control system. Its main purpose is to maintain a constant speed in the presence of disturbances due to changes in road slope and aerodynamic drag. "The controller compensates for these disturbances by measuring the speed of the car and adjusting the throttle appropriately" (Åström and Murray, 2010). To model the system we consider v, the speed of the car, and v_c, the desired speed. The controller receives the signals v and v_c and generates a control signal u which is sent to an actuator that controls the throttle position (Astrom and Murray, 2008). The throttle in turn controls the torque T delivered by the engine, which is transmitted through the gears and the wheels, generating the force F that moves the car (Astrom and Murray, 2008). The block diagram in Figure 4.5 illustrates this functioning.

Figure 4.5: Cruise control of a car system functioning.

(a) Action of friction (ctms.engin.umich.edu, 2016). (b) Action of gravity on the cruise control system (Astrom and Murray, 2008).

Figure 4.6: Diagrams for cruise control of a car system.


4.3.1 Cruise Control of a Car Model. The equation of motion is obtained from Newton's law as follows:
\[
m\frac{dv}{dt} = F - F_d. \tag{4.3.1}
\]
Here
1. m is the total mass of the car and passengers,
2. F is the force generated by the gears and the wheels, which moves the car; see Figure 4.6b,
3. F_d is the disturbance force caused by gravity on the incline of the road, by friction and by aerodynamic drag; see Figure 4.6.

For simplicity of the model we neglect the aerodynamic drag. Thus the force F_d is given by the following equation:
\[
F_d = bv + mg\sin(\theta). \tag{4.3.2}
\]
In Equation (4.3.2), bv is the disturbance force due to friction and mg sin(θ) represents the disturbance force caused by gravity on the incline of the road. Here b is a proportionality coefficient, v is the actual speed of the car and θ is the slope of the road. In (Astrom and Murray, 2008) it is shown that the force F is proportional to the torque T and is given by the following equation:
\[
F = u\,\alpha_n\, T(\alpha_n v). \tag{4.3.3}
\]
Here u is the controller input and α_n is the gear ratio divided by the radius r of the wheel. T is the torque, which depends on the engine speed ω = α_n v and is given by the following equation:
\[
T(\alpha_n v) = T_m\left(1 - \beta\left(\frac{\alpha_n v}{\omega_m} - 1\right)^2\right), \tag{4.3.4}
\]
where T_m is the maximum torque, obtained at the maximum engine speed ω_m. Combining Equations (4.3.4) and (4.3.3) we get:
\[
F = u\,\alpha_n T_m\left(1 - \beta\left(\frac{\alpha_n v}{\omega_m} - 1\right)^2\right). \tag{4.3.5}
\]
Therefore the complete model is obtained by combining Equations (4.3.5), (4.3.2) and (4.3.1) and setting v = dx/dt, with x the position of the car. It is given by the following equation:
\[
m\frac{d^2x}{dt^2} = u\,\alpha_n T_m\left(1 - \beta\left(\frac{\alpha_n \frac{dx}{dt}}{\omega_m} - 1\right)^2\right) - b\frac{dx}{dt} - mg\sin(\theta). \tag{4.3.6}
\]

4.3.2 Remark. As described in (Astrom and Murray, 2008), we have the following typical values: T_m = 190 N m, 0 ≤ u ≤ 1, β = 0.4, α_1 = 40, α_2 = 25 and ω_m = 420 rad/sec. A small numerical sketch of the force in Equation (4.3.5) with these values is given below.
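For concreteness, the force law (4.3.5) with these typical values can be evaluated directly. The short Python sketch below is our own illustration (the function and variable names are not from the thesis); it simply codes Equations (4.3.4) and (4.3.5).

T_M, BETA, OMEGA_M = 190.0, 0.4, 420.0   # N m, dimensionless, rad/sec

def torque(omega):
    """Engine torque T(omega) of Equation (4.3.4)."""
    return T_M * (1.0 - BETA * (omega / OMEGA_M - 1.0) ** 2)

def driving_force(u, v, alpha_n=40.0):
    """Driving force F = u * alpha_n * T(alpha_n * v) of Equation (4.3.5); alpha_n = 40 corresponds to alpha_1 in Remark 4.3.2."""
    return u * alpha_n * torque(alpha_n * v)

# Example: full throttle (u = 1) at v = 5 m/sec.
print(driving_force(1.0, 5.0))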


4.3.3 State Space Variables and Equations for the Cruise Control of a Car Model. From Equation (4.3.6), if we let x_1 = x and x_2 = dx/dt, then the state space model is given by the following system:
\[
\begin{cases}
\dfrac{dx_1}{dt} = x_2, \\[1mm]
\dfrac{dx_2}{dt} = \left(1 - \beta\left(\dfrac{\alpha_1 x_2}{\omega_m} - 1\right)^2\right)\dfrac{T_m \alpha_1 u}{m} - g\sin(\theta) - \dfrac{b x_2}{m}.
\end{cases} \tag{4.3.7}
\]
Equation (4.3.7) is a non-linear system of ordinary differential equations, so we need to linearize it around its equilibrium point.

4.3.4 Linearized Model of the Cruise Control of a Car. For simplicity let us take the parameters T_m = 190 N m, m = 1000 kg and b = 98.1 N sec/m. If we let (x_{1e}, x_{2e}, u_e, θ_0) be an equilibrium point of System (4.3.7) with α_n = α_1, we find that it is given as follows:
\[
x_{1e} = x_{1e}, \quad x_{2e} = 0, \quad u_e = -\frac{980\sin(\theta_0)}{19(\alpha_1\beta - \alpha_1)} \quad \text{and} \quad \theta_0 = \theta_0.
\]
It remains to find the Jacobian matrix J_p evaluated at this equilibrium point. Taking θ_0 = 0.069 radians (equivalently 4°), α_1 = 40 and β = 0.4, the Jacobian matrix with respect to (x_1, x_2, u, θ) is given by the following:
\[
J_p =
\begin{pmatrix}
0 & 1 & 0 & 0 \\
0 & -0.0124 & 4.56 & -9.776
\end{pmatrix}. \tag{4.3.8}
\]
Let δx_1 = x_1 − x_{1e}, δx_2 = x_2 − x_{2e}, δu = u − u_e and δθ = θ − θ_0. Then the linearized model of the cruise control of a car is given by the following equations:
\[
\begin{pmatrix} \dot{\delta x_1} \\ \dot{\delta x_2} \end{pmatrix}
=
\begin{pmatrix} 0 & 1 \\ 0 & -0.0124 \end{pmatrix}
\begin{pmatrix} \delta x_1 \\ \delta x_2 \end{pmatrix}
+
\begin{pmatrix} 0 \\ 4.56 \end{pmatrix}\delta u
+
\begin{pmatrix} 0 \\ -9.776 \end{pmatrix}\delta\theta, \tag{4.3.9}
\]
\[
v = \begin{pmatrix} 0 & 1 \end{pmatrix}\begin{pmatrix} \delta x_1 \\ \delta x_2 \end{pmatrix} + 0\,\delta u. \tag{4.3.10}
\]
In Equation (4.3.9), δx_1 and δx_2 are the state variables of the linearized model, δu is the input of the linearized model and δθ is the disturbance of the system. Equation (4.3.10) represents the output of the linearized model.

4.3.5 Analysis of the Linearized Model of the Cruise Control of a Car. From Equations (4.3.9) and (4.3.10), the matrices A, B, C and D defined in Equation (3.1.1) are given by the following:
\[
A = \begin{pmatrix} 0 & 1 \\ 0 & -0.0124 \end{pmatrix}, \quad
B = \begin{pmatrix} 0 \\ 4.56 \end{pmatrix}, \quad
D = 0 \quad \text{and} \quad
C = \begin{pmatrix} 0 & 1 \end{pmatrix}.
\]
• The controllability and observability matrices \(\mathcal{C}\) and \(\mathcal{O}\) are given by the following:
\[
\mathcal{C} = [B : AB] = \begin{pmatrix} 0 & 4.56 \\ 4.56 & -0.05654 \end{pmatrix}
\quad \text{and} \quad
\mathcal{O} = \begin{pmatrix} C \\ CA \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & -0.0124 \end{pmatrix}.
\]
• The resolvent matrix φ(s) is given by the following equation:
\[
\varphi(s) = (sI - A)^{-1} =
\begin{pmatrix}
\dfrac{1}{s} & \dfrac{1}{s^2 + 0.0124\,s} \\[1mm]
0 & \dfrac{1}{s + 0.0124}
\end{pmatrix}.
\]


• The transfer function of this system is given by the following equation:
\[
H(s) = C\varphi(s)B = \frac{4.56}{s + 0.0124}.
\]
• The state transition matrix φ(t) of the system is given by the following equation:
\[
\varphi(t) = e^{At} =
\begin{pmatrix}
1 & \dfrac{2500}{31}\left(1 - e^{-\frac{31}{2500}t}\right) \\[1mm]
0 & e^{-\frac{31}{2500}t}
\end{pmatrix}.
\]
A short numerical check of these two expressions is sketched below.
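The sketch below is illustrative only (not the thesis code): it compares the matrix exponential of A with the transition matrix above and recomputes H(s) = C(sI - A)^{-1}B symbolically.

import numpy as np
import sympy as sp
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, -0.0124]])
t = 10.0
# expm(A*t) should match [[1, (1 - exp(-0.0124 t))/0.0124], [0, exp(-0.0124 t)]].
print(expm(A * t))

s = sp.symbols('s')
As = sp.Matrix([[0, 1], [0, sp.Rational(-31, 2500)]])
Bs = sp.Matrix([0, sp.Rational(456, 100)])
Cs = sp.Matrix([[0, 1]])
H = sp.simplify((Cs * (s*sp.eye(2) - As).inv() * Bs)[0, 0])
print(H)   # equivalent to 4.56/(s + 0.0124)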

(a) Speed of the car when the reference speed is v_c = 20 m/sec. (b) Speed v(t) for different alternatives (trial and error) used to reach the reference speed v_c = 20 m/sec.

Figure 4.7: Simulations of the speed for the cruise control of a car with θ = 0.069 radians and reference speed v_c = 20 m/sec.

Here the rank of the observability matrix, rank(\(\mathcal{O}\)), is 1 and the rank of the controllability matrix, rank(\(\mathcal{C}\)), is 2. We conclude that the system is controllable, but the states of the system are not observable. This means that we can manipulate the input u, regardless of the disturbances, to obtain the desired output speed v_c, as shown in Figure 4.7b. But given any position and velocity at any time t, it is not possible to determine the initial state from which the system was transferred. The simulation of the speed of the car with a reference speed of 20 m/s is given in Figure 4.7a. It was obtained by writing SageMath code and choosing u = 0.195 and θ = 0.069 radians.

4.3.6 Remark. Figures 4.7a and 4.7b show that the input u can be changed to obtain the desired output v_c.
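The rank conclusion above can be reproduced with a few lines of NumPy/SciPy, together with a speed response of the linearized model. This is a sketch under our own assumptions (the thesis used SageMath, and the throttle deviation delta_u = 0.047 below is a value chosen purely for illustration, not a quantity taken from the thesis).

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, -0.0124]])
B = np.array([[0.0], [4.56]])        # input (throttle deviation delta u)
Bd = np.array([[0.0], [-9.776]])     # disturbance (slope deviation delta theta)
C = np.array([[0.0, 1.0]])

ctrb = np.hstack([B, A @ B])
obsv = np.vstack([C, C @ A])
# Prints 2 and 1: the system is controllable but not observable.
print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(obsv))

def rhs(t, x, delta_u=0.047, delta_theta=0.0):
    """Linearized dynamics (4.3.9) in deviation variables."""
    return A @ x + B[:, 0] * delta_u + Bd[:, 0] * delta_theta

sol = solve_ivp(rhs, (0.0, 600.0), [0.0, 0.0], max_step=1.0)
# The steady-state speed deviation approaches 4.56*delta_u/0.0124 (about 17 m/sec here).
print(sol.y[1, -1])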

5. Conclusion and Future Works

The use of Linear Algebra in modelling some control systems and checking their controllability and observability was the main purpose of this work. In the first chapter, we introduced the historical background of Linear Algebra and Control Theory and the connection between them. Linear Algebra is widely used in Control Theory to answer several of its problems, such as controllability, observability, stability, frequency-domain analysis and the transfer function.

In the second chapter, we discussed some notions of Linear Algebra such as the rank of matrices, the eigen-system, the characteristic polynomial and similarity of matrices. All these concepts build towards the calculation of a special matrix called the state transition matrix, e^{At}. This matrix is very important in Control Theory because it helps to obtain the solution of a system represented by differential equations in state space form.

In the third chapter, we gave some concepts of control systems by discussing the components of a control system. In general, the components of a control system are the input, the system (plant) and the output. We have seen that there exist mainly two types of control systems: open loop control systems and closed loop control systems. Furthermore, we discussed the concepts of controllability, observability, the transfer function, the resolvent matrix and the state transition matrix as the very basic elements of Control Theory. The ranks of the controllability and observability matrices are very important to detect the controllability and observability of a control system. The state transition matrix and the transfer function are important in describing how the initial state of the system is transferred into any other state at time t.

Finally, in the fourth chapter, we discussed the application of Linear Algebra to the modelling of three control system examples: the Mass-spring system, the Electromagnetic-ball-suspension system and the Cruise control of a car. We found that the Mass-spring system and the Electromagnetic-ball-suspension system are both controllable and observable, whereas the Cruise control of a car system is controllable but not observable.

Future work would include continuing this project to treat the design of a controller for the Electromagnetic-ball-suspension system. The question to ask here is how we can design a closed loop control system for the Electromagnetic-ball-suspension system, in such a way that the system corrects itself to obtain the desired output, which is the distance from the end of the electromagnet at which we want the ball to be suspended. We can also mention the following problems: robotics, control of combustion, control of fluids, control of plasma, hydrology, recovery of natural resources, and economics. The reader may refer to (Fernández-Cara et al., 2003) for more detailed descriptions of these problems.


Acknowledgements

First of all, I thank God for giving me good health for the entire period of my studies. I would like to thank my supervisor, Professor Joyati Debnath, who guided, supported and motivated me throughout my research project. Her availability and critiques gave me the courage and strength to do my research project. I appreciate the love, care and support of my wife, Nyiransabimana Prudencienne. I would like to thank AIMS-Tanzania for granting me the opportunity to study for a structured master's degree. I am thankful for every form of advice and encouragement from the Deputy Rector (Academic), Dr Wilson Mahera Charles, the tutors and all the AIMS students.


References

M. Artin, editor. Algebra. ISBN 0-13-593609-8. Prentice-Hall, Englewood Cliffs, New Jersey; Massachusetts Institute of Technology, 1991.

A. Astolfi. Stability of dynamical systems: continuous, discontinuous, and discrete systems (by Michel, A. N., et al.; 2008) [Bookshelf]. Control Systems, IEEE, 29(1):126-127, 2009.

K. J. Astrom and R. M. Murray. Feedback Systems: An Introduction for Scientists and Engineers. Princeton, 2008.

K. J. Åström and R. M. Murray. Feedback Systems: An Introduction for Scientists and Engineers. Princeton University Press, 2010.

J. Bevivino. The path from the simple pendulum to chaos. Dynamics at the Horsetooth, 1(1):1-24, 2009.

D. E. Boubiche, S. Boubiche, A. Bilami, and H. Toral-Cruz. Toward adaptive data aggregation protocols in wireless sensor networks. In Proceedings of the International Conference on Internet of Things and Cloud Computing, page 73. ACM, 2016.

ctms.engin.umich.edu. Cruise control image, 2016. URL http://ctms.engin.umich.edu/CTMS/index.php. [Online; accessed 11-May-2016].

B. Datta. Linear and numerical linear algebra in control theory: some research problems. ScienceDirect, 197-198:755-790, January-February 1994.

J. M. Davis, I. A. Gravagne, B. J. Jackson, and R. J. Marks II. Controllability, observability, realizability, and stability of dynamic linear systems. arXiv preprint arXiv:0901.3764, 2009.

D. Di Ruscio. System theory: state space analysis and control theory. Lecture notes, Telemark University College (Høgskolen i Telemark), Porsgrunn, Norway, 1996.

R. C. Dorf and R. H. Bishop. Modern Control Systems, 1998.

R. C. Dorf. The Engineering Handbook. CRC Press, 2004.

Electrical4u. Closed loop open loop control. http://www.electrical4u.com/control-system-closed-loop-open-loop-control-system, 2016. [Online; accessed 17-April-2016].

E. Fernández-Cara and E. Zuazua. Control theory: history, mathematical achievements and perspectives. Boletín SEMA (Sociedad Española de Matemática Aplicada), 26(12):79-140, 2003.

Google. Controllability and observability, 2016a. URL http://www.ece.rutgers.edu/~gajic/psfiles/chap5.pdf. [Online; accessed 24-May-2016].

Google. Laplace transform, 2016b. URL http://www.math.utah.edu/~gustafso/laplaceTransform.pdf. [Online; accessed 24-May-2016].

Google. Mass-spring images, 2016c. URL https://www.google.com/#q=mass+spring+images. [Online; accessed 11-May-2016].

M. A. Henson and D. E. Seborg. Feedback linearizing control. Nonlinear Process Control, pages 149-231, 1997.

K. J. Astrom and R. M. Murray. An introduction for scientists and engineers. ScienceDirect, 2016.

B. Noble and J. W. Daniel, editors. Applied Linear Algebra. ISBN 0-13-593609-8. Prentice-Hall International Editions, third edition, 1988.

W. Schulz. Theory and application of Grassmann algebra, 2011.

E. Sontag. Mathematical control theory. Springer, 1998.

tutorial.math.lamar.edu. Improper integrals: comparison test, 2016. URL http://tutorial.math.lamar.edu/Classes/CalcII/ImproperIntegralsCompTest.aspx. [Online; accessed 11-May-2016].

P. Venkatesh and S. Balamurugan. Real time control of magnetic-ball-suspension system using dSPACE DS1104.

M. A. Vitulli. A brief history of linear algebra and matrix theory, 2004. darkwing.uoregon.edu/vitulli/441.sp04.LinAlgHistory.html.

Wikipedia. Control system. Wikipedia, the Free Encyclopedia, https://en.wikipedia.org/w/index.php?title=Control_system&oldid=720692346, 2016a. [Online; accessed 24-May-2016].

Wikipedia. Pendulum. Wikipedia, the Free Encyclopedia, https://en.wikipedia.org/w/index.php?title=Pendulum&oldid=716475686, 2016b.

Wikipedia. Damping ratio. Wikipedia, the Free Encyclopedia, 2016c. URL https://en.wikipedia.org/w/index.php?title=Damping_ratio&oldid=710018522. [Online; accessed 24-April-2016].

www-lar.deis.unibo.it. Automatic control and system theory, 2016. URL http://www-lar.deis.unibo.it/people/cmelchiorri/Files_ACST/01_Introduction.pdf. [Online; accessed 11-May-2016].

X. Liao and P. Yu. Absolute Stability of Nonlinear Control Systems, 2nd edition (Mathematical Modelling: Theory and Applications). 2008. ISBN 1402084811, 9781402084812, 9781402084829. URL http://gen.lib.rus.ec/book/index.php?md5=8E0F7D7BD0BDAD48863B097986D07849.