SOME DEVELOPMENTS ON PARAMETERIZED INVERSE EIGENVALUE PROBLEMS

HUA DAI¹

Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, People's Republic of China

CERFACS Technical Report - TR/PA/98/34

Abstract. A comprehensive survey of some recent results on parameterized inverse eigenvalue problems is given in this paper. Specific topics include: additive and multiplicative inverse eigenvalue problems, classical inverse eigenvalue problems and generalized inverse eigenvalue problems. Both theoretical and algorithmic aspects are reviewed. Some open problems are presented to stimulate further research.

Key words: eigenvalue problems, inverse problems, nonlinear equations, iterative methods

AMS(MOS) subject classification: 15A18, 65F15

1 This research was supported in part by the National Natural Science Foundation of China and the Jiangsu Province Natural Science Foundation. The work of the author was done during a visit to CERFACS, France in March-August 1998.


1. Introduction

A problem is called an inverse eigenvalue problem because of its relationship to an eigenvalue problem: the unknowns in the former are the data in the latter and vice versa. In an eigenvalue problem the data consist of the elements of a matrix or matrix pencil, while the unknowns are the eigenvalues and corresponding eigenvectors; they can be calculated by effective methods (see, for example, [17, 71, 117, 120, 138]). In the corresponding inverse problem, the data are complete or partial information on the eigenvalues or eigenvectors, together with some constraint conditions, and the unknowns are some elements of the matrix or matrix pencil. Depending on the application, inverse eigenvalue problems may be described in different forms. A collection of inverse eigenvalue problems has been identified and classified according to their characteristics [24, 151].

In this paper we present a survey of some recent results on a class of inverse eigenvalue problems, called parameterized inverse eigenvalue problems: given $n$ real or complex numbers $\lambda_1,\ldots,\lambda_n$ and an $n\times n$ matrix-valued function $A(c)$ or matrix pencil $(A(c),B(c))$ containing parameters $c=(c_1,\ldots,c_n)$, find $c\in\mathbb{R}^n$ or $\mathbb{C}^n$ such that the eigenvalues of the matrix $A(c)$ or matrix pencil $(A(c),B(c))$ are precisely $\lambda_1,\ldots,\lambda_n$. Parameterized inverse eigenvalue problems are of interest in many applications, including the inverse Sturm-Liouville problem, particle physics, control theory, structural design and inverse vibration problems. References to these motivating problems are strewn throughout the literature, particularly in two books [68, 151] and earlier reviews [24, 69, 70]. Besides the parameterized inverse eigenvalue problems, there are other kinds of inverse eigenvalue problems, for example, the inverse eigenvalue problems for Jacobi and symmetric band matrices, which are the problems of constructing Jacobi and symmetric band matrices from spectral data or some eigenpairs and can be solved by direct methods (see [5, 9-11, 28, 29, 32-34, 38, 41, 53, 54, 63, 67-70, 72, 73, 81, 82, 86-88, 151]), the inverse eigenvalue problems for Toeplitz [22, 65, 96, 97, 134] and nonnegative [3, 21, 62, 68] matrices, matrix or matrix pencil approximations under spectral and structural constraints (see [20, 25, 28-30, 35, 37, 39, 40, 42-44, 89, 131, 135, 148, 151]), pole assignment problems (see [14, 55, 92, 129, 133, 151]), and so on.

Associated with any inverse eigenvalue problem are two fundamental questions: solvability and computability. A major effort in solvability has been to determine necessary or sufficient conditions under which an inverse eigenvalue problem has a solution, while the main concern in computability has been to develop numerical methods for solving the problem. Both questions are difficult and challenging. For forty years there has been considerable discussion about all kinds of inverse eigenvalue problems. Most theoretical results and numerical methods can be found in the books [68, 151] and the references therein. In this paper, our discussion is confined to the parameterized inverse eigenvalue problems. Our intention is to provide a more complete review than [24] of the parameterized inverse eigenvalue problems. Section 2 of the survey is devoted to the additive and multiplicative inverse eigenvalue problems, where the emphasis is on solvability. Section 3 concerns the classical inverse eigenvalue problems.
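To make the problem statement concrete, the following minimal Python sketch (my own illustration, not a method from this survey) casts a small parameterized inverse eigenvalue problem as a nonlinear system. It assumes an affine family $A(c)=A_0+\sum_k c_k B_k$ with diagonal basis matrices $B_k=e_ke_k^T$, so that $A(c)=A_0+\mathrm{diag}(c)$, and uses scipy.optimize.fsolve on the eigenvalue mismatch; $A_0$, the target spectrum and all names are hypothetical choices for the example.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical affine family A(c) = A0 + c_1*B_1 + ... + c_n*B_n with B_k = e_k e_k^T,
# i.e. A(c) = A0 + diag(c); A0 is symmetric so the spectrum is real.
rng = np.random.default_rng(0)
n = 4
A0 = rng.standard_normal((n, n))
A0 = 0.2 * (A0 + A0.T) / 2

def A(c):
    return A0 + np.diag(c)

target = np.array([1.0, 2.0, 3.0, 4.0])   # prescribed eigenvalues lambda_1 <= ... <= lambda_n

def residual(c):
    # mismatch between the (sorted) spectrum of A(c) and the prescribed spectrum
    return np.linalg.eigvalsh(A(c)) - target

c = fsolve(residual, x0=target)           # start from the target; convergence is not guaranteed
print(np.linalg.eigvalsh(A(c)))           # should be close to `target` when fsolve succeeds
```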
We touch on some new developments in both the solvability and the computability. Section 4 deals with the generalized inverse eigenvalue problems, where the emphasis is on the computability.

To facilitate the discussion, we shall adopt the following notation. By $\mathbb{R}^n$ ($\mathbb{C}^n$) is meant the real (complex) linear space of column or row vectors of dimension $n$. $\mathbb{R}^{m\times n}$ ($\mathbb{C}^{m\times n}$) is the set of all $m\times n$ real (complex) matrices, and $SR^{n\times n}$ ($H^{n\times n}$) the set of all real symmetric (complex Hermitian) matrices. $A>0$ means that $A$ is a symmetric (Hermitian) positive definite matrix. $A^T$ ($A^H$) and $A^+$ represent the (conjugate) transpose and the Moore-Penrose inverse of a matrix $A$, respectively. $I$ is the identity matrix, and $e_i$ is the $i$th column of $I$. $\delta_{ij}$ is the Kronecker delta. $\rho(A)$ and $\mathrm{tr}(A)$ denote the spectral radius and the trace of a square matrix $A$, respectively. $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ stand for the minimum and maximum eigenvalues of a real symmetric (complex Hermitian) matrix $A$, respectively. $\sigma(A)$ ($\sigma(A,B)$) denotes the set of all eigenvalues of the standard (generalized) eigenvalue problem $Ax=\lambda x$ ($Ax=\lambda Bx$). $\|\cdot\|_2$ ($\|\cdot\|_1$) denotes the Euclidean vector norm (1-norm) or its corresponding induced matrix norm, and $\|\cdot\|_F$ the Frobenius matrix norm. $\|\cdot\|$ is a consistent matrix norm. For an $n\times m$ matrix $A=[a_1,\ldots,a_m]$, where $a_i$ is the $i$th column vector of $A$, we define the norm $\|A\|=\max_{1\le j\le m}\{\|a_j\|_2\}$, and $\sigma_{\min}(A)$ denotes the smallest singular value of the matrix $A$. For a vector $b=(b_1,\ldots,b_n)^T\in\mathbb{R}^n$ and a matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$, let
$$d(b)=\min_{i\ne j}|b_i-b_j|,\qquad \mu(A)=\max_{i\ne j}|a_{ij}|,\qquad \mathrm{diag}(b)=\mathrm{diag}(b_1,\ldots,b_n)=\mathrm{diag}(b_i),$$
$$K_1(A)=\max_{1\le j\le n}\sum_{\substack{i=1\\ i\ne j}}^{n}|a_{ij}|,\qquad K_2(A)=\max_{1\le j\le n}\Bigl(\sum_{\substack{i=1\\ i\ne j}}^{n}a_{ij}^2\Bigr)^{1/2},$$
$$h(A)=\Bigl(\sum_{\substack{i,j=1\\ i\ne j}}^{n}a_{ij}^2\Bigr)^{1/2},\qquad A^{(0)}=A-\mathrm{diag}(a_{11},\ldots,a_{nn}).$$
Let $k$ and $n$ be two positive integers with $1\le k\le n$. $G_{k,n}$ denotes the set of all increasing sequences of integers $\omega=(j_1,j_2,\ldots,j_k)$ with $1\le j_1<j_2<\cdots<j_k\le n$. For arbitrary $\omega=(j_1,j_2,\ldots,j_k)\in G_{k,n}$, $A(\omega)=A[j_1,j_2,\ldots,j_k]$ denotes the $k\times k$ principal submatrix of $A$ whose $(i,l)$ entry is $a_{j_i,j_l}$ $(i,l=1,\ldots,k)$.
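As a quick reference, here is a small Python sketch (my own illustration; the function name is hypothetical) that evaluates the quantities $d(b)$, $\mu(A)$, $K_1(A)$, $K_2(A)$, $h(A)$ and $A^{(0)}$ defined above for a concrete matrix and vector.

```python
import numpy as np

def notation_quantities(A, b):
    """Evaluate d(b), mu(A), K1(A), K2(A), h(A) and A^(0) as defined in the notation section."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    off = ~np.eye(n, dtype=bool)                                 # mask selecting off-diagonal entries
    d_b = np.min(np.abs(b[:, None] - b[None, :])[off])           # d(b)  = min_{i!=j} |b_i - b_j|
    mu  = np.max(np.abs(A)[off])                                 # mu(A) = max_{i!=j} |a_ij|
    K1  = np.max([np.sum(np.abs(A[off[:, j], j])) for j in range(n)])        # max off-diagonal column 1-norm
    K2  = np.max([np.sqrt(np.sum(A[off[:, j], j] ** 2)) for j in range(n)])  # max off-diagonal column 2-norm
    h   = np.sqrt(np.sum(A[off] ** 2))                           # h(A): Frobenius norm of the off-diagonal part
    A0  = A - np.diag(np.diag(A))                                # A^(0): A with its diagonal removed
    return d_b, mu, K1, K2, h, A0

print(notation_quantities([[2.0, 1.0], [3.0, 5.0]], [0.0, 4.0]))
```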

2. Additive and multiplicative inverse eigenvalue problems

In this section, we deal with the additive and multiplicative inverse eigenvalue problems and survey developments concerning their solvability.

2.1. Additive and multiplicative inverse eigenvalue problems over the complex field

The additive and multiplicative inverse eigenvalue problems over the complex field are described as follows.

Problem CA. Given a matrix $A\in\mathbb{C}^{n\times n}$ and $n$ complex numbers $\lambda_1,\ldots,\lambda_n$, find $c=(c_1,\ldots,c_n)^T\in\mathbb{C}^n$ such that the eigenvalues of the matrix $A+\mathrm{diag}(c)$ are precisely $\lambda_1,\ldots,\lambda_n$.

Problem CM. Given a matrix $A\in\mathbb{C}^{n\times n}$ and $n$ complex numbers $\lambda_1,\ldots,\lambda_n$, find $c=(c_1,\ldots,c_n)^T\in\mathbb{C}^n$ such that the eigenvalues of the matrix $\mathrm{diag}(c)A$ are precisely $\lambda_1,\ldots,\lambda_n$.

Concerning the existence of solutions to Problem CA, Friedland [59, 61] proved the following theorem by using some powerful results from algebraic geometry.

Theorem 2.1. For any specified $\{\lambda_1,\ldots,\lambda_n\}$ and $A\in\mathbb{C}^{n\times n}$, Problem CA is always solvable. The number of solutions is finite and does not exceed $n!$. Moreover, for almost all $\{\lambda_1,\ldots,\lambda_n\}$, the number of solutions is exactly $n!$.

Alexander [2] and Byrnes and Wang [15] gave other proofs of Theorem 2.1. Byrnes and Wang [15] considered an additive inverse eigenvalue problem for Lie perturbations over an algebraically closed field $\mathbb{F}$. More precisely, let $\mathfrak{gl}(n,\mathbb{F})$ be the Lie algebra of all $n\times n$ matrices defined over $\mathbb{F}$. Given $A\in\mathfrak{gl}(n,\mathbb{F})$ and a matrix Lie subalgebra $\mathfrak{s}\subseteq\mathfrak{gl}(n,\mathbb{F})$, define a polynomial mapping $\varphi_A$ from $\mathfrak{s}$ to the affine $n$-space $\mathbb{A}^n$ of monic polynomials over $\mathbb{F}$ of degree $n$ via
$$\varphi_A(L)=\det(\lambda I-A-L),\qquad L\in\mathfrak{s}.$$
Under what condition is $\varphi_A$ onto? This problem arises in some particular applications, for example, control theory [15]. Byrnes and Wang showed that $\varphi_A:\mathfrak{s}\mapsto\mathbb{A}^n$ is onto for all $A$ if and only if $\mathrm{rank}(\mathfrak{s})=n$ and some element of $\mathfrak{s}$ has distinct eigenvalues.

By using topological degree theory, Friedland [60, 61] proved the following result for Problem CM.

Theorem 2.2. For any specified $\{\lambda_1,\ldots,\lambda_n\}$, Problem CM is solvable if all principal minors of $A$ are different from zero. Moreover, there are at most $n!$ distinct solutions.

Dias da Silva [51] considered the multiplicative inverse eigenvalue problem over an algebraically closed field and showed a similar result. Theorem 2.2 gives a sufficient condition for the solvability of Problem CM. In general, however, the existence question for Problem CM depends on both the given matrix $A$ and the prescribed eigenvalues $\lambda_1,\ldots,\lambda_n$. We consider an example of Problem CM. Let

A=

0



1   1 0 ; 1 = ?i ; 2 = i

p

where i = ?1. It is easy to verify Problem CM has a solution for the given A and 1; 2. But A does not satisfy the condition of Theorem 2.2. So it is worth to consider the solvability of Problem CM by using both the given matrix A and the speci ed eigenvalues 1 ;    ;  n .
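A quick numerical check of this example (my own illustration, not part of the original text): with the choice $c=(1,-1)^T$, the product $\mathrm{diag}(c)A$ has exactly the prescribed eigenvalues $\pm i$, even though a principal minor of $A$ vanishes.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
c = np.array([1.0, -1.0])          # a solution of Problem CM for this A and lambda = (-i, i)

eigvals = np.linalg.eigvals(np.diag(c) @ A)
print(np.sort_complex(eigvals))    # -> approximately [-1j, 1j], matching the prescribed spectrum

# Theorem 2.2 does not apply here: the leading principal minor a_11 = 0 vanishes,
# yet Problem CM is solvable for these data.
print(A[0, 0])                     # 0.0
```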

2.2. Additive and multiplicative inverse eigenvalue problems over the real field

Consider the following problems.

Problem RA. Given a matrix $A\in\mathbb{R}^{n\times n}$ and $n$ real numbers $\lambda_1,\ldots,\lambda_n$, find $c=(c_1,\ldots,c_n)^T\in\mathbb{R}^n$ such that the eigenvalues of the matrix $A+\mathrm{diag}(c)$ are precisely $\lambda_1,\ldots,\lambda_n$.

Problem RM. Given a matrix $A\in\mathbb{R}^{n\times n}$ and $n$ real numbers $\lambda_1,\ldots,\lambda_n$, find $c=(c_1,\ldots,c_n)^T\in\mathbb{R}^n$ such that the eigenvalues of the matrix $\mathrm{diag}(c)A$ are precisely $\lambda_1,\ldots,\lambda_n$.

Problem RA and Problem RM are called the additive and multiplicative inverse eigenvalue problems over the real field, respectively. It is easy to construct examples showing that Problem RA is not always solvable over the real field. So one must seek necessary and sufficient conditions for the solvability of both Problem RA and Problem RM. Morel [106] gave the following necessary conditions for the solvability of Problem RA:
$$\sum_{j=1}^n \lambda_j^2-\frac{1}{n}\Bigl(\sum_{j=1}^n \lambda_j\Bigr)^2 \ge \sum_{i=1}^n\sum_{j=1}^n a_{ij}a_{ji} \qquad (1)$$
and
$$\sum_{i=1}^n\sum_{j=1}^n (\lambda_i-\lambda_j)^2 \ge 2n\sum_{i=1}^n\sum_{j=1}^n a_{ij}a_{ji}. \qquad (2)$$
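The following Python sketch (my own illustration; the function name and test data are hypothetical) checks the necessary conditions (1) and (2) for a given matrix with zero diagonal and a prescribed spectrum.

```python
import numpy as np

def morel_necessary_conditions(A, lam):
    """Check the necessary conditions (1)-(2) for Problem RA (A assumed to have zero diagonal)."""
    A = np.asarray(A, dtype=float)
    lam = np.asarray(lam, dtype=float)
    n = len(lam)
    s = np.sum(A * A.T)                                              # sum_{i,j} a_ij * a_ji
    cond1 = np.sum(lam**2) - np.sum(lam)**2 / n >= s                 # condition (1)
    cond2 = np.sum((lam[:, None] - lam[None, :])**2) >= 2 * n * s    # condition (2)
    return cond1, cond2

A = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 2.0],
              [0.5, 2.0, 0.0]])
print(morel_necessary_conditions(A, [1.0, 2.0, 10.0]))   # both hold for this well-spread spectrum
print(morel_necessary_conditions(A, [1.0, 1.1, 1.2]))    # both fail: spectrum too clustered for this A
```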

We suppose that $a_{ii}=0$ $(i=1,\ldots,n)$ in Problem RA and $a_{ii}=1$ $(i=1,\ldots,n)$ in Problem RM, and that $\lambda_1\ge\cdots\ge\lambda_n$. For $A=(a_{ij})\in\mathbb{R}^{n\times n}$, let
$$g_i=\sum_{\substack{j=1\\ j\ne i}}^{n}|a_{ij}|,\qquad \lambda=(\lambda_1,\ldots,\lambda_n)^T.$$

The cornerstones of all known sufficient conditions are the Brouwer fixed-point theorem and the Kronecker theorem [115]. In the application of these two theorems, two very important roles are played, respectively, by the mapping
$$T:\ \mathcal{B}\mapsto\mathbb{R}^n,\qquad T(c)=c+\lambda-\lambda(c) \qquad (3)$$
and the homotopy
$$H:\ \bar{\mathcal{D}}\times[0,1]\subset\mathbb{R}^{n+1}\mapsto\mathbb{R}^n,\qquad H(c,t)=\lambda(A(c,t)) \qquad (4)$$
where $\mathcal{B}$ is a nonempty convex compact set in $\mathbb{R}^n$, $\mathcal{D}$ is an open and bounded set in $\mathbb{R}^n$, $\bar{\mathcal{D}}$ is the closure of the set $\mathcal{D}$, $\lambda(c)=\lambda(A+\mathrm{diag}(c))$ for Problem RA or $\lambda(\mathrm{diag}(c)A)$ for Problem RM, and $A(c,t)=tA+\mathrm{diag}(c)$ for Problem RA or $\mathrm{diag}(c)[I+t(A-I)]$ for Problem RM. Since $T$ is a continuous mapping, if $T(\mathcal{B})\subseteq\mathcal{B}$, then there exists $c\in\mathcal{B}$ such that $T(c)=c$, and so $\lambda(A+\mathrm{diag}(c))=\lambda$ ($\lambda(\mathrm{diag}(c)A)=\lambda$). Thus $c$ is a solution of Problem RA (Problem RM); a simple fixed-point iteration suggested by the mapping (3) is sketched at the end of this subsection. Similarly, $H(c,t)$ is also a continuous mapping. Clearly, the degree of the mapping $H(c,0)$ on $\mathcal{D}$ with respect to $\lambda$ is $\deg(H(\cdot,0),\mathcal{D},\lambda)=1$. If $\lambda$ satisfies $H(c,t)\ne\lambda$ for all $(c,t)\in\partial\mathcal{D}\times[0,1]$ ($\partial\mathcal{D}$ is the boundary of the set $\mathcal{D}$), then it follows from the homotopy invariance theorem [115] and the Kronecker theorem that there is a $c$ in $\mathcal{D}$ such that $H(c,1)=\lambda$. So $c$ is a solution of Problem RA (Problem RM).

Constructing a particular convex compact set $\mathcal{B}$ in $\mathbb{R}^n$ and then applying the Brouwer fixed-point theorem, de Oliveira [112, 114] gave the following sufficient conditions for the solvability of Problem RA and Problem RM.

(i) If there exists a permutation $p$ of $\{1,2,\ldots,n\}$ such that
$$|\lambda_{p(i)}-\lambda_{p(j)}|\ge 2(g_i+g_j),\qquad i,j=1,\ldots,n,\ i\ne j, \qquad (5)$$
then Problem RA has a solution.

(ii) If $\max_{1\le i\le n}\{g_i\}\le\tfrac{1}{2}$ and no two of the intervals $\bigl[\lambda_i(1-2g_i),\ \lambda_i\frac{1+g_i}{1-g_i}\bigr]$ $(i=1,\ldots,n)$ intersect, then Problem RM has a solution.

Choosing a particular open and bounded set $\mathcal{D}$ in $\mathbb{R}^n$ and then using the homotopy invariance theorem and the Kronecker theorem, Dai and He [36, 84] presented the following sufficient conditions.

(i) Assume that $\lambda_1>\lambda_2>\cdots>\lambda_n$ and there is a permutation $p$ of $\{1,2,\ldots,n\}$ such that
$$\lambda_i-\lambda_{i+1}\ge 2\max\{g_{p(i)},g_{p(i+1)}\},\qquad i=1,\ldots,n-1; \qquad (6)$$
then Problem RA has a solution.

(ii) Assume that $\lambda_1>\lambda_2>\cdots>\lambda_n$ and there is a permutation $p$ of $\{1,2,\ldots,n\}$ such that $g_{p(1)}<1$, $g_{p(n)}<1$ and
$$\lambda_i-\lambda_{i+1}\ge 2\rho_0(1+\tau)\max\{g_{p(i)},g_{p(i+1)}\},\qquad i=1,\ldots,n-1, \qquad (7)$$
where $\rho_0=\max\{|\lambda_1|,|\lambda_n|\}$ and $\tau=\max\Bigl\{\frac{g_{p(1)}}{1-g_{p(1)}},\ \frac{g_{p(n)}}{1-g_{p(n)}}\Bigr\}$; then Problem RM has a solution.
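The mapping $T$ in (3) also suggests a simple numerical procedure: iterate $c\leftarrow c+\lambda-\lambda(c)$ and look for a fixed point. The Python sketch below is my own illustration of this naive iteration, not an algorithm from [112, 114] or [36, 84]; it is applied to a randomly generated instance of Problem RA with zero diagonal, and convergence is not guaranteed in general.

```python
import numpy as np

def fixed_point_RA(A, lam_target, tol=1e-12, max_iter=500):
    """Naive fixed-point iteration c <- c + lambda - lambda(c) for Problem RA.

    lambda(c) denotes the sorted spectrum of A + diag(c); target and computed
    spectra are matched after sorting, which implicitly fixes an ordering.
    """
    lam_target = np.sort(np.asarray(lam_target, dtype=float))
    c = lam_target.copy()                      # start from the diagonal that alone gives the target spectrum
    for _ in range(max_iter):
        lam_c = np.sort(np.linalg.eigvals(A + np.diag(c)).real)
        step = lam_target - lam_c
        c = c + step
        if np.linalg.norm(step) < tol:
            break
    return c

rng = np.random.default_rng(1)
n = 5
A = rng.uniform(-0.1, 0.1, size=(n, n))
np.fill_diagonal(A, 0.0)                       # zero diagonal, as assumed in this subsection
lam = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # well-separated eigenvalues (cf. conditions (5)-(6))
c = fixed_point_RA(A, lam)
print(np.sort(np.linalg.eigvals(A + np.diag(c)).real))   # should approximate lam
```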

2.3. Additive and multiplicative inverse eigenvalue problems for symmetric (Hermitian) matrices

We now consider the following additive and multiplicative inverse eigenvalue problems for real symmetric (complex Hermitian) matrices.

Problem SA (HA). Given a matrix $A\in SR^{n\times n}$ ($H^{n\times n}$) and $n$ real numbers $\lambda_1,\ldots,\lambda_n$, find $c=(c_1,\ldots,c_n)^T\in\mathbb{R}^n$ such that the eigenvalues of the matrix $A+\mathrm{diag}(c)$ are precisely $\lambda_1,\ldots,\lambda_n$.

Problem SM (HM). Given a matrix $A\in SR^{n\times n}$ ($H^{n\times n}$) and $n$ real numbers $\lambda_1,\ldots,\lambda_n$, find $c=(c_1,\ldots,c_n)^T\in\mathbb{R}^n$ such that the eigenvalues of the matrix $\mathrm{diag}(c)A$ are precisely $\lambda_1,\ldots,\lambda_n$.

There is another type of multiplicative inverse eigenvalue problem:

Problem HM'. Given a matrix $A\in H^{n\times n}$ and $n$ real numbers $\lambda_1,\ldots,\lambda_n$, find a real $n\times n$ nonsingular diagonal matrix $X=\mathrm{diag}(c_1,\ldots,c_n)$ such that the eigenvalues of the matrix $X^{-1}AX^{-1}$ are precisely $\lambda_1,\ldots,\lambda_n$.

Both Problem HA and Problem HM' were first posed by Downing and Householder [52]. Clearly, Problem HM and Problem HM' are essentially the same, and so are Problem SA (SM) and Problem HA (HM). We discuss only Problem SA and Problem SM hereafter.

Problem SA arises from the study of inverse Sturm-Liouville problems. Consider the boundary value problem
$$-u''(x)+p(x)u(x)=\lambda u(x), \qquad (8)$$
$$u(0)=u(\pi)=0.$$

Suppose that the potential $p(x)$ is unknown, but the spectrum $\{\lambda_i\}_{i=1}^{\infty}$ is given. Can we determine $p(x)$? This continuous problem has been studied by many authors (see, for example, [12, 66, 80, 109, 116]). We are mainly interested in the discrete analogue of this problem. Let us use a uniform mesh, defining $h=\frac{\pi}{n+1}$, $u_k=u(kh)$, $c_k=p(kh)$, $k=1,\ldots,n$, and assume that $\{\lambda_i\}_{i=1}^{n}$ is given. Using central differences to approximate $u''$ we obtain
$$\frac{-u_{k-1}+2u_k-u_{k+1}}{h^2}+c_k u_k=\lambda u_k,\qquad k=1,\ldots,n,\qquad u_0=u_{n+1}=0.$$
Thus we have a Problem SA with
$$A=\frac{1}{h^2}\begin{pmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{pmatrix}. \qquad (9)$$
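For concreteness, a short Python sketch (my own illustration) that builds the matrix $A$ of (9) together with the perturbed matrix $A+\mathrm{diag}(c)$ for a sampled potential; the particular choice $p(x)=x$ is arbitrary.

```python
import numpy as np

def discrete_sturm_liouville(n, p=lambda x: x):
    """Discretize -u'' + p(x) u = lambda*u on (0, pi) with zero boundary conditions.

    Returns the matrix A of (9) and the vector c = (p(h), p(2h), ..., p(nh)),
    so that the discrete eigenvalue problem reads (A + diag(c)) u = lambda*u.
    """
    h = np.pi / (n + 1)
    x = h * np.arange(1, n + 1)
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    c = p(x)
    return A, c

A, c = discrete_sturm_liouville(8)
lam = np.linalg.eigvalsh(A + np.diag(c))
print(lam[:3])   # lowest discrete eigenvalues; the inverse problem asks for c given such data
```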

[80] is a comprehensive reference for both the discrete and continuous inverse Sturm-Liouville problems. [151] gave an example of Problem SM arising from an engineering application (see also [24]). In fact, Problem SM also arises in the discrete analogue [64] of the following inverse boundary value problem:
$$-u''(x)=\lambda\rho(x)u(x), \qquad (10)$$
$$u(0)=u(\pi)=0.$$

The question is whether we can determine the density function $\rho(x)$ from the eigenvalues $\{\lambda_i\}_{i=1}^{\infty}$.

There have been important advances on the existence of solutions to Problem SA and Problem SM in the last 40 years. We assume that $a_{ii}=0$ $(i=1,\ldots,n)$ in Problem SA and $a_{ii}=1$ $(i=1,\ldots,n)$ in Problem SM, and that $\lambda_1\le\lambda_2\le\cdots\le\lambda_n$. Let $\mu_1\le\mu_2\le\cdots\le\mu_n$ be the eigenvalues of the matrix $A$. From (1) and (2) and the symmetry of the matrix $A$, we have the following necessary conditions for the solvability of Problem SA:
$$\sum_{j=1}^n \lambda_j^2-\frac{1}{n}\Bigl(\sum_{j=1}^n \lambda_j\Bigr)^2\ge\|A\|_F^2 \qquad (11)$$
and
$$\sum_{i=1}^n\sum_{j=1}^n(\lambda_i-\lambda_j)^2\ge 2n\|A\|_F^2. \qquad (12)$$

Xu [141] gave the following results.

(i) A necessary condition for the solvability of Problem SA is
$$\sum_{1\le i \,\cdots}$$