Mr. Harish, as you said, your code does not solve these equations because they contain a Hadamard product term. Like you, I do not know of (or how to access) a Matlab function or other computer program that solves your equations. In my last answer I suggested changing the form of these equations with some linear algebra, rewriting the Hadamard product in terms of a Kronecker product (in fact I wrongly said Schur product, but I meant Kronecker product). That way, you or someone else might be able to devise a computational solution for them. To illustrate this, assuming (as you stated) that all matrices are n×n and nonsingular, it is possible to obtain an equation equivalent to yours in the form (using some of the properties and relations presented below):

vec(Y) = [(S')⁻¹ ⊗ R⁻¹] { vec(Z) - vec(C) .* [(D' ⊗ In) vec(Y)] }

(1)
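To make the rewriting behind (1) explicit, here is a minimal standalone numerical check, using arbitrary random matrices; the names Rt, St, Ct, Dt, Yt are purely illustrative and are not the data of your problem:

% Standalone check of the vectorization step behind (1):
%   vec(R*Y*S)      = kron(S',R) * vec(Y)
%   vec(C .* (Y*D)) = vec(C) .* ( kron(D',eye(n)) * vec(Y) )
n = 3;
Rt = randn(n); St = randn(n); Ct = randn(n); Dt = randn(n); Yt = randn(n);

lhs  = Rt*Yt*St + Ct .* (Yt*Dt);                                % matrix form
vlhs = kron(St',Rt)*Yt(:) + Ct(:) .* (kron(Dt',eye(n))*Yt(:));  % vec/Kronecker form

norm(vlhs - lhs(:))    % should be ~0 (round-off only)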

or, alternatively,  (S' ⊗ R) vec(Y) = vec{ Z - En' [(In ⊗ Y) (C ⊗ D)] En }

(2)

where  S = H F⁻¹ B,  R = A E⁻¹ G,  Z = A E⁻¹ Q2 F⁻¹ B - Q1.

Here En' = [E11 E22 … Enn] is a selection matrix, with each Eii (n×n) defined by Eii(i,j) = 0 for i ≠ j and Eii(i,i) = 1. In these equations, .* denotes the Hadamard product, ⊗ denotes the Kronecker product, and vec is the vectorization of a matrix (stacking its columns). Equation (1) does not seem to me to have a closed-form or even a direct solution; what do you think about that? In fact, it suggested an iterative solution to me. For example, a relatively simple approach: let

J(Y) := || vec(Y) - [(S')⁻¹ ⊗ R⁻¹] vec(f(Y)) ||

(3)

where vec(f(Y)) = vec(Z) - vec(C) .* [(D' ⊗ In) vec(Y)]; then search (iterate) over Y in (1) to minimize (3). I wrote a Matlab program for this (see below). Some observations about this direct approach: (i) it is probably worth considering only for small n; (ii) solving the problem this way can show numerical robustness as well as convergence problems. For example, in a sample problem with n = 2 the result I got was very good (see below); in another example, with n = 3, the result was very poor, possibly in part because of the general matrices I used in the example (do they need to be fully general, or do they have some structure such as positivity?). In conclusion, to solve this general problem as you formulated it, the relatively simple and direct iterative computational approach I described does not seem to be appropriate (reliable); to succeed, more research and knowledge about the problem are required (and perhaps some ingenuity in crafting the solution). I also suggest you take a look at the Simoncini paper (especially Section 7.3, about a Sylvester-like equation) for possible inspiration. Adade

I will stop here because the subject is not a priority for me at the moment, but I would be grateful if you let me know when you do find a better solution/approach.

Example

global Z C DkI SR;
A = [2 1; 1 1];  B = [1 1; 1 0];  C = [1 2; 3 4];  D = [1 -1; 0 1];
E = [4 1; 1 1];  F = [5 0; 0 1];  G = [1 1; 0 2];  H = [2 0; 1 1];
Q1 = eye(2);  Q2 = 2*eye(2);
S = H*inv(F)*B;
R = A*inv(E)*G;
Z = A*inv(E)*Q2*inv(F)*B - Q1;
DkI = kron(D', eye(size(D)));
SR  = kron(inv(S'), inv(R));
Y0 = [1 1; 0 1];
vecY = Y0(:);
[vecY, J] = fminsearch(@normaEvec, vecY);

J
J =
   5.7312e-005

Y = reshape(vecY, 2, 2)
Y =
   -0.0081   -0.1081
    0.4590    0.1066

R*Y*S + (C .* (Y*D))
ans =
    0.4667    0.1333
    2.0000   -1.0000

>> Z
Z =
    0.4667    0.1333
    2.0000   -1.0000

function J = normaEvec(vecY)
% This function calculates the Euclidean norm of
%   y := vec(Y) - SR * vec(f(Y))
% where vec(f(Y)) = vec(Z) - vec(C) .* (DkI * vec(Y)).
% Minimization of J gives an approximate solution to
% the matrix equation: R*Y*S + (C .* (Y*D)) = Z.
% Obs.: SR = kron(inv(S'), inv(R)) and DkI = kron(D', eye(size(Y))).
% R and S are nonsingular and all matrices have compatible dimensions.
global Z C DkI SR;
vecf = Z(:) - C(:) .* (DkI * vecY);
J = norm(vecY - (SR * vecf));
end
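As a complement to the fminsearch-based program above, one could also try a plain fixed-point iteration taken directly from (1). The sketch below assumes Z, C, DkI, SR, Y0 (and R, S, D for the final residual) have already been built exactly as in the example; like the direct approach, it comes with no convergence guarantee and only converges when the iteration map is a contraction.

% Sketch: fixed-point iteration on (1),
%   vec(Y) <- SR * ( vec(Z) - vec(C) .* (DkI * vec(Y)) )
% Assumes Z, C, DkI, SR, Y0 are defined as in the example above.
vecY = Y0(:);
for k = 1:500
    vecYnew = SR * (Z(:) - C(:) .* (DkI * vecY));
    if norm(vecYnew - vecY) < 1e-12    % stop when the update stalls
        vecY = vecYnew;
        break;
    end
    vecY = vecYnew;
end
Y = reshape(vecY, size(Z));
% Residual of the original matrix equation (small only if the iteration converged):
norm(R*Y*S + C .* (Y*D) - Z, 'fro')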

Important relations and properties

A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C   (associativity)
A ⊗ (B + C) = (A ⊗ B) + (A ⊗ C)   (distributivity)
(A + B) ⊗ C = (A ⊗ C) + (B ⊗ C)
k ⊗ A = A ⊗ k = k A,  k scalar
k1 A ⊗ k2 B = k1 k2 (A ⊗ B),  k1, k2 scalars
(M C) ⊗ (Y D) = (M ⊗ Y)(C ⊗ D)   (mixed-product property), for conforming matrices
(A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹
(A ⊗ B)' = A' ⊗ B',   (A ⊗ B)^H = A^H ⊗ B^H
(A ⊗ In)(Im ⊗ B) = A ⊗ B = (Im ⊗ B)(A ⊗ In),  for A (m×m) and B (n×n)
For vectors a and b,  a' ⊗ b = b a' = b ⊗ a'
For partitioned matrices,  [A1 A2] ⊗ B = [A1 ⊗ B  A2 ⊗ B],  but  A ⊗ [B1 B2] ≠ [A ⊗ B1  A ⊗ B2]
tr(A ⊗ B) = tr(A) tr(B)
rank(A ⊗ B) = rank(A) rank(B)

The vec operator creates a column vector from a matrix by stacking the n column vectors of the matrix below one another:

vec(A) = [a1; a2; … ; an],  where A = [a1 a2 … an] has columns a1, …, an.



vec(A + B) = vec(A) + vec(B)
vec(k A) = k vec(A),  k scalar
vec(a b') = b ⊗ a,  for any vectors a and b
vec[C .* (Y D)] = vec(C) .* vec(Y D)
vec(A B C) = (C' ⊗ A) vec(B),   A (m×n), B (n×p), C (p×q)
vec(A B) = (I ⊗ A) vec(B) = (B' ⊗ I) vec(A)
vec(A B C) = (Iq ⊗ A B) vec(C) = (C' B' ⊗ Im) vec(A)
tr(A B C) = vec(A')' (I ⊗ B) vec(C)
tr(A B) = vec(A')' vec(B)
tr(A' B C D') = vec(A)' (D ⊗ B) vec(C)
A .* B = B .* A

k (A .* B) = (k A) .* B,  k scalar
(A + B) .* Q = A .* Q + B .* Q,   A, B, Q (n×m)
A .* B = En' (A ⊗ B) Em,  where Em (m²×m) is a selection matrix
A Y B = C  ⇒  (B' ⊗ A) vec(Y) = vec(A Y B) = vec(C)
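These identities are easy to spot-check numerically. The following small standalone script (arbitrary random matrices; names chosen so as not to collide with the example above) verifies the vec(A B C) identity, the mixed-product property, and the selection-matrix form of the Hadamard product that underlies equation (2):

% Numerical spot-check of a few of the identities listed above (illustrative only).
m = 3; n = 4; p = 2; q = 5;
Am = randn(m,n); Bm = randn(n,p); Cm = randn(p,q);

% vec(Am*Bm*Cm) = kron(Cm',Am) * vec(Bm)
W  = Am*Bm*Cm;
e1 = norm(W(:) - kron(Cm',Am)*Bm(:));

% Mixed-product property: kron(M*U, V*W2) = kron(M,V)*kron(U,W2)
M = randn(3,2); U = randn(2,4); V = randn(5,3); W2 = randn(3,2);
e2 = norm(kron(M*U, V*W2) - kron(M,V)*kron(U,W2), 'fro');

% Hadamard via selection matrices: X.*T = En'*kron(X,T)*Em,  X,T (n-by-m)
nn = 3; mm = 4;
X = randn(nn,mm); T = randn(nn,mm);
En = zeros(nn^2,nn); for i = 1:nn, En((i-1)*nn+i, i) = 1; end   % En = [E11; ...; Enn]
Em = zeros(mm^2,mm); for j = 1:mm, Em((j-1)*mm+j, j) = 1; end   % Em = [E11; ...; Emm]
e3 = norm(X.*T - En'*kron(X,T)*Em, 'fro');

[e1 e2 e3]   % all three should be ~0 (round-off only)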