Models with Orthogonal Block Structure, with Diagonal Blockwise Variance-Covariance Matrices

Francisco Carvalho∗,†, João T. Mexia∗ and Ricardo Covas∗,†

∗CMA - Centro de Matemática e Aplicações, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal
†Unidade Departamental de Matemática e Física, Instituto Politécnico de Tomar, 2300-313 Tomar, Portugal

Abstract. We intend to show that in the family of models with orthogonal block structure, OBS, we may single out those with blockwise diagonal variance-covariance matrices, DOBS. Namely, we show that for every model with OBS and observation vector $y$ there is a model $\mathring{y} = Py$, with $P$ orthogonal, which is DOBS, and that the estimation of the relevant parameters may be carried out for $\mathring{y}$.
Keywords: Orthogonal block structure, variance-covariance matrices, estimation
PACS: 02.50.-r

INTRODUCTION

Models with orthogonal block structure (OBS) have, see [4, 5], variance-covariance matrices
$$
V(\gamma) = \sum_{j=1}^{m} \gamma_j Q_j,
$$
with non-negative variance components $\gamma_j \geq 0$, $j = 1, \dots, m$, and where the matrices $Q_1, \dots, Q_m$ are known mutually orthogonal orthogonal projection matrices (MOOPM) that add up to $I_n$, the identity matrix of dimension $n$. Besides the $Q_1, \dots, Q_m$, other orthogonal projection matrices play an important part in the inference on these models. If the model has mean vector $\mu = X\beta$, the $g_j$ row vectors of $A_j$ constitute an orthonormal basis for the range space $R(Q_j)$, and $K_j$ [$K^c_j$] is the orthogonal projection matrix on $\Omega_j = R(A_j X)$ [on the orthogonal complement of $\Omega_j$ relative to $\mathbb{R}^{g_j}$], we then take
$$
Q'_j = A_j^\top K_j A_j, \qquad Q^c_j = A_j^\top K^c_j A_j, \qquad j = 1, \dots, m.
$$
We can reorder the $Q_1, \dots, Q_m$ to have $p_j = \operatorname{rank}(Q'_j) > 0$ if and only if $j \leq z$. Moreover, see [1], the best quadratic unbiased estimator (BQUE) for $\gamma_j$ is
$$
\tilde{\gamma}_j = \frac{y^\top Q^c_j y}{p^c_j},
$$
whenever $p^c_j = \operatorname{rank}(Q^c_j) > 0$. We will make the estimability assumption that $p^c_j > 0$, $j = 1, \dots, m$.

In the next section we will use orthogonal transformations to lighten the inference for models with OBS.
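Before proceeding, here is a minimal numerical sketch of the BQUE just defined (our illustration, not code from the paper): a toy OBS model with $n = 8$ and $m = 2$, built from four groups of two observations, with design matrix $X = 1_8$. The helper name `bque_pieces` and the toy layout are our own assumptions.

```python
# Toy OBS model: n = 8 observations in 4 groups of 2, m = 2 MOOPM blocks.
# This is an illustrative sketch, not the authors' code.
import numpy as np

n, groups = 8, 4
Z = np.kron(np.eye(groups), np.ones((2, 1)))      # group-membership matrix
Q1 = Z @ np.linalg.pinv(Z)                        # OPM on R(Z); rank g_1 = 4
Q2 = np.eye(n) - Q1                               # its complement; rank g_2 = 4
X = np.ones((n, 1))                               # overall-mean design, mu = X beta

def bque_pieces(Q, X):
    """Return Q'_j, Q^c_j and p^c_j for one block, following the definitions above."""
    w, v = np.linalg.eigh(Q)
    A = v[:, w > 0.5].T                           # rows: orthonormal basis of R(Q_j)
    AX = A @ X
    K = AX @ np.linalg.pinv(AX)                   # OPM on Omega_j = R(A_j X)
    Kc = np.eye(A.shape[0]) - K                   # OPM on the complement of Omega_j
    return A.T @ K @ A, A.T @ Kc @ A, int(round(np.trace(Kc)))

rng = np.random.default_rng(0)
gamma = np.array([3.0, 1.0])                      # true variance components
V = gamma[0] * Q1 + gamma[1] * Q2                 # V(gamma) = sum_j gamma_j Q_j
y = rng.multivariate_normal(2.0 * X[:, 0], V)     # one observation vector

for j, Q in enumerate([Q1, Q2], start=1):
    _, Qc, pc = bque_pieces(Q, X)
    print(f"BQUE gamma_{j}: {y @ Qc @ y / pc:.3f} (p^c_{j} = {pc})")
```

Here $Q'_2 = 0$, so $z = 1$, while $p^c_1 = 3$ and $p^c_2 = 4$, so both variance components remain estimable.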

EQUIVALENCE RELATIONS

Since the inverse of an orthogonal matrix is itself orthogonal, it is easy to see that, writing $y \,\tau\, \mathring{y}$, where $\mathring{y} = Py$, with $P$ orthogonal, we define an equivalence relation between models with observation vectors $y$ and $\mathring{y}$. If $y$ has OBS with variance-covariance matrix $V(\gamma)$, the variance-covariance matrices of $\mathring{y}$ will be
$$
\mathring{V}(\gamma) = \sum_{j=1}^{m} \gamma_j \mathring{Q}_j,
$$
with $\mathring{Q}_1, \dots, \mathring{Q}_m$ MOOPM that add up to $I_n$. Thus the family $\Theta_n$ of $n$-dimensional models with OBS will be saturated for $\tau$. Since
$$
\operatorname{rank}(\mathring{Q}_j) = \operatorname{rank}(Q_j) = g_j, \qquad j = 1, \dots, m,
$$
the vector $g = (g_1 \cdots g_m)$ will be invariant in the $\tau$ equivalence class. Moreover, if the row vectors of $A_j$ constitute an orthonormal basis for $R(Q_j)$, the row vectors of $\mathring{A}_j = A_j P^\top$ will constitute an orthonormal basis for $R(\mathring{Q}_j)$, $j = 1, \dots, m$. Besides this, $\mathring{y}$ will have mean vector

$$
\mathring{\mu} = P\mu = \mathring{X}\beta, \quad \text{where } \mathring{X} = PX,
$$
so that
$$
\mathring{X}_j = \mathring{A}_j \mathring{X} = A_j P^\top P X = X_j, \qquad j = 1, \dots, m,
$$
thus $\mathring{K}_j = K_j$ and $\mathring{K}^c_j = K^c_j$, $j = 1, \dots, m$, as well as
$$
\mathring{Q}'_j = \mathring{A}_j^\top \mathring{K}_j \mathring{A}_j = P A_j^\top K_j A_j P^\top = P Q'_j P^\top, \qquad j = 1, \dots, m,
$$
and $\mathring{Q}^c_j = P Q^c_j P^\top$, $j = 1, \dots, m$.

Now, see [2], with $U$ such that
$$
U^\top U = \sum_{j=1}^{z} \mathring{Q}_j,
$$
$Uy$ is linearly sufficient. Thus, see [3], whatever the estimable linear function $c^\top \beta$, we have for it a best linear unbiased estimator (BLUE), $\tilde{c}(\gamma)^\top y$. Given $c$ and $\gamma$, $\tilde{c}(\gamma)$ can be obtained as shown in [2]. Moreover, since $(P\tilde{c}(\gamma))^\top \mathring{y} = \tilde{c}(\gamma)^\top y$, for the transformed model we will also have the BLUE, with $\mathring{\tilde{c}}(\gamma) = P\tilde{c}(\gamma)$.
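These invariance claims are easy to check numerically. The sketch below (ours, reusing `Q1`, `Q2` and `y` from the previous toy example) draws a random orthogonal $P$ and verifies that the $\mathring{Q}_j = PQ_jP^\top$ are again MOOPM with unchanged ranks, and that a linear estimator transforms as $\mathring{c}(\gamma) = P c(\gamma)$ without changing its value.

```python
# Equivalence under an orthogonal transformation (illustrative check).
import numpy as np

rng = np.random.default_rng(1)
P, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # a random orthogonal matrix
Q1r, Q2r = P @ Q1 @ P.T, P @ Q2 @ P.T             # the "ringed" MOOPM
y_ring = P @ y                                    # the "ringed" observation vector

# MOOPM structure and ranks are preserved:
assert np.allclose(Q1r @ Q1r, Q1r) and np.allclose(Q1r @ Q2r, 0)
assert np.allclose(Q1r + Q2r, np.eye(8))
assert np.linalg.matrix_rank(Q1r) == np.linalg.matrix_rank(Q1)

# A linear estimator c^T y is reproduced by (P c)^T (P y):
c = rng.standard_normal(8)                        # a generic coefficient vector
assert np.isclose((P @ c) @ y_ring, c @ y)
```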

A special case of interest among models with OBS is that of models with commutative orthogonal block structure (COBS), in which the orthogonal projection matrix $T$ on $R(X)$ commutes with the MOOPM $Q_1, \dots, Q_m$, and thus, whatever $\gamma$, with $V(\gamma)$. Then, see [6], the least squares estimators (LSE) will be uniformly best linear unbiased estimators (UBLUE), meaning that they will be BLUE whatever $\gamma$. Now the orthogonal projection matrix $\mathring{T}$ on $R(\mathring{X})$, with $\mathring{X} = PX$, is given by
$$
\mathring{T} = P T P^\top = \mathring{U}^\top \mathring{U},
$$
with $\mathring{U} = U P^\top$. Thus $\mathring{T}$ will commute with the MOOPM $\mathring{Q}_j = P Q_j P^\top$, $j = 1, \dots, m$, and since $\mathring{y} = Py$ has variance-covariance matrices
$$
\mathring{V}(\gamma) = P V(\gamma) P^\top = \sum_{j=1}^{m} \gamma_j \mathring{Q}_j,
$$
we see that $\mathring{y}$ has COBS. Thus the family of $n$-dimensional models with COBS is also saturated for the equivalence relation $\tau$.
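In the toy example the model is in fact COBS, and the commutation survives the transformation; a short check (ours, reusing `X`, `Q1`, `Q2`, `P`, `Q1r` and `Q2r` from the previous sketches):

```python
# COBS check: T, the OPM on R(X), commutes with the MOOPM, before and after P.
import numpy as np

T = X @ np.linalg.pinv(X)                         # OPM on R(X)
assert np.allclose(T @ Q1, Q1 @ T) and np.allclose(T @ Q2, Q2 @ T)
T_ring = P @ T @ P.T                              # OPM on R(P X)
assert np.allclose(T_ring @ Q1r, Q1r @ T_ring)
assert np.allclose(T_ring @ Q2r, Q2r @ T_ring)
```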


DIAGONAL OBS

We now single out an interesting subfamily $D_n$ of $\Theta_n$: that of the $n$-dimensional models with OBS and blockwise diagonal variance-covariance matrices $D(\gamma_1 I_{g_1} \cdots \gamma_m I_{g_m})$. These will be the diagonal OBS (DOBS).

Going back to the preceding section, we recall that the matrices $\mathring{Q}'_1, \dots, \mathring{Q}'_z$ and $\mathring{Q}^c_1, \dots, \mathring{Q}^c_m$ are MOOPM, and that we have
$$
\mathring{V}(\gamma) = \sum_{j=1}^{z} \gamma_j \mathring{Q}'_j + \sum_{j=1}^{m} \gamma_j \mathring{Q}^c_j;
$$
moreover, if the $p_j$ [$p^c_j$] row vectors of $\mathring{A}'_j$, $j = 1, \dots, z$ [$\mathring{A}^c_j$, $j = 1, \dots, m$] constitute an orthonormal basis for $R(\mathring{Q}'_j)$, $j = 1, \dots, z$ [$R(\mathring{Q}^c_j)$, $j = 1, \dots, m$], then
$$
\mathring{\mathring{P}} = \begin{bmatrix} \mathring{A}'^{\top}_1 \cdots \mathring{A}'^{\top}_z & \mathring{A}^{c\top}_1 \cdots \mathring{A}^{c\top}_m \end{bmatrix}^{\top}
$$
is orthogonal, so $\mathring{\mathring{y}} = \mathring{\mathring{P}}\mathring{y}$ will have OBS. With $D(C_1 \cdots C_m)$ the blockwise diagonal matrix whose principal blocks are $C_1, \dots, C_m$, the variance-covariance matrices of $\mathring{\mathring{y}}$ will be
$$
\mathring{\mathring{V}}(\gamma) = D\!\left(\gamma_1 I_{p_1} \cdots \gamma_z I_{p_z} \; \gamma_1 I_{p^c_1} \cdots \gamma_m I_{p^c_m}\right).
$$
To see the lightening of the calculations we get using these models, we may point out that, with
$$
\mathring{\mathring{y}}_j = \mathring{A}'_j\, \mathring{y}, \quad j = 1, \dots, z, \qquad \mathring{\mathring{y}}^c_j = \mathring{A}^c_j\, \mathring{y}, \quad j = 1, \dots, m,
$$
the BQUE of the variance components will be
$$
\tilde{\gamma}_j = \frac{\bigl\|\mathring{\mathring{y}}^c_j\bigr\|^2}{p^c_j}, \qquad j = 1, \dots, m.
$$
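This construction can be exercised on the toy model, taking $P = I_n$ for simplicity so that $\mathring{y} = y$ and $\mathring{Q}'_j = Q'_j$. The sketch below (ours) stacks orthonormal bases of $R(Q'_1)$, $R(Q^c_1)$ and $R(Q^c_2)$ into a matrix `PP`, playing the role of $\mathring{\mathring{P}}$, checks that the transformed variance-covariance matrix is blockwise diagonal, and recovers the BQUE from squared norms.

```python
# DOBS construction for the toy model (illustrative; P = I, so ringed = unringed).
import numpy as np

def onb_rows(Q):
    """Rows = an orthonormal basis of R(Q), for a symmetric idempotent Q."""
    w, v = np.linalg.eigh(Q)
    return v[:, w > 0.5].T

Qp1, Qc1, pc1 = bque_pieces(Q1, X)                # Q'_1, Q^c_1; here p_1 = 1
_, Qc2, pc2 = bque_pieces(Q2, X)                  # Q'_2 = 0, so z = 1
PP = np.vstack([onb_rows(Qp1), onb_rows(Qc1), onb_rows(Qc2)])
assert np.allclose(PP @ PP.T, np.eye(8))          # PP is orthogonal

VV = PP @ V @ PP.T                                # should be D(g1 I_1, g1 I_3, g2 I_4)
assert np.allclose(VV, np.diag(np.diag(VV)))      # blockwise diagonal, as claimed

yy = PP @ y                                       # the DOBS observation vector
y_c1, y_c2 = yy[1:1 + pc1], yy[1 + pc1:]          # subvectors for Q^c_1, Q^c_2
print(y_c1 @ y_c1 / pc1, y_c2 @ y_c2 / pc2)       # BQUE via squared norms
```

The printed estimates coincide with those of the first sketch, since $\|\mathring{\mathring{y}}^c_j\|^2 = y^\top Q^c_j y$.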

Moreover, with $\mathring{\mathring{Q}}_j = \mathring{\mathring{P}}\, \mathring{Q}'_j\, \mathring{\mathring{P}}^\top$, $j = 1, \dots, z$, we have
$$
\sum_{j=1}^{z} \mathring{\mathring{Q}}_j = D\!\left(I_{p_1} \cdots I_{p_z} \; 0_{p^c_1 \times p^c_1} \cdots 0_{p^c_m \times p^c_m}\right) = \mathring{\mathring{U}}^\top \mathring{\mathring{U}},
$$
with
$$
\mathring{\mathring{U}}^\top = \begin{bmatrix} I_p \\ 0_{q \times p} \end{bmatrix},
$$
where $p = \sum_{j=1}^{z} p_j$ and $q = n - p$.

Now, given $\gamma$, the BLUE for $c^\top \mathring{\mathring{\mu}}$, with $\mathring{\mathring{\mu}} = \mathring{\mathring{P}}\mathring{\mu}$ the mean vector of the model with DOBS, will be, see [2], $\tilde{a}^\top \mathring{\mathring{y}}$, with
$$
\tilde{a} = \bigl(\mathring{\mathring{X}}{}^{\dagger}\bigr)^{\!\top} c + \sum_{\ell=1}^{n-k} u_\ell\, \alpha_\ell,
$$
where $\mathring{\mathring{X}}{}^{\dagger}$ is the Moore-Penrose inverse of $\mathring{\mathring{X}} = \mathring{\mathring{P}}\mathring{X}$, $k = \operatorname{rank}(\mathring{\mathring{X}})$, the $\alpha_1, \dots, \alpha_{n-k}$ constitute an orthonormal basis of the orthogonal complement of $R(\mathring{\mathring{X}})$, and $u_1, \dots, u_{n-k}$ are obtained by solving the system $W(\gamma)\, u = b(\gamma)$, where $W(\gamma) = [w_{h,h'}(\gamma)]$ with
$$
w_{h,h'}(\gamma) = \alpha_{h'}^\top\, \mathring{\mathring{V}}(\gamma)\, \alpha_h, \qquad h, h' = 1, \dots, n-k,
$$


and $b(\gamma)$ has the components
$$
b_h(\gamma) = -\alpha_h^\top\, \mathring{\mathring{V}}(\gamma) \bigl(\mathring{\mathring{X}}{}^{\dagger}\bigr)^{\!\top} c, \qquad h = 1, \dots, n-k.
$$

We point out that if $v(1)$ and $v(2)$ have subvectors $v_j(1)$ and $v_j(2)$ with $p_j$, $j = 1, \dots, z$ [$p^c_{j-z}$, $j = z+1, \dots, m+z$] components, we have
$$
v(1)^\top \mathring{\mathring{V}}(\gamma)\, v(2) = \sum_{j=1}^{z} \gamma_j\, v_j(1)^\top v_j(2) + \sum_{j=z+1}^{m+z} \gamma_{j-z}\, v_j(1)^\top v_j(2),
$$
which lightens obtaining the elements of the matrix $W(\gamma)$ and the components of $b(\gamma)$.
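Continuing the toy example, the recipe above can be sketched as follows (our illustration, not the authors' code); here $k = \operatorname{rank}(\mathring{\mathring{X}}) = 1$ and $c = (1)$, so $c^\top \beta$ is the overall mean.

```python
# BLUE of c^T beta in the toy DOBS model, following the recipe above.
import numpy as np

XX = PP @ X                                       # the DOBS design matrix
k = np.linalg.matrix_rank(XX)
c = np.array([1.0])

U_, s, Vt = np.linalg.svd(XX, full_matrices=True)
alphas = U_[:, k:]                                # orthonormal basis of R(XX)^perp

W = alphas.T @ VV @ alphas                        # w_{h,h'} = alpha_h^T VV alpha_h'
b = -alphas.T @ VV @ np.linalg.pinv(XX).T @ c     # the components b_h(gamma)
u = np.linalg.solve(W, b)
a = np.linalg.pinv(XX).T @ c + alphas @ u         # a~ = (XX^+)^T c + sum u_l alpha_l
print("BLUE of c^T beta:", a @ yy)                # estimates the true mean 2.0

# Blockwise shortcut for the bilinear forms: gamma_1 = 3 acts on the first
# p_1 + p^c_1 = 4 components, gamma_2 = 1 on the last p^c_2 = 4.
v1, v2 = alphas[:, 0], alphas[:, 1]
assert np.isclose(v1 @ VV @ v2, 3.0 * v1[:4] @ v2[:4] + 1.0 * v1[4:] @ v2[4:])
```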

Lastly, we point out that, applying the Gram-Schmidt orthogonalization technique to the matrix
$$
\bigl[\, \mathring{\mathring{X}} \;\vdots\; I_n \,\bigr],
$$
we obtain first the $\eta_1, \dots, \eta_k$, which constitute an orthonormal basis for $R(\mathring{\mathring{X}})$, and then the $\alpha_1, \dots, \alpha_{n-k}$.
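A naive Gram-Schmidt pass over the columns of $[\mathring{\mathring{X}} \;\vdots\; I_n]$ can be sketched as follows (ours; for serious numerical work a pivoted QR factorization would be preferable):

```python
# Gram-Schmidt over [XX : I_n]: the first k vectors kept span R(XX) (the eta's),
# the remaining n - k are the alpha's (illustrative, not numerically robust).
import numpy as np

def gram_schmidt(cols, tol=1e-10):
    basis = []
    for v in cols.T:
        w = v - sum((u @ v) * u for u in basis)   # remove already-spanned part
        if np.linalg.norm(w) > tol:               # keep only new directions
            basis.append(w / np.linalg.norm(w))
    return np.array(basis).T                      # columns are orthonormal

n, k = XX.shape[0], np.linalg.matrix_rank(XX)
B = gram_schmidt(np.hstack([XX, np.eye(n)]))      # n x n, orthogonal
etas, alpha_basis = B[:, :k], B[:, k:]            # eta's first, then alpha's
assert np.allclose(alpha_basis.T @ XX, 0)         # alpha's are orthogonal to R(XX)
```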

FINAL COMMENT

When we replace $y$ by $\mathring{y}$, the family of statistics obtained from $y$ is the same as the one obtained from $\mathring{y}$. Namely, we can replace $y$ by $\mathring{\mathring{y}}$ and obtain a more tractable observation vector, for which we have the same optimal results as before, but with lighter expressions.

ACKNOWLEDGEMENTS

This work was partially supported by CMA / FCT / UNL, under the project UID/MAT/00297/2013.

REFERENCES

1. Carvalho, F., Mexia, J. T., Zmyślony, R. (2016). Best Quadratic Unbiased Estimators for Variance Components in Models with Orthogonal Block Structure. In S. Puntanen, S. E. Ahmed and F. Carvalho (Eds.), Proceedings of the International Workshop on Matrices and Statistics (IWMS'2016), Funchal, Madeira, Portugal. Springer.
2. Ferreira, S. S., Ferreira, D., Nunes, C. (2012). Linear and quadratic sufficiency and commutativity. AIP Conf. Proc. 1479, pp. 1694-1697.
3. Mueller, J. (1987). Sufficiency and completeness in the linear model. Journal of Multivariate Analysis, 21(2), pp. 312-323.
4. Nelder, J. A. (1965a). The Analysis of Randomized Experiments with Orthogonal Block Structure. I. Block Structure and the Null Analysis of Variance. Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, 283(1393), pp. 147-162.
5. Nelder, J. A. (1965b). The Analysis of Randomized Experiments with Orthogonal Block Structure. II. Treatment Structure and the General Analysis of Variance. Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, 283(1393), pp. 163-178.
6. Zmyślony, R. (1978). A characterization of Best Linear Unbiased Estimators in the general linear model. Lecture Notes in Statistics 2, pp. 365-373.
