IEEE Transactions on Signal Processing




Equirotational Stack Parameterization in Subspace Estimation and Tracking

Peter Strobach, Senior Member, IEEE

Abstract—In this paper, we study the following "equirotational" stack (ES) parameterization of subspaces:

    Abar = [ A^T  (A Phi)^T  (A Phi^2)^T  ...  (A Phi^(L-1))^T ]^T

where A is the core basis matrix, and Phi is the subrotor. The fact that successive submatrices in the basis stack are just identically rotated versions of each other is usually a direct consequence of uniform sampling. Uniformly sampled complex exponential sequences can always be represented perfectly in subspaces of this kind. Early notions of ES subspace parameterization appear in array processing, particularly in direction finding using multiple invariance ESPRIT and regular array geometries (uniform spatial sampling). Another potential application area is spatiotemporal array data analysis. Even an application of ES subspace parameterization in time series analysis and adaptive filtering is not unreasonable. We present a class of fast algorithms for total least squares (TLS) estimation and tracking of the parameters A and Phi. Using these new algorithms, signal subspaces can be estimated with a much higher accuracy, provided only that the subspaces of the given signals are ES parameterizable. This is always the case for uniformly sampled narrowband signals. The achievable gain in estimated subspace SNR is then 10 log10(4 rho) dB, where rho denotes the overmodeling factor, over conventional (unparameterized) subspace tracking, in which the potential ES structure of the underlying data cannot be exploited. Consequently, we make the point that our algorithms offer a significant performance gain in all major application areas with uniformly sampled narrowband signals in noise over the previously used conventional (unparameterized) subspace estimators and trackers.

Index Terms—Array processing, ESPRIT, parameter estimation, subspace tracking.



I. INTRODUCTION

THE ESTIMATION of signal subspaces is often a major initial step in the processing of noise-corrupted narrowband signals. A great interest has grown around adaptive subspace estimators, the so-called subspace trackers. A survey on subspace tracking can be found in [2]. More recent results are described, compared, and discussed in [3]–[5] and the references listed therein.

Manuscript received October 1, 1998; revised June 22, 1999. The associate editor coordinating the review of this paper and approving it for publication was Dr. José C. Principe. The author was with the Department of Mathematics, University of Passau, Passau, Germany. He is now with Fachhochschule Furtwangen, Furtwangen, Germany. Publisher Item Identifier S 1053-587X(00)01557-9.

These algorithms have often been classified according to their computational complexity into O(n^2 r), O(n r^2), or O(n r) techniques, where n denotes the input vector dimension, and r denotes the number of dominant characteristic (singular or eigen-) values and vectors to be tracked. From a more advanced point of view, all of these previously developed and studied algorithms belong to the class of "unparameterized" (or unstructured) subspace trackers, characterized by the property that they represent the subspace of interest simply by a set of r linearly independent vectors of dimension n. The specific requirements of multidimensional signal processing, particularly direction finding using multiple invariance (MI) ESPRIT [1], [6], [7], have recently motivated the development of a completely new category of subspace trackers, where the subspace basis consists of a set of L "stacked" submatrices of the form

    Abar = [ A^T  (A Phi_1)^T  (A Phi_2)^T  ...  (A Phi_{L-1})^T ]^T    (1)

where A is the column-orthonormal core basis matrix, and Phi_1, Phi_2, ..., Phi_{L-1} is the set of complex subrotors of dimension r x r. This can be called a "stack" parameterization of the subspace basis, with the characteristic property that the subbases represented by the submatrices in Abar are just rotated versions of each other. A class of TLS subspace trackers based on this stack parameterization has been developed in [8], where an application of these subspace trackers in an adaptive MI-ESPRIT algorithm has also been described. In that specific application, we were interested in the subrotors. The complex eigenvalues of these subrotors represent the desired phase parameters. Hence, the subrotors play a key role in multidimensional direction finding, harmonic retrieval, and frequency estimation. The property that the subbases in the stack are just rotated versions of each other is equally important. It is exploited in our TLS-based algorithms, which, according to the TLS principle, superimpose the appropriately rotated submatrices to obtain a more accurate estimate of the core basis matrix A. We show that this improves the estimated subspace SNR by 10 log10(rho) dB over conventional unstructured subspace tracking. This SNR improvement can be quite significant when the overmodeling factor rho is high.
Therefore, the question arises as to whether the concept of parameterized subspace tracking using the TLS principle can be applied to a broader class of problems. Most interestingly, it

1053-587X/00$10.00 © 2000 IEEE


turns out that stack parameterization in combination with TLS estimation of subspaces is applicable in all areas where noise-corrupted narrowband signals must be processed. In the case of uniform sampling, we obtain an even more constrained parameterization, where the subrotors form a series of matrix powers

    Phi_k = Phi^k,    1 <= k <= L-1.    (2)

This parameterization is fully determined by the core basis matrix A and a single subrotor Phi. We call this an equirotational stack (ES) parameterization of a subspace. Uniformly sampled complex exponential sequences can always be represented perfectly in subspaces of this kind. The ES subspace parameterization principle is, hence, of utmost importance in the analysis of narrowband uniformly sampled complex signals.
In this paper, we develop a special class of subspace estimators and trackers for ES parameterizable subspaces. Quite naturally, these algorithms are founded on some major theoretical results described in [8]. Note, however, that we must extend the basic theory of [8] significantly to accommodate the rich structure of an ES parameterizable subspace in an optimal sense. The ES assumption leads to a form of "nested" (two-fold) TLS approximation on the estimator side. In this way, our new ES-TLS subspace trackers achieve an SNR performance gain of 10 log10(4 rho) dB over conventional unstructured (or unparameterized) subspace tracking. Hence, another 6 dB in SNR performance is gained in the step from MI parameterization using L-1 independent subrotors to the more constrained ES parameterization using only a single subrotor Phi.
This paper is organized as follows. In Section II, we introduce the ES data model and develop the basic theoretical framework for TLS estimation of the ES parameters. In Section III, we develop two fast algorithms for sequential ES parameter tracking. Experimental results and comparisons are shown in Section IV. In Section V, we summarize the conclusions.

II. PRINCIPLES



In this section, we develop the necessary theoretical framework for equirotational stack (ES) parameterization of subspaces. This comprises an ES data model and two typical examples of ES parameterizable signals. Finally, we introduce a nested two-fold TLS estimator for ES subspace parameter estimation. This estimator concept forms the basis for the sequential ES-TLS subspace trackers described in the following section.
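Before turning to the formal development, the ES principle of (2) can be previewed in a few lines of numpy. The sketch below is our own illustration, not an algorithm from this paper: it builds an ES stack from an assumed core basis A and subrotor Phi, and then recovers Phi from the stack by ordinary least squares over all adjacent submatrix pairs, a simple stand-in for the TLS estimator developed in this section. All names and parameter values are ours.

```python
import numpy as np

def es_stack(A, Phi, L):
    """Build the ES basis stack [A; A Phi; ...; A Phi^(L-1)] from an
    m x r core basis A and an r x r subrotor Phi, cf. (2)."""
    blocks, B = [], A
    for _ in range(L):
        blocks.append(B)
        B = B @ Phi            # each submatrix is the rotated predecessor
    return np.vstack(blocks)

def estimate_subrotor(Ubar, L):
    """Recover the subrotor from an ES stack: the top L-1 blocks times
    Phi equal the bottom L-1 blocks, so a single least-squares solve
    uses all adjacent submatrix pairs (an LS stand-in for TLS)."""
    m = Ubar.shape[0] // L
    Phi, *_ = np.linalg.lstsq(Ubar[:-m], Ubar[m:], rcond=None)
    return Phi

# noise-free sanity check with arbitrary illustrative dimensions
rng = np.random.default_rng(0)
m, r, L = 3, 2, 5
A = rng.standard_normal((m, r))
ang = 0.4
Phi = np.array([[np.cos(ang), -np.sin(ang)],
                [np.sin(ang),  np.cos(ang)]])
Ubar = es_stack(A, Phi, L)
print(np.allclose(estimate_subrotor(Ubar, L), Phi))  # True
```

In the noise-free case the shifted stacks are exactly rotated copies of each other, so the least-squares solve is exact; the point of the TLS machinery below is to retain near-optimal accuracy when noise perturbs both sides of this relation.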


A. ES Data Model

In what follows, we assume that a signal snapshot vector x(t) of dimension n satisfies the following ES data model:

    x(t) = [ H^T  (H Theta)^T  (H Theta^2)^T  ...  (H Theta^(L-1))^T ]^T s(t) + w(t)    (3)

where
    H       signal steering matrix;
    Theta   diagonal-phase parameter matrix;
    s(t)    vector of incoherent sources;
    w(t)    realization of white noise.

The above ES data model is much less restrictive than expected. It can be applied in all cases where stationary harmonic and uniformly sampled sequences must be processed. Typical application areas are the following.
1) Spatiotemporal Uniform Linear Array Data: Assume that we have given a sequence of sampled output vectors y(t) of the m sensors in a uniform (equispaced) linear array, stacked as

    x(t) = [ y^T(t)  y^T(t-1)  ...  y^T(t-L+1) ]^T.    (4)

It is readily verified that this data satisfies the ES model (3). See also [8]. Basically, the same configuration can be used for two-dimensional (2-D) uniform linear array data [1]. In these array data cases, the subvector dimension m is often (but not necessarily) determined by the number of sensors in a subarray.
2) Time Series Data: Less evident is the stack organization for time series data. Consider a time series with samples

    x(t), x(t-1), x(t-2), ...    (5)

Subsequent samples may be stacked in a snapshot vector of dimension n. Again, this vector may be subdivided in L subvectors of dimension m. Note that besides a standard block segmentation, alternative strategies such as interleaved and overlapped subvector constructions can be applied. In time series analysis, there usually exists no physical or technical side condition that determines the number L of subdivisions in the stack. Nevertheless, the overall noise suppression capability, or the achievable subspace SNR, of a subspace tracker based on ES-TLS subspace estimation will depend crucially on the appropriate choice of the dimension parameters m and L as a function of the subspace dimension r. Toward the end of this section, we develop some rules for optimizing these dimension parameters.

B. ES-TLS Subspace Estimation: Basic Results

We now develop the basic theory underlying the TLS estimation of the ES parameters A and Phi from a given ES parameterizable data set. For this purpose, introduce the covariance matrix

    C = E{ x(t) x^H(t) }.    (6)

Let U_s and Lambda_s denote the r dominant eigenvectors and eigenvalues of C, respectively. Define the partition

    U_s = [ U_1^T  U_2^T  ...  U_L^T ]^T.    (7)

In what follows, we assume that no noise is present in the data, i.e., w(t) = 0. In this case, U_s spans a subspace in



which all data vectors can be represented perfectly. The following rotational invariance relations hold:

    U_2 = U_1 Psi_1,  U_3 = U_1 Psi_2,  ...,  U_L = U_1 Psi_{L-1}    (8)

where Psi_1, Psi_2, ..., Psi_{L-1} are subspace rotors. We have Psi_k = Psi^k and Psi = Q Theta Q^{-1}, where the latter can be interpreted as a diagonal decomposition. Our goal is the estimation of A and Phi from a given unstructured or unparameterized subspace basis U_s. Later, we develop algorithms where U_s is not even computed explicitly. This leads to a fast ES-TLS subspace tracker. In addition, note that we may compute Theta as the matrix of complex eigenvalues of Psi. This plays a role in modal parameter estimation and is discussed in more detail in [8].
The TLS principle can now be applied. For this purpose, we recall that the L-1 pairs of subsequent submatrices in the stack are rotated versions of each other. Moreover, pairs of submatrices that lie more than one position apart in the stack are also rotated versions of each other. This rich structure of ES subspaces leads to the following approach. The rotated submatrices are collected in a single matrix (9). Clearly, this matrix has only rank r and is, hence, represented perfectly by a truncated SVD of the form (10), where V is the column-orthonormal matrix of left singular vectors, Sigma is the diagonal matrix of singular values, and W^H is the Hermitian transpose of the matrix of right singular vectors, partitioned in submatrices, each of dimension r x r. Note that both factors must span the same subspace. Hence, there exists a subspace rotor so that (11) holds, i.e., the two bases are just rotated versions of each other. Consider next the column concatenation (12). It is evident that this matrix also has only rank r and is, hence, represented perfectly by a truncated SVD of the form (13). Set (14). This yields (15). We deduce (16a) and (16b). A first TLS approximant is established in (17). It is evident from (11)–(13) that in the noise-free data case, the approximant is equal to the true matrix. Hence, we recall (9) and (10) and use (17) to obtain (18). This constitutes a second TLS approximation step. Interestingly, the first approximant (17) is incorporated in this second approximation step. We call this a "nested" TLS approximation structure. A comparison of terms in (18) yields (19). Define (20) to find that (21). Observe that hereby, we obtain separate expressions for each power of Phi. Table I summarizes the necessary steps in this nested TLS routine for ES parameter estimation. Note that in the table, we have taken into account that in the presence of noise, (10) and (13) describe rank-r approximants of the respective matrices. In addition, the computed ES structure in (8) is only an approximant to the true structure when noise is present.

C. Noise Suppression Capabilities

The basic algorithm structure, as displayed in Table I, allows some meaningful interpretations regarding the expected noise suppression capabilities. Additionally, we shall develop some rules for determining the dimension parameters m and L in case these parameters can be chosen arbitrarily in a given application.
Noise is suppressed in the algorithm through the rank-r approximations (10) and (13). Assuming multiple noise eigenvalues, i.e., a perfectly flat noise floor level, the noise in a low-rank approximation is suppressed by a factor equal to the ratio of the number of singular values of the complete SVD and the number of singular values of the truncated SVD. Consider approximation (13). The complete SVD of the concatenated matrix in (12) comprises 2r singular values. The truncated SVD comprises r singular values. Hence, this approximation step reduces the noise power by a fixed factor of 2. The noise suppression factor obtained from approximation (10), however, is a function of the dimensions of the matrix in (9): the complete SVD comprises as many singular values as the smaller of the two matrix dimensions, and the noise suppression factor equals this number divided by r. The overall maximum noise suppression factor that can be achieved by the algorithm is the product of the two factors. If possible, the dimension parameters m and L should be chosen so that the matrix in (9) becomes a square (or almost square) matrix. In the case of a perfectly square matrix, the algorithm reaches its maximum performance for a given fixed input vector length n.

III. FAST ALGORITHMS FOR ES SUBSPACE ESTIMATION AND TRACKING

In this section, we develop fast realizations of the basic algorithm listed in Table I. Subspace iteration, i.e., orthogonal iteration and bi-iteration concepts [4], [5], are employed for this purpose. As a final result, we obtain two fast subspace trackers based on ES parameterization of the estimated subspace. In the derivation of these algorithms, we make heavy use of some theoretical concepts obtained in [8].

A. Estimation of an Unstructured Basis

A primary task is the estimation of an unstructured basis from a sequence of data vectors of dimension n. Suppose that we have given an estimate C of the data covariance matrix according to (6). A powerful classical method for computing approximations to an unstructured set of r dominant basis vectors is the following orthogonal iteration:

    Q(0): random, orthonormal
    iterate until convergence:
        B(i) = C Q(i-1)
        B(i) = Q(i) R(i)    (QR-factorization).    (22)

Details about orthogonal iteration can be found in [4] and [8] and in the classical numerical analysis literature [9] and [10].

B. Computing Truncated SVD's

Once an estimate of the basis is available, the algorithm in Table I must be computed. This essentially requires the computation of the truncated SVD's (10) and (13). A powerful classical method for computing truncated SVD's is the following bi-iteration [5], [11]. Suppose we have given the matrix of (9). We wish to compute its truncated SVD as defined in (10). For this purpose, the following bi-iteration is established:

    Q_A(0): random, orthonormal
    iterate until convergence:
        B(i) = [matrix in (9)]^H Q_A(i-1)
        B(i) = Q_B(i) R_B(i)    (QR-factorization)
        A(i) = [matrix in (9)] Q_B(i)
        A(i) = Q_A(i) R_A(i)    (QR-factorization).    (23)

This iteration produces the truncated SVD in the form (24), where the converged iterates identify the factors (25a)-(25c). In the same fashion, we define a bi-iteration for the truncated SVD of the concatenated matrix in (12), with its own pair of alternating QR-factorizations (26).
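The bi-iteration above can be sketched in numpy as follows. This is our own illustration under assumed names (the paper's version additionally reuses the iterated quantities inside the tracker); the check against the full SVD uses a test matrix with a deliberately strong rank-2 signal part so that 50 iterations converge.

```python
import numpy as np

def bi_iteration_svd(G, r, n_iter=50):
    """Bi-iteration in the spirit of (23): alternating multiplications
    and QR-factorizations converging to a rank-r truncated SVD of G,
    G ~ QA @ RA @ QB^H with RA nearly diagonal."""
    rng = np.random.default_rng(0)
    QA, _ = np.linalg.qr(rng.standard_normal((G.shape[0], r)))
    for _ in range(n_iter):
        B = G.conj().T @ QA
        QB, RB = np.linalg.qr(B)    # QR-factorization (right basis)
        A = G @ QB
        QA, RA = np.linalg.qr(A)    # QR-factorization (left basis)
    return QA, RA, QB

# check against the full SVD on a matrix with a clear rank-2 signal part
rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.standard_normal((30, 2)))
V0, _ = np.linalg.qr(rng.standard_normal((8, 2)))
G = (5.0 * np.outer(U0[:, 0], V0[:, 0])
     + 3.0 * np.outer(U0[:, 1], V0[:, 1])
     + 0.01 * rng.standard_normal((30, 8)))
QA, RA, QB = bi_iteration_svd(G, r=2)
s = np.linalg.svd(G, compute_uv=False)
print(np.allclose(np.sort(np.abs(np.diag(RA)))[::-1], s[:2], atol=1e-8))  # True
```

At convergence, the magnitudes of the diagonal entries of RA are the dominant singular values, and QA, QB span the dominant left and right singular subspaces, which is exactly what the truncated SVD's (10) and (13) require.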



This iteration produces the truncated SVD of the matrix in (12) in the form (27), where the converged iterates identify the factors (28a)-(28c). The triangular factor obtained by the QR-factorization in (23) is used in (25a) to construct the left singular matrix, as in (29). According to the nested TLS structure described in Table I, we use relations (25a)-(25c) and (28a)-(28c) to express the ES parameters A and Phi in terms of the iterated matrices in (23) and (26), as in (30)-(32).
Table II is a nested TLS algorithm for estimation of an ES subspace from a given data covariance matrix C. It is important to realize that this algorithm is essentially an ES-structured orthogonal iteration. The basic difference to the unstructured classical orthogonal iteration of (22) is that the stack matrix (33) now acts as a structured iterated basis, whereas in the classical algorithm, one uses an unstructured iterated basis Q. The computation of the iterated ES-structured basis requires a nested TLS approximation step, as summarized in Table I. This is implemented in terms of the truncated bi-iteration SVD's (23) and (26). These iterations are incorporated in the overall ES-structured orthogonal iteration of this estimator. Finally, note that the algorithm in Table II requires an initialization of the structured iterated basis. As an alternative to a completely random initialization, a few iterations of the unstructured orthogonal iteration (22) are computed, and the structured basis is initialized with the unstructured basis estimate of (22).

C. ES-TLS Subspace Tracking—Algorithm 1

We are now in a position to derive fully sequential algorithms for ES-TLS subspace tracking. In the sequential algorithms, we assume that the data covariance matrix is updated in time according to the following rule:

    C(t) = alpha C(t-1) + x(t) x^H(t)    (34)

where alpha is a positive exponential forgetting factor less than 1. The underlying idea in the sequential algorithms is that in each time step, a single iteration of the batch algorithm of Table II will be sufficient to track the ES subspace, as desired. This assumption is quite reasonable because for an alpha close to a value of 1, which is the usual case in practice, the perturbation due to updating will be sufficiently small. A direct implementation in the form of a tracker, however, is quite uninteresting because the explicit updating of a covariance matrix of dimension n x n is computationally demanding. We develop fast algorithms that circumvent this difficulty. For this purpose, we employ the theory of fast structured subspace updating presented in [8]. There, it was shown that the propagation of structured subspaces in time can be described using the model (35), where (36) holds and a "gap" matrix appears. Clearly, the first term in this model is the component of the new basis that can be represented perfectly in the subspace spanned by the old basis, and the gap matrix spans an orthogonal innovations subspace. Concepts of this kind have been studied in the context of fast subspace tracking using subspace iterations [4], [5]. The definition of the ES basis matrix (33) can be used to express this model more compactly as (37), where (38) holds. A fast time updating recursion can be obtained by replacing the iteration index by the discrete time in (22). Additionally, the covariance matrix becomes a function of time and is updated according to (34). This yields (39). Define





where we used the input data stack segmentation defined in (3) and the definition that follows directly from (33). This yields the update expressions (40) and (41). Finally, apply the time updating model of ES subspaces (35) to this result. This gives (42). The annoying term in this expression is the residual term, because it still contains the full covariance matrix. We can neglect this term without affecting the update significantly, for the following reasons.
1) If alpha has a value close to 1, we can assume that the residual term is small, because the subspace innovation caused by time updating will be small.
2) Additionally, we observe that this residual term with the full covariance matrix does not affect the stability of the overall recursion. This stability is always guaranteed because the gap matrix is a norm-reducing (contractive) rotor by definition. All of its eigenvalues must be less than or equal to 1 in magnitude. Hence, taking into account that we have alpha < 1 in practice, stability is always guaranteed, even with a simplified update that neglects the term.
Finally, in this fast time updating recursion, we replace the exact stacked basis by its TLS approximant. According to (8), (32), and (33), we obtain (43) and (44). The fast time updating recursion finally becomes (45).
Hence, the ES-TLS subspace tracker Algorithm 1 comprises the recursions (40) and (45) for time updating. The recursions for constructing the structured ES subspace estimate follow, as listed in Table II. These recursions are computed once in each time step. Finally, we compute the updated unstructured basis according to (38) and the characteristic "gap" matrix, as defined in (37). The new stack is computed according to (44), and the algorithm is complete.
It is recommended that this subspace tracker is started from a batch estimate. One way to achieve this is the batch algorithm of Table II. This method, however, requires a data covariance matrix as input and is therefore relatively time consuming. As an alternative, we can simply collect a few data snapshot vectors in the columns of an auxiliary matrix. We compute an estimate of the left singular vectors of that matrix using a few batch bi-iterations. These singular vectors represent a coarse initial estimate of an unstructured basis of the subspace to be tracked. This unstructured basis estimate is then used to form an initial stack. We then compute a few sweeps of the bi-iterations (23) and (26) with this stack. This yields the desired initial estimates of the iterated matrices required to start the tracker. Additionally, we must compute (30)-(32) to obtain an initial guess of A and Phi. Throughout all our experiments with trackers of this class, we used this initialization.

D. ES-TLS Subspace Tracking—Algorithm 2

A quick inspection reveals that Algorithm 1 has a dominant arithmetic complexity per time update that is governed by the explicit time updating according to (45). We can further streamline the algorithm and establish a variant in which this explicit computation is completely circumvented. This requires some algebra. We restrict ourselves to a verbal explanation of the necessary steps and summarize the results to save space.
1) The updated matrix in (45) must be expressed in terms of the quantities defined in (44).
2) Equation (45) is then merged with (8) and (9); i.e., the update is expressed in terms of the previous iterates and an innovations term.
3) This representation is then substituted into the bi-iteration (23).
Hereby, the A-matrices are completely eliminated from the recursion. They are replaced by direct update recursions for the auxiliary bi-iteration matrices. The following expressions are obtained: (46), where (47) holds. In a similar fashion, we obtain (48) and (49).
Observe that the computation of these expressions is much less demanding than expected because most of the operations involve matrices of dimension r only. Hence, the dominant arithmetic complexity of the algorithm is reduced accordingly per time update. Note also that Algorithm 1 and Algorithm 2 are theoretically equivalent because in the derivation of Algorithm 2, there is no additional approximation involved. Table III is a complete quasicode listing of Algorithm 2.

IV. EXPERIMENTAL VERIFICATION

The described ES-TLS estimation algorithms have been tested experimentally. We show results of a series of computer experiments, where Algorithm 2 as listed in Table III was applied to synthetic sets of uniform linear array data generated by the spatiotemporal uniform linear array data model described in Section II-A. In a first experiment, we assumed a stationary scenario with two incoherent sources with fixed phase parameters. In the




array, we assumed equispaced sensors. We processed 4000 successive snapshot vectors. Test data sets with superimposed white Gaussian noise were generated in the range from -10 to +10 dB in steps of 5 dB. The following algorithms were applied on these data sets.
1) Karasalo's classical unparameterized subspace tracker. A quasicode listing of this algorithm is available in [5].
2) The MI-TLS subspace tracker of [8, Tab. II]. This subspace tracker is also based on a stack parameterization, but the underlying assumption is that the L-1 subrotors in the model are all independent. Recall (1) in Section I.
3) The ES-TLS subspace tracker of Table III in this paper. This subspace tracker assumes that subsequent subrotations are all identical.

The algorithm rank parameter was set to the number of sources. Each experiment has been carried out using five trial runs with identical data and independent superimposed noise realizations. The exponential forgetting factor in the algorithms was set to a value close to 1. Each run comprises 4000 time steps. The criterion used for comparisons is the so-called subspace SNR (SSNR), as described in the Appendix. This criterion can be expressed in decibels and comprises all principal angles. It reflects the overall achieved noise suppression and the quality of the estimated subspace better than the previously used dominant principal angle criterion. The reference subspace in the computation of the SSNR is spanned by the dominant eigenvectors of the signal-only covariance matrix estimated using (34) with signal-only snapshot vectors and a compatible forgetting factor.
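A minimal sketch of this experimental setup follows, covering the spatiotemporal data model (3)/(4) and the exponentially forgotten covariance update (34). All dimensions, phase parameters, the SNR, and the forgetting factor below are our own illustrative choices; the paper's exact values are not recoverable from this copy.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters only -- not those of the paper's experiments
m, L, T = 4, 4, 400                     # sensors, stack depth, snapshots
mus, omegas = [0.5, 1.1], [0.3, 0.8]    # spatial / temporal source phases
snr_db, alpha = 10.0, 0.99              # data SNR and forgetting factor

H = np.exp(1j * np.outer(np.arange(m), mus))              # steering matrix
S = np.exp(1j * np.outer(np.arange(T + L - 1), omegas))   # unit-power sources
Y = S @ H.T                                               # array outputs
sigma = 10.0 ** (-snr_db / 20.0)
Y = Y + sigma / np.sqrt(2.0) * (rng.standard_normal(Y.shape)
                                + 1j * rng.standard_normal(Y.shape))

n = m * L
C = np.zeros((n, n), complex)
for t in range(T):
    x = Y[t:t + L][::-1].ravel()             # spatiotemporal snapshot, cf. (4)
    C = alpha * C + np.outer(x, x.conj())    # forgetting update, cf. (34)

# the dominant eigenvectors of C span the (here two-dimensional) signal subspace
w, V = np.linalg.eigh(C)
U_s = V[:, -2:]
print(U_s.shape)  # (16, 2)
```

A tracker replaces the explicit eigendecomposition of the n x n matrix C with the fast recursions of Table III, but this batch computation defines the reference subspace against which the SSNR is measured.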



Fig. 1. SSNR adaptation characteristics of Karasalo, MI-TLS, and ES-TLS algorithms for a data SNR of 10 dB. Five independent trial runs displayed for each algorithm.

Fig. 3. SSNR adaptation characteristics of Karasalo, MI-TLS, and ES-TLS algorithms for a data SNR of -10 dB. Five independent trial runs displayed for each algorithm.

Fig. 2. SSNR adaptation characteristics of Karasalo, MI-TLS, and ES-TLS algorithms for a data SNR of 0 dB. Five independent trial runs displayed for each algorithm.

Fig. 4. Average steady-state SSNR as a function of the data SNR for Karasalo, MI-TLS, and ES-TLS algorithms.

Figs. 1–3 show the start-up and convergence characteristics of the three algorithms under test in terms of the computed SSNR. Five independent trial runs were plotted in one diagram. It is seen that MI-TLS already performs significantly better than Karasalo's classical unparameterized subspace tracker. ES-TLS produces the best SSNR results and outperforms both the Karasalo and MI-TLS trackers, as expected. For a more global comparison, we computed the average SSNR over five trial runs after initial adaptation for each of the three algorithms for data SNR's ranging from -10 to +10 dB in steps of 5 dB. The resulting average SSNR values as a function of the data SNR for the three algorithms under comparison are plotted in Fig. 4. It is seen that we measured a constant performance gain of approximately 6.8 dB achieved by the MI-TLS technique over unparameterized subspace tracking. An additional gain of approximately 4.4 dB is achieved by the use of ES-TLS subspace tracking. Thus, the overall gain in performance achieved by ES-TLS over unparameterized subspace tracking is approximately 12.2 dB in this example. Let us compare this value with the theoretical SSNR improvement that can be expected in this case. According to the considerations


made in Section II-C, the upper bound on the SSNR improvement of ES-TLS over unparameterized subspace tracking is approximately 14.77 dB, because we have here a slightly overquadratic matrix in the first approximation step. The important observation is that although the stack dimensions are relatively small, we still achieved a remarkable performance gain by the application of the ES-TLS technique over standard unparameterized subspace tracking. The loss of 2.57 dB in SSNR performance relative to the theoretical maximum can be explained by the inevitable correlation of the noise in finite records and the finite integration time caused by exponential forgetting.
In a second experiment, the transient characteristics of Algorithm 2 (Table III) are studied using nonstationary data as input. For this purpose, the ES data model (3) and (4) was operated with time-varying parameters. The trajectories of these parameters are displayed in Fig. 5. It is seen that we have three stationary segments and smooth transitions between these segments. The exponential forgetting factor was reduced for fast tracking. All other parameters remained unchanged. Figs. 6–8 show the tracking characteristics of the algorithm in terms of the SSNR performance for a decreasing data SNR of +10, 0, and -10 dB. Again,


Fig. 5. Parameter trajectories in the nonstationary data case.

Fig. 6. SSNR tracking characteristics of Karasalo, MI-TLS, and ES-TLS subspace trackers. The data SNR was 10 dB. Five independent trial runs displayed for each algorithm.


the performance is compared with classical unstructured subspace tracking and with the less constrained MI-TLS subspace tracking. It is seen that the tracking characteristics are not affected by structuring the subspace. All algorithms under test show the expected loss in SSNR performance in the transient regions where the model assumptions are violated. The speed of adaptation on the next stationary data segment is entirely a function of the exponential forgetting factor alpha. As a consequence of a reduced exponential forgetting factor, all three algorithms under comparison show an equal loss in SSNR performance. There is a trade-off between adaptation speed and asymptotic SSNR performance as a function of alpha. The basic characteristics of the estimators are largely independent of the data SNR.

V. CONCLUSIONS

It has been shown that stack parameterization of basis matrices and the associated TLS techniques can improve the accuracy of estimated subspaces significantly. A special case of interest occurs with uniformly sampled, narrowband signals, which is a most frequent application area of subspace tracking. We have shown that subspaces for signals of this class are amenable to an "equirotational stack" (ES) parameterization,


Fig. 7. SSNR tracking characteristics of Karasalo, MI-TLS, and ES-TLS subspace trackers. The data SNR was 0 dB. Five independent trial runs displayed for each algorithm.

Fig. 8. SSNR tracking characteristics of Karasalo, MI-TLS, and ES-TLS subspace trackers. The data SNR was -10 dB. Five independent trial runs displayed for each algorithm.


where the basis matrix is represented by a stack of submatrices that are just identically rotated versions of each other. This property of the uniformly sampled narrowband signal subspace has been exploited in the development of a special class of TLS subspace estimators and trackers for maximum noise suppression. The fastest of these trackers attains the lowest principal complexity per time update that can be achieved in a problem of this kind. Bounds on the achievable SNR improvement have also been developed. Computer experiments have drawn a realistic picture of the operation and performance of the algorithms. Comparisons with unparameterized subspace tracking and with MI subspace tracking with independent subrotors have shed some light on the great potential of the ES-TLS subspace tracking concept over classical unparameterized subspace tracking.

APPENDIX

A problem that is often encountered in subspace estimation and tracking is the comparison of true and estimated subspaces. Let U and V denote orthonormal basis sets of true and estimated



subspaces of dimension r. The following criteria should be used for comparisons.
1) Subspace SNR: Let

    S = U U^H V,    E = V - S.    (50)

The subspace SNR is then defined as

    SSNR = ||S||_F^2 / ||E||_F^2.    (51)

2) Principal Angles: A comparison of subspaces is often made in terms of principal angles or some dominant principal angles. Denote

    U^H V V^H U = Q diag(lambda_1, ..., lambda_r) Q^H    (52)

the EVD of U^H V V^H U. The square roots of the eigenvalues

    cos(theta_k) = sqrt(lambda_k),    1 <= k <= r    (53)

are the cosines of the principal angles. Finally, note that

    SSNR = (sum_k cos^2(theta_k)) / (r - sum_k cos^2(theta_k))    (54)

and hence, it is seen that the SSNR criterion comprises all principal angles and is therefore a more general criterion.
We will provide a meaningful interpretation of the SSNR criterion. Consider the orthogonal decomposition of V with respect to the subspace spanned by the column vectors of U:

    V = U U^H V + E.    (55)

In this expression, U U^H V denotes the component of V that can be represented in the true subspace spanned by U. Additionally, we have U^H E = 0. Hence, E represents an orthogonal error subspace. Consequently, the SSNR criterion is defined as the ratio of the squared Frobenius norms of the signal and noise components in the estimated subspace

    SSNR = ||U U^H V||_F^2 / ||E||_F^2.    (56)
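The SSNR and principal-angle criteria above translate directly into a few lines of numpy. The bases below are arbitrary illustrative subspaces (names and dimensions are ours); the check confirms the identity (54) relating the SSNR to the principal angles.

```python
import numpy as np

def ssnr(U, V):
    """Subspace SNR, cf. (51)/(56): ratio of the squared Frobenius norms
    of the components of V inside and orthogonal to span(U)."""
    S = U @ (U.conj().T @ V)        # signal component of V in span(U)
    E = V - S                       # orthogonal error component
    return np.linalg.norm(S, 'fro') ** 2 / np.linalg.norm(E, 'fro') ** 2

def principal_cosines(U, V):
    """Cosines of the principal angles between span(U) and span(V),
    cf. (52)/(53); computed as the singular values of U^H V."""
    return np.linalg.svd(U.conj().T @ V, compute_uv=False)

# consistency check of (54): the SSNR from the principal angles alone
rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.standard_normal((20, 3)))
V, _ = np.linalg.qr(rng.standard_normal((20, 3)))
c2 = principal_cosines(U, V) ** 2
print(np.isclose(ssnr(U, V), c2.sum() / (3 - c2.sum())))  # True
```

Note that ||U^H V||_F^2 equals the sum of the squared principal cosines, while ||E||_F^2 = r minus that sum, which is exactly the identity (54) verified above.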

REFERENCES

[1] A. L. Swindlehurst, B. Ottersten, R. Roy, and T. Kailath, "Multiple invariance ESPRIT," IEEE Trans. Signal Processing, vol. 40, pp. 867-881, Apr. 1992.
[2] P. Comon and G. H. Golub, "Tracking a few extreme singular values and vectors in signal processing," Proc. IEEE, vol. 78, pp. 1327-1343, Aug. 1990.
[3] E. M. Dowling, L. P. Ammann, and R. D. DeGroat, "A TQR-iteration based adaptive SVD for real-time angle and frequency tracking," IEEE Trans. Signal Processing, vol. 42, pp. 914-926, Apr. 1994.
[4] P. Strobach, "Low rank adaptive filters," IEEE Trans. Signal Processing, vol. 44, pp. 2932-2947, Dec. 1996.
[5] P. Strobach, "Bi-iteration SVD subspace tracking algorithms," IEEE Trans. Signal Processing, vol. 45, pp. 1222-1240, May 1997.
[6] A. J. van der Veen, P. B. Ober, and E. F. Deprettere, "Azimuth and elevation computation in high resolution DOA estimation," IEEE Trans. Signal Processing, vol. 40, pp. 1828-1832, July 1992.
[7] A. Swindlehurst and T. Kailath, "Azimuth/elevation direction finding using regular array geometries," IEEE Trans. Aerosp. Electron. Syst., vol. 29, pp. 145-156, Jan. 1993.
[8] P. Strobach, "Bi-iteration multiple invariance subspace tracking and adaptive ESPRIT," IEEE Trans. Signal Processing, vol. 48, Jan. 2000.
[9] G. W. Stewart, "Methods of simultaneous iteration for calculating eigenvectors of matrices," in Topics in Numerical Analysis II, J. J. H. Miller, Ed. New York: Academic, 1975, pp. 169-185.
[10] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed. Baltimore, MD: Johns Hopkins Univ. Press, 1989.
[11] F. L. Bauer, "Das Verfahren der Treppeniteration und verwandte Verfahren zur Lösung algebraischer Eigenwertprobleme," Z. Angew. Math. Phys., vol. 8, pp. 214-235, 1957.

Peter Strobach (M'86-SM'91) received the Engineer's degree in electrical engineering from Fachhochschule Regensburg, Regensburg, Germany, in 1978, the Dipl.-Ing. degree from the Technical University of Munich, Munich, Germany, in 1983, and the Dr.-Ing. (Ph.D.) degree from Bundeswehr University, Munich, in 1985.
From 1976 to 1977, he was with CERN, Geneva, Switzerland; from 1978 to 1982, he was with Messerschmitt-Bölkow-Blohm GmbH, Munich; and from May 1986 to December 1992, he was with Siemens AG, Zentralabteilung Forschung und Entwicklung (ZFE), Munich. In the summer of 1990, he taught the first adaptive filter course ever held in Germany, at the University of Erlangen, Erlangen, Germany. In January 1993, he joined the Faculty of Fachhochschule Furtwangen (Black Forest), Furtwangen, Germany. From March to September 1998, he was a Visiting Professor in the Department of Mathematics, University of Passau, Passau, Germany.
Dr. Strobach is listed in all major Who's Who, including Who's Who in the World.