IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL. ASSP-33, NO. 4, OCTOBER 1985


Decentralized Processing in Sensor Arrays

MATI WAX AND THOMAS KAILATH, FELLOW, IEEE

Abstract—We present a new scheme for decentralized processing in passive sensor arrays based on communicating the sample-covariance matrices of the subarrays. The new scheme offers improved accuracy over the conventional triangulation scheme with only a modest increase in the communication load.

I. INTRODUCTION

An important problem in radar, sonar, and seismology is to determine the number and location of radiating sources by a passive array of sensors. As is well known (see, e.g., Schweppe (1968) and Wax and Kailath (1983a)), the optimal solution to this problem involves centralized and coherent processing of all the sensor outputs. Unfortunately, this centralized scheme is in many cases unattractive; in the case of large arrays composed of many subarrays at geographically dispersed sites, the constraints that generally exist on the capacity of the communication links make the optimal centralized scheme very unattractive, if not impossible. Moreover, centralized coherent processing is in many cases impossible, especially in the sonar context, because of loss of coherence between signals received on widely separated subarrays.

To alleviate the communication burden one may have to resort to suboptimal schemes that decentralize the processing and thus trade off performance for communication load. The simplest and probably the most common decentralized scheme is known as triangulation. The direction-of-arrival and the "signature" of each of the impinging wavefronts are estimated at every subarray and then communicated to the central processor, where all the bearings, or lines of position, corresponding to the same source are first identified; this step is referred to as "data association." Then these lines of position are "intersected" to obtain the source location. This scheme involves a substantially lower communication load than the centralized scheme, since only a small number of parameters has to be communicated from every subarray. As a result of this significant data reduction, the accuracy of the triangulation scheme is lower than that of the optimal centralized scheme. A different decentralized scheme, based on pairwise processing of the sensor signals, is discussed by Weinstein (1981).

In this paper we present a new decentralized scheme wherein each subarray communicates to the central processor its sample-covariance matrix. We describe algorithms, for estimating both the number and the locations of the impinging wavefronts from the sample-covariance matrices, that outperform the conventional scheme. The communication load involved in the new scheme is somewhat higher than that of the triangulation scheme, but still substantially lower than that of the centralized scheme.

The organization of the paper is as follows. Section II formulates the problem. Section III presents the key properties of the covariance matrices of the subarrays. Section IV introduces the decentralized algorithms for estimating the number of sources and their locations from the sample-covariance matrices. Section V describes extensions to the case that the center-frequencies of the sources are unknown and to the case that the sources are moving. Section VI presents simulation results that illustrate the improved performance obtained by the new scheme with respect to the conventional triangulation scheme. Section VII presents further possible extensions and some concluding remarks.

Manuscript received October 12, 1984; revised February 28, 1985. This work was supported in part by the Joint Services Program at Stanford University (U.S. Army, U.S. Navy, U.S. Air Force) under Contract DAAG29-84-K-0047 and the Department of the Navy (NAVELEX) under Contract N00039-84-C-0211. The authors are with the Information Systems Laboratory, Stanford University, Stanford, CA 94305.

II. PROBLEM FORMULATION

Consider a very large array composed of L subarrays located at geographically dispersed sites. Let p_h (h = 1, ..., L) denote the number of sensors in the hth subarray. Assume that q narrowband sources with a known center-frequency ω_0 are located in the far-field of the subarrays. The latter assumption implies that the wavefronts received by the subarrays are essentially planar (see Fig. 1). For simplicity we assume that the array and the sources are confined to the plane.

Fig. 1. The array configuration.

Using complex (analytic) signal representation, the signal at the ith sensor of the hth subarray can be expressed as

x_{i,h}(t) = \sum_{k=1}^{q} a_{ik,h} s_k(t - \tau_{ik,h}) + n_{i,h}(t)    (1)

where s_k(·) is the signal of the kth source, τ_{ik,h} is the propagation delay from the location of the kth source to the ith sensor of the hth subarray, a_{ik,h} is the attenuation from the location of the kth source to the ith sensor of the hth subarray, and n_{i,h}(·) is the additive noise at the ith sensor of the hth subarray.

The narrowband assumption implies that the complex envelopes of the signals do not change significantly over the time interval it takes the wavefront to propagate across any of the subarrays. More formally, for every signal (k = 1, ..., q) and every sensor (i = 1, ..., p_h) the following approximation holds: s_k(t - τ_{ik,h}) ≈ e^{-jω_0 τ_{ik,h}} s_k(t). Using this relation, we can rewrite (1) as

x_{i,h}(t) = \sum_{k=1}^{q} a_{ik,h} e^{-j\omega_0 \tau_{ik,h}} s_k(t) + n_{i,h}(t).    (2)

Stacking the p_h × 1 samples x_{i,h}(t) into a vector x_h(t), referred to as the "snapshot" vector of the hth subarray, we get

x_h(t) = \sum_{k=1}^{q} a_h(r_k) s_k(t) + n_h(t)    (3)

where a_h(r_k) = [a_{1k,h} e^{-jω_0 τ_{1k,h}}, ..., a_{p_h k,h} e^{-jω_0 τ_{p_h k,h}}]^T is the p_h × 1 vector, referred to as the location-vector of the kth source with respect to the hth subarray, and r_k is the 2 × 1 vector of the coordinates of the kth source. Using matrix notation we can rewrite (3) as

x_h(t) = A_h s(t) + n_h(t)    (4.a)

where A_h is the p_h × q matrix

A_h = [a_h(r_1), ..., a_h(r_q)]    (4.b)

s(t) = [s_1(t), ..., s_q(t)]^T is the q × 1 signal vector, and n_h(t) = [n_{1,h}(t), ..., n_{p_h,h}(t)]^T is the p_h × 1 noise vector.

Let N snapshots of the whole array be taken at times t_1, ..., t_N. The following assumptions regarding the signals, the noise, and the array geometry are made.
1) The signal vector s(·) is an ergodic and stationary random process, with zero mean and positive definite variance-covariance matrix S.
2) The number of signals q is smaller than the dimension of the smallest subarray (q < min_h p_h).
3) Any set of q + 1 "location vectors" a_h(r) (h = 1, ..., L) is linearly independent.
4) The noise vectors n_h(·) (h = 1, ..., L) are ergodic and stationary random processes, uncorrelated with the signals, of zero mean and with variance matrix of the form σ²I, where σ² is an unknown nonnegative scalar and I is the identity matrix.

The problem we shall address in this paper can now be formulated: given the NL "snapshots" x_h(t_1), ..., x_h(t_N) (h = 1, ..., L), estimate, in a decentralized fashion, both the number of sources q and the source locations r_1, ..., r_q.
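The snapshot model (4.a) is easy to simulate numerically. The Python sketch below is a minimal illustration, not part of the paper: the uniform-linear-subarray phase model in location_vector, the unit attenuations, and all numeric values are assumptions chosen for concreteness (they loosely mimic the subarray geometry used later in Fig. 2).

```python
import numpy as np

def location_vector(src_xy, sub_xy, incl_deg, p, wavelength=1.0, spacing=1.0):
    """Far-field "location" vector a_h(r) of a uniform linear subarray with p
    sensors, located at sub_xy with inclination incl_deg to the x axis.
    Unit attenuations are assumed for simplicity."""
    dx, dy = np.subtract(src_xy, sub_xy)
    theta = np.arctan2(dy, dx) - np.deg2rad(incl_deg)   # bearing in subarray coordinates
    # inter-sensor delay expressed as a phase at the center frequency
    phase = 2 * np.pi * spacing / wavelength * np.cos(theta) * np.arange(p)
    return np.exp(-1j * phase)

def subarray_snapshots(A_h, S, sigma2, N, rng):
    """Generate N snapshots x_h(t) = A_h s(t) + n_h(t), eq. (4.a)."""
    p, q = A_h.shape
    # zero-mean signals with covariance S (assumption 1), via a Cholesky factor
    s = rng.standard_normal((q, N)) + 1j * rng.standard_normal((q, N))
    s = np.linalg.cholesky(S) @ (s / np.sqrt(2))
    n = np.sqrt(sigma2 / 2) * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
    return A_h @ s + n

rng = np.random.default_rng(0)
sources = [(150.0, 0.0), (0.0, 20.0)]                   # q = 2 illustrative sources
A_1 = np.column_stack([location_vector(r, (-1000.0, 0.0), 90.0, p=5) for r in sources])
X_1 = subarray_snapshots(A_1, S=np.eye(2), sigma2=1.0, N=100, rng=rng)   # 5 x 100 snapshots
```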

III. THE SUBARRAY COVARIANCE MATRIX

The covariance matrices of the subarrays play a central role in the new decentralized scheme. We now briefly describe the key properties of these matrices. Multiplying (4.a) by its conjugate transpose and taking expectations, recalling that the signals and noises are independent and zero-mean, we obtain that the covariance matrix of the hth subarray is given by

R_h = A_h S A_h^{\dagger} + \sigma^2 I    (5)

where S denotes the covariance matrix of the signals and † denotes the conjugate transpose. Let λ_{1,h} ≥ ⋯ ≥ λ_{p_h,h} and u_{1,h}, ..., u_{p_h,h} denote the eigenvalues and eigenvectors, respectively, of R_h. Since the matrices S and A_h have been assumed to be of full rank, it readily follows that

\lambda_{q+1,h} = \cdots = \lambda_{p_h,h} = \sigma^2    (6.a)

and

\{a_h(r_1), \ldots, a_h(r_q)\} \perp \{u_{q+1,h}, \ldots, u_{p_h,h}\}.    (6.b)

That is, the smallest eigenvalue of R_h is of multiplicity p_h - q, and the eigenvectors corresponding to the smallest eigenvalue, referred to as the "noise" eigenvectors, are orthogonal to the location-vectors of the sources. As we shall see, these two relations form the basis of the new decentralized scheme.
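Relations (5), (6.a), and (6.b) are easy to check numerically. The small sketch below is illustrative only and reuses the hypothetical location_vector helper from the previous sketch.

```python
import numpy as np

# reuse location_vector(...) from the previous sketch (illustrative helper)
S, sigma2, q = np.eye(2), 1.0, 2
A_1 = np.column_stack([location_vector(r, (-1000.0, 0.0), 90.0, p=5)
                       for r in [(150.0, 0.0), (0.0, 20.0)]])

R_1 = A_1 @ S @ A_1.conj().T + sigma2 * np.eye(5)          # eq. (5)

lam, U = np.linalg.eigh(R_1)                               # eigenvalues in ascending order
noise_eigvecs = U[:, : 5 - q]                              # eigenvectors of the smallest eigenvalue

print(np.allclose(lam[: 5 - q], sigma2))                   # eq. (6.a): multiplicity p_h - q
print(np.allclose(A_1.conj().T @ noise_eigvecs, 0.0))      # eq. (6.b): orthogonality
```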

IV. THE DECENTRALIZED ESTIMATION SCHEME

Having described the problem and the key properties of the covariance matrices of the subarrays, we now turn to the description of the new decentralized processing scheme. In this scheme, each subarray communicates to the central processor its sample-covariance matrix; that is, the hth subarray communicates the p_h × p_h matrix

\hat{R}_h = \frac{1}{N} \sum_{i=1}^{N} x_h(t_i)\, x_h(t_i)^{\dagger}.    (7)

From the received data {R̂_1, ..., R̂_L}, the central processor then estimates the number and the locations of the sources. In what follows we describe these estimation schemes.
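Forming (7) requires only the local snapshots of each subarray. A minimal sketch, continuing the earlier illustrative example:

```python
import numpy as np

def sample_covariance(X):
    """Sample covariance R_hat_h = (1/N) * sum_i x_h(t_i) x_h(t_i)^H, eq. (7).
    X is the p_h x N matrix whose columns are the snapshots of one subarray."""
    N = X.shape[1]
    return (X @ X.conj().T) / N

R_hat_1 = sample_covariance(X_1)   # X_1 from the earlier sketch; this matrix is all
                                   # that the subarray sends to the central processor
```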

A. Estimation of the Number of Sources

Since the multiplicity of the smallest eigenvalue of R_h is p_h - q, it follows that the number of sources, q, can be determined from the multiplicity of the smallest eigenvalue of each of the covariance matrices R_h (h = 1, ..., L). In practice, however, we have only the estimates R̂_1, ..., R̂_L of these covariance matrices. Because these estimates are based on a finite number of samples, the probability that the smallest eigenvalue of R̂_h has multiplicity p_h - q is zero; with probability one, the small eigenvalues will all be different. Thus, a sophisticated approach, based on statistical considerations, is needed to determine the underlying multiplicity.

One such approach, developed by Bartlett (1954) and Lawley (1956), is based on a nested sequence of hypothesis tests. Recently, a different approach, based on the AIC and MDL model selection criteria, introduced by Akaike (1973, 1974) and by Schwartz (1978) and Rissanen (1978, 1983), was developed by Wax and Kailath (1983b, 1985). Unlike the Bartlett-Lawley method, this new method does not require any subjective threshold settings.

Consider the problem of estimating the number of sources from the sample-covariance of one subarray, say R̂_h. The expression for the AIC for this problem (Wax and Kailath (1983b, 1985)) is given by

AIC(k, h) = -\log\left(\frac{\prod_{i=k+1}^{p_h} l_{i,h}^{1/(p_h-k)}}{\frac{1}{p_h-k}\sum_{i=k+1}^{p_h} l_{i,h}}\right)^{(p_h-k)N} + k(2p_h - k)    (8.a)

while the MDL is given by

MDL(k, h) = -\log\left(\frac{\prod_{i=k+1}^{p_h} l_{i,h}^{1/(p_h-k)}}{\frac{1}{p_h-k}\sum_{i=k+1}^{p_h} l_{i,h}}\right)^{(p_h-k)N} + \frac{1}{2} k(2p_h - k)\log N    (8.b)

where l_{1,h} > ⋯ > l_{p_h,h} denote the eigenvalues of R̂_h. The number of sources is obtained as the value of k ∈ {0, ..., p_h - 1} for which either AIC(k, h) or MDL(k, h) is minimized. Note that the two criteria differ only in the second term: the MDL contains an extra factor of (1/2) log N. It was shown by Wax and Kailath that the MDL yields a consistent estimate of the number of signals, while the AIC tends, asymptotically, to overestimate the number of signals. However, with finite data, both criteria have given the same results in all of our examples so far.

Applying the above procedure to each R̂_h (h = 1, ..., L) will yield L different estimates q̂_1, ..., q̂_L. The global estimate q̂ can then be determined from these values by some kind of averaging.
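A direct transcription of (8.a)/(8.b) is sketched below, assuming the likelihood term is the geometric-to-arithmetic-mean ratio of the p_h - k smallest eigenvalues, as in the cited Wax-Kailath detection criteria, and computing it in log form for numerical stability. This is an illustrative sketch, not the authors' code.

```python
import numpy as np

def detect_num_sources(R_hat, N, rule="MDL"):
    """Estimate the number of sources from one sample covariance via (8.a)/(8.b)."""
    p = R_hat.shape[0]
    l = np.sort(np.linalg.eigvalsh(R_hat))[::-1]           # l_1 >= ... >= l_p
    crit = []
    for k in range(p):                                      # k = 0, ..., p-1
        tail = l[k:]                                        # the p-k smallest eigenvalues
        # log of (geometric mean / arithmetic mean)^((p-k)N)
        log_ratio = (p - k) * N * (np.mean(np.log(tail)) - np.log(np.mean(tail)))
        penalty = k * (2 * p - k)
        if rule == "MDL":
            penalty *= 0.5 * np.log(N)
        crit.append(-log_ratio + penalty)
    return int(np.argmin(crit))

q_hat_1 = detect_num_sources(R_hat_1, N=100)                # per-subarray estimate
```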

In the above approach each estimate q̂_h was obtained independently of the others; the fact that all the subarrays receive the same number of sources was ignored. A more sophisticated approach that takes advantage of this a priori knowledge will now be described. Let the sample-covariance matrices R̂_h (h = 1, ..., L) be embedded in a block-diagonal matrix of dimension M = \sum_{h=1}^{L} p_h,

\hat{R} = \text{block-diag}\{\hat{R}_1, \ldots, \hat{R}_L\}.    (9)

Note that R̂ can be regarded as a model for the covariance matrix of the whole array, obtained by considering the signals at the different subarrays as independent. Let ν_1 ≥ ⋯ ≥ ν_M denote the eigenvalues of the corresponding true covariance R = block-diag{R_1, ..., R_L}. Obviously, they are the ordered rearrangement of all the eigenvalues of R_1, ..., R_L. The multiplicity of the smallest eigenvalue of R, σ², is therefore given by the sum of the multiplicities of σ² in R_1, ..., R_L, i.e., \sum_{h=1}^{L}(p_h - q) = M - Lq, reflecting the fact that all the subarrays receive the same number of sources. Thus, it seems advantageous to estimate the number of signals from the multiplicity of the smallest eigenvalue of R rather than from the multiplicity of each R_h separately. The AIC and the MDL for this problem, obtained in complete analogy to (8), are given by

AIC(k) = -\log\left(\frac{\prod_{i=Lk+1}^{M} \hat{\nu}_i^{1/(M-Lk)}}{\frac{1}{M-Lk}\sum_{i=Lk+1}^{M} \hat{\nu}_i}\right)^{(M-Lk)N} + k(2M - Lk)    (10.a)

and

MDL(k) = -\log\left(\frac{\prod_{i=Lk+1}^{M} \hat{\nu}_i^{1/(M-Lk)}}{\frac{1}{M-Lk}\sum_{i=Lk+1}^{M} \hat{\nu}_i}\right)^{(M-Lk)N} + \frac{1}{2} k(2M - Lk)\log N    (10.b)

where ν̂_1 ≥ ⋯ ≥ ν̂_M denote the eigenvalues of R̂, the estimate of R obtained from the sample-covariance matrices R̂_1, ..., R̂_L. The estimate of the number of sources, q̂, is obtained as the value of k ∈ {0, 1, ..., min_h(p_h) - 1} for which either the AIC or the MDL is minimized.
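A sketch of the joint detector (9)-(10), under the same assumptions about the likelihood term as in the previous sketch; R_hat_2 stands for a second subarray's sample covariance and is hypothetical here.

```python
import numpy as np

def detect_num_sources_joint(R_hats, N, rule="MDL"):
    """Joint estimate of q from all subarrays via the block-diagonal model (9)-(10)."""
    L = len(R_hats)
    p_min = min(R.shape[0] for R in R_hats)
    M = sum(R.shape[0] for R in R_hats)
    # eigenvalues of block-diag{R_1, ..., R_L} = union of the blocks' eigenvalues
    nu = np.sort(np.concatenate([np.linalg.eigvalsh(R) for R in R_hats]))[::-1]
    crit = []
    for k in range(p_min):                      # k = 0, ..., min_h(p_h) - 1
        tail = nu[L * k:]                       # the M - Lk smallest eigenvalues
        n_tail = M - L * k
        log_ratio = n_tail * N * (np.mean(np.log(tail)) - np.log(np.mean(tail)))
        penalty = k * (2 * M - L * k)
        if rule == "MDL":
            penalty *= 0.5 * np.log(N)
        crit.append(-log_ratio + penalty)
    return int(np.argmin(crit))

q_hat = detect_num_sources_joint([R_hat_1, R_hat_2], N=100)   # R_hat_2: hypothetical second subarray
```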

B. Estimation of the Source Locations

Let r = (x, y) denote a general point in the plane, and let the L location vectors from this point to the different subarrays be denoted by a_h(r) (h = 1, ..., L). As we have seen in (6), for a position of a source, r = r_k (k = 1, ..., q), the location vectors a_1(r_k), ..., a_L(r_k) are orthogonal, respectively, to the sets of noise eigenvectors {u_{q+1,1}, ..., u_{p_1,1}}, ..., {u_{q+1,L}, ..., u_{p_L,L}}. Thus, if the noise eigenvectors were known, one could determine the source locations by searching for those points in the plane for which these orthogonality relations hold. The uniqueness of this solution can be guaranteed if certain conditions regarding the geometry of the problem are satisfied.

In practice, however, we have only estimates of the eigenvectors, given by the eigenvectors û_{i,h} of the sample-covariance matrices R̂_h. These estimates do not, with probability one, obey the orthogonality relation (6.b). Nonetheless, if the errors in the estimated eigenvectors are small, that is, if the number of snapshots is high enough, the orthogonality relation between the location vectors of the sources and the subspaces spanned by the noise eigenvectors will be only slightly perturbed. Thus, instead of looking for the points in the plane that yield exact orthogonality to these subspaces, we should define a metric that measures the distance from orthogonality and look for the points in the plane that minimize this metric. One possible metric is the sum of the squared cosines of the angles between the location-vector and the noise subspace of the hth subarray,

\sum_{i=\hat{q}+1}^{p_h} \frac{|a_h(r)^{\dagger}\hat{u}_{i,h}|^2}{a_h(r)^{\dagger} a_h(r)}.    (11)

With this metric, summed over the subarrays, the locations of the sources are determined as the q̂ points in the plane for which the following expression is minimized:

Q(r) = \sum_{h=1}^{L}\sum_{i=\hat{q}+1}^{p_h} \frac{|a_h(r)^{\dagger}\hat{u}_{i,h}|^2}{a_h(r)^{\dagger} a_h(r)}.    (12)

We should point out that in the case of multiple sources, this estimator will have spurious minima at locations corresponding to the spurious "intersections" in the triangulation method. Thus, as in the triangulation method, a data association step is required to eliminate the spurious minima.
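A brute-force way to evaluate (12) is a grid search over candidate locations. The sketch below is illustrative only: it reuses the hypothetical location_vector helper and the two-subarray layout of Fig. 2, takes R_hat_1 and R_hat_2 from the earlier sketches, and handles the single-source case to keep it short.

```python
import numpy as np

SUBARRAYS = [((-1000.0, 0.0), 90.0), ((0.0, -1000.0), 0.0)]    # (position, inclination), as in Fig. 2

def Q(r, noise_subspaces, p=5):
    """Decentralized orthogonality metric of eq. (12) at the candidate point r."""
    total = 0.0
    for (sub_xy, incl), Un in zip(SUBARRAYS, noise_subspaces):
        a = location_vector(r, sub_xy, incl, p)                # hypothetical helper from earlier
        total += np.sum(np.abs(a.conj() @ Un) ** 2) / np.real(a.conj() @ a)
    return total

def localize(R_hats, q_hat, grid):
    """Grid search for the point minimizing (12); single-source case for brevity."""
    noise_subspaces = []
    for R in R_hats:
        lam, U = np.linalg.eigh(R)                             # ascending eigenvalue order
        noise_subspaces.append(U[:, : R.shape[0] - q_hat])     # noise eigenvectors
    scores = [Q(r, noise_subspaces) for r in grid]
    return grid[int(np.argmin(scores))]

grid = [(x, y) for x in np.arange(-50, 51, 5.0) for y in np.arange(-50, 51, 5.0)]
r_hat = localize([R_hat_1, R_hat_2], q_hat=1, grid=grid)
```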

V. EXTENSIONS

In this section we present two extensions of the decentralized scheme to more complex scenarios that may arise in passive sensor arrays.

A. Simultaneous Localization and Center-Frequency Estimation

The algorithms we have derived were based on the assumption that the signals are narrowband and have the same known center-frequency. This case, while, strictly speaking, never occurring in practice, is still a good model for the case that the signals are prefiltered by a narrow bandpass filter with a known center frequency. Unfortunately, this attempt to bypass the frequency estimation problem is not always desirable. In many applications, especially those for which the center-frequency of the signal provides the only "signature" of the source, the accuracy desired in estimating the center-frequency is much above that provided by the bandpass filter. In what follows, we extend the localization scheme (12) to simultaneously estimate the center-frequencies and the locations of the sources.

Let ω_k denote the center-frequency of the kth source. Following Wax et al. (1984), we put a tapped-delay line with m taps spaced D delay units apart at each sensor. Let x_h denote the vector consisting of the p_h m samples of all the taps of the sensors of the hth subarray. Following (1), x_h(t) can be expressed as

x_h(t) = \sum_{k=1}^{q} a_h(r_k, \omega_k) s_k(t) + n_h(t)    (13.a)

where a_h(r_k, ω_k) is the p_h m × 1 vector given by

a_h(r_k, \omega_k) = a_h(r_k) \otimes c(\omega_k)    (13.b)

with ⊗ denoting the Kronecker product and c(ω_k) denoting the m × 1 vector of tap phase-shifts c(ω_k) = [1, e^{-jω_k D}, ..., e^{-jω_k (m-1)D}]^T. In matrix notation (13) becomes

x_h(t) = A_h s(t) + n_h(t)    (14.a)

where A_h is the p_h m × q matrix

A_h = [a_h(r_1, \omega_1), \ldots, a_h(r_q, \omega_q)].    (14.b)

Assuming that the columns of the matrix A_h are linearly independent and that the spacing between the taps is greater than the correlation time of the noise, the method described in the previous section can be readily extended to estimate both the locations and the center-frequencies of the sources. Let l_{1,h} > ⋯ > l_{p_h m,h} and û_{1,h}, ..., û_{p_h m,h} denote the eigenvalues and eigenvectors, respectively, of the p_h m × p_h m sample-covariance matrix of the hth subarray. In analogy to (12), the locations and the center-frequencies of the sources are determined as the q̂ points for which the following expression is minimized:

Q(r, \omega) = \sum_{h=1}^{L}\sum_{k=\hat{q}+1}^{p_h m} \frac{|a_h(r, \omega)^{\dagger}\hat{u}_{k,h}|^2}{a_h(r, \omega)^{\dagger} a_h(r, \omega)}.    (15)

We should point out that if the center-frequency of a source can be regarded as a valid "signature," that is, if all the center-frequencies of the sources are different, then (15), unlike (12), has an inherent data association mechanism.
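The space-time location vector (13.b) is a Kronecker product, and (15) has the same structure as (12) evaluated over a joint (r, ω) grid. A sketch follows; the tap count m and spacing D are illustrative assumptions, and noise_subspaces is assumed to hold the noise eigenvectors of the p_h m × p_h m space-time sample covariances.

```python
import numpy as np

def tap_vector(omega, m=4, D=1.0):
    """c(omega): phase progression along a tapped-delay line with m taps spaced D apart."""
    return np.exp(-1j * omega * D * np.arange(m))

def space_time_vector(r, omega, sub_xy, incl, p=5):
    """a_h(r, omega) = a_h(r) kron c(omega), eq. (13.b)."""
    return np.kron(location_vector(r, sub_xy, incl, p), tap_vector(omega))

def Q_joint(r, omega, noise_subspaces):
    """Joint location / center-frequency metric of eq. (15).
    noise_subspaces: noise eigenvectors of the p_h*m x p_h*m sample covariances."""
    total = 0.0
    for (sub_xy, incl), Un in zip(SUBARRAYS, noise_subspaces):
        a = space_time_vector(r, omega, sub_xy, incl)
        total += np.sum(np.abs(a.conj() @ Un) ** 2) / np.real(a.conj() @ a)
    return total
```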

B. Moving Sources

Up to this point we have restricted the discussion to the case that the sources are stationary. Now we extend the approach to the case that the sources are moving. Assume that q sources with the same center-frequency ω_0 are moving in the plane of the array; say the kth source is moving with velocity v_k. Because of the relative movement between the source and the subarrays, the signal received by the subarrays will be Doppler-shifted. Note that since the sources are in the far-field of the subarrays, the signals at the different sensors of a subarray will have identical Doppler-shift. Denoting by ω_{k,h} the frequency of the kth source as received by the hth subarray, it follows by the well-known Doppler-shift relation (see, e.g., Van Trees (1971), p. 240) that

\omega_{k,h} = \omega_0\left(1 + \frac{|v_k|}{c}\cos\gamma_{k,h}\right)    (16)

where γ_{k,h} is the angle between v_k and the direction to the hth subarray, and c is the velocity of propagation.

Notice that since the Doppler-shift is different for every subarray, the signals received by the subarrays will have different center-frequencies. Using the scheme derived in Section V-A, only now casting the estimator (15) in terms of the locations and the velocities of the sources, we obtain that the locations and velocities of the sources are determined as the q̂ points (r, v) minimizing the analog of (15) in which the frequency seen by the hth subarray is given by (16) (17), where γ_h denotes the angle between the velocity v and the direction to the hth subarray.
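For a hypothesized location and velocity, each subarray thus evaluates its own Doppler-shifted frequency via (16). The sketch below is an illustrative reading of the moving-source estimator, not the paper's exact equations (17); the propagation speed and the helpers are assumptions carried over from the earlier sketches.

```python
import numpy as np

C = 1500.0                      # propagation speed (e.g., m/s in water); illustrative value

def doppler_frequency(omega0, v, r, sub_xy):
    """omega_{k,h} of eq. (16) for a source at r moving with velocity vector v."""
    direction = np.subtract(sub_xy, r)
    direction = direction / np.linalg.norm(direction)       # unit vector toward the subarray
    cos_gamma = np.dot(v, direction) / np.linalg.norm(v)
    return omega0 * (1.0 + np.linalg.norm(v) / C * cos_gamma)

def Q_moving(r, v, omega0, noise_subspaces):
    """Analog of (15) with each subarray term evaluated at its Doppler-shifted frequency."""
    total = 0.0
    for (sub_xy, incl), Un in zip(SUBARRAYS, noise_subspaces):
        a = space_time_vector(r, doppler_frequency(omega0, v, r, sub_xy), sub_xy, incl)
        total += np.sum(np.abs(a.conj() @ Un) ** 2) / np.real(a.conj() @ a)
    return total
```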

VI. SIMULATION RESULTS

In this section we present simulation results that illustrate the improved performance offered by the new decentralized scheme with respect to the conventional triangulation method.

We begin by demonstrating the performance of the algorithm for estimating the number of sources. The first example we considered consisted of two sources, one at (150, 0) and the other at (0, 20), and an array composed of two (L = 2) uniform linear subarrays, each with five (p_h = 5) sensors spaced one wavelength apart. The subarrays were located at (-1000, 0) and (0, -1000), respectively, with inclinations 90° and 0°, respectively, to the x axis (see Fig. 2). The signal-to-noise ratio of each signal at the subarrays, defined as 10 log(s²/σ²), with s² denoting the signal power and σ² denoting the noise power, was 5 dB. 10 Monte-Carlo runs were performed, each consisting of N = 100 snapshots. In each run the estimate of the number of sources was determined both by the new decentralized scheme, (10), and by applying (8) separately to each sample-covariance matrix R̂_h (h = 1, 2). The number of sources obtained from (10), in 9 out of 10 runs, was 2, while the number of sources obtained by applying (8) to the first and second subarrays was 1 and 2, respectively, in 9 out of 10 runs.

Fig. 2. An array composed of two (L = 2) uniform linear subarrays, each consisting of five (p_h = 5) sensors spaced one wavelength apart. The subarrays are located at (-1000, 0) and (0, -1000), respectively, with inclinations 90° and 0°, respectively, with respect to the x axis.

The second example we considered consisted of three sources located at (250, 0), (0, 0), and (0, 500), respectively, and an array composed of three (L = 3) uniform linear subarrays, each with five (p_h = 5) sensors spaced one wavelength apart. The subarrays were located at (-1000, 0), (-1000, -1000), and (0, -1000), respectively, with inclinations 90°, 45°, and 0°, respectively, to the x axis (see Fig. 3). The signal-to-noise ratio at the subarrays was 25 dB, and the number of snapshots was N = 100. As in the first example, 10 Monte-Carlo runs were performed. Using (10), in 9 out of the 10 runs, the number of sources obtained was 3, while using (8), in 9 out of the 10 runs, the number of sources obtained from the first, second, and third subarrays was 2, 3, and 2, respectively.

Fig. 3. An array composed of three (L = 3) uniform linear subarrays, each consisting of five (p_h = 5) sensors spaced one wavelength apart. The subarrays are located at (-1000, 0), (-1000, -1000), and (0, -1000), respectively, with inclinations 90°, 45°, and 0°, respectively, with respect to the x axis.

Next, we compared the accuracy of the new localization algorithm with that of the conventional triangulation algorithm. First we considered the localization of a single source located at (0, 0) by the array described in Fig. 2. The signal-to-noise ratio at the subarrays was 0 dB, and the number of snapshots was N = 100. 10 Monte-Carlo runs were performed. For each run, the estimate of the location was computed both by (12) and by triangulation. In the latter scheme we used the eigenstructure algorithm of Schmidt (1979) for direction-of-arrival estimation, and the maximum likelihood algorithm of Stansfield (1947) for source localization. The variance of the location estimate over the runs, with (x̂_i, ŷ_i) denoting the estimate of the location of the source obtained at the ith run, was computed for both schemes. The resulting variance of the estimator (12) was 27.0, while for the conventional triangulation method the resulting variance was 35.6.

In another example, we considered the localization of the same source located at (0, 0) by the array described in Fig. 3. The signal-to-noise ratio at the subarrays was 0 dB, and the number of snapshots was N = 100. As before, 10 Monte-Carlo runs were performed. The variance of the estimator (12) was 14.5, while that of the conventional triangulation method was 22.0.

VII. CONCLUDING REMARKS

We have presented a new decentralized scheme for passive sensor array processing, which was shown to provide better accuracy than that offered by the conventional triangulation method at the expense of a slight increase of the communication load. Although the improved accuracy is modest, it may nevertheless be crucial in many applications.

The basic localization scheme, (12), still requires, as does the triangulation scheme, a preliminary data association step. However, if the center-frequencies of the sources are all different, the estimator (15) provides an alternative with an inherent data association mechanism.

In the derivation, we have assumed that the average noise power in all the subarrays is the same. If this assumption does not hold, the algorithm (10) for estimating the number of sources is no longer valid. A more robust algorithm, which applies equally well to subarrays with different noise levels, will now be described. As (10), it is based upon the application of the AIC and the MDL to the model R given in (9). However, unlike (10), which capitalized on the multiplicity of the smallest eigenvalue of R, this scheme capitalizes on the block-diagonal structure of R and on the multiplicity of the smallest eigenvalue of each block. Indeed, the log-likelihood for the model R is the sum of the log-likelihoods for R_1, ..., R_L, which implies that AIC(k) = \sum_{h=1}^{L} AIC(k, h). Thus from (8) we obtain

AIC(k) = -\sum_{h=1}^{L}\log\left(\frac{\prod_{i=k+1}^{p_h} l_{i,h}^{1/(p_h-k)}}{\frac{1}{p_h-k}\sum_{i=k+1}^{p_h} l_{i,h}}\right)^{(p_h-k)N} + k(2M - Lk)    (18)

and similarly for the MDL.
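In code, the robust variant (18) simply sums the per-subarray likelihood terms of (8) while keeping the pooled penalty; a sketch under the same assumptions as the earlier detection sketches:

```python
import numpy as np

def detect_num_sources_robust(R_hats, N, rule="MDL"):
    """Noise-level-robust detector of eq. (18): sum the per-subarray criteria of (8)."""
    p_min = min(R.shape[0] for R in R_hats)
    M = sum(R.shape[0] for R in R_hats)
    L = len(R_hats)
    eigs = [np.sort(np.linalg.eigvalsh(R))[::-1] for R in R_hats]
    crit = []
    for k in range(p_min):
        log_ratio = 0.0
        for l in eigs:                                     # each subarray keeps its own noise level
            p = len(l)
            tail = l[k:]
            log_ratio += (p - k) * N * (np.mean(np.log(tail)) - np.log(np.mean(tail)))
        penalty = k * (2 * M - L * k)
        if rule == "MDL":
            penalty *= 0.5 * np.log(N)
        crit.append(-log_ratio + penalty)
    return int(np.argmin(crit))
```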

Unlike the detection algorithm (10), the localization algorithm (12) remains unchanged if the noise power in the subarrays is different. This is because the orthogonality relations it was based upon hold regardless of the noise levels.

The arguments leading to the specific estimator (12) were clearly ad hoc. In fact, (12) can be regarded as a decentralized version of the eigenstructure estimator of Schmidt (1979) and Bienvenu and Kopp (1980). Decentralized versions of the eigenstructure estimators of Johnson and DeGraff (1982) and of Kumaresan and Tufts (1983) can be similarly constructed. Moreover, decentralized versions of any of the existing localization algorithms (conventional beamforming, maximum entropy, minimum variance) can be analogously constructed; for example, a decentralized version of the conventional beamformer is obtained by summing the beamformer spectra of the individual subarrays.

The discussion in this paper was limited, for the sake of simplicity, to the case of narrowband sources. The extension to the case of wideband sources could be carried out as in Wax et al. (1984).

ACKNOWLEDGMENT

The authors are grateful to the reviewers for their suggestions.

REFERENCES

Akaike, H. (1973): "Information Theory and an Extension of the Maximum Likelihood Principle," in Proc. 2nd Int'l. Symp. Inform. Thy., Tsahkadsor, Armenia, USSR, Supp. to Problems of Control and Inform. Thy., pp. 267-281.
Akaike, H. (1974): "A New Look at the Statistical Model Identification," IEEE Trans. Autom. Contr., Vol. AC-19, pp. 716-723.
Anderson, T. W. (1963): "Asymptotic Theory for Principal Component Analysis," Ann. of Math. Stat., Vol. 34, pp. 122-148.
Bartlett, M. S. (1954): "A Note on the Multiplying Factors for Various χ² Approximations," J. Roy. Stat. Soc., Ser. B, Vol. 16, pp. 296-298.
Bienvenu, G. and Kopp, L. (1980): "Adaptivity to Background Noise Spatial Coherence for High Resolution Passive Methods," in Proc. IEEE ICASSP 80 (Denver, CO), pp. 307-310.
Johnson, D. H. and DeGraff, S. R. (1982): "Improving the Resolution of Bearing Estimation in Passive Sonar Arrays by Eigenstructure Analysis," IEEE Trans. on ASSP, Vol. ASSP-30, pp. 638-647.
Kumaresan, R. and Tufts, D. W. (1983): "Estimating the Angles of Arrival of Multiple Plane Waves," IEEE Trans. on AES, Vol. AES-19, pp. 134-139.
Lawley, D. N. (1956): "Tests of Significance of the Latent Roots of the Covariance and Correlation Matrices," Biometrika, Vol. 43, pp. 128-136.
Rissanen, J. (1978): "Modeling by Shortest Data Description," Automatica, Vol. 14, pp. 465-471.
Rissanen, J. (1983): "A Universal Prior for the Integers and Estimation by Minimum Description Length," Ann. of Stat., Vol. 11, pp. 417-431.
Schmidt, R. O. (1979): "Multiple Emitter Location and Signal Parameter Estimation," Proc. RADC Spectral Estimation Workshop (Rome, NY), pp. 243-258.
Schwartz, G. (1978): "Estimating the Dimension of a Model," Ann. of Stat., Vol. 6, pp. 461-464.
Stansfield, R. G. (1947): "Statistical Theory of DF Fixing," IEE (London), Vol. 94, pp. 762-770.
Van Trees, H. L. (1971): Detection, Estimation and Modulation Theory, Vol. 2, New York, Wiley.
Wax, M. and Kailath, T. (1983a): "Optimum Localization of Multiple Sources by Passive Arrays," IEEE Trans. on ASSP, Vol. ASSP-31, No. 5, pp. 1210-1218.
Wax, M. and Kailath, T. (1983b): "Determining the Number of Signals by Information Criteria," ASSP Spectrum Estimation Workshop II (Tampa, FL), pp. 192-196.
Wax, M., Shan, T.-J. and Kailath, T. (1984): "Spatio-Temporal Spectral Analysis by Eigenstructure Methods," IEEE Trans. on ASSP, Vol. ASSP-32, pp. 817-827.
Wax, M. and Kailath, T. (1985): "Detection of Signals by Information Theoretic Criteria," IEEE Trans. on ASSP, Vol. ASSP-33, pp. 387-392.
Weinstein, E. (1981): "Decentralization of the Maximum Likelihood Estimator and its Application to Passive Array Processing," IEEE Trans. on ASSP, Vol. ASSP-29, pp. 945-951.


Mati Wax has conducted research on the development of communication systems, tracking systems, and position location techniques. He is currently a Research Assistant with the Department of Electrical Engineering, Stanford University, Stanford, CA, and is also working part-time at IBM Research Laboratories, San Jose, CA. His main interest is statistical signal processing.

Thomas Kailath (F'70) was born in Poona, India, on June 7, 1935. He received the B.E. degree in telecommunications engineering from the University of Poona in 1956 and the S.M. and Sc.D. degrees in electrical engineering from the Massachusetts Institute of Technology in 1959 and 1961, respectively.

During 1961-1962 he worked at the Jet Propulsion Laboratories, Pasadena, CA, where he also taught part-time at the California Institute of Technology. From 1963 to 1968 he was Associate Professor, and from 1968, Professor of Electrical Engineering at Stanford University. He was Director of the Information Systems Laboratory from 1971 through 1980, and is presently Associate Chairman of the Department of Electrical Engineering. He has held shorter term appointments at several institutions around the world, including a Ford Fellowship in 1963 at UC Berkeley, a Guggenheim Fellowship in 1970 at the Indian Institute of Science, Bangalore, India, a Churchill Fellowship in 1977 at Cambridge University, England (where he is a Life Fellow of Churchill College), and a Michael Fellowship in 1984 at the Weizmann Institute of Science, Rehovot, Israel. He is on the editorial board of several engineering and mathematics journals and is also the editor of the Prentice-Hall Series on Information and System Sciences. He was on the IEEE Press Board for several years. From 1971 to 1978, he was a member of the Administrative Committees of the IEEE Professional Group on Information Theory and the IEEE Control Systems Society. During 1975 he served as President of the Information Theory Group.

Dr. Kailath is a member of the National Academy of Engineering and a Fellow of the Institute of Mathematical Statistics. He is also a member of the American Mathematical Society, the Society for Industrial and Applied Mathematics, the Society of Exploration Geophysicists, and several other scientific organizations. He is the author of Linear Systems, Prentice-Hall, 1980, and Lectures on Wiener and Kalman Filtering, Springer-Verlag, 1981, editor of Benchmark Papers in Linear Least-Squares Estimation, Academic Press, 1977, and of Reviews of Modern Signal Processing, Hemisphere/Springer-Verlag, 1985, and coeditor of VLSI and Modern Signal Processing, Prentice-Hall, 1985.