System Identification Using Quantized Data

Juan C. Agüero, Graham C. Goodwin, and Juan I. Yuz

Abstract— In this paper we consider the problem of identification of linear systems using quantized data. We argue that, where possible, it is preferable not to use "naively" quantized data, but instead to choose the quantization mechanism carefully. In particular, we show that using a generalized noise shaping coder improves the accuracy of the estimates. We examine the accuracy of estimates for both naive and coded quantizers.

I. INTRODUCTION

Quantized data is inherent in any digital procedure, since data is obtained from a communication channel or, more simply, from an A/D converter (see e.g. [1]). This has led to a new research area in Control Theory called Networked Control Systems (NCS). NCS deals with the idea of controlling a process when the input and output signals are transmitted via a communication channel. In [2] it has been shown that a noise shaping quantizer can significantly improve the performance of a control loop when the effect of quantization is taken into account. On the other hand, traditional system identification techniques do not consider the fact that the measurements of the input and output signals are subject to quantization. The recent papers [3], [4], [5], [6], [7] point out that it is desirable to extend the traditional theory of System Identification to tackle the fact that the measurements are quantized, and thereby obtain "better" estimates. In particular, in [5] an Instrumental Variables approach is proposed. That paper uses a traditional quantizer (equi-spaced levels), which approximates the signal by the closest value in a defined set of discrete values.

In the current paper we use a variation of this simple equi-spaced level quantizer. We propose to use a quantization noise shaping technique (see Figure 1) in order to obtain more accurate estimates (note that the sigma-delta modulator is used for data collection and storage, i.e. it is part of the measuring system). We apply this idea to several identification problems to show that it has the potential to greatly enhance identification accuracy using quantized data.

As a preliminary example, we consider the problem of estimating a constant from noisy, quantized data. For this problem, we show that the effect of quantization on the covariance of a LS estimate decays as 1/N when the traditional quantizer is used (Q = 0 in Figure 1). On the other hand, when a noise shaping quantizer is used (Q ≠ 0), we show that the covariance of the LS estimate decays as 1/N². This dramatic improvement supports the use of more sophisticated quantization schemes than the "naive" scheme (Q(q⁻¹) = 0).

The area of Experiment Design has recently experienced renewed attention (see e.g. [8], [9], [10]) and the incorporation of new computational techniques, such as Linear Matrix Inequalities (LMI) and Semidefinite Programming, has been proposed (see e.g. [8], [11]). In this paper we analyze the effect of the noise shaping quantizer on the accuracy of the estimation of a Box-Jenkins model. We use a recent approach proposed in the Experiment Design area [12] to prove that it is better to use a noise shaping quantizer.

The theoretical developments in this paper are based on a well-established model for the quantizer (see e.g. [13], [14]), namely uniformly distributed additive noise. We verify the applicability of this model by ensuring that, in the examples, the "true" quantizer is used. This demonstrates that the theory based on the simplified white noise model gives remarkably good predictions. Of course, this is not all that surprising if one recognizes that similar conclusions are well known in the Signal Processing literature [13].

Graham C. Goodwin and Juan C. Agüero are with the School of Electrical Engineering and Computer Science, The University of Newcastle, Australia. Email: {juan.aguero,graham.goodwin}@newcastle.edu.au. Juan I. Yuz is with the Electronics Department, Universidad Técnica Federico Santa María, Valparaíso, Chile. Email: [email protected]

II. QUANTIZATION

The uniform quantizer, Q, with range [−V, V] utilized in this paper is defined as follows:

yq(t) = (⌊x/∆⌋ + 1/2)∆   if −V < x < V
        V                if x ≥ V            (1)
        −V               if x ≤ −V

where ∆ = 2V/(2^b − 1), and b is the number of bits utilized (2^b is the number of levels in the quantizer). A standard methodology in the signal processing literature ([13]) is to model the quantizer by a white, uniformly distributed additive noise¹ in [−∆/2, ∆/2]. Here, we assume that V is such that the probability of overflow is small and the number of quantization levels is high (see e.g. [13]).

We consider a variant of this quantization scheme, shown in Figure 1. This configuration is usually known as a noise shaping coder in the ∆Σ converter area (see e.g. [14]).
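The quantizer (1) and the noise shaping coder of Figure 1 can be sketched in a few lines of Python. This is our own minimal illustration, not part of the paper: the function names and the feedback sign convention x(t) = s(t) − Q(q⁻¹)q(t), which yields sq(t) = s(t) + (1 − Q(q⁻¹))q(t) = s(t) + S(q⁻¹)q(t), are assumptions consistent with the definitions above.

```python
import numpy as np

def uniform_quantizer(x, V=5.0, b=3):
    """Mid-rise uniform quantizer of equation (1), saturating at +/-V."""
    Delta = 2.0 * V / (2**b - 1)
    return np.clip((np.floor(np.asarray(x) / Delta) + 0.5) * Delta, -V, V)

def noise_shaping_quantize(s, Q_coeffs, V=5.0, b=3):
    """Noise shaping coder of Figure 1 (our sign convention):
    x(t) = s(t) - Q(q^-1) q(t), so sq(t) = s(t) + (1 - Q(q^-1)) q(t).
    Q_coeffs[k] is the coefficient of q^-(k+1); an empty list gives the
    naive quantizer (Q = 0)."""
    s = np.asarray(s, dtype=float)
    Q_coeffs = np.asarray(Q_coeffs, dtype=float)
    e = np.zeros(len(Q_coeffs))          # past errors q(t-1), q(t-2), ...
    sq = np.empty_like(s)
    for t in range(len(s)):
        x = s[t] - (Q_coeffs @ e if len(e) else 0.0)
        sq[t] = uniform_quantizer(x, V, b)
        q_t = sq[t] - x                  # quantization error q(t)
        if len(e):
            e = np.concatenate(([q_t], e[:-1]))
    return sq
```

With `Q_coeffs=[1.0]` (i.e. Q(q⁻¹) = q⁻¹) the accumulated quantization error telescopes down to a single term, which is the mechanism behind the faster covariance decay analyzed below.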
In order to analyze the effect of this type of quantization scheme on the accuracy of estimates, we assume that the quantizer Q is replaced by a uniformly distributed additive noise.¹

¹ The noise variance is given by σq² = ∆²/12 = V²/(3(2^b − 1)²).

Fig. 1. Noise shaping quantizer: the input s(t) is quantized by Q to produce sq(t); the quantization error is filtered by Q(q⁻¹) and fed back into the quantizer input x(t).

III. THE ESTIMATION OF A CONSTANT

To illustrate the advantages of using noise shaping quantization in System Identification, we analyze the simplest possible estimation problem, namely, a constant measured in the presence of noise. We assume that the measurement device can only take values in a reduced set (i.e. the data is quantized). The "true" model that generates the data is given by

y(t) = θo + w(t)    (2)

where w(t) is zero mean white noise with variance σw². The measurements are the set {yq(1), yq(2), ..., yq(N)}, where yq(t) is the output of the quantizer with input y(t) (see Figure 1). It is well known that the Least Squares estimate for θo (assuming that the quantizer does not affect the signal y(t)) is given by

θ̂_LS = (1/N) Σ_{t=1}^{N} yq(t)    (3)

We next present a result which characterizes the accuracy of this estimate when quantized data is deployed.

Theorem 1: If the probability of overflow in the quantizer is small, and the number of levels in the quantizer is sufficiently large such that the white noise model for the quantizer is valid, then the covariance of θ̂_LS when Q(q⁻¹) = 0 in Figure 1 is given by

cov{θ̂_LS} = σw²/N + σq²/N    (4)

and when Q(q⁻¹) is chosen as Q(q⁻¹) = q⁻¹, then

cov{θ̂_LS} = σw²/N + σq²/N²    (5)

where q⁻¹ is the backward shift operator.

Proof: Under the assumptions of the Theorem it is possible to characterize the quantized measurements by the following model (see e.g. [13]):

yq(t) = θo + w(t) + v(t)    (6)

where v(t) = S(q⁻¹)q(t), q(t) is uniformly distributed noise independent of w(t), and S(q⁻¹) = 1 − Q(q⁻¹). Under these conditions, we have that the LS estimate for θo is given by:

θ̂_LS = (1/N) Σ_{t=1}^{N} yq(t)    (7)
     = θo + (1/N) Σ_{t=1}^{N} (w(t) + v(t))    (8)

We then have that the LS estimate is unbiased and converges almost surely (a.s.) to θo. That is:

θ̂_LS → θo  a.s.    (9)
E{θ̂_LS} = θo    (10)

where we have used the Law of Large Numbers (see e.g. [15]), and applied the expected value operator E{·} to equation (7). We also have that the covariance of the LS estimate is given by:

cov{θ̂_LS} = E{[(1/N) Σ_{t=1}^{N} (w(t) + v(t))]²}
           = E{[(1/N) Σ_{t=1}^{N} w(t)]²} + E{[(1/N) Σ_{t=1}^{N} v(t)]²} + E{(2/N²) Σ_{t=1}^{N} Σ_{k=1}^{N} w(t)v(k)}
           = σw²/N + ∆    (11)

where the cross term vanishes because w(t) and v(t) are independent and zero mean, and ∆ denotes the second term. If v(t) = q(t) (Q = 0), then we have that

∆ = E{[(1/N) Σ_{t=1}^{N} q(t)]²} = σq²/N    (12)

Thus, we have that the covariance of the LS estimate (for Q = 0) is given by:

cov{θ̂_LS} = σw²/N + σq²/N    (13)

If v(t) = q(t) − q(t − 1), then we have that

Σ_{t=1}^{N} v(t) = q(N) − q(0) = q(N)    (14)

where we have taken q(0) = 0 by initializing the quantizer appropriately. Then,

∆ = σq²/N²    (15)

Thus, we have that the covariance of the LS estimate (for Q(q⁻¹) = q⁻¹) is given by:

cov{θ̂_LS} = σw²/N + σq²/N²    (16)
□

The above Theorem establishes that the effect of quantization on the estimation accuracy is dramatically improved

when a noise shaping quantization scheme is utilized. Notice that this is a finite sample result² based on the "white noise" model of quantization. We therefore provide an example where real quantized data is deployed.

² It is also possible, by using the Central Limit Theorem (see e.g. [15]), to show that √N(θ̂_LS − θo) →d N(0, σw² + σq²) for Q(q⁻¹) = 0, and √N(θ̂_LS − θo) →d N(0, σw²) for Q(q⁻¹) = q⁻¹.

Example III-A: Consider that the data is generated by (2) with θo = 1 and w(t) zero mean Gaussian white noise with variance σw² = 0.01. The measurement device can only take values in the range [−V, V], where V = 5. For this application we add an extra signal d(t) (called subtractive dither) before the quantizer. This dither signal is needed because the signal of interest is a constant. The dither can then be subtracted before the data is analyzed (see e.g. [13], [14]). Hence, we use (see Figure 1)

s(t) = y(t) + d(t)    (17)
yq(t) = sq(t)    (18)
ya(t) = yq(t) − d(t)    (19)

where yq(t) is the "stored" quantized data and ya(t) is the data used for identification (i.e. the operation (19) is performed afterwards, in the process of estimation). In this example we add a zero mean white Gaussian signal with variance σd² = 1 as the dither signal.

We run 100 Monte-Carlo simulations for data lengths N varying from 1 to 30. We first analyze the case when the quantizer can take 4 possible values (b = 2). In this case we expect the quantization noise to be uniformly distributed in [−1.7, 1.7], which means that σq² = 0.92. Figure 2 shows the mean value of the LS estimate over the 100 experiments for a quantizer that uses 2 bits (b = 2). The empirical covariance of this estimate is also shown in Figure 2. It can be seen that (as predicted by Theorem 1) the covariance of the LS estimate θ̂_LS decays proportionally to 1/N and 1/N² for the naive and the noise shaping quantizer respectively. We see that the estimate using the noise shaping quantizer is unbiased, while there is a small bias for the naive quantizer (Q(q⁻¹) = 0).

Fig. 2. Example III-A with 2 bits. (a) LS estimate using a naive quantizer (blue-dashed line), and using a noise shaping quantizer (red-continuous line) with 2 bits. (b) Covariance of the LS estimate using a naive quantizer (blue-thick-dashed line) and using a noise shaping quantizer (red-thick-continuous line) with 2 bits; 1/N (black-thin-dashed line), 1/N² (black-thin-continuous line).

We plot the histogram of the quantization error in Figure 3. It can be seen that the quantization noise is very close to uniformly distributed noise in the cases where the noise shaping quantizer was used. The quantization noise for the naive quantizer is also similar to uniformly distributed noise, but its mean is a small positive value. This explains the bias obtained in the naive scheme.

Fig. 3. Histogram of the quantization error with 2 bits. Left: Naive quantizer, Right: Noise shaping quantizer.
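The 2-bit Monte-Carlo study can be sketched as follows. This is our own minimal re-implementation under the stated assumptions (θo = 1, σw² = 0.01, Gaussian subtractive dither with σd² = 1, V = 5, b = 2); the function names, the feedback sign convention, and the number of repetitions are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
V, b = 5.0, 2
Delta = 2.0 * V / (2**b - 1)            # quantizer step, equation (1)

def quantize(x):
    return np.clip((np.floor(x / Delta) + 0.5) * Delta, -V, V)

def ls_estimate(N, shaping):
    """One realization of Example III-A, returning the LS estimate (3)."""
    y = 1.0 + 0.1 * rng.standard_normal(N)   # y(t) = theta_o + w(t), eq. (2)
    d = rng.standard_normal(N)               # subtractive dither d(t)
    s = y + d                                # s(t) = y(t) + d(t), eq. (17)
    q_prev, ya = 0.0, np.empty(N)
    for t in range(N):
        x = s[t] - (q_prev if shaping else 0.0)  # Q(q^-1) = q^-1, or Q = 0
        sq = quantize(x)
        q_prev = sq - x                          # quantization error q(t)
        ya[t] = sq - d[t]                        # ya(t) = yq(t) - d(t), eq. (19)
    return ya.mean()

results = {}
for N in (10, 100):
    for shaping in (False, True):
        est = [ls_estimate(N, shaping) for _ in range(500)]
        results[(N, shaping)] = np.var(est)
        print(N, "shaping" if shaping else "naive", results[(N, shaping)])
```

The empirical variances should be roughly (σw² + σq²)/N for the naive quantizer and σw²/N + σq²/N² for the noise shaping one, mirroring Theorem 1.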

We see from Figure 2 that the predictions made by the "white noise" model are remarkably accurate. We have also run simulations using more than two bits. The results are very similar to the 2-bit case (and the bias in the naive approach is eliminated).

We next analyze the case where the measurement device can only store two possible levels (±5), i.e. the data essentially consists only of a sign function. In this case the theory predicts that the quantization noise will be uniformly distributed in [−5, 5] with a variance of σq² = 8.3. For this very low bit rate we would expect departures from the "white noise" model used for the theoretical development. In Figure 4 it is shown that the naive quantizer provides a biased estimate, whereas the estimate is unbiased for the noise shaping configuration even for very small data lengths. Figure 4 also shows that the effect of the quantization on the covariance of the LS estimate again decays proportionally to 1/N and 1/N² for the naive and noise shaping quantizer respectively. Even though the assumption of uniformly distributed noise does not hold for Q(q⁻¹) = 0, it can be seen that it is still satisfied in the noise shaping configuration (see Figure 5).

Fig. 4. Example III-A with 1 bit. (a) LS estimate using a naive quantizer (blue-dashed line), and using a noise shaping quantizer (red-continuous line) with 1 bit. (b) Covariance of the LS estimate using a naive quantizer (blue-thick-dashed line) and using a noise shaping quantizer (red-thick-continuous line) with 1 bit; 1/N (black-thin-dashed line), 1/N² (black-thin-continuous line).

Fig. 5. Histogram of the quantization error with 1 bit. Left: Naive quantizer, Right: Noise shaping quantizer.

Also, the covariance behavior shown in Figure 4 is extremely close to that predicted by the theory. This is quite remarkable given that the quantizer uses only one bit.

IV. DYNAMIC SYSTEMS

In this section we analyze the effect of including the filter Q(q⁻¹) in the quantizer on the accuracy of the estimates of the parameters of the following dynamic system:

y(t) = Go(q⁻¹)u(t) + Ho(q⁻¹)w(t)    (20)

By using the additive noise model for the quantizer it is possible to set up the identification problem in the framework of Errors-in-Variables (EIV) systems [4], [5]. A difficulty for EIV systems is that they are, in general, not identifiable from second order properties of the input-output signals [16]. In [17] it has been shown that EIV models can be identified from second order properties if some a-priori knowledge of the system is available. The case of quantized data seems to be a "nice" setup in which to apply an identification procedure for EIV systems, since the model for the noise is "known". Thus, it is possible to obtain identifiability for the noise shaping quantizer case [17].

The EIV framework seems to be an interesting approach for analyzing System Identification problems using quantized data. However, in many applications it is possible to generate the input signal. In this case it can be said that the input signal is exactly known and is not subject to quantization. This is the approach used in this section. Assuming that the additive noise model for the quantizer is applicable, we have that the measurements are generated by the following model:

yq(t) = Go(q⁻¹)u(t) + Ho(q⁻¹)w(t) + S(q⁻¹)q(t)    (21)

where S(q⁻¹) = 1 − Q(q⁻¹) and q(t) is white uniformly distributed noise. We use the following Box-Jenkins model (the parameters of G(q⁻¹) and H(q⁻¹) are independent):

yq(t) = G(q⁻¹, ρ)u(t) + H(q⁻¹, η)e(t)    (22)
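The measurement model (21) can be simulated directly. The sketch below is our own construction under the additive-noise assumption, taking Ho = 1, the first-order plant Go(q⁻¹) = 0.36 q⁻¹/(1 − 0.7 q⁻¹), and the input u(t) = 2 sin(0.2 t) (the values used later in Example IV-A); the shaping filter S(q⁻¹) is chosen with zeros at ±ω0.

```python
import numpy as np

rng = np.random.default_rng(1)
N, w0 = 1000, 0.2
b0, a0, V, b = 0.36, 0.7, 5.0, 3

# Shaping filter with zeros at the input frequency +/- w0:
# S(q^-1) = (1 - e^{j w0} q^-1)(1 - e^{-j w0} q^-1) = 1 - 2 cos(w0) q^-1 + q^-2
S = np.array([1.0, -2.0 * np.cos(w0), 1.0])

def S_freq(w):
    """Frequency response of S(q^-1) at q = e^{j w}."""
    z = np.exp(-1j * w)
    return S[0] + S[1] * z + S[2] * z**2

u = 2.0 * np.sin(w0 * np.arange(N))
y = np.zeros(N)
for t in range(1, N):                        # Go(q^-1) = b0 q^-1 / (1 - a0 q^-1)
    y[t] = a0 * y[t - 1] + b0 * u[t - 1]
w = 0.05 * rng.standard_normal(N)            # sigma_w^2 = 0.0025
Delta = 2.0 * V / (2**b - 1)
q = rng.uniform(-Delta / 2, Delta / 2, N)    # white quantization-noise model
v = np.convolve(q, S)[:N]                    # shaped noise S(q^-1) q(t)
yq = y + w + v                               # measurements, equation (21)

print(abs(S_freq(w0)))   # effectively zero at the input frequency
```

Since |S(e^{jω0})| = 0, the shaped quantization noise carries (asymptotically) no power at the frequency where the input excites the system.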

Assuming that the model structure for the dynamic model in equation (20) is general enough to avoid undermodeling, the covariance of ρ̂ is given by (see e.g. [18, chapter 9]):

Pρ⁻¹ = ∫ (∂G/∂ρ)(∂G/∂ρ)^H (Φu/Φv) dω    (23)

where

Φv = σw²|Ho|² + σq²|S|²    (24)

In this section we show that, by choosing the filter Q(q⁻¹) appropriately, it is possible to obtain better estimates for the vector of parameters ρ.
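The effect of S on the integrand of (23)–(24) can be checked numerically. This is a sketch with illustrative values, not from the paper: we take σw² = 0.0025 (as in Example IV-A), a hypothetical σq² = ∆²/12 ≈ 0.17 (corresponding to b = 3, V = 5), and Ho = 1.

```python
import numpy as np

w0 = 0.2                      # frequency of the (multi-)sine input
sigw2, sigq2 = 0.0025, 0.17   # sigma_w^2 and sigma_q^2 = Delta^2/12 (b=3, V=5)

def phi_v(w, shaping):
    """Noise spectrum (24): Phi_v = sigw2 |Ho|^2 + sigq2 |S|^2, with Ho = 1."""
    if not shaping:
        return sigw2 + sigq2                   # naive quantizer: S = 1
    z = np.exp(-1j * w)
    Sw = 1.0 - 2.0 * np.cos(w0) * z + z**2     # S with zeros at +/- w0
    return sigw2 + sigq2 * abs(Sw)**2

# For a multi-sine input Phi_u is concentrated at w0, so the information
# integral (23) scales as 1/Phi_v(w0):
gain = phi_v(w0, False) / phi_v(w0, True)
print(gain)
```

With these illustrative numbers, Φv at ω0 drops from σw² + σq² to just σw², i.e. an information gain of roughly (σw² + σq²)/σw² ≈ 69 at the excited frequency.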

TABLE I
EMPIRICAL COVARIANCE AND MEAN FOR THE ESTIMATES OF ao AND bo.

bits | Quantizer     | cov{â}      | cov{b̂}      | E{â}  | E{b̂}
-----|---------------|-------------|-------------|-------|------
3    | Naive         | 0.18 · 10⁻⁴ | 0.84 · 10⁻⁴ | 0.695 | 0.381
3    | Noise Shaping | 0.07 · 10⁻⁴ | 0.20 · 10⁻⁴ | 0.702 | 0.354
4    | Naive         | 0.14 · 10⁻⁴ | 0.33 · 10⁻⁴ | 0.704 | 0.354
4    | Noise Shaping | 0.06 · 10⁻⁴ | 0.18 · 10⁻⁴ | 0.702 | 0.355

Definition 1: We say that an estimate ρ̂₁ is better than an estimate ρ̂₂ if³

P_{ρ₁}⁻¹ > P_{ρ₂}⁻¹    (25)

where P_{ρ₁} and P_{ρ₂} are the covariance matrices of ρ̂₁ and ρ̂₂ respectively.

³ A − B > 0 means that A − B is a positive definite matrix. This partial ordering of positive definite matrices implies that det{A} > det{B} and trace{A} > trace{B} (see Lemma 3 in [19]).

The partial ordering of positive definite matrices defined above implies that ρ̂₁ is preferable under any isotonic (order preserving) criterion used to measure the accuracy of the estimates. We next present a result which shows that the noise shaping quantizer provides a better estimate for the vector of parameters ρo.

Theorem 2: If the input u(t) is a multi-sine signal with frequencies ω0, ω1, ..., ωl, then using a noise shaping quantizer (Q ≠ 0) provides a better estimate for ρo. The accuracy is measured using any isotonic function of the covariance Pρ.

Proof: We have that the "true" model can be described as

yq(t) = Go(q⁻¹)u(t) + v(t)    (26)

where the spectrum of the noise v(t) is

Φv,ns = σw²|Ho|² + σq²|S|²    (27)
Φv,naive = σw²|Ho|² + σq²    (28)

In the above equations we have used the sub-index ns for the case Q ≠ 0, and naive for the case Q = 0. Using simple algebra we have that

1/Φv,ns = 1/(σw²|Ho|² + σq²|S|²)    (29)
        = 1/(σw²|Ho|² + σq²) + δ    (30)

where

δ = 1/(σw²|Ho|² + σq²|S|²) − 1/(σw²|Ho|² + σq²)    (31)
  = σq²(1 − |S|²) / [(σw²|Ho|² + σq²)(σw²|Ho|² + σq²|S|²)]    (32)
  = σq² r(ω) / Φv,naive    (33)

and

r(ω) = (1 − |S|²) / (σw²|Ho|² + σq²|S|²)    (34)

Thus, we have that

P_{ρ,ns}⁻¹ = P_{ρ,naive}⁻¹ + ∆    (35)

where

∆ = ∫ (∂G/∂ρ)(∂G/∂ρ)^H Φu (σq² r(ω)/Φv,naive) dω    (36)

Considering that the input is a multi-sine, it is sufficient to choose the filter S(q⁻¹) (or, equivalently, Q(q⁻¹)) with zeros located at the frequencies ω0, ω1, ..., ωl, so that S vanishes at the excited frequencies and ∆ is positive definite. Thus, using any isotonic (order preserving) function of the covariance matrix Pρ to measure the accuracy, we have that the noise shaping quantizer is, in general, preferable. □

Remark 1: The constraint imposed on the input spectrum in the previous Theorem seems restrictive. However, it is well known in the experiment design literature that the optimal input spectrum has this property (see [20]).

Example IV-A: Consider the system

y(t) = Go(q⁻¹)u(t) + w(t)    (37)

where w(t) is zero mean Gaussian white noise of variance 0.0025. The dynamic system is given by

Go(q⁻¹) = bo q⁻¹ / (1 − ao q⁻¹)    (38)

where bo = 0.36 and ao = 0.7. We run 100 Monte-Carlo simulations with N = 1000 data points for a quantizer with V = 5, for b = 3 and b = 4 bits. The input signal is u(t) = 2 sin(ω0 t) with ω0 = 0.2. The noise shaping quantizer uses a filter Q(q⁻¹) such that S(q⁻¹) = (1 − e^{jω0} q⁻¹)(1 − e^{−jω0} q⁻¹). For this identification problem, the subtractive dither technique is not necessary since the signals are not constant, as they were in Example III-A. We identify the system using a Box-Jenkins model⁴ from the quantized measurements. The results are presented in Table I, where it is shown that the accuracy of the estimates is improved by using the noise shaping configuration.

⁴ Note that the order of the noise model Hv in the Box-Jenkins model depends on the order of S(q⁻¹), which in turn depends on the order of the filter Q(q⁻¹).

The histogram of the quantization error is shown in Figures 6 and 7. In both figures it can be seen that the uniformly distributed noise approximation is not strictly valid in the case of the naive quantizer. However, the uniformly distributed noise assumption holds with a high degree of accuracy in the noise shaping quantizer (Q ≠ 0) case for both b = 3 and b = 4. Nonetheless, the results in Table I show that the noise model for quantization is a good predictor of actual performance.

Fig. 6. Histogram of the quantization noise for b = 3 using a naive quantizer (left), and using the noise shaping quantizer (right).

Fig. 7. Histogram of the quantization noise for b = 4 using a naive quantizer (left), and using the noise shaping quantizer (right).

V. CONCLUSIONS AND FUTURE WORK

In this paper we have analyzed the use of a more sophisticated quantization scheme in order to improve the accuracy of estimates obtained from quantized data. We have analyzed the finite sample properties of a LS estimate for a simple estimation problem, and also the asymptotic properties of the parameter estimates of a linear model. In both types of problem we have found that the noise shaping quantizer provides dramatic improvements in the accuracy of the estimates.

It seems possible to extend Theorem 2 to other types of input. The proof would be based on establishing conditions under which ∆ (in equation (36)) is a positive definite matrix. Notice that the transfer function S(q⁻¹) is a sensitivity function. This means that it must satisfy the Bode integral constraint [21], which implies that reducing the impact of quantization in certain frequency ranges may increase the impact in other frequency ranges.

The analysis developed in this paper has assumed that there is no under-modeling. It would be interesting to develop the same results in a more robust framework where there is uncertainty in the model. This has recently been done in the Experiment Design literature (see e.g. [10]).

It would also be important to develop numerical tools to optimize cost functions which represent the accuracy of the estimates. The optimization should be done with the filter Q(q⁻¹) used in the noise shaping quantizer as the design variable. The difficult issue here is that the function S(q⁻¹) must satisfy the Bode integral constraint.

VI. ACKNOWLEDGMENTS

The authors would like to thank E. Silva and Dr. T. Wigren for helpful comments and suggestions.

REFERENCES

[1] T. C. Yang, "Networked control system: a brief survey," IEE Proceedings - Control Theory and Applications, vol. 153, no. 4, pp. 403–412, 2006.
[2] E. I. Silva, G. C. Goodwin, D. E. Quevedo, and M. Derpich, "Optimal noise shaping for networked control systems," in European Control Conference (ECC), Kos, Greece, 2007.
[3] K. Tsumura and J. Maciejowski, "Optimal quantization of signals for system identification," in European Control Conference (ECC), Cambridge, U.K., 2003.
[4] H. Suzuki and T. Sugie, "System identification based on quantized i/o data corrupted with noises," in Proceedings of the 17th International Symposium on Mathematical Theory of Networks and Systems, Kyoto, Japan, July 2006.
[5] ——, "System identification based on quantized i/o data corrupted with noises and its performance improvement," in Proceedings of the 45th IEEE Conference on Decision and Control (CDC), San Diego, California, December 2006.
[6] L. Y. Wang, G. G. Yin, and J.-F. Zhang, "System identification using quantized data," in 14th IFAC Symposium on System Identification, Newcastle, Australia, 2006.
[7] Y. Zhao, L. Y. Wang, G. G. Yin, and J.-F. Zhang, "Identification of Wiener systems with binary-valued output observations," Automatica, vol. 43, no. 10, 2007.
[8] H. Jansson and H. Hjalmarsson, "Input design via LMIs admitting frequency-wise model specifications in confidence regions," IEEE Transactions on Automatic Control, vol. 50, no. 10, pp. 1534–1549, 2005.
[9] X. Bombois, G. Scorletti, M. Gevers, P. M. J. Van den Hof, and R. Hildebrand, "Least costly identification experiment for control," Automatica, vol. 42, no. 10, pp. 1651–1662, 2006.
[10] C. R. Rojas, J. S. Welsh, and G. C. Goodwin, "A receding horizon algorithm to generate binary signals with a prescribed autocovariance," in American Control Conference, 2007.
[11] J. C. Agüero, J. Welsh, and O. J. Rojas, "On the optimality of binary open-loop experiments to identify Hammerstein systems," in American Control Conference, 2007.
[12] J. C. Agüero and G. C. Goodwin, "Choosing between open and closed loop experiments in linear system identification," IEEE Transactions on Automatic Control, vol. 52, no. 8, pp. 1475–1480, 2007.
[13] N. Jayant and P. Noll, Digital Coding of Waveforms: Principles and Applications to Speech and Video. Englewood Cliffs, NJ: Prentice Hall, 1984.
[14] S. R. Norsworthy, R. Schreier, and G. C. Temes, Delta-Sigma Data Converters: Theory, Design, and Simulation. IEEE Press, 1997.
[15] F. Hayashi, Econometrics. Princeton University Press, 2000.
[16] B. D. O. Anderson and M. Deistler, "Identifiability in dynamic errors-in-variables models," Journal of Time Series Analysis, vol. 5, no. 1, pp. 1–13, 1984.
[17] J. C. Agüero and G. C. Goodwin, "Identifiability of errors in variables dynamic systems," to appear in Automatica, vol. 44, no. 2, 2008.
[18] L. Ljung, System Identification: Theory for the User, 2nd ed. Prentice Hall, 1999.
[19] J. C. Agüero and G. C. Goodwin, "On the optimality of open and closed loop experiments in system identification," in 45th IEEE Conference on Decision and Control (CDC), San Diego, California, 2006.
[20] G. C. Goodwin and R. Payne, Dynamic System Identification: Experiment Design and Data Analysis. Academic Press, 1977.
[21] G. C. Goodwin, S. F. Graebe, and M. E. Salgado, Control System Design. Upper Saddle River, NJ: Prentice Hall, 2001.