Weighted total least squares with weighted and hard constraints

A.R. Amiri-Simkooei

Technical Report, No 101 Series on Mathematical Geodesy

Department of Geomatics Engineering Faculty of Civil Engineering and Transportation University of Isfahan 81746-73441 Isfahan, Iran Tel.: +98 31 3793 5289, Fax: +98 31 3793 5085, Email: [email protected]


Preface

This technical report is part of the research I completed in 2012-2013. It deals with the weighted total least squares (WTLS) problem subject to hard and weighted constraints. Different aspects of this problem are investigated in this report. Due to some unexpected issues, publication of the work in a peer-reviewed journal was delayed; a variant of this report was finally submitted and published in the Journal of Surveying Engineering in 2017. Because the work is still relevant to many statistics and geomatics applications of the weighted total least squares problem with weighted and hard constraints, I decided to publish this research paper, along with this preface, as a Technical Report on our website at the University of Isfahan and on ResearchGate. I hope this is a step forward for the WTLS problem.

Please refer to this document as follows:

Amiri-Simkooei, A.R. (2013). Weighted total least squares with weighted and hard constraints, Technical Report, No 101, Series on Mathematical Geodesy, Department of Geomatics Engineering, University of Isfahan, Isfahan, Iran

Alireza Amiri-Simkooei October 2013


Abstract

Weighted total least squares (WTLS) has been widely used as a standard method to optimally adjust an errors-in-variables (EIV) model containing random errors both in the observation vector and in the coefficient matrix. An earlier work provided a simple and flexible formulation for WTLS based on the standard least squares theory. The formulation allows us to directly apply the available standard least squares theory to EIV models. Among such applications, this contribution formulates the WTLS problem subject to weighted/hard linear(ized) equality constraints on the unknown parameters. The constraints are properly incorporated into the system of equations of an EIV model in which a general structure for the (possibly singular) covariance matrix Q_A of the coefficient matrix is used. The formulation can easily take into consideration any number of weighted/hard linear and nonlinear constraints; hard constraints turn out to be a special case of the general formulation of the weighted constraints. Because the formulation is based on the standard least squares theory, the method automatically approximates the covariance matrix of the estimates, from which the precision of the 'constrained' estimates can be obtained. Three numerical examples with different scenarios are employed to demonstrate the efficacy of the proposed algorithm for geodetic applications.

Keywords. Weighted total least squares; Errors-in-variables model; linear equality constraints; 2D affine transformation.


1 Introduction

In many geodetic applications, constraints have been widely used to incorporate prior knowledge into an observed linear system of equations. Such constraints may be added to a system to improve the precision and accuracy of the results by reducing the number of unknown parameters or, accordingly, by increasing the redundancy of the system. There are two kinds of constraints. So-called 'equality constraints' may guarantee the stability of a linear system of equations (Regalia, 1994; Lacy and Bernstein, 2003), while so-called 'inequality constraints' may be imposed on a system to guarantee the feasibility of the solution. For example, a positive variance estimate is feasible, while a negative one is not; nonnegativity constraints guarantee feasible variance estimates (see Moghtased-Azar et al. 2014).

In geodetic applications, equality constraints are classified either as minimum constraints or as extra (also called redundant) constraints. The former, employed to deal with the so-called 'free network adjustment', are introduced to compensate for the rank deficiency of the design matrix (and hence of the normal matrix) of the linear system. This problem is usually referred to as the problem of datum definition in geodetic networks (see Teunissen 1985a; 2006; Dermanis 1994). The datum defect problem is also referred to as the 'minimum constraints' problem, in which the datum constraints are just enough to compensate for the rank deficiency of the system of equations. They can accordingly be handled by generalized inverses, of which the Moore-Penrose (or pseudo) inverse is a special case. This corresponds to the inner constraints in geodetic problems. For an estimability analysis of variant and invariant quantities in a rank-deficient model of observation equations we may refer to Baarda (1973); Dermanis and Grafarend (1981); Dermanis (1994); Teunissen (1985a); Xu (1997). The latter kind of constraints, which is the subject of the present contribution, is motivated by the existence of (redundant) prior knowledge that includes specific relationships between the unknown parameters (see Teunissen 2004).

The constraints can also be classified as 'hard' and 'weighted'. Both will be addressed in the present contribution. Hard constraints typically represent the case where exactly known functional relations exist between the unknown parameters. The so-called 'weighted constraints' are more relaxed: the functional relations may be slightly violated, depending on their precision. Weighted constraints may also be regarded as an additional linear system of equations and/or a pseudo-observation model.

Total least squares (TLS) originates from the work of Golub and van Loan (1980) in the mathematical literature, in which they introduced errors-in-variables (EIV) models. Many other researchers have applied the TLS method to various science and engineering problems. We may at least refer to van Huffel and Vandewalle (1991); Golub et al. (1999); Markovsky and van Huffel (2007). For a mathematical/statistical literature review we may refer to Xu et al. (2012). A linear EIV model differs from the standard linear model of observation equations because the coefficient matrix connecting the parameters to the random observables is also affected by random errors. In the geodetic literature, an EIV model treated as the 2D nonlinear symmetric Helmert transformation was introduced by Teunissen (1988). Although the terminology 'EIV' was not directly used, he gave the exact solution using a rotationally invariant covariance structure. Since then, many researchers have contributed to the solution of EIV models in the geodetic literature. We may refer to Felus (2004); Akyilmaz (2007);


Schaffrin and Wieser (2008); Schaffrin and Felus (2009); Fang (2011; 2013; 2014a); Tong et al. (2011, 2014); Shen et al. (2011); Xu et al. (2012); Xu and Liu (2014).

Previous work on equality-constrained TLS problems can be summarized as follows. Van Huffel and Vandewalle (1991) and Dowling et al. (1992) proposed closed-form solutions based on the singular value decomposition (SVD) for a TLS problem subject to only linear constraints. Schaffrin (2006) presented an iterative algorithm for the TLS problem with linear stochastic constraints. The TLS problem with quadratic constraints was investigated by Golub et al. (1999), Sima et al. (2004) and Beck and Ben-Tal (2006). Schaffrin and Felus (2009) proposed an iterative algorithm for the TLS problem subject to both linear and quadratic constraints. Fang (2011, 2014a, 2015) and Mahboub and Sharifi (2013a, 2013b) proposed weighted total least squares (WTLS) solutions subject to both linear and quadratic constraints. They consider a full covariance matrix of the observed quantities in the observation vector and in the coefficient matrix; the former, in addition, takes their possible correlation into consideration. Zhang et al. (2016) proposed a constrained total least squares (CTLS) method with identity covariance matrices; they convert physical constraints into mathematical form and use CTLS to solve a 2D affine transformation. Research is also ongoing in the field of TLS problems with inequality constraints, for which we may at least refer to De Moor (1990); Zhang et al. (2013); Fang (2014b).

It is also relevant to address some of the work carried out in the field of EIV models with singular covariance matrices. Fang (2011) and Snow (2012) presented the WTLS estimates in an EIV model subject to prior information and with singular covariance matrices. Schaffrin et al. (2014) investigate the case of singular dispersion matrices and present an algorithm under a rank condition that guarantees the existence of a unique TLS solution. Neitzel and Schaffrin (2016) consider the case of a singular dispersion matrix within the Gauss-Helmert model, establishing necessary and sufficient conditions for a unique residual vector and parameter vector; in particular, they treat the case where the covariance matrix of the predicted observables in the Gauss-Helmert model is singular. Jazaeri et al. (2015) present a WTLS adjustment with multiple constraints (including a quadratic constraint) and with singular covariance matrices. We highlight that the WTLS estimate in an EIV model with a singular covariance matrix has also been investigated by many other researchers, including at least Fang (2013, 2014a, b, 2015), Tong et al. (2011, 2014), Amiri-Simkooei (2013), and Amiri-Simkooei and Jazaeri (2012, 2013). Xu et al. (2012, 2014) and Shi et al. (2015) treated the singularity of the covariance matrix using their partial EIV model. Our goal in investigating the singularity of the covariance matrices in an EIV model is to present a mathematical proof of the equivalence of the WTLS solutions obtained either from a generalized inverse of the singular covariance matrix or from the regular inverse associated with a set of functionally independent variables.

In the study by Amiri-Simkooei and Jazaeri (2012) the WTLS problem was formulated using the standard least squares theory. An alternative derivation, without using Lagrange multipliers, was provided by Jazaeri et al. (2014). This formulation allows us to apply the existing standard least squares theory to EIV models. Based on this formulation, Amiri-Simkooei and Jazaeri (2013) applied the data snooping procedure to EIV models. The theory of least squares variance component estimation was also applied to EIV models by Amiri-Simkooei (2013). Amiri-Simkooei et al. (2016a) applied the idea to a mixed EIV model. This contribution presents another application, namely the WTLS problem subject to linear and nonlinear constraints, with particular attention to singular covariance matrices.


When incorporating the constraints with the unknown parameters, the WTLS problem can be solved iteratively, and a new algorithm for this purpose is presented. The WTLS problem is formulated subject to linear(ized) constraints, where the covariance matrix Q_A of the coefficient matrix has a general structure; structured covariance matrices (e.g. of Kronecker-product form) are considered to be special cases. This work differs from the previous works of Fang (2011, 2014a, 2015) and Mahboub and Sharifi (2013a, 2013b) in the following aspects. 1) The WTLS formulation can easily take into consideration any number of linear and nonlinear constraints; most previous works have considered only one quadratic constraint. Fang (2015) has also considered multiple nonlinear constraints using an iterative Newton method. Using this method one does not need to formulate the objective function with a g-inverse, and hence the covariance matrix to be inverted is regular. 2) The formulation is presented generally for weighted linear/nonlinear constraints; hard constraints turn out to be a special case of the general formulation of the weighted constraints. We note, however, that weighted constraints have already been used in an EIV model by Schaffrin (2006) and Tong et al. (2011) for the case of linear constraints, and that Fang (2011, 2014c) treats the EIV model with weighted constraints and compares it with hard constraints in his transformation example. 3) The formulation takes into account the possible singularity of the covariance matrix Q_A using the theory of generalized inverses. 4) Because the formulation is based on the standard least squares theory, the method automatically provides the covariance matrix of the estimates, from which the precision of the 'constrained' estimates can be obtained; in fact, the available standard least squares theory with constraints can be applied to the WTLS problem with constraints. 5) The formulation is shown to be conceptually simple, practically efficient and of low algorithmic complexity. Three numerical examples with different scenarios are employed to demonstrate the efficacy of the proposed algorithm in geodetic applications.

This report is organized as follows. Section 2 presents a general solution to the WTLS problem with weighted and hard constraints; a singular covariance matrix Q_A is also treated in this section. In Sect. 3 a few remarks on the formulation of the WTLS with constraints are highlighted, with special attention to the WTLS with linear and quadratic constraints. In Sect. 4, simulation studies and empirical examples give insight into the efficacy of the proposed algorithm. Finally, we draw some conclusions in Sect. 5.

2 WTLS with weighted and hard constraints

2.1 Errors-in-variables (EIV) model

Consider the following EIV model

  y = (A - E_A) x + e_y                                                         (1)

whose stochastic properties are characterized by

  e := [e_y; e_a] ~ (0, sigma_0^2 [Q_y 0; 0 Q_A]),   e_a = vec(E_A)             (2)

where e_y is the m-vector of the errors of the observations, E_A is the matrix of random errors of the coefficient matrix A, A is the m x n coefficient matrix, x is the n-vector of unknown parameters, and Q_y = D(y) and Q_A = D(vec A) are the corresponding symmetric and non-negative definite dispersion matrices, of size m x m and mn x mn, for the observation vector and the coefficient matrix, respectively. The symbol 'vec' denotes the operator that converts a matrix into a column vector by stacking its columns one underneath the other. In both expressions, sigma_0^2 is the (un)known variance factor of unit weight. In some cases it is more convenient to rewrite Eq. (1) in the form

  y = (x^T ⊗ I_m)(a - e_a) + e_y                                                (3)

where e_a = vec(E_A) and a = vec(A).
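As a small illustration of the bookkeeping behind Eqs. (1)-(3), the following Python sketch (with arbitrary toy matrices; all names are hypothetical and not part of the report) verifies the identity (x^T ⊗ I_m) vec(A) = A x used to switch between the matrix and the vectorized form of the model, and assembles the block-diagonal dispersion of Eq. (2):

import numpy as np

# Toy EIV bookkeeping: sizes and matrices are arbitrary illustrative values.
m, n = 4, 2
rng = np.random.default_rng(0)
A = rng.normal(size=(m, n))            # coefficient matrix
x = np.array([1.0, 2.0])               # unknown parameters (true values)
a = A.flatten(order='F')               # a = vec(A): stack the columns

# Identity used in Eq. (3): (x^T kron I_m) vec(A) = A x
lhs = np.kron(x, np.eye(m)) @ a
assert np.allclose(lhs, A @ x)

# Stochastic model of Eq. (2): block-diagonal dispersion of [e_y; e_a]
Qy = 0.01 * np.eye(m)
QA = 0.02 * np.eye(m * n)              # full rank here; Sect. 2.2 treats singular QA
Q = np.block([[Qy, np.zeros((m, m * n))],
              [np.zeros((m * n, m)), QA]])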

2.2 EIV model with singular covariance matrices

The possible singularity of Q_A is addressed in this section. Based on Rao's estimation theory, any symmetric non-negative definite matrix can be used as the covariance matrix of the observables (Rao 1973). We use the theory of generalized inverses (Rao and Mitra 1972) to handle the problem of a singular covariance matrix in the least squares solution. Q_A is in fact a non-negative definite covariance matrix, and hence possibly a singular matrix. Dermanis (2015a) identifies two kinds of singularities as extreme cases in an EIV model. They are itemized as follows:

Case 1: The entries of A are functions of a smaller number of random variables ā, with non-singular covariance matrix Q_ā, polluted by zero-mean errors. This is indeed the case when, for example, dealing with duplication of elements and/or constant entries in A.

Case 2: The entries of A are functionally independent variables, but their observations are obtained from prior estimates, which are stochastically dependent and have a singular covariance matrix. This may result when the prior estimates (e.g. coordinates) are obtained by means of minimal constraints.

The above two cases are now addressed.

2.2.1 Special form of singularity

To show the equivalence of the solutions using the theories of generalized and regular inverses, we may rewrite the elements of the design matrix A as

  vec(A) = a = F ā + a_0                                                        (4)

where F is an mn x t full column rank matrix, ā is a t-vector of functionally independent random elements of A, and a_0 is an mn-vector containing the constant elements of vec(A). When taking the errors e_ā of ā into account, one obtains

  e_a = vec(E_A) = F e_ā                                                        (5)

or, viewed as a linear system to be solved for e_ā,

  F e_ā = e_a                                                                   (6)

Therefore, Eq. (3) is rewritten as

  y = (x^T ⊗ I_m)(a - F e_ā) + e_y                                              (7)

The total least squares solution of the EIV model is then sought in the following minimization problem:

  e_y^T Q_y^-1 e_y + e_ā^T Q_ā^-1 e_ā ≡ min                                      (8)

where Q_ā is the positive definite covariance matrix of ā. Application of the error propagation law to Eq. (6) yields

  Q_A = F Q_ā F^T                                                               (9)

Because the errors in e_ā are functionally independent, Eq. (6) is a consistent system of equations. Therefore its solution is exact and there are many ways to solve this equation. For example, the error vector e_ā can uniquely be obtained as

  e_ā = (F̄^⊥T F)^-1 F̄^⊥T e_a                                                    (10)

where the subspace ℛ(F̄) is complementary to ℛ(F), i.e. ℛ(F) ⊕ ℛ(F̄) = R^(mn), with ⊕ the direct sum of two subspaces and ℛ the range space of a matrix. Furthermore, ℛ(F̄^⊥) is a complement to ℛ(F̄), satisfying ℛ(F̄^⊥) ⊕ ℛ(F̄) = R^(mn) and F̄^T F̄^⊥ = 0. The least squares solution of the EIV model is sought through minimization of the quadratic form e_y^T Q_y^-1 e_y + e_ā^T Q_ā^-1 e_ā, which with Eq. (10) can be reformulated as

  e_y^T Q_y^-1 e_y + e_a^T Q_A^- e_a                                            (11)

or, for the second term,

  e_ā^T Q_ā^-1 e_ā = e_a^T Q_A^- e_a                                            (12)

where

  Q_A^- = F̄^⊥ (F^T F̄^⊥)^-1 Q_ā^-1 (F̄^⊥T F)^-1 F̄^⊥T                              (13)

is a reflexive generalized inverse (Rao 1997; Dermanis 1998) of Q_A = F Q_ā F^T, satisfying rank(Q_A^-) = rank(Q_A) and the following two identities:

  Q_A Q_A^- Q_A = Q_A        (generalized inverse)
  Q_A^- Q_A Q_A^- = Q_A^-    (reflexive property)                               (14)

The proof can simply be followed. This shows that any reflexive generalized inverse Q_A^- can be used as the weight matrix in an EIV model. The generalized inverse Q_A^- can also directly be obtained from Eq. (9) as Q_A^- = F̃^T Q_ā^-1 F̃, where F̃ = (F̄^⊥T F)^-1 F̄^⊥T is an arbitrary (due to the arbitrary choice of F̄) left inverse of F. One particular choice is F̄^⊥ = F, or F̄ = F^⊥, which gives

  Q_A^+ = F (F^T F)^-1 Q_ā^-1 (F^T F)^-1 F^T                                     (15)

The preceding generalized inverse, in addition to the properties in Eq. (14), satisfies the following two properties:

  (Q_A Q_A^+)^T = Q_A Q_A^+    (symmetric property)
  (Q_A^+ Q_A)^T = Q_A^+ Q_A    (symmetric property)                              (16)

indicating that Q_A^+ is a Moore-Penrose inverse (pseudo-inverse) of Q_A (see Rao 1997; Dermanis 1998).

In conclusion, we have e_ā^T Q_ā^-1 e_ā = e_a^T Q_A^- e_a, where Q_A^- is an arbitrary reflexive generalized inverse of the non-negative definite matrix Q_A.
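A minimal numerical sketch, with arbitrary test matrices and hypothetical names, of the statements in Eqs. (14)-(16): for Q_A = F Q_ā F^T and the particular choice F̄^⊥ = F, the matrix of Eq. (15) is the Moore-Penrose inverse of Q_A and the quadratic form of the independent errors is reproduced.

import numpy as np

# Arbitrary full-column-rank F and positive definite Q_abar (test values only).
rng = np.random.default_rng(1)
mn, t = 8, 3
F = rng.normal(size=(mn, t))
L = rng.normal(size=(t, t))
Q_abar = L @ L.T + t * np.eye(t)
Q_A = F @ Q_abar @ F.T                                  # singular (rank t < mn)

# Particular choice Fbar_perp = F  ->  Moore-Penrose inverse, Eq. (15)
FtF_inv = np.linalg.inv(F.T @ F)
Q_A_plus = F @ FtF_inv @ np.linalg.inv(Q_abar) @ FtF_inv @ F.T

# Properties of Eqs. (14) and (16)
assert np.allclose(Q_A @ Q_A_plus @ Q_A, Q_A)           # generalized inverse
assert np.allclose(Q_A_plus @ Q_A @ Q_A_plus, Q_A_plus) # reflexive
assert np.allclose((Q_A @ Q_A_plus).T, Q_A @ Q_A_plus)  # symmetric
assert np.allclose(Q_A_plus, np.linalg.pinv(Q_A))       # pseudo-inverse

# Quadratic-form equivalence e_a^T Q_A^+ e_a = e_abar^T Q_abar^-1 e_abar
e_abar = rng.normal(size=t)
e_a = F @ e_abar                                        # Eq. (5)
q1 = e_a @ Q_A_plus @ e_a
q2 = e_abar @ np.linalg.inv(Q_abar) @ e_abar
assert np.isclose(q1, q2)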

Motivating Example: We closely follow an example by Dermanis (2015a). When a set of points is observed in two coordinate systems, the transformation parameters can be estimated in an EIV model using WTLS. The planar linear affine transformation (six-parameter transformation) is now considered. The model is expressed as

  [u_t; v_t] = [u_s  v_s  1  0    0    0;  0  0  0  u_s  v_s  1] [a_1  b_1  c_1  a_2  b_2  c_2]^T      (17)

where the parameters c_1 and c_2 are the shifts along the u and v axes, respectively. The other parameters a_1, a_2, b_1 and b_2 are related to the four physical parameters of a 2D linear transformation, which include two scales along the u and v axes, one rotation, and one non-perpendicularity (or affinity) parameter. u_s and v_s are the coordinates of a point in the start system and u_t and v_t are their corresponding counterparts in the target system.

The coordinates of a series of points (i.e. i = 1, ..., k points) are observed in both the start and the target systems. Equation (17) gives in total m = 2k equations with six unknown parameters to be estimated. The observation vector and the design matrix are

  y = [u_t1, v_t1, ..., u_tk, v_tk]^T,
  A = [u_s1  v_s1  1  0     0     0
       0     0     0  u_s1  v_s1  1
       ...
       u_sk  v_sk  1  0     0     0
       0     0     0  u_sk  v_sk  1]                                             (18)

Denoting the functionally independent random variables of the design matrix A as ā gives

  ā = [u_s1, v_s1, ..., u_sk, v_sk]^T                                            (19)

The fully populated full-rank covariance matrices of the coordinates in the start and target systems are denoted as Q_s and Q_t, respectively. With a = vec(A), the elements of the design matrix can be written as

  a = F ā + a_0                                                                  (20)

where the mn x 2k structure matrix F places each start coordinate at its two positions in vec(A) and a_0 collects the constant entries, and the covariance matrix of the functionally independent elements is

  Q_ā = Q_s                                                                      (21)

The covariance matrix of the coefficient matrix then reads (Amiri-Simkooei and Jazaeri, 2012)

  Q_A = D(vec A) = F Q_ā F^T                                                     (22)

We note that Q_A suffers from a rank deficiency of s = mn - 2k = 6m - m = 5m. Therefore its regular inverse does not exist. We may for example use its pseudo-inverse from Eq. (15). One can show that F^T F = 2 I_2k, which, with Eq. (15), gives

  Q_A^+ = (1/4) F Q_ā^-1 F^T                                                     (23)

which satisfies

  e_a^T Q_A^+ e_a = e_ā^T Q_ā^-1 e_ā                                             (24)
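The following sketch illustrates, for arbitrary coordinates and a diagonal test covariance (the report allows a fully populated Q_s), how the structure matrix F of this affine example can be assembled, and checks F^T F = 2 I_2k and the pseudo-inverse of Eq. (23). Variable names are hypothetical.

import numpy as np

# Structure matrix F for the 2D affine EIV model of Eqs. (17)-(23).
rng = np.random.default_rng(2)
k = 4                                         # number of points (test value)
m, n = 2 * k, 6
us, vs = rng.uniform(0, 100, k), rng.uniform(0, 100, k)

# Design matrix of Eq. (18): rows [us vs 1 0 0 0] and [0 0 0 us vs 1]
A = np.zeros((m, n))
A[0::2, 0], A[0::2, 1], A[0::2, 2] = us, vs, 1.0
A[1::2, 3], A[1::2, 4], A[1::2, 5] = us, vs, 1.0

# F places each start coordinate (Eq. 19) at its two positions in vec(A)
abar = np.ravel(np.column_stack((us, vs)))    # (us1, vs1, ..., usk, vsk)
F = np.zeros((m * n, 2 * k))
for i in range(k):
    F[0 * m + 2 * i, 2 * i] = 1.0             # us_i in column 1 of A
    F[3 * m + 2 * i + 1, 2 * i] = 1.0         # us_i in column 4 of A
    F[1 * m + 2 * i, 2 * i + 1] = 1.0         # vs_i in column 2 of A
    F[4 * m + 2 * i + 1, 2 * i + 1] = 1.0     # vs_i in column 5 of A

a0 = A.flatten(order='F') - F @ abar          # constant part (zeros and ones)
assert np.allclose(F.T @ F, 2 * np.eye(2 * k))  # each coordinate occurs twice

Q_abar = 0.01 ** 2 * np.eye(2 * k)            # diagonal test covariance only
Q_A = F @ Q_abar @ F.T                        # rank 2k, deficiency 5m
Q_A_plus = 0.25 * F @ np.linalg.inv(Q_abar) @ F.T   # Eq. (23)
assert np.allclose(Q_A_plus, np.linalg.pinv(Q_A))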

2.2.2 General form of singularity

Dermanis (2015a) suggests a general case in which Q_A suffers from two kinds of rank deficiencies: 1) duplication of elements and/or constant elements in A; and 2) intrinsic rank deficiencies of the functionally independent elements of A (for example due to a datum problem). This section briefly explains this approach, which is closely related to the Grafarend and Schaffrin (1993) approach based on the SVD. Let the rank of Q_A be rank(Q_A) = r; it then suffers from a rank deficiency of s = mn - r. We may use the eigenvalue decomposition (or singular value decomposition, SVD) of Q_A as (Dermanis 2015a; Dermanis and Rummel 2000)

  Q_A = U Λ U^T = U_1 Λ_1 U_1^T                                                  (25)

where U = [U_1 ⋮ U_2], with U_1 of size mn x r and U_2 of size mn x (mn - r), is an orthogonal matrix satisfying

  U U^T = U^T U = I_mn,  U_1^T U_1 = I_r,  U_2^T U_2 = I_(mn-r),  U_1^T U_2 = 0   (26)

and

  Λ = [Λ_1  0; 0  0],   with  Λ_1 = diag(λ_1, ..., λ_r)                           (27)

where, without loss of generality, we may assume λ_1 ≥ λ_2 ≥ ... ≥ λ_r > 0. We note that the columns of U form an orthogonal basis of R^(mn); the subspaces ℛ(U_1) = ℛ(Q_A) ⊂ R^(mn) and ℛ(U_2) are orthogonal complements in R^(mn), i.e. ℛ(U_1) ⊕ ℛ(U_2) = R^(mn) and U_1^T U_2 = 0.

Because U is an invertible matrix, we may introduce the modified error ē as

  ē = U^T e_a = [U_1^T e_a; U_2^T e_a] = [ē_1; ē_2]   <->   e_a = U ē             (28)

having the covariance matrix

  Q_ē = U^T Q_A U = Λ = [Λ_1  0; 0  0] = [Q_ē1  0; 0  Q_ē2]                        (29)

which results in Q_ē1 = Λ_1 and Q_ē2 = 0. This indicates that ē_2 = 0. One then has

  e_a = U ē = [U_1 ⋮ U_2] [ē_1; 0] = U_1 ē_1   <->   ē_1 = U_1^T e_a              (30)

indicating that e_a ∈ ℛ(U_1) = ℛ(Q_A). Equation (3) is then reformulated as

  y = (x^T ⊗ I_m)(a - U_1 ē_1) + e_y                                              (31)

A natural contribution to the total least squares problem is the quadratic form ē_1^T Q_ē1^-1 ē_1, which, with Q_ē1 = Λ_1, gives

  ē_1^T Λ_1^-1 ē_1 = e_a^T U_1 Λ_1^-1 U_1^T e_a = e_a^T Q_A^- e_a                 (32)

where

  Q_A^- = U_1 Λ_1^-1 U_1^T                                                        (33)

satisfies the following four properties:

  Q_A Q_A^- Q_A = U_1 Λ_1 U_1^T = Q_A            (generalized inverse)
  Q_A^- Q_A Q_A^- = U_1 Λ_1^-1 U_1^T = Q_A^-     (reflexive property)
  (Q_A Q_A^-)^T = U_1 U_1^T = Q_A Q_A^-          (symmetric property)
  (Q_A^- Q_A)^T = U_1 U_1^T = Q_A^- Q_A          (symmetric property)             (34)

indicating that Q_A^- is the pseudo-inverse of Q_A, i.e. Q_A^- = Q_A^+ = U_1 Λ_1^-1 U_1^T.
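A short numerical sketch of the eigenvalue-decomposition route of Eqs. (25)-(34), using an arbitrary rank-deficient test matrix: it keeps the r positive eigenvalues and checks that the resulting matrix is the pseudo-inverse and that admissible errors lie in ℛ(U_1). All names are hypothetical.

import numpy as np

# Arbitrary symmetric non-negative definite test matrix of rank r.
rng = np.random.default_rng(3)
mn, r = 10, 4
B = rng.normal(size=(mn, r))
Q_A = B @ B.T

lam, U = np.linalg.eigh(Q_A)                    # eigenvalues ascending
order = np.argsort(lam)[::-1]                   # sort descending (Eq. 27)
lam, U = lam[order], U[:, order]
U1, Lam1 = U[:, :r], np.diag(lam[:r])           # positive part
U2 = U[:, r:]                                   # null-space basis

Q_A_plus = U1 @ np.linalg.inv(Lam1) @ U1.T      # Eq. (33)
assert np.allclose(Q_A, U1 @ Lam1 @ U1.T)       # Eq. (25)
assert np.allclose(Q_A_plus, np.linalg.pinv(Q_A))

# Any admissible error e_a lies in range(U1) = range(Q_A); its component
# along U2 vanishes (Eqs. 28-30)
e_a = Q_A @ rng.normal(size=mn)
assert np.allclose(U2.T @ e_a, 0.0)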

2.2.3 Relation between two representations

To interrelate the methodologies of the previous two subsections, for any nonsingular r x r matrix S and nonsingular mn x mn matrix T, we may choose (Dermanis, 2015a)

  F = U_1 S   and   F̄ = T U_2                                                    (35)

which gives

  F̄^⊥ = T^-T U_1                                                                 (36)

We note that

  ℛ(F) = ℛ(U_1 S) = ℛ(U_1)   and   ℛ(F̄) = ℛ(T U_2)                                (37)

Equation (30), with Eq. (35), gives the error vector e_a as

  e_a = U_1 ē_1 = U_1 S S^-1 ē_1 = F S^-1 ē_1                                     (38)

which, with e_ā = S^-1 ē_1, gives

  e_a = F e_ā                                                                     (39)

and

  e_ā = S^-1 ē_1 = S^-1 U_1^T e_a                                                 (40)

Application of the error propagation law to the preceding equation yields

  Q_ā = S^-1 Q_ē1 S^-T                                                            (41)

Therefore one has

  e_ā^T Q_ā^-1 e_ā = e_a^T U_1 S^-T S^T Q_ē1^-1 S S^-1 U_1^T e_a = e_a^T U_1 Q_ē1^-1 U_1^T e_a = e_a^T Q_A^+ e_a      (42)

in agreement with Q_A^+ in Eq. (34).

A more general case may start from Eq. (13), in which a more relaxed generalized inverse (i.e. a reflexive generalized inverse) can be used. We use Q_A^- = F̃^T Q_ā^-1 F̃, where F̃ = (F̄^⊥T F)^-1 F̄^⊥T is a left inverse of F. Equations (35) and (36) then give

  F̃ = (U_1^T T^-1 U_1 S)^-1 U_1^T T^-1

or, with U_1^T U_1 = I_r,

  F̃ = S^-1 (U_1^T T^-1 U_1)^-1 U_1^T T^-1

= 0, which follows

Œ

= 0.

(132)

Fig. 1 Algorithm for solving a weighted total least squares problem subject to linear and quadratic constraints
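Since the detailed update formulas referred to in Fig. 1 (Eqs. 87-89) are not reproduced here, the following Python sketch only illustrates the overall loop structure for the special case of hard linear constraints: the EIV model is approximated at each step by a Gauss-Markov model with an x-dependent weight matrix, and bordered (Lagrange) normal equations enforce C^T x = w. It is a simplified illustration with hypothetical names, not the report's exact algorithm.

import numpy as np

def wtls_linear_hard_constraints(y, A, Qy, QA, C, w, tol=1e-12, max_iter=50):
    """Sketch of the Fig. 1 loop for hard linear constraints C^T x = w only."""
    m, n = A.shape
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # OLS initial guess
    for _ in range(max_iter):
        xkron = np.kron(x, np.eye(m))                 # (x^T kron I_m)
        Qx = Qy + xkron @ QA @ xkron.T                # x-dependent dispersion
        W = np.linalg.inv(Qx)
        N = A.T @ W @ A                               # normal matrix
        u = A.T @ W @ y
        # bordered (Lagrange) system enforcing C^T x = w exactly
        q = C.shape[1]
        K = np.block([[N, C], [C.T, np.zeros((q, q))]])
        sol = np.linalg.solve(K, np.concatenate([u, w]))
        x_new = sol[:n]
        if np.linalg.norm(x_new - x) < tol:           # threshold as in Fig. 1
            return x_new
        x = x_new
    return x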


Due to the intrinsic nonlinearity of the WTLS problem and of the quadratic constraints used, the WTLS problem is to be solved by an iterative algorithm. One has to start with an initial guess for the unknown parameters, for which one relies on the ordinary least squares solution without taking the constraints into account. Figure 1 shows the schematic algorithm to solve the WTLS problem with the constraints. The estimated vector is then used as an updated initial guess and the procedure is repeated until the estimates no longer change with further iterations.

4 Numerical results and discussions

The following three examples (with different scenarios) demonstrate the efficacy of the proposed WTLS algorithm subject to linear and quadratic weighted/hard constraints.

4.1 Simulated example

The first example is a simulated example over 1,000,000 independent runs. We aim to study the efficacy of the presented WTLS formulation subject to the constraints. The results of three scenarios are presented: 1) solution without the constraints; 2) solution with a linear constraint; and 3) solution with both linear and quadratic constraints. Consider a simulated example for the system of equations y = (A - E_A)x + e_y, where the actual observation vector and the true parameter vector are

  y_0 = [11, 0, 9, 0, 16]^T,   x = [1, 2, 3]^T                                   (133)

together with the corresponding actual 5 x 3 design matrix A_0. Therefore, in this simulated example, the actual observation vector, the actual design matrix and the unknown parameters are given. We now aim to add random errors e_y and E_A to the observation vector and the design matrix. The random errors are assumed to be independent and normally distributed with zero expectation and the variances provided in Table 2. These errors are added to the actual observation vector and the design matrix to make the 'simulated' observations.

The linear and quadratic hard constraints of the three scenarios are x_3 - x_1 = 2 and x_1^2/4 + x_2^2/2 + x_3^2/3 = 5.25, respectively. This gives C^T = [-1 0 1], P_1 = diag(1/4, 1/2, 1/3), ℓ = 2, and w = w_1 = 5.25.

We thus have one linear constraint (q_1 = 1) and one quadratic constraint (q_2 = 1), which gives q = 2. The unknown parameters are then estimated using the WTLS formulation presented in Fig. 1 for the three scenarios explained above. The simulation is repeated over 1,000,000 independent runs. The histograms of the estimated unknown parameters are presented in Fig. 2 for the three scenarios. The average (over 1,000,000 runs) values of the estimated parameters are presented in Table 3.

The results are very close to their expected values, i.e., x_1 = 1, x_2 = 2 and x_3 = 3, indicating that the bias induced by the nonlinearity of the WTLS problem and the quadratic constraint is indeed not significant (see Teunissen, 1984; 1985b; 1990). The algorithms of Fang (2011, 2014a) and Mahboub and Sharifi (2013a, 2013b) were also applied to this example. Our results were identical to those of Fang's algorithm using the linear and/or quadratic constraints. This also held for Mahboub and Sharifi's


algorithm for the linear constraints. But this algorithm failed to converge to the optimal solution when the quadratic constraint was included. We now present the covariance matrix of the 'constrained' estimates, which was not provided by the studies of Fang (2011, 2014a) and Mahboub and Sharifi (2013a, 2013b). It is provided using two strategies. 1) The covariance matrix Q_x̂ is directly obtained using its general form in Eq. (105) for the above-mentioned scenarios for each run; the final Q_x̂ is then obtained as the average of the Q_x̂'s over the 1,000,000 independent runs. 2) The second strategy uses the estimated values of x to estimate its covariance matrix. Let x̂_1, x̂_2, and x̂_3 be three vectors of size 1,000,000 x 1 consisting of the estimates for x_1 = 1, x_2 = 2 and x_3 = 3, respectively. The 1,000,000 x 3 residual matrix of the estimates is obtained as Ê = [x̂_1 - 1, x̂_2 - 2, x̂_3 - 3]. The covariance matrix of the unknown parameters can then be unbiasedly estimated by the least squares variance component estimation (LSVCE) method in a multivariate model as Σ_x̂ = Ê^T Ê / 1,000,000. For more information we refer to Teunissen and Amiri-Simkooei (2008) and Amiri-Simkooei (2009).

The correlation matrix R_x̂ obtained from Q_x̂ and from Σ_x̂ is presented in Table 4 (see also the sketch below). The results of these two strategies are nearly identical, indicating that the covariance matrix Q_x̂ of Eq. (105) is indeed a very good approximation of the real covariance matrix of the WTLS estimates, both for the unconstrained and the constrained cases. The least precise estimates are obtained for the unconstrained case, while the most precise estimates are obtained when both the linear and quadratic constraints are included (see also Fig. 2). This is what we would expect, because introducing constraints increases the degrees of freedom (or, accordingly, reduces the number of unknowns), resulting in more precise estimates. Another observation is that the histograms in Fig. 2 show that the posterior distribution of the estimated parameters is approximately normal, provided that the original observations are normally distributed. This is what one would expect from the standard least squares theory, and our results confirm it for the EIV application considered here.
Table 2 Variances (x 10^-3) of the elements of the observation vector and the design matrix

  i               1      2      3      4      5
  sigma^2_y,i    17     10     15      6     13
  sigma^2_a1,i    8     10     16      7      8
  sigma^2_a2,i   11      9     16     19      6
  sigma^2_a3,i    7     14      9     18     18

Table 3 Estimated parameters (averaged over 1,000,000 runs) for three scenarios: unconstrained WTLS (WTLS), WTLS with linear constraint (WTLS+LC), and WTLS with linear and quadratic constraints (WTLS+LC+QC).

  Parameter    WTLS      WTLS+LC    WTLS+LC+QC
  x_1          1.0002    1.0006     0.9998
  x_2          2.0016    2.0013     1.9989
  x_3          3.0007    3.0006     2.9998


Table 4 Correlation matrix R_x̂ (six 3 x 3 matrices in which diagonal entries are standard deviations of the estimates and off-diagonal entries are correlation coefficients) of the WTLS estimates for three scenarios, obtained from the covariance matrix Q_x̂ of the estimates in Eq. (105) and from the estimated covariance matrix Σ_x̂ = Ê^T Ê / 1,000,000, in which Ê is the residual matrix of the estimates; unconstrained WTLS (WTLS), WTLS with linear constraint (WTLS: LC), and WTLS with linear and quadratic constraints (WTLS: LC+QC).

                      WTLS                        WTLS: LC                     WTLS: LC+QC
             x_1      x_2      x_3       x_1      x_2      x_3       x_1      x_2      x_3
From Q_x̂:
  x_1       0.099   -0.109   -0.299     0.052   -0.099    1.000     0.044   -0.999    1.000
  x_2      -0.109    0.092   -0.014    -0.099    0.092   -0.099    -0.999    0.055   -0.999
  x_3      -0.299   -0.014    0.078     1.000   -0.099    0.052     1.000   -0.999    0.044
From Σ_x̂:
  x_1       0.099   -0.110   -0.299     0.052   -0.101    1.000     0.044   -0.999    1.000
  x_2      -0.110    0.092   -0.015    -0.101    0.092   -0.101    -0.999    0.055   -0.999
  x_3      -0.299   -0.015    0.079     1.000   -0.101    0.052     1.000   -0.999    0.044

Fig. 2 Histograms of the estimated parameters for the three scenarios; x_1 (left frames), x_2 (middle frames), and x_3 (right frames); unconstrained WTLS (top frames), WTLS with linear constraint (middle frames), and WTLS with linear and quadratic constraints (bottom frames).


4.2 Two-dimensional (2D) affine transformation

The second example is a 2D planar linear affine transformation. The coordinates of ten data points, measured in the start and target coordinate systems, are listed in Table 5. The data come from Amiri-Simkooei and Jazaeri (2013), where an intentional gross error of size 0.1 was added to one of the components; in the present contribution this data set is used without the gross error. The nominal standard deviation of the measurements is 0.01 m in both systems. The model for the planar linear affine transformation (six-parameter transformation) is presented in Eq. (17). The WTLS results of four scenarios are presented for this data set. The first scenario takes no constraint into account. The second scenario takes a linear constraint into account (LC); the third scenario takes a linear and a quadratic constraint (LC+QC(1)); and the fourth scenario takes a linear constraint and two quadratic constraints (LC+QC(2)). For each scenario, the constraints are considered once to be hard and once to be weighted. We note that the studies of Fang (2011, 2014a) and Mahboub and Sharifi (2013a, 2013b) cannot handle the fourth scenario for the hard constraints, nor the second, third and fourth scenarios for the weighted constraints. Fang (2015) can handle the case with hard constraints, and our results on the WTLS estimates along with their precision are identical to his results. The linear and quadratic constraints are as follows:

1) Linear constraint: c_1 - c_2 = 0, which gives C^T = [0 0 1 0 0 -1] and ℓ = 0. For the hard constraint we have sigma_ℓ^2 = 0, while for the weighted constraint we assume sigma_ℓ^2 = 0.01^2 m^2.

2) First quadratic constraint: a_1^2/16 + b_1^2/4 = 2, which gives P_1 = diag(1/16, 1/4, 0, 0, 0, 0) and w_1 = 2. For the hard constraint we have sigma_w1^2 = 0, while for the weighted constraint we assume sigma_w1^2 = 0.001^2 m^2.

3) Second quadratic constraint: a_2^2/4 + b_2^2/16 = 2, which gives P_2 = diag(0, 0, 0, 1/4, 1/16, 0) and w_2 = 2. For the hard constraint we have sigma_w2^2 = 0, while for the weighted constraint we assume sigma_w2^2 = 0.001^2 m^2. In this case, the first and second quadratic constraints together make the covariance matrix of the weighted constraints Q_c = 0.001^2 I_2 m^2, where I_2 is an identity matrix of size 2.

The estimated affine transformation parameters along with their standard deviations for the four cases are presented in Table 6. The results include: 1) the WTLS estimates in which no constraint is used (case 1); 2) the WTLS estimates subject to a linear constraint (case 2); 3) the WTLS estimates subject to a linear constraint and a quadratic constraint (case 3); and 4) the WTLS estimates subject to a linear constraint and two quadratic constraints (case 4). A few observations can be highlighted from the results of this table. The first observation is that including more constraints in the WTLS problem always increases (or at least does not decrease) the precision of the estimates. This is what we would expect, because adding a constraint to a linear system increases the number of equations (or, accordingly, decreases the number of unknowns), both of which increase the precision of the estimates. The second observation is that a similar situation holds when comparing the results of the hard constraints with those of the weighted constraints: the precision of the estimates obtained with the hard constraints is better than that obtained with the weighted constraints. This is also what we would expect. The third observation is that the WTLS estimates subject to the hard constraints fulfill the constraints exactly. For example, one can simply verify that the linear constraint ĉ_1 - ĉ_2 = 0 holds for the three constrained scenarios mentioned above (see also the sketch below). This is also what we would expect, and it confirms the appropriate formulation of the proposed method. As expected, this is not exactly the case when the weighted constraints are used.
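The constraint set of this example, written in the matrix form used above (parameter order a_1, b_1, c_1, a_2, b_2, c_2) and evaluated at the hard-constrained estimates of Table 6 (case 4, HC); the snippet is illustrative and uses hypothetical names:

import numpy as np

# Hard-constrained estimates of Table 6, case 4 (LC+QC(2), HC)
xhat = np.array([4.00050, -1.99975, -0.03127, -1.99973, 4.00053, -0.03127])

C = np.array([0.0, 0.0, 1.0, 0.0, 0.0, -1.0])       # c1 - c2 = 0
P1 = np.diag([1 / 16, 1 / 4, 0, 0, 0, 0])           # a1^2/16 + b1^2/4 = 2
P2 = np.diag([0, 0, 0, 1 / 4, 1 / 16, 0])           # a2^2/4 + b2^2/16 = 2

print(C @ xhat)                # 0.0     (linear constraint satisfied exactly)
print(xhat @ P1 @ xhat)        # ~2.0000 (first quadratic constraint)
print(xhat @ P2 @ xhat)        # ~2.0000 (second quadratic constraint)

# Weighted variants replace the exact constraints by pseudo-observations with
# sigma = 0.01 m for the linear and sigma = 0.001 m for the two quadratic
# constraints, i.e. Q_c = 0.001^2 * I_2 (in m^2) for the quadratic pair.
Q_c = 0.001 ** 2 * np.eye(2)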


Table 5 Observed points in the start and target coordinate systems

  Point No.    u_s       v_s        u_t        v_t
  1            70.00     49.98     180.00      59.98
  2            66.16     61.74     141.21     114.67
  3            56.17     69.02      86.70     163.71
  4            43.83     69.01      37.26     188.45
  5            33.82     61.77      11.77     179.38
  6            30.00     50.00      19.99     140.00
  7            33.80     38.25      58.77      85.35
  8            43.83     30.97     113.31      36.28
  9            56.17     30.98     162.77      11.56
  10           66.19     38.24     188.24      20.61

Table 6 Estimated planar linear affine transformation parameters along with their standard deviations for four cases: case 1) without introducing the constraints (WTLS), case 2) after introducing the linear constraint (LC), case 3) after introducing the linear and the first quadratic constraint (LC+QC(1)), and case 4) after introducing the linear and the two quadratic constraints (LC+QC(2)). For each constrained case two kinds of results are presented: weighted constraints (WC) and hard constraints (HC).

  Scenario            Type                 a_1        b_1        c_1        a_2        b_2        c_2
  WTLS (case 1)       UC   Estimate     3.99965   -1.99933   -0.01031   -1.99858    3.99994   -0.05903
                           Std. dev.    0.00102    0.00102    0.07390    0.00102    0.00102    0.07390
  LC (case 2)         WC   Estimate     3.99988   -1.99909   -0.03454   -1.99881    3.99971   -0.03479
                           Std. dev.    0.00078    0.00078    0.02599    0.00078    0.00078    0.02599
                      HC   Estimate     3.99989   -1.99909   -0.03467   -1.99882    3.99971   -0.03467
                           Std. dev.    0.00078    0.00078    0.02551    0.00078    0.00078    0.02551
  LC+QC(1) (case 3)   WC   Estimate     4.00021   -1.99946   -0.03312   -1.99909    3.99996   -0.03348
                           Std. dev.    0.00060    0.00054    0.02590    0.00066    0.00067    0.02591
                      HC   Estimate     4.00049   -1.99976   -0.03215   -1.99932    4.00017   -0.03215
                           Std. dev.    0.00037    0.00019    0.02535    0.00053    0.00057    0.02535
  LC+QC(2) (case 4)   WC   Estimate     4.00035   -1.99960   -0.03227   -1.99938    4.00022   -0.03245
                           Std. dev.    0.00056    0.00050    0.02587    0.00050    0.00056    0.02587
                      HC   Estimate     4.00050   -1.99975   -0.03127   -1.99973    4.00053   -0.03127
                           Std. dev.    0.00037    0.00019    0.02533    0.00019    0.00037    0.02533


4.3 Photographed points using terrestrial cameras

The third example is a real example presented by Mikhail and Ackermann (1976). Figure 3 shows two object points A and B which are photographed by three terrestrial cameras S_1, S_2 and S_3. The principal distance of the cameras is c = 100 mm. The distances l_1, l_2, l_3, l_4, l_5, l_6, d_1 and d_2 are observed (see Table 7). A constraint is also used, assuming that the distance between points A and B is exactly known as s_AB = 7.8 m (Mikhail and Ackermann, 1976). We now present two kinds of solutions, i.e. without the constraint and with the constraint. This example has already been solved, without the constraint, using the mixed EIV model by Amiri-Simkooei et al. (2016a). We now present an alternative solution using the theories developed in the present contribution, both for the unconstrained and constrained cases.

Fig. 3 Object points A and B photographed by three terrestrial cameras (after Mikhail and Ackermann (1976), page 221)


Table 7 Observed distances and their standard deviations as provided by Mikhail and Ackermann (1976), page 221

  Observation    Value       Standard deviation
  l_1            14.1 mm     0.10 mm
  l_2            16.6 mm     0.10 mm
  l_3             6.1 mm     0.10 mm
  l_4             7.1 mm     0.10 mm
  l_5            22.1 mm     0.10 mm
  l_6            26.3 mm     0.10 mm
  d_1            10.0 m      0.05 m
  d_2             8.0 m      0.05 m

Based on the geometry, one can write six observation equations as follows:

  l_1 x_2 - c x_1 = 0
  l_2 x_4 - c x_3 = 0
  l_3 x_2 - c d_1 + c x_1 = 0
  l_4 x_4 - c d_1 + c x_3 = 0
  l_5 x_2 - c d_1 - c d_2 + c x_1 = 0
  l_6 x_4 - c d_1 - c d_2 + c x_3 = 0                                            (134)

which can be reformulated as

  y_1 = 0             = l_1 x_2 - c x_1
  y_2 = 0             = l_2 x_4 - c x_3
  y_3 = c d_1         = l_3 x_2 + c x_1
  y_4 = c d_1         = l_4 x_4 + c x_3
  y_5 = c d_1 + c d_2 = l_5 x_2 + c x_1
  y_6 = c d_1 + c d_2 = l_6 x_4 + c x_3                                          (135)

where the observation vector is y = [y_1, ..., y_6]^T = [0, 0, 10.0c, 10.0c, 18.0c, 18.0c]^T, with its error vector e_y. The observables' covariance matrix is

  Q_y = 25 c^2 [0 0 0 0 0 0
                0 0 0 0 0 0
                0 0 1 1 1 1
                0 0 1 1 1 1
                0 0 1 1 2 2
                0 0 1 1 2 2]  cm^2

which suffers from a rank deficiency of 4. The WTLS formulation of the above equations using an EIV model is then y - e_y = (A - E_A)x, with

  A = [-c  l_1  0   0          E_A = [0  e_l1  0  0
        0   0  -c  l_2                0  0     0  e_l2
        c  l_3  0   0                 0  e_l3  0  0
        0   0   c  l_4                0  0     0  e_l4
        c  l_5  0   0                 0  e_l5  0  0
        0   0   c  l_6],              0  0     0  e_l6]

and Q_A = D(vec A) contains the variances sigma^2_l1, ..., sigma^2_l6 of the image distances at the positions of l_1, ..., l_6 in vec(A); all other elements of A are constants. One can simply show that the 24 x 24 covariance matrix Q_A suffers from a rank deficiency of 18. Therefore, in this example both Q_y and Q_A are rank deficient. In this contribution, only the rank deficiency of Q_A was addressed (the rank deficiency of Q_y can be treated accordingly). Therefore the objective function to be minimized is of the form

  Φ(e_y, e_a, λ, μ, x) := e_y^T Q_y^- e_y + e_a^T Q_A^- e_a + 2 λ^T (y - e_y - A x + (x^T ⊗ I_m) e_a) + 2 μ^T (c(x) - w)      (136)

where c(x) - w = 0 collects the (linearized) constraints. For the unconstrained case we may ignore the term 2 μ^T (c(x) - w) in the preceding equation. We estimate the coordinates of both object points A and B under two scenarios. In the first scenario, we estimate the coordinates of A and B without the constraint. In the second scenario, the hard constraint s_AB = 7.8 m is also imposed on the final coordinates of A and B. For both scenarios the coordinates of the two points are estimated with the threshold ε = 10^-12 of Fig. 1. Both scenarios converge to the final solution after a few iterations. Starting from the same initial values, the problem converges in five and six iterations for the unconstrained and constrained cases, respectively. Table 8 provides the estimated unknown parameters along with their standard deviations. As expected, the precision of the estimates increases when the distance constraint (s_AB = 7.8 m) is taken into consideration. The estimated variance factors of unit weight are σ̂_0^2 = 0.822842 (for WTLS) and σ̂_0^2 = 0.565310 (for WTLS plus constraint). All results presented here are in agreement with those of Mikhail and Ackermann (1976), where the problem was solved using a mixed (combined) observation model. We now have an alternative formulation.
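A sketch of the setup of this example (observation vector, design matrix and the singular Q_y), built from the measured values of Table 7 with all quantities converted to metres so the snippet is self-contained; names are hypothetical, and the ordinary-LS starting values are only a rough approximation of the WTLS results of Table 8:

import numpy as np

c = 0.100                                                  # principal distance, m
l = np.array([14.1, 16.6, 6.1, 7.1, 22.1, 26.3]) * 1e-3    # image distances, m
d1, d2 = 10.0, 8.0                                         # camera base lines, m

# Observation vector of Eq. (135)
y = np.array([0.0, 0.0, c * d1, c * d1, c * (d1 + d2), c * (d1 + d2)])

# Design matrix with columns [+-c, l_i, 0, 0] / [0, 0, +-c, l_i]
A = np.array([
    [-c, l[0], 0.0, 0.0],
    [0.0, 0.0, -c, l[1]],
    [ c, l[2], 0.0, 0.0],
    [0.0, 0.0,  c, l[3]],
    [ c, l[4], 0.0, 0.0],
    [0.0, 0.0,  c, l[5]],
])

# Ordinary LS (errors in A ignored) as an initial guess for the iteration
x0 = np.linalg.lstsq(A, y, rcond=None)[0]
print(x0)        # roughly (7.0, 49.7, 7.0, 42.0) m, cf. Table 8

# Singular dispersion of y: only d1, d2 are random (sigma = 0.05 m)
J = np.array([[0, 0], [0, 0], [c, 0], [c, 0], [c, c], [c, c]])   # dy/d(d1, d2)
Qy = J @ (0.05 ** 2 * np.eye(2)) @ J.T                           # rank 2, deficiency 4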

Table 8 Estimated coordinates of points A and B along with their standard deviations for the unconstrained and constrained cases.

  Estimated        Unconstrained solution          Constrained solution
  coordinate       WTLS          Std. dev.         WTLS + constraint    Std. dev.
  x_1 (m)           6.995202     0.037              6.994315            0.031
  x_2 (m)          49.717378     0.249             49.758064            0.155
  x_3 (m)           6.981612     0.034              6.983369            0.028
  x_4 (m)          41.969771     0.195             41.956661            0.155


5 Concluding remarks

This contribution presented the formulation of the weighted total least squares (WTLS) problem subject to linear(ized) equality constraints. The formulation is based on the standard least squares theory and allows several (non)linear constraints (e.g. quadratic constraints) on the unknown parameters to be incorporated into the model. A simple iterative algorithm was provided to solve the WTLS problem with such constraints. The algorithm was shown to be very efficient for the applications considered in the present contribution. A few aspects of the research are highlighted. (1) The formulation takes a general covariance matrix Q_A of the design matrix A into account. A proper Q_A is constructed based on the application of the error propagation law to the columns of the design matrix; structured covariance matrices, such as those of Kronecker-product form, are considered to be special cases of the general formulation. (2) The formulation can easily take into account any number of linear and nonlinear constraints. For example, there is no restriction on the number of quadratic constraints in the formulation, whereas most previous work has considered only one quadratic constraint. (3) The WTLS formulation with constraints is presented generally for 'weighted' linear and quadratic constraints. Having an arbitrary covariance matrix Q_c makes the weighted constraints more general (more relaxed) than the hard constraints, which consider Q_c = 0. Therefore, hard constraints turn out to be a special case of the general formulation. (4) The WTLS with constraints was formulated based on the standard least squares theory. Therefore the existing body of knowledge of the standard least squares theory (with constraints) can be applied to the WTLS problem (with constraints). For example, the method automatically provides (approximates) the covariance matrix of the estimates, from which the precision of the 'constrained' estimates can be obtained; based on the results of the simulated example, it is a fairly good approximation of the real precision. (5) The presented WTLS formulation with constraints was shown to be very efficient in practice. Two numerical examples demonstrated the efficacy of the proposed algorithm in geodetic applications. The consistency of the results with our expectations when the constraints are included, along with the fast convergence of the algorithm, confirms that the proposed method can be used for many EIV geodetic applications in which a few linear and/or quadratic constraints on the unknown parameters exist. Because the hard constraints are a special case of the weighted constraints, one would expect a linear convergence rate for the WTLS with hard constraints; the convergence rate of the WTLS solution with hard constraints should be further investigated.
Acknowledgments. I would like to acknowledge Prof. Athanasios Dermanis at the Aristotle University of Thessaloniki for his unpublished paper on singular covariance matrices of the EIV models, which improved the quality of this report.


References Akyilmaz O (2007) Total least squares solution of coordinate transformation. Survey Review 39(303):68–80 Amiri-Simkooei AR (2007) Least squares variance component estimation: theory and GPS applications. PhD Thesis, Delft University of Technology, Publication on Geodesy, 64, Netherlands Geodetic Commission, Delft Amiri-Simkooei AR (2009) Noise in multivariate GPS position time-series, Journal of Geodesy, 83(2):175-187 Amiri-Simkooei AR (2013) Application of least squares variance component estimation to errors-invariables models, Journal of Geodesy, 87:935-944 Amiri-Simkooei AR, Jazaeri S (2012) Weighted total least squares formulated by standard least squares theory. Journal of Geodetic Science, 2(2):113–124 Amiri-Simkooei AR, Jazaeri S (2013) Data-snooping procedure applied to errors-in-variables models. Studia Geophysica et Geodaetica, 57 (3):426–441 Amiri-Simkooei AR, Zangeneh-Nejad F, Asgari J, Jazaeri S (2014) Estimation of straight line parameters with fully correlated coordinates. Measurement, 48:378–386 Amiri-Simkooei AR, Mortazavi S, Asgari J (2016a) Weighted total least squares applied to mixed observation model, Survey Review, 48 (349): 278-286 Amiri-Simkooei AR, Zangeneh-Nejad F, Asgari J (2016b) On the covariance matrix of weighted total least squares estimates, J Surv Eng, 142(3): 04015014 Baarda W (1973) S-transformations and criterion matrices, Technical report, Netherlands Geodetic Commission, Publ. on Geodesy, New Series, Vol. 5(1), Delft Beck A, Ben-Tal A (2006) On the solution of the Tikhonov regularization of the total least squares. SIAM J Optim, 17:98-118 De Moor B (1990) Total linear least squares with inequality constraints. ESAT-SISTA Report 199002, Department of Electrical Engineering, Katholieke Universiteit Leuven, Belgium Dermanis A, and Grafarend E (1981) Estimability analysis of geodetic, astrometric and geodynamical quantities in very long baseline interferometry, Geophys J R Astr Soc, 64, 31–64 Dermanis A (1994) Free networks solutions with the Direct Linear Transformation (DLT) method. ISPRS J Photogram Rem Sens, 49: 2–12 Dermanis A (1998) Generalized inverses of nonlinear mappings and the nonlinear geodetic datum problem, Journal of Geodesy, Volume 72, Issue 2, pp 71-100 Dermanis A, Rummel R (2000) Data analysis methods in geodesy. In: Dermanis A, Grün A, Sansò F (eds) Geomatic methods for the analysis of data in the Earth sciences. Lecture notes in Earth sciences, vol 95. Springer, Berlin, pp 17–92 Dermanis A (2015a) Some remarks on the EIV model with singular covariance matrix, Unpublished research paper, Aristotle University of Thessaloniki, Available at website: https://www.researchgate.net Dermanis A (2015b) Personal communication, Department of Geodesy and Surveying, School of Rural and Surveying Engineering, Faculty of Engineering, The Aristotle University of Thessaloniki, Greece


Dowling EM, Degroat RD, Linebarger DA (1992) Total least squares with linear constraints. Acoustics, Speech, and Signal Processing (ICASSP-92), 5:341-344, Institute of Electrical and Electronics Engineers, Signal Processing Society Fang X (2011) Weighted total least squares solutions for applications in Geodesy. Ph.D. dissertation, Publ. No. 294, Dept. of Geodesy and Geoinformatics, Leibniz University, Hannover, Germany. Fang X (2013) Weighted total least squares: necessary and sufficient conditions, fixed and random parameters. Journal of Geodesy, 87:733–749 Fang X (2014a) A structured and constrained total least squares solution with cross-covariances. Studia Geophysica et Geodaetica, 58 (1):1-16 Fang X (2014b) On non-combinatorial weighted total least squares with inequality constraints, Journal of Geodesy, 88 (8): 805-816 Fang X (2014c) A total least squares solution for geodetic datum transformations. Acta Geodaetica et Geophysica, 49 (2): 189-207 Fang X (2015) Weighted total least-squares with constraints: a universal formula for geodetic symmetrical transformations, Journal of Geodesy, 89(5):459-469 Felus YA (2004) Application of total least squares for spatial point process analysis. J Surv Eng, 130:126–133 Golub G, Van Loan C (1980) An analysis of the total least squares problem. SIAM J Num Anal 17:883–893 Golub GH, Hansen PC, O’Leary DP (1999) Tikhonov regularization and total least squares. SIAM J Matrix Anal Appl, 21:185–194 Henderson HV, Searle SR (1981) On deriving the inverse of a sum of matrices, SIAM Review, 23 (1):53–60 Jazaeri S, Amiri-Simkooei AR, Sharifi MA (2014) Iterative algorithm for weighted total least squares adjustment, Survey Review, 46 (334): 19-27 Jazaeri S, Schaffrin B, Snow K (2015) On weighted total least-squares adjustment with multiple constraints and singular dispersion matrices. ZFV, DOI 10.12902/zfv-0017-2014 Koch KR (1999) Parameter estimation and hypothesis testing in linear models. Springer, Berlin Lacy SL, Bernstein DS (2003) Subspace identification with guaranteed stability using constrained optimization, IEEE Trans. Auto. Control, 48:1259–1263 Mahboub V, Sharifi MA (2013a) On weighted total least squares with linear and quadratic constraints. Journal of Geodesy, 87, 279-286 Mahboub V, Sharifi MA (2013b) Erratum to: On weighted total least squares with linear and quadratic constraints, Journal of Geodesy, 87, 607-608 Markovsky I, van Huffel S (2007) Overview of total least squares methods. Signal Proc 87:2283–2302 Moghtased-Azar K, Tehranchi R, Amiri-Simkooei AR (2014) An alternative method for non-negative estimation of variance components, Journal of Geodesy, 88:427-439 Neitzel F, Schaffrin B (2016) On the Gauss-Helmert model with a singular dispersion matrix where BQ is of smaller rank than B. Journal of Computational and Applied Mathematics, 291:458-467, DOI: 10.1016/j.cam.2015.03.006 Rao CR (1973) Linear Statistical Inference and Its Applications, 2nd Edition, Wiley, New York


Rao CR, Mitra SK (1972) Generalized Inverse of Matrices and Its Applications (Probability & Mathematical Statistics), Wiley, New York Regalia PA (1994) An unbiased equation error identifier and reduced-order approximations, IEEE Trans. Signal Processing, 42:1397–1411 Schaffrin B (2006) A note on constrained total least squares estimation, Linear Algebra and its Applications, 417:245–58 Schaffrin B, Felus Y (2009) An algorithmic approach to the total least squares problem with linear and quadratic constraints. Studia Geophysica et Geodaetica, 53:1–16 Schaffrin B, Wieser A (2008) On weighted total least squares adjustment for linear regression. Journal of Geodesy 82(7):415–421 Schaffrin B, Snow K, Neitzel F (2014) On the errors-in-variables model with singular dispersion matrices, Journal of Geodetic Science, 4:28-36 Shen Y, Li B, Chen Y (2011) An iterative solution of weighted total least squares adjustment. Journal of Geodesy, 85:229–238 Shi Y, Xu PL, Liu J, Shi C (2015) Alternative formulae for parameter estimation in partial errors-in-variables models. Journal of Geodesy, 89(1):13–16 Sima DM, van Huffel S, Golub GH (2004) Regularized total least squares based on quadratic eigenvalue problem solver. BIT Numerical Mathematics, 44:793–812 Snow K (2012) Topics in total least-squares adjustment within the errors-in-variables model: singular cofactor matrices and prior information. PhD Dissertation, report No. 502, Geodetic Science Program, School of Earth Sciences, the Ohio State University, Columbus, Ohio, USA Teunissen PJG (1984) A note on the use of Gauss' formula in nonlinear geodetic adjustments, Statistics and Decisions, 2:455–466 Teunissen PJG (1985a) Generalized inverses, adjustment, the datum problem and S-transformations, in Optimization of Geodetic Networks, E. W. Grafarend and F. Sanso, eds., Springer, Berlin, 11–55. Teunissen, PJG (1985b) The geometry of geodetic inverse linear mapping and nonlinear adjustment. Netherlands Geodetic Commission, Publication on Geodesy, New Series, Vol. 8, No. 1, Delft Teunissen PJG (1988) The nonlinear 2D symmetric Helmert transformation: an exact nonlinear least squares solution, Bull Geod, 62:1–15 Teunissen PJG (1990) Nonlinear least squares. Manus Geod, 15(3):137–150 Teunissen PJG (2004) Adjustment theory: an introduction. Delft University Press, Delft University of Technology, Series on Mathematical Geodesy and Positioning, http://www.vssd.nl/hlf/a030.htm Teunissen PJG (2006) Network quality control. Delft University Press, Delft University of Technology, Series on Mathematical Geodesy and Positioning, http://www. vssd.nl/hlf/a030.htm Teunissen PJG, Simons DG, Tiberius CCJM (2005) Probability and observation theory, Faculty of Aerospace Engineering, Delft University, Delft University of Technology. (Lecture notes AE2E01) Teunissen PJG, Amiri-Simkooei AR (2008) Least squares variance component estimation. Journal of Geodesy, 82(2): 65–82


Tong X, Jin Y, Li L (2011) An improved weighted total least squares method with applications in linear fitting and coordinate transformation. Journal of Surveying Engineering, 137 (4):120–128 Tong, X., Jin, Y., Zhang, S., Li, L., Liu, S. (2014). Bias-corrected weighted total least-squares adjustment of condition equations, Journal of Surveying Engineering, 10.1061/(ASCE)SU.19435428.0000140, 04014013. Van Huffel, S, Vandewalle J (1991) The total least squares problem. Computational Aspects and Analysis. SIAM, Philadelphia Xu PL (1997) A general solution in geodetic nonlinear rank-defect models, Boll Geod Sc Affini, 56, 1–25 Xu PL, Liu JN, Shi C (2012) Total least squares adjustment in partial errors-in-variables models: algorithm and statistical analysis. Journal of Geodesy, 86:661–675 Xu PL, Liu J (2014) Variance components in errors-in-variables models: estimability, stability and bias analysis, Journal of Geodesy, 88(8): 719-734 Zhang S, Tong X, Zhang K (2013) A solution to EIV model with inequality constraints and its geodetic applications. Journal of Geodesy, 87 (1): 23-28 Zhang S, Zhang K, Liu P (2016) Total least-squares estimation for 2D affine coordinate transformation with constraints on physical parameters. Journal of Surveying Engineering, 142(3): 04016009

33