
Convergence of a generalized MHSS iterative method for augmented systems∗

Yu-Qin Bai a,b†, Ting-Zhu Huang a, Yan-Ping Xiao b

a. School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, P. R. China
b. School of Computer Science and Information Engineering, Northwest University for Nationalities, Lanzhou, Gansu, 730030, P. R. China

Abstract. Bai et al. [Modified HSS iteration methods for a class of complex symmetric linear systems, Computing 87 (2010) 93-111] presented the modified HSS (MHSS) algorithm for solving a class of complex symmetric linear systems. In this paper, we establish the generalized MHSS (GMHSS) method for solving augmented systems. We prove the convergence of the proposed method under suitable restrictions on the iteration parameters. Lastly, numerical experiments are carried out, and the results show that the new iterative method is feasible and effective.

Key words: Complex symmetric matrix, Hermitian and skew-Hermitian splitting, positive definite matrix, iterative methods, convergence.

AMSC: 65F10; 65F15; 65F50

1 Introduction

For solving the linear system

\[
Ax = b, \qquad x, b \in \mathbb{C}^n,
\tag{1}
\]

where A = (a_{i,j}) ∈ C^{n×n} is a complex symmetric matrix of the form A = W + iT,

∗ This research is supported by NSFC (61170309, 11161041) and the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020).
† Corresponding author: [email protected].


and W, T ∈ R^{n×n} are real symmetric matrices, with W positive definite and T positive semidefinite. If T ≠ 0, then A is non-Hermitian. In this paper, we use i = √−1 to denote the imaginary unit. The matrix A naturally possesses a Hermitian/skew-Hermitian splitting (HSS) [1-6]

\[
A = H + S, \qquad H = \tfrac{1}{2}(A + A^*), \quad S = \tfrac{1}{2}(A - A^*),
\]

with A∗ being the conjugate transpose of A. Based on this particular matrix splitting for solving the system of linear equations (1), the HSS iteration method was considered and further discussed in [7-11]. Recently, Huang et al. [12] studied the spectral properties of the HSS preconditioner for saddle point problems. To make the HSS method more attractive, an asymmetric Hermitian/skew-Hermitian (AHSS) iteration was considered in [11]. Bai et al. [13] presented the modified HSS (MHSS) iteration method for complex symmetric linear systems, which reads as follows:

The MHSS iteration method: Given an initial guess x^{(0)}, for k = 0, 1, 2, . . . until x^{(k)} converges, compute

\[
\begin{cases}
(W + \alpha I)\, x^{(k+\frac{1}{2})} = (\alpha I - iT)\, x^{(k)} + b,\\
(T + \alpha I)\, x^{(k+1)} = (\alpha I + iW)\, x^{(k+\frac{1}{2})} - ib,
\end{cases}
\tag{2}
\]

where α is a given positive constant. They also proved that for any positive α the MHSS method converges unconditionally to the unique solution of the complex symmetric system of linear equations. Guo and Wang [14] studied the convergence of MHSS when W and T are not necessarily symmetric. Based on the MHSS method, in this paper we present a different approach to solve Eq. (1), called the generalized modified Hermitian/skew-Hermitian splitting iteration method, shortened to the GMHSS iteration. We describe it as follows.

The GMHSS iteration method: Given an initial guess x^{(0)}, for k = 0, 1, 2, . . . until x^{(k)} converges, compute

\[
\begin{cases}
(W + \alpha P)\, x^{(k+\frac{1}{2})} = (\alpha P - iT)\, x^{(k)} + b,\\
(T + \beta P)\, x^{(k+1)} = (\beta P + iW)\, x^{(k+\frac{1}{2})} - ib,
\end{cases}
\tag{3}
\]

where α and β are given positive constants, W is positive definite and T is positive semidefinite. We assume that the matrix P is symmetric positive definite and commutes with the matrices W and T.

Remark 1.1. Such a matrix P is easily obtained; for instance, one may take any positive multiple of the identity, P = cI with c > 0, which obviously commutes with W and T.

In this paper, we analyze the GMHSS iteration method and show that if the matrix W is positive definite and the matrix T is positive semidefinite, the GMHSS iteration converges to the unique solution of the linear system (1) for any positive α and β restricted to an appropriate region. Moreover, the upper bound of the contraction factor of the GMHSS iteration depends only on the parameters α and β and on the spectra of P^{-1}W and P^{-1}T.

The outline of this paper is as follows. In Section 2, we discuss the convergence properties of the GMHSS iterative method. In Section 3, the IGMHSS iteration is described. In Section 4, we provide numerical experiments to illustrate the theoretical results obtained in Section 2 and the effectiveness of the proposed methods. Lastly, we draw some conclusions in Section 5.
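For concreteness, a minimal Python/NumPy sketch of scheme (3) is given below. It is our own illustration rather than the authors' implementation; the function name `gmhss`, the direct dense solves, and the default tolerances are our own choices.

```python
import numpy as np

def gmhss(W, T, P, b, alpha, beta, tol=1e-6, maxit=1000):
    """One GMHSS sweep per loop iteration, following scheme (3).

    W, T, P: real symmetric matrices (W SPD, T PSD, P SPD and commuting
    with W and T); b: complex right-hand side; alpha, beta > 0.
    """
    A = W + 1j * T
    x = np.zeros_like(b, dtype=complex)
    M1 = alpha * P + W           # coefficient matrix of the first half-step
    M2 = beta * P + T            # coefficient matrix of the second half-step
    for k in range(1, maxit + 1):
        # (W + alpha*P) x_{k+1/2} = (alpha*P - i*T) x_k + b
        x_half = np.linalg.solve(M1, (alpha * P - 1j * T) @ x + b)
        # (T + beta*P) x_{k+1} = (beta*P + i*W) x_{k+1/2} - i*b
        x = np.linalg.solve(M2, (beta * P + 1j * W) @ x_half - 1j * b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x, k
```

Note that with P = I and β = α, one GMHSS sweep reduces exactly to one MHSS sweep (2).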

2 Convergence analysis of the GMHSS method

In this section, we study the convergence properties of the generalized MHSS iteration and derive the upper bound of its contraction factor. First, we notice that the GMHSS iteration method is in fact a two-step splitting iteration, so we quote the convergence criterion for a two-step splitting iteration.

Lemma 2.1 ([2]). Let A ∈ C^{n×n}, let A = M_i − N_i (i = 1, 2) be two splittings of the matrix A, and let x^{(0)} ∈ C^n be a given initial vector. If {x^{(k)}} is a two-step iteration sequence defined by

\[
\begin{cases}
M_1 x^{(k+\frac{1}{2})} = N_1 x^{(k)} + b,\\
M_2 x^{(k+1)} = N_2 x^{(k+\frac{1}{2})} + b,
\end{cases}
\qquad k = 0, 1, 2, \ldots,
\]

then

\[
x^{(k+1)} = M_2^{-1} N_2 M_1^{-1} N_1 x^{(k)} + M_2^{-1} (I + N_2 M_1^{-1}) b, \qquad k = 0, 1, 2, \ldots.
\]

Moreover, if the spectral radius ρ(M_2^{-1} N_2 M_1^{-1} N_1) < 1, then the iteration sequence {x^{(k)}} converges to the unique solution x^* ∈ C^n of the linear system (1) for every initial vector x^{(0)} ∈ C^n.
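As a quick numerical illustration of this criterion (a sketch under our own naming, not code from [2]), one can form the iteration matrix explicitly and test whether its spectral radius is below one:

```python
import numpy as np

def two_step_radius(M1, N1, M2, N2):
    """Spectral radius of the iteration matrix M2^{-1} N2 M1^{-1} N1 of Lemma 2.1."""
    iteration_matrix = np.linalg.solve(M2, N2) @ np.linalg.solve(M1, N1)
    return np.max(np.abs(np.linalg.eigvals(iteration_matrix)))
```

By Lemma 2.1, the two-step scheme converges for every initial vector exactly when this value is smaller than 1.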


For the convergence of the GMHSS iteration, we obtain the following result based on Lemma 2.1.

Theorem 2.2. Let A = W + iT ∈ C^{n×n}, with W ∈ R^{n×n} and T ∈ R^{n×n} symmetric positive definite and symmetric positive semidefinite, respectively. Let α and β be positive constants, and let the matrix P be symmetric positive definite and commute with the matrices W and T. Then the iteration matrix M(α, β) of the GMHSS method is

\[
M(\alpha, \beta) = (\beta P + T)^{-1} (\beta P + iW)(\alpha P + W)^{-1} (\alpha P - iT),
\tag{4}
\]

and its spectral radius ρ(M(α, β)) is bounded by

\[
\delta(\alpha, \beta) \equiv \max_{\lambda_i \in \lambda(P^{-1}W)} \frac{\sqrt{\beta^2 + \lambda_i^2}}{\alpha + \lambda_i} \;\max_{u_i \in \lambda(P^{-1}T)} \frac{\sqrt{\alpha^2 + u_i^2}}{\beta + u_i},
\tag{5}
\]

where λ(M) denotes the spectral set of a matrix M. Moreover, for any given positive parameter α, if the parameter β satisfies

\[
\sqrt{\frac{\lambda_{\max}}{2\alpha + \lambda_{\max}}}\,\alpha < \beta \le \sqrt{\alpha(\alpha + 2\lambda_{\min})},
\tag{6}
\]

then δ(α, β) < 1, i.e. the GMHSS iteration converges, where λ_max and λ_min are the maximum and minimum eigenvalues of the matrix P^{-1}W.

Proof. In Lemma 2.1 we set

\[
M_1 = \alpha P + W, \quad N_1 = \alpha P - iT, \quad M_2 = \beta P + T, \quad N_2 = \beta P + iW.
\]

Since α and β are positive constants, αP + W and βP + T are nonsingular, and we obtain (4). By direct computations (using ρ(XY) = ρ(YX) and ρ(X) ≤ ‖X‖₂), we have

\[
\begin{aligned}
\rho(M(\alpha, \beta)) &= \rho\big((\beta P + T)^{-1}(\beta P + iW)(\alpha P + W)^{-1}(\alpha P - iT)\big)\\
&\le \big\|(\beta P + iW)(\alpha P + W)^{-1}(\alpha P - iT)(\beta P + T)^{-1}\big\|_2\\
&\le \big\|(\beta P + iW)(\alpha P + W)^{-1}\big\|_2\,\big\|(\alpha P - iT)(\beta P + T)^{-1}\big\|_2\\
&= \big\|(\beta I + iP^{-1}W)(\alpha I + P^{-1}W)^{-1}\big\|_2\,\big\|(\alpha I - iP^{-1}T)(\beta I + P^{-1}T)^{-1}\big\|_2\\
&= \max_{\lambda_i \in \lambda(P^{-1}W)} \frac{\sqrt{\beta^2 + \lambda_i^2}}{\alpha + \lambda_i}\;\max_{u_i \in \lambda(P^{-1}T)} \frac{\sqrt{\alpha^2 + u_i^2}}{\beta + u_i},
\end{aligned}
\]

and thus we obtain the upper bound (5) for ρ(M(α, β)). Since W and P are symmetric positive definite and T is symmetric positive semidefinite, we have λ_i > 0 and u_i ≥ 0; with α > 0 and β > 0 we obtain the following equality:


\[
\max_{\lambda_i} \frac{\sqrt{\beta^2 + \lambda_i^2}}{\alpha + \lambda_i}
= \max\left\{ \frac{\sqrt{\beta^2 + \lambda_{\max}^2}}{\alpha + \lambda_{\max}},\; \frac{\sqrt{\beta^2 + \lambda_{\min}^2}}{\alpha + \lambda_{\min}} \right\}
= \begin{cases}
\dfrac{\sqrt{\beta^2 + \lambda_{\max}^2}}{\alpha + \lambda_{\max}}, & \beta \le \beta^*,\\[2ex]
\dfrac{\sqrt{\beta^2 + \lambda_{\min}^2}}{\alpha + \lambda_{\min}}, & \beta \ge \beta^*,
\end{cases}
\]

where β^* is a function of λ_max, λ_min and α.

Case 1: If β > α, then \(\max_{u_i} \frac{\sqrt{\alpha^2 + u_i^2}}{\beta + u_i} < 1\); thus,

\[
\delta(\alpha, \beta) < \max_{\lambda_i} \frac{\sqrt{\beta^2 + \lambda_i^2}}{\alpha + \lambda_i}
= \begin{cases}
\dfrac{\sqrt{\beta^2 + \lambda_{\max}^2}}{\alpha + \lambda_{\max}}, & \beta \le \beta^*,\\[2ex]
\dfrac{\sqrt{\beta^2 + \lambda_{\min}^2}}{\alpha + \lambda_{\min}}, & \beta \ge \beta^*.
\end{cases}
\]

When β ≤ β^*, if \(\frac{\sqrt{\beta^2 + \lambda_{\max}^2}}{\alpha + \lambda_{\max}} \le 1\), then δ(α, β) < 1,

and we can easily obtain the inequality

\[
\alpha < \beta \le \beta^*.
\tag{7}
\]

And when β ≥ β^*, by a simple computation, we obtain

\[
\beta^* < \beta \le \sqrt{\alpha(\alpha + 2\lambda_{\min})}.
\tag{8}
\]

Case 2: If β ≤ α, then \(\frac{\sqrt{\alpha^2 + u_i^2}}{\beta + u_i} \le \frac{\alpha}{\beta}\) for all u_i ≥ 0, so that \(\delta(\alpha, \beta) \le \frac{\alpha}{\beta} \max_{\lambda_i} \frac{\sqrt{\beta^2 + \lambda_i^2}}{\alpha + \lambda_i}\). Requiring \(\frac{\alpha}{\beta}\cdot\frac{\sqrt{\beta^2 + \lambda_{\max}^2}}{\alpha + \lambda_{\max}} < 1\) gives

\[
\sqrt{\frac{\lambda_{\max}}{2\alpha + \lambda_{\max}}}\,\alpha < \beta \le \alpha.
\]

Combining the two cases yields (6). Since

\[
\sqrt{\alpha(\alpha + 2\lambda_{\min})} - \sqrt{\frac{\lambda_{\max}}{2\alpha + \lambda_{\max}}}\,\alpha > 0,
\]

an available β always exists for any given positive α; and if λ_min and α are large, then √(λ_max/(2α + λ_max)) is small, so the region of available β is larger. An upper bound of the contraction factor of the GMHSS iteration is given by δ(α, β), which depends on the spectra of P^{-1}W and P^{-1}T and on the choice of the parameters α and β.

Corollary 2.3. Let A, W, T and P be the matrices defined in Theorem 2.2, and let λ_max and λ_min be the largest and smallest eigenvalues of the matrix P^{-1}W, respectively. Then for any given positive parameter α, the optimal β is

\[
\beta = \sqrt{\frac{\alpha^2(\lambda_{\min} + \lambda_{\max}) + 2\alpha\lambda_{\min}\lambda_{\max}}{2\alpha + \lambda_{\min} + \lambda_{\max}}}.
\tag{13}
\]

Proof. In order to minimize the bound in (5), we equate

\[
\frac{\sqrt{\beta^2 + \lambda_{\min}^2}}{\alpha + \lambda_{\min}} = \frac{\sqrt{\beta^2 + \lambda_{\max}^2}}{\alpha + \lambda_{\max}}.
\]


By a simple calculation, we get (13) and the proof is completed. □

Remark 2.4. Let

\[
\delta(\alpha, \beta) = \max_{\lambda_i} \frac{\sqrt{\beta^2 + \lambda_i^2}}{\alpha + \lambda_i}\, \max\left\{1, \frac{\alpha}{\beta}\right\};
\]

the optimal parameter α is obtained where the first derivative of δ(α, β) is zero, with β regarded as a function of α. It is difficult to find an explicit expression for this α, but better values of α can be estimated by numerical experiments.
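Following Remark 2.4, a practical way to estimate a good α is to evaluate the bound (5) on a grid, with β coupled to α through (13). A minimal sketch of ours (the names `lam_W` and `lam_T` denote the eigenvalues of P^{-1}W and P^{-1}T, and the grid is supplied by the caller):

```python
import numpy as np

def beta_from_13(alpha, lam_min, lam_max):
    """beta as given by formula (13)."""
    num = alpha**2 * (lam_min + lam_max) + 2.0 * alpha * lam_min * lam_max
    return np.sqrt(num / (2.0 * alpha + lam_min + lam_max))

def delta_bound(alpha, beta, lam_W, lam_T):
    """Upper bound (5), from the eigenvalues lam_W of P^{-1}W and lam_T of P^{-1}T."""
    f1 = np.max(np.sqrt(beta**2 + lam_W**2) / (alpha + lam_W))
    f2 = np.max(np.sqrt(alpha**2 + lam_T**2) / (beta + lam_T))
    return f1 * f2

def best_alpha(lam_W, lam_T, alphas):
    """Grid search for the alpha minimizing delta(alpha, beta(alpha)), cf. Remark 2.4."""
    lo, hi = np.min(lam_W), np.max(lam_W)
    return min(alphas, key=lambda a: delta_bound(a, beta_from_13(a, lo, hi), lam_W, lam_T))
```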

3 The IGMHSS iteration method

In the GMHSS iteration, we need to solve two linear sub-systems whose coefficient matrices are αP + W and βP + T, respectively. Inverting them exactly is costly and often impractical in actual implementations. To implement the GMHSS iteration more efficiently, we can employ an inexact GMHSS (IGMHSS) iteration to solve the two subproblems. Since αP + W and βP + T are symmetric positive definite, we can employ the conjugate gradient (CG) method to solve these two linear systems. The IGMHSS iteration scheme is as follows.

1. k := 0;
2. r^{(k)} = b − Ax^{(k)};
3. employ the CG method to solve (αP + W) y^{(k)} = r^{(k)} approximately;
4. x^{(k+1/2)} = x^{(k)} + y^{(k)};
5. r^{(k+1/2)} = −ib + iAx^{(k+1/2)};
6. employ the CG method to solve (βP + T) z^{(k)} = r^{(k+1/2)} approximately;
7. x^{(k+1)} = x^{(k+1/2)} + z^{(k)};
8. set k = k + 1 and go to step 2;
9. set x = x^{(k)} and output x.

In our implementations of the IGMHSS iteration scheme, b = Ae, where e = (1, 1, · · · , 1)^T ∈ C^n. All tests are started from the zero vector. The iteration is terminated once the current iterate x^{(k)} satisfies

\[
\frac{\|b - Ax^{(k)}\|_2}{\|b\|_2} \le 10^{-6},
\]

and the stopping criterion for the inner CG iteration is

\[
\frac{\|r^{(k, l_k)}\|_2}{\|b - Ax^{(k)}\|_2} \le 10^{-2},
\]

where r^{(k, l_k)} represents the residual of the l_k-th inner iterate in the k-th outer iterate, and the parameter β is chosen as in (13).
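A possible realization of steps 1-9 in Python/SciPy is sketched below; it is our interpretation, not the authors' MATLAB code. Since the inner coefficient matrices are real SPD while the right-hand sides are complex, CG is applied to the real and imaginary parts separately, and for simplicity the inner tolerance is taken relative to each sub-system's right-hand side rather than to ‖b − Ax^{(k)}‖₂ as above.

```python
import numpy as np
from scipy.sparse.linalg import cg

def cg_complex(M, rhs, rtol):
    """CG on the real SPD matrix M, applied to Re and Im of the complex rhs."""
    re, _ = cg(M, rhs.real, rtol=rtol)   # 'rtol' keyword as in SciPy >= 1.12
    im, _ = cg(M, rhs.imag, rtol=rtol)
    return re + 1j * im

def igmhss(W, T, P, b, alpha, beta, tol=1e-6, inner_tol=1e-2, maxit=500):
    """Steps 1-9 of the IGMHSS scheme with inexact CG inner solves."""
    A = W + 1j * T
    x = np.zeros_like(b, dtype=complex)
    M1, M2 = alpha * P + W, beta * P + T
    for k in range(maxit):
        r = b - A @ x                              # step 2
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        x = x + cg_complex(M1, r, inner_tol)       # steps 3-4
        r_half = -1j * b + 1j * (A @ x)            # step 5
        x = x + cg_complex(M2, r_half, inner_tol)  # steps 6-7
    return x, maxit
```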


4 Numerical experiments

In this section, we provide numerical experiments to illustrate the theoretical results obtained in Section 2 and the effectiveness of the GMHSS iteration. All numerical experiments are carried out on a PC equipped with an Intel Core i3 2.3 GHz CPU and 2.00 GB RAM, using MATLAB R2010a.

Example 4.1 ([13]). We consider the linear system of the form

\[
\left[\left(K + \frac{3 - \sqrt{3}}{\tau}\, I\right) + i\left(K - \frac{3 + \sqrt{3}}{\tau}\, I\right)\right] x = b,
\tag{14}
\]

on the unit square [0, 1] × [0, 1] with mesh-size h = 1/(m+1), where τ is the time step-size and K is the five-point centered difference matrix approximating the negative Laplacian operator L = −Δ with homogeneous Dirichlet boundary conditions. The matrix K ∈ R^{n×n} possesses the tensor-product form K = I_m ⊗ V_m + V_m ⊗ I_m, where ⊗ denotes the Kronecker product and V_m = h^{-2} tridiag(−1, 2, −1) ∈ R^{m×m} is tridiagonal. Hence, K is an n × n block-tridiagonal matrix with n = m². We take

\[
W = K + \frac{3 - \sqrt{3}}{\tau}\, I \qquad \text{and} \qquad T = K - \frac{3 + \sqrt{3}}{\tau}\, I.
\]

Different values of τ result in different matrices A. In our tests, we take τ = h. Furthermore, we normalize the linear system by multiplying both sides by h². For more details, we refer to [15].

Example 4.2 ([13]). We consider the linear system of the form

\[
\left[(-\omega^2 M + K) + i(\omega C_V + C_H)\right] x = b,
\tag{15}
\]

on the unit square [0, 1] × [0, 1] with mesh-size h = 1/(m+1), where M and K are the inertia and the stiffness matrices, C_V and C_H are the viscous and the hysteretic damping matrices, respectively, and ω is the driving circular frequency. We take C_H = µK with damping coefficient µ, M = I, C_V = 10I, and K the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions; as before, K = I_m ⊗ V_m + V_m ⊗ I_m ∈ R^{n×n} with V_m = h^{-2} tridiag(−1, 2, −1) ∈ R^{m×m}, so K is an n × n block-tridiagonal matrix with n = m². We also normalize the system by multiplying both sides by h² as before. In our tests, we set ω = π and µ = 0.02. We take

\[
W = K - \omega^2 I \qquad \text{and} \qquad T = \mu K + 10\omega I.
\]
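For reference, the test matrices of both examples can be assembled from Kronecker products as described above. The following sketch is our own setup code following the formulas of Examples 4.1 and 4.2; the function names are ours.

```python
import numpy as np

def laplacian_2d(m):
    """Five-point negative Laplacian K = I_m (x) V_m + V_m (x) I_m, with n = m^2."""
    h = 1.0 / (m + 1)
    V = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    K = np.kron(np.eye(m), V) + np.kron(V, np.eye(m))
    return K, h

def example_41(m):
    """W and T of Example 4.1 (normalized by h^2), with tau = h."""
    K, h = laplacian_2d(m)
    tau, n = h, m * m
    W = K + (3.0 - np.sqrt(3.0)) / tau * np.eye(n)
    T = K - (3.0 + np.sqrt(3.0)) / tau * np.eye(n)   # as defined in Example 4.1
    return h**2 * W, h**2 * T

def example_42(m, omega=np.pi, mu=0.02):
    """W and T of Example 4.2 (normalized by h^2)."""
    K, h = laplacian_2d(m)
    n = m * m
    W = K - omega**2 * np.eye(n)
    T = mu * K + 10.0 * omega * np.eye(n)
    return h**2 * W, h**2 * T
```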


In Figs. 1 and 2, we plot the spectral radius of the iteration matrices of the MHSS and GMHSS methods with different α and different matrices P for Examples 4.1 and 4.2 with m = 16 and m = 24, respectively; MHSS denotes the spectral radius of the iteration matrix of the MHSS method, and GMHSS denotes that of the iteration matrix of the GMHSS method with the corresponding matrix P. From Figs. 1 and 2, it is clear that if β is chosen to be the optimal one, the spectral radius of the iteration matrix of the GMHSS method is always smaller than that of the MHSS method. If P = 0.1×I, the spectral radius of GMHSS is much smaller than that of MHSS and of GMHSS with P = I. Obviously, different choices of the matrix P can decrease the spectral radius of the iteration matrix greatly. However, the matrix P used in the GMHSS method is not the optimal one; it only satisfies the conditions of Theorem 2.2.

In Figs. 3 and 4, we depict the number of iterations with different α and different matrices P for Examples 4.1 and 4.2 with m = 16 and m = 24, respectively. From Figs. 3 and 4, we find that the choice of the matrix P has little impact on the number of iterations. However, the number of GMHSS iterations is still much smaller than that of MHSS as α increases.

In [13], it was shown that the upper bound δ(α, β) is minimized when α = √(λ_min λ_max). In order to compare the MHSS and GMHSS methods conveniently, we use this α and report the number of iterations (IT) and the elapsed CPU time (CPU) of the MHSS and GMHSS methods for Examples 4.1 and 4.2, listed in Tables 1 and 2. The parameter β is again chosen as in (13). From Tables 1 and 2, we find that the IT of GMHSS is just half that of MHSS when P = I. If P ≠ I, the IT of GMHSS is still smaller than that of MHSS. However, the CPU times of the different methods are almost the same.

[Figure 1: two panels (m = 16 and m = 24); horizontal axis: parameter α; vertical axis: spectral radius; curves: MHSS, GMHSS (P = I), GMHSS (P = 0.1×I).]

Figure 1: Spectral radius of the iteration matrices of the MHSS and GMHSS methods with different α for Example 4.1.


[Figure 2: two panels (m = 16 and m = 24); horizontal axis: parameter α; vertical axis: spectral radius; curves: MHSS, GMHSS (P = I), GMHSS (P = 0.1×I).]

Figure 2: Spectral radius of the iteration matrices of the MHSS and GMHSS methods with different α for Example 4.2.

[Figure 3: two panels (m = 16 and m = 24); horizontal axis: parameter α; vertical axis: number of iterations (IT); curves: MHSS, GMHSS (P = I), GMHSS (P = 0.1×I).]

Figure 3: Number of iterations with different α for Example 4.1.


[Figure 4: two panels (m = 16 and m = 24); horizontal axis: parameter α; vertical axis: number of iterations (IT); curves: MHSS, GMHSS (P = I), GMHSS (P = 10×I).]

Figure 4: Number of iterations with different α for Example 4.2.

Table 1: IT and CPU with the MHSS and GMHSS methods for Example 4.1

Method             |     |    m=8 |   m=16 |   m=24 |   m=32
-------------------+-----+--------+--------+--------+--------
MHSS               | IT  |     60 |     82 |    102 |    118
                   | CPU | 0.0226 | 0.2405 | 2.1704 | 8.8142
GMHSS (P = I)      | IT  |     30 |     41 |     51 |     59
                   | CPU | 0.0210 | 0.2369 | 2.1905 | 8.6661
GMHSS (P = 0.5×I)  | IT  |     40 |     48 |     54 |     59
                   | CPU | 0.0210 | 0.2315 | 2.0479 | 8.7551

Table 2: IT and CPU with the MHSS and GMHSS methods for Example 4.2

Method             |     |    m=8 |   m=16 |   m=24 |    m=32
-------------------+-----+--------+--------+--------+---------
MHSS               | IT  |     70 |    106 |    140 |     172
                   | CPU | 0.0213 | 0.2923 | 2.6583 | 10.5413
GMHSS (P = I)      | IT  |     35 |     53 |     70 |      86
                   | CPU | 0.0213 | 0.2838 | 2.6104 | 10.4714
GMHSS (P = 10×I)   | IT  |     99 |     79 |     79 |      68
                   | CPU | 0.0353 | 0.2889 | 1.8923 |  5.6304
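A driver in the style of these experiments can be pieced together from the sketches above; the snippet below (ours, reusing the hypothetical `gmhss`, `beta_from_13` and `example_42` defined earlier, and exact solves instead of MATLAB/CG) illustrates the parameter choices used for the tables.

```python
import numpy as np

# assumes gmhss(), beta_from_13() and example_42() from the earlier sketches
W, T = example_42(8)
n = W.shape[0]
P = np.eye(n)
b = (W + 1j * T) @ np.ones(n)          # b = A e, as in Section 3

lam = np.linalg.eigvalsh(W)            # eigenvalues of P^{-1} W for P = I
alpha = np.sqrt(lam[0] * lam[-1])      # alpha = sqrt(lam_min * lam_max), as in [13]
beta = beta_from_13(alpha, lam[0], lam[-1])

x, it = gmhss(W, T, P, b, alpha, beta)
print("IT =", it, " relres =", np.linalg.norm(b - (W + 1j * T) @ x) / np.linalg.norm(b))
```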


5 Conclusions and remarks

In this paper, we have presented a generalized MHSS iterative method for augmented systems and demonstrated that this method converges to the unique solution of the linear system. With suitable choices of the parameters and of the matrix P, the spectral radii of the iteration matrices, the number of iterations (IT) and the elapsed CPU time of the proposed method are smaller than those in [13], as shown by the numerical experiments. In particular, one may discuss how to shrink the upper bound δ(α, β) further and obtain a set of new optimal parameters in order to accelerate the convergence of the considered method.

References

[1] M. Benzi and G. H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl., 26 (2004), pp. 20-41.

[2] Z.-Z. Bai, G. H. Golub, and M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), pp. 603-626.

[3] Z.-Z. Bai, G. H. Golub, L.-Z. Lu, and J.-F. Yin, Block triangular and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Sci. Comput., 26 (2005), pp. 844-863.

[4] Z.-Z. Bai, G. H. Golub, and J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Technical Report SCCM-02-12, Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, Stanford, CA, 2002.

[5] G. H. Golub and D. Vanderstraeten, On the preconditioning of matrices with a dominant skew-symmetric component, Numer. Algorithms, 25 (2000), pp. 223-239.

[6] Z.-Z. Bai, G. H. Golub, and M. K. Ng, On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations, Numer. Linear Algebra Appl., 14 (2007), pp. 319-335.

[7] O. Widlund, A Lanczos method for a class of nonsymmetric systems of linear equations, SIAM J. Numer. Anal., 15 (1978), pp. 801-812.

[8] C. Greif and J. Varah, Iterative solution of cyclically reduced systems arising from discretization of the three-dimensional elliptic equations, SIAM J. Sci. Comput., 19 (1998), pp. 1918-1940.

[9] C. Greif and J. Varah, Block stationary methods for nonsymmetric cyclically reduced systems arising from three-dimensional elliptic equations, SIAM J. Matrix Anal. Appl., 20 (1999), pp. 1038-1059.


[10] H. Elman and G. H. Golub, Iterative methods for cyclically reduced non-self-adjoint linear systems, Math. Comput., 54 (1990), pp. 671-700.

[11] L. Li, T.-Z. Huang, and X.-P. Liu, Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems, Computers and Mathematics with Applications, 54 (2007), pp. 147-159.

[12] T.-Z. Huang, S.-L. Wu, and C.-X. Li, The spectral properties of the Hermitian and skew-Hermitian splitting preconditioner for generalized saddle point problems, Journal of Computational and Applied Mathematics, 229 (2009), pp. 37-46.

[13] Z.-Z. Bai, M. Benzi, and F. Chen, Modified HSS iteration methods for a class of complex symmetric linear systems, Computing, 87 (2010), pp. 93-111.

[14] X.-X. Guo and S. Wang, Modified HSS iteration methods for a class of non-Hermitian positive-definite linear systems, Applied Mathematics and Computation, 218 (2012), pp. 10122-10128.

[15] O. Axelsson and A. Kucherov, Real valued iterative methods for solving complex symmetric linear systems, Numer. Linear Algebra Appl., 7 (2000), pp. 197-218.
