A Generalized HSS Iteration Method for Continuous Sylvester Equations

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2014, Article ID 578102, 9 pages. http://dx.doi.org/10.1155/2014/578102

Research Article

A Generalized HSS Iteration Method for Continuous Sylvester Equations

Xu Li,¹ Yu-Jiang Wu,¹ Ai-Li Yang,¹ and Jin-Yun Yuan²

¹ School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, Gansu, China
² Department of Mathematics, Federal University of Paraná, Centro Politécnico, CP 19.081, 81531-980 Curitiba, PR, Brazil

Correspondence should be addressed to Xu Li; [email protected] and Yu-Jiang Wu; [email protected]

Received 20 August 2013; Accepted 13 December 2013; Published 12 January 2014

Academic Editor: Qing-Wen Wang

Copyright © 2014 Xu Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Based on the Hermitian and skew-Hermitian splitting (HSS) iteration technique, we establish a generalized HSS (GHSS) iteration method for solving large sparse continuous Sylvester equations with non-Hermitian and positive definite/semidefinite matrices. The GHSS method is essentially a four-parameter iteration which not only covers the standard HSS iteration but also enables us to optimize the iterative process. An exact parameter region of convergence for the method is strictly proved, and a minimum value for the upper bound of the iterative spectrum is derived. Moreover, to reduce the computational cost, we establish an inexact variant of the GHSS (IGHSS) iteration method, whose convergence property is discussed. Numerical experiments illustrate the efficiency and robustness of the GHSS iteration method and its inexact variant.

1. Introduction

Consider the following continuous Sylvester equation:

    AX + XB = C,   (1)

where A ∈ C^{m×m}, B ∈ C^{n×n}, and C ∈ C^{m×n} are given complex matrices. Assume that (i) A, B, and C are large and sparse matrices; (ii) at least one of A and B is non-Hermitian; (iii) both A and B are positive semidefinite, and at least one of them is positive definite. Since, under assumptions (i)–(iii), there is no common eigenvalue between A and −B, we obtain from [1, 2] that the continuous Sylvester equation (1) has a unique solution. Obviously, the continuous Lyapunov equation is a special case of the continuous Sylvester equation (1) with B = A* and C Hermitian, where A* represents the conjugate transpose of the matrix A. The continuous Sylvester equation arises in several areas of application; for more details about the practical backgrounds of this class of problems, we refer to [2–15] and the references therein.

Before giving its numerical scheme, we rewrite the continuous Sylvester equation (1) as the mathematically equivalent system of linear equations

    𝒜x = c,   (2)

where 𝒜 = I ⊗ A + B^T ⊗ I; the vectors x and c contain the concatenated columns of the matrices X and C, respectively, with ⊗ being the Kronecker product symbol and B^T representing the transpose of the matrix B. However, solving this Kronecker form of the continuous Sylvester equation (1) by an iteration method is quite expensive, and the resulting mn × mn system can be severely ill-conditioned.
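The equivalence between (1) and (2) rests on the identities vec(AX) = (I ⊗ A)vec(X) and vec(XB) = (B^T ⊗ I)vec(X), with vec stacking columns. The following small Python check illustrates this numerically; the dimensions, random seed, and variable names are our own illustrative choices, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A, B, X = rng.random((m, m)), rng.random((n, n)), rng.random((m, n))

# Kronecker form of the Sylvester operator: I (x) A + B^T (x) I.
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))

x = X.flatten(order="F")                  # vec(X): stacked columns
c = (A @ X + X @ B).flatten(order="F")    # vec(AX + XB)
assert np.allclose(K @ x, c)              # the system (2)
```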


There is a large number of numerical methods for solving the continuous Sylvester equation (1). The Bartels-Stewart and the Hessenberg-Schur methods [16, 17] are direct algorithms, which can only be applied to problems of reasonably small size. When the matrices A and B become large and sparse, iterative methods are usually employed for efficiently and accurately solving the continuous Sylvester equation (1), for instance, Smith's method [18], the alternating direction implicit (ADI) method [19–22], and others [23–26]. Recently, Bai established the Hermitian and skew-Hermitian splitting (HSS) iteration method [4] for solving the continuous Sylvester equation (1), which is based on the Hermitian and skew-Hermitian splitting of the matrices A and B. This HSS iteration method is a matrix variant of the original HSS iteration method first proposed by Bai et al. [27] for solving systems of linear equations; see [28–39] for more detailed descriptions of the HSS iteration method and its variants.

To further improve the convergence efficiency, in this paper we present a new generalized Hermitian and skew-Hermitian splitting (GHSS) method for solving the continuous Sylvester equation (1). It is a four-parameter iteration, which enables the optimization of the iterative process and thereby achieves high efficiency and robustness. Similar approaches that use parameterized acceleration techniques in the design of iterative methods can be found in [34–39].

In the remainder of this paper, a matrix sequence {Y^{(k)}}_{k=0}^∞ ⊆ C^{m×n} is said to be convergent to a matrix Y ∈ C^{m×n} if the corresponding vector sequence {y^{(k)}}_{k=0}^∞ ⊆ C^{mn} is convergent to the corresponding vector y ∈ C^{mn}, where the vectors y^{(k)} and y contain the concatenated columns of the matrices Y^{(k)} and Y, respectively. If {Y^{(k)}}_{k=0}^∞ is convergent, then its convergence factor and convergence rate are defined as those of {y^{(k)}}_{k=0}^∞, correspondingly. In addition, we use sp(V), ‖V‖_2, and ‖V‖_F to denote the spectrum, the spectral norm, and the Frobenius norm of the matrix V ∈ C^{m×m}, respectively. Note that ‖·‖_2 is also used to represent the 2-norm of a vector.

The rest of this paper is organized as follows. In Section 2, we present the GHSS method for solving the continuous Sylvester equation (1), which uses four parameters instead of the two parameters of the HSS method [4]. An exact parameter region of convergence for the method is strictly proved and a minimum value for the upper bound of the iterative spectrum is derived in Section 3. In Section 4, an inexact variant of the GHSS (IGHSS) iteration method is presented and its convergence property is studied. Numerical examples are given in Section 5 to illustrate the theoretical results and the effectiveness of the GHSS method. Finally, we draw our conclusions in Section 6.

2. The GHSS Method

Here and in the sequel, we use H(V) := (1/2)(V + V*) and S(V) := (1/2)(V − V*) to denote the Hermitian part and the skew-Hermitian part of the square matrix V, respectively. Obviously, the matrix V naturally possesses the Hermitian and skew-Hermitian splitting (HSS)

    V = H(V) + S(V);   (3)

see [4, 27, 28].

Similar to the HSS method [4], we obtain the following splittings of A and B:

    A = (α_1 I + H(A)) − (α_1 I − S(A)) = (β_1 I + S(A)) − (β_1 I − H(A)),
    B = (α_2 I + H(B)) − (α_2 I − S(B)) = (β_2 I + S(B)) − (β_2 I − H(B)),   (4)

where α_j (j = 1, 2) are given nonnegative constants, β_j (j = 1, 2) are given positive constants, and I is the identity matrix of suitable dimension. Then the continuous Sylvester equation (1) can be equivalently reformulated as

    (α_1 I + H(A)) X + X (α_2 I + H(B)) = (α_1 I − S(A)) X + X (α_2 I − S(B)) + C,
    (β_1 I + S(A)) X + X (β_2 I + S(B)) = (β_1 I − H(A)) X + X (β_2 I − H(B)) + C.   (5)

Under assumptions (i)–(iii), we observe that there is no common eigenvalue between the matrices α_1 I + H(A) and −(α_2 I + H(B)), nor between the matrices β_1 I + S(A) and −(β_2 I + S(B)), so the above two fixed-point matrix equations have unique solutions for all given right-hand side matrices. This leads to the following generalized Hermitian and skew-Hermitian splitting (GHSS) iteration method for solving the continuous Sylvester equation (1).

Algorithm 1 (the GHSS iteration method). Given an initial guess X^{(0)} ∈ C^{m×n}, compute X^{(k+1)} ∈ C^{m×n} for k = 0, 1, 2, ... using the following iteration scheme until {X^{(k)}}_{k=0}^∞ satisfies the stopping criterion:

    (α_1 I + H(A)) X^{(k+1/2)} + X^{(k+1/2)} (α_2 I + H(B)) = (α_1 I − S(A)) X^{(k)} + X^{(k)} (α_2 I − S(B)) + C,
    (β_1 I + S(A)) X^{(k+1)} + X^{(k+1)} (β_2 I + S(B)) = (β_1 I − H(A)) X^{(k+1/2)} + X^{(k+1/2)} (β_2 I − H(B)) + C,   (6)

where α_j (j = 1, 2) are given nonnegative constants, β_j (j = 1, 2) are given positive constants, and I is the identity matrix of suitable dimension.

Remark 2. The GHSS method has the same algorithmic structure as the HSS method [4], and thus the two methods have the same computational cost in each iteration step. It is easy to see that the former reduces to the latter when α_1 = β_1 and α_2 = β_2.

Remark 3. When B is a zero matrix and X^{(k)} and C reduce to column vectors, the GHSS iteration method becomes the corresponding method for systems of linear equations; see [34–36]. In addition, when B = A* and C is Hermitian, it leads to a GHSS iteration method for continuous Lyapunov equations.

3. Convergence Analysis of the GHSS Method

Let H(A), H(B), and S(A), S(B) be the Hermitian and the skew-Hermitian parts of the matrices A and B, respectively.


Denote

    λ_max^{(H(A))} = max_{λ_j ∈ sp(H(A))} {λ_j},    λ_min^{(H(A))} = min_{λ_j ∈ sp(H(A))} {λ_j},
    ξ_max^{(S(A))} = max_{iξ_j ∈ sp(S(A))} {|ξ_j|},   ξ_min^{(S(A))} = min_{iξ_j ∈ sp(S(A))} {|ξ_j|},
    μ_max^{(H(B))} = max_{μ_k ∈ sp(H(B))} {μ_k},    μ_min^{(H(B))} = min_{μ_k ∈ sp(H(B))} {μ_k},
    ζ_max^{(S(B))} = max_{iζ_k ∈ sp(S(B))} {|ζ_k|},   ζ_min^{(S(B))} = min_{iζ_k ∈ sp(S(B))} {|ζ_k|},   (7)

with i = √−1, and

    Θ_max = λ_max^{(H(A))} + μ_max^{(H(B))},   Θ_min = λ_min^{(H(A))} + μ_min^{(H(B))},
    Υ_max = ξ_max^{(S(A))} + ζ_max^{(S(B))},   Υ_min = ξ_min^{(S(A))} + ζ_min^{(S(B))}.   (8)

In addition, denote 𝒜 = ℋ + 𝒮, with

    ℋ = H(𝒜) = I ⊗ H(A) + H(B)^T ⊗ I,   𝒮 = S(𝒜) = I ⊗ S(A) + S(B)^T ⊗ I.   (9)

Obviously, Θ_max, Υ_max and Θ_min, Υ_min are the upper and the lower bounds of the eigenvalues of the matrices ℋ and 𝒮, respectively. By making use of Theorems 2.2 and 2.5 in [35], we can obtain the following convergence theorem for the GHSS iteration method for solving the continuous Sylvester equation (1).

Theorem 4. Assume that A ∈ C^{m×m} and B ∈ C^{n×n} are positive semidefinite matrices, and that at least one of them is positive definite. Let α_j (j = 1, 2) be nonnegative constants and β_j (j = 1, 2) be positive constants. Denote by

    M(α, β) = (βI + 𝒮)^{−1} (βI − ℋ) (αI + ℋ)^{−1} (αI − 𝒮),   (10)

    α = α_1 + α_2,   β = β_1 + β_2.   (11)

Then the convergence factor of the GHSS iteration method (6) is given by the spectral radius ρ(M(α, β)) of the matrix M(α, β), which is bounded by

    σ(α, β) := max_{Θ_min ≤ Θ ≤ Θ_max} |β − Θ|/|α + Θ| · max_{Υ_min ≤ Υ ≤ Υ_max} √((α² + Υ²)/(β² + Υ²)).   (12)

And, if the parameters α and β satisfy

    (α, β) ∈ ⋃_{ℓ=1}^{4} Ω_ℓ,   (13)

where

    Ω_1 = {(α, β) | α ≤ β < β*(α)},
    Ω_2 = {(α, β) | β ≥ max{α, β*(α)}, φ_1(α, β) > 0},
    Ω_3 = {(α, β) | β*(α) ≤ β < α},
    Ω_4 = {(α, β) | β < min{α, β*(α)}, φ_2(α, β) > 0},   (14)

with the functions φ_1(α, β), φ_2(α, β), and β*(α) denoted by

    φ_1(α, β) = (β − α)(Θ_min² − Υ_max²) + 2αβΘ_min + 2Υ_max² Θ_min,
    φ_2(α, β) = (β − α)(Θ_max² − Υ_min²) + 2αβΘ_max + 2Υ_min² Θ_max,
    β*(α) = (α(Θ_max + Θ_min) + 2Θ_max Θ_min)/(2α + Θ_max + Θ_min) ∈ [Θ_min, Θ_max],   (15)

then σ(α, β) < 1; that is, the GHSS iteration method (6) converges to the exact solution X* ∈ C^{m×n} of the continuous Sylvester equation (1).

Proof. By making use of the Kronecker product, we can reformulate the GHSS iteration (6) in the following matrix-vector form:

    (I ⊗ (α_1 I + H(A)) + (α_2 I + H(B))^T ⊗ I) x^{(k+1/2)} = (I ⊗ (α_1 I − S(A)) + (α_2 I − S(B))^T ⊗ I) x^{(k)} + c,
    (I ⊗ (β_1 I + S(A)) + (β_2 I + S(B))^T ⊗ I) x^{(k+1)} = (I ⊗ (β_1 I − H(A)) + (β_2 I − H(B))^T ⊗ I) x^{(k+1/2)} + c,   (16)

which can be arranged equivalently as

    (αI + ℋ) x^{(k+1/2)} = (αI − 𝒮) x^{(k)} + c,
    (βI + 𝒮) x^{(k+1)} = (βI − ℋ) x^{(k+1/2)} + c.   (17)

Evidently, the iteration scheme (17) is the GHSS iteration method for solving the system of linear equations (2), with 𝒜 = ℋ + 𝒮; see [34, 35]. After concrete operations, the GHSS iteration (17) can also be expressed as the stationary iteration

    x^{(k+1)} = M(α, β) x^{(k)} + N(α, β) c,   (18)

where M(α, β) is the iteration matrix defined in (10), with ℋ, 𝒮 and α, β given in (9) and (11), respectively, and

    N(α, β) = (α + β)(βI + 𝒮)^{−1} (αI + ℋ)^{−1}.   (19)

We can easily verify that ℋ is a Hermitian matrix, 𝒮 is a skew-Hermitian matrix, α is a nonnegative constant, and β is a positive constant. Moreover, when either A ∈ C^{m×m} or B ∈ C^{n×n} is positive definite, the matrix ℋ is Hermitian positive definite.


The spectral radius of the iteration matrix M(α, β) clearly satisfies

    ρ(M(α, β)) = ρ((βI + 𝒮)^{−1} (βI − ℋ) (αI + ℋ)^{−1} (αI − 𝒮))
               ≤ ‖(βI − ℋ)(αI + ℋ)^{−1}‖_2 ‖(αI − 𝒮)(βI + 𝒮)^{−1}‖_2
               = max_{λ_j ∈ sp(H(A)), μ_k ∈ sp(H(B))} |β − λ_j − μ_k|/|α + λ_j + μ_k| · max_{iξ_j ∈ sp(S(A)), iζ_k ∈ sp(S(B))} √((α² + (ξ_j + ζ_k)²)/(β² + (ξ_j + ζ_k)²))
               ≤ max_{Θ_min ≤ Θ ≤ Θ_max} |β − Θ|/|α + Θ| · max_{Υ_min ≤ Υ ≤ Υ_max} √((α² + Υ²)/(β² + Υ²));   (20)

then the bound for ρ(M(α, β)) is given by (12). Hence, by making use of Theorem 2.2 in [35] we know that

    ρ(M(α, β)) ≤ σ(α, β) < 1,   ∀(α, β) ∈ ⋃_{ℓ=1}^{4} Ω_ℓ.   (21)

Therefore, if the parameters α and β satisfy the condition (13), the GHSS iteration method (17) converges to the exact solution x* ∈ C^{mn} of the system of linear equations (2). This directly shows that the GHSS iteration method (6) converges to the exact solution X* ∈ C^{m×n} of the continuous Sylvester equation (1) when α and β satisfy the condition (13), with the convergence factor ρ(M(α, β)) being bounded by σ(α, β). This completes the proof.

Remark 5. When α_1 = β_1 and α_2 = β_2, the GHSS method reduces to the HSS method, which is unconditionally convergent because α = β.

Theorem 4 gives the convergence conditions of the GHSS iteration method for the continuous Sylvester equation (1) by analyzing the upper bound σ(α, β) of the spectral radius of the iteration matrix M(α, β). Since the optimal parameters α and β that minimize the spectral radius ρ(M(α, β)) are hardly obtainable, we instead give, in the following corollary, the parameters α* and β* that minimize the upper bound σ(α, β) of the spectral radius ρ(M(α, β)).

Corollary 6. The theoretical quasi-optimal parameters that minimize the upper bound σ(α, β) are given by

    (α*, β*) ≡ arg min_{α,β} {σ(α, β)} =
        (γ_1, β*(γ_1)),  if Θ_max Θ_min ≤ Υ_min²,
        (γ_2, β*(γ_2)),  if Υ_min² < Θ_max Θ_min < Υ_max²,
        (γ_3, β*(γ_3)),  if Θ_max Θ_min ≥ Υ_max²,   (22)

with

    γ_1 = (Υ_min² − Θ_max Θ_min + √((Υ_min² + Θ_max²)(Υ_min² + Θ_min²)))/(Θ_max + Θ_min),
    γ_2 = √(Θ_max Θ_min),
    γ_3 = (Υ_max² − Θ_max Θ_min + √((Υ_max² + Θ_max²)(Υ_max² + Θ_min²)))/(Θ_max + Θ_min),   (23)

and the corresponding upper bound of the convergence factor is given by

    σ(α*, β*) =
        σ(γ_1),  if Θ_max Θ_min ≤ Υ_min²,
        σ(γ_2),  if Υ_min² < Θ_max Θ_min < Υ_max²,
        σ(γ_3),  if Θ_max Θ_min ≥ Υ_max²,   (24)

where

    σ(γ) := σ(γ, β*(γ)) =
        ((β*(γ) − Θ_min)/(γ + Θ_min)) · √((γ² + Υ_min²)/(β*(γ)² + Υ_min²)),   γ > γ_2,
        ((β*(γ) − Θ_min)/(γ + Θ_min)) · √((γ² + Υ_max²)/(β*(γ)² + Υ_max²)),   γ ≤ γ_2.   (25)

Proof. It is straightforward from Theorem 2.5 of [35].

Remark 7. We observe that α* = β* = √(Θ_max Θ_min) when Υ_min² < Θ_max Θ_min < Υ_max², which means that GHSS with the theoretical quasi-optimal parameters reduces to HSS with the theoretical quasi-optimal parameter [4] in this case. In the other cases, the GHSS iteration is superior to the HSS iteration when both of them use the theoretical quasi-optimal parameters. This phenomenon is also illustrated in the numerical results of Section 5.

Remark 8. The actual iteration parameters α_i and β_i (i = 1, 2) can be chosen as any α_i = α̂_i and β_i = β̂_i (i = 1, 2) such that α̂_1 + α̂_2 = α* and β̂_1 + β̂_2 = β*. For example, we may take α̂_1 = α̂_2 = (1/2)α* and β̂_1 = β̂_2 = (1/2)β*.
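The case analysis in (22)–(24) is easy to mechanize. Below is a small Python sketch (dense eigensolves only, so it is meant for modest test problems, and the function names are ours); the even split α̂_1 = α̂_2 = α*/2, β̂_1 = β̂_2 = β*/2 of Remark 8 is applied at the end.

```python
import numpy as np

def ghss_quasi_optimal(A, B):
    """Quasi-optimal parameters of Corollary 6, computed from the extremal
    eigenvalue bounds (7)-(8); returns (alpha1, alpha2, beta1, beta2)."""
    HA, SA = (A + A.conj().T) / 2, (A - A.conj().T) / 2
    HB, SB = (B + B.conj().T) / 2, (B - B.conj().T) / 2
    lam, mu = np.linalg.eigvalsh(HA), np.linalg.eigvalsh(HB)
    xi = np.abs(np.linalg.eigvals(SA).imag)  # eigenvalues of S(A) are i*xi_j
    ze = np.abs(np.linalg.eigvals(SB).imag)
    th_max, th_min = lam.max() + mu.max(), lam.min() + mu.min()
    up_max, up_min = xi.max() + ze.max(), xi.min() + ze.min()

    def beta_star(a):  # formula (15)
        return (a * (th_max + th_min) + 2 * th_max * th_min) / (2 * a + th_max + th_min)

    def gamma(up):     # gamma_1 and gamma_3 in (23), depending on Upsilon
        return (up**2 - th_max * th_min
                + np.sqrt((up**2 + th_max**2) * (up**2 + th_min**2))) / (th_max + th_min)

    p = th_max * th_min
    if p <= up_min**2:        # first case of (22)
        a = gamma(up_min)
    elif p >= up_max**2:      # third case of (22)
        a = gamma(up_max)
    else:                     # gamma_2: GHSS coincides with HSS (Remark 7)
        a = np.sqrt(p)
    b = beta_star(a)
    # Remark 8: split the sums alpha* and beta* evenly over the four parameters.
    return a / 2, a / 2, b / 2, b / 2
```

In the middle case the routine returns the quasi-optimal HSS parameter of [4], consistent with Remark 7.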

4. Inexact GHSS Iteration Methods

In the process of the GHSS iteration (6), two subproblems need to be solved exactly. This is a tough task, which is costly and even impractical in actual implementations. To further improve the computational efficiency of the GHSS iteration, we develop an inexact GHSS (IGHSS) iteration, which solves the two subproblems iteratively [18–24]. We write the IGHSS iteration scheme in the following algorithm for solving the continuous Sylvester equation (1).

Algorithm 9 (the IGHSS iteration method). Given an initial guess X^{(0)} ∈ C^{m×n}, this algorithm leads to the solution of the continuous Sylvester equation (1):

    k = 0;
    while (not convergent)
        R^{(k)} = C − A X^{(k)} − X^{(k)} B;
        approximately solve (α_1 I + H(A)) Z^{(k)} + Z^{(k)} (α_2 I + H(B)) = R^{(k)} by employing an effective iteration method, such that the residual P^{(k)} = R^{(k)} − (α_1 I + H(A)) Z^{(k)} − Z^{(k)} (α_2 I + H(B)) of the iteration satisfies ‖P^{(k)}‖_F ≤ ε_k ‖R^{(k)}‖_F;
        X^{(k+1/2)} = X^{(k)} + Z^{(k)};
        R^{(k+1/2)} = C − A X^{(k+1/2)} − X^{(k+1/2)} B;
        approximately solve (β_1 I + S(A)) Z^{(k+1/2)} + Z^{(k+1/2)} (β_2 I + S(B)) = R^{(k+1/2)} by employing an effective iteration method, such that the residual Q^{(k+1/2)} = R^{(k+1/2)} − (β_1 I + S(A)) Z^{(k+1/2)} − Z^{(k+1/2)} (β_2 I + S(B)) of the iteration satisfies ‖Q^{(k+1/2)}‖_F ≤ η_k ‖R^{(k+1/2)}‖_F;
        X^{(k+1)} = X^{(k+1/2)} + Z^{(k+1/2)};
        k = k + 1;
    end.

Here, {ε_k} and {η_k} are prescribed tolerances used to control the accuracies of the inner iterations. We remark that when α_1 = β_1 and α_2 = β_2, the IGHSS method reduces to the inexact HSS (IHSS) method [4].

The convergence properties of this kind of two-step iteration have been carefully studied in [27, 31]. By making use of Theorem 3.1 in [27], we can demonstrate the following convergence result about the above IGHSS iteration method.

Theorem 10. Let the conditions of Theorem 4 be satisfied. If {X^{(k)}}_{k=0}^∞ ⊆ C^{m×n} is an iteration sequence generated by the IGHSS iteration method and if X* ∈ C^{m×n} is the exact solution of the continuous Sylvester equation (1), then it holds that

    ‖X^{(k+1)} − X*‖_S ≤ (σ(α, β) + μθε_k + θ(ρ + θνε_k)η_k) · ‖X^{(k)} − X*‖_S,   k = 0, 1, 2, ...,   (26)

where the norm ‖·‖_S is defined as

    ‖Y‖_S = ‖(β_1 I + S(A)) Y + Y (β_2 I + S(B))‖_F   (27)


for any matrix Y ∈ C^{m×n}, and the constants μ, θ, ρ, and ν are given by

    μ = ‖(βI − ℋ)(αI + ℋ)^{−1}‖_2,   θ = ‖𝒜(βI + 𝒮)^{−1}‖_2,
    ρ = ‖(βI + 𝒮)(αI + ℋ)^{−1}(αI − 𝒮)(βI + 𝒮)^{−1}‖_2,   ν = ‖(βI + 𝒮)(αI + ℋ)^{−1}‖_2,   (28)

with the matrices ℋ and 𝒮 being defined in (9) and the constants α and β being defined in (11). In particular, when

    σ(α, β) + μθε_max + θ(ρ + θνε_max)η_max < 1,   (29)

the iteration sequence {X^{(k)}}_{k=0}^∞ ⊆ C^{m×n} converges to X* ∈ C^{m×n}, where ε_max = max_k{ε_k} and η_max = max_k{η_k}.

Proof. By making use of the Kronecker product and the notations introduced in Theorem 4, we can reformulate the above-described IGHSS iteration in the following matrix-vector form:

    (αI + ℋ) z^{(k)} = r^{(k)},   x^{(k+1/2)} = x^{(k)} + z^{(k)},
    (βI + 𝒮) z^{(k+1/2)} = r^{(k+1/2)},   x^{(k+1)} = x^{(k+1/2)} + z^{(k+1/2)},   (30)

with r^{(k)} = c − 𝒜x^{(k)} and r^{(k+1/2)} = c − 𝒜x^{(k+1/2)}, where z^{(k)} is such that the residual

    p^{(k)} = r^{(k)} − (αI + ℋ) z^{(k)}   (31)

satisfies ‖p^{(k)}‖_2 ≤ ε_k ‖r^{(k)}‖_2, and z^{(k+1/2)} is such that the residual

    q^{(k+1/2)} = r^{(k+1/2)} − (βI + 𝒮) z^{(k+1/2)}   (32)

satisfies ‖q^{(k+1/2)}‖_2 ≤ η_k ‖r^{(k+1/2)}‖_2. Evidently, the iteration scheme (30) is the inexact GHSS iteration method for solving the system of linear equations (2), with 𝒜 = ℋ + 𝒮; see [34, 36]. Hence, by making use of Theorem 3.1 in [27] we can obtain the estimate

    |‖x^{(k+1)} − x*‖| ≤ (σ(α, β) + μθε_k + θ(ρ + θνε_k)η_k) · |‖x^{(k)} − x*‖|,   k = 0, 1, 2, ...,   (33)

where the norm |‖·‖| is defined as follows: for a vector y ∈ C^{mn}, |‖y‖| = ‖(βI + 𝒮)y‖_2; and for a matrix Y ∈ C^{mn×mn}, |‖Y‖| = ‖(βI + 𝒮)Y(βI + 𝒮)^{−1}‖_2 is the correspondingly induced matrix norm. Note that, for y = vec(Y),

    |‖y‖| = ‖(βI + 𝒮)y‖_2 = ‖(β_1 I + S(A)) Y + Y (β_2 I + S(B))‖_F = ‖Y‖_S.   (34)

Hence, we can equivalently rewrite the estimate (33) as

    ‖X^{(k+1)} − X*‖_S ≤ (σ(α, β) + μθε_k + θ(ρ + θνε_k)η_k) · ‖X^{(k)} − X*‖_S,   k = 0, 1, 2, ....   (35)

This proves the theorem.

We remark that Theorem 10 gives choices of the tolerances {ε_k} and {η_k} that guarantee convergence. In general, Theorem 10 shows that, in order to guarantee the convergence of the IGHSS iteration, it is not necessary for {ε_k} and {η_k} to approach zero as k increases; all we need is that the condition (29) be satisfied. However, the theoretically optimal tolerances {ε_k} and {η_k} are difficult to determine.
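To make the role of the tolerances concrete, here is a minimal Python sketch of the outer loop of Algorithm 9. The callback inner_solve is a hypothetical placeholder for whatever inner iteration one prefers (the experiments in Section 5 use Smith's method); it is assumed to return Z with ‖R − PZ − ZQ‖_F ≤ rtol·‖R‖_F, and the fixed tolerances eps = eta = 0.01 mirror the choice made in Section 5.

```python
import numpy as np

def ighss(A, B, C, a1, a2, b1, b2, inner_solve, eps=0.01, eta=0.01,
          tol=1e-6, maxit=500):
    """Outer IGHSS loop of Algorithm 9; inner_solve(P, Q, R, rtol) is a
    user-supplied approximate Sylvester solver (hypothetical interface)."""
    HA, SA = (A + A.conj().T) / 2, (A - A.conj().T) / 2
    HB, SB = (B + B.conj().T) / 2, (B - B.conj().T) / 2
    Im, In = np.eye(A.shape[0]), np.eye(B.shape[0])
    X = np.zeros_like(C)
    for k in range(maxit):
        R = C - A @ X - X @ B
        if np.linalg.norm(R) <= tol * np.linalg.norm(C):  # outer residual test
            break
        # Hermitian half-step, solved to relative tolerance eps (= eps_k).
        X = X + inner_solve(a1 * Im + HA, a2 * In + HB, R, eps)
        R = C - A @ X - X @ B
        # Skew-Hermitian half-step, solved to relative tolerance eta (= eta_k).
        X = X + inner_solve(b1 * Im + SA, b2 * In + SB, R, eta)
    return X
```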

5. Numerical Results

In this section, we perform numerical tests to exhibit the superiority of GHSS and IGHSS over HSS and IHSS when they are used as solvers for the continuous Sylvester equation (1), in terms of iteration counts (denoted IT) and CPU times (in seconds, denoted CPU).

In our implementations, the initial guess is chosen to be the zero matrix, and the iteration is terminated once the current iterate X^{(k)} satisfies

    ‖C − A X^{(k)} − X^{(k)} B‖_F / ‖C‖_F ≤ 10^{−6}.   (36)

In addition, all subproblems involved in each step of the HSS and GHSS iteration methods are solved exactly by the method in [16]. In the IHSS and IGHSS iteration methods, we set ε_k = η_k = 0.01, k = 0, 1, 2, ..., and use Smith's method [18] as the inner iteration scheme.

We consider the continuous Sylvester equation (1) with m = n and the matrices

    A = B = M + qN + (100/(n + 1)²) I,   (37)

where M, N ∈ R^{n×n} are the tridiagonal matrices given by

    M = tridiag(−1, 2, −1),   N = tridiag(0.5, 0, −0.5);   (38)

see also [4–6, 27, 40, 41].
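For reference, a Python sketch of the test matrices (37)–(38), reading tridiag(a, b, c) as constant sub-, main, and superdiagonals in that order (our assumption about the notation):

```python
import numpy as np

def test_matrix(n, q):
    """A = B = M + q*N + 100/(n+1)^2 * I, with M = tridiag(-1, 2, -1)
    and N = tridiag(0.5, 0, -0.5), as in (37)-(38)."""
    M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    N = 0.5 * np.eye(n, k=-1) - 0.5 * np.eye(n, k=1)
    return M + q * N + 100.0 / (n + 1) ** 2 * np.eye(n)
```

Matrices built this way can be passed directly to the ghss and ighss sketches above, with C chosen, for example, at random.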

From [4] we know that the HSS iteration method considerably outperforms the SOR iteration method in both iteration steps and CPU time, so here we solve this continuous Sylvester equation only by the GHSS and the HSS iteration methods and their inexact variants.

In Table 1, numerical results for GHSS and HSS with the experimental optimal iteration parameters are listed, where α1,exp (with α2,exp = α1,exp), β1,exp (with β2,exp = β1,exp), and α_exp (with β_exp = α_exp) represent the experimentally found optimal values of the iteration parameters used for the GHSS and the HSS iterations, respectively.

Table 1: Numerical results for GHSS and HSS with the experimental optimal iteration parameters.

                    GHSS                                     HSS
q        n       α1,exp   β1,exp   IT      CPU     α_exp    IT      CPU
q=0.01   10      0.01     0.83      2    0.0006    1.66     12    0.0038
         20      0.01     0.90      3    0.0018    0.80     23    0.0148
         40      0.01     0.77      4    0.0141    0.42     43    0.1018
         80      0.01     0.06      8    0.1038    0.24     85    0.9943
         160     0.01     0.03     21    1.5376    0.14    169    11.605
q=0.1    10      0.03     1.00      4    0.0012    1.70     12    0.0036
         20      0.01     0.95      4    0.0025    0.84     24    0.0151
         40      0.01     0.61      6    0.0140    0.48     47    0.1102
         80      0.01     0.21     11    0.1265    0.26     93    1.0787
         160     0.01     0.10     24    1.6567    0.16    181    12.488
q=1      10      0.67     1.00      8    0.0025    1.82     13    0.0039
         20      0.29     0.96     13    0.0081    0.98     23    0.0148
         40      0.29     0.59     24    0.0563    0.62     35    0.0821
         80      0.34     0.43     41    0.4750    0.46     50    0.5885
         160     0.31     0.33     66    4.5142    0.34     70    4.7983
q=10     10      6.70     2.00     11    0.0034    1.88     12    0.0040
         20      8.80     1.90     15    0.0097    1.16     21    0.0135
         40      4.90     1.50     20    0.0477    0.68     37    0.0898
         80      3.60     1.30     29    0.3395    0.90     57    0.6780
         160     2.00     1.00     40    2.7869    0.68     83    5.8003
q=100    10      70.5     2.58      7    0.0021    1.72     12    0.0036
         20      49.0     2.70      8    0.0052    0.90     20    0.0130
         40      16.0     2.45     10    0.0243    0.48     36    0.0885
         80      6.00     1.70     15    0.1782    0.28     64    0.7664
         160     1.70     1.05     32    2.2333    0.16    110    7.6504

In Table 2, numerical results for GHSS and HSS with the theoretical quasi-optimal iteration parameters are listed, where α1* (with α2* = α1*), β1* (with β2* = β1*), and α* (with β* = α*) represent the theoretical quasi-optimal iteration parameters used for the GHSS and the HSS iterations, respectively.

Table 2: Numerical results for GHSS and HSS with the theoretical quasi-optimal iteration parameters.

                    GHSS                                       HSS
q        n       α1*      β1*       IT      CPU     α*        IT      CPU
q=0.01   10      0.0001   1.5236     2    0.0007    2.0752    15    0.0282
         20      0.0002   0.4705     3    0.0019    1.0234    27    0.0261
         40      0.0007   0.1294     4    0.0094    0.5147    50    0.1343
         80      0.0028   0.0361     8    0.0938    0.2593    91    1.0901
         160     0.0066   0.0151    21    1.4448    0.1303   169    11.704
q=0.1    10      0.0060   1.5263     4    0.0013    2.0752    15    0.0060
         20      0.0201   0.4861     6    0.0038    1.0234    27    0.0263
         40      0.0555   0.1793    15    0.0353    0.5147    49    0.1291
         80      0.0867   0.1151    47    0.5458    0.2593    93    1.0875
         160     0.0983   0.1017   161    11.069    0.1303   198    13.644
q=1      10      0.5322   1.7300     8    0.0026    2.0752    14    0.0045
         20      0.9733   1.0046    22    0.0142    1.0234    23    0.0342
         40      0.5147   0.5147    41    0.0967    0.5147    41    0.1106
         80      0.2593   0.2593    81    0.9405    0.2593    81    0.9638
         160     0.1303   0.1303   170    11.867    0.1303   170    11.872
q=10     10      2.0752   2.0752    12    0.0039    2.0752    12    0.0040
         20      1.0234   1.0234    23    0.0259    1.0234    23    0.0598
         40      0.5147   0.5147    44    0.1073    0.5147    44    0.1473
         80      0.2593   0.2593    85    1.0074    0.2593    85    1.0354
         160     0.1303   0.1303   169    11.913    0.1303   169    11.928
q=100    10      72.911   2.7778     7    0.0023    2.0752    12    0.0039
         20      26.701   2.0916     9    0.0060    1.0234    20    0.0823
         40      8.6843   1.6894    14    0.0342    0.5147    36    0.1035
         80      3.0610   1.2284    24    0.2858    0.2593    66    0.8051
         160     1.2364   0.7699    44    3.0805    0.1303   126    8.9284

In Table 3, numerical results for IGHSS and IHSS are listed; here, for convenience, we adopt the iteration parameters of Table 1 rather than the experimental optimal parameters for the inexact methods.

Table 3: Numerical results for IGHSS and IHSS.

                 IGHSS                  IHSS
q        n       IT      CPU       IT      CPU
q=0.01   10       2    0.0006      11    0.0031
         20       3    0.0015      21    0.0106
         40       4    0.0120      38    0.0818
         80       6    0.1005      75    0.8043
         160     18    0.9076     130    6.9022
q=0.1    10       4    0.0010      11    0.0032
         20       4    0.0019      21    0.0105
         40       5    0.0090      41    0.0905
         80      10    0.1065      79    0.9082
         160     20    0.8568     156    7.0812
q=1      10       7    0.0021      12    0.0032
         20      12    0.0075      20    0.0120
         40      21    0.0462      31    0.0691
         80      35    0.4036      45    0.5082
         160     52    2.8140      60    3.5928
q=10     10      10    0.0029      11    0.0035
         20      13    0.0081      20    0.0120
         40      17    0.0401      34    0.0806
         80      24    0.3059      50    0.5889
         160     30    1.5861      72    4.5002
q=100    10       6    0.0019      11    0.0032
         20       7    0.0045      18    0.0111
         40       8    0.0192      32    0.0832
         80      13    0.1579      55    0.6968
         160     27    1.1335      98    5.8509

From Tables 1–3 we observe that the GHSS and IGHSS methods perform better than the HSS and IHSS methods in terms of both iteration counts and CPU times. Therefore, the GHSS and IGHSS methods proposed in this work are two powerful and attractive iterative approaches for solving large sparse continuous Sylvester equations.

6. Conclusions

As a strategy for accelerating the convergence of iterations for solving a broad class of continuous Sylvester equations, we have proposed a four-parameter generalized HSS (GHSS) method. It is a generalization of the classical HSS method [4], to which it reduces when we take α_1 = β_1 and α_2 = β_2. We have demonstrated that the iteration sequence produced by the GHSS method converges to the unique solution of the continuous Sylvester equation when the parameters satisfy some moderate conditions, and we have derived a minimized upper bound for the iterative spectral radius. Moreover, to reduce the computational cost, an inexact variant of the GHSS (IGHSS) iteration method has been developed and its convergence property analyzed. Numerical results show that the new GHSS method and its inexact variant are typically more efficient and more flexible than the HSS and IHSS methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by the National Basic Research (973) Program of China under Grant no. 2011CB706903, the National Natural Science Foundation of China under Grant no. 11271173, the Mathematical Tianyuan

Foundation of China under Grant no. 11026064, and the CAPES and CNPq in Brazil.

References

[1] P. Lancaster, "Explicit solutions of linear matrix equations," SIAM Review, vol. 12, no. 4, pp. 544–566, 1970.
[2] P. Lancaster and M. Tismenetsky, The Theory of Matrices, Computer Science and Applied Mathematics, Academic Press, Orlando, Fla, USA, 2nd edition, 1985.
[3] Q.-W. Wang and Z.-H. He, "Solvability conditions and general solution for mixed Sylvester equations," Automatica, vol. 49, no. 9, pp. 2713–2719, 2013.
[4] Z.-Z. Bai, "On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations," Journal of Computational Mathematics, vol. 29, no. 2, pp. 185–198, 2011.
[5] Z.-Z. Bai, Y.-M. Huang, and M. K. Ng, "On preconditioned iterative methods for Burgers equations," SIAM Journal on Scientific Computing, vol. 29, no. 1, pp. 415–439, 2007.
[6] Z.-Z. Bai and M. K. Ng, "Preconditioners for nonsymmetric block Toeplitz-like-plus-diagonal linear systems," Numerische Mathematik, vol. 96, no. 2, pp. 197–220, 2003.
[7] P. Lancaster and L. Rodman, Algebraic Riccati Equations, Oxford Science Publications, The Clarendon Press, New York, NY, USA, 1995.
[8] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[9] A. Halanay and V. Răsvan, Applications of Liapunov Methods in Stability, vol. 245 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1993.
[10] A.-P. Liao, Z.-Z. Bai, and Y. Lei, "Best approximate solution of matrix equation AXB + CYD = E," SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 3, pp. 675–688, 2005.
[11] Z.-Z. Bai, X.-X. Guo, and S.-F. Xu, "Alternately linearized implicit iteration methods for the minimal nonnegative solutions of the nonsymmetric algebraic Riccati equations," Numerical Linear Algebra with Applications, vol. 13, no. 8, pp. 655–674, 2006.
[12] X.-X. Guo and Z.-Z. Bai, "On the minimal nonnegative solution of nonsymmetric algebraic Riccati equation," Journal of Computational Mathematics, vol. 23, no. 3, pp. 305–320, 2005.
[13] G.-P. Xu, M.-S. Wei, and D.-S. Zheng, "On solutions of matrix equation AXB + CYD = F," Linear Algebra and Its Applications, vol. 279, no. 1–3, pp. 93–109, 1998.
[14] J.-T. Zhou, R.-R. Wang, and Q. Niu, "A preconditioned iteration method for solving Sylvester equations," Journal of Applied Mathematics, vol. 2012, Article ID 401059, 12 pages, 2012.
[15] F. Yin and G.-X. Huang, "An iterative algorithm for the generalized reflexive solutions of the generalized coupled Sylvester matrix equations," Journal of Applied Mathematics, vol. 2012, Article ID 152805, 28 pages, 2012.
[16] R. H. Bartels and G. W. Stewart, "Solution of the matrix equation AX + XB = C [F4]," Communications of the ACM, vol. 15, no. 9, pp. 820–826, 1972.
[17] G. H. Golub, S. Nash, and C. Van Loan, "A Hessenberg-Schur method for the problem AX + XB = C," IEEE Transactions on Automatic Control, vol. 24, no. 6, pp. 909–913, 1979.
[18] R. A. Smith, "Matrix equation XA + BX = C," SIAM Journal on Applied Mathematics, vol. 16, no. 1, pp. 198–201, 1968.

[19] D. Calvetti and L. Reichel, "Application of ADI iterative methods to the restoration of noisy images," SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 1, pp. 165–186, 1996.
[20] D. Y. Hu and L. Reichel, "Krylov-subspace methods for the Sylvester equation," Linear Algebra and Its Applications, vol. 172, pp. 283–313, 1992.
[21] N. Levenberg and L. Reichel, "A generalized ADI iterative method," Numerische Mathematik, vol. 66, no. 1, pp. 215–233, 1993.
[22] E. L. Wachspress, "Iterative solution of the Lyapunov matrix equation," Applied Mathematics Letters, vol. 1, no. 1, pp. 87–90, 1988.
[23] G. Starke and W. Niethammer, "SOR for AX − XB = C," Linear Algebra and Its Applications, vol. 154–156, pp. 355–375, 1991.
[24] D. J. Evans and E. Galligani, "A parallel additive preconditioner for conjugate gradient method for AX + XB = C," Parallel Computing, vol. 20, no. 7, pp. 1055–1064, 1994.
[25] L. Jódar, "An algorithm for solving generalized algebraic Lyapunov equations in Hilbert space, applications to boundary value problems," Proceedings of the Edinburgh Mathematical Society: Series 2, vol. 31, no. 1, pp. 99–105, 1988.
[26] C.-Q. Gu and H.-Y. Xue, "A shift-splitting hierarchical identification method for solving Lyapunov matrix equations," Linear Algebra and Its Applications, vol. 430, no. 5-6, pp. 1517–1530, 2009.
[27] Z.-Z. Bai, G. H. Golub, and M. K. Ng, "Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems," SIAM Journal on Matrix Analysis and Applications, vol. 24, no. 3, pp. 603–626, 2003.
[28] Z.-Z. Bai, "Splitting iteration methods for non-Hermitian positive definite systems of linear equations," Hokkaido Mathematical Journal, vol. 36, no. 4, pp. 801–814, 2007.
[29] Z.-Z. Bai, G. H. Golub, L.-Z. Lu, and J.-F. Yin, "Block triangular and skew-Hermitian splitting methods for positive-definite linear systems," SIAM Journal on Scientific Computing, vol. 26, no. 3, pp. 844–863, 2005.
[30] Z.-Z. Bai, G. H. Golub, and M. K. Ng, "On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations," Numerical Linear Algebra with Applications, vol. 14, no. 4, pp. 319–335, 2007.
[31] Z.-Z. Bai, G. H. Golub, and M. K. Ng, "On inexact Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems," Linear Algebra and Its Applications, vol. 428, no. 2-3, pp. 413–440, 2008.
[32] Z.-Z. Bai, G. H. Golub, and J.-Y. Pan, "Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems," Numerische Mathematik, vol. 98, no. 1, pp. 1–32, 2004.
[33] M. Benzi, "A generalization of the Hermitian and skew-Hermitian splitting iteration," SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 2, pp. 360–374, 2009.
[34] L. Li, T.-Z. Huang, and X.-P. Liu, "Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems," Computers and Mathematics with Applications, vol. 54, no. 1, pp. 147–159, 2007.
[35] A.-L. Yang, J. An, and Y.-J. Wu, "A generalized preconditioned HSS method for non-Hermitian positive definite linear systems," Applied Mathematics and Computation, vol. 216, no. 6, pp. 1715–1722, 2010.

[36] J.-F. Yin and Q.-Y. Dou, "Generalized preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems," Journal of Computational Mathematics, vol. 30, no. 4, pp. 404–417, 2012.
[37] Z.-Z. Bai and G. H. Golub, "Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems," IMA Journal of Numerical Analysis, vol. 27, no. 1, pp. 1–23, 2007.
[38] X. Li, A.-L. Yang, and Y.-J. Wu, "Parameterized preconditioned Hermitian and skew-Hermitian splitting iteration method for saddle-point problems," International Journal of Computer Mathematics, 2013.
[39] W. Li, Y.-P. Liu, and X.-F. Peng, "The generalized HSS method for solving singular linear systems," Journal of Computational and Applied Mathematics, vol. 236, no. 9, pp. 2338–2353, 2012.
[40] O. Axelsson, Z.-Z. Bai, and S.-X. Qiu, "A class of nested iteration schemes for linear systems with a coefficient matrix with a dominant positive definite symmetric part," Numerical Algorithms, vol. 35, no. 2-4, pp. 351–372, 2004.
[41] Z.-Z. Bai, "A class of two-stage iterative methods for systems of weakly nonlinear equations," Numerical Algorithms, vol. 14, no. 4, pp. 295–319, 1997.
