Hindawi Publishing Corporation Journal of Applied Mathematics Volume 2014, Article ID 578102, 9 pages http://dx.doi.org/10.1155/2014/578102
Research Article

A Generalized HSS Iteration Method for Continuous Sylvester Equations

Xu Li,1 Yu-Jiang Wu,1 Ai-Li Yang,1 and Jin-Yun Yuan2

1 School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, Gansu, China
2 Department of Mathematics, Federal University of Paraná, Centro Politécnico, CP 19.081, 81531-980 Curitiba, PR, Brazil
Correspondence should be addressed to Xu Li; [email protected] and Yu-Jiang Wu; [email protected]

Received 20 August 2013; Accepted 13 December 2013; Published 12 January 2014

Academic Editor: Qing-Wen Wang

Copyright © 2014 Xu Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Based on the Hermitian and skew-Hermitian splitting (HSS) iteration technique, we establish a generalized HSS (GHSS) iteration method for solving large sparse continuous Sylvester equations with non-Hermitian and positive definite/semidefinite matrices. The GHSS method is essentially a four-parameter iteration which not only covers the standard HSS iteration but also enables us to optimize the iterative process. An exact parameter region of convergence for the method is strictly proved and a minimum value for the upper bound of the iterative spectrum is derived. Moreover, to reduce the computational cost, we establish an inexact variant of the GHSS (IGHSS) iteration method, whose convergence property is discussed. Numerical experiments illustrate the efficiency and robustness of the GHSS iteration method and its inexact variant.
1. Introduction
Consider the following continuous Sylvester equation:
\[
AX + XB = C, \tag{1}
\]
where $A \in \mathbb{C}^{m \times m}$, $B \in \mathbb{C}^{n \times n}$, and $C \in \mathbb{C}^{m \times n}$ are given complex matrices. Assume that

(i) $A$, $B$, and $C$ are large and sparse matrices;
(ii) at least one of $A$ and $B$ is non-Hermitian;
(iii) both $A$ and $B$ are positive semidefinite, and at least one of them is positive definite.

Since under assumptions (i)-(iii) there is no common eigenvalue between $A$ and $-B$, we obtain from [1, 2] that the continuous Sylvester equation (1) has a unique solution. Obviously, the continuous Lyapunov equation is a special case of the continuous Sylvester equation (1) with $B = A^{*}$ and $C$ Hermitian, where $A^{*}$ represents the conjugate transpose of the matrix $A$. This continuous Sylvester equation arises in several areas of applications. For more details about the practical backgrounds of this class of problems, we refer to [2-15] and the references therein.

Before giving its numerical scheme, we rewrite the continuous Sylvester equation (1) as the mathematically equivalent system of linear equations
\[
\mathcal{A}x = b, \tag{2}
\]
where $\mathcal{A} = I \otimes A + B^{T} \otimes I$; the vectors $x$ and $b$ contain the concatenated columns of the matrices $X$ and $C$, respectively, with $\otimes$ being the Kronecker product symbol and $B^{T}$ representing the transpose of the matrix $B$. However, solving this variant of the continuous Sylvester equation (1) by an iteration method is quite expensive and often ill-conditioned.

There is a large number of numerical methods for solving the continuous Sylvester equation (1). The Bartels-Stewart and the Hessenberg-Schur methods [16, 17] are direct algorithms, which can only be applied to problems of reasonably small sizes. When the matrices $A$ and $B$ become large and sparse, iterative methods are usually employed for efficiently and accurately solving the continuous Sylvester equation (1), for instance, Smith's method [18], the alternating direction implicit (ADI) method [19-22], and others [23-26]. Recently, Bai [4] established the Hermitian and skew-Hermitian splitting (HSS) iteration method for solving the continuous Sylvester equation (1), which is based on
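The equivalence between (1) and (2) is easy to check numerically. The following is a minimal sketch, assuming NumPy and SciPy are available; the small random matrices with shifted diagonals are our own illustrative choice, not the large sparse problems targeted by the paper:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
m, n = 5, 4
# Shifted random matrices so that A and B have positive definite Hermitian parts.
A = rng.standard_normal((m, m)) + m * np.eye(m)
B = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((m, n))

# Dense direct solver for AX + XB = C (Bartels-Stewart).
X = solve_sylvester(A, B, C)

# Kronecker form (2): (I (x) A + B^T (x) I) vec(X) = vec(C),
# where vec stacks the columns of X (column-major / Fortran order).
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
x = np.linalg.solve(K, C.flatten(order="F"))

assert np.allclose(x, X.flatten(order="F"))
assert np.allclose(A @ X + X @ B, C)
```

The check also makes concrete why the Kronecker route is expensive: the matrix $K$ is of order $mn$, so its storage and factorization cost grow much faster than those of methods working directly on $A$ and $B$.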
the Hermitian and skew-Hermitian splitting of the matrices $A$ and $B$. This HSS iteration method is a matrix variant of the original HSS iteration method first proposed by Bai et al. [27] for solving systems of linear equations; see [28-39] for more detailed descriptions of the HSS iteration method and its variants.

To further improve the convergence efficiency, in this paper we present a new generalized Hermitian and skew-Hermitian splitting (GHSS) method for solving the continuous Sylvester equation (1). It is a four-parameter iteration which enables the optimization of the iterative process, thereby achieving high efficiency and robustness. Similar approaches of using the parameterized acceleration technique in the algorithmic design of iterative methods can be seen in [34-39].

In the remainder of this paper, a matrix sequence $\{X^{(k)}\}_{k=0}^{\infty} \subset \mathbb{C}^{m \times n}$ is said to be convergent to a matrix $X \in \mathbb{C}^{m \times n}$ if the corresponding vector sequence $\{y^{(k)}\}_{k=0}^{\infty} \subset \mathbb{C}^{mn}$ is convergent to the corresponding vector $y \in \mathbb{C}^{mn}$, where the vectors $y^{(k)}$ and $y$ contain the concatenated columns of the matrices $X^{(k)}$ and $X$, respectively. If $\{X^{(k)}\}_{k=0}^{\infty}$ is convergent, then its convergence factor and convergence rate are defined as those of $\{y^{(k)}\}_{k=0}^{\infty}$, correspondingly. In addition, we use $\mathrm{sp}(P)$, $\|P\|_{2}$, and $\|P\|_{F}$ to denote the spectrum, the spectral norm, and the Frobenius norm of a matrix $P$, respectively. Note that $\|\cdot\|_{2}$ is also used to represent the 2-norm of a vector.

The rest of this paper is organized as follows. In Section 2, we present the GHSS method for solving the continuous Sylvester equation (1), in which we use four parameters instead of the two parameters in the HSS method [4]. An exact parameter region of convergence for the method is strictly proved and a minimum value for the upper bound of the iterative spectrum is derived in Section 3. In Section 4, an inexact variant of the GHSS (IGHSS) iteration method is presented and its convergence property is studied. Numerical examples are given to illustrate the theoretical results and the effectiveness of the GHSS method in Section 5. Finally, we draw our conclusions.
2. The GHSS Method

Here and in the sequel, we use $H(P) := \frac{1}{2}(P + P^{*})$ and $S(P) := \frac{1}{2}(P - P^{*})$ to denote the Hermitian part and the skew-Hermitian part of a square matrix $P$, respectively. Obviously, the matrix $P$ naturally possesses the Hermitian and skew-Hermitian splitting (HSS)
\[
P = H(P) + S(P); \tag{3}
\]
see [4, 27, 28]. Similar to the HSS method [4], we obtain the following splittings of $A$ and $B$:
\[
A = (\alpha_1 I + H(A)) - (\alpha_1 I - S(A)) = (\beta_1 I + S(A)) - (\beta_1 I - H(A)),
\]
\[
B = (\alpha_2 I + H(B)) - (\alpha_2 I - S(B)) = (\beta_2 I + S(B)) - (\beta_2 I - H(B)), \tag{4}
\]
where $\alpha_i$ ($i = 1, 2$) are given nonnegative constants, $\beta_i$ ($i = 1, 2$) are given positive constants, and $I$ is the identity matrix of suitable dimension. Then the continuous Sylvester equation (1) can be equivalently reformulated as
\[
(\alpha_1 I + H(A))X + X(\alpha_2 I + H(B)) = (\alpha_1 I - S(A))X + X(\alpha_2 I - S(B)) + C,
\]
\[
(\beta_1 I + S(A))X + X(\beta_2 I + S(B)) = (\beta_1 I - H(A))X + X(\beta_2 I - H(B)) + C. \tag{5}
\]
Under assumptions (i)-(iii), we observe that there is no common eigenvalue between the matrices $\alpha_1 I + H(A)$ and $-(\alpha_2 I + H(B))$, nor between the matrices $\beta_1 I + S(A)$ and $-(\beta_2 I + S(B))$, so that the above two fixed-point matrix equations have unique solutions for all given right-hand-side matrices. This leads to the following generalized Hermitian and skew-Hermitian splitting (GHSS) iteration method for solving the continuous Sylvester equation (1).

Algorithm 1 (the GHSS iteration method). Given an initial guess $X^{(0)} \in \mathbb{C}^{m \times n}$, compute $X^{(k+1)} \in \mathbb{C}^{m \times n}$ for $k = 0, 1, 2, \ldots$ using the following iteration scheme until $\{X^{(k)}\}_{k=0}^{\infty}$ satisfies the stopping criterion:
\[
(\alpha_1 I + H(A))X^{(k+1/2)} + X^{(k+1/2)}(\alpha_2 I + H(B)) = (\alpha_1 I - S(A))X^{(k)} + X^{(k)}(\alpha_2 I - S(B)) + C,
\]
\[
(\beta_1 I + S(A))X^{(k+1)} + X^{(k+1)}(\beta_2 I + S(B)) = (\beta_1 I - H(A))X^{(k+1/2)} + X^{(k+1/2)}(\beta_2 I - H(B)) + C, \tag{6}
\]
where $\alpha_i$ ($i = 1, 2$) are given nonnegative constants, $\beta_i$ ($i = 1, 2$) are given positive constants, and $I$ is the identity matrix of suitable dimension.

Remark 2. The GHSS method has the same algorithmic structure as the HSS method [4], and thus the two methods have the same computational cost in each iteration step. It is easy to see that the former reduces to the latter when $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$.

Remark 3. When $B$ is a zero matrix and $X^{(k)}$ and $C$ reduce to column vectors, the GHSS iteration method becomes the one for systems of linear equations; see [34-36]. In addition, when $B = A^{*}$ and $C$ is Hermitian, it leads to a GHSS iteration method for continuous Lyapunov equations.
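Each half-step of (6) is itself a continuous Sylvester equation with shifted Hermitian (respectively, shifted skew-Hermitian) coefficients, so for a small dense problem it can be carried out with a standard Sylvester solver. The sketch below is our own illustration of Algorithm 1 (the function names, the use of `scipy.linalg.solve_sylvester` for the inner solves, and the particular shifts are assumptions, not part of the algorithm's specification), run on a small real instance of the test matrix used later in Section 5:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def herm(P):  # Hermitian part H(P)
    return 0.5 * (P + P.conj().T)

def skew(P):  # skew-Hermitian part S(P)
    return 0.5 * (P - P.conj().T)

def ghss(A, B, C, a1, a2, b1, b2, tol=1e-6, maxit=500):
    """GHSS iteration (6) with exact inner Sylvester solves."""
    m, n = C.shape
    Im, In = np.eye(m), np.eye(n)
    HA, SA, HB, SB = herm(A), skew(A), herm(B), skew(B)
    X = np.zeros_like(C)
    for k in range(maxit):
        R = C - A @ X - X @ B
        if np.linalg.norm(R) <= tol * np.linalg.norm(C):
            return X, k
        # First half-step: (a1*I + H(A)) X + X (a2*I + H(B)) = rhs.
        rhs = (a1 * Im - SA) @ X + X @ (a2 * In - SB) + C
        X = solve_sylvester(a1 * Im + HA, a2 * In + HB, rhs)
        # Second half-step: (b1*I + S(A)) X + X (b2*I + S(B)) = rhs.
        rhs = (b1 * Im - HA) @ X + X @ (b2 * In - HB) + C
        X = solve_sylvester(b1 * Im + SA, b2 * In + SB, rhs)
    return X, maxit

# Small test problem with positive definite symmetric part.
n = 10
M = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
N = 0.5 * (np.eye(n, k=-1) - np.eye(n, k=1))
A = B = M + N + 100.0 / (n + 1) ** 2 * np.eye(n)
C = np.ones((n, n))
X, its = ghss(A, B, C, 0.5, 0.5, 0.5, 0.5)
assert np.linalg.norm(C - A @ X - X @ B) <= 1e-6 * np.linalg.norm(C)
```

With $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$, as here, the sketch performs the plain HSS iteration; distinct parameters give the genuine GHSS variant.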
3. Convergence Analysis of the GHSS Method

Let $H(A)$, $H(B)$ and $S(A)$, $S(B)$ be the Hermitian and the skew-Hermitian parts of the matrices $A$ and $B$, respectively.
Denote
\[
\lambda_{\max}^{H(A)} = \max_{\lambda_j \in \mathrm{sp}(H(A))} \{\lambda_j\}, \qquad
\lambda_{\min}^{H(A)} = \min_{\lambda_j \in \mathrm{sp}(H(A))} \{\lambda_j\},
\]
\[
\mu_{\max}^{H(B)} = \max_{\mu_j \in \mathrm{sp}(H(B))} \{\mu_j\}, \qquad
\mu_{\min}^{H(B)} = \min_{\mu_j \in \mathrm{sp}(H(B))} \{\mu_j\},
\]
\[
\eta_{\max}^{S(A)} = \max_{i\eta_j \in \mathrm{sp}(S(A))} \{|\eta_j|\}, \qquad
\eta_{\min}^{S(A)} = \min_{i\eta_j \in \mathrm{sp}(S(A))} \{|\eta_j|\},
\]
\[
\xi_{\max}^{S(B)} = \max_{i\xi_j \in \mathrm{sp}(S(B))} \{|\xi_j|\}, \qquad
\xi_{\min}^{S(B)} = \min_{i\xi_j \in \mathrm{sp}(S(B))} \{|\xi_j|\}, \tag{7}
\]
with $i = \sqrt{-1}$, and
\[
\Lambda_{\max} = \lambda_{\max}^{H(A)} + \mu_{\max}^{H(B)}, \qquad
\Lambda_{\min} = \lambda_{\min}^{H(A)} + \mu_{\min}^{H(B)},
\]
\[
\Upsilon_{\max} = \eta_{\max}^{S(A)} + \xi_{\max}^{S(B)}, \qquad
\Upsilon_{\min} = \eta_{\min}^{S(A)} + \xi_{\min}^{S(B)}. \tag{8}
\]
In addition, write $\mathcal{A} = \mathcal{H} + \mathcal{S}$, with
\[
\mathcal{H} = H(\mathcal{A}) = I \otimes H(A) + H(B)^{T} \otimes I, \qquad
\mathcal{S} = S(\mathcal{A}) = I \otimes S(A) + S(B)^{T} \otimes I. \tag{9}
\]
Obviously, $\Lambda_{\max}$, $\Upsilon_{\max}$ and $\Lambda_{\min}$, $\Upsilon_{\min}$ are the upper and the lower bounds of the eigenvalues of the matrices $\mathcal{H}$ and $\mathcal{S}$, respectively. By making use of Theorems 2.2 and 2.5 in [35], we can obtain the following convergence theorem about the GHSS iteration method for solving the continuous Sylvester equation (1).

Theorem 4. Assume that $A \in \mathbb{C}^{m \times m}$ and $B \in \mathbb{C}^{n \times n}$ are positive semidefinite matrices, and at least one of them is positive definite. Let $\alpha_i$ ($i = 1, 2$) be nonnegative constants and $\beta_i$ ($i = 1, 2$) be positive constants. Denote by
\[
M(\alpha, \beta) = (\beta I + \mathcal{S})^{-1} (\beta I - \mathcal{H}) (\alpha I + \mathcal{H})^{-1} (\alpha I - \mathcal{S}), \tag{10}
\]
\[
\alpha = \alpha_1 + \alpha_2, \qquad \beta = \beta_1 + \beta_2. \tag{11}
\]
Then the convergence factor of the GHSS iteration method (6) is given by the spectral radius $\rho(M(\alpha, \beta))$ of the matrix $M(\alpha, \beta)$, which is bounded by
\[
\sigma(\alpha, \beta) := \max_{\Lambda_{\min} \le \Lambda \le \Lambda_{\max}} \frac{|\beta - \Lambda|}{|\alpha + \Lambda|} \cdot
\max_{\Upsilon_{\min} \le \Upsilon \le \Upsilon_{\max}} \sqrt{\frac{\alpha^2 + \Upsilon^2}{\beta^2 + \Upsilon^2}}. \tag{12}
\]
And, if the parameters $\alpha$ and $\beta$ satisfy
\[
(\alpha, \beta) \in \bigcup_{\ell=1}^{4} \Omega_{\ell}, \tag{13}
\]
where
\[
\Omega_1 = \{(\alpha, \beta) \mid \alpha \le \beta < \beta^{*}(\alpha)\},
\]
\[
\Omega_2 = \{(\alpha, \beta) \mid \beta \ge \max\{\alpha, \beta^{*}(\alpha)\},\ \sigma_1(\alpha, \beta) > 0\},
\]
\[
\Omega_3 = \{(\alpha, \beta) \mid \beta^{*}(\alpha) \le \beta < \alpha\},
\]
\[
\Omega_4 = \{(\alpha, \beta) \mid \beta < \min\{\alpha, \beta^{*}(\alpha)\},\ \sigma_2(\alpha, \beta) > 0\}, \tag{14}
\]
with the functions $\sigma_1(\alpha, \beta)$, $\sigma_2(\alpha, \beta)$, and $\beta^{*}(\alpha)$ denoted by
\[
\sigma_1(\alpha, \beta) = (\beta - \alpha)(\Lambda_{\min}^2 - \Upsilon_{\max}^2) + 2\alpha\beta\Lambda_{\min} + 2\Upsilon_{\max}^2 \Lambda_{\min},
\]
\[
\sigma_2(\alpha, \beta) = (\beta - \alpha)(\Lambda_{\max}^2 - \Upsilon_{\min}^2) + 2\alpha\beta\Lambda_{\max} + 2\Upsilon_{\min}^2 \Lambda_{\max},
\]
\[
\beta^{*}(\alpha) = \frac{\alpha(\Lambda_{\max} + \Lambda_{\min}) + 2\Lambda_{\max}\Lambda_{\min}}{2\alpha + \Lambda_{\max} + \Lambda_{\min}} \in [\Lambda_{\min}, \Lambda_{\max}], \tag{15}
\]
then $\sigma(\alpha, \beta) < 1$; that is, the GHSS iteration method (6) is convergent to the exact solution $X^{*} \in \mathbb{C}^{m \times n}$ of the continuous Sylvester equation (1).

Proof. By making use of the Kronecker product, we can reformulate the GHSS iteration (6) in the following matrix-vector form:
\[
(I \otimes (\alpha_1 I + H(A)) + (\alpha_2 I + H(B))^{T} \otimes I)\, x^{(k+1/2)}
= (I \otimes (\alpha_1 I - S(A)) + (\alpha_2 I - S(B))^{T} \otimes I)\, x^{(k)} + b,
\]
\[
(I \otimes (\beta_1 I + S(A)) + (\beta_2 I + S(B))^{T} \otimes I)\, x^{(k+1)}
= (I \otimes (\beta_1 I - H(A)) + (\beta_2 I - H(B))^{T} \otimes I)\, x^{(k+1/2)} + b, \tag{16}
\]
which can be arranged equivalently as
\[
(\alpha I + \mathcal{H})\, x^{(k+1/2)} = (\alpha I - \mathcal{S})\, x^{(k)} + b, \qquad
(\beta I + \mathcal{S})\, x^{(k+1)} = (\beta I - \mathcal{H})\, x^{(k+1/2)} + b. \tag{17}
\]
Evidently, the iteration scheme (17) is the GHSS iteration method for solving the system of linear equations (2), with $\mathcal{A} = \mathcal{H} + \mathcal{S}$; see [34, 35]. After concrete operations, the GHSS iteration (17) can also be expressed as a stationary iteration as follows:
\[
x^{(k+1)} = M(\alpha, \beta)\, x^{(k)} + N(\alpha, \beta)\, b, \tag{18}
\]
where $M(\alpha, \beta)$ is the iteration matrix defined in (10), with $\mathcal{H}$, $\mathcal{S}$ and $\alpha$, $\beta$ being given in (9) and (11), respectively, and
\[
N(\alpha, \beta) = (\alpha + \beta)(\beta I + \mathcal{S})^{-1} (\alpha I + \mathcal{H})^{-1}. \tag{19}
\]
We can easily verify that $\mathcal{H}$ is a Hermitian matrix, $\mathcal{S}$ is a skew-Hermitian matrix, $\alpha$ is a nonnegative constant, and $\beta$ is a positive constant. Moreover, when either $A \in \mathbb{C}^{m \times m}$ or $B \in \mathbb{C}^{n \times n}$ is positive definite, the matrix $\mathcal{H}$ is Hermitian positive
definite. The spectral radius of the iteration matrix $M(\alpha, \beta)$ clearly satisfies
\[
\rho(M(\alpha, \beta)) = \rho\bigl((\beta I + \mathcal{S})^{-1} (\beta I - \mathcal{H}) (\alpha I + \mathcal{H})^{-1} (\alpha I - \mathcal{S})\bigr)
\]
\[
\le \|(\beta I - \mathcal{H})(\alpha I + \mathcal{H})^{-1}\|_{2}\, \|(\alpha I - \mathcal{S})(\beta I + \mathcal{S})^{-1}\|_{2}
\]
\[
= \max_{\substack{\lambda_j \in \mathrm{sp}(H(A)) \\ \mu_j \in \mathrm{sp}(H(B))}}
\frac{|\beta - \lambda_j - \mu_j|}{|\alpha + \lambda_j + \mu_j|} \cdot
\max_{\substack{i\eta_j \in \mathrm{sp}(S(A)) \\ i\xi_j \in \mathrm{sp}(S(B))}}
\sqrt{\frac{\alpha^2 + (\eta_j + \xi_j)^2}{\beta^2 + (\eta_j + \xi_j)^2}}
\]
\[
\le \max_{\Lambda_{\min} \le \Lambda \le \Lambda_{\max}} \frac{|\beta - \Lambda|}{|\alpha + \Lambda|} \cdot
\max_{\Upsilon_{\min} \le \Upsilon \le \Upsilon_{\max}} \sqrt{\frac{\alpha^2 + \Upsilon^2}{\beta^2 + \Upsilon^2}}; \tag{20}
\]
then the bound for $\rho(M(\alpha, \beta))$ is given by (12). Hence, by making use of Theorem 2.2 in [35] we know that
\[
\rho(M(\alpha, \beta)) \le \sigma(\alpha, \beta) < 1, \qquad \forall (\alpha, \beta) \in \bigcup_{\ell=1}^{4} \Omega_{\ell}. \tag{21}
\]
Therefore we obtain that if the parameters $\alpha$ and $\beta$ satisfy the condition (13), the GHSS iteration method (17) converges to the exact solution $x^{*} \in \mathbb{C}^{mn}$ of the system of linear equations (2). This directly shows that the GHSS iteration method (6) is convergent to the exact solution $X^{*} \in \mathbb{C}^{m \times n}$ of the continuous Sylvester equation (1) when $\alpha$ and $\beta$ satisfy the condition (13), with the convergence factor $\rho(M(\alpha, \beta))$ being bounded by $\sigma(\alpha, \beta)$. This completes the proof.

Remark 5. When $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$, the GHSS method reduces to the HSS method, which is unconditionally convergent due to $\alpha = \beta$.

Theorem 4 gives the convergence conditions of the GHSS iteration method for the continuous Sylvester equation (1) by analyzing the upper bound $\sigma(\alpha, \beta)$ of the spectral radius of the iteration matrix $M(\alpha, \beta)$. Since the optimal parameters $\alpha$ and $\beta$ that minimize the spectral radius $\rho(M(\alpha, \beta))$ are hardly obtained, we instead give the parameters $\alpha^{*}$ and $\beta^{*}$, which minimize the upper bound $\sigma(\alpha, \beta)$ of the spectral radius $\rho(M(\alpha, \beta))$, in the following corollary.

Corollary 6. The theoretical quasi-optimal parameters that minimize the upper bound $\sigma(\alpha, \beta)$ are given by
\[
(\alpha^{*}, \beta^{*}) \equiv \arg\min_{\alpha, \beta} \{\sigma(\alpha, \beta)\} =
\begin{cases}
(\gamma_1, \beta^{*}(\gamma_1)), & \Lambda_{\max}\Lambda_{\min} \le \Upsilon_{\min}^2, \\
(\gamma_2, \beta^{*}(\gamma_2)), & \Upsilon_{\min}^2 < \Lambda_{\max}\Lambda_{\min} < \Upsilon_{\max}^2, \\
(\gamma_3, \beta^{*}(\gamma_3)), & \Lambda_{\max}\Lambda_{\min} \ge \Upsilon_{\max}^2,
\end{cases} \tag{22}
\]
with
\[
\gamma_1 = \frac{\Upsilon_{\min}^2 - \Lambda_{\max}\Lambda_{\min}
+ \sqrt{(\Upsilon_{\min}^2 + \Lambda_{\max}^2)(\Upsilon_{\min}^2 + \Lambda_{\min}^2)}}{\Lambda_{\max} + \Lambda_{\min}},
\]
\[
\gamma_2 = \sqrt{\Lambda_{\max}\Lambda_{\min}},
\]
\[
\gamma_3 = \frac{\Upsilon_{\max}^2 - \Lambda_{\max}\Lambda_{\min}
+ \sqrt{(\Upsilon_{\max}^2 + \Lambda_{\max}^2)(\Upsilon_{\max}^2 + \Lambda_{\min}^2)}}{\Lambda_{\max} + \Lambda_{\min}}, \tag{23}
\]
and the corresponding upper bound of the convergence factor is given by
\[
\sigma(\alpha^{*}, \beta^{*}) =
\begin{cases}
\sigma(\gamma_1), & \Lambda_{\max}\Lambda_{\min} \le \Upsilon_{\min}^2, \\
\sigma(\gamma_2), & \Upsilon_{\min}^2 < \Lambda_{\max}\Lambda_{\min} < \Upsilon_{\max}^2, \\
\sigma(\gamma_3), & \Lambda_{\max}\Lambda_{\min} \ge \Upsilon_{\max}^2,
\end{cases} \tag{24}
\]
where
\[
\sigma(\gamma) := \sigma(\gamma, \beta^{*}(\gamma)) =
\begin{cases}
\dfrac{\beta^{*}(\gamma) - \Lambda_{\min}}{\gamma + \Lambda_{\min}} \cdot
\sqrt{\dfrac{\gamma^2 + \Upsilon_{\min}^2}{\beta^{*}(\gamma)^2 + \Upsilon_{\min}^2}}, & \gamma > \gamma_2, \\[2ex]
\dfrac{\beta^{*}(\gamma) - \Lambda_{\min}}{\gamma + \Lambda_{\min}} \cdot
\sqrt{\dfrac{\gamma^2 + \Upsilon_{\max}^2}{\beta^{*}(\gamma)^2 + \Upsilon_{\max}^2}}, & \gamma \le \gamma_2.
\end{cases} \tag{25}
\]

Proof. It is straightforward from Theorem 2.5 of [35].

Remark 7. We observe that $\alpha^{*} = \beta^{*} = \sqrt{\Lambda_{\max}\Lambda_{\min}}$ when $\Upsilon_{\min}^2 < \Lambda_{\max}\Lambda_{\min} < \Upsilon_{\max}^2$, which means that GHSS with the theoretical quasi-optimal parameters reduces to HSS with the theoretical quasi-optimal parameter [4] in this case. In the other cases, the GHSS iteration is superior to the HSS iteration when both of them use the theoretical quasi-optimal parameters. This phenomenon is also illustrated in the numerical results of Section 5.

Remark 8. The actual iteration parameters $\alpha_i$ and $\beta_i$ ($i = 1, 2$) can be chosen as $\alpha_i = \tilde{\alpha}_i$ and $\beta_i = \tilde{\beta}_i$ ($i = 1, 2$) such that $\tilde{\alpha}_1 + \tilde{\alpha}_2 = \alpha^{*}$ and $\tilde{\beta}_1 + \tilde{\beta}_2 = \beta^{*}$. For example, we may take $\tilde{\alpha}_1 = \tilde{\alpha}_2 = \frac{1}{2}\alpha^{*}$ and $\tilde{\beta}_1 = \tilde{\beta}_2 = \frac{1}{2}\beta^{*}$.
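The quantities in (7)-(8), the quasi-optimal parameters of Corollary 6, and the bound (12) are all directly computable for a small problem. The sketch below (helper names and the small test matrix are our own choices, assuming NumPy) also verifies $\rho(M(\alpha^{*}, \beta^{*})) \le \sigma(\alpha^{*}, \beta^{*}) < 1$ by forming the Kronecker iteration matrix (10) explicitly:

```python
import numpy as np

n = 8
M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
N = 0.5 * (np.eye(n, k=-1) - np.eye(n, k=1))
A = B = M + N + 100.0 / (n + 1) ** 2 * np.eye(n)

H = lambda P: 0.5 * (P + P.conj().T)   # Hermitian part
S = lambda P: 0.5 * (P - P.conj().T)   # skew-Hermitian part

# Spectral extrema (7)-(8); eigvalsh(-1j*S) yields the real numbers eta_j.
lam = np.linalg.eigvalsh(H(A)); mu = np.linalg.eigvalsh(H(B))
eta = np.abs(np.linalg.eigvalsh(-1j * S(A))); xi = np.abs(np.linalg.eigvalsh(-1j * S(B)))
Lmax, Lmin = lam.max() + mu.max(), lam.min() + mu.min()
Umax, Umin = eta.max() + xi.max(), eta.min() + xi.min()

def beta_star(a):   # formula (15)
    return (a * (Lmax + Lmin) + 2 * Lmax * Lmin) / (2 * a + Lmax + Lmin)

def sigma(a, b):    # bound (12); both maxima are attained at interval endpoints
    f1 = max(abs(b - Lmin) / (a + Lmin), abs(b - Lmax) / (a + Lmax))
    f2 = max(np.sqrt((a**2 + U**2) / (b**2 + U**2)) for U in (Umin, Umax))
    return f1 * f2

# Quasi-optimal alpha by the case distinction (22)-(23).
g2 = np.sqrt(Lmax * Lmin)
if Lmax * Lmin <= Umin**2:
    g = (Umin**2 - Lmax * Lmin + np.sqrt((Umin**2 + Lmax**2) * (Umin**2 + Lmin**2))) / (Lmax + Lmin)
elif Lmax * Lmin >= Umax**2:
    g = (Umax**2 - Lmax * Lmin + np.sqrt((Umax**2 + Lmax**2) * (Umax**2 + Lmin**2))) / (Lmax + Lmin)
else:
    g = g2
a_opt, b_opt = g, beta_star(g)

# Iteration matrix (10) via the Kronecker forms (9).
I = np.eye(n * n)
Hk = np.kron(np.eye(n), H(A)) + np.kron(H(B).T, np.eye(n))
Sk = np.kron(np.eye(n), S(A)) + np.kron(S(B).T, np.eye(n))
Mab = np.linalg.solve(b_opt * I + Sk, b_opt * I - Hk) @ np.linalg.solve(a_opt * I + Hk, a_opt * I - Sk)
rho = max(abs(np.linalg.eigvals(Mab)))
assert rho <= sigma(a_opt, b_opt) < 1
```

Of course, forming $M(\alpha, \beta)$ explicitly is only feasible for tiny problems; the point of Theorem 4 and Corollary 6 is precisely that the bound and the parameters require only the extreme eigenvalues of the split parts.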
4. Inexact GHSS Iteration Methods

In the process of the GHSS iteration (6), two subproblems need to be solved exactly. This is a tough task, which is costly and even impractical in actual implementations. To further improve the computational efficiency of the GHSS iteration, we develop an inexact GHSS (IGHSS) iteration, which solves the two subproblems iteratively [18-24]. We write the IGHSS iteration scheme in the following algorithm for solving the continuous Sylvester equation (1).

Algorithm 9 (the IGHSS iteration method). Given an initial guess $X^{(0)} \in \mathbb{C}^{m \times n}$, then this algorithm leads to the solution of the continuous Sylvester equation (1):
$k = 0$;
while (not convergent)
    $R^{(k)} = C - A X^{(k)} - X^{(k)} B$;
    approximately solve $(\alpha_1 I + H(A)) Z^{(k)} + Z^{(k)} (\alpha_2 I + H(B)) = R^{(k)}$ by employing an effective iteration method, such that the residual $P^{(k)} = R^{(k)} - (\alpha_1 I + H(A)) Z^{(k)} - Z^{(k)} (\alpha_2 I + H(B))$ of the iteration satisfies $\|P^{(k)}\|_{F} \le \varepsilon_k \|R^{(k)}\|_{F}$;
    $X^{(k+1/2)} = X^{(k)} + Z^{(k)}$;
    $R^{(k+1/2)} = C - A X^{(k+1/2)} - X^{(k+1/2)} B$;
    approximately solve $(\beta_1 I + S(A)) Z^{(k+1/2)} + Z^{(k+1/2)} (\beta_2 I + S(B)) = R^{(k+1/2)}$ by employing an effective iteration method, such that the residual $P^{(k+1/2)} = R^{(k+1/2)} - (\beta_1 I + S(A)) Z^{(k+1/2)} - Z^{(k+1/2)} (\beta_2 I + S(B))$ of the iteration satisfies $\|P^{(k+1/2)}\|_{F} \le \eta_k \|R^{(k+1/2)}\|_{F}$;
    $X^{(k+1)} = X^{(k+1/2)} + Z^{(k+1/2)}$;
    $k = k + 1$;
end.

Table 1: Numerical results for GHSS and HSS with the experimental optimal iteration parameters.

q       n     | GHSS: α1exp  β1exp   IT   CPU     | HSS: αexp   IT   CPU
0.01    10    |       0.01   0.83     2   0.0006  |      1.66   12   0.0038
0.01    20    |       0.01   0.90     3   0.0018  |      0.80   23   0.0148
0.01    40    |       0.01   0.77     4   0.0141  |      0.42   43   0.1018
0.01    80    |       0.01   0.06     8   0.1038  |      0.24   85   0.9943
0.01    160   |       0.01   0.03    21   1.5376  |      0.14  169   11.605
0.1     10    |       0.03   1.00     4   0.0012  |      1.70   12   0.0036
0.1     20    |       0.01   0.95     4   0.0025  |      0.84   24   0.0151
0.1     40    |       0.01   0.61     6   0.0140  |      0.48   47   0.1102
0.1     80    |       0.01   0.21    11   0.1265  |      0.26   93   1.0787
0.1     160   |       0.01   0.10    24   1.6567  |      0.16  181   12.488
1       10    |       0.67   1.00     8   0.0025  |      1.82   13   0.0039
1       20    |       0.29   0.96    13   0.0081  |      0.98   23   0.0148
1       40    |       0.29   0.59    24   0.0563  |      0.62   35   0.0821
1       80    |       0.34   0.43    41   0.4750  |      0.46   50   0.5885
1       160   |       0.31   0.33    66   4.5142  |      0.34   70   4.7983
10      10    |       6.70   2.00    11   0.0034  |      1.88   12   0.0040
10      20    |       8.80   1.90    15   0.0097  |      1.16   21   0.0135
10      40    |       4.90   1.50    20   0.0477  |      0.68   37   0.0898
10      80    |       3.60   1.30    29   0.3395  |      0.90   57   0.6780
10      160   |       2.00   1.00    40   2.7869  |      0.68   83   5.8003
100     10    |       70.5   2.58     7   0.0021  |      1.72   12   0.0036
100     20    |       49.0   2.70     8   0.0052  |      0.90   20   0.0130
100     40    |       16.0   2.45    10   0.0243  |      0.48   36   0.0885
100     80    |       6.00   1.70    15   0.1782  |      0.28   64   0.7664
100     160   |       1.70   1.05    32   2.2333  |      0.16  110   7.6504
Here, $\{\varepsilon_k\}$ and $\{\eta_k\}$ are prescribed tolerances used to control the accuracies of the inner iterations. We remark that when $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$, the IGHSS method reduces to the inexact HSS (IHSS) method [4].

The convergence properties of this two-step iteration have been carefully studied in [27, 31]. By making use of Theorem 3.1 in [27], we can demonstrate the following convergence result about the above IGHSS iteration method.

Theorem 10. Let the conditions of Theorem 4 be satisfied. If $\{X^{(k)}\}_{k=0}^{\infty} \subset \mathbb{C}^{m \times n}$ is an iteration sequence generated by the IGHSS iteration method and if $X^{*} \in \mathbb{C}^{m \times n}$ is the exact solution of the continuous Sylvester equation (1), then it holds that
\[
\|X^{(k+1)} - X^{*}\|_{W} \le \bigl(\sigma(\alpha, \beta) + \mu\theta\varepsilon_k + \theta(\rho + \mu\nu\varepsilon_k)\eta_k\bigr) \|X^{(k)} - X^{*}\|_{W}, \qquad k = 0, 1, 2, \ldots, \tag{26}
\]
where the norm $\|\cdot\|_{W}$ is defined as
\[
\|X\|_{W} = \|(\beta_1 I + S(A)) X + X (\beta_2 I + S(B))\|_{F} \tag{27}
\]
Table 2: Numerical results for GHSS and HSS with the theoretical quasi-optimal iteration parameters.

q       n     | GHSS: α1*     β1*      IT    CPU     | HSS: α*      IT    CPU
0.01    10    |       0.0001  1.5236    2    0.0007  |      2.0752  15    0.0282
0.01    20    |       0.0002  0.4705    3    0.0019  |      1.0234  27    0.0261
0.01    40    |       0.0007  0.1294    4    0.0094  |      0.5147  50    0.1343
0.01    80    |       0.0028  0.0361    8    0.0938  |      0.2593  91    1.0901
0.01    160   |       0.0066  0.0151   21    1.4448  |      0.1303  169   11.704
0.1     10    |       0.0060  1.5263    4    0.0013  |      2.0752  15    0.0060
0.1     20    |       0.0201  0.4861    6    0.0038  |      1.0234  27    0.0263
0.1     40    |       0.0555  0.1793   15    0.0353  |      0.5147  49    0.1291
0.1     80    |       0.0867  0.1151   47    0.5458  |      0.2593  93    1.0875
0.1     160   |       0.0983  0.1017  161    11.069  |      0.1303  198   13.644
1       10    |       0.5322  1.7300    8    0.0026  |      2.0752  14    0.0045
1       20    |       0.9733  1.0046   22    0.0142  |      1.0234  23    0.0342
1       40    |       0.5147  0.5147   41    0.0967  |      0.5147  41    0.1106
1       80    |       0.2593  0.2593   81    0.9405  |      0.2593  81    0.9638
1       160   |       0.1303  0.1303  170    11.867  |      0.1303  170   11.872
10      10    |       2.0752  2.0752   12    0.0039  |      2.0752  12    0.0040
10      20    |       1.0234  1.0234   23    0.0259  |      1.0234  23    0.0598
10      40    |       0.5147  0.5147   44    0.1073  |      0.5147  44    0.1473
10      80    |       0.2593  0.2593   85    1.0074  |      0.2593  85    1.0354
10      160   |       0.1303  0.1303  169    11.913  |      0.1303  169   11.928
100     10    |       72.911  2.7778    7    0.0023  |      2.0752  12    0.0039
100     20    |       26.701  2.0916    9    0.0060  |      1.0234  20    0.0823
100     40    |       8.6843  1.6894   14    0.0342  |      0.5147  36    0.1035
100     80    |       3.0610  1.2284   24    0.2858  |      0.2593  66    0.8051
100     160   |       1.2364  0.7699   44    3.0805  |      0.1303  126   8.9284
for any matrix $X \in \mathbb{C}^{m \times n}$, and the constants $\theta$, $\mu$, $\rho$, and $\nu$ are given by
\[
\theta = \|(\beta I - \mathcal{H})(\alpha I + \mathcal{H})^{-1}\|_{2}, \qquad
\mu = \|\mathcal{A}(\beta I + \mathcal{S})^{-1}\|_{2},
\]
\[
\rho = \|(\beta I + \mathcal{S})(\alpha I + \mathcal{H})^{-1}(\alpha I - \mathcal{S})(\beta I + \mathcal{S})^{-1}\|_{2}, \qquad
\nu = \|(\beta I + \mathcal{S})(\alpha I + \mathcal{H})^{-1}\|_{2}, \tag{28}
\]
with the matrices $\mathcal{H}$ and $\mathcal{S}$ being defined in (9) and the constants $\alpha$ and $\beta$ being defined in (11). In particular, when
\[
\sigma(\alpha, \beta) + \mu\theta\varepsilon_{\max} + \theta(\rho + \mu\nu\varepsilon_{\max})\eta_{\max} < 1, \tag{29}
\]
the iteration sequence $\{X^{(k)}\}_{k=0}^{\infty} \subset \mathbb{C}^{m \times n}$ converges to $X^{*} \in \mathbb{C}^{m \times n}$, where $\varepsilon_{\max} = \max_{k}\{\varepsilon_k\}$ and $\eta_{\max} = \max_{k}\{\eta_k\}$.

Proof. By making use of the Kronecker product and the notations introduced in Theorem 4, we can reformulate the above-described IGHSS iteration in the following matrix-vector form:
\[
(\alpha I + \mathcal{H})\, z^{(k)} = r^{(k)}, \qquad x^{(k+1/2)} = x^{(k)} + z^{(k)},
\]
\[
(\beta I + \mathcal{S})\, z^{(k+1/2)} = r^{(k+1/2)}, \qquad x^{(k+1)} = x^{(k+1/2)} + z^{(k+1/2)}, \tag{30}
\]
with $r^{(k)} = b - \mathcal{A}x^{(k)}$ and $r^{(k+1/2)} = b - \mathcal{A}x^{(k+1/2)}$, where $z^{(k)}$ is such that the residual
\[
p^{(k)} = r^{(k)} - (\alpha I + \mathcal{H})\, z^{(k)} \tag{31}
\]
satisfies $\|p^{(k)}\|_{2} \le \varepsilon_k \|r^{(k)}\|_{2}$, and $z^{(k+1/2)}$ is such that the residual
\[
p^{(k+1/2)} = r^{(k+1/2)} - (\beta I + \mathcal{S})\, z^{(k+1/2)} \tag{32}
\]
satisfies $\|p^{(k+1/2)}\|_{2} \le \eta_k \|r^{(k+1/2)}\|_{2}$. Evidently, the iteration scheme (30) is the inexact GHSS iteration method for solving the system of linear equations (2), with $\mathcal{A} = \mathcal{H} + \mathcal{S}$; see [34, 36]. Hence, by making use of Theorem 3.1 in [27] we can obtain the estimate
\[
|\|x^{(k+1)} - x^{*}\|| \le \bigl(\sigma(\alpha, \beta) + \mu\theta\varepsilon_k + \theta(\rho + \mu\nu\varepsilon_k)\eta_k\bigr)\, |\|x^{(k)} - x^{*}\||, \qquad k = 0, 1, 2, \ldots, \tag{33}
\]
where the norm $|\|\cdot\||$ is defined as follows: for a vector $y \in \mathbb{C}^{mn}$, $|\|y\|| = \|(\beta I + \mathcal{S})y\|_{2}$; and for a matrix $Z \in \mathbb{C}^{mn \times mn}$,
$|\|Z\|| = \|(\beta I + \mathcal{S}) Z (\beta I + \mathcal{S})^{-1}\|_{2}$ is the correspondingly induced matrix norm. Note that
\[
|\|y\|| = \|(\beta I + \mathcal{S})\, y\|_{2} = \|(\beta_1 I + S(A)) X + X (\beta_2 I + S(B))\|_{F} = \|X\|_{W}. \tag{34}
\]
Hence, we can equivalently rewrite the estimate (33) as
\[
\|X^{(k+1)} - X^{*}\|_{W} \le \bigl(\sigma(\alpha, \beta) + \mu\theta\varepsilon_k + \theta(\rho + \mu\nu\varepsilon_k)\eta_k\bigr) \|X^{(k)} - X^{*}\|_{W}, \qquad k = 0, 1, 2, \ldots. \tag{35}
\]
This proves the theorem.

We remark that Theorem 10 gives the choices of the tolerances $\{\varepsilon_k\}$ and $\{\eta_k\}$ for convergence. In general, Theorem 10 shows that in order to guarantee the convergence of the IGHSS iteration, it is not necessary for $\{\varepsilon_k\}$ and $\{\eta_k\}$ to approach zero as $k$ increases. All we need is that the condition (29) be satisfied. However, the theoretical optimal tolerances $\{\varepsilon_k\}$ and $\{\eta_k\}$ are difficult to analyze.

Table 3: Numerical results for IGHSS and IHSS.

q       n     | IGHSS: IT   CPU     | IHSS: IT   CPU
0.01    10    |         2   0.0006  |       11   0.0031
0.01    20    |         3   0.0015  |       21   0.0106
0.01    40    |         4   0.0120  |       38   0.0818
0.01    80    |         6   0.1005  |       75   0.8043
0.01    160   |        18   0.9076  |      130   6.9022
0.1     10    |         4   0.0010  |       11   0.0032
0.1     20    |         4   0.0019  |       21   0.0105
0.1     40    |         5   0.0090  |       41   0.0905
0.1     80    |        10   0.1065  |       79   0.9082
0.1     160   |        20   0.8568  |      156   7.0812
1       10    |         7   0.0021  |       12   0.0032
1       20    |        12   0.0075  |       20   0.0120
1       40    |        21   0.0462  |       31   0.0691
1       80    |        35   0.4036  |       45   0.5082
1       160   |        52   2.8140  |       60   3.5928
10      10    |        10   0.0029  |       11   0.0035
10      20    |        13   0.0081  |       20   0.0120
10      40    |        17   0.0401  |       34   0.0806
10      80    |        24   0.3059  |       50   0.5889
10      160   |        30   1.5861  |       72   4.5002
100     10    |         6   0.0019  |       11   0.0032
100     20    |         7   0.0045  |       18   0.0111
100     40    |         8   0.0192  |       32   0.0832
100     80    |        13   0.1579  |       55   0.6968
100     160   |        27   1.1335  |       98   5.8509

5. Numerical Results

In this section, we perform numerical tests to exhibit the superiority of GHSS and IGHSS over HSS and IHSS when
they are used as solvers for the continuous Sylvester equation (1), in terms of iteration numbers (denoted as IT) and CPU times (in seconds, denoted as CPU). In our implementations, the initial guess is chosen to be the zero matrix, and the iteration is terminated once the current iterate $X^{(k)}$ satisfies
\[
\frac{\|C - A X^{(k)} - X^{(k)} B\|_{F}}{\|C\|_{F}} \le 10^{-6}. \tag{36}
\]
In addition, all subproblems involved in each step of the HSS and GHSS iteration methods are solved exactly by the method in [16]. In the IGHSS and IHSS iteration methods, we set $\varepsilon_k = \eta_k = 0.01$, $k = 0, 1, 2, \ldots$, and use Smith's method [18] as the inner iteration scheme.

We consider the continuous Sylvester equation (1) with $m = n$ and the matrices
\[
A = B = M + qN + \frac{100}{(n+1)^2} I, \tag{37}
\]
where $M, N \in \mathbb{R}^{n \times n}$ are the tridiagonal matrices given by
\[
M = \mathrm{tridiag}(-1, 2, -1), \qquad N = \mathrm{tridiag}(0.5, 0, -0.5); \tag{38}
\]
see also [4-6, 27, 40, 41].
From [4] we know that the HSS iteration method considerably outperforms the SOR iteration method in both iteration steps and CPU time, so here we solve this continuous Sylvester equation only by the GHSS and the HSS iteration methods and their inexact variants.

In Table 1, numerical results for GHSS and HSS with the experimental optimal iteration parameters are listed, where $\alpha_1^{\mathrm{exp}}$ (with $\alpha_2^{\mathrm{exp}} = \alpha_1^{\mathrm{exp}}$), $\beta_1^{\mathrm{exp}}$ (with $\beta_2^{\mathrm{exp}} = \beta_1^{\mathrm{exp}}$), and $\alpha^{\mathrm{exp}}$ (with $\beta^{\mathrm{exp}} = \alpha^{\mathrm{exp}}$) represent the experimentally found optimal values of the iteration parameters used for the GHSS and the HSS iterations, respectively. In Table 2, numerical results for GHSS and HSS with the theoretical quasi-optimal iteration parameters are listed, where $\alpha_1^{*}$ (with $\alpha_2^{*} = \alpha_1^{*}$), $\beta_1^{*}$ (with $\beta_2^{*} = \beta_1^{*}$), and $\alpha^{*}$ (with $\beta^{*} = \alpha^{*}$) represent the theoretical quasi-optimal iteration parameters used for the GHSS and the HSS iterations, respectively. In Table 3, numerical results for IGHSS and IHSS are listed; for convenience we adopt the iteration parameters of Table 1, which need not be the experimental optimal parameters for the inexact variants.

From Tables 1-3 we observe that the GHSS and IGHSS methods perform better than the HSS and IHSS methods in terms of both iteration numbers and CPU times. Therefore, the GHSS and IGHSS methods proposed in this work are two powerful and attractive iterative approaches for solving large sparse continuous Sylvester equations.
6. Conclusions

As a strategy for accelerating the convergence of iterations for solving a broad class of continuous Sylvester equations, we have proposed a four-parameter generalized HSS (GHSS) method. It is a generalization of the classical HSS method [4], which is recovered as the special case $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$. We have demonstrated that the iteration sequence produced by the GHSS method converges to the unique solution of the continuous Sylvester equation when the parameters satisfy some moderate conditions, and we have given a quasi-optimal upper bound for the iterative spectral radius. Moreover, to reduce the computational cost, an inexact variant of the GHSS (IGHSS) iteration method has been developed and its convergence property analyzed. Numerical results show that the new GHSS method and its inexact variant are typically more efficient and flexible than the HSS and IHSS methods.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by the National Basic Research (973) Program of China under Grant no. 2011CB706903, the National Natural Science Foundation of China under Grant no. 11271173, the Mathematical Tianyuan
Foundation of China under Grant no. 11026064, and the CAPES and CNPq in Brazil.
References

[1] P. Lancaster, "Explicit solutions of linear matrix equations," SIAM Review, vol. 12, no. 4, pp. 544-566, 1970.
[2] P. Lancaster and M. Tismenetsky, The Theory of Matrices, Computer Science and Applied Mathematics, Academic Press, Orlando, Fla, USA, 2nd edition, 1985.
[3] Q.-W. Wang and Z.-H. He, "Solvability conditions and general solution for mixed Sylvester equations," Automatica, vol. 49, no. 9, pp. 2713-2719, 2013.
[4] Z.-Z. Bai, "On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations," Journal of Computational Mathematics, vol. 29, no. 2, pp. 185-198, 2011.
[5] Z.-Z. Bai, Y.-M. Huang, and M. K. Ng, "On preconditioned iterative methods for Burgers equations," SIAM Journal on Scientific Computing, vol. 29, no. 1, pp. 415-439, 2007.
[6] Z.-Z. Bai and M. K. Ng, "Preconditioners for nonsymmetric block Toeplitz-like-plus-diagonal linear systems," Numerische Mathematik, vol. 96, no. 2, pp. 197-220, 2003.
[7] P. Lancaster and L. Rodman, Algebraic Riccati Equations, Oxford Science Publications, The Clarendon Press, New York, NY, USA, 1995.
[8] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[9] A. Halanay and V. Răsvan, Applications of Liapunov Methods in Stability, vol. 245 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1993.
[10] A. N. P. Liao, Z.-Z. Bai, and Y. Lei, "Best approximate solution of matrix equation AXB + CYD = E," SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 3, pp. 675-688, 2005.
[11] Z.-Z. Bai, X.-X. Guo, and S.-F. Xu, "Alternately linearized implicit iteration methods for the minimal nonnegative solutions of the nonsymmetric algebraic Riccati equations," Numerical Linear Algebra with Applications, vol. 13, no. 8, pp. 655-674, 2006.
[12] X.-X. Guo and Z.-Z. Bai, "On the minimal nonnegative solution of nonsymmetric algebraic Riccati equation," Journal of Computational Mathematics, vol. 23, no. 3, pp. 305-320, 2005.
[13] G.-P. Xu, M.-S. Wei, and D.-S. Zheng, "On solutions of matrix equation AXB + CYD = F," Linear Algebra and Its Applications, vol. 279, no. 1-3, pp. 93-109, 1998.
[14] J.-T. Zhou, R.-R. Wang, and Q. Niu, "A preconditioned iteration method for solving Sylvester equations," Journal of Applied Mathematics, vol. 2012, Article ID 401059, 12 pages, 2012.
[15] F. Yin and G.-X. Huang, "An iterative algorithm for the generalized reflexive solutions of the generalized coupled Sylvester matrix equations," Journal of Applied Mathematics, vol. 2012, Article ID 152805, 28 pages, 2012.
[16] R. H. Bartels and G. W. Stewart, "Solution of the matrix equation AX + XB = C [F4]," Communications of the ACM, vol. 15, no. 9, pp. 820-826, 1972.
[17] G. H. Golub, S. Nash, and C. Van Loan, "A Hessenberg-Schur method for the problem AX + XB = C," IEEE Transactions on Automatic Control, vol. 24, no. 6, pp. 909-913, 1979.
[18] R. A. Smith, "Matrix equation XA + BX = C," SIAM Journal on Applied Mathematics, vol. 16, no. 1, pp. 198-201, 1968.
[19] D. Calvetti and L. Reichel, "Application of ADI iterative methods to the restoration of noisy images," SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 1, pp. 165-186, 1996.
[20] D. Y. Hu and L. Reichel, "Krylov-subspace methods for the Sylvester equation," Linear Algebra and Its Applications, vol. 172, pp. 283-313, 1992.
[21] N. Levenberg and L. Reichel, "A generalized ADI iterative method," Numerische Mathematik, vol. 66, no. 1, pp. 215-233, 1993.
[22] E. L. Wachspress, "Iterative solution of the Lyapunov matrix equation," Applied Mathematics Letters, vol. 1, no. 1, pp. 87-90, 1988.
[23] G. Starke and W. Niethammer, "SOR for AX - XB = C," Linear Algebra and Its Applications, vol. 154-156, pp. 355-375, 1991.
[24] D. J. Evans and E. Galligani, "A parallel additive preconditioner for conjugate gradient method for AX + XB = C," Parallel Computing, vol. 20, no. 7, pp. 1055-1064, 1994.
[25] L. Jódar, "An algorithm for solving generalized algebraic Lyapunov equations in Hilbert space, applications to boundary value problems," Proceedings of the Edinburgh Mathematical Society: Series 2, vol. 31, no. 1, pp. 99-105, 1988.
[26] C.-Q. Gu and H.-Y. Xue, "A shift-splitting hierarchical identification method for solving Lyapunov matrix equations," Linear Algebra and Its Applications, vol. 430, no. 5-6, pp. 1517-1530, 2009.
[27] Z.-Z. Bai, G. H. Golub, and M. K. Ng, "Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems," SIAM Journal on Matrix Analysis and Applications, vol. 24, no. 3, pp. 603-626, 2003.
[28] Z.-Z. Bai, "Splitting iteration methods for non-Hermitian positive definite systems of linear equations," Hokkaido Mathematical Journal, vol. 36, no. 4, pp. 801-814, 2007.
[29] Z.-Z. Bai, G. H. Golub, L.-Z. Lu, and J.-F. Yin, "Block triangular and skew-Hermitian splitting methods for positive-definite linear systems," SIAM Journal on Scientific Computing, vol. 26, no. 3, pp. 844-863, 2005.
[30] Z.-Z. Bai, G. H. Golub, and M. K. Ng, "On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations," Numerical Linear Algebra with Applications, vol. 14, no. 4, pp. 319-335, 2007.
[31] Z.-Z. Bai, G. H. Golub, and M. K. Ng, "On inexact Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems," Linear Algebra and Its Applications, vol. 428, no. 2-3, pp. 413-440, 2008.
[32] Z.-Z. Bai, G. H. Golub, and J.-Y. Pan, "Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems," Numerische Mathematik, vol. 98, no. 1, pp. 1-32, 2004.
[33] M. Benzi, "A generalization of the Hermitian and skew-Hermitian splitting iteration," SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 2, pp. 360-374, 2009.
[34] L. Li, T.-Z. Huang, and X.-P. Liu, "Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems," Computers and Mathematics with Applications, vol. 54, no. 1, pp. 147-159, 2007.
[35] A.-L. Yang, J. An, and Y.-J. Wu, "A generalized preconditioned HSS method for non-Hermitian positive definite linear systems," Applied Mathematics and Computation, vol. 216, no. 6, pp. 1715-1722, 2010.
[36] J.-F. Yin and Q.-Y. Dou, "Generalized preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems," Journal of Computational Mathematics, vol. 30, no. 4, pp. 404-417, 2012.
[37] Z.-Z. Bai and G. H. Golub, "Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems," IMA Journal of Numerical Analysis, vol. 27, no. 1, pp. 1-23, 2007.
[38] X. Li, A.-L. Yang, and Y.-J. Wu, "Parameterized preconditioned Hermitian and skew-Hermitian splitting iteration method for saddle-point problems," International Journal of Computer Mathematics, 2013.
[39] W. Li, Y.-P. Liu, and X.-F. Peng, "The generalized HSS method for solving singular linear systems," Journal of Computational and Applied Mathematics, vol. 236, no. 9, pp. 2338-2353, 2012.
[40] O. Axelsson, Z.-Z. Bai, and S.-X. Qiu, "A class of nested iteration schemes for linear systems with a coefficient matrix with a dominant positive definite symmetric part," Numerical Algorithms, vol. 35, no. 2-4, pp. 351-372, 2004.
[41] Z.-Z. Bai, "A class of two-stage iterative methods for systems of weakly nonlinear equations," Numerical Algorithms, vol. 14, no. 4, pp. 295-319, 1997.