Relaxation Newton Iteration for A Class of Algebraic Nonlinear Systems

ISSN 1749-3889 (print), 1749-3897 (online) International Journal of Nonlinear Science Vol.8 (2009) No. 2, pp. 243-256

Relaxation Newton Iteration for A Class of Algebraic Nonlinear Systems★ Shulin Wu∗ , Baochang Shi, Chengming Huang School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, P. R. China (Received 18 November 2008, accepted 17 January 2009)

Abstract. The relaxation Newton algorithm, introduced in [Shulin Wu, Chengming Huang, Yong Liu, Newton waveform relaxation method for solving algebraic nonlinear equations, Applied Mathematics and Computation, 201 (2008), pp. 553–560], is a method derived by combining the classical Newton's method with the waveform relaxation iteration. It has been shown that, with a special choice of the so-called splitting function, this algorithm offers global convergence, low storage requirements, and absolute stability, and can be implemented in parallel. In this paper, we investigate a class of nonlinear equations which are well suited to being solved by this algorithm. These nonlinear equations arise from the implicit discretization of nonlinear ordinary differential equations and of nonlinear reaction diffusion equations. Several examples are tested to illustrate our theoretical analysis, and the results clearly show the advantages of the algorithm in terms of iteration number and CPU time.

Keywords: relaxation Newton algorithm; waveform relaxation methods; Newton’s method; nonlinear algebraic equations; reaction diffusion equations AMS (MOS) subject classifications: 65Y05, 65Y10, 65Y20, 68Q60.

1 Introduction

Consider the following equations

f(x) = 0,   (1.1)

where f : D ⊆ ℝ^n → ℝ^n. It is well known that solving (1.1) efficiently is an important problem in many fields, such as management science, industrial and financial research, data mining, and the numerical simulation of nonlinear systems. There are numerous methods for solving (1.1); the fundamental one is the classical Newton's method and its modifications. The classical Newton's method is the iteration

f′(x_k)Δx_k = −f(x_k),  x_{k+1} = x_k + Δx_k,  k = 0, 1, . . . ,   (1.2)

where x_0 is the initial approximation of the solution x^*. It has long been known that method (1.2) converges quadratically, but only locally. Two severe drawbacks counteract the direct application of method (1.2) in practice. One is that the algorithm converges only locally, which means the initial approximation x_0 must be chosen sufficiently close to the unknown solution x^*; the other is that the Jacobian matrix f′(x)

★ This work was supported by NSF of China (No. 10671078, 60773195) and by the Program for NCET of the State Education Ministry of China.
∗ Corresponding author (Shulin Wu). E-mail addresses: wushulin [email protected] (Shulin Wu), [email protected] (Baochang Shi), chengming [email protected] (Chengming Huang).
Copyright © World Academic Press, World Academic Union. IJNS.2009.10.15/275


must be nonsingular in D, and one needs to invert f′(x) at every iteration. The latter imposes an unacceptable burden in both storage and computation time. To overcome these drawbacks, many modifications of the classical Newton's method have been investigated and many excellent results have been obtained. For example, one can treat (1.2) as a linear problem Ax = b to obtain Δx_k; many methods exist for such linear problems, such as the Jacobi, Gauss–Seidel, Conjugate Gradient [26, 27], GMRES [23], and AOR [30] iterations. There are so many prominent results in this field that we cannot recount them in detail; for a description of the state of the art, we refer the reader to the classical books [3, 19, 21] and the papers [7, 20, 25, 29, 31, 32]. In [24] the authors introduced another variant of the classical Newton's method, the relaxation Newton algorithm, for solving equations (1.1). This method is called Newton waveform relaxation in [24], but here we call it relaxation Newton, since each iteration produces a set of discrete values rather than a set of continuous functions, which is an important characteristic of waveform relaxation methods [10–13, 15–17, 28]. The key idea of the relaxation Newton algorithm is to choose a splitting function F : D × D → ℝ^n which is minimally assumed to satisfy the consistency condition

F(x, x) = f(x)   (1.3)

for any x ∈ ℝ^n. Then, with an initial guess x_0 of the unknown solution x^* at hand, we start from the previous approximation x_k and compute the next approximation x_{k+1} by solving the problem

F(x_k, x_{k+1}) = 0,  k = 0, 1, . . .   (1.4)

with some conventional method, such as the classical Newton's method, quasi-Newton methods, or the Conjugate Gradient method. In [24] and in this paper, we adopt the classical Newton's method to solve (1.4), which explains the name relaxation Newton. Combining the notion of waveform relaxation iteration with other methods for (1.4) leads to new algorithms; this is one direction for future work. The resulting algorithm, written compactly, is shown in Figure 1.1. In Figure 1.1 and hereafter,

for k = 0, 1, 2, . . .
    choose an initial approximation x̃_0 of x_{k+1};
    for m = 0, 1, . . . , M − 1
        solve F_2(x_k, x̃_m)Δx̃_m = −F(x_k, x̃_m),
        x̃_{m+1} = x̃_m + Δx̃_m,
    end
    x_{k+1} = x̃_M,
end

Figure 1.1: The relaxation Newton method

F_2(x, y) = ∂F(x, z)/∂z |_{z=y}.   (1.5)

If we set x̃_0 = x_k and M = 1 in Figure 1.1, then by the consistency condition (1.3) the iterative scheme is equivalent to

F_2(x_k, x_k)Δx_k = −f(x_k),  x_{k+1} = x_k + Δx_k,  k = 0, 1, . . . .   (1.6)

With a special choice of F, the Jacobian matrix F_2(x, x) is a diagonal or block diagonal matrix, invertible on ℝ^n, and thus the iterative method (1.6) can be carried out stably, in parallel, and with less storage than the classical Newton's method.
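As a concrete illustration (ours, not code from the paper), iteration (1.6) with a diagonal F_2 takes only a few lines of Python; the function names `f` and `diag_F2`, the toy test problem, and the tolerance are our own choices:

```python
import numpy as np

def relaxation_newton(f, diag_F2, x0, tol=1e-10, max_iter=200):
    """Iteration (1.6): x_{k+1} = x_k - F2(x_k, x_k)^{-1} f(x_k).
    Since F2(x, x) is diagonal, the linear solve reduces to a
    componentwise division, so every component updates independently."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = -f(x) / diag_F2(x)          # diagonal linear solve
        x = x + dx
        if np.linalg.norm(dx, np.inf) < tol:
            break
    return x

# Toy instance: f(x) = x + arctan(x) - 1 componentwise, so g = arctan
# with sup g' = 1 and h = -1 constant. The constant diagonal 2 = 1 + sup g'
# dominates f'(x) everywhere, which is what makes the iteration global.
root = relaxation_newton(lambda x: x + np.arctan(x) - 1.0,
                         lambda x: np.full_like(x, 2.0),
                         np.zeros(3))
```

Because the diagonal never changes sign, the iteration is well defined from any starting point, unlike the classical Newton iteration (1.2).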


In [24], an affine covariant Lipschitz condition imposed on the splitting function F(x, y) was given to guarantee the global convergence of method (1.6) for general nonlinear equations (1.1). However, the authors said nothing about which class of nonlinear equations is well suited to the relaxation Newton method; this is the topic of the present paper. We will see that such nonlinear equations play an important role in the implicit discretization of nonlinear ordinary differential equations (ODEs) and of reaction diffusion equations with nonlinear reaction term. We show in this paper that, with a special choice of the splitting function F(x, y), these nonlinear equations can be solved efficiently with much less storage and CPU time and far fewer iterations. The remainder of this paper is organized as follows. In section 2, we recall the affine covariant convergence lemma proved in [24], and then introduce the splitting function F(x, y) and the nonlinear equations discussed in this paper. In section 3, we apply the algorithm to reaction diffusion equations with nonlinear reaction term. In section 4, we test a set of examples to illustrate our theoretical analysis and the efficiency of the algorithm.

2 The nonlinear equations and convergence analysis

For the nonlinear equations (1.1), the global convergence of the relaxation Newton algorithm was proved in [24], provided the splitting function F(x, y) satisfies an affine covariant Lipschitz condition.

Lemma 2.1 Let F : D × D → ℝ^n be a continuous mapping with D ⊂ ℝ^n open and convex, satisfying the consistency condition (1.3). Suppose F(x, y) is Fréchet differentiable with respect to the second variable y and the Jacobian matrix F_2(x, y) is invertible for any x, y ∈ D. Assume that

‖F_2^{-1}(x, x)(F(y, y) − F(x, y))‖ ≤ α‖x − y‖ with α < 1,   (2.1)

and

‖F_2^{-1}(x, x)(F_2(x, x) − F_2(x, y))‖ = 0   (2.2)

for any x, y ∈ D. Then, for an arbitrary starting approximation x_0, the sequence {x_k} obtained by (1.6) is well defined, remains in the open ball B(x^*, ‖x_0 − x^*‖) ⊆ D, and converges to x^* with F(x^*, x^*) = 0 (i.e., f(x^*) = 0). Moreover, the error x_k − x^* satisfies

‖x_k − x^*‖ ≤ α^k ‖x_0 − x^*‖.   (2.3)
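A minimal scalar check of Lemma 2.1 (our own illustration, not from the paper): take f(x) = x + 0.5 sin x, whose root is x^* = 0, and the splitting F(x, y) = ay + (1 − a)x + 0.5 sin x with a = 1.5. Then F_2 ≡ a, so condition (2.2) holds trivially, and |F(y, y) − F(x, y)| ≤ (|1 − a| + 0.5)|x − y| gives (2.1) with α = 1/1.5 = 2/3. The geometric bound (2.3) can then be verified numerically:

```python
import numpy as np

a, alpha = 1.5, 2.0 / 3.0          # F2 = a; (|1 - a| + 0.5)/a = 2/3 = alpha
f = lambda x: x + 0.5 * np.sin(x)  # root x* = 0

x, x_star = 2.0, 0.0               # arbitrary start: convergence is global
errors = []
for k in range(20):
    errors.append(abs(x - x_star))
    x = x - f(x) / a               # iteration (1.6)

# Bound (2.3): |x_k - x*| <= alpha**k * |x_0 - x*| for every k.
bounds = [alpha**k * errors[0] for k in range(20)]
ok = all(e <= b + 1e-12 for e, b in zip(errors, bounds))
```

In this example the observed decay is in fact much faster than the worst-case rate α^k, since the contraction tightens as the iterates approach the root.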

Note that condition (2.2) is satisfied if the matrix F_2(x, y) is independent of the second variable y. In the present paper, we assume that the function f in (1.1) satisfies the following condition.

Condition 1 Suppose that the function f in (1.1) has the form

f(x) = x + g(x) + h(x),   (2.4)

where g(x) = (g_1(x_1), g_2(x_2), . . . , g_n(x_n))^T; moreover, the components of g(x) and h(x) satisfy

g′_{i,max} < +∞,  |h_i(x) − h_i(y)| ≤ c_i‖x − y‖_∞,  and  c_i < 1 + g′_{i,min},   (2.5)

for any x, y ∈ D and x_i ∈ ℝ, where here and in what follows

g′_{i,min} = inf_{x_i∈ℝ} dg_i(x_i)/dx_i,  g′_{i,max} = sup_{x_i∈ℝ} dg_i(x_i)/dx_i,  i = 1, 2, . . . , n.   (2.6)

Remark 1 Consider the following differential system

y′(t) + G(t, y(t)) + H(t, y(t)) = 0,  t > 0;  y(0) = y_0,   (2.7)

where G(t, y(t)) = (G_1(t, y_1(t)), . . . , G_n(t, y_n(t)))^T.

By applying the backward Euler method to (2.7) we arrive at

y_{n+1} = y_n − τG(t_{n+1}, y_{n+1}) − τH(t_{n+1}, y_{n+1}),  n = 0, 1, . . . , N,   (2.8)

where τ is the discretization step size. Clearly, equations (2.8) can be written in the general form

x + g(x) + h(x) = 0.   (2.9)

In fact, many implicit numerical methods applied to the differential system (2.7) lead to nonlinear equations of the form (2.9). Under Condition 1, we consider the splitting function F(x, y) with components

F_i(x, y) = a_i y_i + (1 − a_i)x_i + [b_i(x_i)(y_i − x_i) + 1]g_i(x_i) + h_i(x),  i = 1, 2, . . . , n,   (2.10)

where a_i, b_i(x_i) ∈ ℝ. It is clear that the splitting function F(x, y) satisfies the consistency condition F(x, x) = f(x) for any a_i and b_i(x_i), i = 1, 2, . . . , n. Moreover,

F_2(x, y) = diag(a_1 + b_1(x_1)g_1(x_1), . . . , a_n + b_n(x_n)g_n(x_n)),

which implies that condition (2.2) in Lemma 2.1 holds and that the Jacobian matrix F_2 is nonsingular when a_i and b_i(x_i) are chosen properly.

Theorem 2.1 Assume the function f in (1.1) satisfies conditions (2.4) and (2.5). Then the relaxation Newton method (1.6) with splitting function (2.10) converges globally and the Jacobian matrix F_2 is nonsingular, provided a_i, b_i(x_i) satisfy

a_i + b_i(x_i)g_i(x_i) ≥ 1 + g′_{i,max},  i = 1, 2, . . . , n.   (2.11)

Proof. It is clear that the Jacobian matrix F_2 is nonsingular, since a_i + b_i(x_i)g_i(x_i) ≥ 1 + g′_{i,max} ≥ 1 + g′_{i,min} > c_i ≥ 0. Moreover, since the Jacobian matrix F_2(x, y) is independent of the second variable y, we only need to prove that condition (2.1) in Lemma 2.1 holds for any x, y ∈ D. Routine calculation gives

‖F_2^{-1}(x, x)(F(x, y) − F(y, y))‖_∞
  = max_{1≤i≤n} |[1 − a_i − b_i(x_i)g_i(x_i) + g′_i(ξ_i)](x_i − y_i) + h_i(x) − h_i(y)| / (a_i + b_i(x_i)g_i(x_i))
  ≤ max_{1≤i≤n} [(|1 − a_i − b_i(x_i)g_i(x_i) + g′_i(ξ_i)| + c_i) / (a_i + b_i(x_i)g_i(x_i))] ‖x − y‖_∞
  = max_{1≤i≤n} {1 + (c_i − 1 − g′_i(ξ_i)) / (a_i + b_i(x_i)g_i(x_i))} ‖y − x‖_∞,

where in the first equality we used Lagrange's mean value theorem, g_i(x_i) − g_i(y_i) = g′_i(ξ_i)(x_i − y_i) with some ξ_i between x_i and y_i; in the inequality we used condition (2.5); and in the last equality we used hypothesis (2.11). Since c_i < 1 + g′_{i,min} and a_i + b_i(x_i)g_i(x_i) ≥ 1 + g′_{i,max} > 0, we have

(c_i − 1 − g′_i(ξ_i)) / (a_i + b_i(x_i)g_i(x_i)) ≤ (c_i − 1 − g′_{i,min}) / (a_i + b_i(x_i)g_i(x_i)) < 0,

so the maximum above defines an α < 1 and condition (2.1) in Lemma 2.1 holds. ■

3 Application to reaction diffusion equations

Consider the reaction diffusion equation with nonlinear reaction term R:

u_t(x, t) − νu_{xx}(x, t) + R(u(x, t), x, t) = 0,  (x, t) ∈ (0, L) × (0, T),
u(x, 0) = ψ_0(x),  u(0, t) = ψ_1(t),  u(L, t) = ψ_2(t).   (3.1)

We apply the method of lines, discretizing the diffusion term u_{xx} by the central difference formula on the M − 1 interior points x_j = jΔx, j = 1, . . . , M − 1, Δx = L/M. Then we obtain a system of M − 1 differential equations

u′_j(t) − (ν/Δx^2)[u_{j+1}(t) − 2u_j(t) + u_{j−1}(t)] + R(u_j(t), x_j, t) = 0,  t ∈ (0, T),
u_j(0) = ψ_0(x_j),  j = 1, 2, . . . , M − 1,   (3.2)

where u_j(t) = u(x_j, t), u_0(t) = ψ_1(t), u_M(t) = ψ_2(t). Define

U(t) = (u_1(t), u_2(t), . . . , u_{M−1}(t))^T,
Ũ(t) = ((ν/Δx^2)ψ_1(t), 0, . . . , 0, (ν/Δx^2)ψ_2(t))^T,
G(Δx, U) = ((2ν/Δx^2)u_1 + R(u_1, x_1, t), . . . , (2ν/Δx^2)u_{M−1} + R(u_{M−1}, x_{M−1}, t))^T,
H = −(ν/Δx^2) tridiag(1, 0, 1), the (M − 1) × (M − 1) matrix with ones on the sub- and superdiagonals and zeros elsewhere,
U_0 = (ψ_0(x_1), . . . , ψ_0(x_{M−1}))^T.

Then we have

U′(t) + G(Δx, U(t)) + HU(t) − Ũ(t) = 0,  U(0) = U_0.   (3.3)

By applying some implicit method, for example the backward Euler method, to discretize system (3.3) in time we obtain

U_{n+1} + ΔtG(Δx, U_{n+1}) + ΔtHU_{n+1} − ΔtŨ(t_{n+1}) − U_n = 0.   (3.4)


Let

g(U_{n+1}) = ΔtG(Δx, U_{n+1}),
h(U_{n+1}) = ΔtHU_{n+1} − ΔtŨ(t_{n+1}) − U_n.   (3.5)

Then equations (3.4) can be rewritten as

U_{n+1} + g(U_{n+1}) + h(U_{n+1}) = 0.   (3.6)

Clearly, the nonlinear equations (3.6) take the form (2.4). Therefore, we may apply the relaxation Newton method with the splitting function given in (2.10) to solve (3.6) from t_n to t_{n+1}. The following result guarantees the convergence of the relaxation Newton method for equations (3.6).

Theorem 3.1 Let γ(x_i, t_{n+1}) = inf_{u∈ℝ} ∂R(u, x_i, t_{n+1})/∂u. Then, for the nonlinear equations (3.4), the relaxation Newton method with the splitting function defined in (2.10) is convergent from t_n to t_{n+1}, provided Δtγ(x_i, t_{n+1}) + 1 > 0 holds for i = 1, 2, . . . , M − 1.

Proof. With the definitions (3.5), it is easy to get

g′_{i,min} = inf_{u∈ℝ} dg_i(u)/du = 2νΔt/Δx^2 + Δt inf_{u∈ℝ} ∂R(u, x_i, t_{n+1})/∂u = 2νΔt/Δx^2 + Δtγ(x_i, t_{n+1})

and

‖h(x) − h(y)‖_∞ ≤ (2νΔt/Δx^2)‖x − y‖_∞.

Thus, by applying Theorem 2.1, the relaxation Newton method applied to the nonlinear equations (3.4) is convergent if Δtγ(x_i, t_{n+1}) + 1 > 0 holds for i = 1, 2, . . . , M − 1. ■

By the analysis of section 2, the optimal Jacobian matrix F_2 of the relaxation Newton algorithm is the diagonal matrix with elements

1 + Δt(2ν/Δx^2 + sup_{u∈ℝ} ∂R(u, x_i, t_{n+1})/∂u),  i = 1, 2, . . . , M − 1.

In a practical implementation of the relaxation Newton algorithm, if we cannot compute sup_{u∈ℝ} ∂R(u, x_i, t_{n+1})/∂u accurately, or obtain a reliable upper bound for it, we may roughly replace it by φ_i + ϕ_i ∂R(u_{i,n+1}, x_i, t_{n+1})/∂u with some φ_i, ϕ_i ≥ 1, where u_{i,n+1} is the i-th component of the vector U_{n+1}. In the numerical experiments presented in the next section, we choose φ_i = ϕ_i = 1, i = 1, 2, . . . , M − 1.
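The time stepping described above can be sketched as follows (our own illustration, not the paper's experiment code), using a hypothetical Fisher-type reaction R(u) = −u(1 − u) with homogeneous Dirichlet boundary data; the diagonal Jacobian uses the practical surrogate with both weights set to 1:

```python
import numpy as np

def rd_step(U, nu, dx, dt, R, dRdu, U_tilde, tol=1e-10, max_iter=500):
    """One backward Euler step (3.4), solved by the relaxation Newton
    iteration with the diagonal Jacobian surrogate
    1 + dt*(2*nu/dx**2 + 1 + dR/du(V_i))."""
    lam = nu * dt / dx**2
    V = U.copy()                        # initial guess: previous time level
    for _ in range(max_iter):
        HV = np.zeros_like(V)           # dt*H*V: nearest-neighbour coupling
        HV[:-1] -= lam * V[1:]
        HV[1:]  -= lam * V[:-1]
        res = (1 + 2 * lam) * V + dt * R(V) + HV - dt * U_tilde - U
        V -= res / (1 + 2 * lam + dt * (1 + dRdu(V)))
        if np.max(np.abs(res)) < tol:
            break
    return V

# Hypothetical test problem: R(u) = -u*(1 - u), zero boundary values,
# 9 interior points on (0, 1), dx = 0.1, one step of size dt = 0.01.
x = np.linspace(0.0, 1.0, 11)[1:-1]
U0 = np.sin(np.pi * x)
U1 = rd_step(U0, nu=1.0, dx=0.1, dt=0.01,
             R=lambda u: -u * (1.0 - u), dRdu=lambda u: 2.0 * u - 1.0,
             U_tilde=np.zeros_like(U0))
```

Each inner iteration costs only a vector of divisions plus the residual evaluation; no tridiagonal factorization is needed, which is the storage advantage claimed above.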

4 Numerical results

In this section, we test several problems to illustrate the efficiency of the relaxation Newton algorithm in terms of iteration number and CPU time.

4.1 Relaxation Newton algorithm for nonlinear ODEs

Consider the following system of M differential equations

y′(t) + G(t, y(t)) + H(t, y(t)) = 0,  t > 0;  y(0) = 0,   (4.1)

with

H_j(t, y(t)) = cos(t + Σ_{i=1, i≠j}^{M} y_i(t))

and

G_j(t, y(t)) = e^{M+√j} arctan(e^{−(M+1)√j/M} y_j(t)),


j = 1, 2, . . . , M. Applying the implicit Euler method to (4.1) we get

y_{n+1} = y_n − τG(t_{n+1}, y_{n+1}) − τH(t_{n+1}, y_{n+1}),  n = 0, 1, . . . , N.   (4.2)

Therefore, from t_n to t_{n+1} we need to solve the following nonlinear equations

x + g(x) + h(x) = 0,   (4.3)

with

g_j(x_j) = τe^{M+√j} arctan(e^{−(M+1)√j/M} x_j)

and

h_j(x) = τcos(t_{n+1} + Σ_{i=1, i≠j}^{M} x_i) − y_{n,j},  j = 1, 2, . . . , M.

Routine calculation yields

dg_j/dx_j = τe^{M−√j/M} / (1 + (e^{−(M+1)√j/M} x_j)^2).

From this we have g′_{j,min} = 0 and g′_{j,max} = τe^{M−√j/M}. With Lagrange's mean value theorem, it is easy to get

|h_i(x) − h_i(y)| ≤ τ(M − 1)‖x − y‖_∞,   (4.4)

i = 1, . . . , M. Thus, by Theorem 2.1 we know that τ < 1/(M − 1) is sufficient to guarantee the global convergence of the relaxation Newton algorithm.

Experiment A Let M = 10. We know that, to solve the nonlinear equations (4.3), the fixed-point iteration method is a good choice provided the functions g, h satisfy the following contraction Lipschitz condition

|g_i(x_i) − g_i(y_i)| + |h_i(x) − h_i(y)| ≤ η_i‖x − y‖_∞,  η_i < 1,  i = 1, . . . , 10.

This contraction condition theoretically restricts the step size to roughly τ < 1/(e^{10−1/10} + 9) ≈ 5 × 10^{−5}, far smaller than the bound τ < 1/(M − 1) required by the relaxation Newton algorithm.
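For reference, the implicit Euler stepping of Experiment A can be sketched as follows (our own reconstruction, not the paper's experiment code); the step size τ = 0.01 exceeds the fixed-point bound above by more than two orders of magnitude, yet satisfies τ < 1/(M − 1), so the relaxation Newton iteration still converges:

```python
import numpy as np

M, tau = 10, 0.01
j = np.arange(1, M + 1)
cg = np.exp(M + np.sqrt(j))                  # outer factor of G_j
ci = np.exp(-(M + 1) / M * np.sqrt(j))       # inner scaling of arctan
a = 1.0 + tau * np.exp(M - np.sqrt(j) / M)   # a_j = 1 + g'_{j,max}, b_j = 0

def step(y_n, t_next, tol=1e-12, max_iter=100):
    """Solve (4.3) by iteration (1.6) with the diagonal Jacobian a."""
    x = y_n.copy()
    for _ in range(max_iter):
        res = (x + tau * cg * np.arctan(ci * x)
                 + tau * np.cos(t_next + (x.sum() - x)) - y_n)
        x -= res / a
        if np.max(np.abs(res)) < tol:
            break
    return x

y = np.zeros(M)
for n in range(5):                           # five implicit Euler steps
    y_prev = y
    y = step(y, tau * (n + 1))
```

Near the solution the effective contraction factor is tiny (the diagonal a_j almost cancels the stiff arctan term), so each step converges in a handful of iterations despite the huge coefficients e^{M+√j}.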

We choose mesh parameters Δx = 0.08 and Δt = 0.01 to solve (4.7)–(4.9) numerically. In figure 4.7 we plot the profiles of the solution u(x, t) computed by the finite difference method coupled with the relaxation Newton iteration, from which one can see that (4.7)–(4.9) is a really challenging problem.

Figure 4.7: Profiles of the solution of problem (4.7)–(4.9) computed by the relaxation Newton method

We compare the computational efficiency of the relaxation Newton method with that of the Newton type methods by showing the CPU time and the iteration number at every time point in the left and right panels of figure 4.8, respectively. In tables 4.3 and 4.4 we list the ratios of the average CPU time and of the average iteration number of the Newton type methods to those of the relaxation Newton method. Figure 4.8 and these two tables show clearly that the relaxation Newton algorithm has significant advantages in terms of CPU time and iteration number.

Figure 4.8: CPU time (left panel) and iterative number (right panel) of the methods at every time point

Table 4.3: Ratios of average CPU time

γ_GMRES  γ_SYMMLQ  γ_MINRES  γ_BICG  γ_BICGSTAB  γ_CGS  γ_LSQR  γ_DIRECT
  3.9      3.4       3.3      3.8      3.6        3.4    3.9      2.6

Table 4.4: Ratios of average iterative number

κ_GMRES  κ_SYMMLQ  κ_MINRES  κ_BICG  κ_BICGSTAB  κ_CGS  κ_LSQR  κ_DIRECT
  2.5      1.1       1.1      1.8      1.3        1.4    2.1      0.7

Acknowledgements The authors are grateful to the anonymous referees for their careful reading of a preliminary version of the manuscript and for their valuable suggestions and comments, which greatly improved the quality of this paper.

References

[1] M. Ablowitz, A. Zepetella: Explicit solutions of Fisher's equation for a special wave speed. Bull. Math. Biol. 41: 835–840 (1979)
[2] N. F. Britton: Reaction–diffusion equations and their applications to biology. Academic Press, New York. (1986)
[3] P. Deuflhard: Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms. Springer, Berlin. (2004)
[4] P. C. Fife: Mathematical aspects of reacting and diffusing systems, Lecture Notes in Biomathematics, vol. 28. Springer, Berlin. (1979)
[5] A. Greenbaum, M. Rozložník, Z. Strakoš: Numerical behavior of the modified Gram–Schmidt GMRES implementation. BIT. 37: 706–719 (1997)
[6] J. K. Hale, José Domingo Salazar González: Attractors of some reaction diffusion problems. SIAM J. Math. Anal. 30: 963–984 (1999)
[7] S. Hakkaev, K. Kirchev: On the well-posedness and stability of peakons for a generalized Camassa–Holm equation. International Journal of Nonlinear Science. 3: 139–148 (2006)
[8] R. W. Hockney: A fast direct solution of Poisson's equation using Fourier analysis. Journal of the ACM. 12: 95–113 (1965)
[9] A. N. Kolmogorov, I. G. Petrovskii, N. S. Piskunov: A study of the diffusion equation with increase in the quantity of matter and its application to a biological problem. Bull. Moscow State Univ. 17: 1–72 (1937)
[10] E. Lelarasmee, A. E. Ruehli, A. L. Sangiovanni-Vincentelli: The waveform relaxation method for time-domain analysis of large scale integrated circuits. IEEE Trans. Computer-Aided Design. 1: 131–145 (1982)
[11] U. Miekkala, O. Nevanlinna: Convergence of dynamic iteration methods for initial value problems. SIAM J. Sci. Statist. Comput. 8: 459–482 (1987)
[12] U. Miekkala, O. Nevanlinna: Sets of convergence and stability regions. BIT. 27: 557–584 (1987)
[13] U. Miekkala: Dynamic iteration methods applied to linear DAE systems. J. Comput. Appl. Math. 25: 131–151 (1989)
[14] J. D. Murray: Mathematical biology. Springer, New York. (1993)
[15] O. Nevanlinna: Remarks on Picard–Lindelöf iteration, Part I. BIT. 29: 328–346 (1989)
[16] O. Nevanlinna: Remarks on Picard–Lindelöf iteration, Part II. BIT. 29: 535–562 (1989)
[17] O. Nevanlinna: Linear acceleration of Picard–Lindelöf iteration. Numer. Math. 57: 147–156 (1990)
[18] C. C. Paige, M. A. Saunders: Solution of sparse indefinite systems of linear equations. SIAM J. Numer. Anal. 12: 617–629 (1975)
[19] H. O. Peitgen (Ed.): Newton's method and complex dynamical systems. Springer, Berlin. (1989)
[20] B. T. Polyak: Newton's method and its use in optimization. European Journal of Operational Research. 181: 1086–1096 (2007)
[21] W. C. Rheinboldt: Methods for Solving Systems of Nonlinear Equations. SIAM, Philadelphia. (1998)
[22] C. Rocha: Generic properties of equilibria of reaction–diffusion equations. Proc. Roy. Soc. Edinburgh Sect. A. 101: 45–55 (1985)
[23] Y. Saad, M. H. Schultz: GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Statist. Comput. 7: 856–869 (1986)
[24] S. L. Wu, C. M. Huang, Y. Liu: Newton waveform relaxation method for solving algebraic nonlinear equations. Appl. Math. Comput. 201: 553–560 (2008)
[25] A. R. Soheili, S. A. Ahmadian, J. Naghipoor: A family of predictor–corrector methods based on weight combination of quadratures for solving nonlinear equations. International Journal of Nonlinear Science. 6: 29–33 (2008)
[26] H. A. van der Vorst: Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J. Sci. Statist. Comput. 13: 631–644 (1992)
[27] H. A. van der Vorst: Iterative Krylov Methods for Large Linear Systems. Cambridge University Press. (2003)
[28] S. Vandewalle: Parallel multigrid waveform relaxation for parabolic problems. B. G. Teubner, Stuttgart. (1993)
[29] Y. Wang, L. Wang, W. Zhang: Application of the Adomian decomposition method to the fully nonlinear sine–Gordon equation. International Journal of Nonlinear Science. 2: 29–38 (2006)
[30] L. Wang: Comparison results for AOR iterative method with a new preconditioner. International Journal of Nonlinear Science. 2: 16–28 (2006)
[31] T. J. Ypma: Historical development of the Newton–Raphson method. SIAM Rev. 37: 531–551 (1995)
[32] H. Zhu, S. Wen: A class of generalized quasi-Newton algorithms with superlinear convergence. International Journal of Nonlinear Science. 2: 140–146 (2006)