Iterative Methods

ITERATION METHODS

These are methods which compute a sequence of progressively accurate iterates to approximate the solution of $Ax = b$. We need such methods for solving many large linear systems. Sometimes the matrix is too large to be stored in the computer memory, making a direct method too difficult to use. More importantly, the operations cost of $\frac{2}{3}n^3$ for Gaussian elimination is too large for most large systems. With iteration methods, the cost can often be reduced to $O(n^2)$ or less. Even when a special form for $A$ can be used to reduce the cost of elimination, iteration will often be faster. There are other, more subtle, reasons, which we do not discuss here.

JACOBI'S ITERATION METHOD

We begin with an example. Consider the linear system
$$9x_1 + x_2 + x_3 = b_1$$
$$2x_1 + 10x_2 + 3x_3 = b_2$$
$$3x_1 + 4x_2 + 11x_3 = b_3$$
In equation #$k$, solve for $x_k$:
$$x_1 = \tfrac{1}{9}\left[b_1 - x_2 - x_3\right]$$
$$x_2 = \tfrac{1}{10}\left[b_2 - 2x_1 - 3x_3\right]$$
$$x_3 = \tfrac{1}{11}\left[b_3 - 3x_1 - 4x_2\right]$$
Let $x^{(0)} = \left[x_1^{(0)}, x_2^{(0)}, x_3^{(0)}\right]^T$ be an initial guess to the solution $x$. Then define
$$x_1^{(k+1)} = \tfrac{1}{9}\left[b_1 - x_2^{(k)} - x_3^{(k)}\right]$$
$$x_2^{(k+1)} = \tfrac{1}{10}\left[b_2 - 2x_1^{(k)} - 3x_3^{(k)}\right]$$
$$x_3^{(k+1)} = \tfrac{1}{11}\left[b_3 - 3x_1^{(k)} - 4x_2^{(k)}\right]$$
for $k = 0, 1, 2, \ldots$. This is called the Jacobi iteration method or the method of simultaneous replacements.
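As an illustrative sketch (not part of the original notes; the function name is hypothetical), the three update formulas above translate directly into code, with each new iterate computed entirely from the previous one:

```python
# Jacobi ("simultaneous replacements") for the 3x3 example system:
#   9*x1 +    x2 +    x3 = b1
#   2*x1 + 10*x2 +  3*x3 = b2
#   3*x1 +  4*x2 + 11*x3 = b3
# All three components are updated from the PREVIOUS iterate only.

def jacobi_3x3(b, x0=(0.0, 0.0, 0.0), iters=10):
    x1, x2, x3 = x0
    for _ in range(iters):
        y1 = (b[0] - x2 - x3) / 9.0
        y2 = (b[1] - 2.0 * x1 - 3.0 * x3) / 10.0
        y3 = (b[2] - 3.0 * x1 - 4.0 * x2) / 11.0
        x1, x2, x3 = y1, y2, y3
    return x1, x2, x3

print(jacobi_3x3([10.0, 19.0, 0.0], iters=50))  # approaches (1, 2, -1)
```

Note the temporaries `y1, y2, y3`: overwriting `x1` before computing `y2` would accidentally turn this into Gauss-Seidel.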

NUMERICAL EXAMPLE. Let $b = [10, 19, 0]^T$. The solution is $x = [1, 2, -1]^T$. To measure the error, we use
$$\text{Error} = \left\|x - x^{(k)}\right\|_\infty = \max_i \left|x_i - x_i^{(k)}\right|$$

  k    x1(k)     x2(k)     x3(k)      Error      Ratio
  0    0         0          0         2.00E+0
  1    1.1111    1.9000     0         1.00E+0    0.500
  2    0.9000    1.6778    -0.9939    3.22E-1    0.322
  3    1.0351    2.0182    -0.8556    1.44E-1    0.448
  4    0.9819    1.9496    -1.0162    5.06E-2    0.349
  5    1.0074    2.0085    -0.9768    2.32E-2    0.462
  6    0.9965    1.9915    -1.0051    8.45E-3    0.364
  7    1.0015    2.0022    -0.9960    4.03E-3    0.477
  8    0.9993    1.9985    -1.0012    1.51E-3    0.375
  9    1.0003    2.0005    -0.9993    7.40E-4    0.489
 10    0.9999    1.9997    -1.0003    2.83E-4    0.382
 30    1.0000    2.0000    -1.0000    3.01E-11   0.447
 31    1.0000    2.0000    -1.0000    1.35E-11   0.447

GAUSS-SEIDEL ITERATION METHOD

Again consider the linear system
$$9x_1 + x_2 + x_3 = b_1$$
$$2x_1 + 10x_2 + 3x_3 = b_2$$
$$3x_1 + 4x_2 + 11x_3 = b_3$$
and solve for $x_k$ in equation #$k$:
$$x_1 = \tfrac{1}{9}\left[b_1 - x_2 - x_3\right]$$
$$x_2 = \tfrac{1}{10}\left[b_2 - 2x_1 - 3x_3\right]$$
$$x_3 = \tfrac{1}{11}\left[b_3 - 3x_1 - 4x_2\right]$$
Now immediately use every new iterate:
$$x_1^{(k+1)} = \tfrac{1}{9}\left[b_1 - x_2^{(k)} - x_3^{(k)}\right]$$
$$x_2^{(k+1)} = \tfrac{1}{10}\left[b_2 - 2x_1^{(k+1)} - 3x_3^{(k)}\right]$$
$$x_3^{(k+1)} = \tfrac{1}{11}\left[b_3 - 3x_1^{(k+1)} - 4x_2^{(k+1)}\right]$$
for $k = 0, 1, 2, \ldots$. This is called the Gauss-Seidel iteration method or the method of successive replacements.
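As a companion sketch (again hypothetical code, not from the notes), Gauss-Seidel needs no temporaries: each component is overwritten in place and used immediately in the remaining updates.

```python
# Gauss-Seidel ("successive replacements") for the same 3x3 system:
# each new component is used IMMEDIATELY in the updates that follow it.

def gauss_seidel_3x3(b, x0=(0.0, 0.0, 0.0), iters=10):
    x1, x2, x3 = x0
    for _ in range(iters):
        x1 = (b[0] - x2 - x3) / 9.0
        x2 = (b[1] - 2.0 * x1 - 3.0 * x3) / 10.0   # uses the new x1
        x3 = (b[2] - 3.0 * x1 - 4.0 * x2) / 11.0   # uses the new x1, x2
    return x1, x2, x3

print(gauss_seidel_3x3([10.0, 19.0, 0.0], iters=20))  # approaches (1, 2, -1)
```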

NUMERICAL EXAMPLE. Let $b = [10, 19, 0]^T$. The solution is $x = [1, 2, -1]^T$. To measure the error, we use
$$\text{Error} = \left\|x - x^{(k)}\right\|_\infty = \max_i \left|x_i - x_i^{(k)}\right|$$

  k    x1(k)     x2(k)     x3(k)      Error      Ratio
  0    0         0          0         2.00E+0
  1    1.1111    1.6778    -0.9131    3.22E-1    0.161
  2    1.0262    1.9687    -0.9958    3.13E-2    0.097
  3    1.0030    1.9981    -1.0001    3.00E-3    0.096
  4    1.0002    2.0000    -1.0001    2.24E-4    0.074
  5    1.0000    2.0000    -1.0000    1.65E-5    0.074
  6    1.0000    2.0000    -1.0000    2.58E-6    0.155

The values of Ratio do not approach a limiting value with larger values of the iteration index k.

A GENERAL SCHEMA

Rewrite $Ax = b$ as
$$Nx = b + Px \qquad (1)$$
with $A = N - P$ a splitting of $A$. Choose $N$ to be nonsingular. Usually we want $Nz = f$ to be easily solvable for arbitrary $f$. The iteration method is
$$Nx^{(k+1)} = b + Px^{(k)}, \qquad k = 0, 1, 2, \ldots \qquad (2)$$

EXAMPLE. Let $N$ be the diagonal of $A$, and let $P = N - A$. The iteration method is the Jacobi method:
$$a_{i,i}\, x_i^{(k+1)} = b_i - \sum_{\substack{j=1 \\ j \neq i}}^{n} a_{i,j}\, x_j^{(k)}, \qquad 1 \le i \le n$$
for $k = 0, 1, \ldots$.
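As a sketch (the function name is an illustration, not from the text), this componentwise formula extends directly to a general $n \times n$ system:

```python
# General Jacobi sketch: with N = diag(A) and P = N - A, each step solves
#   a[i][i] * x_new[i] = b[i] - sum_{j != i} a[i][j] * x_old[j].

def jacobi(a, b, iters=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # the whole new vector is built from the old one (simultaneous update)
        x = [(b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
             for i in range(n)]
    return x

# The running example converges to [1, 2, -1]:
a = [[9.0, 1.0, 1.0], [2.0, 10.0, 3.0], [3.0, 4.0, 11.0]]
print(jacobi(a, [10.0, 19.0, 0.0]))
```

The list comprehension reads only the old `x`, so the update really is simultaneous; an in-place loop over `x[i]` would instead give Gauss-Seidel.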

EXAMPLE. Let $N$ be the lower triangular part of $A$, including its diagonal, and let $P = N - A$. The iteration method is the Gauss-Seidel method:
$$\sum_{j=1}^{i} a_{i,j}\, x_j^{(k+1)} = b_i - \sum_{j=i+1}^{n} a_{i,j}\, x_j^{(k)}, \qquad 1 \le i \le n$$
for $k = 0, 1, \ldots$.

EXAMPLE. Another method could be defined by letting $N$ be the tridiagonal matrix formed from the diagonal, super-diagonal, and sub-diagonal of $A$, with $P = N - A$:
$$N = \begin{bmatrix}
a_{1,1} & a_{1,2} & 0 & \cdots & 0 \\
a_{2,1} & a_{2,2} & a_{2,3} & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & a_{n-1,n-2} & a_{n-1,n-1} & a_{n-1,n} \\
0 & \cdots & 0 & a_{n,n-1} & a_{n,n}
\end{bmatrix}$$

Solving N x(k+1) = b + P x(k) uses the algorithm for tridiagonal systems from §6.4.
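A sketch of how one such step might be organized (the helper names are illustrative, and the standard Thomas algorithm stands in for the §6.4 tridiagonal solver):

```python
# Tridiagonal-splitting iteration (sketch): N is the tridiagonal part of A,
# P = N - A, and each step solves N x_new = b + P x_old with the O(n)
# Thomas algorithm.

def thomas(lo, di, up, rhs):
    """Solve a tridiagonal system; lo/di/up are the sub-, main, and
    super-diagonals (lengths n-1, n, n-1)."""
    n = len(di)
    c, d = list(up), list(rhs)
    c[0] /= di[0]
    d[0] /= di[0]
    for i in range(1, n):                     # forward elimination
        m = di[i] - lo[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] /= m
        d[i] = (d[i] - lo[i - 1] * d[i - 1]) / m
    for i in range(n - 2, -1, -1):            # back substitution
        d[i] -= c[i] * d[i + 1]
    return d

def tridiag_split_iter(a, b, iters=50):
    n = len(b)
    lo = [a[i + 1][i] for i in range(n - 1)]
    di = [a[i][i] for i in range(n)]
    up = [a[i][i + 1] for i in range(n - 1)]
    x = [0.0] * n
    for _ in range(iters):
        # b + P x: P equals -A off the three central diagonals, zero on them
        rhs = [b[i] - sum(a[i][j] * x[j] for j in range(n) if abs(i - j) > 1)
               for i in range(n)]
        x = thomas(lo, di, up, rhs)
    return x
```

For the running $3 \times 3$ example only the corner entries $a_{1,3}$ and $a_{3,1}$ are moved into $P$, so this variant converges rapidly to $[1, 2, -1]$.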

CONVERGENCE

When does the iteration method (2) converge? Subtract (2) from (1), obtaining
$$N\left(x - x^{(k+1)}\right) = P\left(x - x^{(k)}\right)$$
$$x - x^{(k+1)} = N^{-1}P\left(x - x^{(k)}\right)$$
$$e^{(k+1)} = M e^{(k)}, \qquad M = N^{-1}P \qquad (3)$$
with $e^{(k)} \equiv x - x^{(k)}$. Return now to the matrix and vector norms of §6.5. Then
$$\left\|e^{(k+1)}\right\| \le \|M\|\, \left\|e^{(k)}\right\|, \qquad k \ge 0$$
Thus the error $e^{(k)}$ converges to zero if $\|M\| < 1$, with
$$\left\|e^{(k)}\right\| \le \|M\|^k\, \left\|e^{(0)}\right\|, \qquad k \ge 0$$
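As an illustrative check (not from the notes), the bound $\|e^{(k)}\| \le \|M\|^k \|e^{(0)}\|$ can be verified numerically for the Jacobi splitting of the running example, using the infinity norm throughout:

```python
# Numerical check of ||e^(k)|| <= ||M||^k ||e^(0)|| for the Jacobi splitting
# of the example matrix (a sketch; infinity norm throughout).

a = [[9.0, 1.0, 1.0], [2.0, 10.0, 3.0], [3.0, 4.0, 11.0]]
b = [10.0, 19.0, 0.0]
exact = [1.0, 2.0, -1.0]

# Jacobi: N = diag(A), so M = N^{-1} P has entries -a[i][j]/a[i][i], j != i.
m = [[0.0 if i == j else -a[i][j] / a[i][i] for j in range(3)]
     for i in range(3)]
norm_m = max(sum(abs(v) for v in row) for row in m)   # 7/11 = 0.636...

x = [0.0, 0.0, 0.0]
e0 = max(abs(exact[i] - x[i]) for i in range(3))
for k in range(1, 11):
    x = [(b[i] - sum(a[i][j] * x[j] for j in range(3) if j != i)) / a[i][i]
         for i in range(3)]
    err = max(abs(exact[i] - x[i]) for i in range(3))
    assert err <= norm_m ** k * e0 + 1e-12            # the bound holds
```

The observed error ratios (around 0.45 in the table) are smaller than $\|M\| = 7/11$, so the bound is satisfied with room to spare.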

EXAMPLE. For the earlier example with the Jacobi method,
$$x_1^{(k+1)} = \tfrac{1}{9}\left[b_1 - x_2^{(k)} - x_3^{(k)}\right]$$
$$x_2^{(k+1)} = \tfrac{1}{10}\left[b_2 - 2x_1^{(k)} - 3x_3^{(k)}\right]$$
$$x_3^{(k+1)} = \tfrac{1}{11}\left[b_3 - 3x_1^{(k)} - 4x_2^{(k)}\right]$$
$$M = \begin{bmatrix}
0 & -\frac{1}{9} & -\frac{1}{9} \\
-\frac{2}{10} & 0 & -\frac{3}{10} \\
-\frac{3}{11} & -\frac{4}{11} & 0
\end{bmatrix}$$
$$\|M\| = \tfrac{7}{11} \doteq 0.636$$
This is consistent with the earlier table of values, although the actual convergence rate was better than predicted by (3).

EXAMPLE. For the earlier example with the Gauss-Seidel method,
$$x_1^{(k+1)} = \tfrac{1}{9}\left[b_1 - x_2^{(k)} - x_3^{(k)}\right]$$
$$x_2^{(k+1)} = \tfrac{1}{10}\left[b_2 - 2x_1^{(k+1)} - 3x_3^{(k)}\right]$$
$$x_3^{(k+1)} = \tfrac{1}{11}\left[b_3 - 3x_1^{(k+1)} - 4x_2^{(k+1)}\right]$$
$$M = \begin{bmatrix}
9 & 0 & 0 \\
2 & 10 & 0 \\
3 & 4 & 11
\end{bmatrix}^{-1}
\begin{bmatrix}
0 & -1 & -1 \\
0 & 0 & -3 \\
0 & 0 & 0
\end{bmatrix}
= \begin{bmatrix}
0 & -\frac{1}{9} & -\frac{1}{9} \\
0 & \frac{1}{45} & -\frac{5}{18} \\
0 & \frac{1}{45} & \frac{13}{99}
\end{bmatrix}$$
$$\|M\| = 0.3$$

This too is consistent with the earlier numerical results.
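A small numerical cross-check (a sketch; the variable names are illustrative): since $N$ is lower triangular, $M = N^{-1}P$ can be formed column by column with forward substitution, and its row-sum norm recovered:

```python
# Verify the Gauss-Seidel iteration matrix above: M = N^{-1} P, computed
# column by column by forward substitution (N is lower triangular), then
# take the infinity (maximum row-sum) norm.

n_mat = [[9.0, 0.0, 0.0], [2.0, 10.0, 0.0], [3.0, 4.0, 11.0]]
p_mat = [[0.0, -1.0, -1.0], [0.0, 0.0, -3.0], [0.0, 0.0, 0.0]]

def forward_solve(lower, rhs):
    x = []
    for i in range(len(rhs)):
        s = sum(lower[i][j] * x[j] for j in range(i))
        x.append((rhs[i] - s) / lower[i][i])
    return x

cols = [forward_solve(n_mat, [p_mat[i][j] for i in range(3)])
        for j in range(3)]
m = [[cols[j][i] for j in range(3)] for i in range(3)]
norm_m = max(sum(abs(v) for v in row) for row in m)
print(norm_m)   # approximately 0.3, from the middle row 1/45 + 5/18
```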

DIAGONALLY DOMINANT MATRICES

Matrices $A$ for which
$$\left|a_{i,i}\right| > \sum_{\substack{j=1 \\ j \neq i}}^{n} \left|a_{i,j}\right|, \qquad i = 1, \ldots, n$$
are called diagonally dominant. For the Jacobi iteration method,
$$M = \begin{bmatrix}
0 & -\frac{a_{1,2}}{a_{1,1}} & \cdots & -\frac{a_{1,n}}{a_{1,1}} \\
-\frac{a_{2,1}}{a_{2,2}} & 0 & & -\frac{a_{2,n}}{a_{2,2}} \\
\vdots & & \ddots & \vdots \\
-\frac{a_{n,1}}{a_{n,n}} & \cdots & -\frac{a_{n,n-1}}{a_{n,n}} & 0
\end{bmatrix}$$
With diagonally dominant matrices $A$,
$$\|M\| = \max_{1 \le i \le n} \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{\left|a_{i,j}\right|}{\left|a_{i,i}\right|} < 1$$
so the Jacobi iteration converges for any initial guess.
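A minimal sketch of this criterion in code (the function name `jacobi_norm` is an assumption, not from the text): it computes the row-sum norm of the Jacobi iteration matrix directly from $A$, so a value below 1 certifies convergence.

```python
# Diagonal-dominance check for Jacobi (sketch):
#   ||M||_inf = max_i sum_{j != i} |a[i][j]| / |a[i][i]|,
# which is below 1 exactly when A is diagonally dominant.

def jacobi_norm(a):
    n = len(a)
    return max(sum(abs(a[i][j]) for j in range(n) if j != i) / abs(a[i][i])
               for i in range(n))

a = [[9.0, 1.0, 1.0], [2.0, 10.0, 3.0], [3.0, 4.0, 11.0]]  # running example
print(jacobi_norm(a))   # 7/11, approximately 0.636 -- Jacobi converges
```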