Richardson extrapolation and Romberg integration

Maria Cameron

1. Richardson Extrapolation and Romberg Integration

Romberg integration is Richardson extrapolation applied to integration.

1.1. Richardson extrapolation.

Example. As we know,
\[
\int_a^b f(x)\,dx = T_h(f) + E_h^T(f) = \frac{1}{2}h[f(a) + f(b)] + h \sum_{j=1}^{n-1} f_j - \frac{1}{12}(b - a) h^2 f''(\tau),
\]
i.e., the error is proportional to h^2. Thus, using steps h and 2h and applying the Euler-Maclaurin summation formula, we have
\[
T_h(f) = I(f) + k_2 h^2 + k_4 h^4 + \dots,
\]
\[
T_{2h}(f) = I(f) + k_2 (2h)^2 + k_4 (2h)^4 + \dots.
\]
Multiplying the first equation by 4 and subtracting the second one from it, we get
\[
4 T_h(f) - T_{2h}(f) = 3 I(f) + \tilde{k}_4 h^4 + \dots.
\]
Therefore,
\[
I(f) = \frac{1}{3}\left[4 T_h(f) - T_{2h}(f)\right] + \hat{k}_4 h^4 + \dots.
\]
Thus, we have obtained an integration rule with error proportional to h^4 out of one with error proportional to h^2. This example of Romberg integration illustrates the idea of Richardson extrapolation, which is the following. Suppose we would like to estimate a certain quantity F(h), e.g.,
\[
f'(x) \approx F(h) := \frac{f(x+h) - f(x-h)}{2h},
\]
\[
f''(x) \approx F(h) := \frac{f(x+h) - 2f(x) + f(x-h)}{h^2},
\]
\[
\int_a^b f(x)\,dx \approx F(h) := T_h(f) = \frac{1}{2}h[f(a) + f(b)] + h \sum_{j=1}^{n-1} f_j,
\]
etc.
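The one-step elimination of the h^2 error term above is easy to check numerically. The Python sketch below is mine, not from the notes; the integrand \(\int_0^1 e^x\,dx\) is chosen only for illustration. It computes the trapezoid sums T_h and T_{2h} and combines them as (4 T_h - T_{2h})/3:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid sum T_h(f) with n subintervals, h = (b - a) / n."""
    h = (b - a) / n
    interior = sum(f(a + j * h) for j in range(1, n))
    return 0.5 * h * (f(a) + f(b)) + h * interior

# Illustrative integrand: the exact value of the integral is e - 1.
exact = math.e - 1.0
T_h = trapezoid(math.exp, 0.0, 1.0, 16)   # step h = 1/16
T_2h = trapezoid(math.exp, 0.0, 1.0, 8)   # step 2h = 1/8

# One Richardson step eliminates the k2*h^2 term, leaving an O(h^4) error.
romberg = (4.0 * T_h - T_2h) / 3.0

# The extrapolated error is several orders of magnitude below the trapezoid error.
print(abs(T_h - exact), abs(romberg - exact))
```

The same pair of trapezoid sums that would otherwise give only an h^2-accurate answer yields an h^4-accurate one at negligible extra cost.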

Our estimate depends on the step size h, and the true value corresponds to the limit h → 0. In practice, we cannot set h very small. On the one hand, the effects of round-off error set a practical bound on how small h can be. On the other hand, the amount of CPU time typically increases sharply as h → 0. Often one has some knowledge of how the truncation error F(h) − F(0) behaves as h → 0. If
\[
F(h) = F(0) + a_1 h^p + O(h^r) \quad \text{as } h \to 0, \quad r > p,
\]
where a_0 := F(0) is the quantity we are trying to find and a_1 is unknown, then a_0 and a_1 can be estimated as follows. We compute F(h) and F(qh), q > 1:
\[
F(h) = a_0 + a_1 h^p + O(h^r),
\]
\[
F(qh) = a_0 + a_1 (qh)^p + O(h^r).
\]
Multiplying the first equation by q^p and subtracting the second one from it, we get
\[
(q^p - 1) a_0 = q^p F(h) - F(qh) + O(h^r).
\]
Hence
\[
a_0 = F(0) = F(h) + \frac{F(h) - F(qh)}{q^p - 1} + O(h^r).
\]
This process is called Richardson extrapolation, or the deferred approach to the limit. Suppose that the form of the expansion of F(h) in powers of h is known. Then one can, even if the values of the coefficients in the expansion are unknown, repeat the use of Richardson extrapolation in the way described below. In many numerical problems, especially in the numerical treatment of integrals and differential equations, this process is the simplest way to get results with negligible truncation error. Its application becomes especially simple when the step lengths form a geometric series

\[
h_0, \; q^{-1} h_0, \; q^{-2} h_0, \; \dots.
\]

Theorem 1. Suppose that
\[
F(h) = a_0 + a_1 h^{p_1} + a_2 h^{p_2} + a_3 h^{p_3} + \dots, \tag{1}
\]
where p_1 < p_2 < p_3 < \dots, and set
\[
F_1(h) = F(h), \qquad F_{k+1}(h) = F_k(h) + \frac{F_k(h) - F_k(qh)}{q^{p_k} - 1}. \tag{2}
\]
Then F_n(h) has an expansion of the form
\[
F_n(h) = a_0 + a_n^{(n)} h^{p_n} + a_{n+1}^{(n)} h^{p_{n+1}} + \dots. \tag{3}
\]

Proof. The proof is by induction. Eq. (3) holds for n = 1. Suppose Eq. (3) holds for n = k, i.e.,
\[
F_k(h) = a_0 + a_k^{(k)} h^{p_k} + a_{k+1}^{(k)} h^{p_{k+1}} + \dots.
\]
Then
\[
F_k(qh) = a_0 + a_k^{(k)} (qh)^{p_k} + a_{k+1}^{(k)} (qh)^{p_{k+1}} + \dots.
\]
Hence
\[
q^{p_k} F_k(h) - F_k(qh) = a_0 (q^{p_k} - 1) + b h^{p_{k+1}} + \dots.
\]
Therefore, dividing this equation by q^{p_k} - 1, we get
\[
F_{k+1}(h) := F_k(h) + \frac{F_k(h) - F_k(qh)}{q^{p_k} - 1} = a_0 + a_{k+1}^{(k+1)} h^{p_{k+1}} + \dots,
\]
which is of the desired form. □
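The recursion (2) translates directly into a small triangular tableau. Below is a minimal Python sketch (the function and variable names are mine); it assumes the even-power expansion p_k = 2k used in the examples of these notes, with q = 2 by default:

```python
def richardson_table(F0, q=2.0, p=lambda k: 2 * k):
    """Richardson extrapolation tableau for F(h) = a0 + a1*h**p(1) + a2*h**p(2) + ...

    F0[i] is F evaluated at step q**(-i) * h0, for i = 0, 1, ..., n-1.
    Column k of the returned table has the h**p(1), ..., h**p(k) error terms eliminated.
    """
    table = [list(F0)]
    for k in range(1, len(F0)):
        prev = table[-1]
        denom = q ** p(k) - 1.0
        # F_{k+1}(h) = F_k(h) + (F_k(h) - F_k(q*h)) / (q**p_k - 1)
        table.append([prev[i] + (prev[i] - prev[i - 1]) / denom
                      for i in range(1, len(prev))])
    return table  # table[-1][-1] is the most extrapolated estimate of a0

# Sanity check with F(h) = 1 + h**2 + h**4 (so a0 = 1) at steps 0.8, 0.4, 0.2:
F0 = [1 + h**2 + h**4 for h in (0.8, 0.4, 0.2)]
table = richardson_table(F0)
# After two eliminations the h^2 and h^4 terms are both gone, so the
# final entry recovers a0 = 1 up to floating-point rounding.
```

Each new column needs no new evaluations of F; it reuses the entries of the previous column, which is what makes repeated extrapolation so cheap.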


Example. Let us evaluate the derivative of f(x) = log x at x = 3. We choose the central finite difference estimate
\[
F(h) = \frac{f(x+h) - f(x-h)}{2h}.
\]
Using the Taylor expansion we get the following series expansion for F(h):
\[
F(h) = \sum_{n=0}^{\infty} \frac{f^{(2n+1)}(x)}{(2n+1)!} h^{2n} = f'(x) + \frac{f'''(x)}{6} h^2 + \frac{f^{(5)}(x)}{120} h^4 + \dots.
\]

We pick q = 2 and the steps 0.8, 0.4, and 0.2, and write (with h = 0.2)
\[
F_{00} = \frac{\log 3.8 - \log 2.2}{1.6} = a_0 + a_1 (2^2)^2 h^2 + a_2 (2^2)^4 h^4 + \dots = 0.341589816480044,
\]
\[
F_{10} = \frac{\log 3.4 - \log 2.6}{0.8} = a_0 + a_1 \, 2^2 h^2 + a_2 \, 2^4 h^4 + \dots = 0.335329983243349,
\]
\[
F_{20} = \frac{\log 3.2 - \log 2.8}{0.4} = a_0 + a_1 h^2 + a_2 h^4 + \dots = 0.333828481561307.
\]
Then we calculate, taking into account that p_1 = 2, p_2 = 4, ...:
\[
F_{11} = F_{10} + \frac{F_{10} - F_{00}}{2^{p_1} - 1} = \frac{4}{3} F_{10} - \frac{1}{3} F_{00} = 0.333243372164451,
\]
\[
F_{21} = F_{20} + \frac{F_{20} - F_{10}}{2^{p_1} - 1} = \frac{4}{3} F_{20} - \frac{1}{3} F_{10} = 0.333327981000626,
\]
\[
F_{22} = F_{21} + \frac{F_{21} - F_{11}}{2^{p_2} - 1} = \frac{16}{15} F_{21} - \frac{1}{15} F_{11} = 0.333333621589704.
\]
The absolute error of the result is 2.8e-7. For comparison, if we take h = 0.005, the error of the estimate is 3.1e-7.
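The numbers in this example are straightforward to reproduce. A short Python check (the variable names mirror the tableau entries above):

```python
import math

x, h = 3.0, 0.2   # f(x) = log(x); 0.2 is the smallest step, q = 2

def F(step):
    """Central-difference estimate of f'(x) for f = log."""
    return (math.log(x + step) - math.log(x - step)) / (2.0 * step)

F00, F10, F20 = F(4 * h), F(2 * h), F(h)   # steps 0.8, 0.4, 0.2

# First extrapolation column: p1 = 2, so divide by 2**2 - 1 = 3.
F11 = F10 + (F10 - F00) / 3.0
F21 = F20 + (F20 - F10) / 3.0

# Second column: p2 = 4, so divide by 2**4 - 1 = 15.
F22 = F21 + (F21 - F11) / 15.0

print(F22)                   # 0.333333621589704
print(abs(F22 - 1.0 / 3.0))  # absolute error about 2.8e-7
```

Note that the most widely spaced evaluation here uses h = 0.8, yet the extrapolated result matches the accuracy of a single central difference with h = 0.005, where round-off in f(x + h) - f(x - h) already starts to matter.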