Study on different numerical methods for solving differential equations.

M.S. Thesis in Pure Mathematics

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Mathematics.

Submitted by:

Mahtab Uddin Exam Roll No: 2009/08 Class Roll No: 10736 Registration No: 10781 Session: 2008-2009 M. S (Final) Exam: 2009 (Held in 2010-11)

Thesis supervisor: Dr. Munshi Nazrul Islam Professor of Mathematics, University of Chittagong.

Chittagong December, 2011

Department of Mathematics, University of Chittagong, Chittagong-4331, Bangladesh

DEDICATION

This thesis is dedicated to my beloved grandfather for his cordial inspiration in my childhood.

ACKNOWLEDGEMENT

First of all I give all praise to almighty Allah for enabling me to complete this thesis work.

With great pleasure I would like to express my heartiest gratitude, cordial thanks, deepest sense of respect and appreciation to my reverend teacher and thesis supervisor Dr. Munshi Nazrul Islam, Professor, Department of Mathematics, University of Chittagong, for his indispensable guidance, sympathetic encouragement, valuable suggestions and generous help during the course of study and the progress of this thesis work.

I also acknowledge my gratefulness to Dr. Musleh Uddin Ahmed and Dr. Nil Raton Bhattacharjee, Professor and Ex-Chairman, Department of Mathematics, University of Chittagong, for their valuable suggestions and kind inspiration in carrying out this thesis work. I am indebted to my respected teacher Mr. Milon Kanti Dhar, Professor, Department of Mathematics, University of Chittagong, for his help and discussion throughout the progress of this thesis and his generous help during my university life. I express my profuse thanks to Dr. Talky Bhattacharjee, Professor, Dr. Ganesh Chandra Roy, Professor, and Mr. Forkan Uddin, Assistant Professor, Department of Mathematics, University of Chittagong, for their positive support in preparing this thesis. I would like to express my deep sense of gratitude to Dr. Abul Kalam Azad, Chairman and Professor, Department of Mathematics, University of Chittagong, and all of my honorable teachers of this department for their fruitful advice and encouragement.

Cordial thanks are also extended to all my classmates, especially to Mustafij, Khondoker, Forhad, Sumon, Thowhid, Uttam, Khorshed, Masud and Maksud. Thanks also to the office staff and seminar assistant of the Department of Mathematics, University of Chittagong, for their co-operation and assistance during the study period. A special note of appreciation goes to Moniruzzaman Khan, Major, B.M.A, Army Education Corps, and Mr. Khalilur Rahman, Lab Assistant, B.M.A, Army Education Corps, for their kind advice and inspiration during the thesis work. Finally, I am highly grateful to my immediate senior brothers Md. Shahidul Islam, Dewan Ferdous Wahid and Md. Rashedul Islam for their indispensable guidance and their academic and other support throughout my thesis work and my university life.

Chittagong December, 2011.

AUTHOR


ABSTRACT

This thesis is mainly an analytical and comparative study of various numerical methods for solving differential equations, but Chapter-6 contains two proposed numerical methods based on (i) a predictor-corrector formula for solving ordinary differential equations of first order and first degree and (ii) a finite-difference approximation formula for solving partial differential equations of elliptic type. Two types of problems are discussed in detail in this thesis, namely ordinary differential equations in Chapters 2 and 3, and partial differential equations in Chapter-4. Chapter-5 highlights boundary value problems. The chapters of this thesis are organized as follows.

Chapter-1 is an overview of differential equations and their solution by numerical methods.

Chapter-2 deals with the solution of ordinary differential equations by Taylor's series method, Picard's method of successive approximation and Euler's method. The derivation of Taylor's series method, with its truncation error and an application, is discussed here. The solution of ordinary differential equations by Picard's method of successive approximations and its application are discussed in detail. Euler's method is defined, and the simple pendulum problem is solved to demonstrate it. Error estimates and geometrical representations of Euler's method are given, and the improved Euler's method is presented in predictor-corrector form, a form discussed further in Chapter-3. The chapter also compares Taylor's series method with Picard's method of successive approximation, and the advantages and disadvantages of these three methods are described.

Chapter-3 provides a complete treatment of the predictor-corrector method. Derivations of Milne's predictor-corrector formulae and the Adams-Moulton predictor-corrector formulae, with their local truncation errors and applications, are discussed here. Solutions of ordinary differential equations by the Runge-Kutta method with error estimation are studied in this chapter, some improved extensions of the Runge-Kutta method are explained, and the general form of the Runge-Kutta method is given. The law of the rate of nuclear decay is solved by means of the standard fourth-order Runge-Kutta method and the obtained solution is compared with the exact solution, an application of numerical methods to nuclear physics. The predictor-corrector and Runge-Kutta methods are compared in detail, along with their respective advantages and disadvantages.

Chapter-4 gives a review of the solution of partial differential equations. Three types of partial differential equations, namely elliptic, parabolic and hyperbolic equations, are discussed at length together with methods for their solution. To solve elliptic equations, the methods of iteration and relaxation are discussed. The Schmidt method and the Crank-Nicolson method are discussed for solving parabolic equations. The solution of the vibrations of a stretched string is presented as a method of solving hyperbolic equations, and the solution of the vibrations of a rectangular membrane by the Rayleigh-Ritz method is given. The iterative and relaxation methods are compared, and the chapter closes with a discussion of the Rayleigh-Ritz method alongside the methods of iteration and relaxation.

Chapter-5 deals with the solution of boundary value problems in both ordinary and partial differential equations. It provides a brief discussion of the finite-difference approximation method and the shooting method with their applications. The application of Green's functions to solve boundary value problems is also discussed in detail. Moreover, the B-spline method for solving two point boundary value problems of order four is introduced in this chapter at length: derivations of the cubic B-splines are presented, and cubic B-spline solutions of the special linear fourth order boundary value problem, the general case of the boundary value problem, and the treatment of non-linear and singular problems are discussed.

Chapter-6 contains two proposed modifications of numerical methods. The first is a modification of Milne's predictor-corrector formulae for solving ordinary differential equations of first order and first degree, namely the Milne's (modified) predictor-corrector formulae; one more step-length and one more term of Newton's interpolation formula are used in deriving the predictor and corrector formulae. The second is a modified formula for solving elliptic equations by finite-difference approximation, namely the surrounding 9-point formula. This formula is obtained by combining the standard 5-point formula and the diagonal 5-point formula, and is more effective in finding the mesh points of a given domain in a certain region. The advantages of the proposed methods over the previous methods are given at the end of the chapter.

Chapter-7 provides the conclusions of this thesis. In it the better methods of each chapter are identified by comparison, the advantages and limitations of the Milne's (modified) predictor-corrector formulae and the surrounding 9-point formula are given, and recommendations for future research with a list of further work are presented.


CONTENTS

Acknowledgement
Abstract
Contents

CHAPTER-1: BASIC CONCEPTS OF DIFFERENTIAL EQUATIONS AND NUMERICAL METHODS
1.1  Introduction
1.2  Definition of differential equation
1.3  Order and degree of differential equations
1.4  Classification of differential equations-
     i.   Ordinary differential equations
     ii.  Partial differential equations
1.5  Reduction of a differential equation to a first order system
1.6  Physical examples of differential equations-
     i.   Laplace's equation
     ii.  Electrical circuit
1.7  Linearity of differential equations-
     i.   Linear differential equations
     ii.  Non-linear differential equations
1.8  Initial value problems
1.9  Boundary value problems
1.10 Numerical methods
1.11 Why are numerical methods preferable?
1.12 Contributions of numerical methods

CHAPTER-2: SOLUTION OF DIFFERENTIAL EQUATIONS OF FIRST ORDER AND FIRST DEGREE BY NUMERICAL METHODS OF EARLY STAGE
2.1  Introduction
2.2  Taylor's series method-
     i.   Derivation
     ii.  Truncation error
2.3  Application of Taylor's series method-
     i.   Approximation by Taylor's series method
     ii.  Exact result
2.4  Derivation of Picard's method of successive approximation
2.5  Application of Picard's method of successive approximation-
     i.   Approximation by Picard's method of successive approximation
     ii.  Exact result
     iii. Graphical representation
2.6  Comparison between Taylor's series method and Picard's method of successive approximation
2.7  Euler's method-
     i.   Derivation
     ii.  Truncation error
2.8  Physical application of Euler's method-
     i.   Approximation by Euler's method
     ii.  Graphical representation of the application
2.9  Modification of Euler's method-
     i.   Derivation
     ii.  Truncation error
2.10 Application of modified Euler's method-
     i.   Approximation by modified Euler's method
     ii.  Exact result

CHAPTER-3: SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS BY PREDICTOR-CORRECTOR METHOD AND RUNGE-KUTTA METHOD
3.1  Introduction
3.2  Definition of predictor-corrector method
3.3  Milne's predictor-corrector method-
     i.   Derivation of Milne's predictor formula
     ii.  Derivation of Milne's corrector formula
     iii. Local truncation error
3.4  Application of Milne's predictor-corrector method
3.5  Adams-Moulton predictor-corrector method-
     i.   Derivation of Adams-Moulton predictor formula
     ii.  Derivation of Adams-Moulton corrector formula
     iii. Local truncation error
3.6  Application of Adams-Moulton predictor-corrector method
3.7  Comments on predictor-corrector methods
3.8  Runge-Kutta method-
     i.   Derivation of Runge-Kutta formulae
     ii.  Error estimation in Runge-Kutta formulae
3.9  Physical application of Runge-Kutta method-
     i.   Approximation by Runge-Kutta method
     ii.  Exact result
3.10 Extensions of Runge-Kutta formulae
3.11 Generalized formula for Runge-Kutta method
3.12 Comparison between predictor-corrector method and Runge-Kutta method

CHAPTER-4: SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS
4.1  Introduction
4.2  Classification of partial differential equations
4.3  Finite-difference approximations to partial derivatives
4.4  Solution of elliptic equations-
     i.   Solution of Laplace's equation
     ii.  Solution of Poisson's equation
     iii. Solution by relaxation method
4.5  Application of solving elliptic equations
4.6  Solution of parabolic equations (one dimensional heat equation)-
     i.   Schmidt method
     ii.  Crank-Nicolson method
     iii. Iterative method
4.7  Application of solving parabolic equations
4.8  Solution of hyperbolic equations (wave equation)
4.9  Application of solving hyperbolic equations
4.10 Comparison between iterative method and relaxation method
4.11 The Rayleigh-Ritz method-
     i.   Introduction
     ii.  Vibration of a rectangular membrane
4.12 Comparative discussion of the Rayleigh-Ritz method with iterative method and relaxation method

CHAPTER-5: SOLUTION OF THE BOUNDARY VALUE PROBLEM WITH APPLICATIONS
5.1  Introduction
5.2  Finite-difference method
5.3  Application of finite-difference method
5.4  Shooting method
5.5  Application of shooting method
5.6  Green's function to solve boundary value problems
5.7  Application of Green's function
5.8  Cubic B-spline method for solving two point boundary value problems of order four-
     i.   Introduction
     ii.  Derivations for cubic B-spline
     iii. Solution of the special case fourth order boundary value problem
     iv.  General linear fourth order boundary value problem
     v.   Non-linear fourth order boundary value problem
     vi.  Singular fourth order boundary value problem

CHAPTER-6: TWO PROPOSED METHODS FOR SOLVING DIFFERENTIAL EQUATIONS
6.1  Introduction
6.2  Milne's (modified) predictor-corrector method-
     i.   Derivation of Milne's (modified) predictor formula
     ii.  Derivation of Milne's (modified) corrector formula
6.3  Application of Milne's (modified) predictor-corrector method-
     i.   Approximation by Milne's (modified) predictor-corrector formulae
     ii.  Exact result
     iii. Comment
6.4  Surrounding 9-point formula-
     i.   Derivation of surrounding 9-point formula
     ii.  Algorithm
6.5  Application of surrounding 9-point formula-
     i.   Approximation by surrounding 9-point formula
     ii.  Comment
6.6  Advantages of proposed methods over previous methods

CHAPTER-7: CONCLUSIONS

REFERENCES

CHAPTER-1
BASIC CONCEPTS OF DIFFERENTIAL EQUATIONS AND NUMERICAL METHODS

1.1 INTRODUCTION

Differential equations arise in many areas of science and technology, specifically whenever a deterministic relation involving some continuously varying quantities and their rates of change in space and/or time (expressed as derivatives) is known or postulated. This is illustrated in classical mechanics, where the motion of a body is described by its position and velocity as time varies. Newton's laws allow one to relate the position, velocity, acceleration and the various forces acting on a body, and to state this relation as a differential equation for the unknown position of the body as a function of time.

An example of modeling a real-world problem with differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is constant, but air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is the derivative of its velocity, depends on the velocity. Finding the velocity as a function of time involves solving a differential equation.

The study of differential equations is a wide field in pure and applied mathematics, physics, meteorology, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not be directly solvable, i.e. they do not have closed-form solutions. Instead, solutions can be approximated using numerical methods.

1.2 DEFINITION OF DIFFERENTIAL EQUATION

A differential equation is a mathematical equation for an unknown function of one or more variables that relates the values of the function itself and its derivatives of various orders. The general form of a differential equation is [5,22] as follows:

    F(x, y(x), y'(x), y''(x), ..., y^(n)(x)) = 0                        (1.2.1)

Here y(x) is an unknown function of the variable x, and y'(x), y''(x), ..., y^(n)(x) are its derivatives with respect to x.
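The falling-ball model described in the introduction leads to the first order equation dv/dt = g - (k/m)v. A minimal sketch of its numerical solution by Euler's method (treated in Chapter-2) follows; the mass m, drag constant k, step size h and final time are illustrative assumptions rather than values from this thesis:

```python
# Falling ball with linear air resistance: dv/dt = g - (k/m) * v.
# Euler's method against the exact solution v(t) = (m*g/k)*(1 - exp(-k*t/m)),
# starting from rest.
import math

g, m, k = 9.81, 1.0, 0.5     # gravity, mass, drag coefficient (illustrative)
h, t_end = 0.01, 5.0         # step size and final time (illustrative)

v = 0.0
n = int(round(t_end / h))
for _ in range(n):
    v += h * (g - (k / m) * v)   # Euler update: v_{n+1} = v_n + h*f(v_n)

v_exact = (m * g / k) * (1.0 - math.exp(-k * t_end / m))
print(v, v_exact)            # the two values agree to within a few hundredths
```

With this step size the Euler value differs from the exact velocity by roughly 0.01; halving h roughly halves the error, reflecting the first order accuracy of the method discussed in Chapter-2.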

1.3 ORDER & DEGREE OF DIFFERENTIAL EQUATIONS

Let us consider the following differential equations (the particular examples below are representative ones):

    dy/dx + y = e^x                                                     (1.3.1)
    d²y/dx² + 3(dy/dx) + 2y = 0                                         (1.3.2)
    √(1 + (dy/dx)²) = x (d²y/dx²)                                       (1.3.3)
    (d³y/dx³)² + (dy/dx)⁴ + y = 0                                       (1.3.4)

The order [22] of a differential equation is the order of the highest order derivative appearing in the equation. For example, the orders of the differential equations (1.3.1), (1.3.2), (1.3.3) and (1.3.4) are 1, 2, 2 and 3 respectively.

The degree of a differential equation is the degree of the highest order derivative involved in it, once the derivatives are free from radicals and fractions, i.e. once the differential equation is written as a polynomial in the derivatives. For example, after (1.3.3) is squared to remove the radical, the degrees of the differential equations (1.3.1), (1.3.2), (1.3.3) and (1.3.4) are 1, 1, 2 and 2 respectively.

1.4 CLASSIFICATION OF DIFFERENTIAL EQUATIONS

Depending on the number of independent variables, differential equations can be classified into two categories.

Ordinary differential equation: In mathematics, an ordinary differential equation is a relation that contains functions of only one independent variable and one or more of their derivatives with respect to that variable. Because the derivative is a rate of change, such an equation states how a function changes but does not specify the function itself. Given sufficient initial conditions, however, such as a specific function value, the function can be found by various methods, most of them based on integration. An implicit ordinary differential equation [24] of order n has the form

    F(x, y(x), y'(x), y''(x), ..., y^(n)(x)) = 0                        (1.4.1)

To distinguish the above case, an equation of the form

    y^(n)(x) = F(x, y(x), y'(x), ..., y^(n-1)(x))                       (1.4.2)

is called an explicit ordinary differential equation. A simple example of an ordinary differential equation is Newton's second law of motion, in the form

    m (d²x/dt²) = F(x(t))                                               (1.4.3)

for the motion of a particle of constant mass m.

In general, the force F depends upon the position x(t) of the particle at time t, and thus the unknown function x(t) appears on both sides of (1.4.3), as indicated in the notation F(x(t)). Ordinary differential equations arise in many different contexts, including geometry, mechanics, astronomy and population modeling.

Partial differential equation: In mathematics, partial differential equations are relations involving unknown functions of several independent variables and their partial derivatives with respect to those variables. Partial differential equations are used to formulate, and thus aid the solution of, problems involving several variables [10], such as the propagation of sound or heat, electrostatics, electrodynamics, fluid flow and electricity. Seemingly distinct physical phenomena may have identical mathematical formulations, and thus be governed by the same underlying dynamic. They find their generalization in stochastic partial differential equations. A partial differential equation for the function u(x_1, ..., x_n) is of the form

    F(x_1, ..., x_n, u, ∂u/∂x_1, ..., ∂u/∂x_n, ∂²u/∂x_1², ...) = 0      (1.4.4)

The equation is called linear when F is a linear function of u and its derivatives.

As an example of a partial differential equation, for a scalar function u(x, y, z, t) and the velocity c of the wave at any time, the wave equation in cartesian co-ordinates is as follows:

    ∂²u/∂t² = c² (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²)                          (1.4.5)
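A quick numerical check of the wave equation in one space dimension: a traveling wave u(x, t) = sin(x - ct) should satisfy u_tt = c² u_xx, and second order central differences (of the kind studied for partial derivatives in Chapter-4) confirm this at a sample point. The wave speed, sample point and step size below are illustrative assumptions:

```python
# Check numerically that u(x, t) = sin(x - c t) satisfies the wave
# equation u_tt = c^2 * u_xx, using second-order central differences.
import math

c = 2.0                       # wave speed (illustrative)
u = lambda x, t: math.sin(x - c * t)

x0, t0, h = 0.7, 0.3, 1e-4    # sample point and difference step (illustrative)
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2

print(u_tt, c**2 * u_xx)      # the two sides agree closely
```

Both sides evaluate to approximately -c² sin(x0 - c t0); the small residual comes from the O(h²) truncation error of the central differences plus rounding.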

1.5 REDUCTION OF A DIFFERENTIAL EQUATION TO A FIRST ORDER SYSTEM OF EQUATIONS

Any differential equation of order n can be written as a system of n differential equations of order one. Given an explicit ordinary differential equation of order n and dimension one,

    y^(n)(x) = F(x, y(x), y'(x), ..., y^(n-1)(x))                       (1.5.1)

define a new family of unknown functions

    y_i(x) = y^(i-1)(x),    i = 1, 2, ..., n                            (1.5.2)

The original differential equation can then be re-written as a system of differential equations of order one and dimension n, given by

    y_1'(x) = y_2(x)
    y_2'(x) = y_3(x)
    ......................................
    y_{n-1}'(x) = y_n(x)
    y_n'(x) = F(x, y_1(x), y_2(x), ..., y_n(x))                         (1.5.3)

This can be written concisely in vector notation as

    Y'(x) = G(x, Y(x))  with  Y(x) = (y_1(x), y_2(x), ..., y_n(x))      (1.5.4)

where the vector field G collects the right-hand sides of (1.5.3):

    G(x, Y) = (y_2, y_3, ..., y_n, F(x, y_1, y_2, ..., y_n))            (1.5.5)
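As a sketch of the reduction above, the second order equation y'' = -y with y(0) = 0, y'(0) = 1, whose exact solution is sin x, becomes the first order system y_1' = y_2, y_2' = -y_1, which any method for first order systems can then integrate. The step size and interval are illustrative assumptions:

```python
# Reduce y'' = -y to the first-order system y1' = y2, y2' = -y1
# (taking y1 = y and y2 = y'), then integrate with Euler's method.
import math

def G(x, y):
    """Right-hand side of the reduced system Y' = G(x, Y)."""
    y1, y2 = y
    return (y2, -y1)

h, n = 0.001, 1000            # step size and number of steps (illustrative)
x, y = 0.0, (0.0, 1.0)        # initial conditions y(0) = 0, y'(0) = 1
for _ in range(n):
    g1, g2 = G(x, y)
    y = (y[0] + h * g1, y[1] + h * g2)
    x += h

print(y[0], math.sin(x))      # y(1) is close to sin(1)
```

After the reduction, y[0] tracks the solution y and y[1] tracks its derivative y', so the same run also approximates cos(1).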

1.6 PHYSICAL EXAMPLES OF DIFFERENTIAL EQUATIONS

Laplace's equation: In mathematics, Laplace's equation is a second order partial differential equation, as follows:

    ∇²u = 0                                                             (1.6.1)

Here ∇² is the Laplace operator and u is a scalar function.

Laplace's equation is the simplest example of an elliptic partial differential equation. Solutions of Laplace's equation are harmonic functions and are important in many fields of science, notably the fields of electromagnetism, astronomy and fluid dynamics, as they can be used to accurately describe the behavior of electric, gravitational and fluid potentials. In the study of heat conduction, Laplace's equation is the steady-state heat equation. Laplace's equation has several forms, as follows:

    ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = 0                                     (1.6.2)

in cartesian co-ordinates,

    (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ² + ∂²u/∂z² = 0                 (1.6.3)

in cylindrical co-ordinates, and

    (1/r²) ∂/∂r (r² ∂u/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂u/∂θ)
        + (1/(r² sin²θ)) ∂²u/∂φ² = 0                                    (1.6.4)

in spherical co-ordinates.

Electrical circuit: In an electrical circuit that contains resistance, inductance and capacitance, the voltage drop across the resistance is Ri (i is the current in amperes, R is the resistance in ohms), across the inductance is L di/dt (L is the inductance in henries), and across the capacitance is q/C (q is the charge on the capacitor in coulombs, C is the capacitance in farads). For the voltage difference E between two points [4] we can write

    E = L (di/dt) + Ri + q/C                                            (1.6.5)

Now, differentiating (1.6.5) with respect to t and remembering that i = dq/dt, we have a second order differential equation:

    L (d²i/dt²) + R (di/dt) + i/C = dE/dt                               (1.6.6)

If the voltage is suddenly brought to an upper level by connecting a battery across the terminals and maintained steadily at that upper level, a current will flow through the circuit; then by (1.6.6) we can determine how the current varies over a given range of time.

1.7 LINEARITY OF DIFFERENTIAL EQUATIONS

Linear differential equation: In mathematics, a linear differential equation [22] is of the form

    Ly = f                                                              (1.7.1)

Here the differential operator L is a linear operator, y is an unknown function, and the right hand side f is a given function of the same nature as y.
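The defining property of a linear operator L, namely L(ay + bz) = aL(y) + bL(z), can be illustrated numerically. The particular constant co-efficient operator, test functions and difference step below are illustrative assumptions chosen only for the demonstration:

```python
# Illustrate linearity of a sample operator L = d^2/dt^2 + 3 d/dt + 2:
# L(a*y + b*z) should equal a*L(y) + b*L(z).
import math

h = 1e-5                              # finite-difference step (illustrative)

def L(u, t):
    """Apply L = d^2/dt^2 + 3 d/dt + 2 to a function u at t, via central differences."""
    d1 = (u(t + h) - u(t - h)) / (2 * h)
    d2 = (u(t + h) - 2 * u(t) + u(t - h)) / h**2
    return d2 + 3 * d1 + 2 * u(t)

y, z = math.sin, math.exp             # two sample functions
a, b, t = 2.0, -1.5, 0.4              # sample coefficients and point
combined = L(lambda s: a * y(s) + b * z(s), t)
separate = a * L(y, t) + b * L(z, t)

print(combined, separate)             # the two values agree closely
```

The two values differ only by rounding error in the finite differences; repeating the experiment with u(t)² in place of u(t) inside the operator would break the agreement, which is exactly the non-linearity the text rules out.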

For a function y which is dependent on time t, we may write the equation more expressively as

    Ly(t) = f(t)                                                        (1.7.2)

The linear operator L may be considered to be of the form

    L = a_n(t) dⁿ/dtⁿ + a_{n-1}(t) dⁿ⁻¹/dtⁿ⁻¹ + ... + a_1(t) d/dt + a_0(t)    (1.7.3)

The linearity condition on L rules out operations such as taking the square of a derivative of y, but permits, for example, taking the second derivative of y. It is convenient to re-write the above equation in full operator form as

    a_n(t) y^(n)(t) + a_{n-1}(t) y^(n-1)(t) + ... + a_1(t) y'(t) + a_0(t) y(t) = f(t)    (1.7.4)

Here a_0(t), a_1(t), ..., a_n(t) and f(t) are given functions. Such an equation is said to have order n, the index of the highest order derivative of y that is involved.

A typical simple example of a linear differential equation is the one used to model radioactive decay. Let N(t) denote the number of radioactive atoms of a material at time t. Then for some constant λ > 0, the number of radioactive atoms which decay can be modeled by the following equation:

    dN/dt = -λN                                                         (1.7.5)

If y is assumed to be a function of only one variable in (1.7.1), the equation is called an ordinary linear differential equation. Otherwise it is called a partial linear differential equation, which involves derivatives with respect to several variables. If a_0, a_1, ..., a_n are all constants, then (1.7.1) is called a linear differential equation with constant co-efficients, where f is any function of the given variable or variables. For example (representative equations),

    d²y/dx² + 3(dy/dx) + 2y = e^x                                       (1.7.6)
    d²y/dx² + 3(dy/dx) + 2y = 0                                         (1.7.7)

Again, if f = 0, then (1.7.1) is called a homogeneous linear differential equation; such an equation is shown in (1.7.7). But (1.7.1) is called a non-homogeneous linear differential equation if f ≠ 0, as shown in (1.7.6).

Non-linear differential equation: In mathematics, a differential equation in which the dependent variable or its derivatives occur in terms of degree more than one is called a non-linear differential equation; i.e., a differential equation that cannot be put in the forms (1.7.1)-(1.7.4) is called a non-linear differential equation. In other words, a non-linear differential equation is an equation in which the variable (or variables) to be solved for cannot be written as a linear combination of themselves and their derivatives. Furthermore, a differential equation whose terms are of degree one but in which two of them (the dependent variable or its derivatives) appear as a product is also considered a non-linear differential equation. For example (representative equations),

    (dy/dx)² + y = x                                                    (1.7.8)
    y (d²y/dx²) + dy/dx = 0                                             (1.7.9)
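The decay law (1.7.5) has the exact solution N(t) = N₀ e^(-λt), which makes it a convenient test problem for numerical methods. A minimal sketch follows, using the classical fourth order Runge-Kutta step derived in Chapter-3; the decay constant λ, the initial count N₀ and the step size are illustrative assumptions:

```python
# Integrate the radioactive decay law dN/dt = -lam * N with the classical
# 4th-order Runge-Kutta method and compare with the exact solution
# N(t) = N0 * exp(-lam * t).
import math

lam, N0 = 0.3, 1000.0         # decay constant and initial atoms (illustrative)
f = lambda N: -lam * N        # right-hand side (autonomous)

h, steps = 0.1, 50            # integrate up to t = 5 (illustrative)
N = N0
for _ in range(steps):
    k1 = f(N)
    k2 = f(N + 0.5 * h * k1)
    k3 = f(N + 0.5 * h * k2)
    k4 = f(N + h * k3)
    N += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print(N, N0 * math.exp(-lam * 5.0))   # the two values agree closely
```

The fourth order accuracy of the method shows up clearly on this problem: even with the moderate step size above, the numerical and exact values agree to several decimal places.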

1.8 INITIAL VALUE PROBLEMS

In the field of differential equations, an initial value problem is an ordinary differential equation together with a specified value, called the initial condition, of the unknown function at a given point in the domain of the solution. In scientific fields, modeling a system frequently amounts to solving an initial value problem. An initial value problem is a differential equation such as

    y'(t) = f(t, y(t))                                                  (1.8.1)

which satisfies the initial condition

    y(t_0) = y_0,  t_0 ∈ I                                              (1.8.2)

for some open interval I. For example (a representative problem),

    y'(t) = y(t),  y(0) = 1                                             (1.8.3)
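For an initial value problem such as y' = y, y(0) = 1, whose exact solution is e^t, Picard's method of successive approximation (treated in Chapter-2) can even be carried out mechanically, since every iterate y_{n+1}(t) = y(0) + ∫₀ᵗ y_n(s) ds is a polynomial. A minimal sketch, representing each iterate by its coefficient list (the number of iterations and the evaluation point are illustrative assumptions):

```python
# Picard's successive approximations for y' = y, y(0) = 1 (exact: e^t).
# A polynomial a0 + a1*t + a2*t^2 + ... is stored as [a0, a1, a2, ...].
import math

def picard_step(coeffs):
    """One Picard iteration: integrate the polynomial from 0 to t, add y(0) = 1."""
    integral = [0.0] + [a / (k + 1) for k, a in enumerate(coeffs)]
    integral[0] = 1.0
    return integral

y = [1.0]                     # initial guess y_0(t) = 1
for _ in range(10):
    y = picard_step(y)

value = sum(a * 0.5**k for k, a in enumerate(y))   # evaluate y_10 at t = 0.5
print(value, math.exp(0.5))   # the iterates converge to the exponential
```

Each iteration reproduces one more term of the Taylor series of e^t, so after ten steps the iterate is the degree-10 partial sum and the value at t = 0.5 already matches e^0.5 to many decimal places.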

1.9 BOUNDARY VALUE PROBLEMS

In the field of differential equations, a boundary value problem is a differential equation together with a set of additional restraints, called the boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the given boundary conditions. The basic two point boundary value problem is given by

    y''(x) = f(x, y(x), y'(x)),  a < x < b                              (1.9.1)

with

    y(a) = α,  y(b) = β                                                 (1.9.2)

When the boundary conditions for the function y(x) are linear, then for some constant vector c and some square matrices B_a and B_b we get

    B_a y(a) + B_b y(b) = c                                             (1.9.3)

In general, for both linear and non-linear boundary conditions, we can define a boundary function g and write

    g(y(a), y(b)) = 0                                                   (1.9.4)

Boundary value problems arise in many branches of physics, as any physical differential equation will have them. Problems involving the wave equation, such as the determination of normal modes, are often stated as boundary value problems. A class of very important boundary value problems are the Sturm-Liouville problems.

For example, if a string is stretched between two points x = 0 and x = L, and u(x, t) denotes the amplitude of the displacement of the string, then u satisfies the one dimensional wave equation in the region 0 < x < L and is bounded there. Since the string is tied down at the ends, u must satisfy the boundary conditions

    u(0, t) = 0,  u(L, t) = 0                                           (1.9.5)

The method of separation of variables for the wave equation

    ∂²u/∂t² = c² ∂²u/∂x²                                                (1.9.6)

leads to solutions of the form

    u(x, t) = X(x) T(t)                                                 (1.9.7)

where

    X'' + k²X = 0,  T'' + c²k²T = 0                                     (1.9.8)

The constant k must be determined. The boundary conditions (1.9.5) then imply that X(x) is a multiple of sin kx and that k must have the form

    k = nπ/L,  n = 1, 2, 3, ...                                         (1.9.9)

Each value of n in (1.9.9) corresponds to a mode of vibration of the string.

1.10 NUMERICAL METHODS

In mathematics, numerical methods are methods for solving mathematical problems through the performance of a finite number of elementary operations on numbers. The elementary operations used are arithmetic operations, generally carried out approximately, and subsidiary operations such as recording intermediate results and extracting information from tables. Numbers are expressed by a limited set of digits in some positional numeration system. The number line is thus replaced by a discrete system of numbers, called a net. A function of a continuous variable accordingly is replaced by a table of its values in this discrete system of numbers, and operations of analysis that act on continuous functions are replaced by algebraic operations on the function values in the table.


Numerical methods reduce the solution of mathematical problems to computations that can be performed manually or by means of calculating machines. The development of new numerical methods and their use on computers have led to the rise of computer mathematics. Numerical methods are designed for the constructive solution of mathematical problems requiring particular numerical results, usually on a computer. A numerical method is a complete and unambiguous set of procedures for the solution of a problem, together with computable error estimates. The study and implementation of such methods is the province of numerical analysis. Numerical methods continue a long tradition of practical mathematical calculation. Modern numerical methods do not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Numerical methods naturally find applications in all fields of engineering and the physical sciences, but in the 21st century the life sciences and even the arts have adopted elements of scientific computation. Ordinary differential equations appear in the movement of heavenly bodies (planets, stars, and galaxies); optimization occurs in portfolio management. Numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.

1.11 WHY NUMERICAL METHODS ARE PREFERABLE

Many problems in science and engineering can be reduced to the problem of solving differential equations under certain conditions [2,6]. The analytical methods of solution can be applied to solve only a selected class of differential equations.
Those equations which govern physical systems do not possess, in general, closed form solutions, and hence recourse must be made to numerical methods for solving such differential equations. The analytical methods are limited to certain special forms of equations; elementary courses normally treat only linear equations with constant co-efficients, where the degree of the equation is not higher than first. Numerical methods have no such limitations. Let us consider a second order differential equation of the form

d²s/dt² = f(t)   (1.11.1)

This represents the acceleration of a body at time t. Sometimes a differential equation cannot be solved at all, or gives solutions which are very difficult to obtain. For solving such differential equations numerical methods are required. In numerical methods we do not need to know the relationship between


the variables, but we need the numerical value of the dependent variable for certain values of the independent variable or variables. Now, solving (1.11.1) by the analytical method, we get the following

ds/dt = ∫ f(t) dt + c1 = v(t)   (1.11.2)

s = ∫∫ f(t) dt dt + c1 t + c2   (1.11.3)

Here v is the velocity and s is the displacement of the body at time t. The expressions in (1.11.2) and (1.11.3) are the first and second integral forms of f(t) with respect to t. Also c1 and c2 are arbitrary constants, which are to be determined.

Then (1.11.3) is called the general solution of (1.11.1). For particular values of c1 and c2, (1.11.3) represents a curve.

Now, if with (1.11.1) we also give the conditions s(t0) = s0 and s'(t0) = v0 at a particular value t0 of the time, then from (1.11.2) and (1.11.3) respectively we can find the values of c1 and c2. These extra conditions are called the initial or boundary conditions. Then (1.11.1) becomes an initial value problem as

d²s/dt² = f(t),   s(t0) = s0,   s'(t0) = v0   (1.11.4)

We can find several points on the curves of the family given by (1.11.3) which pass through certain points under the given initial conditions, with different values of c1 and c2. Such a solution is called the numerical solution of a differential equation having numerical co-efficients and given initial conditions, by which we can find a solution of any desired degree of accuracy. 1.12 CONTRIBUTIONS OF NUMERICAL METHODS The overall goal of numerical methods is the design and analysis of techniques to give approximate but accurate solutions to hard problems, a variety of which are given below. a. Advanced numerical methods are essential in making numerical weather prediction feasible. b. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. c. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.

d. Hedge funds (private investment funds) use tools from all fields of numerical analysis to calculate the value of stocks and derivatives more precisely than other market participants. e. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. This field is also called operations research. f. Insurance companies use numerical programs for actuarial analysis. All of the above require better techniques which minimize the computational error. There are several methods for solving differential equations having numerical co-efficients with initial or boundary conditions. Some well-known ones will be discussed in the next chapters.

CHAPTER-2

SOLUTION OF DIFFERENTIAL EQUATIONS OF FIRST ORDER AND FIRST DEGREE BY NUMERICAL METHODS OF EARLY STAGE.

CHAPTER-2 SOLUTION OF DIFFERENTIAL EQUATIONS OF FIRST ORDER AND FIRST DEGREE BY NUMERICAL METHODS OF EARLY STAGE. 2.1 INTRODUCTION The solution of an ordinary differential equation means to find an explicit expression for the dependent variable y in terms of a finite number of elementary functions of x. Such a solution of a differential equation is called a closed or finite form of the solution. In most numerical methods we replace the differential equation by a difference equation and then solve it. The methods developed and applied to solve ordinary differential equations of first order and first degree will yield the solution [23] in one of the following forms:

(i) A power series in x for y, from which the values of y can be obtained by direct substitution.

(ii) A set of tabulated values of x and y.

In single step methods, such as Taylor's series method and Picard's approximation method, the information about the curve represented by a differential equation at one point is utilized and the solution is not iterated. The methods of Euler, Milne, Adams-Moulton and Runge-Kutta belong to the step by step or marching methods. In these methods the next point on the curve is evaluated in short steps ahead, for equal intervals of width h of the independent variable, by performing iterations till the desired level of accuracy is achieved. In this chapter we will discuss the Taylor's series method, Picard's approximation method and Euler's method (with its modification), which are considered as the numerical methods of early stage.

2.2 TAYLOR'S SERIES METHOD

Derivation: Let us consider the initial value problem

dy/dx = f(x, y),   y(x0) = y0   (2.2.1)

Let y(x) be the exact solution of (2.2.1) such that y(x0) = y0. Now expanding y(x) by Taylor's series [12] about the point x0 we get

y(x) = y0 + (x − x0) y0' + ((x − x0)²/2!) y0'' + ((x − x0)³/3!) y0''' + ...   (2.2.2)

In the expression (2.2.2), the derivatives y0'', y0''', ... are not explicitly known. However, if f(x, y) is differentiable several times, they can be expressed in terms of f(x, y) and its partial derivatives as follows

y'' = f_x + f f_y

y''' = f_xx + 2 f f_xy + f² f_yy + f_y (f_x + f f_y)

By a similar manner a derivative of any order of y can be expressed in terms of f(x, y) and its partial derivatives.

As the equations of higher order total derivatives create a hard stage of computation, to overcome the problem we are to truncate the Taylor's expansion to the first few convenient terms of the series. This truncation in the series leads to a restriction on the value of x for which the expansion is a reasonable approximation. Now, for a suitably small step length h = x1 − x0, the function is evaluated at x1. Then the Taylor's expansion (2.2.2) becomes

y1 = y0 + h y0' + (h²/2!) y0'' + (h³/3!) y0''' + ...   (2.2.3)

The derivatives y0', y0'', ... are evaluated at (x0, y0) and then substituted in (2.2.3) to obtain the value of y1 at x1. The value y2 at x2 = x1 + h is given by

y2 = y1 + h y1' + (h²/2!) y1'' + (h³/3!) y1''' + ...   (2.2.4)

By a similar manner we get y3, y4, ... Thus the general form is obtained as

y_{n+1} = y_n + h y_n' + (h²/2!) y_n'' + (h³/3!) y_n''' + ...   (2.2.5)

This equation can be used to obtain the value of y_{n+1}, which is the approximate value to the actual value of y(x) at the value x_{n+1} = x_n + h.

Truncation error: Equation (2.2.5) can be written as

y_{n+1} = y_n + h y_n' + (h²/2!) y_n'' + O(h³)   (2.2.6)

Here O(h³) denotes all the remaining terms, which contain the third and higher powers of h. Now we can omit the terms O(h³), which gives the approximation error of (2.2.6). For some constant C, the local truncation error in this approximation of y_{n+1} is C h³. Then, for a better approximation of y_{n+1}, we may choose the terms up to h³, so we obtain

y_{n+1} = y_n + h y_n' + (h²/2!) y_n'' + (h³/3!) y_n''' + O(h⁴)   (2.2.7)

Again for a better approximation with less truncation error, we are to utilize higher order derivatives. Then with truncation error O(h⁵), (2.2.6) becomes

y_{n+1} = y_n + h y_n' + (h²/2!) y_n'' + (h³/3!) y_n''' + (h⁴/4!) y_n'''' + O(h⁵)   (2.2.9)

Thus, from the Taylor's theorem, considering the remainder term, the truncation error after the term in h^k is given as

E = (h^{k+1}/(k+1)!) y^{(k+1)}(ξ),   x_n < ξ < x_{n+1}   (2.2.10)
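As a sketch of how the stepping formula (2.2.5) can be carried out in practice, the following fragment advances the equation treated in the next section, dy/dx = x + y with y(0) = 1, for which y'' = 1 + y' and every higher derivative equals y''. The routine and the number of retained terms are illustrative choices, not prescribed by the text:

```python
import math

def taylor_step(x, y, h, terms=5):
    """One step of the Taylor series method (2.2.5) for y' = x + y.

    For this equation y'' = 1 + y', and every higher derivative
    equals y'', so the truncated series is easy to sum."""
    d1 = x + y          # y'
    d2 = 1.0 + d1       # y'' = y''' = y'''' = ...
    total = y + h * d1
    h_power, factorial = h, 1.0
    for k in range(2, terms + 1):
        h_power *= h
        factorial *= k
        total += (h_power / factorial) * d2
    return total

# Two steps of length h = 0.1 from y(0) = 1 give an approximation to y(0.2).
y = 1.0
for n in range(2):
    y = taylor_step(0.1 * n, y, 0.1)

exact = 2 * math.exp(0.2) - 0.2 - 1   # exact solution y = 2e^x - x - 1
# y and exact agree here to better than six decimal places
```

Retaining more terms per step shrinks the truncation error (2.2.10), at the cost of evaluating more derivatives.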

2.3 APPLICATION OF THE TAYLOR'S SERIES METHOD

Apply the Taylor's series method to solve dy/dx = x + y with the initial condition y(0) = 1 up-to x = 0.2, where h = 0.1.

Solution: Given that dy/dx = x + y, with x0 = 0, y0 = 1. Also

y' = x + y,   y'' = 1 + y',   y''' = y'',   y'''' = y''', ...

To find y1 when x1 = 0.1, we are to proceed as follows. At (x0, y0) = (0, 1),

y0' = 0 + 1 = 1,   y0'' = 1 + 1 = 2,   y0''' = y0'''' = 2

From (2.2.3), neglecting the terms containing h⁵ and higher order terms and substituting the above values, we get

y1 = 1 + (0.1)(1) + (0.01/2)(2) + (0.001/6)(2) + (0.0001/24)(2) = 1.11034167

To find y2 when x2 = 0.2, we are to proceed as follows. At (x1, y1) = (0.1, 1.11034167),

y1' = 0.1 + 1.11034167 = 1.21034167,   y1'' = y1''' = y1'''' = 2.21034167

From (2.2.4), neglecting the terms containing h⁵ and higher order terms and substituting these values, we get

y2 = 1.11034167 + (0.1)(1.21034167) + (0.01/2)(2.21034167) + (0.001/6)(2.21034167) + (0.0001/24)(2.21034167) = 1.24280515

Thus we get y(0.2) ≈ 1.24280515.

Exact result: We have dy/dx − y = x. This is a linear differential equation in y whose integrating factor is e^{−x}. Multiplying the above differential equation by e^{−x}, it becomes

d/dx (y e^{−x}) = x e^{−x}

y e^{−x} = ∫ x e^{−x} dx + c = −(x + 1) e^{−x} + c

From the initial condition y(0) = 1, we get c = 2. Then the above solution becomes

y = 2 e^{x} − x − 1

At x = 0.2 we get y = 1.24280552. So, the truncation error is 1.24280515 − 1.24280552 = −0.00000037.

Hence in this case the approximation in Taylor's series method is correct to six decimal places.

Advantages of Taylor's series method: This is a single step method in general and works well so long as the successive derivatives can be calculated easily. Also, if a problem is written in variable separable form, it gives a correct solution with significant digits of accuracy.

Disadvantages of Taylor's series method: In practical life it has not much importance, due to the need of partial derivatives, which are complex to compute. Also h should be small enough that successive terms of the series diminish quite rapidly, and the evaluation of additional terms becomes increasingly difficult. The most significant disadvantage of this method is the requirement of evaluating the higher order derivatives frequently. Being a time consuming process, it is highly disliked for computation.

2.4 PICARD'S METHOD OF SUCCESSIVE APPROXIMATION

Derivation: Let us consider the initial value problem

dy/dx = f(x, y),   y(x0) = y0   (2.4.1)

We have

dy = f(x, y) dx   (2.4.2)

Integrating (2.4.2) between the corresponding limits y0 to y and x0 to x, (2.4.2) gives the following

∫ from y0 to y of dy = ∫ from x0 to x of f(x, y) dx

y = y0 + ∫ from x0 to x of f(x, y) dx   (2.4.3)

Here the integral term on the right hand side represents the increment in y produced by an increment x − x0 in x. The equation is complicated by the presence of y in (2.4.3) under the integral sign as well as outside it. An equation of this kind is called an integral equation and can be solved by a process of successive approximation or iteration, if the indicated integrations can be performed in the successive steps [11].

To solve (2.4.1) by the Picard's method of successive approximation, we get a first approximation y^(1) by putting y = y0 in (2.4.3); then

y^(1) = y0 + ∫ from x0 to x of f(x, y0) dx   (2.4.4)

The integrand is now a function of x alone and the indicated integration can be performed, at least for one time. Having the first approximation to y, substitute it for y in the integrand of (2.4.3), and by integrating again we get the second approximation of y as following

y^(2) = y0 + ∫ from x0 to x of f(x, y^(1)) dx   (2.4.5)

Proceeding in this way we obtain y^(3), y^(4), ... and so on. Thus we get the n-th approximation given by the following equation

y^(n) = y0 + ∫ from x0 to x of f(x, y^(n−1)) dx   (2.4.6)

Then, putting y^(n) for y in (2.4.3), we get the next approximation as follows

y^(n+1) = y0 + ∫ from x0 to x of f(x, y^(n)) dx   (2.4.7)
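When f(x, y) is a polynomial, the integrations in (2.4.6) can be mechanized exactly. The sketch below is an illustrative implementation (not part of the original text) carrying out the Picard iteration for the example of the next section, dy/dx = x + y with y(0) = 1, representing each iterate by its power series coefficients:

```python
def picard_iterate(n_iter):
    """Picard iteration (2.4.6) for y' = x + y, y(0) = 1.

    An iterate y(x) = c[0] + c[1] x + c[2] x^2 + ... is stored as the
    coefficient list c; each pass integrates f(x, y) = x + y term by term
    from 0 to x and adds the initial value y0 = 1."""
    c = [1.0]                                  # y^(0) = y0 = 1
    for _ in range(n_iter):
        g = c + [0.0] * max(0, 2 - len(c))     # pad so an x-term exists
        g[1] += 1.0                            # coefficients of x + y
        c = [1.0] + [g[i] / (i + 1) for i in range(len(g))]
    return c

def evaluate(c, x):
    return sum(coef * x ** i for i, coef in enumerate(c))

y6 = picard_iterate(6)          # sixth approximation y^(6)
value = evaluate(y6, 0.2)       # compare with the exact 2e^0.2 - 1.2
```

The first pass reproduces y^(1) = 1 + x + x²/2, and six passes give the polynomial evaluated in section 2.5.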

This process will be repeated in this way as many times as necessary or desirable, and will be terminated when two consecutive values of y are the same to the desired degree of accuracy.

2.5 APPLICATION OF THE PICARD'S METHOD OF SUCCESSIVE APPROXIMATION

Apply Picard's method of successive approximation to solve dy/dx = x + y with the initial condition y(0) = 1.

Solution: Given that dy/dx = x + y, with x0 = 0 and y0 = 1. Now, integrating the given equation between the corresponding limits y0 to y and x0 to x, it becomes

∫ from 1 to y of dy = ∫ from 0 to x of (x + y) dx

y = 1 + ∫ from 0 to x of (x + y) dx   (2.5.1)

For the first approximation we proceed as follows: putting y = 1 in (2.5.1), we get

y^(1) = 1 + ∫ from 0 to x of (x + 1) dx = 1 + x + x²/2

For the second approximation we proceed as follows: putting y = y^(1) in (2.5.1), we get

y^(2) = 1 + ∫ from 0 to x of (1 + 2x + x²/2) dx = 1 + x + x² + x³/6

For the third approximation we proceed as follows: putting y = y^(2) in (2.5.1), we get

y^(3) = 1 + ∫ from 0 to x of (1 + 2x + x² + x³/6) dx = 1 + x + x² + x³/3 + x⁴/24

For the fourth approximation we proceed as follows: putting y = y^(3) in (2.5.1), we get

y^(4) = 1 + ∫ from 0 to x of (1 + 2x + x² + x³/3 + x⁴/24) dx = 1 + x + x² + x³/3 + x⁴/12 + x⁵/120

For the fifth approximation we proceed as follows: putting y = y^(4) in (2.5.1), we get

y^(5) = 1 + ∫ from 0 to x of (1 + 2x + x² + x³/3 + x⁴/12 + x⁵/120) dx = 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/720

For the sixth approximation we proceed as follows: putting y = y^(5) in (2.5.1), we get

y^(6) = 1 + ∫ from 0 to x of (1 + 2x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/720) dx = 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/360 + x⁷/5040

At x = 0.2, we get

y^(1)(0.2) = 1.22000000
y^(2)(0.2) = 1.24133333
y^(3)(0.2) = 1.24273333
y^(4)(0.2) = 1.24280267
y^(5)(0.2) = 1.24280542
y^(6)(0.2) = 1.24280551

Exact result: We have dy/dx = x + y, y(0) = 1. From the analytical solution in section 2.3, the solution of the above differential equation is obtained as

y = 2e^x − x − 1 = 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/360 + x⁷/2520 + ...

So this is the particular solution. Now, the sixth approximation is correct up to the first seven terms in the series; thus the truncation error is obtained as

−[2e^x − x − 1 − y^(6)(x)] ≈ −x⁷/5040, plus terms of higher order.

Now, putting x = 0.2 in the analytical solution, we get the exact value of y as 1.24280552.

So the error is less than 10⁻⁸. Thus, we can conclude that in this case the approximation in Picard's method is correct to eight decimal places.

Graphical representation of the above approximations and exact result: We have the approximations in y obtained as y^(1), y^(2), y^(3), y^(4), y^(5), y^(6), and the actual value in y is y = 2e^x − x − 1. Now, by plotting the above values in a graph [11], we get the following figure.

Figure – (2.1)

The figure (2.1) shows that the approximating curves approach the curve y = 2e^x − x − 1 more closely with each successive approximation, passing over it at the sixth approximation. The successive approximations y^(1), y^(2), y^(3), y^(4), y^(5), y^(6) have the same terms as the exact infinite series truncated after 2, 3, 4, 5, 6 and 7 terms respectively.

Advantages of Picard's method: The iteration process is quite easy to implement in a computer algebra system and will sometimes yield useful accuracy in the numerical solution. The speed of the calculation is another advantage of this method. Also, it gives a better approximation of the desired solution than the previous method, i.e. correct up to more decimal places.

Disadvantages of Picard's method: In practice it is unsatisfactory, as difficulties arise in performing the necessary integrations. The integral part becomes more and more difficult as we proceed to higher order iterations. Adoption of a numerical technique in this method for the integrations consumes computation time, besides affecting the accuracy of the result.

2.6 COMPARISON BETWEEN TAYLOR'S SERIES METHOD & PICARD'S METHOD OF SUCCESSIVE APPROXIMATION.

Both the Taylor's series method and the Picard's method involve analytic operations [1]. Taylor's series method involves only analytic differentiation and can be mechanized quite readily on a digital computer. In fact, the Taylor's series method has been proposed as a general purpose numerical integration method, and programs exist for solving systems of differential equations by the method of analytic continuation. On the other hand, Picard's method involves indefinite integrations; while programs have been written to mechanize this process, they do not always work, since there exist integrands built from elementary functions whose indefinite integrals cannot be expressed in terms of elementary functions. Moreover, the truncation errors in the above two methods show that Taylor's series method gives accuracy correct to six decimal places after the sixth step, whereas Picard's method gives accuracy correct to eight decimal places. Thus we can conclude that Picard's method is better than Taylor's series method in real life practice.

2.7 EULER'S METHOD

Derivation: Let us consider the initial value problem

dy/dx = f(x, y),   y(x0) = y0   (2.7.1)

We know that if the function f(x, y) is continuous in an open interval containing (x0, y0), there exists a unique solution [11] of the equation (2.7.1) as

y = y(x)   (2.7.2)

The solution is valid throughout the given interval. We wish to determine the approximate values of the exact solution y(x) in the given interval for the values x1, x2, ..., where x_{n+1} = x_n + h.

Figure-(2.2)

Now we will derive a tangent line equation for (2.7.1). From the above figure, the tangent to the solution curve at (x0, y0) has slope y0' = f(x0, y0), so

y1 = y0 + h f(x0, y0)   (2.7.3)

This is the first approximation for y(x) at x1. Similarly we get the next approximations as y2 = y1 + h f(x1, y1) at x2, y3 = y2 + h f(x2, y2) at x3, and so on. In general, the (n+1)-th approximation at x_{n+1} is given by

y_{n+1} = y_n + h f(x_n, y_n)   (2.7.4)

Truncation error: Let y(x_{n+1}) be the exact solution of (2.7.1) at x_{n+1}, while the approximate solution is given by (2.7.4). Then we get the error

E = y(x_{n+1}) − y_{n+1}   (2.7.5)

Assuming the existence of the higher order derivatives, y(x_{n+1}) can be expanded by Taylor's series about x_n; we obtain

y(x_{n+1}) = y(x_n) + h y'(x_n) + (h²/2) y''(ξ),   x_n < ξ < x_{n+1}   (2.7.6)

Therefore the truncation error E is given by

E = (h²/2) y''(ξ)   (2.7.7)
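The behavior of this error can be seen in a minimal sketch of Euler's rule (2.7.4), applied for illustration to dy/dx = x + y with y(0) = 1 (the equation used elsewhere in this chapter); halving h roughly halves the accumulated error, consistent with the h² local truncation error above compounding over 1/h steps:

```python
import math

def euler(f, x0, y0, h, steps):
    """Advance y' = f(x, y) from (x0, y0) by repeated use of (2.7.4)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: x + y
exact = 2 * math.exp(1.0) - 2.0                     # exact y(1) for y(0) = 1

error_h = exact - euler(f, 0.0, 1.0, 0.1, 10)       # h = 0.1
error_h2 = exact - euler(f, 0.0, 1.0, 0.05, 20)     # h = 0.05
# error_h2 is roughly half of error_h
```

This is the practical meaning of a first order method: the work doubles each time the error is halved.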

Thus the truncation error is of O(h²), i.e. the truncation error is proportional to h². By diminishing the size of h, the error can be minimized. If M is a positive constant such that |y''(x)| ≤ M, then

|E| ≤ M h² / 2   (2.7.8)

Here the right hand side is an upper bound of the truncation error. The absolute value of E is taken for the magnitude of the error only.

2.8 PHYSICAL APPLICATION OF EULER'S METHOD

Suppose a bob of mass m is suspended from a fixed point with a thin, light and inextensible string of length l. When the bob is shifted from its equilibrium position and released, it will execute an oscillatory motion [20,21]. The motion is described by the equation

m l (d²θ/dt²) = − m g sin θ   (2.8.1)

where θ is the angle between the string and the vertical, d²θ/dt² is the angular acceleration of the bob and g is the constant acceleration due to gravity. Then equation (2.8.1) takes the form

d²θ/dt² = −(g/l) sin θ   (2.8.2)

Figure-(2.3)

The oscillation is taken as very small; then (2.8.2) can be reduced by approximating sin θ as θ. This approximation reduces the equation (2.8.2) to one with the analytical solution given below

θ = θ_m cos(√(g/l) t)   (2.8.3)

However, if θ is not small, then (2.8.3) cannot be used. Now, multiplying both sides of (2.8.2) by 2(dθ/dt), we get

2 (dθ/dt)(d²θ/dt²) = −(2g/l) sin θ (dθ/dt)   (2.8.4)

Now, integrating both sides of (2.8.4) with respect to t, we get

(dθ/dt)² = (2g/l) cos θ + c   (2.8.5)

By assuming a suitable initial condition we can determine the value of c. Suppose the initial condition is that the angular displacement is maximum and equal to θ_m when dθ/dt = 0; we thus obtain c = −(2g/l) cos θ_m. Then (2.8.5) becomes

dθ/dt = √( (2g/l)(cos θ − cos θ_m) )   (2.8.6)

This differential equation is of first order but not in linear form, so the analytical method is not fruitful for it. Thus we are to apply a numerical method to solve (2.8.6), i.e. apply Euler's method for solving (2.8.6).

Now, assuming the initial condition θ = 0 when t = 0, an initial value problem arises as follows

dθ/dt = √( (2g/l)(cos θ − cos θ_m) ),   θ(0) = 0   (2.8.7)

With step length h in t, Euler's method gives

θ_{n+1} = θ_n + h √( (2g/l)(cos θ_n − cos θ_m) )   (2.8.8)

Putting n = 0, 1, 2, 3, 4 successively in (2.8.8), we obtain the first five approximations θ1, θ2, θ3, θ4, θ5 in turn, each computed from the value obtained in the preceding step.

Now the solution of the simple pendulum problem obtained in this way is a table of the successive approximations θ1 through θ5 against the approximation numbers 00 to 05.

Graphical representation of the application: The actual solution θ(t) of the differential equation (2.8.1) can be represented as shown in the following figure, located by the dotted curve [22].
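The stepping of (2.8.8) can be sketched as follows. The physical constants, amplitude and step length are assumed values chosen only for illustration; starting from the equilibrium position θ = 0, the bob is advanced until it nears the maximum displacement, so the elapsed time approximates a quarter of the period:

```python
import math

g, l = 9.8, 1.0          # assumed gravitational acceleration and string length
theta_max = 0.5          # assumed maximum angular displacement (radians)
h = 0.001                # assumed step length in t

theta, t = 0.0, 0.0      # start at the equilibrium position, moving outward
while theta < theta_max - 1e-4:
    # Euler step (2.8.8): slope from the first order equation (2.8.6)
    slope = math.sqrt(max(0.0, (2 * g / l) * (math.cos(theta) - math.cos(theta_max))))
    if slope == 0.0:
        break
    theta += h * slope
    t += h

# t is now close to a quarter period; for small amplitudes the linearized
# solution (2.8.3) gives T/4 = (pi/2) * sqrt(l/g), about 0.50 here
```

Note that the square root vanishes at θ = θ_m, so the step must begin away from the maximum displacement; this is one reason the first order form (2.8.6) is delicate to integrate numerically.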

Figure-(2.4)

In this method the actual curve at any point is represented by a constant slope over a small interval of length h. Successive sub-intervals x0 ≤ x ≤ x1, x1 ≤ x ≤ x2, ..., each of length h, are considered, where x0 is the initial point. The extension of Euler's algorithm up to the point x_{n+1} yields the approximate value y_{n+1}, given by

y_{n+1} = y0 + h [ f(x0, y0) + f(x1, y1) + ... + f(x_n, y_n) ]   (2.8.9)

Here y1 is the value of y at x1. From the above figure, the ordinate of the point P1 is given by

y1 = y0 + h f(x0, y0)

This is obtained by considering the constant slope f(x0, y0) of the line segment P0P1. The next slope to be computed corresponds to the point (x1, y1): the slope is f(x1, y1). According to this slope we get the point P2, whose ordinate is the sum of the ordinate of P1 and h f(x1, y1):

y2 = y1 + h f(x1, y1) = y0 + h [ f(x0, y0) + f(x1, y1) ]

and so on. Generalizing this procedure, we obtain the equation (2.8.9), which gives the ordinate of P_{n+1}, the approximate value of y at x_{n+1}. Thus the error in this solution is given by the deviation of the broken line P0 P1 P2 ... from the actual solution curve.

Advantages of Euler's method: Since in this method no integration appears in the calculation, it is easier than the previous two methods of this chapter for practical purposes. As in each approximation of the calculation the result of the previous approximation is used, it improves the accuracy of the solution. It is also less time consuming. Moreover, a problem which cannot be solved by analytical methods, or can hardly be done by Taylor's series method or Picard's method, can be successfully solved by Euler's method, for its recurring ability.

Disadvantages of Euler's method: In Euler's method, if y changes rapidly over an interval, the slope at the beginning gives a poor approximation in comparison with the average slope over the interval. So the calculated value of y in this method carries much error from the exact value, which reasonably increases in the succeeding intervals, and then the final value of y differs on a large scale from the exact value. Euler's method needs a smaller value of h; because of this restriction, this method is unsuitable for practical use and can be applied for tabulating the values of the dependent variable over a limited range only. Moreover, if h is not small enough, this method is too inaccurate. In Euler's method the actual solution curve is approximated by a sequence of short straight lines, which sometimes deviates from the solution curve significantly.

2.9 MODIFICATION OF EULER'S METHOD

Due to the above consideration, we can say that the computed values of y will deviate further and further from the actual values of y so long as the curvature of the graph does not change. This encourages a modification of Euler's method [11,18].

Derivation: Starting with the initial value y0, an approximate value of y1 at x1 = x0 + h is computed from the relation

y1^(0) = y0 + h f(x0, y0)

Here y1^(0) is the first approximation of y1 at x1. Substituting this approximate value of y1 in (2.7.1), we get an approximate value of y' at the end of the first interval:

y1'^(0) = f(x1, y1^(0))

Now the improved value of y1 is obtained by using the Trapezoidal rule as

y1^(1) = y0 + (h/2) [ f(x0, y0) + f(x1, y1^(0)) ]

This value y1^(1) is more accurate than the value y1^(0). Then the second approximation for y' at x1 is now

y1'^(1) = f(x1, y1^(1))

By substituting this improved value we get the second improved value of y1 as follows

y1^(2) = y0 + (h/2) [ f(x0, y0) + f(x1, y1^(1)) ]

Then the third approximation for y1 is now

y1^(3) = y0 + (h/2) [ f(x0, y0) + f(x1, y1^(2)) ]   (2.9.1)

Continuing this process, we can find the (n+1)-th approximation, obtained as

y1^(n+1) = y0 + (h/2) [ f(x0, y0) + f(x1, y1^(n)) ]   (2.9.2)

This process is applied repeatedly until no significant change is produced in two consecutive values of y1.

The above process is applied for the first interval, and the same manner holds for the next intervals also. Then the general formulae for the modified Euler's method take the following form

y_{n+1}^(0) = y_n + h f(x_n, y_n)   (2.9.3)

y_{n+1}^(k+1) = y_n + (h/2) [ f(x_n, y_n) + f(x_{n+1}, y_{n+1}^(k)) ]   (2.9.4)

Truncation error of modified Euler's method: First we will improve the modified Euler's formula for better accuracy, and then find the truncation error by the help of the improved formula [11]. We have from Euler's method that the first approximation could be found by means of the formula

y_{n+1}^(0) = y_n + h f(x_n, y_n)

But as soon as two consecutive values of y are known, the first approximation to the succeeding values of y can be found more accurately from the formula

y_{n+1}^(0) = y_{n−1} + 2 h f(x_n, y_n)   (2.9.5)

To derive this formula, let the function y(x) be represented in the neighborhood of x_n by the Taylor's series as follows

y(x_n + h) = y_n + h y_n' + (h²/2!) y_n'' + (h³/3!) y_n''' + ...   (2.9.6)

y(x_n − h) = y_n − h y_n' + (h²/2!) y_n'' − (h³/3!) y_n''' + ...   (2.9.7)

Now, by subtracting (2.9.7) from (2.9.6), we get

y(x_n + h) = y(x_n − h) + 2 h y_n' + (h³/3) y_n''' + ...   (2.9.8)

When h is very small and only the first two terms in the right hand members of (2.9.6) and (2.9.8) are used, the truncation errors are (h²/2) y'' and (h³/3) y''' respectively, and the latter one is much smaller than the former. Thus, (2.9.8) gives a more accurate value of y_{n+1}. The first approximation of y_{n+1} formed from (2.9.5) is to be corrected and improved by the averaging process described above. The principal part of the error in the final value of y_{n+1} can be found as follows.

Since the increment in y for each step is obtained from the formula

Δy = (h/2) [ f(x_n, y_n) + f(x_{n+1}, y_{n+1}) ]   (2.9.9)

the right hand member of (2.9.9) has the form of the trapezoidal rule, and the principal part of the error in Δy (by the mean value theorem) is obtained as

E = −(h³/12) y'''(ξ),   x_n < ξ < x_{n+1}   (2.9.10)

This shows that the error involves only terms in h³ and higher order. From this it follows that the error is of the order h³. Since in the case of Euler's method the error is of order h², it is clear that the modified Euler's method is more accurate than Euler's method.

2.10 APPLICATION OF MODIFIED EULER'S METHOD

Solve the initial value problem dy/dx = x + y, y(0) = 1 at x = 0.1 by the modified Euler's method, taking h = 0.05.

Solution: Given that f(x, y) = x + y, with x0 = 0 and y0 = 1.

We know from Euler’s and modified Euler’s formulae as ( (

)

(

Here, taking

: putting

Second approximation for ( )

)

(2.10.2)

in (2.10.1), we get (

(

)

( )

(

)

( )

( )

(2.10.1)

)

(

and

First approximation for

So, (

)

)

)( ) ( )

: putting (

in (2.10.2), we get )

(

( )

)

Study On Different Numerical Methods For Solving Differential Equations. [Page 32]

Chapter-2: Solution Of Differential Equations Of First Order And First Degree By Numerical Methods Of Early Stage. So, (

( )

( )

)

Third approximation for

: putting

( )

So, (

( )

(

( )

( )

(

: putting

( )

So, (

)

( )

)

( )

)

Fourth approximation for

in (2.10.2), we get

in (2.10.2), we get

(

)

(

( )

)

( )

)

( )

Since are same, we get no further change in iteration process. Therefore we take ( First approximation for

: putting ( (

( )

)

Second approximation for

( )

)

Third approximation for

( )

)

Fourth approximation for

)(

)

: putting (

in (2.10.2), we get )

(

( )

)

( )

: putting

( )

So, (

)

( )

( )

So, (

) in (2.10.1), we get

( )

So, (

continuing the

(

in (2.10.2), we get )

(

( )

)

( )

: putting

in (2.10.2), we get

Study On Different Numerical Methods For Solving Differential Equations. [Page 33]

Since y_2^(2) and y_2^(3) are the same, we get no further change in the iteration process. Therefore we take y_2 = 1.11038.

Collecting our results in tabular form, we have the following table:

x       y(x)      y'(x) = x + y
0.00    1.00000   1.00000
0.05    1.05256   1.10256
0.10    1.11038   1.21038

Exact result: We have dy/dx - y = x.

This is a linear differential equation in y whose integrating factor is e^(-x). Multiplying the above differential equation by e^(-x), it becomes

d(y e^(-x))/dx = x e^(-x)

so that

y e^(-x) = ∫ x e^(-x) dx + c = -(x + 1) e^(-x) + c

From the initial condition y = 1 at x = 0, we get c = 2. Then the above solution becomes

y = 2e^x - x - 1

So we obtain the required solutions as follows:

y(0.05) = 2e^(0.05) - 1.05 = 1.05254,   y(0.10) = 2e^(0.10) - 1.10 = 1.11034

Comparing with the table obtained from the approximation shows that the method can be improved by taking a smaller value of h, since the difference between the approximate values and the exact solution values increases step by step.

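The iteration of Section 2.10 can be sketched in Python. The test problem is dy/dx = x + y with y(0) = 1, consistent with the tabulated values and the exact solution y = 2e^x - x - 1; the function names and the stopping tolerance are illustrative choices, not part of the thesis.

```python
import math

def modified_euler(f, x0, y0, h, steps, tol=1e-6, max_iter=25):
    """Euler predictor (2.10.1) followed by the iterated trapezoidal
    corrector (2.10.2), repeated until two iterates agree within tol."""
    xs, ys = [x0], [y0]
    for _ in range(steps):
        x, y = xs[-1], ys[-1]
        y_new = y + h * f(x, y)                             # Euler predictor
        for _ in range(max_iter):
            y_next = y + (h / 2) * (f(x, y) + f(x + h, y_new))  # corrector
            if abs(y_next - y_new) < tol:
                break
            y_new = y_next
        xs.append(x + h)
        ys.append(y_next)
    return xs, ys

# dy/dx = x + y, y(0) = 1, h = 0.05; exact solution y = 2e^x - x - 1
xs, ys = modified_euler(lambda x, y: x + y, 0.0, 1.0, 0.05, 2)
for x, y in zip(xs, ys):
    print(f"x = {x:.2f}  y = {y:.5f}  exact = {2 * math.exp(x) - x - 1:.5f}")
```

With h = 0.05 this reproduces the tabulated approximations 1.05256 and, to within a unit in the last digit (the corrector is iterated to a tighter tolerance here), 1.11038.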

CHAPTER-3

SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS BY PREDICTOR-CORRECTOR METHOD AND RUNGE-KUTTA METHOD.

3.1 INTRODUCTION

In the previous chapter we discussed three early numerical methods for solving ordinary differential equations. In this chapter we discuss two modern numerical methods for solving ordinary differential equations, known as the predictor-corrector method and the Runge-Kutta method respectively. It is to be noted that one predictor-corrector based method has already been mentioned in the previous chapter, namely the modified Euler's method. We now describe the methods mentioned above in detail with applications, and then compare them.

3.2 DEFINITION OF PREDICTOR-CORRECTOR METHOD

In the methods described so far for solving an ordinary differential equation over an interval, only the value of y at the beginning of the interval was required. In the predictor-corrector methods, four prior values are needed for finding the value of y at a given value of x [2,6]. These methods, though slightly complex, have the advantage of giving an estimate of the error from successive approximations of y_{n+1}. From Euler's formula, we have

y_{n+1}^(0) = y_n + h f(x_n, y_n)   (3.2.1)

Also from the modified Euler's formula, we have

y_{n+1}^(r+1) = y_n + (h/2)[f(x_n, y_n) + f(x_{n+1}, y_{n+1}^(r))]   (3.2.2)

The value of y_{n+1} is first estimated by (3.2.1), and then (3.2.2) gives a better approximation of y_{n+1}. This value is again substituted in (3.2.2) to find a still better approximation, and the procedure is repeated until two consecutive iterated values of y_{n+1} agree. This technique of refining an initially crude estimate of y_{n+1} by means of a more accurate formula is known as the predictor-corrector method.


The equation (3.2.1) is taken as the predictor, while (3.2.2) serves as a corrector of y. In this section we describe two predictor-corrector methods: Milne's predictor-corrector formula and the Adams-Moulton predictor-corrector formula.

3.3 MILNE'S PREDICTOR-CORRECTOR METHOD

Milne's method is a simple and reasonably accurate method of solving ordinary differential equations numerically. To solve the differential equation dy/dx = f(x, y) by this method, we first approximate the value of y by the predictor formula at x = x_{n+1} and then improve this value of y by using a corrector formula. These formulae are derived from Newton's formula of interpolation [6,9].

Derivation of Milne's predictor formula: We know that Newton's formula of forward interpolation in terms of y' = f and u is given by

f = f_0 + u Δf_0 + [u(u - 1)/2!] Δ^2 f_0 + [u(u - 1)(u - 2)/3!] Δ^3 f_0 + ...   (3.3.1)

Here u = (x - x_0)/h.

Now, integrating (3.3.1) over the interval x_0 to x_0 + 4h, i.e., u = 0 to u = 4, we get

y_4 - y_0 = ∫ f dx = h ∫ f du = h[4f_0 + 8Δf_0 + (20/3)Δ^2 f_0 + (8/3)Δ^3 f_0 + ...]   (3.3.2)

After neglecting those terms containing Δ^4 and higher orders and substituting the differences Δf_0 = f_1 - f_0, Δ^2 f_0 = f_2 - 2f_1 + f_0, Δ^3 f_0 = f_3 - 3f_2 + 3f_1 - f_0, from (3.3.2) we get Milne's predictor formula as follows

y_4 = y_0 + (4h/3)(2f_1 - f_2 + 2f_3)   (3.3.3)

Derivation of Milne's corrector formula: To obtain the corrector formula, we integrate (3.3.1) over the interval x_0 to x_0 + 2h, i.e., u = 0 to u = 2; then we get

y_2 - y_0 = ∫ f dx = h[2f_0 + 2Δf_0 + (1/3)Δ^2 f_0 + ...]   (3.3.4)

After neglecting the higher order terms and substituting the differences as before, from (3.3.4) we get Milne's corrector formula as follows

y_2 = y_0 + (h/3)(f_0 + 4f_1 + f_2)   (3.3.5)

Generalization of Milne's predictor-corrector formulae: We can write the general form of Milne's predictor and corrector formulae according to (3.3.3) and (3.3.5) as follows

y_{n+1}^(p) = y_{n-3} + (4h/3)(2f_{n-2} - f_{n-1} + 2f_n)   (3.3.6)

y_{n+1}^(c) = y_{n-1} + (h/3)(f_{n-1} + 4f_n + f_{n+1})   (3.3.7)

Here the indices p and c indicate the predicted and corrected values of y at x = x_{n+1} respectively.

Local truncation error: The terms involving Δ^4 f_0 omitted in the above formulae are taken as the principal parts of the errors in the values of y computed from (3.3.6) and (3.3.7). It is to be noticed that the errors occurring in (3.3.6) and (3.3.7) are of opposite sign, that of the corrector being of very small magnitude. Thus we may write

y = y_{n+1}^(p) + (14/45) h^5 y^(5)(c_1)   (3.3.8)

y = y_{n+1}^(c) - (1/90) h^5 y^(5)(c_2)   (3.3.9)

Now subtracting (3.3.9) from (3.3.8) and treating y^(5) as roughly constant over the interval, we get

0 = y_{n+1}^(p) - y_{n+1}^(c) + (29/90) h^5 y^(5),   i.e.,   h^5 y^(5) = (90/29)(y_{n+1}^(c) - y_{n+1}^(p))

Here E = -(1/90) h^5 y^(5) denotes the principal part of the error in equation (3.3.7). From this we get E as follows

E = -(1/29)(y_{n+1}^(c) - y_{n+1}^(p))   (3.3.10)

Thus we can conclude that the error in (3.3.9) is 1/29 of the difference between the predicted and corrected values of y at x = x_{n+1}.

3.4 APPLICATION OF MILNE'S PREDICTOR-CORRECTOR METHOD

Solve the differential equation dy/dx = f(x, y), the four starting values y_0, y_1, y_2, y_3 at x_0, x_1, x_2, x_3 being given as initial values.

Solution: By taking the step length h, from the initial conditions we first compute f_0, f_1, f_2 and f_3.

Now, putting n = 3 in (3.3.6), we get Milne's predictor formula for y_4 as follows

y_4^(p) = y_0 + (4h/3)(2f_1 - f_2 + 2f_3)

Then, putting n = 3 in (3.3.7), we get Milne's corrector formula for y_4 as follows

y_4^(c) = y_2 + (h/3)(f_2 + 4f_3 + f_4)

Then we get the successive approximations of y_4 by the above formulae: the first iteration evaluates f_4 with the predicted value, and each later iteration re-evaluates f_4 with the latest corrected value. Since the fourth approximation for y_4 is the same as the third approximation, we take the third iterated value as the required approximation at x = x_4. The local truncation error is then given by (3.3.10).
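As a sketch, the predictor (3.3.6), the iterated corrector (3.3.7) and the error estimate (3.3.10) can be put together in Python. The test problem dy/dx = x + y, with the exact solution y = 2e^x - x - 1 supplying the four starting values, is an assumption made here for illustration only; it is not the thesis example, whose data are not reproduced.

```python
import math

def milne_step(f, xs, ys, h, tol=1e-10, max_iter=50):
    """One Milne step: predictor (3.3.6), then the corrector (3.3.7)
    iterated to convergence. xs, ys hold at least the last four points."""
    fs = [f(x, y) for x, y in zip(xs[-4:], ys[-4:])]    # f_{n-3} .. f_n
    x_next = xs[-1] + h
    # Predictor: y_{n+1} = y_{n-3} + 4h/3 (2f_{n-2} - f_{n-1} + 2f_n)
    y_pred = ys[-4] + 4 * h / 3 * (2 * fs[1] - fs[2] + 2 * fs[3])
    y_corr = y_pred
    for _ in range(max_iter):
        # Corrector: y_{n+1} = y_{n-1} + h/3 (f_{n-1} + 4f_n + f_{n+1})
        y_new = ys[-2] + h / 3 * (fs[2] + 4 * fs[3] + f(x_next, y_corr))
        if abs(y_new - y_corr) < tol:
            break
        y_corr = y_new
    err_est = (y_pred - y_corr) / 29        # corrector error, from (3.3.10)
    return x_next, y_corr, err_est

# Hypothetical test problem: dy/dx = x + y, exact y = 2e^x - x - 1.
f = lambda x, y: x + y
h = 0.1
xs = [i * h for i in range(4)]
ys = [2 * math.exp(x) - x - 1 for x in xs]  # exact starting values
x4, y4, err = milne_step(f, xs, ys, h)
print(x4, y4, 2 * math.exp(x4) - x4 - 1, err)
```

Seeding the four starting values from the exact solution is only for testing; in practice they come from a self-starting method such as Runge-Kutta.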

3.5 ADAMS-MOULTON PREDICTOR-CORRECTOR METHOD

The Adams-Moulton method is a general approach to the predictor-corrector formula, developed by using the information about a function y and its first derivative y' = f(x, y) at the past three points together with one more old value of the derivative [2,9].

Derivation of the Adams-Moulton predictor formula: The most general linear predictor formula which involves the information about the function and its derivative at the past three points, the value at the given point being computed, is

y_{n+1} = a_0 y_n + a_1 y_{n-1} + a_2 y_{n-2} + h(b_0 y'_n + b_1 y'_{n-1} + b_2 y'_{n-2} + b_3 y'_{n-3})   (3.5.1)

The above equation contains seven unknowns. Suppose it holds for polynomials up to degree four. Let the space between consecutive values of x be unity, i.e., taking h = 1.

Now putting y = 1, x, x^2, x^3 and x^4 successively in (3.5.1), we get five equations in the seven unknowns.   (3.5.2)

Taking a_1 and a_2 as parameters and solving the equations in (3.5.2), we get a_0, b_0, b_1, b_2, b_3 in terms of a_1 and a_2.   (3.5.3)

Since a_1, a_2 are arbitrary, choosing a_1 = a_2 = 0, we obtain from (3.5.3) the following

a_0 = 1,   b_0 = 55/24,   b_1 = -59/24,   b_2 = 37/24,   b_3 = -9/24

Substituting these values in (3.5.1), we get the Adams-Moulton predictor formula as follows

y_{n+1}^(p) = y_n + (h/24)(55 y'_n - 59 y'_{n-1} + 37 y'_{n-2} - 9 y'_{n-3})   (3.5.4)

Derivation of the Adams-Moulton corrector formula: The most general linear corrector formula, which also uses the derivative at the point being computed, is

y_{n+1} = a_0 y_n + a_1 y_{n-1} + a_2 y_{n-2} + h(b_{-1} y'_{n+1} + b_0 y'_n + b_1 y'_{n-1} + b_2 y'_{n-2})   (3.5.5)

The above equation again contains seven unknowns. Suppose it holds for polynomials up to degree four, and let h = 1 as before.

Now putting y = 1, x, x^2, x^3 and x^4 successively in (3.5.5), we get five equations in the seven unknowns.   (3.5.6)

Taking a_1 and a_2 as parameters and solving the equations in (3.5.6), we get the remaining coefficients in terms of a_1 and a_2.   (3.5.7)

Since a_1, a_2 are arbitrary, choosing a_1 = a_2 = 0, we obtain from (3.5.7) the following

a_0 = 1,   b_{-1} = 9/24,   b_0 = 19/24,   b_1 = -5/24,   b_2 = 1/24

Substituting these values in (3.5.5), we get the Adams-Moulton corrector formula as follows

y_{n+1}^(c) = y_n + (h/24)(9 y'_{n+1} + 19 y'_n - 5 y'_{n-1} + y'_{n-2})   (3.5.8)

We can find more predictor and corrector formulae by using suitable new values of a_1 and a_2 in the systems of equations (3.5.3) and (3.5.7).

Local truncation error of the Adams-Moulton predictor-corrector formulae: We have from Taylor's series expansion

y_{n+1} = y_n + h y'_n + (h^2/2!) y''_n + (h^3/3!) y'''_n + (h^4/4!) y''''_n + (h^5/5!) y^(5)_n + ...   (3.5.9)

Putting -h, -2h and -3h in place of h, we get the corresponding expansions of y_{n-1}, y_{n-2} and y_{n-3}.


Equation (3.5.9) can also be written for the derivative as

y'_{n+1} = y'_n + h y''_n + (h^2/2!) y'''_n + (h^3/3!) y''''_n + (h^4/4!) y^(5)_n + ...   (3.5.10)

Putting -h, -2h and -3h in place of h, we get the corresponding expansions of y'_{n-1}, y'_{n-2} and y'_{n-3}.

Now, substituting all these expansions of y'_n, y'_{n-1}, y'_{n-2}, y'_{n-3} in (3.5.4) and comparing with (3.5.9), we get

y_{n+1} - y_{n+1}^(p) = (251/720) h^5 y^(5)_n + ...   (3.5.11)

Here the truncation error is (251/720) h^5 y^(5)_n + ... . Using the first term of the above error as an estimate, the local truncation error of the Adams-Moulton predictor formula is (251/720) h^5 y^(5)(c_1).

Again, substituting all these expansions of y'_{n+1}, y'_n, y'_{n-1}, y'_{n-2} in (3.5.8) and comparing with (3.5.9), we get

y_{n+1} - y_{n+1}^(c) = -(19/720) h^5 y^(5)_n + ...

Here the truncation error is -(19/720) h^5 y^(5)_n + ... . Using the first term of the above error as an estimate, the local truncation error of the Adams-Moulton corrector formula is -(19/720) h^5 y^(5)(c_2).

Since these first terms are taken as the principal parts of the errors, we may write

y = y_{n+1}^(p) + (251/720) h^5 y^(5)   (3.5.12)

y = y_{n+1}^(c) - (19/720) h^5 y^(5)   (3.5.13)

Now subtracting (3.5.13) from (3.5.12), we get

0 = y_{n+1}^(p) - y_{n+1}^(c) + (270/720) h^5 y^(5),   i.e.,   h^5 y^(5) = (8/3)(y_{n+1}^(c) - y_{n+1}^(p))

Here E = -(19/720) h^5 y^(5) denotes the principal part of the error in equation (3.5.13). From this we get E as follows

E = -(19/270)(y_{n+1}^(c) - y_{n+1}^(p))   (3.5.14)

Thus we can conclude that the error in (3.5.13) is 19/270 of the difference between the predicted and corrected values of y at x = x_{n+1}.

3.6 APPLICATION OF ADAMS-MOULTON PREDICTOR-CORRECTOR METHOD

Solve the differential equation dy/dx = f(x, y), the four starting values y_0, y_1, y_2, y_3 at x_0, x_1, x_2, x_3 being given as initial values.

Solution: By taking the step length h, from the initial conditions we first compute f_0, f_1, f_2 and f_3.

Now, putting n = 3 in (3.5.4), we get the Adams-Moulton predictor formula for y_4 as follows

y_4^(p) = y_3 + (h/24)(55f_3 - 59f_2 + 37f_1 - 9f_0)

Then, putting n = 3 in (3.5.8), we get the Adams-Moulton corrector formula for y_4 as follows

y_4^(c) = y_3 + (h/24)(9f_4 + 19f_3 - 5f_2 + f_1)

Then we get the successive approximations of y_4 by the above formulae, re-evaluating f_4 with the latest corrected value at each iteration. Since the fourth approximation for y_4 is the same as the third approximation, we take the third iterated value as the required approximation at x = x_4. The local truncation error is then given by (3.5.14).
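The predictor (3.5.4), the iterated corrector (3.5.8) and the error estimate (3.5.14) admit the same kind of sketch. As before, the test problem dy/dx = x + y with exact starting values is an illustrative assumption, not the thesis data.

```python
import math

def adams_moulton_step(f, xs, ys, h, tol=1e-10, max_iter=50):
    """One Adams-Moulton step: predictor (3.5.4), iterated corrector (3.5.8)."""
    fs = [f(x, y) for x, y in zip(xs[-4:], ys[-4:])]    # f_{n-3} .. f_n
    x_next = xs[-1] + h
    # Predictor: y_{n+1} = y_n + h/24 (55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3})
    y_pred = ys[-1] + h / 24 * (55 * fs[3] - 59 * fs[2] + 37 * fs[1] - 9 * fs[0])
    y_corr = y_pred
    for _ in range(max_iter):
        # Corrector: y_{n+1} = y_n + h/24 (9f_{n+1} + 19f_n - 5f_{n-1} + f_{n-2})
        y_new = ys[-1] + h / 24 * (9 * f(x_next, y_corr)
                                   + 19 * fs[3] - 5 * fs[2] + fs[1])
        if abs(y_new - y_corr) < tol:
            break
        y_corr = y_new
    err_est = -19 * (y_corr - y_pred) / 270   # corrector error, from (3.5.14)
    return x_next, y_corr, err_est

# Hypothetical test problem: dy/dx = x + y, exact y = 2e^x - x - 1.
f = lambda x, y: x + y
h = 0.1
xs = [i * h for i in range(4)]
ys = [2 * math.exp(x) - x - 1 for x in xs]  # exact starting values
x4, y4, err = adams_moulton_step(f, xs, ys, h)
print(x4, y4, 2 * math.exp(x4) - x4 - 1, err)
```

The design matches Milne's method except for the integration formulae used, which is why the two sketches differ only in the coefficient lines and the error constant.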

3.7 COMMENTS ON PREDICTOR-CORRECTOR METHODS

Advantages: Predictor-corrector methods allow a different step length to be used in each evaluation and arbitrary increments in the independent variable. A main advantage of these methods is that one has to compute the right-hand side at one new grid point only per step. They have been applied successfully to stiff systems of ordinary differential equations. Since, after predicting a value of the dependent variable by the predictor formula, the corrector formula is applied several times to improve the value to the desired level of accuracy, they are suitable methods for sophisticated problems. For the multi-step system, truncation errors and round-off errors are kept in check step by step in these methods.

Disadvantages: For each step forward in the equation using the shortest step length, the coefficients would have to be recalculated, and the time taken for this might be a significant proportion of the total computing time. Also, up to 4n quantities representing the previous three step lengths and the current step length in each equation must be stored, and this, together with a longer program, represents a considerable increase in the storage space required compared with other systems.


In order to begin computation with these methods, one first has to calculate additional initial values. Because of this, these methods are not self-starting: the first few values must be computed using other formulae. Moreover, for the multi-step system this takes considerable time, which is a drawback in the modern fast world.

3.8 RUNGE-KUTTA METHOD

There are many different schemes for solving ordinary differential equations numerically, and we have already introduced some of them. Many of the more advanced techniques are complex to derive and analyze. One of the standard workhorses for solving ordinary differential equations is the Runge-Kutta method. It is to be noted that the earlier numerical methods were subsequently improved to a considerable degree, and this development led to the Runge-Kutta method, which is particularly suitable in cases where the computation of higher order derivatives is complicated. In the Runge-Kutta method the increments in the function are calculated once for all by means of a definite set of formulae. The calculation for any increment is exactly the same as for the first increment: the improved values of the independent and dependent variables are substituted in a set of recursive formulae.

Derivation of the Runge-Kutta formulae: We will derive the formulae of the Runge-Kutta method to obtain an approximate numerical solution of a first order differential equation with the initial condition y(x_0) = y_0, where it is assumed that x_0 is not a singular point and that the errors are so small that they can be neglected [1,22].

Let us take the first order differential equation

dy/dx = f(x, y)   (3.8.1)

Let h be the interval between two equidistant values of x, so that x_{n+1} = x_n + h. From Taylor's series expansion, we have

y_{n+1} = y_n + h y'_n + (h^2/2!) y''_n + (h^3/3!) y'''_n + ...   (3.8.2)

Differentiating (3.8.1) partially with respect to the variables x and y, we get

y'' = f_x + f f_y,   y''' = f_xx + 2f f_xy + f^2 f_yy + f_y (f_x + f f_y)

Let us introduce the following convenient notation

F = f_x + f f_y,   G = f_xx + 2f f_xy + f^2 f_yy

Then we get y'' = F and y''' = G + f_y F. Now putting these in (3.8.2), we obtain

y_{n+1} = y_n + h f + (h^2/2) F + (h^3/6)(G + f_y F) + ...   (3.8.3)

Now we shall develop a fourth-order formula. In order to develop the Runge-Kutta formulae, we seek the coefficients in the quantities below

k_1 = h f(x_n, y_n)
k_2 = h f(x_n + m h, y_n + m k_1)
k_3 = h f(x_n + n h, y_n + n k_2)
k_4 = h f(x_n + p h, y_n + p k_3)   (3.8.4)

Our aim is then that y_{n+1} be expressed in the form

y_{n+1} = y_n + a k_1 + b k_2 + c k_3 + d k_4   (3.8.5)

Now we may use Taylor's series expansion for two variables, for example

f(x_n + m h, y_n + m k_1) = f + m h f_x + m k_1 f_y + (1/2)(m^2 h^2 f_xx + 2 m^2 h k_1 f_xy + m^2 k_1^2 f_yy) + ...

Putting the above values of k_1, k_2, k_3, k_4 in (3.8.5) and collecting powers of h, we get an expansion of the form

y_{n+1} = y_n + (a + b + c + d) h f + (b m + c n + d p) h^2 F + (terms of order h^3 in G and f_y F) + ...   (3.8.6)

Now comparing (3.8.3) and (3.8.6) term by term, we get a system of equations, among them

a + b + c + d = 1,   b m + c n + d p = 1/2,   b m^2 + c n^2 + d p^2 = 1/3,   c m n + d n p = 1/6

Solving the above system of equations (one convenient solution), we obtain

m = n = 1/2,   p = 1,   a = d = 1/6,   b = c = 1/3

Now, putting these values in (3.8.4) and (3.8.5), we get the fourth-order Runge-Kutta formulae as follows

k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h/2, y_n + k_1/2)
k_3 = h f(x_n + h/2, y_n + k_2/2)
k_4 = h f(x_n + h, y_n + k_3)

y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)   (3.8.7)

When the initial values are x_0, y_0, the first increment in y is computed from these formulae with n = 0, and the general formulae (3.8.7) then apply to each later interval in turn. The above formulae are called the standard fourth-order Runge-Kutta formulae.

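The standard fourth-order formulae (3.8.7) translate directly into code. The sketch below applies them to an assumed test problem dy/dx = x + y, y(0) = 1 (exact solution y = 2e^x - x - 1; this choice and the function names are illustrative) and checks the expected fourth-order behaviour by halving the step length.

```python
import math

def rk4_step(f, x, y, h):
    """One step of the standard fourth-order Runge-Kutta formulae (3.8.7)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve(f, y0, h, n):
    """Integrate n steps from x = 0 with step length h."""
    x, y = 0.0, y0
    for _ in range(n):
        y = rk4_step(f, x, y, h)
        x += h
    return y

f = lambda x, y: x + y                # assumed test problem
exact = 2 * math.exp(1.0) - 2.0       # exact y(1) = 2e - 2
e1 = abs(solve(f, 1.0, 0.1, 10) - exact)
e2 = abs(solve(f, 1.0, 0.05, 20) - exact)
print("error with h = 0.1 :", e1)
print("error with h = 0.05:", e2)
print("observed order ~", math.log2(e1 / e2))   # should be close to 4
```

Halving h should shrink the global error by roughly a factor of 2^4 = 16, which is the practical meaning of the method's fourth order.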

In a similar manner we can derive the second- and third-order Runge-Kutta formulae, given as follows.

Second-order Runge-Kutta formulae:

k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h, y_n + k_1)
y_{n+1} = y_n + (1/2)(k_1 + k_2)

Third-order Runge-Kutta formulae:

k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h/2, y_n + k_1/2)
k_3 = h f(x_n + h, y_n + 2k_2 - k_1)
y_{n+1} = y_n + (1/6)(k_1 + 4k_2 + k_3)

Error estimation in the Runge-Kutta formulae: Direct estimation of the error of the higher order Runge-Kutta formulae is very complicated and time consuming; although it is possible to compute the errors, the work is laborious, involving higher order partial derivatives. Thus we first estimate the error in the second-order Runge-Kutta formulae; the errors for the higher orders can then be obtained by generalizing the computed error. We get the second-order Runge-Kutta formulae as follows

k_1 = h f(x_n, y_n),   k_2 = h f(x_n + h, y_n + k_1),   y_{n+1} = y_n + (1/2)(k_1 + k_2)   (3.8.8)

Now, the truncation error is given by the following formula

T = y(x_{n+1}) - y_{n+1}   (3.8.9)

Now expanding y(x_{n+1}) by Taylor's series, we get (in the notation of (3.8.3))

y(x_{n+1}) = y_n + h f + (h^2/2) F + (h^3/6)(G + f_y F) + ...   (3.8.10)

Using Taylor's series expansion for two variables in (3.8.8), we get

k_2 = h[f + h f_x + k_1 f_y + (1/2)(h^2 f_xx + 2h k_1 f_xy + k_1^2 f_yy) + ...] = h f + h^2 F + (h^3/2) G + ...

so that

y_{n+1} = y_n + h f + (h^2/2) F + (h^3/4) G + ...   (3.8.11)

Now, using (3.8.10) and (3.8.11) in (3.8.9), we get

T = (h^3/6)(G + f_y F) - (h^3/4) G + ... = (h^3/12)(2 f_y F - G) + ...   (3.8.12)

Thus (3.8.12) shows that the truncation error of the second-order Runge-Kutta formula is of order h^3. Similarly, we can show that the truncation errors of the third-order and fourth-order Runge-Kutta formulae are of orders h^4 and h^5 respectively. Thus, by applying Taylor's series expansion in the above manner, we get the truncation error of the nth-order Runge-Kutta formulae as

T_n = O(h^(n+1))   (3.8.13)

3.9 PHYSICAL APPLICATION OF RUNGE-KUTTA METHOD

Consider a large number of radioactive nuclei. Although the number of nuclei is discrete, we can often treat this number as a continuous variable. With this approach, the fundamental law of radioactive decay is that the rate of decay is proportional to the number of nuclei present at the decay time. Thus we can write


dN/dt = -λN   (3.9.1)

Here N is the number of nuclei and λ is the decay constant. If the half-life of the radioactive nuclei is T_1/2, then λ can be expressed as

λ = ln 2 / T_1/2   (3.9.2)

For a practical observation, let the initial number of nuclei be N_0; we have to find how many nuclei remain after a given number of days, the half-life T_1/2 in days being given.

Solution: Given that

dN/dt = f(t, N) = -λN   (3.9.3)

Here the given conditions are N = N_0 at t = 0, and (3.9.2) gives λ = ln 2 / T_1/2. Then (3.9.3) becomes

dN/dt = -(ln 2 / T_1/2) N   (3.9.4)

Now, taking a step length of h days and according to the fourth-order Runge-Kutta method, we can write

N_{n+1} = N_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)   (3.9.5)

with

k_1 = h f(t_n, N_n),   k_2 = h f(t_n + h/2, N_n + k_1/2),   k_3 = h f(t_n + h/2, N_n + k_2/2),   k_4 = h f(t_n + h, N_n + k_3)   (3.9.6)

To find N_1, for the first interval we put n = 0 in (3.9.5) and (3.9.6), evaluating k_1, k_2, k_3, k_4 from (3.9.4); so that we obtain N_1. Proceeding in exactly the same way for the second, third and fourth intervals, i.e., putting n = 1, 2, 3 in (3.9.5) and (3.9.6), we obtain N_2, N_3 and N_4 in turn.

Exact solution: We have

dN/N = -λ dt   (3.9.7)

Now integrating (3.9.7) within the limits t = 0 to t and N = N_0 to N, we get

ln N - ln N_0 = -λt   (3.9.8)

Now applying the initial condition N = N_0 at t = 0, (3.9.8) takes the following form

N = N_0 e^(-λt)   (3.9.9)

Evaluating (3.9.9) at the required day and comparing with the corresponding Runge-Kutta value, we obtain the truncation error and the relative error.
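The decay computation can be reproduced for any choice of the data. In the sketch below the half-life, initial count, step length and number of days (T_1/2 = 5 days, N_0 = 1000, h = 1, four steps) are hypothetical stand-ins, not the thesis values.

```python
import math

def rk4_step(f, t, n, h):
    """One step of the fourth-order Runge-Kutta formulae (3.9.5)-(3.9.6)."""
    k1 = h * f(t, n)
    k2 = h * f(t + h / 2, n + k1 / 2)
    k3 = h * f(t + h / 2, n + k2 / 2)
    k4 = h * f(t + h, n + k3)
    return n + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Hypothetical data (the thesis values are not reproduced here):
N0, T_HALF, H, DAYS = 1000.0, 5.0, 1.0, 4
lam = math.log(2) / T_HALF            # decay constant, from (3.9.2)
f = lambda t, n: -lam * n             # dN/dt = -lambda N, from (3.9.1)

t, n = 0.0, N0
for _ in range(DAYS):
    n = rk4_step(f, t, n, H)
    t += H
exact = N0 * math.exp(-lam * t)       # N = N0 e^(-lambda t), from (3.9.9)
print(f"RK4: {n:.4f}  exact: {exact:.4f}  "
      f"relative error: {abs(n - exact) / exact:.2e}")
```

Because f is linear in N, the per-step error here is just the gap between the RK4 growth factor and e^(-lambda h), so the relative error stays far below the data's physical uncertainty.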


Advantages of the Runge-Kutta method: The Runge-Kutta method is among the most widely used numerical methods, since it gives reliable starting values and is particularly suitable when the computation of higher order derivatives is complicated. It scores over the earlier methods in obtaining greater accuracy of the solution while avoiding the need for higher order derivatives, and it possesses the advantage of requiring only the function values at some selected points of the sub-intervals. Moreover, it is easy to change the step length for greater accuracy as needed, and no special starting procedure is necessary, which minimizes the computing time.

Disadvantages of the Runge-Kutta method: Though the Runge-Kutta method is very useful, it is also laborious. It is a lengthy process, and one needs to check back the values computed earlier. Also, the inherent error in the Runge-Kutta method is hard to estimate. Moreover, it has its limitations, being suited to solving certain types of differential equations only, and the step length is the key factor of the computation.

3.10 EXTENSIONS OF THE RUNGE-KUTTA FORMULAE

We have already discussed the second-order, third-order and standard fourth-order Runge-Kutta methods. Now we give a brief discussion of some modifications of the Runge-Kutta method [22], as given below.

Runge-Kutta-Gill method: This is a fourth-order step-by-step iteration method, a modification of the useful standard fourth-order Runge-Kutta method. The fourth-order modified Runge-Kutta formula is of the form

y_{n+1} = y_n + a k_1 + b k_2 + c k_3 + d k_4

Here the constants have to be determined by expanding both sides by Taylor's series and equating the coefficients of the powers of h up to degree four. Here again, because of the degrees of freedom, several solutions are possible. Of these solutions, the popularly used Runge-Kutta-Gill method is obtained by the following choice of the coefficients

k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h/2, y_n + k_1/2)
k_3 = h f(x_n + h/2, y_n + (-1/2 + 1/√2) k_1 + (1 - 1/√2) k_2)
k_4 = h f(x_n + h, y_n - (1/√2) k_2 + (1 + 1/√2) k_3)

This modification was introduced by Gill with the above change for the equation dy/dx = f(x, y). The next value of y is given by

y_{n+1} = y_n + (1/6)[k_1 + 2(1 - 1/√2) k_2 + 2(1 + 1/√2) k_3 + k_4]

Runge-Kutta-Merson method: This is also a fourth-order method, which involves an additional derivative calculation and provides an error estimate. The error estimate is exact if the derivative function is linear in x and y. In fact, this method even provides a technique for automatic adjustment of the step length h to ensure good convergence of the solution. The Runge-Kutta-Merson formulae are given below

k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h/3, y_n + k_1/3)
k_3 = h f(x_n + h/3, y_n + k_1/6 + k_2/6)
k_4 = h f(x_n + h/2, y_n + k_1/8 + 3k_3/8)
k_5 = h f(x_n + h, y_n + k_1/2 - 3k_3/2 + 2k_4)

y_{n+1} = y_n + (1/6)(k_1 + 4k_4 + k_5),   with error estimate E ≈ (1/30)(2k_1 - 9k_3 + 8k_4 - k_5)
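A sketch of one Merson step, returning the new value together with the built-in error estimate. The coefficients are those commonly quoted for Merson's method; treat the exact values here as an assumption rather than a transcription of the thesis.

```python
import math

def merson_step(f, x, y, h):
    """One Runge-Kutta-Merson step; returns (y_next, error_estimate)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 3, y + k1 / 3)
    k3 = h * f(x + h / 3, y + k1 / 6 + k2 / 6)
    k4 = h * f(x + h / 2, y + k1 / 8 + 3 * k3 / 8)
    k5 = h * f(x + h, y + k1 / 2 - 3 * k3 / 2 + 2 * k4)
    y_next = y + (k1 + 4 * k4 + k5) / 6
    err = (2 * k1 - 9 * k3 + 8 * k4 - k5) / 30   # per-step error estimate
    return y_next, err

# Illustrative linear problem dy/dx = -y, y(0) = 1 (exact e^-x), where the
# Merson error estimate is expected to be realistic.
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y, err = merson_step(lambda x, y: -y, x, y, h)
    x += h
print(y, math.exp(-1.0), err)
```

In an adaptive driver, err would be compared against a tolerance after each step and h halved or doubled accordingly, which is the automatic step-length adjustment mentioned above.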

Runge-Kutta-Butcher method: J. C. Butcher enhanced the order to five, so that the error term is now of sixth order in h. This method requires six functional values. Butcher showed that this method involves minimum computing time and at the same time ensures greater accuracy. With six evaluations k_1, ..., k_6 taken at the nodes 0, 1/4, 1/4, 1/2, 3/4 and 1, the new value is obtained in the following form

y_{n+1} = y_n + (1/90)(7k_1 + 32k_3 + 12k_4 + 32k_5 + 7k_6)

Kutta-Nystrom method: This is a sixth-order method, with a local error term of seventh order in h, which involves additional functional values per step.

Runge-Kutta-Fehlberg method: The Runge-Kutta-Fehlberg method is now one of the most popular modifications of the Runge-Kutta methods. Only six functional evaluations are required per step, from which both a fourth-order and a fifth-order approximation can be formed; their difference serves as an estimate of the local error. The two approximations have the following form

y_{n+1} = y_n + (25/216)k_1 + (1408/2565)k_3 + (2197/4104)k_4 - (1/5)k_5

y*_{n+1} = y_n + (16/135)k_1 + (6656/12825)k_3 + (28561/56430)k_4 - (9/50)k_5 + (2/55)k_6

3.11 GENERALIZED FORMULA FOR RUNGE-KUTTA METHODS The general

order Runge-Kutta method expressed as [3] following form ∑ ∑

Here . The co-efficient are collectively referred to as Runge-Kutta matrices. The quantities are called Runge-Kutta weights and are called RungeKutta nodes. The Runge-Kutta matrices, weights and nodes are often displays graphically as the following Runge-Kutta table.

Figure-(3.1)

3.12 COMPARISON BETWEEN PREDICTOR-CORRECTOR METHOD AND RUNGE-KUTTA METHOD

To compare the predictor-corrector method and the Runge-Kutta method, we discuss the following points [1].

1. Runge-Kutta methods are self-starting, the interval between steps may be changed at will and, in general, they are particularly straightforward to apply on a digital computer.

2. They are comparable in accuracy to corresponding-order predictor-corrector methods. However, if we do not monitor the per-step error by using additional function evaluations, we shall generally be required to choose the step size conservatively, i.e., smaller than is actually necessary to achieve the desired accuracy.

3. Further, they require a number of evaluations of f(x, y) at each step at least equal to the order of the method. As we have seen, predictor-corrector methods generally require only two evaluations per step, and evaluation of f is usually the time-consuming part of solving the initial value problem. This means that predictor-corrector methods are generally faster than Runge-Kutta methods; for example, fourth-order predictor-corrector methods are nearly twice as fast as fourth-order Runge-Kutta methods.

4. Naturally, predictor-corrector methods have the advantage that the ingredients for estimating local errors are already at hand when needed. With Runge-Kutta a separate application of the formulae must be made, as just outlined. This almost doubles the number of times that f has to be evaluated, and since this is where the major computing effort is involved, the running time may be almost doubled. On the other hand, as said before, whenever the step size is changed it will be necessary to assist a predictor-corrector method in making a restart. This means extra programming, and if frequent changes are anticipated, it may be just as well to use the Runge-Kutta method throughout.

5. Finally, monitoring the local truncation error does not involve any additional function evaluations using predictor-corrector methods, whereas it is quite expensive for Runge-Kutta methods.

Thus the self-starting characteristic of Runge-Kutta methods makes them an ideal adjunct to the usual predictor-corrector methods for starting the solution. Since they will be used for only a few steps of the computation, truncation error and instability are the key considerations. Therefore, for the above purpose, the minimum error bound Runge-Kutta methods should be used.


CHAPTER-4

SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS.


4.1 INTRODUCTION

Partial differential equations occur in many branches of applied mathematics, such as hydrodynamics, electricity, quantum mechanics and electromagnetic theory. The analytical treatment of these equations is a rather involved process and requires the application of advanced mathematical methods. On the other hand, it is generally easier to produce sufficiently approximate solutions by simple and efficient numerical methods. Several numerical methods have been proposed for the solution of partial differential equations. Among these we will only discuss the methods related to the solution of elliptic, parabolic and hyperbolic partial differential equations; i.e., in this chapter we will solve elliptic, parabolic and hyperbolic partial differential equations only.

4.2 CLASSIFICATION OF PARTIAL DIFFERENTIAL EQUATIONS

The general second-order linear partial differential equation [23] is of the form

A u_xx + B u_xy + C u_yy + D u_x + E u_y + F u + G = 0   (4.2.1)

Here A, B, C, D, E, F, G are all functions of x and y. The above equation (4.2.1) can be classified with respect to the sign of the discriminant B^2 - 4AC in the following way: if at a point in the (x, y)-plane B^2 - 4AC < 0, = 0 or > 0, then (4.2.1) is said to be of elliptic, parabolic or hyperbolic type respectively.

Many physical phenomena can be modeled mathematically by differential equations. When the function being studied involves two or more independent variables, the differential equation will usually be a partial differential equation. Since functions of several variables are intrinsically more complicated than those of a single variable, partial differential equations can lead to some of the most challenging numerical problems. In fact, their numerical solution is one type of scientific calculation in which the resources of the biggest and fastest computing systems easily become taxed. We shall see later why this is so.


Some important partial differential equations and the physical phenomena they govern are listed below.

1. The wave equation in the three spatial variables x, y, z and the time t is

u_tt = c^2 (u_xx + u_yy + u_zz)

The function u(x, y, z, t) represents the displacement at the time t of the particle whose position at rest is (x, y, z). With appropriate boundary conditions, this equation governs the vibrations of a three-dimensional elastic body.

2. The heat equation is

u_t = k (u_xx + u_yy + u_zz)

The function u represents the temperature at the time t of a particle whose co-ordinates at rest are (x, y, z).

3. Laplace's equation is

u_xx + u_yy + u_zz = 0

It governs the steady-state distribution of heat or of electric charge in a body. Laplace's equation also governs gravitational, electric and magnetic potentials, and velocity potentials in irrotational flows of incompressible fluids. In section 1.6 some special forms of Laplace's equation have been mentioned.

There are also two special classes of problems which depend upon the boundary conditions given with the partial differential equations.

1. In the Dirichlet problem, given a continuous function f on the boundary C of a region R, we have to find a function u satisfying Laplace's equation in R, i.e., to find u such that

u_xx + u_yy = 0 in R,   u = f on C

2. We have Cauchy's problem for the wave equation, for arbitrary functions φ and ψ, as

u_tt = c^2 u_xx,   u(x, 0) = φ(x),   u_t(x, 0) = ψ(x)

4.3 FINITE DIFFERENCE APPROXIMATIONS TO PARTIAL DERIVATIVES

Let the (x, y)-plane be divided into a network of rectangles of sides h and k by drawing the sets of lines x = ih and y = jk, as shown in figure-(4.1).


Figure-(4.1)

The points of intersection of these families of lines are called mesh points, lattice points or grid points. Writing u_{i,j} = u(ih, jk), we have the following approximations

u_x ≈ (u_{i+1,j} - u_{i,j}) / h   (4.3.1)
u_x ≈ (u_{i,j} - u_{i-1,j}) / h   (4.3.2)
u_x ≈ (u_{i+1,j} - u_{i-1,j}) / 2h   (4.3.3)
u_xx ≈ (u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) / h^2   (4.3.4)
u_y ≈ (u_{i,j+1} - u_{i,j}) / k   (4.3.5)
u_y ≈ (u_{i,j} - u_{i,j-1}) / k   (4.3.6)
u_y ≈ (u_{i,j+1} - u_{i,j-1}) / 2k   (4.3.7)
u_yy ≈ (u_{i,j+1} - 2u_{i,j} + u_{i,j-1}) / k^2   (4.3.8)

Replacing the derivatives in any partial differential equation by their corresponding difference approximations (4.3.1) to (4.3.8), we obtain the finite-difference analogue of the given equation.

4.4 SOLUTION OF ELLIPTIC EQUATIONS

In this section [12] we will study various techniques for solving Laplace's and Poisson's equations, which are elliptic in nature. Various physical phenomena are governed by these well-known equations. Some of them, frequently encountered in physical and engineering applications, are the steady heat equation, seepage through porous media, irrotational flow of an ideal fluid, distribution of potential, steady viscous flow, and equilibrium stresses in elastic structures.

Solution of Laplace's equation: We consider Laplace's equation in two dimensions as follows

∂²u/∂x² + ∂²u/∂y² = 0    (4.4.1)
inside a region on whose boundary the values of u are known. We take a rectangular region for which u is given on the boundary. Now, assuming that an exact sub-division of the region is possible, we divide it into a network of square meshes of side h, as shown in figure-(4.2).

Figure-(4.2)
Replacing the derivatives in (4.4.1) by their finite-difference approximations (4.3.4) and (4.3.8), taking k = h, we get

u_{i,j} = ¼ (u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1})    (4.4.2)
Equation (4.4.2) shows that the value of u at any interior mesh point is the average of its values at the four mesh points adjacent to it. Equation (4.4.2) is known as the standard five-point formula, exhibited in figure-(4.3). Laplace’s equation remains invariant when the co-ordinate axes are rotated through an angle of 45°, so the formula (4.4.2) can be re-written as
u_{i,j} = ¼ (u_{i−1,j−1} + u_{i+1,j−1} + u_{i−1,j+1} + u_{i+1,j+1})    (4.4.3)
This is similar to (4.4.2), and shows that the value of u at any interior mesh point can also be taken as the average of its values at the four neighbouring diagonal mesh points. Equation (4.4.3) is known as the diagonal five-point formula, exhibited in figure-(4.4).

Figure-(4.3)

Figure-(4.4)

Although (4.4.3) is less accurate than (4.4.2), it serves as a reasonably good approximation for obtaining starting values at the mesh points. We first use (4.4.3) to find initial values of u at interior mesh points reachable diagonally from known values; the values of u at the remaining interior mesh points are then computed by the standard five-point formula (4.4.2).

After the values of u have been determined once, their accuracy is improved by using either Jacobi’s iterative method or the Gauss-Seidel iterative method. The process is repeated until two consecutive iterations become very close, i.e. until the difference between consecutive iterates is negligibly small, so that the desired level of accuracy is achieved. The iteration formulas for Jacobi’s method and the Gauss-Seidel method are
Jacobi: u_{i,j}^(n+1) = ¼ (u_{i−1,j}^(n) + u_{i+1,j}^(n) + u_{i,j−1}^(n) + u_{i,j+1}^(n))    (4.4.4)
Gauss-Seidel: u_{i,j}^(n+1) = ¼ (u_{i−1,j}^(n+1) + u_{i+1,j}^(n) + u_{i,j−1}^(n+1) + u_{i,j+1}^(n))    (4.4.5)
Here u_{i,j}^(n) denotes the n-th iterative value of u_{i,j}, and u_{i,j}^(n+1) gives the improved value of u at the interior mesh points.
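The Gauss-Seidel sweep (4.4.5) can be sketched in a few lines of Python. The grid size, boundary values and tolerance below are illustrative assumptions, not data taken from the text.

```python
import numpy as np

def laplace_gauss_seidel(u, tol=1e-6, max_sweeps=10_000):
    """Solve Laplace's equation on a grid whose boundary entries of u are
    already set; interior entries are improved in place by formula (4.4.5)."""
    for sweep in range(max_sweeps):
        biggest_change = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                new = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
                biggest_change = max(biggest_change, abs(new - u[i, j]))
                u[i, j] = new          # latest value reused at once (Gauss-Seidel)
        if biggest_change < tol:       # two consecutive iterates very close: stop
            return u, sweep + 1
    return u, max_sweeps

# Illustrative boundary data: u = 0 on three sides, u = 100 on the top edge
u = np.zeros((6, 6))
u[0, :] = 100.0
u, sweeps = laplace_gauss_seidel(u)
```

At convergence every interior value is (to within the tolerance) the average of its four neighbours, exactly as (4.4.2) requires.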

The Gauss-Seidel iteration formula uses the latest iterative values available and scans the mesh points systematically from left to right along successive rows. The Gauss-Seidel method is simple and well suited to computer calculation. Jacobi’s method works in the same way but is slow; the working is similar but lengthier. It can be shown that the Gauss-Seidel scheme converges twice as fast as Jacobi’s method. Solution of Poisson’s equation: We consider Poisson’s equation in two dimensions as follows


∂²u/∂x² + ∂²u/∂y² = f(x, y)    (4.4.6)
The method of solving (4.4.6) is similar to that of Laplace’s equation (4.4.1). Here the standard five-point formula for (4.4.6) takes the form
u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1} − 4u_{i,j} = h² f(ih, jh)    (4.4.7)
Using (4.4.7) at each interior mesh point, we arrive at a system of linear equations in the nodal values u_{i,j}, which can be solved by the Gauss-Seidel method. The error in replacing ∂²u/∂x² by its finite difference approximation is of order O(h²); since k = h, the error in replacing ∂²u/∂y² is also of order O(h²). Thus the error in solving Laplace’s equation or Poisson’s equation by the finite difference method is of order O(h²).
Solution of elliptic equations by the relaxation method: Let us consider Laplace’s equation in two dimensions,
∂²u/∂x² + ∂²u/∂y² = 0    (4.4.8)
We take a square region and divide it into a square net of mesh size h. Let the value of u at a typical interior point P be u₀, and its values at the four points adjacent to P be u₁, u₂, u₃, u₄ respectively, as shown in figure-(4.5).

Figure-(4.5)
Then, by the standard five-point discretisation, if (4.4.8) is satisfied at P we have u₁ + u₂ + u₃ + u₄ − 4u₀ = 0. Let r₀ be the residual at the mesh point P; then we have
r₀ = u₁ + u₂ + u₃ + u₄ − 4u₀    (4.4.9)
and similarly, the residual at any interior mesh point (i, j) is
r_{i,j} = u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1} − 4u_{i,j}    (4.4.10)


This is a continuous process. The main aim of the method is to reduce all the residuals to zero, making them as small as possible step by step. Thus we adjust the value of u at an internal mesh point so as to make the residual there zero. When the value of u is changed at a mesh point, the values of the residuals at the neighbouring interior points also change. If u₀ is given an increment of 1, then (i) equation (4.4.9) shows that r₀ is changed by −4, and (ii) equation (4.4.10) shows that the residual at each adjacent point is changed by +1. The relaxation pattern is shown in figure-(4.6).

Figure-(4.6)
In general, equation (4.4.5) of the Gauss-Seidel formula can be written as
u_{i,j}^(n+1) = u_{i,j}^(n) + ¼ (u_{i−1,j}^(n+1) + u_{i+1,j}^(n) + u_{i,j−1}^(n+1) + u_{i,j+1}^(n) − 4u_{i,j}^(n))    (4.4.11)
This shows that one quarter of the residual is the change made in the value of u_{i,j} during one Gauss-Seidel iteration. In the successive over-relaxation (SOR) method, a larger change than this is given to u_{i,j}, and the iteration formula is written as
u_{i,j}^(n+1) = u_{i,j}^(n) + (ω/4) (u_{i−1,j}^(n+1) + u_{i+1,j}^(n) + u_{i,j−1}^(n+1) + u_{i,j+1}^(n) − 4u_{i,j}^(n))    (4.4.12)
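A sketch of the SOR update (4.4.12) in Python. The grid, boundary data and the value ω = 1.25 are illustrative assumptions; writing the update in residual form makes the relation to (4.4.11) explicit, since ω = 1 recovers plain Gauss-Seidel.

```python
import numpy as np

def laplace_sor(u, omega=1.25, tol=1e-6, max_sweeps=10_000):
    """Successive over-relaxation for Laplace's equation, formula (4.4.12):
    each interior value is moved by omega/4 times its current residual."""
    for sweep in range(max_sweeps):
        biggest = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                residual = (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                            - 4.0 * u[i, j])
                change = 0.25 * omega * residual
                u[i, j] += change
                biggest = max(biggest, abs(change))
        if biggest < tol:
            return u, sweep + 1
    return u, max_sweeps

u0 = np.zeros((6, 6)); u0[0, :] = 100.0      # illustrative boundary values
u_sor, n_sor = laplace_sor(u0.copy(), omega=1.25)
u_gs,  n_gs  = laplace_sor(u0.copy(), omega=1.0)   # omega = 1: Gauss-Seidel
```

With these illustrative data, over-relaxation needs noticeably fewer sweeps than ω = 1, in line with the remarks on the accelerating factor below.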

The rate of convergence of (4.4.12) depends on the choice of ω, which is called the accelerating (over-relaxation) factor and lies between 1 and 2. In general it is difficult to estimate the best value of ω. To solve an elliptic equation by the relaxation method, we follow the algorithm below.
1. Write down, by trial, the initial values of u at the interior mesh points, using (4.4.3).
2. Calculate the residuals at each of these points by (4.4.9). When this formula is applied at a point near the boundary, one or more terms are chopped off, since there are no residuals at boundary points.


3. Write the residual at a mesh point on the right of that point, and the value of u on its left.
4. Obtain the solution by reducing the residuals to zero one by one, giving suitable increments to u and using the relaxation pattern of figure-(4.6). At each step we reduce the numerically largest residual to zero, recording the increment of u on the left and the modified residuals on the right.
5. When a round of relaxation is completed, the value of u and its increments are added at each point. Using these values, all the residuals are calculated afresh. If some of the recalculated residuals are large, we liquidate these again.
6. Stop the relaxation process when the current values of the residuals are quite small. The current value of u at each of the nodes then gives the solution.
4.5 APPLICATIONS OF SOLVING ELLIPTIC EQUATION
Application-1: Given the values of u on the boundary of the square in figure-(4.7), evaluate the function u satisfying Laplace’s equation at the pivotal points of the figure by (i) Jacobi’s method and (ii) the Gauss-Seidel method.

Figure-(4.7)
Solution: We know the standard five-point formula and the diagonal five-point formula are
u_{i,j} = ¼ (u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1})    (4.5.1)
u_{i,j} = ¼ (u_{i−1,j−1} + u_{i+1,j−1} + u_{i−1,j+1} + u_{i+1,j+1})    (4.5.2)
Using the above formulae with the given boundary values, we obtain starting values at the interior mesh points.


Here the central starting value has been determined by using the diagonal formula (4.5.2), and the remaining starting values by using the standard formula (4.5.1).
(i) Using Jacobi’s formula: Starting from these values, we apply the Jacobi iteration formula (4.4.4) repeatedly, obtaining successively improved values at the interior mesh points through the first to the eighth iterations. Since the eighth iteration is very close to the seventh, we can conclude that the eighth-iteration values give the required solution.

(ii) Using the Gauss-Seidel formula: Starting from the same values, we apply the Gauss-Seidel iteration formula (4.4.5), which uses the latest available values within each sweep, through the first to the sixth iterations. Since the sixth iteration is very close to the fifth, we can conclude that the sixth-iteration values give the required solution; note that the Gauss-Seidel scheme required fewer iterations than Jacobi’s method.

Application-2: Apply the relaxation method to solve Laplace’s equation inside the square shown in figure-(4.8), with the values of u given on the boundary.
Figure-(4.8)
Solution: Using (4.4.9) with the assumed interior values, we compute the residual at each interior mesh point. We then liquidate the residuals in the following way.
1. The numerically largest residual is liquidated first: we increase u at that point by one quarter of the residual, so that the residual there becomes zero, while the residuals at the neighbouring nodes each increase by the same increment.
2. The next numerically largest residual is reduced to zero in the same way, the residuals at the adjacent nodes being increased accordingly.
3. The process is repeated on whichever residual is currently the numerically largest.
4. When the numerically largest current residual is negligibly small, we stop the relaxation process.
Thus the final values of u at the different mesh points are obtained as the sum of the initial values and all recorded increments.

4.6 SOLUTION OF PARABOLIC EQUATIONS
In this section [12] we will consider a model problem of modest scope to introduce some of the essential ideas. For technical reasons, the problem is said to be of parabolic type. Solution of the one-dimensional heat equation: Let us consider the one-dimensional heat equation
∂u/∂t = α ∂²u/∂x²    (4.6.1)
Here α = κ/(ρc) is the diffusivity of the substance, where κ is the thermal conductivity, ρ is the density and c is the specific heat of the substance.
We can solve (4.6.1) by the Schmidt method, the Crank-Nicolson method and an iterative method. These methods are described below.
Schmidt method: We consider a rectangular mesh in the x–t plane with spacing h along x and k along t. Denoting the mesh point (ih, jk) simply as (i, j), we have
∂u/∂t ≈ (u_{i,j+1} − u_{i,j})/k,  ∂²u/∂x² ≈ (u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h².
Using these in (4.6.1), we obtain


u_{i,j+1} = λ u_{i−1,j} + (1 − 2λ) u_{i,j} + λ u_{i+1,j},  where λ = αk/h²    (4.6.2)
The relation (4.6.2) connects the function values at the two time levels j and j + 1 and is hence called a two-level formula. It enables us to determine the value of u at the mesh point (i, j + 1) in terms of the known function values at the points (i − 1, j), (i, j) and (i + 1, j) at the instant jk. The schematic form of (4.6.2) is shown in figure-(4.9).
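A minimal sketch of the explicit scheme (4.6.2) in Python; the initial profile, the value of λ and the number of steps are illustrative assumptions.

```python
def schmidt_step(row, lam):
    """One time level of the Schmidt explicit formula (4.6.2):
    u[i, j+1] = lam*u[i-1, j] + (1 - 2*lam)*u[i, j] + lam*u[i+1, j].
    Boundary values (first and last entries) are kept fixed here."""
    new = row[:]
    for i in range(1, len(row) - 1):
        new[i] = lam * row[i-1] + (1.0 - 2.0 * lam) * row[i] + lam * row[i+1]
    return new

# Illustrative data: u(x, 0) on a 5-point grid with zero boundary values
row = [0.0, 1.0, 2.0, 1.0, 0.0]
level1 = schmidt_step(row, 0.5)     # lam = 1/2 gives the Bender-Schmidt rule
level2 = schmidt_step(level1, 0.5)
```

With λ = ½ each new interior value is simply the mean of its two horizontal neighbours, which is the Bender-Schmidt recurrence discussed next.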

Figure-(4.9)
The formula (4.6.2) is called the Schmidt explicit formula, and it is valid only for 0 < λ ≤ ½. In particular, when λ = ½, equation (4.6.2) reduces to
u_{i,j+1} = ½ (u_{i−1,j} + u_{i+1,j})    (4.6.3)
This shows that the value of u at x_i at time t_{j+1} is the mean of the values of u at x_{i−1} and x_{i+1} at time t_j. This relation (4.6.3), known as the Bender-Schmidt recurrence relation, gives the values of u at the internal points with the help of the boundary conditions.
Crank-Nicolson method: Crank and Nicolson proposed a method in which ∂²u/∂x² is replaced by the average of its finite difference approximations on the j-th and (j + 1)-th rows. Thus we have
∂²u/∂x² ≈ ½ [ (u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h² + (u_{i−1,j+1} − 2u_{i,j+1} + u_{i+1,j+1})/h² ]
Hence, (4.6.1) reduces to

Study On Different Numerical Methods For Solving Differential Equations. [Page 77]

Chapter-4: Solution Of Partial Differential Equations.

−λ u_{i−1,j+1} + (2 + 2λ) u_{i,j+1} − λ u_{i+1,j+1} = λ u_{i−1,j} + (2 − 2λ) u_{i,j} + λ u_{i+1,j}    (4.6.4)
On the left-hand side of (4.6.4) there are three unknowns, while on the right-hand side all three quantities are known. The implicit scheme (4.6.4) is called the Crank-Nicolson formula, and it is convergent for all values of λ. If there are n internal mesh points on each row, then formula (4.6.4) gives n simultaneous equations for the n unknowns in terms of the given boundary values; in this way the internal mesh points on all rows can be calculated row by row. The computational model of this method is shown in figure-(4.10).
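The tridiagonal system (4.6.4) for one time step can be sketched as follows. The grid, the value of λ and the initial data are illustrative assumptions, and for brevity the system is solved with a dense solver rather than a specialised tridiagonal routine.

```python
import numpy as np

def crank_nicolson_step(row, lam):
    """Advance one time level by the Crank-Nicolson formula (4.6.4):
    -lam*u[i-1,j+1] + (2+2*lam)*u[i,j+1] - lam*u[i+1,j+1]
        = lam*u[i-1,j] + (2-2*lam)*u[i,j] + lam*u[i+1,j],
    with fixed zero boundary values at both ends."""
    n = len(row) - 2                      # number of interior unknowns
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n):
        i = k + 1
        A[k, k] = 2.0 + 2.0 * lam
        if k > 0:     A[k, k-1] = -lam
        if k < n - 1: A[k, k+1] = -lam
        b[k] = lam * row[i-1] + (2.0 - 2.0 * lam) * row[i] + lam * row[i+1]
    interior = np.linalg.solve(A, b)
    return [0.0] + list(interior) + [0.0]

row = [0.0, 1.0, 2.0, 1.0, 0.0]         # illustrative initial level
level1 = crank_nicolson_step(row, 1.0)  # the scheme is stable even for lam = 1
```

Taking λ = 1 here deliberately exceeds the explicit limit λ ≤ ½ of (4.6.2), illustrating that the implicit scheme remains usable.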

Figure-(4.10)
Iterative method: Starting from the Crank-Nicolson scheme, we develop an iterative method for solving (4.6.1). In the Crank-Nicolson method the partial differential equation (4.6.1) is replaced by the finite difference equation
−λ u_{i−1,j+1} + (2 + 2λ) u_{i,j+1} − λ u_{i+1,j+1} = λ u_{i−1,j} + (2 − 2λ) u_{i,j} + λ u_{i+1,j}    (4.6.5)
In (4.6.5) the unknowns are the values on the (j + 1)-th row; all the quantities on the right-hand side are known, since they were already computed at the previous step. Hence, dropping the row index j + 1 from the unknowns, writing u_i for u_{i,j+1} and setting
b_i = λ u_{i−1,j} + (2 − 2λ) u_{i,j} + λ u_{i+1,j}    (4.6.6)
(4.6.5) can be written as
(2 + 2λ) u_i = λ (u_{i−1} + u_{i+1}) + b_i    (4.6.7)
From (4.6.7) we obtain the iteration formula
u_i^(n+1) = [λ/(2 + 2λ)] (u_{i−1}^(n) + u_{i+1}^(n)) + b_i/(2 + 2λ)    (4.6.8)


This expresses the (n + 1)-th iterate in terms of the n-th iterate and is known as Jacobi’s iteration formula. It can be seen from (4.6.8) that, at the time of computing u_i^(n+1), the latest value of u_{i−1}, namely u_{i−1}^(n+1), is already available. Hence the convergence of Jacobi’s iteration formula can be improved by replacing u_{i−1}^(n) in (4.6.8) by its latest available value. Accordingly, we obtain the Gauss-Seidel iteration formula
u_i^(n+1) = [λ/(2 + 2λ)] (u_{i−1}^(n+1) + u_{i+1}^(n)) + b_i/(2 + 2λ)    (4.6.9)
It can be shown that the Gauss-Seidel scheme (4.6.9) converges for all finite values of λ and that it converges twice as fast as the Jacobi scheme (4.6.8).
4.7 APPLICATION OF SOLVING PARABOLIC EQUATION
Solve the one-dimensional heat equation (4.6.1) by using (i) the Schmidt method and (ii) the Crank-Nicolson method, subject to the given initial and boundary conditions, carrying out the computations for two time levels with the given spacings h and k.
Solution:

Here λ = αk/h² is computed from the given spacings. Also, all boundary values are zero, as shown in figure-(4.11) below.

Figure-(4.11)
(i) The Schmidt formula (4.6.2) in this case becomes
u_{i,j+1} = λ u_{i−1,j} + (1 − 2λ) u_{i,j} + λ u_{i+1,j}
For j = 0 we compute the first time level from the given initial values, and for j = 1 the second level from the first. Thus, by the Schmidt scheme, we obtain the mesh values at the two required time levels.

(ii) The Crank-Nicolson formula (4.6.4) in this case becomes
−λ u_{i−1,j+1} + (2 + 2λ) u_{i,j+1} − λ u_{i+1,j+1} = λ u_{i−1,j} + (2 − 2λ) u_{i,j} + λ u_{i+1,j}
For j = 0 this gives a pair of simultaneous equations, (4.7.1) and (4.7.2), in the unknown interior values at the first time level; solving them gives the first level. For j = 1 we similarly obtain equations (4.7.3) and (4.7.4), whose solution gives the second level. Thus, by the Crank-Nicolson scheme, we obtain the mesh values at the two required time levels.

4.8 SOLUTION OF HYPERBOLIC EQUATIONS
The wave equation is the simplest example of a hyperbolic partial differential equation. Its solution is the displacement function u(x, t), defined for values of x and t, satisfying given initial and boundary conditions [12]. Such equations arise from convective types of problems in vibrations, wave mechanics, gas dynamics, elasticity, electromagnetism and seismology. Solution of the wave equation (vibration of a stretched string): We consider the boundary value problem that models the transverse vibrations of a stretched string:
∂²u/∂t² = c² ∂²u/∂x²,  0 < x < l, t > 0    (4.8.1)
subject to the boundary conditions
u(0, t) = 0,  u(l, t) = 0    (4.8.2)
and the initial conditions
u(x, 0) = f(x),  ∂u/∂t (x, 0) = g(x)    (4.8.3)
Using the central difference approximations
∂²u/∂x² ≈ (u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h²,  ∂²u/∂t² ≈ (u_{i,j−1} − 2u_{i,j} + u_{i,j+1})/k²
and putting them in (4.8.1), we get, with λ = k/h,
u_{i,j+1} = 2(1 − λ²c²) u_{i,j} + λ²c² (u_{i−1,j} + u_{i+1,j}) − u_{i,j−1}    (4.8.4)


The formula (4.8.4) shows that the function values at the j-th and (j − 1)-th time levels are required in order to determine those at the (j + 1)-th level. Such difference schemes are called three-level difference schemes, in contrast to the two-level schemes derived in the parabolic case. By expanding the terms in (4.8.4) in Taylor’s series and simplifying, it can be shown that the truncation error of (4.8.4) is of order O(h² + k²), and that the formula holds well provided cλ = ck/h ≤ 1, which is the condition for stability. There also exist implicit finite difference schemes for (4.8.1) which hold well for all values of cλ; two such schemes, (4.8.5) and (4.8.6), are obtained by taking averages of the finite difference approximations over different rows.

4.9 APPLICATION OF SOLVING HYPERBOLIC EQUATION
Solve the wave equation (4.8.1) for 0 ≤ x ≤ 5 by taking h = 1, subject to the boundary conditions u(0, t) = u(5, t) = 0, the initial velocity ∂u/∂t (x, 0) = 0, and the initial values u(x, 0) = 0, 4, 12, 18, 16, 0 at x = 0, 1, 2, 3, 4, 5.
Solution: Choosing k so that the coefficient of u_{i,j} in (4.8.4) vanishes, i.e. λ²c² = 1, the difference equation (4.8.4) reduces to (4.9.1), which gives the convenient recurrence
u_{i,j+1} = u_{i−1,j} + u_{i+1,j} − u_{i,j−1}    (4.9.2)
Given u(x, 0), the first row of values is u_{0,0}, …, u_{5,0} = 0, 4, 12, 18, 16, 0. Also, the boundary conditions u(0, t) = u(5, t) = 0 make the first and last entries of every row zero. Finally, the condition ∂u/∂t (x, 0) = 0 is used to fill the second row, which here repeats the initial row. Putting j = 1, 2, 3, 4 successively in (4.9.2), and i = 1, 2, 3, 4 within each row, we obtain the values at the interior mesh points row by row.
Thus the required values of u_{i,j} can be shown in the following table.


 j \ i :    0     1     2     3     4     5
 j = 0 :    0     4    12    18    16     0
 j = 1 :    0     4    12    18    16     0
 j = 2 :    0     8    10    10     2     0
 j = 3 :    0     6     6    -6    -6     0
 j = 4 :    0    -2   -10   -10    -8     0
 j = 5 :    0   -16   -18   -12    -4     0
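The tabulated values can be reproduced with a few lines of Python implementing recurrence (4.9.2). The initial row 0, 4, 12, 18, 16, 0 and the zero boundary columns are taken from the table itself; as in the table, the second time level is taken equal to the initial row, consistent with ∂u/∂t = 0 at t = 0 to first order.

```python
def wave_table(initial, levels):
    """Build successive time levels by u[i,j+1] = u[i-1,j] + u[i+1,j] - u[i,j-1]
    (recurrence (4.9.2)), with zero boundary values and the second level
    repeated from the initial row, as in the table above."""
    rows = [initial[:], initial[:]]          # levels j = 0 and j = 1
    for _ in range(levels - 2):
        prev2, prev = rows[-2], rows[-1]
        new = [0] * len(initial)             # boundary entries stay zero
        for i in range(1, len(initial) - 1):
            new[i] = prev[i-1] + prev[i+1] - prev2[i]
        rows.append(new)
    return rows

table = wave_table([0, 4, 12, 18, 16, 0], 6)
```

Each row of `table` matches the corresponding row j = 0, …, 5 of the table above.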

4.10 COMPARISON BETWEEN ITERATIVE METHOD AND RELAXATION METHOD
The method of iteration and the method of relaxation are both methods for solving partial differential equations with given boundary values [11]. Although they reach the desired solution by different processes, both methods are of the same inherent accuracy. Their points of similarity and dissimilarity are given below.
1. Both methods require that the bounded region be divided into a network of squares or other similar polygons.
2. Both methods require that the boundary values be written down and that rough values of the function be computed, estimated or assumed for all interior points of the network.
3. In order to start a computation, the iteration method assumes that a functional value at any mesh point satisfies the given difference equation, and thereby derives the relation which must exist between that functional value and the adjacent functional values. The process of iteration is then applied until the required relation is satisfied. The relaxation method, on the other hand, recognizes at the start that an assumed functional value at any mesh point will not satisfy the given difference equation, but that there will be a residual at that point. The residuals are computed for all points before the relaxation process is started.
4. The iteration process is slow, sure and frequently long. The relaxation process is more rapid, less certain and usually reasonably short. Convergence is rapid with both methods at first, but becomes slow with both long before the end is reached.
5. The arithmetic operations are easier and shorter with the method of relaxation, but the mental effort necessary to avoid mistakes is much greater than with the iteration method.
6. The greatest drawback of the method of iteration is its length; the greatest drawback of the method of relaxation is its liability to errors of computation. Such errors can be kept out only by extreme care and unceasing vigilance on the part of the computer.


7. Computational errors in the method of iteration are immediately evident and are self-correcting. In the method of relaxation, any errors in the functional values remain hidden and can be brought to light only by application of formula (4.4.9). For this reason, all the interior net-point values should be checked by (4.4.9) several times during a long computation. Such checking takes time and keeps the relaxation process from being as short as it might at first appear.
8. In the iteration process attention is always fixed on the functional values at the lattice points, whereas in the relaxation process attention is always centred on the residuals at those points.
Thus, if anyone solves a problem of moderate length by both the iteration method and the relaxation method, he can decide for himself which method is preferable in his case.
4.11 THE RAYLEIGH-RITZ METHOD
Introduction: The Rayleigh-Ritz method of solving boundary value problems is entirely different from either of the methods considered in the previous sections [11]. It is not based on difference equations and does not employ them. In finding the solution of a physical problem by this method, one assumes that the solution can be represented by a linear combination of simple and easily calculated functions, each of which satisfies the given boundary conditions. After the problem has been formulated as the definite integral of the algebraic sum of two or more homogeneous, positive definite quadratic forms, or as the quotient of two such integrals, the desired unknown function is replaced in the integrals by the assumed linear combination. Then the integral, or the quotient of the integrals, is minimized with respect to each of the arbitrary constants occurring in the linear combination.
This method is direct and short if only approximate results are desired; but if results of high accuracy are required, it is quite laborious, and the labor — mostly long and tedious algebraic manipulation — cannot be appreciably lessened by mechanical aids. A special and simple form of the Rayleigh-Ritz method was first used by Lord Rayleigh for finding the fundamental vibration period of an elastic body. It was later extended and generalized, and its convergence proved, by W. Ritz.

Figure-(4.12)
Vibration of a rectangular membrane: Consider a thin elastic membrane of rectangular form with sides a and b, as shown in figure-(4.12), such as a very thin sheet of rubber, and assume that the membrane is fastened at the edge while tightly stretched [11]. Take a set of three mutually perpendicular axes, with the xy-plane coinciding with the membrane and the z-axis perpendicular to it. If an interior region of the membrane is pulled or pushed in a direction at right angles to its plane of equilibrium (the xy-plane), it becomes distorted into a curved surface whose area is, denoting the deflection by w,
∫∫ √(1 + w_x² + w_y²) dx dy ≈ ∫∫ [1 + ½ (w_x² + w_y²)] dx dy
Since the distortion is small, the increase in area of the membrane due to the distortion is therefore
ΔA = ½ ∫∫ (w_x² + w_y²) dx dy
Let T denote the tension on a unit length of the boundary of the membrane, the direction of T being perpendicular to the edge of the boundary. The work done in deflecting the membrane is obtained as follows; consider a rectangular region as in figure-(4.13).
Figure-(4.13)
First let one pair of opposite sides be fixed and let the membrane be stretched perpendicular to them with the tension T per unit length of border, doing work equal to T times the elongation in that direction. Treating the second direction in the same way and adding, the total work done in stretching the membrane, whose area thereby increases by ΔA, is
W = T ΔA = ½ T ∫∫ (w_x² + w_y²) dx dy


Now the potential energy in the deflected position is equal to the work done in producing the deflection. Since the deflection is small, the tension T remains practically constant. Hence the potential energy of the membrane in a deflected position is
V = ½ T ∫∫ (w_x² + w_y²) dx dy
Because of the elasticity of the membrane, the deflection at any point is proportional to the force applied, and the motion is thus simple harmonic; hence the deflection is a periodic function of the time, w = u(x, y) cos ωt. On substituting this value of w in the above expression for the potential energy, we get
V = ½ T cos² ωt ∫∫ (u_x² + u_y²) dx dy
The maximum value of V is obtained when cos² ωt = 1; then
V_max = ½ T ∫∫ (u_x² + u_y²) dx dy
For an elementary mass ρ dx dy, where ρ denotes the mass of a unit area of the membrane, the kinetic energy is ½ ρ w_t² dx dy. The kinetic energy of the entire vibrating membrane is therefore
K = ½ ρ ω² sin² ωt ∫∫ u² dx dy
The maximum value of K is obtained when sin² ωt = 1; then
K_max = ½ ρ ω² ∫∫ u² dx dy
Since there is assumed to be no loss of energy due to the vibration, the maximum potential energy is equal to the maximum kinetic energy, and thus we have
½ T ∫∫ (u_x² + u_y²) dx dy = ½ ρ ω² ∫∫ u² dx dy
ω² = (T/ρ) · [∫∫ (u_x² + u_y²) dx dy] / [∫∫ u² dx dy]    (4.11.1)

We must now assume for u a linear combination of simple functions which will satisfy the boundary conditions of the problem. Such a function is (4.11.2)


In order to make the convergence as rapid as possible, however, we move the origin to the centre of the rectangle. Then, because of the symmetry, only even powers of the co-ordinates need be retained, and we may write the trial function in the form (4.11.3). Assuming that u in (4.11.1) has been replaced by (4.11.2) or (4.11.3) above, we must determine the coefficients a₁, a₂, …, aₙ so as to make ω² a minimum; hence the derivative of the right member of (4.11.1) with respect to each coefficient must be zero. Writing
N = ∫∫ (u_x² + u_y²) dx dy,  D = ∫∫ u² dx dy,
so that ω² = (T/ρ) N/D by (4.11.1), the rule for differentiating a quotient gives
(D ∂N/∂aᵢ − N ∂D/∂aᵢ)/D² = 0    (4.11.4)
From (4.11.1) we get
N = (ρω²/T) D    (4.11.5)
Substituting (4.11.5) in (4.11.4), we get
D ∂N/∂aᵢ − (ρω²/T) D ∂D/∂aᵢ = 0    (4.11.6)
Now taking out the common factor D and putting k² = ρω²/T, we get, for i = 1, 2, …, n,
∂N/∂aᵢ − k² ∂D/∂aᵢ = 0    (4.11.7)
The formula (4.11.7) gives n homogeneous equations for determining the ratios of a₁, …, aₙ and the quantity k². If the form (4.11.3) is used for u, the limits of integration in (4.11.7) will be symmetric about the origin. To get a first approximation to the vibration frequency of the membrane, we take only the first term of the parenthetic polynomial in (4.11.3); then we get


Hence, for this one-term trial function we evaluate
∫∫ (u_x² + u_y²) dx dy  and  ∫∫ u² dx dy
and, on substituting these in (4.11.7), we obtain a single equation for k². Solving it for k, the frequency is therefore
ω = k √(T/ρ)

This is an approximate natural vibration frequency of the membrane. For comparison, the vibration frequencies found by the classical method of separating the variables are given by the formula
ω_{mn} = π √(T/ρ) √(m²/a² + n²/b²),  m, n = 1, 2, 3, …
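As a numerical illustration of how close a one-term Rayleigh-Ritz estimate comes to the exact fundamental frequency, here is a small Python sketch. The trial function (1 − x²)(1 − y²), the square [−1, 1] × [−1, 1] and the normalisation T = ρ = 1 are illustrative assumptions, not the thesis’ own data; the integrals are polynomial and are evaluated exactly.

```python
from fractions import Fraction
from math import pi, sqrt

def integrate_poly(coeffs, a=-1, b=1):
    """Exactly integrate a polynomial given as {power: coefficient} over [a, b]."""
    total = Fraction(0)
    for p, c in coeffs.items():
        total += Fraction(c) * (Fraction(b)**(p + 1) - Fraction(a)**(p + 1)) / (p + 1)
    return total

# One-term trial function u = (1 - x^2)(1 - y^2), vanishing on the boundary.
I_sq  = integrate_poly({0: 1, 2: -2, 4: 1})   # ∫ (1 - t^2)^2 dt = 16/15
I_dsq = integrate_poly({2: 4})                # ∫ (2t)^2 dt      = 8/3
N = 2 * I_dsq * I_sq        # ∫∫ (u_x^2 + u_y^2), by x-y symmetry
D = I_sq * I_sq             # ∫∫ u^2
omega_ritz  = sqrt(N / D)   # Rayleigh-Ritz estimate of the frequency (T = rho = 1)
omega_exact = pi / sqrt(2)  # exact fundamental frequency of this square membrane
```

The Rayleigh quotient N/D works out to exactly 5, so ω ≈ 2.236 against the exact 2.221 — an error below one per cent, illustrating the closeness discussed in the text.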

For m = n = 1 this formula becomes
ω₁₁ = π √(T/ρ) √(1/a² + 1/b²)
which is very close to the frequency of the membrane found above. Thus we can conclude that the Rayleigh-Ritz method gives a close approximation to the exact value.
4.12 COMPARATIVE DISCUSSION OF THE RAYLEIGH-RITZ METHOD WITH ITERATION METHOD AND RELAXATION METHOD.
Three numerical methods [11] for solving partial differential equations with certain conditions in two dimensions have been considered in the current chapter. Each method has its advantages and disadvantages. The iteration method is slow, self-correcting and well adapted to use with an automatic sequence-controlled calculating machine; the arithmetical operations are short and simple. The relaxation method is faster and more flexible than the iteration method. Its arithmetical operations are simple, but mistakes are easy to make and are not self-correcting; it requires constant vigilance and alertness on the part of the computer, and it is not adapted to use by an automatic calculating machine. The Rayleigh-Ritz method is of considerable value in handling problems of equilibrium and elastic vibrations. It does not require a partial differential equation to start with, but it requires that a physical problem be reduced to the definite integral of a sum, difference or quotient of two or more homogeneous, positive definite quadratic forms. The method furnishes a short and easy way of finding a good approximation to the natural vibration period of an elastic body, the deflection of a membrane, etc. The chief disadvantage of this method is the laborious algebra involved in getting results of high accuracy. It is an easy matter to estimate the accuracy of results obtained by the iteration method and the relaxation method, but this is not the case with the Rayleigh-Ritz method: no simple and useful formula for estimating the inherent error involved in this method has yet been devised. Finally, it must be realized that not all three methods may be applicable to a given problem.
To use the iteration method and the relaxation method, a physical problem must first be set up as a partial differential equation, and this must then be converted to a partial difference equation. The Rayleigh-Ritz method will give an approximate solution of a problem without setting up a partial differential equation, as was done in the case of the vibrating membrane. In problems where all three methods are applicable, the Rayleigh-Ritz method would probably be the third choice.


CHAPTER-5

SOLUTION OF BOUNDARY VALUE PROBLEMS WITH APPLICATIONS.


CHAPTER-5 SOLUTION OF THE BOUNDARY VALUE PROBLEMS WITH APPLICATIONS.
5.1 INTRODUCTION
In the previous chapters we have discussed some well-known methods for solving differential equations satisfying certain initial conditions; such problems, in which the conditions are given at a single point, are called initial value problems. In this chapter we will discuss problems in which the conditions are imposed at more than one point; these are known as boundary value problems, and we will discuss some methods for their solution. Simple examples of two-point linear boundary value problems [23] are the second-order equation
y″(x) + f(x) y′(x) + g(x) y(x) = r(x),  a ≤ x ≤ b    (5.1.1)
with the boundary conditions
y(a) = A,  y(b) = B    (5.1.2)
and the fourth-order equation
y⁗(x) + f(x) y(x) = r(x),  a ≤ x ≤ b    (5.1.3)
with the boundary conditions
y(a) = A₁,  y′(a) = A₂,  y(b) = B₁,  y′(b) = B₂    (5.1.4)
There exist many numerical methods for solving such boundary value problems. Among them we will discuss only the finite-difference method and the shooting method. We will also discuss the applications of Green’s function and Laplace’s equation to boundary value problems. To keep this chapter compact, we omit the estimation of the truncation errors of the methods mentioned.
5.2 FINITE-DIFFERENCE METHOD
Let us consider a linear differential equation of order greater than one, with conditions specified at the end points of an interval [a, b]. We divide the interval [a, b] into n equal parts of width h [20]. We set x₀ = a and xₙ = b, defining the interior mesh points as xᵢ = x₀ + ih, i = 1, 2, …, n − 1. The corresponding values of y at these points are denoted by y(xᵢ) = yᵢ.

We shall sometimes have to deal with points outside the interval [a, b]. These will be called exterior mesh points; those to the left of x₀ are denoted by x₋₁, x₋₂ and so on, and those to the right of xₙ by xₙ₊₁, xₙ₊₂ and so on. The corresponding values of y at the exterior mesh points are denoted in the obvious way as y₋₁, y₋₂, …, yₙ₊₁, yₙ₊₂, … respectively.
The finite-difference method for the solution of a boundary value problem consists in replacing the derivatives occurring in the differential equation, and in the boundary conditions as well, by their finite-difference approximations, and then solving the resulting linear system of equations by a standard procedure [23]. In order to obtain the appropriate finite-difference approximations to the derivatives, we proceed as follows. Expanding y(x + h) in a Taylor’s series, we get
y(x + h) = y(x) + h y′(x) + (h²/2) y″(x) + (h³/6) y‴(x) + ⋯    (5.2.1)
From this we can write the forward difference approximation for y′(x):
y′(x) = [y(x + h) − y(x)]/h + O(h)    (5.2.2)
Now, expanding y(x − h) in a Taylor’s series, we get
y(x − h) = y(x) − h y′(x) + (h²/2) y″(x) − (h³/6) y‴(x) + ⋯    (5.2.3)
from which we can write the backward difference approximation for y′(x):
y′(x) = [y(x) − y(x − h)]/h + O(h)    (5.2.4)
A central difference approximation for y′(x) can be obtained by subtracting (5.2.3) from (5.2.1):
y′(x) = [y(x + h) − y(x − h)]/2h + O(h²)    (5.2.5)
Again, by adding (5.2.1) and (5.2.3), we get the central difference approximation for y″(x):
y″(x) = [y(x − h) − 2y(x) + y(x + h)]/h² + O(h²)    (5.2.6)
Similarly, the central difference approximations for y‴(x) and y⁗(x) are given by

(5.2.7) (5.2.8) In the similar manner, it is possible to derive finite-difference approximations to higher order derivatives. In order to explain the procedure, we consider the boundary value problem defined by (5.1.1) and (5.1.2). To solve the problem by finite-difference method sub-divide the range , . into equal sub-interval of width . So that ( ) ( ) Then are the corresponding values of at these points. and from (5.2.5) and (5.2.6) respectively and Now taking value of for then substituting them in (5.1.1), we get at the point . (

)

( (

.

(

/

) (

)(

)

(

) ( )

.

/

) ) (

)

(

) (5.2.9)

Since and are specified by the conditions (5.1.2), so (5.2.9) is a general ) unknowns in representation of linear system of ( ) equations with ( . Writing out (5.2.9) and taking , the system takes the form (

)

.

/

.

/

.

/

(

)

.

/

.

/

(

)

.

/

Study On Different Numerical Methods For Solving Differential Equations. [Page 93]

Chapter-5: Solution Of The Boundary Value Problems With Applications.

.

/

(

)

.

/

.

/

(

)

.

/

These co-efficient in above system of linear equations can of course be computed, since ( ) ( ) ( ) are known functions of . We have above system in a matrix form, as follows (5.2.10) Here ( ) representing the vector of unknown quantities, b representing the vector of known quantities on the right side of (5.2.10). ). The Also is the matrix of co-efficient and in this case tri-diagonal of order ( matrix has the special form

A=

The solution of system constitutes an appropriate solution of the boundary value problem defined by (5.1.1) and (5.1.2). 5.3 APPLICATION OF THE FINITE-DIFFERENCE METHOD The deflection of a beam is governed by the equation ( ) with the boundary conditions ( ) ( ) ( ) ( ) . Here φ(x) is given by ( )

    x    : 1/3   2/3   1
    φ(x) : 81    162   243

Evaluate the deflection at the pivotal points of the beam using three subintervals by the finite-difference approximation method.

Solution: Here h = 1/3 and the pivotal points are x_1 = 1/3, x_2 = 2/3, x_3 = 1, with x_0 = 0. The corresponding values y_1, y_2, y_3 are to be determined. Using (5.2.8) in the given boundary value problem at the point x_i, and noting that h^4 = 1/81, we get

    y_{i-2} - 4y_{i-1} + 7y_i - 4y_{i+1} + y_{i+2} = φ_i/81   (5.3.1)

Now, putting i = 1, 2, 3 successively in (5.3.1) and using the values φ_1/81 = 1, φ_2/81 = 2, φ_3/81 = 3 together with y_0 = 0, after simplification we get

    y_{-1} + 7y_1 - 4y_2 + y_3 = 1
    -4y_1 + 7y_2 - 4y_3 + y_4 = 2
    y_1 - 4y_2 + 7y_3 - 4y_4 + y_5 = 3   (5.3.2)

Again, applying the given boundary condition y'(0) = 0 in (5.2.5), for i = 0 we get

    y_{-1} = y_1   (5.3.3)

Again, applying the given boundary condition y''(1) = 0 in (5.2.6), for i = 3 we get

    y_4 = 2y_3 - y_2   (5.3.4)

Finally, applying the given boundary condition y'''(1) = 0 in (5.2.7), for i = 3 we get

    y_5 = 2y_4 - 2y_2 + y_1   (5.3.5)

Using (5.3.3), (5.3.4), (5.3.5) in (5.3.2), we get

    8y_1 - 4y_2 + y_3 = 1
    -4y_1 + 6y_2 - 2y_3 = 2
    2y_1 - 4y_2 + 3y_3 = 3

Then, solving the above system of linear equations by the Gauss-Seidel iteration method, we get

    y_1 = 8/13,  y_2 = 22/13,  y_3 = 37/13.

Hence the required solution (correct to four decimal places) is

    y(1/3) = 0.6154,  y(2/3) = 1.6923,  y(1) = 2.8462.
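The Gauss-Seidel iteration used to solve such systems can be sketched as follows; the 3x3 system shown is an illustrative diagonally dominant example (not the beam system itself), chosen because Gauss-Seidel convergence is guaranteed for diagonally dominant matrices:

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve A x = b by Gauss-Seidel iteration.

    A is given as a list of rows; the iteration converges for
    diagonally dominant coefficient matrices.
    """
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            # Use the newest available values of x[j] (the Gauss-Seidel idea).
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:
            break
    return x

# Illustrative diagonally dominant 3x3 system.
A = [[8.0, -4.0, 1.0],
     [-4.0, 8.0, -2.0],
     [2.0, -4.0, 8.0]]
b = [1.0, 2.0, 3.0]
y = gauss_seidel(A, b)
```

The same routine applies unchanged to the tridiagonal systems produced by (5.2.10) whenever those systems are diagonally dominant.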

5.4 SHOOTING METHOD

The shooting method requires good initial guesses for the slope and can be applied to both linear and non-linear problems [23]. The main advantage of this method is its easy applicability. We discuss it with reference to the second order boundary value problem defined by

    y'' = f(x, y, y'),   y(a) = A,   y(b) = B   (5.4.1)

The main steps involved in this method are:
1. Transformation of the boundary value problem into an initial value problem.
2. Solution of the initial value problem by any standard method, as highlighted in previous chapters.
3. Finally, solution of the given boundary value problem.

To apply any initial value method to (5.4.1) we must know the values of y(a) and y'(a). Since y'(a) is not given, we consider it as an unknown parameter m (say), which must be determined so that the resulting solution yields the given value y(b) = B to some desired level of accuracy. We thus guess at the initial slope and set up an iterative procedure converging to the correct slope.

Let m_0 and m_1 be two guesses at the initial slope y'(a), and let B_0 = y(b; m_0) and B_1 = y(b; m_1) be the values of y at x = b obtained by integrating the differential equation. Graphically, the solutions may be represented as in figure-(5.1) and figure-(5.2).

Figure-(5.1)   Figure-(5.2)

In figure-(5.1) the solutions of the initial value problems are drawn, while in figure-(5.2), y(b; m) is plotted as a function of m. Generally a better approximation of m can be obtained by linear interpolation. The intersection of the line joining (m_0, B_0) to (m_1, B_1) with the line y(b; m) = B has its m co-ordinate given by

    m_2 = m_1 + (B - B_1)(m_1 - m_0)/(B_1 - B_0)   (5.4.2)

We now solve the initial value problem

    y'' = f(x, y, y'),   y(a) = A,   y'(a) = m_2

and obtain y(b; m_2). Again we use linear interpolation, now with (m_1, y(b; m_1)) and (m_2, y(b; m_2)), to obtain a better approximation m_3, and so on. This process is repeated until convergence is obtained, i.e. until the value of y(b; m_k) agrees with y(b) = B to the desired level of accuracy. The speed of convergence depends upon how good the initial guesses were. The method becomes tedious for higher order boundary value problems, and in the case of non-linear problems linear interpolation may yield unsatisfactory results.

5.5 APPLICATION OF SHOOTING METHOD

Apply the shooting method, taking m_0 = 0.8 and m_1 = 0.9, to solve the boundary value problem

    y'' = y,   y(0) = 0,   y(1) = 1.1752   (5.5.1)

Solution: Applying Taylor's series method with y(0) = 0, y'(0) = m, we obtain

    y(1; m) = m(1 + 1/3! + 1/5! + 1/7! + ...) = 1.1752 m   (5.5.2)

Now for m_0 = 0.8 and m_1 = 0.9, (5.5.2) gives

    B_0 = (1.1752)(0.8) = 0.94016,   B_1 = (1.1752)(0.9) = 1.05768

Then by using (5.4.2), we get

    m_2 = 0.9 + (1.1752 - 1.05768)(0.9 - 0.8)/(1.05768 - 0.94016) = 1.0

This shows that m_2 = 1.0; we now solve the initial value problem

    y'' = y,   y(0) = 0,   y'(0) = 1   (5.5.3)

Since y(1; m) is exactly linear in m, Taylor's series method gives y(1) = 1.1752, which is the same as the prescribed value of y(1). Thus in this problem the shooting method converges to the exact solution. The initial value problem (5.5.3) can now be solved by any other standard method mentioned in previous chapters.
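The shooting iteration of section 5.4 can be sketched as follows; classic RK4 is used for the integration step, the target value sinh(1) ≈ 1.1752 mirrors the worked example, and the step size h = 0.01 is an arbitrary choice:

```python
import math

def rk4_ivp(m, h=0.01):
    """Integrate y'' = y on [0, 1] with y(0) = 0, y'(0) = m,
    using classic RK4 on the first-order system (y, y')."""
    def f(state):
        y, yp = state
        return (yp, y)  # y' = yp and (y')' = y'' = y
    y, yp = 0.0, m
    for _ in range(round(1.0 / h)):
        k1 = f((y, yp))
        k2 = f((y + h/2*k1[0], yp + h/2*k1[1]))
        k3 = f((y + h/2*k2[0], yp + h/2*k2[1]))
        k4 = f((y + h*k3[0], yp + h*k3[1]))
        y  += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        yp += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y  # y(1; m)

def shoot(target, m0, m1):
    """Iterate the linear-interpolation update (5.4.2) on the slope m."""
    B0, B1 = rk4_ivp(m0), rk4_ivp(m1)
    for _ in range(50):
        if abs(B1 - target) < 1e-12:
            break
        m0, B0, m1 = m1, B1, m1 + (target - B1)*(m1 - m0)/(B1 - B0)
        B1 = rk4_ivp(m1)
    return m1

m = shoot(math.sinh(1.0), 0.8, 0.9)  # exact slope is y'(0) = 1
```

Because y(1; m) is linear in m for this problem, the interpolation lands on the correct slope in a single step, just as in the worked example.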

Study On Different Numerical Methods For Solving Differential Equations. [Page 97]

5.6 GREEN'S FUNCTION TO SOLVE BOUNDARY VALUE PROBLEMS

Boundary value problems are an almost inevitable consequence of using mathematics to study problems arising in the real world, and it is not at all surprising that their solution has been the concern of many mathematicians. In this section we will examine in detail a particular method which requires the construction of an auxiliary function known as Green's function. To show how such functions arise, and to initiate a further study of the method, we will first solve, by fairly elementary methods, a typical one-dimensional boundary value problem [11,14].

Consider the problem of forced transverse vibrations of a taut string of length l. If the time dependent parts of the solution are first removed by the usual separation of variables technique, we obtain the following differential equation for the unknown transverse displacement y(x) of the string:

    y'' + k^2 y = -f(x),   0 < x < l   (5.6.1)

If the ends of the string are kept fixed, then this equation must be solved subject to the boundary conditions

    y(0) = 0,   y(l) = 0   (5.6.2)

To solve the boundary value problem posed by the ordinary second order differential equation (5.6.1) and the associated boundary conditions (5.6.2), we will employ the method of variation of parameters; i.e. we will assume that a solution to the problem actually exists and that, furthermore, it has the precise form

    y(x) = A(x) cos kx + B(x) sin kx   (5.6.3)

If we differentiate (5.6.3) twice with respect to x and in passing assume that

    A'(x) cos kx + B'(x) sin kx = 0   (5.6.4)

then we find that (5.6.3) constitutes a solution provided that

    -A'(x) sin kx + B'(x) cos kx = -f(x)/k   (5.6.5)

Although assumption (5.6.4) was introduced primarily to ease the ensuing algebra, equations (5.6.4) and (5.6.5) are two linear algebraic equations in the unknowns A'(x), B'(x). Solving these equations, we readily find that

    A'(x) = f(x) sin kx / k,   B'(x) = -f(x) cos kx / k   (5.6.6)

Thus, formally, we can write the solution of (5.6.1) in the form

    y(x) = cos kx [c_1 + (1/k) ∫_0^x f(s) sin ks ds] + sin kx [c_2 - (1/k) ∫_0^x f(s) cos ks ds]   (5.6.7)

Here c_1 and c_2 are constants which must be so chosen as to ensure that the boundary conditions (5.6.2) are satisfied. Inserting the condition y(0) = 0 into (5.6.7), and noting that both integrals vanish at x = 0, we find that we must choose c_1 such that

    y(0) = c_1 = 0   (5.6.8)

The condition y(l) = 0, when inserted into (5.6.7), will require that

    cos kl (1/k) ∫_0^l f(s) sin ks ds + sin kl [c_2 - (1/k) ∫_0^l f(s) cos ks ds] = 0   (5.6.9)

After slight manipulation (assuming sin kl ≠ 0), we can re-write (5.6.9) in the form

    c_2 = (1/(k sin kl)) ∫_0^l f(s) sin k(l - s) ds   (5.6.10)

Combining the results (5.6.8) and (5.6.10), we see that the solution (5.6.7) can now be written in the form

    y(x) = -(1/k) ∫_0^x f(s) sin k(x - s) ds + (sin kx/(k sin kl)) ∫_0^l f(s) sin k(l - s) ds   (5.6.11)

         = ∫_0^x [sin ks sin k(l - x)/(k sin kl)] f(s) ds + ∫_x^l [sin kx sin k(l - s)/(k sin kl)] f(s) ds   (5.6.12)

         = ∫_0^l G(x, s) f(s) ds

where

    G(x, s) = sin ks sin k(l - x)/(k sin kl),   0 ≤ s ≤ x,
            = sin kx sin k(l - s)/(k sin kl),   x ≤ s ≤ l.   (5.6.13)
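The representation just derived can be checked numerically by comparing the quadrature of G(x, s) f(s) against a closed-form solution; the choices k = 1, l = 1 and f ≡ 1 below are illustrative:

```python
import math

k, L = 1.0, 1.0

def G(x, s):
    """Green's function (5.6.13) for y'' + k^2 y = -f, y(0) = y(L) = 0."""
    lo, hi = min(x, s), max(x, s)
    return math.sin(k*lo) * math.sin(k*(L - hi)) / (k * math.sin(k*L))

def solve_at(f, x, n=2000):
    """y(x) = integral_0^L G(x, s) f(s) ds by the composite trapezoidal rule."""
    h = L / n
    total = 0.5*(G(x, 0.0)*f(0.0) + G(x, L)*f(L))
    for i in range(1, n):
        total += G(x, i*h)*f(i*h)
    return h*total

# Closed form for f = 1:  y = cos x + B sin x - 1, with B chosen so y(1) = 0.
x = 0.5
B = (1.0 - math.cos(1.0)) / math.sin(1.0)
exact = math.cos(x) + B*math.sin(x) - 1.0
approx = solve_at(lambda s: 1.0, x)
```

The trapezoidal rule is adequate here because the only non-smooth point of the integrand, the kink of G at s = x, falls on a quadrature node.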

This function G(x, s) is a two-point function of position known as the Green's function for the equation (5.6.1) and the boundary conditions (5.6.2).

5.7 APPLICATION OF GREEN'S FUNCTION

Apply the Green's function method to the differential equation y'' + y = -f(x) with the boundary conditions y(0) = y(1) = 0.

Solution: By the usual elementary method of solving the homogeneous equation y'' + y = 0, we have the auxiliary equation m^2 + 1 = 0, so the general solution is

    y = c_1 cos x + c_2 sin x   (5.7.1)

Substituting the boundary values y = 0 at x = 0 and x = 1 respectively in (5.7.1), we get c_1 = 0 and c_2 = 0, so that (5.7.1) reduces to the trivial solution y = 0.

To get a worth-while solution of the problem we assume a function of the form (5.7.1) on each side of a point x = s of the interval 0 ≤ x ≤ 1. However, since y(0) = 0 on the left and y(1) = 0 on the right, it is plain that each assumed function need contain only one of the two constants. Hence we take

    u_1(x) = A sin x,   0 ≤ x ≤ s   (5.7.2)
    u_2(x) = B sin(1 - x),   s ≤ x ≤ 1   (5.7.3)

Here we have utilized the boundary conditions in writing down these functions, and the candidate Green's function is the piecewise combination

    G(x, s) = u_1(x) for x ≤ s,   G(x, s) = u_2(x) for x ≥ s   (5.7.4)

Figure-(5.3)

The graphs of these functions will evidently intersect at the point x = s; there the functions will be equal while their first derivatives will be unequal, as shown in figure-(5.3). Hence at x = s we have

    A sin s = B sin(1 - s)   (5.7.5)
    -B cos(1 - s) - A cos s = -1   (5.7.6)

the jump of -1 in the first derivative corresponding to a unit forcing concentrated at x = s. From (5.7.5), we have

    B = A sin s / sin(1 - s)   (5.7.7)

Substituting this value of B in (5.7.6), we find

    A [cos(1 - s) sin s + cos s sin(1 - s)]/sin(1 - s) = 1,  i.e.  A = sin(1 - s)/sin 1   (5.7.8)

Hence from (5.7.7), we get

    B = sin s / sin 1   (5.7.9)

Substituting (5.7.8) and (5.7.9) in (5.7.4), we get

    u_1(x) = sin x sin(1 - s)/sin 1,   0 ≤ x ≤ s   (5.7.10)
    u_2(x) = sin s sin(1 - x)/sin 1,   s ≤ x ≤ 1   (5.7.11)

These can be written as a single function in the form

    G(x, s) = sin x sin(1 - s)/sin 1,   0 ≤ x ≤ s,
            = sin s sin(1 - x)/sin 1,   s ≤ x ≤ 1.   (5.7.12)

The function G(x, s) is called the Green's function for this problem. It is a function of the two independent variables x and s in the interval [0, 1] and is evidently symmetric in these variables. The Green's function given by (5.7.12) thus yields the solution y(x) = ∫_0^1 G(x, s) f(s) ds of the boundary value problem stated above.

5.8 CUBIC B-SPLINE METHOD FOR SOLVING TWO POINT BOUNDARY VALUE PROBLEMS OF ORDER FOUR

Introduction: Two-point and multi-point boundary value problems for fourth order ordinary differential equations have attracted a lot of attention recently [25]. Many authors have studied the beam equation under various boundary conditions and by different approaches. Consider a smooth approximation to the problem of bending a rectangular clamped beam of length l resting on an elastic foundation. The vertical deflection y(x) of the beam satisfies the system

    EI y'''' + K y = q(x),   0 < x < l,   y(0) = y'(0) = y(l) = y'(l) = 0   (5.8.1)

Here EI is the flexural rigidity of the beam, K is the spring constant of the elastic foundation, and the load q(x) acts vertically downwards per unit length of the beam [17]. Mathematically, the system (5.8.1) belongs to a general class of boundary value problems of the form

    y'''' + f(x) y = g(x),   a ≤ x ≤ b   (5.8.2)

    y(a) = A_1,   y(b) = A_2,   y'(a) = B_1,   y'(b) = B_2   (5.8.3)

Here f(x) and g(x) are continuous on [a, b], and A_1, A_2, B_1, B_2 are finite real arbitrary constants. The analytical solution of (5.8.2) for arbitrary choices of f(x) and g(x) cannot be determined, so numerical methods are developed to overcome this limitation. A simple condition guaranteeing the uniqueness of the solution of the problem (5.8.2), (5.8.3) has been formulated [19].

Among the many numerical methods enumerated above, spline methods have been widely applied for the approximate solution of boundary value problems, including fourth order boundary value problems. In particular, cubic B-splines have been used to solve boundary value problems and systems of boundary value problems [13,16], singular boundary value problems [15], and also second order perturbation problems.

Derivation of the cubic B-spline: The given range of the independent variable is [a, b]. In this range we choose the equidistant knots x_i = a + ih, i = 0, 1, ..., n, with h = (b - a)/n, and we seek an approximation S(x) that reduces to a cubic polynomial on each subinterval [x_i, x_{i+1}]. The basis function B_i(x) on the different intervals is defined as

    B_i(x) = (1/6h^3) *
      (x - x_{i-2})^3,                                                    x in [x_{i-2}, x_{i-1}],
      h^3 + 3h^2(x - x_{i-1}) + 3h(x - x_{i-1})^2 - 3(x - x_{i-1})^3,     x in [x_{i-1}, x_i],
      h^3 + 3h^2(x_{i+1} - x) + 3h(x_{i+1} - x)^2 - 3(x_{i+1} - x)^3,     x in [x_i, x_{i+1}],
      (x_{i+2} - x)^3,                                                    x in [x_{i+1}, x_{i+2}],
      0,                                                                  otherwise.   (5.8.4)

Let us introduce four additional knots x_{-2}, x_{-1}, x_{n+1}, x_{n+2}. From the above expression it is obvious that each B_i(x) vanishes outside [x_{i-2}, x_{i+2}]. The values of B_i(x), B'_i(x) and B''_i(x) at the nodal points are given in table-1:

    x        : x_{i-2}  x_{i-1}  x_i      x_{i+1}  x_{i+2}
    B_i(x)   : 0        1/6      4/6      1/6      0
    B'_i(x)  : 0        1/2h     0        -1/2h    0
    B''_i(x) : 0        1/h^2    -2/h^2   1/h^2    0

Since each B_i(x) is also a piecewise cubic polynomial with knots at the points x_j, each B_i(x) belongs to the space S_3(π) of cubic splines. Let Ω = {B_{-1}, B_0, B_1, ..., B_{n+1}} and let Φ_3(π) = span Ω. The functions in Ω are linearly independent on [a, b], thus Φ_3(π) is (n + 3)-dimensional, and in fact Φ_3(π) = S_3(π). Let S(x) be the B-spline interpolating function [17] at the nodal points, with S(x) in Φ_3(π). Then S(x) can be written as

    S(x) = Σ_{i=-1}^{n+1} c_i B_i(x)   (5.8.5)

Therefore, for a given function y(x), there exists a unique cubic spline (5.8.5) satisfying the interpolating conditions

    S(x_i) = y(x_i), i = 0, 1, ..., n,   S'(x_0) = y'(x_0),   S'(x_n) = y'(x_n)   (5.8.6)

Writing c_i for the coefficients in (5.8.5) and using table-1 [8], we have

    y_i = (c_{i-1} + 4c_i + c_{i+1})/6   (5.8.7)

    y'_i = (c_{i+1} - c_{i-1})/(2h),   y''_i = (c_{i-1} - 2c_i + c_{i+1})/h^2   (5.8.8)

All of these can be applied to construct numerical difference formulae for the higher derivatives as follows:

    y'''_i ≈ (c_{i+2} - 2c_{i+1} + 2c_{i-1} - c_{i-2})/(2h^3)   (5.8.9)

    y''''_i ≈ (c_{i+2} - 4c_{i+1} + 6c_i - 4c_{i-1} + c_{i-2})/h^4   (5.8.10)

together with the analogous relations at the exterior knots   (5.8.11)

We thus obtain the values of S(x_i), S'(x_i), S''(x_i), S'''(x_i), S''''(x_i), using table-1 in (5.8.4) and applying the above equations, as

    S(x_i) = (c_{i-1} + 4c_i + c_{i+1})/6   (5.8.12)

    S'(x_i) = (c_{i+1} - c_{i-1})/(2h)   (5.8.13)

    S''(x_i) = (c_{i-1} - 2c_i + c_{i+1})/h^2   (5.8.14)

    S'''(x_i) = (c_{i+2} - 2c_{i+1} + 2c_{i-1} - c_{i-2})/(2h^3)   (5.8.15)

    S''''(x_i) = (c_{i+2} - 4c_{i+1} + 6c_i - 4c_{i-1} + c_{i-2})/h^4   (5.8.16)
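The nodal values in table-1 can be verified by evaluating the basis function directly; the cardinal form below is the same function as (5.8.4) written in the scaled variable t = (x - x_i)/h:

```python
def cubic_bspline(t):
    """Cardinal cubic B-spline with support (-2, 2), normalized so that
    B(0) = 4/6 and B(+-1) = 1/6, as in table-1."""
    t = abs(t)
    if t < 1.0:
        return (4.0 - 6.0*t*t + 3.0*t**3) / 6.0
    if t < 2.0:
        return (2.0 - t)**3 / 6.0
    return 0.0

def spline_value(c, j, offset=-1):
    """S(x_j) = (c_{j-1} + 4 c_j + c_{j+1})/6, cf. (5.8.12); the coefficient
    list c is indexed from i = -1, hence the offset."""
    k = j - offset
    return (c[k-1] + 4.0*c[k] + c[k+1]) / 6.0
```

A useful sanity check is the partition-of-unity property: at any point the integer translates of the basis sum to one, which is why constant data is reproduced exactly by (5.8.12).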

Solution of the special case fourth order boundary value problem: Let us consider the boundary value problem

    y'''' + f(x) y = g(x)   (5.8.17)

subject to the boundary conditions (5.8.3). Let the approximate solution of (5.8.17) be

    S(x) = Σ_{i=-1}^{n+1} c_i B_i(x)   (5.8.18)

Discretizing the boundary value problem at the knots, we get

    S''''(x_i) + f(x_i) S(x_i) = g(x_i),   i = 1, 2, ..., n-1   (5.8.19)

Putting in the values in terms of the c_i, using equations (5.8.12) to (5.8.16) and writing f_i = f(x_i), g_i = g(x_i), we get

    (c_{i+2} - 4c_{i+1} + 6c_i - 4c_{i-1} + c_{i-2})/h^4 + f_i (c_{i-1} + 4c_i + c_{i+1})/6 = g_i   (5.8.20)

On simplification (5.8.20) becomes

    6c_{i+2} + (h^4 f_i - 24) c_{i+1} + (36 + 4h^4 f_i) c_i + (h^4 f_i - 24) c_{i-1} + 6c_{i-2} = 6h^4 g_i   (5.8.21)

This gives a system of (n-1) linear equations in the (n+3) unknowns c_{-1}, c_0, ..., c_{n+1}. The remaining four equations are obtained from the boundary conditions as follows:

    (c_{-1} + 4c_0 + c_1)/6 = A_1   (5.8.22)

    (c_1 - c_{-1})/(2h) = B_1   (5.8.23)

    (c_{n-1} + 4c_n + c_{n+1})/6 = A_2   (5.8.24)

    (c_{n+1} - c_{n-1})/(2h) = B_2   (5.8.25)

The solution (5.8.18) is obtained by solving the resulting system of (n+3) linear equations in (n+3) unknowns given by (5.8.21) together with (5.8.22) to (5.8.25).

General linear fourth order boundary value problem: Subject to the boundary conditions given by (5.8.3), consider the following boundary value problem:

    y'''' + p(x) y''' + q(x) y'' + r(x) y' + t(x) y = g(x)   (5.8.26)

Let (5.8.18) be the approximate solution of the boundary value problem. Writing p_i = p(x_i), q_i = q(x_i), r_i = r(x_i), t_i = t(x_i), g_i = g(x_i) and discretizing at the knots, we get

    S''''(x_i) + p_i S'''(x_i) + q_i S''(x_i) + r_i S'(x_i) + t_i S(x_i) = g_i,   i = 1, ..., n-1   (5.8.27)

Putting in the values of the derivatives using (5.8.12) to (5.8.16), we get

    (c_{i+2} - 4c_{i+1} + 6c_i - 4c_{i-1} + c_{i-2})/h^4 + p_i (c_{i+2} - 2c_{i+1} + 2c_{i-1} - c_{i-2})/(2h^3) + q_i (c_{i-1} - 2c_i + c_{i+1})/h^2 + r_i (c_{i+1} - c_{i-1})/(2h) + t_i (c_{i-1} + 4c_i + c_{i+1})/6 = g_i   (5.8.28)

On simplification (5.8.28) becomes

    (12 + 6h p_i) c_{i+2} + (-48 - 12h p_i + 12h^2 q_i + 6h^3 r_i + 2h^4 t_i) c_{i+1} + (72 - 24h^2 q_i + 8h^4 t_i) c_i + (-48 + 12h p_i + 12h^2 q_i - 6h^3 r_i + 2h^4 t_i) c_{i-1} + (12 - 6h p_i) c_{i-2} = 12h^4 g_i   (5.8.29)

Now the approximate solution is obtained by solving the system given by (5.8.29) together with (5.8.22) to (5.8.25).

Non-linear fourth order boundary value problem: Subject to the boundary conditions given in (5.8.3), consider a non-linear fourth order boundary value problem of the form


    y'''' = F(x, y, y', y'', y''')   (5.8.30)

Let (5.8.18) be the approximate solution of the boundary value problem. It must satisfy the boundary value problem at the knots, so we have

    S''''(x_i) = F(x_i, S(x_i), S'(x_i), S''(x_i), S'''(x_i)),   i = 1, ..., n-1   (5.8.31)

Using (5.8.12) to (5.8.16), we get

    (c_{i+2} - 4c_{i+1} + 6c_i - 4c_{i-1} + c_{i-2})/h^4 = F(x_i, (c_{i-1} + 4c_i + c_{i+1})/6, (c_{i+1} - c_{i-1})/(2h), (c_{i-1} - 2c_i + c_{i+1})/h^2, (c_{i+2} - 2c_{i+1} + 2c_{i-1} - c_{i-2})/(2h^3))   (5.8.32)

Equation (5.8.32), together with equations (5.8.22) to (5.8.25), gives a non-linear system of equations in the c_i, which is solved (for instance by Newton's method) to get the required solution of the boundary value problem.

Singular fourth order boundary value problem: Consider a singular fourth order boundary value problem of the form

    y'''' + p(x) y''' + f(x) y = g(x),   0 < x ≤ b   (5.8.33)

with

    y(0) = y'(0) = 0,   y(b) = y'(b) = 0   (5.8.34)

in which one or more of the coefficient functions is singular at x = 0. Since x = 0 is a singular point of equation (5.8.33), we first modify the equation at x = 0 to get the transformed problem

    y'''' + p̃(x) y''' + f̃(x) y = g̃(x)   (5.8.35)

Here the modified coefficients coincide with those of (5.8.33) away from the singular point and take their finite limiting values at x = 0:

    p̃(x) = p(x) for x ≠ 0,   p̃(0) = lim_{x→0} p(x)   (5.8.36)

    f̃(x) = f(x), g̃(x) = g(x) for x ≠ 0, with the corresponding limits at x = 0   (5.8.37)

Now, as in the previous sections, let (5.8.18) be the approximate solution of the boundary value problem. Discretizing (5.8.35) at the knots, we get

    S''''(x_i) + p̃(x_i) S'''(x_i) + f̃(x_i) S(x_i) = g̃(x_i),   i = 1, ..., n-1   (5.8.38)

Putting in the values of the derivatives using (5.8.12) to (5.8.16), we obtain a five-term recurrence in the c_i of the same form as (5.8.29)   (5.8.39)

Finally the boundary conditions provide

    (c_{-1} + 4c_0 + c_1)/6 = 0   (5.8.40)

    (c_1 - c_{-1})/(2h) = 0   (5.8.41)

    (c_{n-1} + 4c_n + c_{n+1})/6 = 0   (5.8.42)

    (c_{n+1} - c_{n-1})/(2h) = 0   (5.8.43)

Equation (5.8.39), together with equations (5.8.40) to (5.8.43), gives a system of equations which is solved to get the required solution of the boundary value problem (5.8.33).
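Assembling and solving the collocation system (5.8.21)-(5.8.25) can be sketched as follows; the load g ≡ 24 with f ≡ 0, homogeneous clamped boundary conditions and n = 16 form an illustrative case (exact solution y = x^2 (1 - x)^2 on [0, 1]), not an example from the text:

```python
def solve_linear(M, d):
    """Gaussian elimination with partial pivoting for a dense system."""
    n = len(d)
    A = [row[:] + [d[i]] for i, row in enumerate(M)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for j in range(col, n + 1):
                A[r][j] -= m * A[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Collocation for y'''' + f(x) y = g(x) with y(0)=y'(0)=y(1)=y'(1)=0.
# Unknowns c_{-1}..c_{n+1}; index shift: c_k is stored at position k + 1.
n = 16
h = 1.0 / n
f = lambda x: 0.0      # illustrative choice
g = lambda x: 24.0     # y'''' = 24, i.e. y = x^2 (1 - x)^2

N = n + 3
M = [[0.0] * N for _ in range(N)]
d = [0.0] * N
# boundary rows (5.8.22)-(5.8.25), homogeneous here
M[0][0], M[0][1], M[0][2] = 1.0, 4.0, 1.0          # S(x_0) = 0
M[1][0], M[1][2] = -1.0, 1.0                        # S'(x_0) = 0
M[2][N-3], M[2][N-2], M[2][N-1] = 1.0, 4.0, 1.0     # S(x_n) = 0
M[3][N-3], M[3][N-1] = -1.0, 1.0                    # S'(x_n) = 0
# collocation rows (5.8.21) at the interior knots i = 1..n-1
for i in range(1, n):
    fi, row = f(i * h), M[3 + i]
    row[i - 1] += 6.0                    # c_{i-2}
    row[i]     += h**4 * fi - 24.0       # c_{i-1}
    row[i + 1] += 36.0 + 4.0 * h**4 * fi # c_i
    row[i + 2] += h**4 * fi - 24.0       # c_{i+1}
    row[i + 3] += 6.0                    # c_{i+2}
    d[3 + i] = 6.0 * h**4 * g(i * h)
c = solve_linear(M, d)
S = [(c[k] + 4.0*c[k+1] + c[k+2]) / 6.0 for k in range(n + 1)]  # S(x_i)
```

The computed deflections S inherit the symmetry of the illustrative problem about x = 1/2 and vanish at both ends, as the boundary rows require.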


CHAPTER-6

TWO PROPOSED METHODS FOR SOLVING DIFFERENTIAL EQUATIONS.


CHAPTER-6 TWO PROPOSED METHODS FOR SOLVING DIFFERENTIAL EQUATIONS.

6.1 INTRODUCTION

In the previous chapters some well-known numerical methods for solving differential equations have been discussed, along with their limitations. The demands of modern science and technology require the present numerical methods to be upgraded, and this requirement inspired us to propose modifications of present methods and to introduce newer numerical methods.

In this chapter we propose a modified form of the Milne's predictor-corrector formula for solving ordinary differential equations of first order and first degree. Here we approximate the value of the dependent variable under five initial conditions and then improve this value by proper substitution in the formulae. The process is iterated until a proper level of accuracy is obtained.

Also, a modified formula for solving elliptic equations by finite-difference approximations will be offered, in which we establish a combined finite-difference formula from the standard 5-point formula and the diagonal 5-point formula, and then improve the approximated values at the mesh points with the help of the Gauss-Seidel iteration formula.

6.2 MILNE'S (MODIFIED) PREDICTOR-CORRECTOR METHOD

To solve the differential equation dy/dx = f(x, y) by this method, we first approximate the value of y by the predictor formula at the next mesh point, then improve this value by using the corrector formula after proper substitution. These formulae will be derived from Newton's formula of forward interpolation.

Derivation of Milne's (modified) Predictor formula: We know that Newton's formula of forward interpolation for f = y' in terms of the forward differences is given by

    f = f_0 + u Δf_0 + [u(u-1)/2!] Δ^2 f_0 + [u(u-1)(u-2)/3!] Δ^3 f_0 + [u(u-1)(u-2)(u-3)/4!] Δ^4 f_0 + ...   (6.2.1)

Here u = (x - x_0)/h.

Now, integrating (6.2.1) over the interval x_0 to x_0 + 5h, i.e. u = 0 to 5, we get

    y_5 - y_0 = ∫_{x_0}^{x_0+5h} y' dx = h ∫_0^5 [f_0 + u Δf_0 + ...] du
              = h [5 f_0 + (25/2) Δf_0 + (175/12) Δ^2 f_0 + (75/8) Δ^3 f_0 + (425/144) Δ^4 f_0 + ...]

After neglecting the terms containing differences of order higher than the fourth, and substituting the differences in terms of the ordinates f_0, f_1, f_2, f_3, f_4, we get Milne's (modified) predictor formula as follows:

    y_5 = y_0 + (h/144)(95 f_0 - 50 f_1 + 600 f_2 - 350 f_3 + 425 f_4)   (6.2.2)

Derivation of Milne's (modified) Corrector formula: To obtain the corrector formula, we integrate (6.2.1) over the interval x_1 to x_5, i.e. over the four steps spanning the five ordinates f_1, ..., f_5. With v = (x - x_1)/h, we get

    y_5 - y_1 = h ∫_0^4 [f_1 + v Δf_1 + (v(v-1)/2!) Δ^2 f_1 + ...] dv
              = h [4 f_1 + 8 Δf_1 + (20/3) Δ^2 f_1 + (8/3) Δ^3 f_1 + (14/45) Δ^4 f_1 + 0 · Δ^5 f_1 + ...]

Here the coefficient of Δ^5 f_1 vanishes, since the integrand v(v-1)(v-2)(v-3)(v-4)/5! is antisymmetric about v = 2. After neglecting the remaining higher order terms and substituting the differences in terms of the ordinates, we get Milne's (modified) corrector formula as follows:

    y_5 = y_1 + (2h/45)(7 f_1 + 32 f_2 + 12 f_3 + 32 f_4 + 7 f_5)   (6.2.3)

Generalization of Milne's (modified) Predictor-Corrector formulae: According to (6.2.2) and (6.2.3), we can write the general form [7] of Milne's (modified) predictor and corrector formulae as follows:

    y_{n+1}^(p) = y_{n-4} + (h/144)(95 f_{n-4} - 50 f_{n-3} + 600 f_{n-2} - 350 f_{n-1} + 425 f_n)   (6.2.6)

    y_{n+1}^(c) = y_{n-3} + (2h/45)(7 f_{n-3} + 32 f_{n-2} + 12 f_{n-1} + 32 f_n + 7 f_{n+1})   (6.2.7)

Here the indices (p) and (c) indicate the predicted and corrected values of y respectively at x = x_{n+1}.

6.3 APPLICATION OF THE MILNE'S (MODIFIED) PREDICTOR-CORRECTOR METHOD

Solve a first order linear differential equation dy/dx = f(x, y), given the initial values y_0, y_1, y_2, y_3, y_4 at the five equally spaced points x_0, x_1, x_2, x_3, x_4.

Solution: Taking the step length h, we first compute f_0, f_1, f_2, f_3, f_4 from the given initial conditions. Now, putting n = 4 in (6.2.6), we get Milne's (modified) predictor formula as follows:

    y_5^(p) = y_0 + (h/144)(95 f_0 - 50 f_1 + 600 f_2 - 350 f_3 + 425 f_4)

Then f_5 is evaluated from the predicted value. Now, putting n = 4 in (6.2.7), we get Milne's (modified) corrector formula as follows:

    y_5^(c) = y_1 + (2h/45)(7 f_1 + 32 f_2 + 12 f_3 + 32 f_4 + 7 f_5)

We then obtain successive approximations of y_5 by iterating the corrector, re-evaluating f_5 from the latest corrected value each time. First, second and third iterations are computed in turn; since the third approximation for y_5 is the same as the second, we accept it as the value of y at x_5.

Exact result: We have the given first order linear equation.


This is a linear differential equation whose integrating factor is e^(∫P dx). Multiplying the differential equation by the integrating factor and integrating, and then using the initial condition to determine the constant of integration, we obtain the exact solution, from which the value of y at x_5 is computed.

Comment: We have observed that the value of y at x_5 obtained by the Milne's (modified) predictor-corrector method is very close to the exact value.
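The predictor-corrector cycle (6.2.6)-(6.2.7) can be sketched as follows; the test equation y' = x - y with y(0) = 1 (exact solution y = x - 1 + 2e^(-x)) is an illustrative choice, not the thesis example:

```python
import math

def f(x, y):
    return x - y  # illustrative test equation y' = x - y

def exact(x):
    return x - 1.0 + 2.0*math.exp(-x)  # exact solution for y(0) = 1

h = 0.1
xs = [i*h for i in range(5)]
ys = [exact(x) for x in xs]          # five starting values y_0..y_4
fs = [f(x, y) for x, y in zip(xs, ys)]

# predictor (6.2.6)
x5 = 5*h
y5 = ys[0] + h/144.0*(95*fs[0] - 50*fs[1] + 600*fs[2] - 350*fs[3] + 425*fs[4])
# corrector (6.2.7), iterated until two successive values agree
for _ in range(50):
    y5_new = ys[1] + 2*h/45.0*(7*fs[1] + 32*fs[2] + 12*fs[3] + 32*fs[4]
                               + 7*f(x5, y5))
    if abs(y5_new - y5) < 1e-12:
        y5 = y5_new
        break
    y5 = y5_new
```

The corrector iteration is a contraction here because (2h/45)·7·|∂f/∂y| is much smaller than one, so a handful of sweeps suffices.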

6.4 SURROUNDING 9-POINT FORMULA

Laplace's equation is an elliptic partial differential equation which can be solved by finite-difference approximations. The solution involves finding the values at the mesh points of the given domain from certain boundary values. In this section we propose a formula, namely the surrounding nine-point formula, to find the mesh points of Laplace's equation on a given domain.

Derivation of the Surrounding Nine-point formula: Let us consider Laplace's equation in u(x, y), as follows:

    ∂^2 u/∂x^2 + ∂^2 u/∂y^2 = 0   (6.4.1)

We can now obtain finite-difference analogues of this partial differential equation by replacing the derivatives in the above equation by their corresponding difference approximations as

    ∂^2 u/∂x^2 ≈ [u(x + h, y) - 2u(x, y) + u(x - h, y)]/h^2   (6.4.2)

    ∂^2 u/∂y^2 ≈ [u(x, y + k) - 2u(x, y) + u(x, y - k)]/k^2   (6.4.3)

Replacing the derivatives in (6.4.1) by their finite-difference approximations from (6.4.2) and (6.4.3), and taking k = h, we get

    u_{i,j} = (1/4)(u_{i-1,j} + u_{i+1,j} + u_{i,j+1} + u_{i,j-1})   (6.4.4)

This is called the difference equation of Laplace's equation; it shows that the value of u at any point is the mean of its values at the four neighbouring points. Equation (6.4.4) is called the standard 5-point formula, exhibited in figure-(6.1).

Figure-(6.1)   Figure-(6.2)

We know that Laplace's equation remains invariant when the co-ordinate axes are rotated through 45°. Then the formula (6.4.4) can be re-written as

    u_{i,j} = (1/4)(u_{i-1,j-1} + u_{i+1,j-1} + u_{i-1,j+1} + u_{i+1,j+1})   (6.4.5)

This is called the diagonal 5-point formula, which shows that the value of u at any point is the mean of its values at the four diagonal points; the formula given by (6.4.5) is represented in figure-(6.2). Now, by taking the average of (6.4.4) and (6.4.5), we get

    u_{i,j} = (1/8)(u_{i-1,j} + u_{i+1,j} + u_{i,j+1} + u_{i,j-1} + u_{i-1,j-1} + u_{i+1,j-1} + u_{i-1,j+1} + u_{i+1,j+1})   (6.4.6)

Thus a newer form for u_{i,j} has been proposed in (6.4.6), by which we can find the value of u at any mesh point by taking the mean of the values at all the points surrounding it. So we can call the proposed formula the surrounding 9-point formula. Figure-(6.3) represents (6.4.6).

Figure-(6.3)

Algorithm: We now discuss the algorithm for obtaining the mesh points of a given domain under the formula (6.4.6).
1. At first we consider the boundary values of the given domain.
2. The unknown non-boundary points that are needed as surrounding points in the evaluation of a mesh point are taken as zero; this is continued until all the mesh points have been approximated once.
3. Once a mesh point has been evaluated, its current value (rather than zero) is used as a surrounding point in the approximation of the next mesh points, as needed.
4. The first approximations of the mesh points are then improved by the Gauss-Seidel iteration method.
5. Finally, we accept the current approximations as the required mesh points when successive approximations agree to within the desired scale of accuracy.

6.5 APPLICATION OF SURROUNDING NINE-POINT FORMULA

Solve Laplace's equation by the finite-difference method for the square mesh whose boundary values are given in figure-(6.4) below.

Figure-(6.4)   Figure-(6.5)

Solution: We consider figure-(6.5); by comparing figure-(6.4) with figure-(6.5) we obtain the boundary values of the mesh. Applying the formula (6.4.6) to each interior point of figure-(6.5) gives a system of equations for the unknown mesh values, each unknown being expressed as the mean of its eight surrounding values.

The system is then re-written in the Gauss-Seidel iterative form and applied repeatedly, starting from the initial substitution of zero for the unknown mesh points. First through seventh approximations are computed in turn. Since the sixth and seventh approximations become so close in the values of the mesh points, we accept the seventh approximation as the solution.

Comment: The mesh points of the domain of figure-(6.4) can also be obtained by using the standard 5-point and the diagonal 5-point formulae, and the values obtained by those formulae agree closely with the values found above.
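The Gauss-Seidel sweep of the surrounding 9-point formula (6.4.6) can be sketched as follows; the grid size and the boundary function u = xy (which is harmonic, and hence reproduced exactly by this stencil) are illustrative choices, since the figures are not reproduced here:

```python
n = 6  # grid size including the boundary (illustrative)

def bc(i, j):
    """Boundary condition u = x*y on the unit square; harmonic, so the
    interior values can be checked against the same expression."""
    return (i/(n-1)) * (j/(n-1))

u = [[bc(i, j) for j in range(n)] for i in range(n)]
for i in range(1, n-1):          # initial zero substitution at the interior
    for j in range(1, n-1):
        u[i][j] = 0.0

for sweep in range(500):          # Gauss-Seidel sweeps of (6.4.6)
    delta = 0.0
    for i in range(1, n-1):
        for j in range(1, n-1):
            new = (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]
                   + u[i-1][j-1] + u[i-1][j+1]
                   + u[i+1][j-1] + u[i+1][j+1]) / 8.0
            delta = max(delta, abs(new - u[i][j]))
            u[i][j] = new
    if delta < 1e-12:
        break
```

Each interior point is replaced by the mean of its eight surrounding points, and the newest values are used as soon as they are available, exactly as in steps 2-4 of the algorithm above.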

Hence, after comparing, we conclude that the surrounding 9-point formula is usable for obtaining the mesh points of a given domain to the desired level of accuracy.

6.6 ADVANTAGES OF PROPOSED METHODS OVER PREVIOUS METHODS

Milne's (modified) predictor-corrector method: Though the Milne's (modified) predictor-corrector formulae seem to be a lengthier process for solving ordinary differential equations, they have the following advantages over previous methods:
1. The previous methods estimate the value of y at a given value of x by means of four initial conditions, whereas the Milne's (modified) predictor-corrector formulae estimate it by means of five initial conditions, which is more logical.
2. To obtain the value of y at any value of x, previous methods need to calculate Newton's formula of forward interpolation up to the fourth order, but the Milne's (modified) predictor-corrector formulae calculate it up to the fifth order, which gives better accuracy.
3. In Milne's (modified) corrector formula the coefficient of the fifth difference is zero, so the leading truncation error converges to zero; this upgrades the level of accuracy of the method.

Surrounding 9-point formula: It may seem a time consuming process to obtain mesh points by means of the surrounding 9-point formula, but it has the following advantages over previous methods:
1. Since the surrounding 9-point formula depends upon all the mesh points around a point to determine that mesh point, it is more contributive and logical, which gives better accuracy.
2. The initial zero substitution (taking as zero the unknown mesh points which surround a required mesh point) enables us to solve a bigger domain in which most of the mesh points are absent, i.e. are to be estimated.
3. Use of the Gauss-Seidel iteration formula gives the method a quick ending, which saves estimation time.

CHAPTER-7

CONCLUSIONS.


CHAPTER-7 CONCLUSIONS. In this thesis paper we have discussed some numerical methods for solution of ordinary differential equations (in chapter-2 & chapter-3), partial differential equations (in chapter-4) and boundary value problems (in chapter-5). Also, we have proposed two modified numerical methods (in chapter-6) in this thesis paper. The conclusions of these discussions are coming next here in brief. In chapter-2, we get from section-2.3 and section-2.5, both of the Taylor’s series method and Picard’s method of successive approximations are correct to eight decimal places with the exact solution for the given initial value problem . But from the comparative discussion of them in section-2.6, we can conclude that Picard’s method of successive approximations is better than Taylor’s series method in this case. Also, from section-2.8 it can be said that computed values of y deviate rapidly in Euler’s method and the disturbance have solved in section-2.9 at modified Euler’s method. In chapter-3, from the comparison between predictor-corrector method and Runge-Kutta method in section-3.12, we have seen that finding local truncation error in Runge-Kutta method is more laborious than in predictor-corrector method, but the self-starting characteristic of Runge-Kutta method makes it favorable than predictorcorrector method. Also, Runge-Kutta method can be used for a wider range of the solution and it is stable for suitable step size. Thus, we can conclude that for practical purposes Runge-Kutta method is to be chosen for better accuracy. In chapter-4, from the comparison between iteration method and relaxation method in section-4.10, we have seen that iteration method is slow, sure and lengthy process whereas relaxation method is rapid, less certain and short process to get o solution of partial differential equations under certain conditions. Also, iteration method is self-correcting and has minimum error bound than relaxation method. 
Moreover, from section-4.12 we have seen that to solve a physical problem by the iteration method or the relaxation method it must first be formulated as a partial differential equation, whereas the Rayleigh-Ritz method gives an approximate solution without any such formulation. It is to be noted, however, that the Rayleigh-Ritz method is quite long and involves some complexity in the calculation.

Chapter-7: Conclusions. Thus, we can choose the iteration method as the best of among three methods and Rayleigh-Ritz method would probably the third one in practice. In chapter-5, from section-5.3 and section-5.5, we have seen that a two-point boundary value problem can be solved directly by finite-difference method and no other methods needs to its assistance, but the shooting method needs the help of one of other standard methods (i.e. Euler’s method, predictor-corrector method and Runge-Kutta method) after primary formulation. Thus, we can take finite-difference method as the better method between above two. Also, from the section-5.7, we have seen that Green’s function is applicable to solve a two-point boundary value problem numerically. Moreover, from section-5.8, we can conclude that multi-order (fourth order) two-point boundary value problem of various cases can be solved numerically by the help of the cubic B-spline method [25] with more accuracy. Finally, in chapter-6, we have proposed a modified form of Milne’s predictorcorrector method for solving ordinary differential equation of first order and first degree. Also, a utilized formula of standard 5-point formula and diagonal 5-point formula for solving partial differential equation of elliptic type have offered here. Now, the advantages, limitations and recommendations future research with aim of above two proposed methods are given below. Advantages of the Milne’s (modified) predictor-corrector formulae: 1. Milne’s (modified) predictor-corrector formulae estimate the value of y respecting the given value of x by means of five initial conditions, which is more contributive and logical. 2. Milne’s (modified) predictor-corrector formulae need to calculate up-to fifth order Newton’s formula of forward interpolation, which will give better accuracy. 3. 
At Milne’s (modified) predictor-corrector formulae the co-efficient of is zero, then the truncation error converging to zero, this will upgrade the level of accuracy of the method. Advantages of the surrounding 9-point formula: 1. Since surrounding 9-point formula depends upon all mesh points around it to determine any mesh point, it is more contributive and logical, which may give better accuracy. 2. The initial zero substitution may enable us to solve a bigger domain at which most of mesh points are absent. 3. Using of the Gauss-Seidal iteration formula may give the method a quick ending, this will save the estimation time. Study On Different Numerical Methods For Solving Differential Equations. [Page 119]
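The exact weights of the proposed surrounding 9-point formula are given in chapter-6 and are not reproduced in this conclusion. As a sketch of the general idea only — determining a mesh point from all eight surrounding points, starting from an initial zero substitution on the interior, and sweeping with Gauss-Seidel iteration — the Python code below uses the classical 9-point Laplace stencil (weight 4 for edge neighbours, 1 for diagonal neighbours, divisor 20), which may differ from the weights the thesis proposes.

```python
import numpy as np

def nine_point_laplace(u, tol=1e-6, max_iter=10_000):
    """Gauss-Seidel sweep with a 9-point stencil for Laplace's equation.

    Each interior point is determined from all eight surrounding mesh
    points.  The weights used here (edge neighbours x4, diagonal
    neighbours x1, divided by 20) are the classical 9-point stencil,
    shown for illustration only."""
    u = u.astype(float).copy()
    # Initial zero substitution: interior points start at zero,
    # only the boundary carries the given data.
    u[1:-1, 1:-1] = 0.0
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                edge = u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                diag = u[i-1, j-1] + u[i-1, j+1] + u[i+1, j-1] + u[i+1, j+1]
                new = (4.0 * edge + diag) / 20.0
                max_change = max(max_change, abs(new - u[i, j]))
                u[i, j] = new
        if max_change < tol:
            break
    return u

# Hypothetical example: 5x5 grid, top edge at 100, other edges at 0.
grid = np.zeros((5, 5))
grid[0, :] = 100.0
solution = nine_point_laplace(grid)
```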

Limitations of the Milne's (modified) predictor-corrector formulae:
1. The Milne's (modified) predictor-corrector formulae require one more initial condition than the previous formulae.
2. They need a little more calculation time than the previous formulae.

Limitations of the surrounding 9-point formula:
1. The surrounding 9-point formula is not applicable to domains having fewer than nine mesh (grid) points.
2. It can be used to solve partial differential equations of elliptic type only.

Recommendations for future research: The advantages mentioned above can be proved by supplying suitable applications and comparisons. To keep the thesis compact we have omitted these proofs, though in section-6.3 and section-6.5 we have shown some applications of these methods together with comments comparing them with exact solutions. These proofs should therefore be attempted in future work.

Further work that can be done:
1. To measure the efficiency of the Milne's (modified) predictor-corrector formulae and the surrounding 9-point formula by comparing them with all the previous methods.
2. To construct generalized predictor-corrector formulae for solving ordinary differential equations of first order and first degree, and to try to construct formulae similar to the surrounding 9-point formula for solving partial differential equations of parabolic and hyperbolic types.
3. To implement the Milne's (modified) predictor-corrector formulae and the surrounding 9-point formula on real-world problems.
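As a concrete starting point for such implementations, the classical fourth-order Milne predictor-corrector method — the standard textbook scheme that the proposed formulae modify, not the modified scheme of chapter-6 itself — can be sketched in Python as follows, with the starting values generated by the Runge-Kutta method. The test equation y' = -y is a hypothetical example.

```python
def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def milne(f, x0, y0, h, n_steps):
    """Classical Milne predictor-corrector for y' = f(x, y).

    Predictor: y[n+1] = y[n-3] + (4h/3)(2f[n-2] - f[n-1] + 2f[n])
    Corrector: y[n+1] = y[n-1] + (h/3)(f[n-1] + 4f[n] + f[n+1])
    The three extra starting values are generated with RK4."""
    xs = [x0 + i * h for i in range(n_steps + 1)]
    ys = [y0]
    for i in range(min(3, n_steps)):
        ys.append(rk4_step(f, xs[i], ys[i], h))
    for n in range(3, n_steps):
        f2, f1, f0 = f(xs[n-2], ys[n-2]), f(xs[n-1], ys[n-1]), f(xs[n], ys[n])
        y_pred = ys[n-3] + (4 * h / 3) * (2 * f2 - f1 + 2 * f0)   # predictor
        f_pred = f(xs[n+1], y_pred)
        ys.append(ys[n-1] + (h / 3) * (f1 + 4 * f0 + f_pred))      # corrector
    return xs, ys

# Hypothetical example: y' = -y, y(0) = 1; exact solution is exp(-x).
xs, ys = milne(lambda x, y: -y, 0.0, 1.0, 0.1, 10)
```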

References

[01]. ANTONY RALSTON, PHILIP RABINOWITZ, 1988. A first course in numerical analysis (McGraw-Hill Book Company). P.196
[02]. A. R. VASISTHA, VIPIN VASISTHA, 1999. Numerical analysis (Kedar Nath-Ram Nath, Meerut). P.265
[03]. BRIAN BRADIE, 2007. A friendly introduction to numerical analysis (Pearson Prentice Hall, New Delhi). P.588
[04]. CURTIS F. GERALD, PATRICK O. WHEATLEY, 1970. Applied numerical analysis (Addison-Wesley Publishing Company). P.340
[05]. Dr. B. D. SHARMA, 2006. Differential equations (Kedar Nath-Ram Nath, Meerut). P.01
[06]. Dr. B. S. GOEL, Dr. S. K. MITTAL, 1995. Numerical analysis (Pragati Prakashan, India). P.518
[07]. E. L. REISS, A. J. CALLEGARI, D. S. AHLUWALIA, 1976. Ordinary differential equations with applications (Holt, Rinehart and Winston, New York).
[08]. F. LANG, XIAO-PING XU, 2011. A new cubic B-spline method for linear fifth order boundary value problems (Journal of Applied Mathematics and Computing, 36 (2011)). P.101
[09]. FRANCIS SCHEID, Ph.D., 1988. Numerical analysis (Schaum's Outline Series, McGraw-Hill). P.471
[10]. IAN N. SNEDDON, 1957. Elements of partial differential equations (McGraw-Hill Book Company, Inc.). P.327
[11]. JAMES B. SCARBOROUGH, Ph.D., 1966. Numerical mathematical analysis (Oxford & IBH Publishing Co. Pvt. Ltd.). P.310
[12]. J. N. SHARMA, 2004. Numerical methods for engineers and scientists (Narosa Publishing House, New Delhi). P.222
[13]. M. DEHGHAN, M. LAKESTANI, 2008. Numerical solution of nonlinear system of second-order boundary value problems using cubic B-spline scaling functions (International Journal of Computer Mathematics, 85(9)). P.1455


[14]. M. D. RAISINGHANIA, S. CHAND, 2007. Integral equations and boundary value problems (S. Chand and Company Ltd.). P.11.5
[15]. M. KUMAR, Y. GUPTA, 2010. Methods for solving singular boundary value problems using splines: a review (Journal of Applied Mathematics and Computing, 32 (2010)). P.265
[16]. N. CAGLAR, H. CAGLAR, K. ELFAITURI, 2006. B-spline interpolation compared with finite difference, finite element and finite volume methods applied to two-point boundary value problems (Applied Mathematics and Computation, 175 (2006)). P.72
[17]. P. M. PRENTER, 1989. Splines and variational methods (John Wiley & Sons, New York).
[18]. P. N. CHATTERJI, 1999. Numerical analysis (Rajhans Prakashan Mandir, Meerut). P.528
[19]. R. A. USMANI, 1978. Discrete methods for boundary-value problems with engineering applications (Mathematics of Computation, 32 (1978)). P.1087
[20]. SAMUEL D. CONTE, CARL DE BOOR, 1980. Elementary numerical analysis (McGraw-Hill Book Company). P.432
[21]. STEVEN C. CHAPRA, Ph.D., RAYMOND P. CANALE, Ph.D., 1990. Numerical methods for engineers (McGraw-Hill Book Company). P.812
[22]. S. BALACHANDRA RAO, C. K. SHANTHA, 2000. Numerical methods (Universities Press India Ltd.). P.359
[23]. S. S. SASTRY, 2002. Introductory methods of numerical analysis (Prentice-Hall of India Private Limited). P.267
[24]. WARD CHENEY, DAVID KINCAID, 1980. Numerical mathematics and computing (Brooks/Cole Publishing Company, Monterey, California). P.362
[25]. Y. GUPTA, P. K. SRIVASTAVA, 2011. International Journal of Computer Technology and Application, Vol. 2(5). P.1426
