CHAPTER ONE
INTRODUCTION
1.1 Background of the Study
Simpson's rule for estimating definite integrals is named after Thomas Simpson (1710-1761), who published it in 1743. However, Simpson was not the first to discover the rule; Bonaventura Cavalieri (1598-1647) found a version of it as early as 1639, and James Gregory (1638-1675) published it in 1668.
In addition, Simpson's one-third rule works only if an even number of subintervals is used.
CHAPTER TWO
PRELIMINARY
2.1 The Definite Integral
2.1.1 Partitions
Consider any region R bounded by the graph of a non-negative function f that is continuous on an interval [a, b], by the x-axis, and by the lines x = a and x = b, where a < b. We call R the region between the graph of f and the x-axis on [a, b].
Definition 2.1 A partition of [a, b] is a finite set P of points x₀, x₁, …, xₙ such that a = x₀ < x₁ < ⋯ < xₙ = b. We describe P by writing P = {x₀, x₁, …, xₙ}.
Example 2.1
i. {−1, −1/2, 0, 3/2, 2, 3} is a partition of [−1, 3].
ii. {−1/2, 0, 3} is not a partition of [−1, 3] since −1 is not in P.
Let Δxₖ = xₖ − xₖ₋₁ for k = 1, 2, …, n. Assuming that f is continuous on [a, b], the maximum and minimum value theorems guarantee that for each k between 1 and n there exists a smallest value f(tₖ) of f on the subinterval [xₖ₋₁, xₖ]. For each k, the inscribed rectangle Rₖ has base [xₖ₋₁, xₖ] with length Δxₖ and has height f(tₖ). Hence the area of Rₖ is f(tₖ)Δxₖ for each k between 1 and n.
Notes:
1. The sum of the areas of the inscribed rectangles R₁, R₂, …, Rₙ should be no larger than the area of R.
2. We denote this sum by L_f(P) and call it the lower sum of f associated with the partition P.
Thus,
L_f(P) = f(t₁)Δx₁ + f(t₂)Δx₂ + ⋯ + f(tₙ)Δxₙ.
Example 2.2 Let f be increasing on [0, 3]. Find L_f(P) for the partition P = {0, 1/2, 1, 3/2, 2, 5/2, 3}.
Since f is increasing on [0, 3], the minimum value of f on each subinterval is attained at the left endpoint of the subinterval. Hence, for the partition P,
L_f(P) = (1/2)[f(0) + f(1/2) + f(1) + f(3/2) + f(2) + f(5/2)].
Similarly, for each k between 1 and n there exists a largest value f(sₖ) of f on the subinterval [xₖ₋₁, xₖ]. For each k, the circumscribed rectangle has base [xₖ₋₁, xₖ] and height f(sₖ), and the sum of the areas of the circumscribed rectangles should be no smaller than the area of R, i.e.
f(s₁)Δx₁ + f(s₂)Δx₂ + ⋯ + f(sₙ)Δxₙ ≥ area of R.
We denote this sum by U_f(P) and call it the upper sum of f associated with the partition P. Thus,
U_f(P) = f(s₁)Δx₁ + f(s₂)Δx₂ + ⋯ + f(sₙ)Δxₙ.
Note: For every partition P of [a, b], L_f(P) ≤ U_f(P).
Since f is increasing on [0, 3], the maximum value of f on each subinterval is attained at the right endpoint of the subinterval. Hence, for the partition P of Example 2.2,
U_f(P) = (1/2)[f(1/2) + f(1) + f(3/2) + f(2) + f(5/2) + f(3)].
Definition 2.2 If there is exactly one number I such that L_f(P) ≤ I ≤ U_f(P) for every partition P of [a, b], then f is said to be integrable on [a, b], and I is called the definite integral of f from a to b, denoted by ∫_a^b f(x) dx.
Remark: The variable x appearing in the integral may be replaced by any other variable, such as t or u. This means that
∫_a^b f(x) dx = ∫_a^b f(t) dt = ∫_a^b f(u) du.
Example 2.4: Evaluate the integral ∫₀¹ (6x² + 5eˣ) dx.
Solution: First we break the integral into its components and then integrate:
∫₀¹ (6x² + 5eˣ) dx = ∫₀¹ 6x² dx + ∫₀¹ 5eˣ dx = [2x³]₀¹ + [5eˣ]₀¹ = 2 + (5e − 5) = 5e − 3.
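The value 5e − 3 can also be approached numerically with the lower and upper sums of Section 2.1.1. The sketch below is illustrative only; the function name f and the choice of n = 1000 equal subintervals are my own.

```python
# Bracket the integral of Example 2.4 between a lower and an upper sum.
import math

def f(x):
    return 6 * x**2 + 5 * math.exp(x)

a, b, n = 0.0, 1.0, 1000              # partition [0, 1] into n equal subintervals
h = (b - a) / n
xs = [a + k * h for k in range(n + 1)]

# f is increasing on [0, 1], so the minimum on each subinterval is at the left
# endpoint and the maximum is at the right endpoint.
lower = sum(f(xs[k]) * h for k in range(n))        # lower sum L_f(P)
upper = sum(f(xs[k + 1]) * h for k in range(n))    # upper sum U_f(P)
exact = 5 * math.e - 3

print(lower, exact, upper)   # the exact value lies between the two sums
```

As n grows, both sums close in on 5e − 3, which is exactly how the definite integral was characterized above.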
2.2 The Double Integral
Definition 2.3 The double integral of a function f(x, y) over a region D in ℝ² is denoted by
∬_D f(x, y) dx dy.
When D is the rectangle [a, b] × [c, d], we first evaluate the inner integral ∫_a^b f(x, y) dx with y held fixed; its value depends on y, so we obtain a new function of y. Integrating this function with respect to y gives ∫_c^d [∫_a^b f(x, y) dx] dy.
Example 2.5: Evaluate ∫₀¹ ∫₀² xy² dy dx.
Solution: ∫₀¹ ∫₀² xy² dy dx = ∫₀¹ [xy³/3]₀² dx = ∫₀¹ (8x/3) dx = (1/3)[8x²/2]₀¹ = 4/3.
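As a quick cross-check of Example 2.5, the iterated integral can be evaluated symbolically. This is only a sketch and assumes the SymPy library is available.

```python
# Symbolic evaluation of the iterated integral of Example 2.5.
import sympy as sp

x, y = sp.symbols('x y')
# integrate first with respect to y over [0, 2], then with respect to x over [0, 1]
value = sp.integrate(x * y**2, (y, 0, 2), (x, 0, 1))
print(value)   # 4/3
```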
2.3 Interpolation
Interpolation is the process of estimating the value of a function at a point from its values at nearby points. That is, interpolation is the technique of estimating the value of a function for any intermediate value of the independent variable. The process of computing the value of a function for a value of the independent variable outside the given range is called extrapolation. Here, interpolation denotes the method of computing the value of the function y = f(x) for any given value of the independent variable x when a set of values of y for certain values of x is known or given. Hence, if (xᵢ, yᵢ), i = 0, 1, 2, …, n are the n + 1 given data points of the function
y = f(x), then the process of finding the value of y corresponding to any value of x between x₀ and xₙ is called interpolation.
If the function f(x) is known explicitly, then the value of y corresponding to any value of x can easily be obtained. On the other hand, if the function f(x) is not known, then it is very hard to find the exact form of f(x) from the tabulated values (xᵢ, yᵢ). In such cases, the function f(x) can be replaced by a simpler function, say p(x), which has the same values as f(x) at x₀, x₁, …, xₙ. The function p(x) is called the interpolating or smoothing function, and any other value can be computed from p(x).
If p(x) is a polynomial, then p(x) is called the interpolating polynomial and the process of computing intermediate values of y = f(x) is called polynomial interpolation.
There are two main uses of these approximating polynomials.
I. The first use is to reconstruct the function f(x) when it is not given explicitly and only values of f(x) and/or certain of its derivatives are given at a set of distinct points called nodes or tabular points.
II. The second use is to perform the operations intended for f(x), such as determination of roots, differentiation and integration, using the approximating polynomial P(x).
The approximating polynomial P(x) can be used to predict the value of f(x) at a non-tabular point. The deviation of P(x) from f(x), that is, f(x) − P(x), is called the error of approximation.
Let f(x) be a continuous function defined on some interval [a, b] and prescribed at n + 1 distinct tabular points x₀, x₁, …, xₙ such that a = x₀ < x₁ < x₂ < ⋯ < xₙ = b. The distinct tabular points x₀, x₁, …, xₙ may be non-equispaced or equispaced, that is, xₖ₊₁ − xₖ = h for k = 0, 1, 2, …, n − 1. The problem of polynomial approximation is to find a polynomial pₙ(x), of degree ≤ n, which fits the given data exactly, that is,
pₙ(xᵢ) = f(xᵢ), i = 0, 1, 2, …, n. ………………(*)
The polynomial 𝑝𝑛 (𝑥) is called the interpolating polynomial. The conditions given in
(*) are called the interpolating conditions.
Remark: Through two distinct points we can construct a unique polynomial of degree 1 (a straight line). Through three distinct points we can construct a unique polynomial of degree 2 (a parabola) or, if the points are collinear, a unique polynomial of degree 1 (a straight line). That is, through three distinct points we can construct a unique polynomial of degree ≤ 2. In general, through n + 1 distinct points we can construct a unique polynomial of degree ≤ n. The interpolation polynomial fitting a given data set is unique; we may express it in various forms, but it is otherwise the same polynomial.
For example, f(x) = x² − 2x − 1 can be written as
x² − 2x − 1 = −2 + (x − 1) + (x − 1)(x − 2).
2.3.1 First Degree Interpolation
Given two data points (x₀, f(x₀)) and (x₁, f(x₁)), we can approximate f(x) by a polynomial of degree at most one, say p₁(x), given by
f(x) ≈ p₁(x) = f(x₀) + ((f(x₁) − f(x₀))/(x₁ − x₀))(x − x₀).
For example, for the tabulated data

x      f(x)      δ′f(x)
1       1
                  1/7
8       2

the first divided difference is δ′f(x) = (2 − 1)/(8 − 1) = 1/7, so p₁(x) = 1 + (1/7)(x − 1).
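Assuming the tabulated points above are (1, 1) and (8, 2) as reconstructed, the following sketch evaluates the first degree interpolating polynomial; the function name p1 is my own choice.

```python
# First degree (linear) interpolation through two tabulated points.
def p1(x, x0, y0, x1, y1):
    """Linear interpolating polynomial through (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) / (x1 - x0) * (x - x0)

# divided difference (f(x1) - f(x0)) / (x1 - x0) = 1/7, as in the table
print(p1(4, 1, 1, 8, 2))   # estimate of f(4): 1 + (1/7)*(4 - 1) = 1.428571...
```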
2.3.2. 𝒏𝒕𝒉 Degree Interpolation
Given three data points (x₀, f(x₀)), (x₁, f(x₁)) and (x₂, f(x₂)), we can approximate f(x) by a polynomial of degree at most two, say p₂(x), given by
p₂(x) = f(x₀) + f[x₀, x₁](x − x₀) + f[x₀, x₁, x₂](x − x₀)(x − x₁),
where f[x₀, x₁] = (f(x₁) − f(x₀))/(x₁ − x₀), f[x₁, x₂] = (f(x₂) − f(x₁))/(x₂ − x₁) and f[x₀, x₁, x₂] = (f[x₁, x₂] − f[x₀, x₁])/(x₂ − x₀) are the divided differences.
In general, given n + 1 data points (xᵢ, yᵢ), i = 0, 1, 2, …, n that satisfy the relation y = f(x), an nth degree interpolating polynomial is defined by
pₙ(x) = f(x₀) + f[x₀, x₁](x − x₀) + f[x₀, x₁, x₂](x − x₀)(x − x₁) + ⋯ + f[x₀, x₁, …, xₙ](x − x₀)(x − x₁)⋯(x − xₙ₋₁).
Example 2.7: Find a polynomial of degree two that contains the points (2, 4), (3, 6) and (7, 9), and use it to estimate f(9).
Solution: The divided differences are
f[x₀, x₁] = (6 − 4)/(3 − 2) = 2,
f[x₁, x₂] = (9 − 6)/(7 − 3) = 3/4,
f[x₀, x₁, x₂] = (3/4 − 2)/(7 − 2) = −1/4.
Hence
p₂(x) = 4 + 2(x − 2) − (1/4)(x − 2)(x − 3)
⇒ f(9) ≈ p₂(9) = 4 + 2(7) − (1/4)(7)(6) = 15/2.
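Example 2.7 uses the divided-difference (Newton) form of the interpolating polynomial. The sketch below computes the divided-difference coefficients and evaluates the polynomial at x = 9; the helper names newton_coeffs and newton_eval are my own.

```python
# Newton divided-difference interpolation applied to the data of Example 2.7.
def newton_coeffs(xs, ys):
    """Return the divided-difference coefficients f[x0], f[x0,x1], ..."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(x, xs, coef):
    """Evaluate the Newton-form polynomial at x using nested multiplication."""
    result = coef[-1]
    for i in range(len(xs) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, ys = [2, 3, 7], [4, 6, 9]
coef = newton_coeffs(xs, ys)      # [4, 2.0, -0.25], matching the hand computation
print(newton_eval(9, xs, coef))   # 7.5, i.e. 15/2
```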
The first degree Lagrange interpolation polynomial for two points x₀ and x₁ is given by
P₁(x) = ((x − x₁)/(x₀ − x₁)) f(x₀) + ((x − x₀)/(x₁ − x₀)) f(x₁).
It is shortly written as
P₁(x) = L₀(x) f(x₀) + L₁(x) f(x₁),
where L₀(x) = (x − x₁)/(x₀ − x₁) and L₁(x) = (x − x₀)/(x₁ − x₀).
The quadratic Lagrange interpolating polynomial for three arbitrary points x₀, x₁ and x₂ is given by
P₂(x) = L₀(x) f(x₀) + L₁(x) f(x₁) + L₂(x) f(x₂),
where
L₀(x) = (x − x₁)(x − x₂)/((x₀ − x₁)(x₀ − x₂)),
L₁(x) = (x − x₀)(x − x₂)/((x₁ − x₀)(x₁ − x₂)) and
L₂(x) = (x − x₀)(x − x₁)/((x₂ − x₀)(x₂ − x₁)).
In general, for n + 1 arbitrarily spaced points x₀, x₁, x₂, …, xₙ, the nth degree Lagrange interpolation polynomial is given by
Pₙ(x) = ∑_{i=0}^{n} Lᵢ(x) f(xᵢ),
where
Lᵢ(x) = [(x − x₀)(x − x₁)⋯(x − xᵢ₋₁)(x − xᵢ₊₁)⋯(x − xₙ)] / [(xᵢ − x₀)(xᵢ − x₁)⋯(xᵢ − xᵢ₋₁)(xᵢ − xᵢ₊₁)⋯(xᵢ − xₙ)]
= ∏_{j=0, j≠i}^{n} (x − xⱼ)/(xᵢ − xⱼ).
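The general formula above translates directly into code. The sketch below (the function name lagrange_interpolate is my own) evaluates the Lagrange polynomial for given nodes; it is applied here to the data of Example 2.8 below.

```python
# Evaluate the nth degree Lagrange interpolating polynomial at a point x.
def lagrange_interpolate(x, xs, ys):
    """Lagrange polynomial through the points (xs[i], ys[i]), evaluated at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        Li = 1.0
        for j in range(n):
            if j != i:
                Li *= (x - xs[j]) / (xs[i] - xs[j])   # basis polynomial L_i(x)
        total += Li * ys[i]
    return total

# Data of Example 2.8: (0, 0), (1/6, 1/2), (1/2, 1); there p2(x) = (7/2)x - 3x^2
print(lagrange_interpolate(0.25, [0, 1/6, 1/2], [0, 1/2, 1]))   # 0.6875 = 7/8 - 3/16
```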
Example 2.8: Find the second order Lagrange interpolating polynomial that fits the data points (0, 0), (1/6, 1/2) and (1/2, 1).
Solution: Here x₀ = 0, x₁ = 1/6 and x₂ = 1/2, so
p₂(x) = [(x − 1/6)(x − 1/2)/((0 − 1/6)(0 − 1/2))] f(x₀) + [x(x − 1/2)/((1/6 − 0)(1/6 − 1/2))] f(1/6) + [x(x − 1/6)/((1/2 − 0)(1/2 − 1/6))] f(1/2)
= 0 + x(x − 1/2)/(−1/18) · (1/2) + x(x − 1/6)/(1/6) · 1
= −9(x² − (1/2)x) + 6(x² − (1/6)x)
= −9x² + (9/2)x + 6x² − x
= (7/2)x − 3x².
Example 2.9: Using Lagrange's interpolation formula, find P₃(x) given that

xᵢ       −1    0    1    2
f(xᵢ)     1    1    1    5

Solution: The Lagrange basis polynomials are
L₀(x) = x(x − 1)(x − 2)/[(−1)(−2)(−3)] = −(1/6)(x³ − 3x² + 2x),
L₁(x) = (x + 1)(x − 1)(x − 2)/[(1)(−1)(−2)] = (1/2)(x³ − 2x² − x + 2),
L₂(x) = (x + 1)x(x − 2)/[(2)(1)(−1)] = −(1/2)(x³ − x² − 2x),
L₃(x) = (x + 1)x(x − 1)/[(3)(2)(1)] = (1/6)(x³ − x).
Therefore, P₃(x) = L₀(x) f(x₀) + L₁(x) f(x₁) + L₂(x) f(x₂) + L₃(x) f(x₃)
= −(1/6)(x³ − 3x² + 2x)(1) + (1/2)(x³ − 2x² − x + 2)(1) − (1/2)(x³ − x² − 2x)(1) + (1/6)(x³ − x)(5)
= (−1/6 + 1/2 − 1/2 + 5/6)x³ + (1/2 − 1 + 1/2)x² + (−1/3 − 1/2 + 1 − 5/6)x + 1
= (2/3)x³ − (2/3)x + 1.
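A quick check that the polynomial found in Example 2.9 reproduces the tabulated values (a sketch; P3 is simply written out explicitly):

```python
# Verify the interpolating polynomial of Example 2.9 at the given nodes.
def P3(x):
    return (2/3) * x**3 - (2/3) * x + 1

for xi, fi in [(-1, 1), (0, 1), (1, 1), (2, 5)]:
    print(xi, P3(xi), fi)   # P3(xi) agrees with f(xi) at every node, up to rounding
```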
2.4 Numerical Integration
The general problem of numerical integration is to find an approximate value of the definite
integral.
I = ∫_a^b f(x) dx.
The most common numerical integration schemes are the Newton-Cotes formulas. They are
based on the strategy of replacing a complicated function or tabulated data with an
approximating (or interpolating) function that is easy to integrate.
Thus
I = ∫_a^b f(x) dx ≅ ∫_a^b pₙ(x) dx,
where n is the order of the polynomial. The numerical integration formulas include the trapezoidal rule, Simpson's one-third rule and Simpson's three-eighth rule.
For the trapezoidal rule, f(x) is replaced by a first degree polynomial, so that
I = ∫_a^b f(x) dx ≅ ∫_a^b p₁(x) dx,
where
p₁(x) = f(a) + ((f(b) − f(a))/(b − a))(x − a),
which is the linear interpolating polynomial. Now consider the following graph.
[Figure: the graph of f on [a, b] and the straight line joining (a, f(a)) and (b, f(b)).]
The area of the region under this straight line and above the 𝑥-axis is an estimate of the integral
of 𝑓(𝑥) between the limits 𝑎 and 𝑏.
Thus
I = ∫_a^b f(x) dx ≅ ∫_a^b [f(a) + ((f(b) − f(a))/(b − a))(x − a)] dx
= [f(a)x + ((f(b) − f(a))/(b − a))(x²/2 − ax)] evaluated from a to b
= f(a)(b − a) + ((f(b) − f(a))/(b − a))[b²/2 − ab − a²/2 + a²]
= f(a)(b − a) + ((f(b) − f(a))/(b − a))[(b − a)(a + b)/2 − a(b − a)]
= f(a)(b − a) + (f(b) − f(a))[(a + b)/2 − a]
= f(a)(b − a) + (f(b) − f(a))(b − a)/2
= (b − a)[f(a) + (f(b) − f(a))/2]
= ((b − a)/2)(f(a) + f(b)).
Therefore, I = ∫_a^b f(x) dx ≅ ((b − a)/2)(f(a) + f(b)), which is called the trapezoidal rule. Equivalently,
I = ∫_a^b f(x) dx ≅ (h/2)(f(a) + f(b)), where h = b − a.
An estimate for the local truncation error of the trapezoidal rule can be given by
E_t = −(h³/12) f″(ε) or E_t = −((b − a)³/12) f″(ε),
for some ε between a and b.
Thus, if the function being integrated is linear, the trapezoidal rule is exact. Otherwise, for functions with nonzero second and higher-order derivatives, some error occurs.
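The exactness claim is easy to illustrate. In the sketch below (the function name trapezoid is my own), the single-application trapezoidal rule reproduces the integral of a linear function exactly but overestimates the integral of x², whose second derivative is positive.

```python
# Single-application trapezoidal rule: exact for linear integrands.
def trapezoid(f, a, b):
    # (b - a)/2 * (f(a) + f(b))
    return (b - a) / 2 * (f(a) + f(b))

print(trapezoid(lambda x: 3 * x + 1, 0, 2))   # 8.0, equal to the exact integral
print(trapezoid(lambda x: x**2, 0, 2))        # 4.0, versus the exact value 8/3
```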
One way to improve the accuracy of the trapezoidal rule is to divide the integration interval from a to b into a number of segments. If there are n + 1 equally spaced base points (x₀, x₁, …, xₙ), then there are n segments of equal width
h = (b − a)/n.
If 𝑎 and 𝑏 are denoted by 𝑥0 and 𝑥𝑛 respectively, then the total integral can be represented as
I = ∫_{x₀}^{x₁} f(x) dx + ∫_{x₁}^{x₂} f(x) dx + ⋯ + ∫_{xₙ₋₁}^{xₙ} f(x) dx
≅ (h/2)[f(x₀) + f(x₁)] + (h/2)[f(x₁) + f(x₂)] + ⋯ + (h/2)[f(xₙ₋₁) + f(xₙ)]
= (h/2)[f(x₀) + f(x₁) + f(x₁) + f(x₂) + ⋯ + f(xₙ₋₁) + f(xₙ)]
= (h/2)[f(x₀) + 2(f(x₁) + f(x₂) + ⋯ + f(xₙ₋₁)) + f(xₙ)]
= (h/2)[f(x₀) + f(xₙ) + 2 ∑_{i=1}^{n−1} f(xᵢ)]
or I ≅ ((b − a)/(2n))[f(x₀) + f(xₙ) + 2 ∑_{i=1}^{n−1} f(xᵢ)],
which is called the multiple-application trapezoidal rule or composite integration method.
Note: an error estimate for the multiple-application trapezoidal rule can be obtained by summing the individual errors for each segment, to give
E_t = −((b − a)³/(12n³)) ∑_{i=1}^{n} f″(εᵢ).
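A minimal sketch of the multiple-application (composite) trapezoidal rule derived above; the function name composite_trapezoid and the test integrand are my own choices.

```python
# Composite trapezoidal rule over n equal subintervals.
import math

def composite_trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n equal subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return h / 2 * s

print(composite_trapezoid(math.sin, 0, math.pi, 100))   # about 1.99984; the exact value is 2
```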
Example 2.10: Evaluate the integral I = ∫₀¹ dx/(1 + x) using the composite trapezoidal and Simpson's rules, taking 8 subintervals.
Solution: When n = 8, we have h = 1/8 and the nine nodes 0, 1/8, 1/4, 3/8, 1/2, 5/8, 3/4, 7/8 and 1.
So, we get the table below
x        0      1/8     1/4     3/8     1/2     5/8     3/4     7/8     1
f(x)     1      8/9     4/5     8/11    2/3     8/13    4/7     8/15    1/2
Here we have eight subintervals for the trapezoidal rule and four pairs of subintervals for Simpson's rule. So we get
I_T = (1/16)[f(0) + f(1) + 2 ∑_{i=1}^{7} f(i/8)]
= (1/16)[1 + 1/2 + 2(f(1/8) + f(1/4) + f(3/8) + f(1/2) + f(5/8) + f(3/4) + f(7/8))]
= (1/16)[3/2 + 2(8/9 + 4/5 + 8/11 + 2/3 + 8/13 + 4/7 + 8/15)]
= 0.694122
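The Simpson value asked for in Example 2.10 can be obtained with a composite Simpson's one-third rule over the same nine nodes (n = 8, which is even, as the rule requires). The sketch below is my own implementation; it complements the trapezoidal value 0.694122 computed above.

```python
# Composite Simpson's one-third rule applied to Example 2.10.
def composite_simpson(f, a, b, n):
    # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd-indexed nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even-indexed interior nodes
    return h / 3 * s

f = lambda x: 1 / (1 + x)
print(composite_simpson(f, 0, 1, 8))   # about 0.693155, close to the exact value ln 2 = 0.693147...
```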
CHAPTER THREE
Consider the double integral
I = ∫_c^d (∫_a^b f(x, y) dx) dy ……………… (1)
In the trapezoidal method, we first evaluate the inner integral by the trapezoidal rule. That is,
I = ∫_c^d (∫_a^b f(x, y) dx) dy ≅ ((b − a)/2) ∫_c^d [f(a, y) + f(b, y)] dy
≅ ((b − a)(d − c)/4)[f(a, c) + f(a, d) + f(b, c) + f(b, d)]
= (hk/4)[f(a, c) + f(a, d) + f(b, c) + f(b, d)],
where h = b − a and k = d − c.
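A minimal sketch of this single-panel trapezoidal rule for double integrals; the function name trapezoid_2d is my own, and it is applied here to the integrand of Example 3.1 below for comparison.

```python
# Trapezoidal rule in both directions over the rectangle [a, b] x [c, d].
def trapezoid_2d(f, a, b, c, d):
    # (hk/4) * sum of the four corner values
    h = b - a
    k = d - c
    return h * k / 4 * (f(a, c) + f(a, d) + f(b, c) + f(b, d))

f = lambda x, y: 1 / (x + y)
print(trapezoid_2d(f, 1, 2, 1, 1.5))   # about 0.189881, a rougher estimate than Simpson's rule gives
```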
If we apply Simpson's rule to evaluate (1), with h = (b − a)/2 and k = (d − c)/2, then we obtain the following result.
I = ∫_c^d ∫_a^b f(x, y) dx dy ≅ (hk/9)[f(a, c) + f(a, d) + f(b, c) + f(b, d) + 4{f(a, c + k) + f(a + h, c) + f(a + h, d) + f(b, c + k)} + 16 f(a + h, c + k)]
Example 3.1: Evaluate the integral I = ∫_{y=1}^{1.5} ∫_{x=1}^{2} (1/(x + y)) dx dy using Simpson's rule with h = 0.5 and k = 0.25.
Solution: I = ∫_1^{1.5} ∫_1^2 (1/(x + y)) dx dy ≅ (hk/9)[f(1, 1) + f(2, 1) + f(1, 1.5) + f(2, 1.5) + 4{f(1.5, 1) + f(1, 1.25) + f(1.5, 1.5) + f(2, 1.25)} + 16 f(1.5, 1.25)]
= (1/72)[1/2 + 1/3 + 2/5 + 2/7 + 4(2/5 + 4/9 + 1/3 + 4/13) + 16 × (4/11)]
= 0.184432
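A sketch of the nine-point Simpson's rule used in Example 3.1 (the function name simpson_2d is my own); it reproduces the value 0.184432, while the exact value of the integral is about 0.18440.

```python
# Simpson's one-third rule in both x and y over the rectangle [a, b] x [c, d].
def simpson_2d(f, a, b, c, d):
    h = (b - a) / 2
    k = (d - c) / 2
    corners = f(a, c) + f(a, d) + f(b, c) + f(b, d)
    edges = f(a, c + k) + f(b, c + k) + f(a + h, c) + f(a + h, d)
    center = f(a + h, c + k)
    return h * k / 9 * (corners + 4 * edges + 16 * center)

f = lambda x, y: 1 / (x + y)
print(simpson_2d(f, 1, 2, 1, 1.5))   # about 0.184432, as in Example 3.1
```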
CHAPTER FOUR
CONCLUSION
The study was prepared to investigate the theoretical background of Simpson's rule and its application to evaluating double integrals.
As mentioned, Simpson's method is used to evaluate multiple definite integrals. To approximate the value of a definite multiple integral, particularly a double integral, Simpson's rule first divides the interval of integration into n equal parts and then integrates using the formula. We have implemented this method on one problem to show the efficiency and accuracy of the method.
Generally, we have shown that Simpson's method is capable of evaluating a double integral whose limits of integration are given. This method provides an alternative and supplementary technique to the conventional ways of evaluating double integrals. It is a practical method, easily adaptable on a computer to solve such problems with a modest amount of problem preparation.
Table of Contents
CHAPTER ONE
INTRODUCTION
PRELIMINARY
CONCLUSION