Numerical Method 4th Semester Study Notes Nepal
Maxima and minima of tabulated function ....................................................................................... 68
Differentiating continuous function .................................................................................................. 70
forward Difference Quotient ......................................................................................................... 70
Central Difference Quotient .......................................................................................................... 71
Numerical Integration ....................................................................................................................... 73
Newtons Cotes Formula ................................................................................................................ 73
Trapezoidal rule (2 point formula) ................................................................................................ 74
Composite Trapezoidal Rule ......................................................................................................... 75
Simpson’s 𝟏𝟑 rule( 3 point formula) ............................................................................................ 76
Simpson’s 3/8 rule( 4 point rule) .................................................................................................. 77
Romberg integration formula/ Richardson’s deferred approach to the limit or Romberg method 80
Gaussian integration: ........................................................................................................................ 82
Changing limits of Integration ...................................................................................................... 84
Values for gaussian quadrature ..................................................................................................... 86
Chapter 4: solution of Linear Algebraic Equations............................................................................... 88
Matrices and properties ..............................................................................................
Existence of solution ..................................................................................................................... 89
Methods of solutions ......................................................................................................................... 90
Elimination method ....................................................................................................................... 90
Gauss Elimination Method ........................................................................................................... 91
Gauss Jordan Method .................................................................................................................... 94
The inverse of a matrix ................................................................................................................. 97
Method of factorization................................................................................................................. 98
Iterative methods ............................................................................................................................. 106
2.Gauss Seidel Iteration method ................................................................................................. 110
3.Relaxation Iterative method: .................................................................................................... 114
Now, the increments in x, y, z are dx, dy, dz so, dx= -R1/-10 , dy = -R2/-9 and dz = -R3/-11 .... 115
Iterative Table: ............................................................................................................................ 115
Power method: ............................................................................................................................ 115
Chapter 5 Ordinary Differential Equation/ Solution of ordinary differential Equation ...................... 118
Initial value problem ....................................................................................................................... 119
Solution of ordinary differential equations ..................................................................................... 119
1. Taylor’s series ..................................................................................................................... 120
2. Euler’s method ................................................................................................................... 123
3. Heun’s method .................................................................................................................... 125
Runge Kutta method ................................................................................................................... 131
Higher order equation ................................................................................................................. 137
RK method for second order differential equations .................................................................... 138
Picard method of successive approximation ................................................................................... 141
Shooting method ............................................................................................................................. 144
Chapter 6: Solution of partial differential equations ........................................................................... 153
Finite difference method: ................................................................................................................ 153
Elliptical equations...................................................................................................................... 155
Chapter 6: Solution of partial differential equations ........................................................................... 170
Finite difference method ................................................................................................................. 170
Elliptical equations...................................................................................................................... 172
Chapter 1: Solution of Nonlinear Equations
Introduction
Example:
We need to find the cube root of 2, i.e. ∛2, using only arithmetic operations. One
way of solving this could be the trial and error method. We choose a value and
multiply it by itself 3 times so that the result is close to 2, and take a new
approximation each time to get closer to the number 2.
Now we can say that the cube root of 2 lies between 1.2 and 1.26, and we can choose
the value according to our need, depending on how accurate we need it to be. In the above example
we calculated the cube root of 2 just by using simple arithmetic and comparison.
result in terms of a mathematical function that can be evaluated for specific instances.
This also has the advantage that the behaviour and properties of the function are
often apparent; this is not the case for a numerical answer. However, numerical
results can be plotted to show some of the behaviour of the solution.
While the numerical results are an approximation, they can usually be made as accurate
as needed. The necessary accuracy is of course determined by the application and the
need. To achieve high accuracy many operations must be carried out, but as these
operations are carried out by the computer, this is not a big problem.
Here 𝑦0′ , 𝑦0′′ , 𝑦0′′′ … can be found using equation (1) and its successive
differentiation at 𝑥 = 𝑥0 . The series in (4) can be truncated at any stage if ‘h’ is
small. Now having obtained 𝑦1 we can calculate 𝑦1′ , 𝑦1′′ , 𝑦1′′′ from equation (1) at
𝑥 = 𝑥0 + h
Proceeding further we get
y_n = y_{n-1} + h y'_{n-1}/1! + h² y''_{n-1}/2! + h³ y'''_{n-1}/3! + …   (6)
If a Taylor series is truncated while there are still non-zero derivatives of higher
order the truncated power series will not be exact. The error term for a truncated
Taylor Series can be written in several ways but the most useful form when the
series is truncated after 𝑛𝑡ℎ term is
Example:
Using the Taylor series method solve dy/dx = x² − y, y(0) = 1 at x = 0.1, 0.2, 0.3 and 0.4.
Compare the values with the exact solution.
Solution
Given y′ = x² − y, y(0) = 1.
Now, y′ = x² − y, so y₀′ = x₀² − y₀ = 0 − 1 = −1
By Taylor Series
=0.90516
Now
By Taylor Series
=0.821266
Now
By Taylor Series
Now
By Taylor Series
Figure 1.1: Taxonomy of errors. The total error has contributions from human imperfection, the computing machine, the measuring method, and the numerical method.
Modeling errors
Mathematical models are the basis for numerical solution. They are formulated
to represent a physical process using certain parameters involved in the situation.
In many situations it is impractical or impossible to include all features of the real
problem, so we make certain assumptions for easier calculation. For example, while
developing a model for calculating the force acting on a falling body, we may not
be able to estimate the air resistance coefficient properly or determine the
direction and magnitude of the wind force acting on the body, and so on. To simplify
the model we may assume that the force due to air resistance is linearly
proportional to the velocity of the falling body, or we may assume that there is no wind
force acting on the body. All such assumptions certainly result in errors in the
output from such models.
Inherent Errors
Inherent errors are those that are present in the data supplied to the model.
Inherent error contains data errors and conversion error.
Data error
Data errors (also known as empirical errors) arise when the data for a problem are
obtained by some experimental means and are therefore of limited accuracy and
precision. This may be due to limitations in the instruments and readings, and
may therefore be unavoidable. For example, there is no use in performing
arithmetic operations to 4 decimal places when the original data themselves are
correct only up to 2 decimal places.
Conversion Error
Example
0.1₁₀ = 0.00011001₂
0.4₁₀ = 0.01100110₂
Sum  = 0.01111111₂
0.01111111₂ = 0.25 + 0.125 + 0.0625 + 0.03125 + 0.015625 + 0.0078125 + 0.00390625
            = 0.49609375, not the exact 0.5₁₀
From the above example we can see that when binary numbers are added and converted back
to decimal we do not get the exact value, because the decimal number 0.1 has the
non-terminating binary form 0.000110011001… and so on. The computer has a
fixed word length, so it keeps only a certain number of digits after the point, and
errors of this type are caused by conversion.
Numerical Errors
Round-off errors occur when a fixed number of digits is used to represent an exact
number. Since numbers are stored and rounded at every stage of a computation, a round-off
error is introduced at the end of every arithmetic operation. Consequently, even
though an individual round-off error could be very small, the cumulative effect of
a series of computations can be very significant.
Rounding a number can be done in two ways: chopping and symmetric rounding.
Chopping
In chopping, the extra digits are dropped; this is also called truncating a number.
Suppose we are using a computer with a fixed word length of four digits; then a
number like 42.7893 will be stored as 42.78 and the digits 93 will be dropped.
In the symmetric round-off method, the last retained significant digit is rounded up
by 1 if the first discarded digit is larger than or equal to 5; otherwise the last retained
digit is unchanged. For example the number 42.7893 would become 42.79 and
the number 76.5432 would become 76.54.
Sometimes a slightly more refined rule is used when the first discarded digit is 5:
the number is unchanged if the last retained digit is even and is increased by 1 if it is
odd.
Truncation error
sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + …
Truncation error can be reduced by using a better numerical model which usually
increases the number of arithmetic operations. E.g. in numerical integration the
truncation error can be reduced by increasing the number of points at which the
function is integrated, but care should be exercised to see that the round off error
which is bound to increase due to increased arithmetic operations does not offset
the reduction in truncation error.
Blunders
Blunders are errors that are caused by human imperfection. As the name
indicates, such errors may cause a very serious disaster in the result, since these
errors are due to human mistakes. It should be possible to avoid them to a large
extent by acquiring a sound knowledge of all aspects of the problem as well as of the
numerical process.
Human errors can occur at any stage of the numerical processing cycle; some
common types of errors are:
Let us suppose that the true value of a data item is denoted by x_t and its approximate
value is denoted by x_a; then they are related as: true value (x_t) = approximate
value (x_a) + error.
error = |x_t − x_a|
In many cases the absolute error may not reflect its influence correctly, as it does not
take into account the order of magnitude of the value under study. For example,
an error of 1 gm is much more significant in a weight of 10 gm of gold than in the
weight of a bag of sugar of 1 kg. In view of this we introduce the concept of
relative error, which is nothing but the normalized value of the absolute error. The
relative error is defined as follows:
e_r = absolute error / |true value| = |x_t − x_a| / |x_t| = |1 − x_a/x_t|
Minimizing the total error
Significant digits
We all know that computers operate with a fixed word length, so floating
point representation requires the mantissa to have a specified number of digits. Some
numbers, such as the value of π = 3.141592…, we have to write as 3.14 or
3.14159; in all these cases we have omitted some digits. Now 2/7 =
0.285714 or π = 3.14159 is said to be a number containing 6 significant digits.
The concept of significant digits has been introduced primarily to indicate the
accuracy of a numerical value. For example, if the number y = 23.40657 is correct
only to 23.406, then we may say that y has 5 significant digits and is correct
up to 3 decimal places. The omission of certain digits from a number results in
round-off error. The following statements describe the notion of significant digits.
0.0001234 (= 1234×10⁻⁷)
0.001234 (= 1234×10⁻⁶)
5. When the decimal point is not written, trailing zeros are not considered to
be significant, e.g. 4500 may be written as 45×10² and contains only two
significant digits. However:
4500.0      4 significant digits
7.56×10⁴    3 significant digits
7.560×10⁴   4 significant digits
7.5600×10⁴  5 significant digits
The concept of accuracy and precision are closely related to significant digits.
They are related as follows:
1. Accuracy refers to the number of significant digits in a value, e.g. 57.396 is
accurate to five significant digits.
2. Precision refers to the number of decimal positions, i.e. the order of
magnitude of the last digit of the value. The number 57.396 has a precision of
0.001 or 10⁻³.
Iterative methods
There are number of iterative methods that have been tried and used successfully
in various problem situations. All these methods typically generate a sequence of
estimates of the solution which is expected to converge to the true solution. All
iterative methods begin their process of solution with one or more guess of the
solution and then using those guesses to find another better approximation and so
on to get to required solution with desired accuracy. Iterative method can be
grouped as:
1. Bracketing methods
2. Open end methods
Before we start to go further into the methods first we need to know about the
starting and stopping criteria in an iterative process.
Starting criterion
Here xᵢ represents the estimate of the root at the i-th iteration and f(xᵢ) is the value of the
function at xᵢ. There may be situations where these tests fail when used alone,
so we use a combination of several of them.
Bracketing method
These methods start with two initial guesses that bracket the root and then
systematically reduce the width of the bracket until the solution is reached. Two
popular methods are:
1. Bisection method
2. False position method
Bisection method
The bisection method is one of the simplest and most reliable methods for the solution
of non-linear equations. This method relies on the fact that if f(x) is real and
continuous in the interval a < x < b and f(a) and f(b) have opposite signs, i.e.
f(a)·f(b) < 0, then there is at least one real root in the interval between a and b.
Let x₁ = a and x₂ = b, and determine another point x₃ as the midpoint between
a and b, i.e. x₃ = (x₁ + x₂)/2. Now there exist the following three conditions:
(i) if f(x₃) = 0, then x₃ is the root; (ii) if f(x₁)·f(x₃) < 0, the root lies between x₁ and x₃;
(iii) otherwise the root lies between x₃ and x₂.
Figure 1.2: Illustration of Bisection method
S.N   x   f(x)
1     1   −11
2     2   −5
3     3   13    (sign changed)
4     4   49
5     5   109
6     6   199
7     7   325
8     8   493
From the above table we can see that the sign of f(x) changes between x = 2 and x = 3. We could
test values at random without creating the table, but the table makes the sign change easy to find.
i.e. x₃ = (2 + 3)/2 = 2.5, f(2.5) = 2.125. Since f(2)·f(2.5) < 0, a root lies between 2 and 2.5.
Continuing the bisection in the same way:
Therefore, the root of the equation is 2.3737. Since the value of the error is 0.0000 (the new root is
the same as the old one), this is the required root for the given stopping criterion.
Solution: f(x) = x² − 4x − 10 = 0. Let the initial approximations be −2 and −1,
chosen from the table below.
S.N   x    f(x)
1     −3   11
2     −2   2
3     −1   −5
4     0    −10
5     1    −13
6     2    −14
7     3    −13
Let the initial approximations be x₁ = −2, x₂ = −1; then f(−2) = 2 and f(−1) = −5, so a
root lies between −2 and −1. Hence the next approximation is x₃ = (x₁ + x₂)/2,
i.e. x₃ = (−2 + (−1))/2 = −1.5, f(−1.5) = −1.7500. Since f(−2)·f(−1.5) < 0, a root lies
between −2 and −1.5. Proceeding in the same way:
Therefore, the root of the equation is −1.7417, since the value of the error is 0.0000.
Practice: find the roots of the following equations, correct up to
5 decimal places.
1. 𝟑𝒙 + 𝐬𝐢𝐧 (𝒙) − 𝒆𝒙 = 𝟎
2. sin(𝒙) − 𝟐𝒙 + 𝟏 = 𝟎
3. 𝒆𝒙 − 𝒙 − 𝟐 = 𝟎
4. 𝒙𝟑 − 𝒙 − 𝟑 = 𝟎
5. 𝟒𝒙𝟑 − 𝟐𝒙 − 𝟔 = 𝟎
NOTE: When there are trigonometric functions, use radian measure in calculator.
False Position Method
Figure: Illustration of the false position method. The chord joining (x₁, f(x₁)) and (x₂, f(x₂)) cuts the x-axis at x₃.
The equation of the line joining (x₁, f(x₁)) and (x₂, f(x₂)) is
y − y₁ = m(x − x₁)
y − y₁ = [(y₂ − y₁)/(x₂ − x₁)] (x − x₁)
y − f(x₁) = [(f(x₂) − f(x₁))/(x₂ − x₁)] (x − x₁)
Let the line joining the points (x₁, f(x₁)) and (x₂, f(x₂)) cut the x-axis at (x₃, 0). Since this
point lies on the line, putting its coordinates into the equation we get
0 − f(x₁) = [(f(x₂) − f(x₁))/(x₂ − x₁)] (x₃ − x₁)
Solving for x₃,
x₃ = x₁ − f(x₁)(x₂ − x₁)/(f(x₂) − f(x₁))
This is the formula for calculating the new approximation in the false position
method.
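A minimal sketch of this update rule is given below, using the equation x³ − 2x − 5 = 0 from Example 1 that follows; the stopping test on |f(x₃)| is an illustrative choice.

```python
# False position (regula falsi) sketch: the chord through the bracketing points
# cuts the x-axis at the new approximation x3.
def false_position(f, x1, x2, tol=1e-4, max_iter=100):
    for _ in range(max_iter):
        x3 = x1 - f(x1) * (x2 - x1) / (f(x2) - f(x1))
        if abs(f(x3)) < tol:
            return x3
        if f(x1) * f(x3) < 0:    # keep the sub-interval that still brackets the root
            x2 = x3
        else:
            x1 = x3
    return x3

print(false_position(lambda x: x**3 - 2*x - 5, 2, 3))   # approx 2.0946
```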
Example 1: find the real root of the equation 𝑥 3 − 2𝑥 − 5 = 0 by the method of
false position correct up to 4 decimal places.
S.N   x   f(x)
1     1   −6
2     2   −1
3     3   16    (sign changed)
4     4   51
5     5   110
6     6   199
7     7   324
8     8   491
From the above table we can see that the sign of f(x) changes between x = 2 and x = 3. We could
test values at random without creating the table, but the table makes the sign change easy to find.
x₃ = x₁ − f(x₁)(x₂ − x₁)/(f(x₂) − f(x₁))
x₃ = 2 − (−1)(3 − 2)/(16 − (−1))
x₃ = 2.0588
Iter   x₁      f(x₁)    x₂      f(x₂)    x₃      f(x₃)    |x₂ − x₁|
4      2.0896  −0.0547  3.0000  16.0000  2.0927  −0.0202  0.9104
5      2.0927  −0.0202  3.0000  16.0000  2.0939  −0.0075  0.9073
6      2.0939  −0.0075  3.0000  16.0000  2.0943  −0.0027  0.9061
7      2.0943  −0.0027  3.0000  16.0000  2.0945  −0.0010  0.9057
8      2.0945  −0.0010  3.0000  16.0000  2.0945  −0.0004  0.9055
9      2.0945  −0.0004  3.0000  16.0000  2.0945  −0.0001  0.9055
10     2.0945  −0.0001  3.0000  16.0000  2.0945  −0.0001  0.9055
11     2.0945  −0.0001  3.0000  16.0000  2.0945  0.0000   0.9055
12     2.0945  0.0000   3.0000  16.0000  2.0946  0.0000   0.9055
Therefore, the root of the equation is 2.0946, since the value of f(x3) = 0.0000.
Solution
S.N x f(x)
1 -3 11
2 -2 2
3 -1 -5
4 0 -10
5 1 -13
6 2 -14
7 3 -13
From the above table we can see that the sign of f(x) changes between x = −2 and x = −1. We could
test values at random without creating the table, but the table makes the sign change easy to find.
x₃ = x₁ − f(x₁)(x₂ − x₁)/(f(x₂) − f(x₁))
x₃ = −2 − (2)(−1 − (−2))/(−5 − 2)
x₃ = −1.714286
Therefore, the root of the equation is -1.741657, since the value of f(x3) =
0.000000.
Practice: Find the real roots for the following equations , correct up to 5
decimal places.
1. 𝟑𝒙 + 𝐬𝐢𝐧 (𝒙) − 𝒆𝒙 = 𝟎
2. 𝒙 − 𝒆−𝒙 = 𝟎
3. 𝒙𝟑 − 𝟒𝒙𝟐 + 𝒙 + 𝟔 = 𝟎
4. 𝟑𝒙𝟐 + 𝟔𝒙 − 𝟒𝟓 = 𝟎
5. 𝟒𝒙𝟑 − 𝟐𝒙 − 𝟔 = 𝟎
Open end methods:
This method uses single starting value or two values that do not necessarily
bracket the root. The following methods fall under open end method:
1. Secant method
2. Newton method
3. Fixed point method
Secant Method
The secant method begins by finding two points on the curve of f(x), hopefully
near the root. We draw a line through these two points and find the point where it
intersects the x-axis. The two points may both be on one side of the root, as seen
in the figure, but they can also be on opposite sides.
If f(x) were truly linear, the straight line would intersect the x-axis at the root, but
f(x) will never be linear, because we would never use a root-finding method on a
linear function. That means the intersection of the line with the x-axis is not at the root,
but it should be close to it. From the obvious similar triangles we can write
Figure: Illustration of the secant method. The line through (x₁, f(x₁)) and (x₂, f(x₂)) meets the x-axis at x₃.
x₃ = x₂ − f(x₂)(x₁ − x₂)/(f(x₁) − f(x₂))
Because f(x) is not exactly linear, 𝑥3 is not equal to root, but it should be closer
than either of the two points.
Solution
x₃ = x₂ − f(x₂)(x₁ − x₂)/(f(x₁) − f(x₂))
x₃ = 6 − (2)(4 − 6)/(−10 − 2)
x₃ = 5.666667
1. 𝟑𝒙 + 𝐬𝐢𝐧 (𝒙) − 𝒆𝒙 = 𝟎
2. 𝟒𝒙𝟑 − 𝟐𝒙 − 𝟔 = 𝟎
3. 𝒙𝟐 − 𝟓𝒙 + 𝟔 = 𝟎
4. 𝒙 𝐬𝐢𝐧 𝒙 − 𝟏 = 𝟎
5. 𝒆𝒙 − 𝟑𝒙 = 𝟎
Newton’s method
One of the most widely used methods of solving non-linear equations is Newton’s
method (also called Newton Raphson Method). This method is also based on the
linear approximation of the function, but does so using a tangent line to the curve.
Figure gives the graphical description starting from a single initial estimate 𝑥0 ,
that is not too far from a root. We move along the tangent to its intersection with
x-axis and take the next approximation. This is continued until either the
successive x-value are sufficiently close or the value of the function is sufficiently
near to zero.
The calculation scheme follows immediately from the right triangle shown in the
figure, whose angle θ is the inclination of the tangent line to the curve at x = x₀.
Figure: Illustration of Newton's method. The tangent at x₀ meets the x-axis at x₁.
tan θ = f′(x₀) = f(x₀)/(x₀ − x₁)
x₁ = x₀ − f(x₀)/f′(x₀)
In general,
x_{n+1} = x_n − f(x_n)/f′(x_n),  where n = 0, 1, 2, 3, …
Newton's method is widely used because it converges more rapidly than most of the other
methods. Some important things that should be kept in mind while using
Newton's method:
1. When f′(x₀) is very large, i.e. when the slope is large, the correction is small and the
root can be located in even less time.
2. If we choose the initial approximation x₀ close to the root, then we get the
root of the equation very quickly.
3. The process will evidently fail if f′(x) = 0; in that case use other methods.
4. If the initial approximation to the root is not given, choose two values of x
whose functional values have opposite signs, as this will ensure that the
chosen point is near the root. A minimal sketch of the iteration is given below.
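The following sketch implements the iteration x_{n+1} = xₙ − f(xₙ)/f′(xₙ). The sample equation x³ − x − 3 = 0 is taken from the practice list further below; the tolerance is an illustrative choice.

```python
# Newton-Raphson sketch: follow the tangent line to its x-intercept each step.
def newton(f, df, x0, tol=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:                      # the method fails when f'(x) = 0
            raise ZeroDivisionError("zero derivative; try another starting value")
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(newton(lambda x: x**3 - x - 3, lambda x: 3*x**2 - 1, 2.0))   # approx 1.6717
```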
x₁ = x₀ − f(x₀)/f′(x₀)
x₁ = 2 − (−10)/2
   = 7
Now
𝑥0 = 𝑥1 = 7
Then doing further calculation, we get required root of the equation. In tabular
form we get
Practice :
1. 𝐬𝐢𝐧(𝒙) = 𝟏 + 𝒙𝟑
2. 𝒇(𝒙) = 𝒙𝟐 − 𝟐𝒙 − 𝟏
3. 𝒇(𝒙) = 𝒙𝟑 − 𝒙 − 𝟑
4. 𝒇(𝒙) = 𝒙𝟑 − 𝟑𝒙 − 𝟐
5. 𝒇(𝒙) = 𝐜𝐨𝐬 𝒙
Fixed Point Iteration Method
Consider, for example, the equation x² − 2x − 3 = 0. Suppose we rearrange it to the
equivalent form x = g₁(x) = √(2x + 3). If we start with x = 4 and iterate, successive values of x are
𝑥0 = 4
𝑥1 = √11 = 3.31662
𝑥2 = 3.10375
𝑥3 = 3.03439
𝑥4 = 3.01144
𝑥5 = 3.00381
x₆ = 3.00127
If instead we rearrange the same equation as x = g(x) = 3/(x − 2) and again start with x₀ = 4,
the successive values are
x₀ = 4
𝑥1 = 1.5
𝑥2 = −6
𝑥3 = −0.375
𝑥4 = −1.26316
𝑥5 = −0.91935
𝑥6 = −1.02763
𝑥7 = −0.99087
𝑥8 = −1.00305
𝑥9 = −0.99898
𝑥10 = −1.00034
𝑥11 = −0.99989
𝑥12 = −1.0004
𝑥13 = −1.00000
It seems that we now converge to the other root at x = −1; we also see that the
convergence is oscillatory rather than monotonic.
Finally, consider the rearrangement
x = g₂(x) = (x² − 3)/2
Starting with
x₀ = 4
x₁ = 6.5
x₂ = 19.625
x₃ = 191.070
and the iteration diverges.
NOTE: the g(x) chosen must be such that |g′(x)| around the real root is
less than 1; if this is not the case, change g(x).
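A minimal sketch of the iteration x_{k+1} = g(x_k) is given below, using the convergent rearrangement g₁(x) = √(2x + 3) from the example above; the tolerance is an illustrative choice.

```python
# Fixed point iteration sketch: repeatedly apply g until successive values agree.
import math

def fixed_point(g, x0, tol=1e-6, max_iter=100):
    x = x0
    for k in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1       # return the estimate and iteration count
        x = x_new
    return x, max_iter

g1 = lambda x: math.sqrt(2 * x + 3)   # |g1'(x)| < 1 near the root x = 3
print(fixed_point(g1, 4.0))           # converges to about 3.0
```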
Practice
Use the fixed point iteration method to evaluate a root of the equation x² − x − 1 = 0,
using the following forms of g(x):
a. x = x² − 1
b. x = 1 + 2x − x²
c. x = (1 + 3x − x²)/2
Starting with x₀ = 1 and x₀ = 2, and discuss the results.
Convergence:
After n iterations of the bisection method the root must lie within ±Δx/2ⁿ of our estimate.
This means that the error bound at the nth iteration is Eₙ = |Δx/2ⁿ|.
Similarly, Eₙ₊₁ = |Δx/2ⁿ⁺¹| = |Δx/(2ⁿ·2)| = Eₙ/2,
that is, the error decreases with each step by a factor of 0.5. The bisection
method is therefore linearly convergent. Since the convergence is slow, a large number of
iterations may be needed to achieve a high degree of accuracy; however, the bisection
algorithm is guaranteed to converge.
For the secant method the iteration is similar to x = g(x), except that x = g(xₙ, xₙ₋₁). When we
apply the Taylor series the derivatives are rather complicated, so we omit the details; it turns
out that the error relation is
e_{n+1} = [g(ξ₁, ξ₂)/2] · eₙ · eₙ₋₁
showing that the error is proportional to the product of the two previous errors. We
can conclude that the convergence is better than linear but poorer than
quadratic:
e_{n+1} ∝ eₙ · eₙ₋₁
For Newton's method we can use the mean value theorem, if g(x) and g′(x) are continuous, to
say that
|g′(x)| = |f(x)·f″(x) / (f′(x))²| < 1 ………………(1)
which requires that f(x) and its derivatives exist and be continuous. Newton's
method is shown to be quadratically convergent as follows. As before,
g(xₙ) = g(R) + g′(R)·(R − xₙ) + (g″(ξ)/2)·(R − xₙ)² ………….(2)
At the root R, f(R) = 0, so
|g′(R)| = |f(R)·f″(R)/(f′(R))²| = 0
g(xₙ) = g(R) + (g″(ξ)/2)·(R − xₙ)² …………..(3)
e_{n+1} ∝ eₙ²
Interpolation
𝑒 𝑥 = 𝑎0 + 𝑎1 𝑥 + 𝑎2 𝑥 2 + 𝑎3 𝑥 3 + ⋯ … … . +𝑎𝑛 𝑥 𝑛
This is the Taylor series expansion of eˣ about x = 0; a₀, a₁, a₂, … are coefficients to be
determined.
If we have to find the value of e^2.2 or e^0.75, we are interpolating inside the given
range.
If we have to find the value of e^3.2, we are extrapolating outside the given range.
1. Lagrange interpolation
2. Newton’s interpolation
3. Newton’s Gregory forward interpolation
4. Spline interpolation
Polynomial form
Linear interpolation
The simplest form of interpolation is to approximate two data points by straight
line, suppose we have two points (𝑥1 𝑓(𝑥1 )) &(𝑥2 𝑓(𝑥2 )). These two points can
be connected linearly as shown in figure, using the concept of similar triangles
we show that :
Figure: Graphical representation of Linear interpolation
Example : The table below shows square root for the integers, determine the
square root of 2.5
x 1 2 3 4 5
F(x) 1 1.4142 1.7321 2 2.2361
f(2.5) = f(x₁) + [(f(x₂) − f(x₁))/(x₂ − x₁)]·(x − x₁)
       = f(2) + [(f(3) − f(2))/(3 − 2)]·(2.5 − 2)
       = 1.4142 + [(1.7321 − 1.4142)/(3 − 2)]·(2.5 − 2)
       = 1.5732
Using the wider interval [2, 4] instead:
f(2.5) = f(2) + [(f(4) − f(2))/(4 − 2)]·(2.5 − 2)
       = 1.4142 + [(2 − 1.4142)/2]·0.5
       = 1.5607
The correct answer is 1.5811, so from the above values we can say that the closer the
points, the more accurate the result.
𝑦 = 𝑓(𝑥 ) = 𝑎0 (𝑥 − 𝑥1 )(𝑥 − 𝑥2 ) … … (𝑥 − 𝑥𝑛 )
+𝑎1 (𝑥 − 𝑥0 )(𝑥 − 𝑥2 ) … … (𝑥 − 𝑥𝑛 )
+𝑎2 (𝑥 − 𝑥0 )(𝑥 − 𝑥1 ) … … (𝑥 − 𝑥𝑛 ) … ….
y₁ = a₁(x₁ − x₀)(x₁ − x₂)……(x₁ − xₙ)
a₁ = y₁ / [(x₁ − x₀)(x₁ − x₂)……(x₁ − xₙ)]
On proceeding,
aₙ = yₙ / [(xₙ − x₀)(xₙ − x₁)……(xₙ − xₙ₋₁)]
y = f(x) = [(x − x₁)(x − x₂)……(x − xₙ) / ((x₀ − x₁)(x₀ − x₂)……(x₀ − xₙ))]·y₀
        + [(x − x₀)(x − x₂)……(x − xₙ) / ((x₁ − x₀)(x₁ − x₂)……(x₁ − xₙ))]·y₁
        + ……
        + [(x − x₀)(x − x₁)……(x − xₙ₋₁) / ((xₙ − x₀)(xₙ − x₁)……(xₙ − xₙ₋₁))]·yₙ
In general,
f(x) = Σ_{j=0}^{n}  yⱼ · [∏_{i=0, i≠j}^{n} (x − xᵢ)] / [∏_{i=0, i≠j}^{n} (xⱼ − xᵢ)]
Note:
1. This formula can be used irrespective of whether the values 𝑥0, 𝑥1, 𝑥2,…. 𝑥𝑛,
are equally spaced or not.
2. It is simple and easy to remember but its application is not speedy.
3. The main drawback of it is that if another interpolation value is inserted,
then the interpolation coefficients are required to be recalculated.
Example 1: Consider the problem to find the square root of 2.5 using the
second order Lagrange interpolation polynomial.
Consider the following three points
x0 = 2 x1 = 3 x2 = 4
f0 = 1.4142 f1 = 1.7321 f2 = 2
We know that
f(x) = Σ_{i=0}^{2} yᵢ lᵢ(x)
where
lᵢ(x) = [∏_{j=0, j≠i}^{2} (x − xⱼ)] / [∏_{j=0, j≠i}^{2} (xᵢ − xⱼ)]
so
l₀(x) = (x − x₁)(x − x₂) / [(x₀ − x₁)(x₀ − x₂)] = (x − 3)(x − 4) / [(2 − 3)(2 − 4)] = (x² − 7x + 12)/2
l₁(x) = (x − x₀)(x − x₂) / [(x₁ − x₀)(x₁ − x₂)] = (x − 2)(x − 4) / [(3 − 2)(3 − 4)] = (x² − 6x + 8)/(−1)
l₂(x) = (x − x₀)(x − x₁) / [(x₂ − x₀)(x₂ − x₁)] = (x − 2)(x − 3) / [(4 − 2)(4 − 3)] = (x² − 5x + 6)/2
We know, f(x) = y₀l₀(x) + y₁l₁(x) + y₂l₂(x)
= 1.4142·(x² − 7x + 12)/2 + 1.7321·(x² − 6x + 8)/(−1) + 2·(x² − 5x + 6)/2
= 0.7071·(x² − 7x + 12) − 1.7321·(x² − 6x + 8) + (x² − 5x + 6)
Putting x = 2.5,
f(2.5) = 1.5794
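The general formula f(x) = Σⱼ yⱼ ∏_{i≠j}(x − xᵢ)/(xⱼ − xᵢ) can be coded directly. The sketch below uses the square-root data of the worked example; the function and variable names are illustrative.

```python
# Lagrange interpolation sketch: sum of data values weighted by the basis polynomials l_j(x).
def lagrange(x_pts, y_pts, x):
    total = 0.0
    n = len(x_pts)
    for j in range(n):
        term = y_pts[j]
        for i in range(n):
            if i != j:
                term *= (x - x_pts[i]) / (x_pts[j] - x_pts[i])   # build l_j(x)
        total += term
    return total

print(lagrange([2, 3, 4], [1.4142, 1.7321, 2.0], 2.5))   # approx 1.5794
```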
Practice:
Newton's Divided Difference Interpolation
We seek a polynomial
pₙ(x) = a₀ + a₁(x − x₀) + a₂(x − x₀)(x − x₁) + a₃(x − x₀)(x − x₁)(x − x₂) + ⋯ + aₙ(x − x₀)(x − x₁)…(x − xₙ₋₁)
of order n which passes through all the given data points.
At x = x₀: pₙ(x₀) = a₀ = y₀
At x = x₁: pₙ(x₁) = a₀ + a₁(x₁ − x₀) = y₁
a₁ = (y₁ − a₀)/(x₁ − x₀) = (y₁ − y₀)/(x₁ − x₀)
a₂ = [ (y₂ − y₁)/(x₂ − x₁) − (y₁ − y₀)/(x₁ − x₀) ] / (x₂ − x₀)
In divided-difference notation, a₂ = f[x₀, x₁, x₂], ……
aₙ = f[x₀, x₁, x₂, x₃, …, xₙ]
Example: Given below is a table of data for log x. Estimate log 2.5 using second
order Newton (divided difference) interpolation.
i        0   1       2       3
xᵢ       1   2       3       4
log xᵢ   0   0.3010  0.4771  0.6021
Solution
a₀ = f[x₀] = y₀ = 0
a₁ = f[x₀, x₁] = (y₁ − y₀)/(x₁ − x₀) = (0.3010 − 0)/(2 − 1) = 0.3010
a₂ = f[x₀, x₁, x₂] = [ (y₂ − y₁)/(x₂ − x₁) − (y₁ − y₀)/(x₁ − x₀) ] / (x₂ − x₀)
   = [ (0.4771 − 0.3010)/(3 − 2) − (0.3010 − 0)/(2 − 1) ] / (3 − 1) = −0.0625
Now,
p₂(2.5) = a₀ + a₁(2.5 − 1) + a₂(2.5 − 1)(2.5 − 2) = 0 + 0.3010(1.5) − 0.0625(1.5)(0.5)
= 0.4046
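A minimal sketch of the divided-difference construction is given below; it builds the coefficients a₀, a₁, … in place and evaluates the Newton polynomial, reproducing the estimate above. The helper names are illustrative.

```python
# Newton divided-difference sketch: coef[j] ends up equal to f[x0, ..., xj].
def divided_diff_coeffs(xs, ys):
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    result, prod = 0.0, 1.0
    for j in range(len(xs)):
        result += coef[j] * prod
        prod *= (x - xs[j])          # accumulate (x - x0)(x - x1)...
    return result

xs, ys = [1, 2, 3], [0.0, 0.3010, 0.4771]
print(newton_eval(xs, divided_diff_coeffs(xs, ys), 2.5))   # approx 0.4046
```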
x 5 6 9 11
f(x) 12 13 14 16
Since there are four data points, the required polynomial will be of order 3:
p₃(x) = a₀ + a₁(x − x₀) + a₂(x − x₀)(x − x₁) + a₃(x − x₀)(x − x₁)(x − x₂)
Evenly spaced data
Let y = f(x) be a function which takes the (n+1) values y₀, y₁, …, yₙ at the points
x₀, x₁, x₂, …, xₙ of the independent variable x. Let these values of x be
equidistant, i.e. xᵢ = x₀ + ih, i = 0, 1, 2, …, n. Let y(x) be the polynomial in x
of the nth degree such that yᵢ = f(xᵢ), i = 0, 1, 2, …, n.
y(x) = A₀ + A₁(x − x₀) + A₂(x − x₀)(x − x₁) + A₃(x − x₀)(x − x₁)(x − x₂) + …….. (a)
Putting x = x₀, x₁, ……, xₙ successively:
Putting x = x₀:  y₀ = A₀, i.e. A₀ = y₀
Putting x = x₁:  y₁ = A₀ + A₁(x₁ − x₀)
A₁ = (y₁ − A₀)/(x₁ − x₀) = (y₁ − y₀)/(x₁ − x₀) = Δy₀/h
A₂ = [y₂ − A₀ − A₁(x₂ − x₀)] / [(x₂ − x₀)(x₂ − x₁)]
   = [y₂ − y₀ − ((y₁ − y₀)/(x₁ − x₀))(x₂ − x₀)] / [(x₂ − x₀)(x₂ − x₁)]
Since x₂ − x₀ = (x₂ − x₁) + (x₁ − x₀) = h + h = 2h,
A₂ = [y₂ − y₀ − ((y₁ − y₀)/h)·2h] / (2h·h)
   = (y₂ − y₀ − 2y₁ + 2y₀) / 2h²
   = (y₂ − 2y₁ + y₀) / 2h²
   = Δ²y₀ / (2! h²)
Similarly
A₃ = Δ³y₀ / (3! h³)
y(x) = y₀ + Δy₀(x − x₀)/h + Δ²y₀(x − x₀)(x − x₁)/(2! h²) + Δ³y₀(x − x₀)(x − x₁)(x − x₂)/(3! h³) + …….. (b)
Putting (x − x₀)/h = p, i.e. x = x₀ + ph where p is a real number,
finally we get
y(x₀ + ph) = y₀ + pΔy₀ + [p(p − 1)/2!]Δ²y₀ + [p(p − 1)(p − 2)/3!]Δ³y₀ + [p(p − 1)(p − 2)(p − 3)/4!]Δ⁴y₀ + …
Gregory Newton Backward Interpolation Formula
Let y = f(x) be a function which takes the (n+1) values y₀, y₁, …, yₙ at the points
x₀, x₁, x₂, …, xₙ of the independent variable x. Let these values of x be
equidistant, i.e. xᵢ = x₀ + ih, i = 0, 1, 2, …, n. Let y(x) be the polynomial in x
of the nth degree such that yᵢ = f(xᵢ), i = 0, 1, 2, …, n. Suppose it is required to
evaluate y(x) near the end of the tabulated values; then we can assume that
y(x) = A₀ + A₁(x − xₙ) + A₂(x − xₙ)(x − xₙ₋₁) + A₃(x − xₙ)(x − xₙ₋₁)(x − xₙ₋₂) + …
Putting x = xₙ:  A₀ = y(xₙ) = yₙ
Putting x = xₙ₋₁:
A₁ = (yₙ₋₁ − A₀)/(xₙ₋₁ − xₙ) = (yₙ₋₁ − yₙ)/(xₙ₋₁ − xₙ) = (yₙ − yₙ₋₁)/(xₙ − xₙ₋₁) = ∇yₙ/h
Proceeding as before,
A₂ = ∇²yₙ/(2! h²),  A₃ = ∇³yₙ/(3! h³)
and so on. Finally we get
y(xₙ + ph) = yₙ + p∇yₙ + [p(p + 1)/2!]∇²yₙ + [p(p + 1)(p + 2)/3!]∇³yₙ + …,  where p = (x − xₙ)/h
Example 1: Estimate the value of sin 𝜃 𝑎𝑡 𝜃 = 25° , using Newton Gregory
Forward difference formula with the help of following table
𝜃 10 20 30 40 50
sin 𝜃 0.1736 0.3420 0.5 0.6428 0.7660
Solution
𝜃 sin 𝜃 ∆𝑦𝑛 ∆2 𝑦𝑛 ∆3 𝑦𝑛 ∆4 𝑦𝑛
10 0.1736
0.1684
20 0.3420 -0.0104
0.1580 -0.0048
30 0.5 -0.0152 0.0004
0.1428 -0.0041
40 0.6428 -0.0196
0.1232
50 0.7660
where p = (x − x₀)/h; for θ = 25, p = (25 − 10)/10 = 1.5
yₚ = 0.1736 + 1.5 × 0.1684 + [1.5(1.5 − 1)/2!](−0.0104)
   + [1.5(1.5 − 1)(1.5 − 2)/3!](−0.0048)
   + [1.5(1.5 − 1)(1.5 − 2)(1.5 − 3)/4!](0.0004)
= 0.1736 + 0.2526 − 0.0039 + 0.0003 + 0.0000
= 0.4226
X y ∇𝑦𝑛 ∇2 𝑦𝑛 ∇3 𝑦𝑛 ∇4 𝑦𝑛 ∇5 𝑦𝑛
0.5 2.1990
1 2.5 0.3010
1.5 2.6761 0.1761 -0.1249
2 2.8010 0.1249 -0.0512 0.0737
2.5 2.8979 0.0969 -0.0280 0.0232 -0.0505
3 2.9771 0.0792 -0.0177 0.0103 -0.0129 0.0376
Now, Newton’s Backward Interpolation Formula
= 2.4013
Spline curves
There are many times when fitting an interpolating polynomial to data points is
very difficult. Here is an example where we try to fit to data pairs from known
functions.
One approach to overcoming this problem is to divide the entire range of points into
sub-intervals and use local low-order polynomials to interpolate each sub-interval;
such polynomials are called piecewise polynomials.
Figure: piecewise polynomial interpolation
Such piecewise polynomials are called spline functions. The spline functions
look smooth at the connecting points; the connecting points are called knots or
nodes.
sᵢ(x) = (aᵢ₋₁/6hᵢ)(hᵢ²uᵢ − uᵢ³) + (aᵢ/6hᵢ)(uᵢ₋₁³ − hᵢ²uᵢ₋₁) + (1/hᵢ)(fᵢuᵢ₋₁ − fᵢ₋₁uᵢ)
where uᵢ = x − xᵢ, and the coefficients aᵢ satisfy
hᵢaᵢ₋₁ + 2(hᵢ + hᵢ₊₁)aᵢ + hᵢ₊₁aᵢ₊₁ = 6[(fᵢ₊₁ − fᵢ)/hᵢ₊₁ − (fᵢ − fᵢ₋₁)/hᵢ]
i 0 1 2
xi 4 (x0) 9 (x1) 16 (x2)
f(xi) 2 (f0) 3 (f1) 4 (f2)
Estimate the functional value at x=7 using cubic spline technique.
Now, we know from the cubic spline technique, a0= an=0 i.e. a0= a2=0
si(x) = (ai-1/6hi) (hi2 ui-ui3) +(ai/6hi) (ui-13 – hi2 ui-1) +1/hi (fi ui-1 -fi-1 ui)…(C)
where, ui = x-xi
s1(x) = (a0/6h1) (h12 u1-u13) +(a1/6h1) (u03 – h12 u0) +1/h1 (f1u0 -f0 u1)
i 0 1 2 3
xi 1(x0) 2(x1) 3(x2) 4(x3)
f(xi) 0.5(f0) 0.3333(f1) 0.25(f2) 0.20(f3)
Estimate the functional value at x=2.5 using cubic spline technique.
Solution: h₁ = 1, h₂ = 1, h₃ = 1 and a₀ = 0, a₃ = 0 (natural cubic spline); a₁ and a₂ are found from
[ 2(1+1)    1    ] [a₁]   [D₁]
[   1     2(1+1) ] [a₂] = [D₂]
Now D₁ (i = 1) = 6[(f₂ − f₁)/h₂ − (f₁ − f₀)/h₁] = 0.5004
and D₂ (i = 2) = 6[(f₃ − f₂)/h₃ − (f₂ − f₁)/h₂] = 0.1995
s₂(x) = (a₁/6h₂)(h₂²u₂ − u₂³) + (a₂/6h₂)(u₁³ − h₂²u₁) + (1/h₂)(f₂u₁ − f₁u₂)
Practice questions:
𝜃 10 20 30 40 50
sin 𝜃 0.1736 0.3420 0.5 0.6428 0.7660
Curve Fitting:
In many applications it is often necessary to establish a mathematical relationship
between experimental values, this relationship may be used for either testing
existing mathematical model or establishing new ones. The mathematical
equation can also be used to predict or forecast values of the dependent variables.
Suppose the value of y for the different values of x are given, if we want to know
the effect of x and y then we may write a functional relationship 𝑦 = 𝑓(𝑥)
Relationship is linear
Fitting a straight line is the simplest approach of regression analysis. Let us
consider the mathematical equation of a straight line
𝑦 = 𝑎 + 𝑏𝑥 = 𝑓(𝑥)
We know that a is the intercept of the line and b is the slope. Consider the
points(𝑥𝑖 , 𝑦𝑖 ), then the vertical distance of this point from the line 𝑓(𝑥 ) = 𝑎 +
𝑏𝑥 is 𝑞𝑖 , then
𝑞𝑖 = 𝑦𝑖 − 𝑓(𝑥𝑖 )
= 𝑦𝑖 − (𝑎 + 𝑏𝑥𝑖 )
There are various approaches that could be tried for fitting the best line through
the data:
Least Square regression:
Q = Σᵢ₌₁ⁿ qᵢ² = Σᵢ₌₁ⁿ [yᵢ − f(xᵢ)]² = Σᵢ₌₁ⁿ [yᵢ − a − bxᵢ]²
In the method of least squares we choose a and b such that Q is minimum. Since Q
depends on a and b, a necessary condition for Q to be minimum is
∂Q/∂a = 0,   ∂Q/∂b = 0
Then
∂Q/∂a = −2 Σᵢ₌₁ⁿ (yᵢ − a − bxᵢ) = 0
∂Q/∂b = −2 Σᵢ₌₁ⁿ xᵢ(yᵢ − a − bxᵢ) = 0
which give the normal equations
Σyᵢ = na + bΣxᵢ ………………… (a)
Σxᵢyᵢ = aΣxᵢ + bΣxᵢ² ………………… (b)
Solving these two equations gives
b = (nΣxᵢyᵢ − ΣxᵢΣyᵢ) / (nΣxᵢ² − (Σxᵢ)²)  and  a = ȳ − b·x̄,
where x̄ and ȳ are the averages of the x and y values.
X 1 2 3 4 5
Y 3 4 5 6 8
Solution
𝒙𝒊 𝒚𝒊 𝒙𝒊 𝟐 𝒙𝒊 𝒚𝒊
1 3 1 3
2 4 4 8
3 5 9 15
4 6 16 24
5 8 25 40
𝚺 15 26 55 90
Now
b = (nΣxᵢyᵢ − ΣxᵢΣyᵢ) / (nΣxᵢ² − (Σxᵢ)²)
  = (5×90 − 15×26) / (5×55 − 15²)
  = 1.20
a = Σyᵢ/n − b·Σxᵢ/n
  = 26/5 − 1.2×(15/5)
  = 1.6
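The normal-equation solution above can be coded in a few lines. This sketch reproduces the worked example (a = 1.6, b = 1.2); the function name is illustrative.

```python
# Straight-line least squares sketch using the closed-form normal-equation solution.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)   # slope
    a = sy / n - b * sx / n                          # intercept
    return a, b

print(fit_line([1, 2, 3, 4, 5], [3, 4, 5, 6, 8]))   # (1.6, 1.2)
```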
Practice :
X 1 2 3 4 5 6 7
Y 0.5 2.5 2.0 4.0 3.5 6.0 5.5
Answer :𝑦 = 0.0714 + 0.839𝑥
3. The following table shows heights(h) and weights, find the regression line
and estimate the weights of the person with the following heights.
a) 140cm
b) 163 cm
c) 172.5 cm
y = axᵇ  or  y = a·e^(bx)
Taking logarithms of y = axᵇ:
ln y = ln a + b ln x
i.e. z = A + bX, where z = ln y, A = ln a, X = ln x.
This equation is linear in form, and therefore, using the same
procedure as before, we can evaluate the parameters A and b.
b = (nΣ ln xᵢ ln yᵢ − Σ ln xᵢ Σ ln yᵢ) / (nΣ(ln xᵢ)² − (Σ ln xᵢ)²)
ln a = A = (Σ ln yᵢ − bΣ ln xᵢ)/n
a = e^A
Example : given the data table below , fit a power function model of the form 𝑦 =
𝑎𝑥 𝑏
𝑥 1 2 3 4 5
𝑦 0.5 2 4.5 8 12.5
b = (nΣ ln xᵢ ln yᵢ − Σ ln xᵢ Σ ln yᵢ) / (nΣ(ln xᵢ)² − (Σ ln xᵢ)²)
  = (5×9.0804 − 4.7874×6.1092) / (5×6.1993 − 4.7874²)
  = 2
A = (Σ ln yᵢ − bΣ ln xᵢ)/n
  = (6.1092 − 2×4.7874)/5
  = −0.6931
a = e^A = e^(−0.6931) = 0.5
∴ y = axᵇ = 0.5x²
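A minimal sketch of this log-linearised fit is given below; it uses the example data and reproduces a = 0.5, b = 2. The function name is illustrative.

```python
# Power-model fit y = a*x^b via logarithms: fit ln(y) = ln(a) + b*ln(x) by least squares.
import math

def fit_power(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    sx, sy = sum(lx), sum(ly)
    sxy = sum(u * v for u, v in zip(lx, ly))
    sxx = sum(u * u for u in lx)
    b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    a = math.exp((sy - b * sx) / n)
    return a, b

print(fit_power([1, 2, 3, 4, 5], [0.5, 2, 4.5, 8, 12.5]))   # approx (0.5, 2.0)
```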
Practice:
When a given set of data does not appear to satisfy a linear equation, we can try
a suitable polynomial as the regression curve to fit the data. The least squares
technique can readily be used to fit the data to a polynomial
f(x) = y = a₁ + a₂x + a₃x² + a₄x³ + …… + aₘx^(m−1)
If the data contain n sets of x and y values, then the sum of the squares of the errors is
given by
Q = Σᵢ₌₁ⁿ [yᵢ − f(xᵢ)]²
Since f(x) is a polynomial containing the coefficients a₁, a₂, a₃, …, aₘ, we have to
estimate all m coefficients. As before, we have the following m equations that can
be solved for these coefficients:
∂Q/∂a₁ = 0,  ∂Q/∂a₂ = 0,  ….  ∂Q/∂aₘ = 0
Since ∂f(xᵢ)/∂aⱼ = xᵢ^(j−1), these conditions lead to m simultaneous linear (normal)
equations in a₁, a₂, …, aₘ.
The set of m equations can be represented in matrix form as CA = B, where
A = [a₁, a₂, a₃, ⋮, aₘ]ᵀ   and   B = [Σyᵢ, Σyᵢxᵢ, Σyᵢxᵢ², ⋮, Σyᵢxᵢ^(m−1)]ᵀ
x 1.0 2 3 4
y 6 11 18 27
𝒙 𝒚 𝒙𝟐 𝒙𝟑 𝒙𝟒 𝒚𝒙 𝒚𝒙𝟐
1 6 1 1 1 6 6
2 11 4 8 16 22 44
3 18 9 27 81 54 162
4 27 16 64 256 108 432
∑ 10 62 30 100 354 190 644
Substituting these values into the normal equations and solving, we get
a₁ = 3, a₂ = 2, a₃ = 1, i.e. the fitted polynomial is y = 3 + 2x + x².
Chapter 3: Numerical Differentiation and Integration
Let us consider a set of values (𝑥𝑖 , 𝑦𝑖 ) of a function. The process of computing
the derivative or derivatives of that function at some values of x from the given
set of values is called Numerical Differentiation. This may be done by first
approximating the function by suitable interpolation formula and then
differentiating.
Now
dy/dx = (dy/dp)(dp/dx) = (dy/dp)(1/h)     [since dp/dx = 1/h]
dy/dx = (1/h)[Δy₀ + ((2p − 1)/2!)Δ²y₀ + ((3p² − 6p + 2)/3!)Δ³y₀ + ((4p³ − 18p² + 22p − 6)/4!)Δ⁴y₀ + …]   … (3)
At x = x₀, p = 0; hence putting p = 0 in equation (3) we get
dy/dx |ₓ₌ₓ₀ = (1/h)[Δy₀ − Δ²y₀/2 + Δ³y₀/3 − Δ⁴y₀/4 + …]   … (4)
d²y/dx² = d/dp(dy/dx)·dp/dx = (1/h)·d/dp(dy/dx)   … (5)
Putting p = 0 in equation (5),
d²y/dx² |ₓ₌ₓ₀ = (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ − …]
Similarly,
d³y/dx³ |ₓ₌ₓ₀ = (1/h³)[Δ³y₀ − (3/2)Δ⁴y₀ + …]
For the backward difference formula,
dy/dx = (1/h)[∇yₙ + ((2p + 1)/2!)∇²yₙ + ((3p² + 6p + 2)/3!)∇³yₙ + ⋯]   … (10)
At x = xₙ, p = 0; hence putting p = 0 in equation (10) we get
dy/dx |ₓ₌ₓₙ = (1/h)[∇yₙ + ∇²yₙ/2 + ∇³yₙ/3 + ∇⁴yₙ/4 + …]   … (11)
d²y/dx² = d/dp(dy/dx)·dp/dx = (1/h)·d/dp(dy/dx)
        = (1/h²)[∇²yₙ + ((6p + 6)/3!)∇³yₙ + ((6p² + 18p + 11)/12)∇⁴yₙ + ⋯]   … (12)
Putting p = 0,
d²y/dx² |ₓ₌ₓₙ = (1/h²)[∇²yₙ + ∇³yₙ + (11/12)∇⁴yₙ + ⋯]   … (13)
Similarly,
d³y/dx³ |ₓ₌ₓₙ = (1/h³)[∇³yₙ + (3/2)∇⁴yₙ + ⋯]   … (14)
𝑛
Note: first derivative is also as rate of change, so it can also be asked to find
the velocity, second derivative to find acceleration etc.
Example: Find the first, second and third derivatives of f(x) at x = 1.5 for the tabulated values below.
Solution
We have to find the derivatives at the point x = 1.5, which is at the beginning of
the given data. Therefore, we use the derivatives of Newton's forward interpolation
formula.
𝑥 𝑦 = 𝑓(𝑥) ∆𝑦 ∆2 𝑦 ∆3 𝑦 ∆4 𝑦 ∆5 𝑦
1.5 3.375
3.625
2.0 7.0 3
6.6250 0.75
2.5 13.625 3.750 0
10.3750 0.75 0
3.0 24.0 4.5 0
14.8750 0.75
3.5 38.875 5.25
20.1250
4.0 59.0
dy/dx |ₓ₌ₓ₀ = f′(x₀) = (1/h)[Δy₀ − Δ²y₀/2 + Δ³y₀/3 − Δ⁴y₀/4 + ⋯]
f′(1.5) = (1/0.5)[3.625 − 3/2 + 0.75/3 − 0/4 + ⋯]
= 4.75
Now
d²y/dx² |ₓ₌ₓ₀ = f″(1.5) = (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀]
= (1/0.5²)[3 − 0.75 + (11/12)×0]
= 9
Again
d³y/dx³ |ₓ₌ₓ₀ = (1/h³)[Δ³y₀ − (3/2)Δ⁴y₀ …]
= (1/0.5³)[0.75]
= 6
Example:
The population of a certain town (as obtained from census data) is shown in the
following table. Find the rate of growth of the population in the year 1981.
Solution
Here we have to find the derivative at 1981, which is near the end of the table; hence
we use the derivative of Newton's backward difference formula. The table of
differences is as follows:
𝑥(year) 𝑦 = ∇𝑦 ∇2 𝑦 ∇3 𝑦 ∇4 𝑦
𝑓(𝑥)(population)
1951 19.96
16.69
1961 36.65 5.47
22.16 -9.23
1971 58.81 -3.76 11.99
18.40 2.76
1981 77.21 -1
17.40
1991 94.61
Here h = 10, xₙ = 1991, ∇yₙ = 17.4, ∇²yₙ = −1, ∇³yₙ = 2.76, ∇⁴yₙ = 11.99
dy/dx = (1/h)[∇yₙ + ((2p + 1)/2!)∇²yₙ + ((3p² + 6p + 2)/3!)∇³yₙ + ((2p³ + 9p² + 11p + 3)/12)∇⁴yₙ]
Now we have to find the rate of growth of the population in the year 1981, so
p = (x − xₙ)/h = (1981 − 1991)/10 = −1
∴ p = −1, h = 10
y′(1981) = (1/10)[17.4 + ((2(−1) + 1)/2)(−1) + ((3(−1)² + 6(−1) + 2)/6)(2.76)
         + ((2(−1)³ + 9(−1)² + 11(−1) + 3)/12)(11.99)]
= (1/10)[17.4 + 0.5 − 0.46 − 0.999]
= 1.6441
Therefore, the rate of growth of the population in the year 1981 is 1.6441.
Maxima and minima of a tabulated function
For a maximum or minimum of a tabulated function we set the derivative of the forward interpolation formula to zero, i.e.
Δy₀ + ((2p − 1)/2!)Δ²y₀ + ((3p² − 6p + 2)/3!)Δ³y₀ + ((4p³ − 18p² + 22p − 6)/4!)Δ⁴y₀ + ⋯ = 0
Example: find the maximum of the function tabulated below.
𝑥 -1 1 2 3
𝑦 -21 15 12 3
Since the arguments (x-points) aren't equally spaced, we use Newton's divided
difference formula
y(x) = a₀ + a₁(x − x₀) + a₂(x − x₀)(x − x₁) + a₃(x − x₀)(x − x₁)(x − x₂) …
x     f(x)   1st DD   2nd DD   3rd DD
−1    −21
              18*
1     15               −7**
              −3                 1
2     12               −3
              −9
3     3
Note:
*  18 = (15 − (−21)) / (1 − (−1))
** −7 = (−3 − 18) / (2 − (−1))
Substituting the divided differences gives the interpolating polynomial
f(x) = x³ − 9x² + 17x + 6
For maxima and minima, dy/dx = 0:
3x² − 18x + 17 = 0
On solving, we get
x = 4.8257 or 1.1743
∴ y_max = f(1.1743) = (1.1743)³ − 9(1.1743)² + 17(1.1743) + 6
= 15.171612
Forward Difference Quotient
f′(x) = [f(x + h) − f(x)]/h − (h/2)f″(θ)   …. (2)
f′(x) ≈ [f(x + h) − f(x)]/h   …. (3)
E_t(h) = −(h/2)f″(θ)   …. (4)
Equation (3) is called the first order forward difference quotient. This is also known as
the two-point formula. The truncation error is of the order of h and can be decreased
by decreasing h.
Similarly, we can show that the first order backward difference quotient is
f′(x) = [f(x) − f(x − h)]/h   … (5)
Central Difference Quotient
f′(x) = [f(x + h) − f(x − h)]/(2h)
This equation is called the second order difference quotient. Note that this is the
average of the forward difference quotient and the backward difference quotient.
It is also called the three-point formula.
For example, consider estimating the derivative of f(x) = x² at x = 1. We know that
f′(x) ≈ [f(x + h) − f(x)]/h
f′(1) ≈ [f(1 + h) − f(1)]/h
Derivative approximation is tabulated below as:
h 𝑓 ′ (1)
0.2 2.2
0.1 2.1
0.05 2.05
0.01 2.01
Note that the correct answer is 2. The derivative approximation approaches the
exact value as h decreases.
Using the central difference quotient instead,
f′(x) ≈ [f(x + h) − f(x − h)]/(2h)
f′(1) ≈ [f(1 + h) − f(1 − h)]/(2h)
h 𝑓 ′ (1)
0.2 2
0.1 2
0.05 2
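Both quotients are a one-line calculation; the sketch below tabulates them for f(x) = x², the example used above, showing the forward quotient approaching 2 while the central quotient is exact for this quadratic.

```python
# Forward and central difference quotients for the derivative of f at x.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x * x
for h in (0.2, 0.1, 0.05, 0.01):
    print(h, forward_diff(f, 1.0, h), central_diff(f, 1.0, h))
```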
Practice:
1. Find the first and second derivatives of the function tabulated below at the point
x = 1.1
x 1.0 1.2 1.4 1.6 1.8 2.0
f(x) 0 0.128 0.544 1.296 2.432 4.00
Result :0.63,6.6
v 2 4 6 8 10
P 105 42.07 25.3 16.7 13
a. Find the rate of change of pressure with respect to volume when v=2
b. Find the rate of change of volume with respect to pressure when p=105
Numerical Integration:
The process of computing ∫ₐᵇ y dx, where y = f(x) is given by a set of tabulated
values [xᵢ, yᵢ], i = 0, 1, 2, …, n, with a = x₀, b = xₙ, is called numerical integration. Since
y = f(x) is a function of a single variable, the process is in general known as
quadrature. As with numerical differentiation, here also we replace f(x) by an
interpolation formula and integrate it between the given limits.
Newton's Cotes Formula
f(x) = f₀ + pΔf₀ + [p(p − 1)/2!]Δ²f₀ + [p(p − 1)(p − 2)/3!]Δ³f₀ + ⋯
Integrating term by term, since x = x₀ + ph, dx = h dp:
I = ∫₀ⁿ [f₀ + pΔf₀ + (p(p − 1)/2!)Δ²f₀ + (p(p − 1)(p − 2)/3!)Δ³f₀ + ⋯] h dp
I = h[nf₀ + (n²/2)Δf₀ + (1/2!)(n³/3 − n²/2)Δ²f₀ + (1/3!)(n⁴/4 − n³ + n²)Δ³f₀ + ⋯]   …. (1)
1. Trapezoidal rule
2. Simpson’s 1/3 rule
3. Simpson’s 3/8 rule
Trapezoidal rule (2 point formula): taking n = 1 and neglecting the second and higher order
differences in (1),
I = h[f₀ + ½(f₁ − f₀)]
I = (h/2)[f₀ + f₁]
Composite Trapezoidal Rule:
If the range to be integrated is large, the trapezoidal rule can be improved by
dividing the interval (a,b) into a number of small intervals. The sum of areas of
all the sub-intervals is the integral of the intervals (a,b) or (x0,xn). this is known
as composite trapezoidal rule.
As seen in the figure, there are n+1 equally spaced sampling point that create n
segments of equal width h given by
h = (b − a)/n
xᵢ = a + ih,  i = 0, 1, 2, …, n
I = (h/2)[f(x₀) + f(x₁)] + (h/2)[f(x₁) + f(x₂)] + ⋯ + (h/2)[f(xₙ₋₁) + f(xₙ)]
Now, denoting fᵢ = f(xᵢ),
I = (h/2)[f₀ + 2Σᵢ₌₁ⁿ⁻¹ fᵢ + fₙ]
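A minimal sketch of the composite rule is given below; applied to the worked example ∫₀¹⁰ dx/(1 + x²) with n = 10 it reproduces the value 1.4769 obtained later.

```python
# Composite trapezoidal rule: sum of trapezoid areas over n equal segments.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += 2 * f(a + i * h)      # interior points are counted twice
    return h * total / 2

print(trapezoid(lambda x: 1 / (1 + x * x), 0, 10, 10))   # approx 1.4769
```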
Simpson’s 𝟏⁄𝟑 rule ( 3 Point Formula)
Another popular method is Simpson's 1/3 rule. Here the function f(x) is
approximated by a second order polynomial p₂(x) which passes through three
sampling points as shown in the figure. The three points include the end points a and b
and the midpoint x₁ = (a + b)/2. The width of each segment is h = (b − a)/n. Taking n = 2
and neglecting the third and higher order differences in Newton's Cotes formula we get
I = h[2f₀ + (2²/2)Δf₀ + (1/2!)(2³/3 − 2²/2)Δ²f₀]
∫ₐᵇ f(x)dx = h[2f₀ + 2(f₁ − f₀) + (1/2)(8/3 − 2)(f₂ − 2f₁ + f₀)]
= h[2f₀ + 2f₁ − 2f₀ + (1/3)(f₂ − 2f₁ + f₀)]
= (h/3)[f₀ + 4f₁ + f₂]
Composite Simpson’s 1/3 rule:
∫ₐᵇ f(x)dx = (h/3)[(f₀ + fₙ) + 4(f₁ + f₃ + f₅ + ⋯) + 2(f₂ + f₄ + f₆ + ⋯)]
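The composite formula translates directly into code. The sketch below checks it on ∫₀¹ dx/(1 + x²), whose exact value is π/4 = 0.7854 (the same integral used in the Romberg example later); n must be even.

```python
# Composite Simpson's 1/3 rule: odd interior points get weight 4, even ones weight 2.
def simpson13(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h * total / 3

print(simpson13(lambda x: 1 / (1 + x * x), 0, 1, 8))   # approx 0.7854
```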
Simpson’s 3/8 rule ( 4 Point Rule)
We have Newton's Cotes formula
I = h[nf₀ + (n²/2)Δf₀ + (1/2!)(n³/3 − n²/2)Δ²f₀ + (1/3!)(n⁴/4 − n³ + n²)Δ³f₀ + ⋯]
Taking n = 3,
I = h[3f₀ + (3²/2)Δf₀ + (1/2)(3³/3 − 3²/2)Δ²f₀ + (1/6)(3⁴/4 − 3³ + 3²)Δ³f₀ + ⋯]
On solving we get
I = (3h/8)[(f₀ + f₃) + 3f₁ + 3f₂]
Example:
1. Evaluate ∫₀¹⁰ dx/(1 + x²) using
a. Trapezoidal rule
b. Simpson's 1/3 rule
c. Simpson's 3/8 rule
Solution:
Taking h=1, divide the whole range of the integration [0,10] into ten equal parts.
The value of the integrand for each point of sub division.
xᵢ   0    1    2    3    4       5       6       7       8       9       10
fᵢ   1    0.5  0.2  0.1  0.0588  0.0385  0.0270  0.0200  0.0154  0.0122  0.0099
a. By the trapezoidal rule (2-point formula applied to a single segment, here the first
sub-interval [0, 1] with h = 1)
∫₀¹ dx/(1 + x²) ≈ (h/2)[f₀ + f₁]
= (1/2)[1 + 0.5]
= 0.75
Composite methods:
a. Trapezoidal rule
10 𝑛−1
𝑑𝑥 ℎ
∫ = [𝑓 + 2 ∑ 𝑓𝑖 + 𝑓𝑛 ]
0 1+𝑥
2 2 0
𝑖=1
78
9
ℎ
= [𝑓0 + 2 ∑ 𝑓𝑖 + 𝑓10 ]
2
𝑖=1
1
= [1 + 2(0.5 + 0.2 + 0.1 + 0.0588 + 0.0385 + 0.0270 + 0.020
2
+ 0.0154 + 0.0122) + 0.0099]
= 1.4769
Romberg integration formula/ Richardson’s deferred approach to the limit
or Romberg method
Take an arbitrary value of h and calculate the trapezoidal estimate
I₁ = ∫ₐᵇ f(x)dx ≈ (h/2)[(f₀ + fₙ) + 2(f₁ + f₂ + f₃ + ⋯ + fₙ₋₁)]
Then halve the step size and repeat, so that
I₂ ≈ (h/4)[(f₀ + fₙ) + 2(f₁ + f₂ + f₃ + ⋯ + fₙ₋₁)]      (step size h/2)
I₃ ≈ (h/8)[(f₀ + fₙ) + 2(f₁ + f₂ + f₃ + ⋯ + fₙ₋₁)]      (step size h/4)
A better estimate is then obtained from I₁* = I₂ + (I₂ − I₁)/3, I₂* = I₃ + (I₃ − I₂)/3, and so on,
until successive estimates agree.
Example: Evaluate ∫₀¹ dx/(1 + x²) using Romberg's method correct up to four decimal
places. Hence find an approximate value of π.
Solution
Solution
Taking n = 2, h = (b − a)/n = (1 − 0)/2 = 0.5
a. When h = 0.5
x    0   0.5   1
fᵢ   1   0.8   0.5
I₁ = (h/2)[(f₀ + f₂) + 2f₁] = (0.5/2)[(1 + 0.5) + 2(0.8)] = 0.7750
b. When h = 0.5/2 = 0.25
x    0   0.25    0.5   0.75   1
fᵢ   1   0.9412  0.8   0.64   0.5
I₂ = (h/2)[(f₀ + f₄) + 2(f₁ + f₂ + f₃)]
   = (0.25/2)[(1 + 0.5) + 2(0.9412 + 0.8 + 0.64)] = 0.7828
c. When h = 0.25/2 = 0.125
x    0   0.125   0.25    0.375   0.5   0.625   0.75   0.875   1
fᵢ   1   0.9846  0.9412  0.8767  0.8   0.7191  0.64   0.5664  0.5
I₃ = (h/2)[(f₀ + f₈) + 2(f₁ + f₂ + f₃ + f₄ + f₅ + f₆ + f₇)]
   = (0.125/2)[(1 + 0.5) + 2(0.9846 + 0.9412 + 0.8767 + 0.8 + 0.7191 + 0.64 + 0.5664)] = 0.78475
I₁* = I₂ + (I₂ − I₁)/3 = 0.7828 + (0.7828 − 0.7750)/3 = 0.7854
I₂* = I₃ + (I₃ − I₂)/3 = 0.7848 + (0.7848 − 0.7828)/3 = 0.7854
Since these two are the same value, we conclude that the value of the integral
=0.7854
i.e. ∫₀¹ dx/(1 + x²) = 0.7854
Exactly, ∫₀¹ dx/(1 + x²) = [tan⁻¹x]₀¹ = tan⁻¹1 − tan⁻¹0 = π/4 = 0.7854
∴ π ≈ 4 × 0.7854 = 3.1416
Gaussian integration:
Gaussian integration is based on the concept that the accuracy of numerical
integration can be improved by choosing the sampling points wisely rather than on
the basis of equal spacing. The problem is to compute the values of x₁ and x₂ for given
a and b, and to choose appropriate weights w₁ and w₂. The method of
implementing this strategy, finding appropriate values of xᵢ and wᵢ and obtaining
the integral of f(x), is called Gaussian integration or quadrature.
Requiring the two-point formula w₁f(x₁) + w₂f(x₂) to be exact for f(x) = 1, x, x², x³ gives:
1. f(x) = 1:   w₁ + w₂ = ∫₋₁¹ dx = 2
2. f(x) = x:   w₁x₁ + w₂x₂ = ∫₋₁¹ x dx = 0
3. f(x) = x²:  w₁x₁² + w₂x₂² = ∫₋₁¹ x² dx = 2/3
4. f(x) = x³:  w₁x₁³ + w₂x₂³ = ∫₋₁¹ x³ dx = 0
Solving the above equations we get
w₁ = 1, w₂ = 1
x₁ = −1/√3 = −0.5773502
x₂ = 1/√3 = 0.5773502
Thus, we have the Gaussian quadrature formula for n = 2:
∫₋₁¹ f(x)dx = f(−1/√3) + f(1/√3)
This formula gives the correct value of the integral of f(x) over the range (−1, 1) for any
polynomial up to third order. The above equation is also called the Gauss-Legendre formula.
Example: Compute ∫₋₁¹ eˣ dx using the two-point Gaussian integration formula.
I = ∫₋₁¹ eˣ dx = w₁f(x₁) + w₂f(x₂)
x₁ = −1/√3 = −0.5773502, w₁ = 1
x₂ = 1/√3 = 0.5773502, w₂ = 1
f(x) = eˣ
We know that
I = w₁f(−1/√3) + w₂f(1/√3)
  = e^(−1/√3) + e^(1/√3)
  = 0.5614 + 1.7813
  = 2.3427
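A minimal two-point Gauss-Legendre sketch is given below. It already includes the change of variable x = ((b−a)/2)z + (b+a)/2 described in the next subsection, so that it works on a general interval [a, b]; on [−1, 1] it reduces to the formula above.

```python
# Two-point Gauss-Legendre quadrature on [a, b]; both weights are 1 on [-1, 1].
import math

def gauss2(f, a, b):
    z = 1 / math.sqrt(3)
    half = (b - a) / 2
    mid = (b + a) / 2
    return half * (f(mid - half * z) + f(mid + half * z))

print(gauss2(math.exp, -1, 1))   # approx 2.3427 (exact value e - 1/e = 2.3504)
```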
Changing limits of Integration
Note that the Gaussian formula imposes a restriction on the limits of integration,
namely from −1 to 1. The restriction can be overcome by using the technique of
interval transformation used in calculus. Let
∫ₐᵇ f(x)dx = c∫₋₁¹ g(z)dz,  i.e. x = Az + B
This must satisfy the conditions: at x = a, z = −1 and at x = b, z = 1. Solving for A and B gives
x = ((b − a)/2)z + (b + a)/2
dx = ((b − a)/2)dz
Here c = (b − a)/2, so
∫ₐᵇ f(x)dx = ((b − a)/2)∫₋₁¹ g(z)dz
where wᵢ and zᵢ are the weights and quadrature points for the integration domain
(−1, 1). Here, for n = 2,
I = ((b − a)/2)∫₋₁¹ g(z)dz = ((b − a)/2)Σᵢ₌₁ⁿ wᵢg(zᵢ) = ((b − a)/2)[w₁g(z₁) + w₂g(z₂)]
Example: evaluate ∫₋₂² e^(−x/2) dx using the two-point formula.
x = ((b − a)/2)z + (b + a)/2
  = ((2 − (−2))/2)z + (2 + (−2))/2
  = 2z
∴ g(z) = e^(−x/2) = e^(−2z/2) = e^(−z)
w₁ = w₂ = 1
z₁ = −1/√3,  z₂ = 1/√3
I = ((b − a)/2)[w₁g(z₁) + w₂g(z₂)]
  = ((2 − (−2))/2)[e^(−(−1/√3)) + e^(−(1/√3))]
  = 2(1.7813 + 0.5614)
  = 4.6854
Values for gaussian quadrature
Number of terms Values of x Weighting factor Valid to degree
2 -0.5773502 1 3
0.5773502 1
3 -0.77459667 0.55555555
0 0.88888889 5
0.77459667 0.55555555
4 -0.86113631 0.34785485
-0.33998104 0.65214515 7
0.33998104 0.65214515
0.86113631 0.34785485
Example: Use the Gaussian integration 3-point formula to evaluate ∫₂⁴ (x⁴ + 1)dx
I = ((b − a)/2)[w₁g(z₁) + w₂g(z₂) + w₃g(z₃)]
x = ((b − a)/2)z + (a + b)/2
  = ((4 − 2)/2)z + (4 + 2)/2
  = z + 3
∴ g(z) = (z + 3)⁴ + 1
For n = 3:
w₁ = 0.55556, z₁ = −0.77460
w₂ = 0.88889, z₂ = 0
w₃ = 0.55556, z₃ = 0.77460
I = 0.55556[(−0.77460 + 3)⁴ + 1] + 0.88889[(0 + 3)⁴ + 1] + 0.55556[(0.77460 + 3)⁴ + 1]
  = 200.4014
Chapter 4: solution of Linear Algebraic Equations
Linear equations:
Mathematical models of many real world problems are either linear
or can be approximated reasonably well using linear relationships. Analysis of
linear relationships between variables is generally easier than that of non-linear
relationships.
A linear equation involving two variables x and y has the standard form 𝑎𝑥 +
𝑏𝑦 = 𝑐, where a, b& c are real numbers and a and b both cannot be equal to zero.
The equation becomes non-linear if any of the variables has the exponent other
than one, example
4𝑥 + 5𝑦 = 15 𝑙𝑖𝑛𝑒𝑎𝑟
4𝑥 − 𝑥𝑦 + 5𝑦 = 15 𝑛𝑜𝑛 − 𝑙𝑖𝑛𝑒𝑎𝑟
𝑥 2 + 5𝑦 2 = 15 𝑛𝑜𝑛 − 𝑙𝑖𝑛𝑒𝑎𝑟
𝑥1 − 2𝑥2 = −7
Existence of solution
In solving system of equations, we find values of variables that satisfy all
equations in the system simultaneously. There may be 4 possibilities in solving
the equations.
Elimination method
The elimination method is a method of solving simultaneous linear equations. It
involves eliminating a term containing one of the unknowns in all but one
equation; one such step reduces the order of the system by one, and repeated
elimination leads finally to one equation with one unknown.
x₁ − x₂ + 3x₃ = 13 …… (3)
Eliminating x₁ from the other equations using the first gives
4x₁ − 2x₂ + x₃ = 15
−10x₂ + 19x₃ = 77
−2x₂ + 11x₃ = 37
Now to eliminate 𝑥2 from third equation multiply second row by 2 and third row
by -10 and adding
4𝑥1 − 2𝑥2 + 𝑥3 = 15
−10𝑥2 +19𝑥3 = 77
−72𝑥3 = −216
Now we have a triangular system, and the solution is readily obtained by back-substitution:
x₃ = 3
x₂ = (77 − 19×3)/(−10) = −2
x₁ = (15 + 2×(−2) − 3)/4 = 2
Gauss Elimination Method
The procedure in the above example may not be satisfactory for large systems,
because the transformed coefficients can become very large as we convert to a
triangular system. So we use another method, called the Gaussian elimination
method, that avoids this by subtracting (aᵢ₁/a₁₁) times the first equation from the
ith equation to make the transformed numbers in the first column equal to zero,
and proceeding on in the same way for the remaining columns.
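A minimal sketch of elimination followed by back-substitution is given below. Partial pivoting (swapping in the row with the largest pivot) is included, matching the pivoting example later in this chapter; the test system is the one solved in the Gauss Jordan example.

```python
# Gaussian elimination with partial pivoting and back-substitution.
def gauss_solve(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]      # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                        # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[2, 4, -6], [1, 3, 1], [2, -4, -2]]
print(gauss_solve(A, [-8, 10, -12]))    # [1.0, 2.0, 3.0]
```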
Example (without pivoting element)
Augmented matrix is
0.143 0.357 2.01 −5.173
[−1.31 0.911 1.99 −5.458]
11.2 −4.30 −0.605 4.415
R₂ → (R₁/0.143)×1.31 + R₂,   R₃ → (R₁/0.143)×(−11.2) + R₃
After elimination and back-substitution, x₃ = 1.8/(−0.6) = −3.001
Example (with pivoting element)
Augmented matrix is
0.143 0.357 2.01 −5.173
[−1.31 0.911 1.99 −5.458]
11.2 −4.30 −0.605 4.415
𝑅1 ↔ 𝑅3 (𝑃𝑖𝑣𝑜𝑡𝑖𝑛𝑔)
11.2 −4.30 −0.605 4.415
[−1.31 0.911 1.99 −5.458]
0.143 0.357 2.01 −5.173
R₂ → (R₁/11.2)×1.31 + R₂,   R₃ → (R₁/11.2)×(−0.143) + R₃
After elimination and back-substitution, x₃ = 0.236/(−0.079) = −2.990
x₁ = (4.415 + 4.30×1.953 + 0.605×(−2.990))/11.2
x₁ = 0.982
Gauss Jordan Method
Example: solve the system
2x₁ + 4x₂ − 6x₃ = −8
x₁ + 3x₂ + x₃ = 10
2x₁ − 4x₂ − 2x₃ = −12
Augmented matrix is
2 4 −6 −8
[1 3 1 10 ]
2 −4 −2 −12
R₁ → (R₁/2)
1 2 −3 −4
[1 3 1 10 ]
2 −4 −2 −12
𝑅2 → 𝑅1 − 𝑅2 , 𝑅3 → −2𝑅1 + 𝑅3
1 2 −3 −4
[0 −1 −4 −14]
0 −8 4 −4
R₂ → (R₂/−1)
1 2 −3 −4
0 1 4 14
0 −8 4 −4
𝑅1 → −2𝑅2 + 𝑅1 , 𝑅3 → 8𝑅2 + 𝑅3
1 0 −11 −32
0 1 4 14
0 0 36 108
R₃ → (R₃/36)
1 0 −11 −32
0 1 4 14
0 0 1 3
𝑅1 → 11𝑅3 + 𝑅1 , 𝑅2 → −4𝑅3 + 𝑅2
1 0 0 1
0 1 0 2
0 0 1 3
Hence 𝑥1 = 1, 𝑥2 = 2, 𝑥3 = 3
The same system (2x₁ + 4x₂ − 6x₃ = −8, x₁ + 3x₂ + x₃ = 10, 2x₁ − 4x₂ − 2x₃ = −12) solved again,
this time with partial pivoting:
Augmented matrix is:
2 4 −6 −8
[1 3 1 10 ]
2 −4 −2 −12
R₁ → (R₁/2)
1 2 −3 −4
[1 3 1 10 ]
2 −4 −2 −12
𝑅2 → 𝑅1 − 𝑅2 , 𝑅3 → −2𝑅1 + 𝑅3
1 2 −3 −4
[0 −1 −4 −14]
0 −8 4 −4
R₂ ↔ R₃
1  2  −3  −4
0  −8  4  −4
0  −1  −4  −14
R₂ → (R₂/−8)
1  2  −3  −4
0  1  −0.5  0.5
0  −1  −4  −14
R₁ → −2R₂ + R₁,  R₃ → R₂ + R₃
1  0  −2  −5
0  1  −0.5  0.5
0  0  −4.5  −13.5
R₃ → (R₃/−4.5)
1  0  −2  −5
0  1  −0.5  0.5
0  0  1  3
𝑅1 → 2𝑅3 + 𝑅1 , 𝑅2 → 0.5𝑅3 + 𝑅2
1 0 0 1
0 1 0 2
0 0 1 3
Hence 𝑥1 = 1, 𝑥2 = 2, 𝑥3 = 3
Example: Given matrix A, find the inverse of A using Gauss Jordan method.
1 −1 2
𝐴 = [3 0 1]
1 0 2
1 −1 2 1 0 0
The augmented matrix with identity matrix is [3 0 1 0 1 0]
1 0 2 0 0 1
𝑅2 → −3𝑅1 + 𝑅2 , 𝑅3 → −𝑅1 + 𝑅3
1 −1 2 1 0 0
[0 3 −5 −3 1 0]
0 1 0 −1 0 1
R₂ → (R₂/3)
1 −1 2 1 0 0
[0 1 −1.6667 −1 0.333 0]
0 1 0 −1 0 1
𝑅1 → 𝑅1 + 𝑅2 , 𝑅3 → 𝑅2 − 𝑅3
1 0 0.3333 0 0.3333 0
[0 1 −1.6667 −1 0.3333 0]
0 0 −1.6667 0 0.3333 −1
R₃ → (R₃/−1.6667)
1 0 0.3333 0 0.3333 0
[0 1 −1.6667 −1 0.333 0]
0 0 1 0 −0.2 0.6
𝑅1 → −0.3333𝑅3 + 𝑅1 , 𝑅2 → 1.6667𝑅3 + 𝑅2
1 0 0 0 0.4 −0.2
[0 1 0 −1 0 1 ]
0 0 1 0 −0.2 0.6
0 0.4 −0.2
−1
𝐴 = [−1 0 1 ]
0 −0.2 0.6
Practice: Find the inverse of the following matrix using Gauss Jordan
elimination method.
2 3 4
𝐴 = [4 2 3]
3 4 2
Method of factorization
Consider the following system of equations
𝐴𝑋 = 𝐵
A = [a₁₁ a₁₂ a₁₃; a₂₁ a₂₂ a₂₃; a₃₁ a₃₂ a₃₃],   X = [x₁; x₂; x₃],   B = [b₁; b₂; b₃]
In this method we use the fact that the square matrix A can be factorized into the
form LU, where L is a lower triangular matrix and U is an upper triangular matrix,
such that A = LU.
𝑙11 0 0 𝑢11 𝑢12 𝑢13
𝐿 = [𝑙21 𝑙22 0] 𝑈=[ 0 𝑢22 𝑢23 ]
𝑙31 𝑙32 𝑙33 0 0 𝑢33
𝐿𝑈𝑋 = 𝐵
Dolittle LU decomposition:
1 0 0 𝑢11 𝑢12 𝑢13 𝑎11 𝑎12 𝑎13
[𝑙21 1 0] [ 0 𝑢22 𝑢23 ] = [𝑎21 𝑎22 𝑎23 ]
𝑙31 𝑙32 1 0 0 𝑢33 𝑎31 𝑎32 𝑎33
𝑢11 𝑢12 𝑢13 𝑎11 𝑎12 𝑎13
[𝑙21 𝑢11 𝑙21 𝑢12 + 𝑢22 𝑙21 𝑢13 + 𝑢23 ] = [𝑎21 𝑎22 𝑎23 ]
𝑙31 𝑢11 𝑙31 𝑢12 + 𝑙32 𝑢22 𝑙31 𝑢13 + 𝑙32 𝑢23 + 𝑢33 𝑎31 𝑎32 𝑎33
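The equating-of-entries scheme above can be written as a short routine. The sketch below is a minimal Doolittle factorization (unit diagonal in L); run on the matrix of the worked example that follows, it reproduces l₂₁ = −1/2, l₃₁ = 5/2, l₃₂ = 19/5 and u₃₃ = −253/5.

```python
# Doolittle LU sketch: compute one row of U and one column of L at a time.
def doolittle(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):                    # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = doolittle([[2, -3, 10], [-1, 4, 2], [5, 2, 1]])
print(L)   # [[1, 0, 0], [-0.5, 1, 0], [2.5, 3.8, 1]]
print(U)   # [[2, -3, 10], [0, 2.5, 7], [0, 0, -50.6]]
```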
2𝑥 − 3𝑦 + 10𝑧 = 3
−𝑥 + 4𝑦 + 2𝑧 = 20
5𝑥 + 2𝑦 + 𝑧 = −12
Where,
2 −3 10 𝑥 3
𝐴 = [−1 4 2 ] 𝑋 = [ 𝑦 ] 𝐵 = [ 20 ] here, A=LU
5 2 1 𝑧 −12
1 0 0 𝑢11 𝑢12 𝑢13 2 −3 10
[𝑙21 1 0] [ 0 𝑢22 𝑢23 ] = [−1 4 2]
𝑙31 𝑙32 1 0 0 𝑢33 5 2 1
𝑢11 𝑢12 𝑢13 2 −3 10
[𝑙21 𝑢11 𝑙21 𝑢12 + 𝑢22 𝑙21 𝑢13 + 𝑢23 ] = [−1 4 2]
𝑙31 𝑢11 𝑙31 𝑢12 + 𝑙32 𝑢22 𝑙31 𝑢13 + 𝑙32 𝑢23 + 𝑢33 5 2 1
𝑙21 𝑢11 = −1
𝑙21 = − 1⁄2
𝑢22 = 5⁄2
𝑢23 = 7
𝑙31 𝑢11 = 5
𝑙31 = 5⁄2
𝑙32 = 19⁄5
𝑢33 = − 253⁄5
So, we have,
L = [1 0 0; −1/2 1 0; 5/2 19/5 1]    U = [2 −3 10; 0 5/2 7; 0 0 −253/5]
Now LZ = B, where Z is a 3×1 column matrix:
[1 0 0; −1/2 1 0; 5/2 19/5 1][Z₁; Z₂; Z₃] = [3; 20; −12]
Z₁ = 3
−(1/2)Z₁ + Z₂ = 20,  so Z₂ = 43/2
(5/2)Z₁ + (19/5)Z₂ + Z₃ = −12,  so Z₃ = −506/5
Now UX = Z:
[2 −3 10; 0 5/2 7; 0 0 −253/5][x; y; z] = [3; 43/2; −506/5]
z = 2
(5/2)y + 7z = 43/2,  so y = 3
2x − 3y + 10z = 3,  so x = −4
𝑥 + 3𝑦 + 8𝑧 = 4
𝑥 + 4𝑦 + 3𝑧 = −2
𝑥 + 3𝑦 + 4𝑧 = 1
Crout’s method
𝑙11 0 0 1 𝑢12 𝑢13 𝑎11 𝑎12 𝑎13
[𝑙21 𝑙22 0 ] [0 1 𝑢23 ] = [𝑎21 𝑎22 𝑎23 ]
𝑙31 𝑙32 𝑙33 0 0 1 𝑎31 𝑎32 𝑎33
2𝑥 − 3𝑦 + 10𝑧 = 3
−𝑥 + 4𝑦 + 2𝑧 = 20
5𝑥 + 2𝑦 + 𝑧 = −12
𝑙11 𝑢12 = −3
𝑢12 = − 3⁄2
𝑙11 𝑢13 = 10
𝑢13 = 10⁄2 = 5
𝑙22 = 5⁄2
𝑢23 = 14⁄5
𝑙32 = 19⁄2
𝑙33 = − 253⁄5
So, we have,
L = [2 0 0; −1 5/2 0; 5 19/2 −253/5]    U = [1 −3/2 5; 0 1 14/5; 0 0 1]
Now LZ = B:
[2 0 0; −1 5/2 0; 5 19/2 −253/5][Z₁; Z₂; Z₃] = [3; 20; −12]
2Z₁ = 3,  so Z₁ = 3/2
−Z₁ + (5/2)Z₂ = 20,  so Z₂ = 43/5
5Z₁ + (19/2)Z₂ − (253/5)Z₃ = −12,  so Z₃ = 2
Now UX = Z:
[1 −3/2 5; 0 1 14/5; 0 0 1][x; y; z] = [3/2; 43/5; 2]
z = 2
y + (14/5)z = 43/5,  so y = 3
x − (3/2)y + 5z = 3/2,  so x = −4
∴ x = −4, y = 3, z = 2
Choleskys method:
In case of A is symmetric, the LU decomposition can be modified so that upper
factor in matrix is the transpose of the lower one (vice versa)
i.e.
A = LLᵀ = UᵀU
[l₁₁ 0 0; l₂₁ l₂₂ 0; l₃₁ l₃₂ l₃₃][l₁₁ l₂₁ l₃₁; 0 l₂₂ l₃₂; 0 0 l₃₃] = [a₁₁ a₁₂ a₁₃; a₂₁ a₂₂ a₂₃; a₃₁ a₃₂ a₃₃]
Symmetric matrix
A square matrix 𝑨 = [𝒂𝒊𝒋 ] is called symmetric if 𝒂𝒊𝒋 = 𝒂𝒋𝒊 for all i and j
Equating corresponding elements gives l11² = a11, so l11 = √a11, and the remaining
entries of L follow one after another in the same way.
Practice: Find the Cholesky decomposition of the matrix
A = [4 1 1]
    [1 5 2]
    [1 2 3]
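For checking such practice problems, here is a small Python sketch of the Cholesky factorization (it assumes A is symmetric and positive definite; the function name is illustrative).

```python
import math

def cholesky(A):
    """Return the lower-triangular L with A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)     # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]    # below-diagonal entry
    return L

print(cholesky([[4, 1, 1], [1, 5, 2], [1, 2, 3]]))
```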
When the system of equations can be ordered so that each diagonal entry of the
coefficient matrix is larger in magnitude than the sum of the magnitudes of the other
coefficients in that row, the system is called diagonally dominant, and the iteration
will converge for any starting values. Formally, an n×n matrix A is diagonally
dominant if and only if, for each i = 1, 2, 3, …, n,
|aii| > Σ (j ≠ i) |aij|
1. Gauss Jacobi Iteration method
The iterative method depends on arranging the equations in this manner and solving
each one for the variable on its diagonal:
x1 = [b1 − (a12 x2 + a13 x3 + … + a1n xn)] / a11
x2 = [b2 − (a21 x1 + a23 x3 + … + a2n xn)] / a22
…
xn = [bn − (an1 x1 + an2 x2 + … + an,n−1 xn−1)] / ann
Now we can compute x1, x2, …, xn by using an initial guess for these values. The new
values are again used to compute the next set of x values, and the process continues
till we obtain the desired level of accuracy in the x values.
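A minimal Python sketch of the Jacobi iteration, applied to the example system solved below (the function name and tolerance are illustrative choices):

```python
def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi iteration: every new component is computed from the previous vector."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

A = [[6, -2, 1], [-2, 7, 2], [1, 2, -5]]
b = [11, 5, -1]
print(jacobi(A, b, [0.0, 0.0, 0.0]))     # converges to about (2, 1, 1)
```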
Example: Solve the following system using the Jacobi method.
6x1 − 2x2 + x3 = 11
−2x1 + 7x2 + 2x3 = 5
x1 + 2x2 − 5x3 = −1
Now,
x1 = [11 − (−2x2 + x3)] / 6
x2 = [5 − (−2x1 + 2x3)] / 7
x3 = −[−1 − (x1 + 2x2)] / 5
We can simplify as:
x1 = 11/6 + (2/6)x2 − (1/6)x3
x2 = 5/7 + (2/7)x1 − (2/7)x3
x3 = 1/5 + (1/5)x1 + (2/5)x2
We begin with some initial approximation to the values of the variables, say
x1 = 0, x2 = 0, x3 = 0
1 Iteration
x1 = 1.833333
x2 = 0.714286
x3 = 0.200000
2 Iteration
𝑥1 =2.038095
𝑥2 =1.180952
𝑥3 =0.852381
3 Iteration
𝑥1 =2.084921
𝑥2 =1.053061
𝑥3 =1.080000
4 Iteration
𝑥1 =2.004354
𝑥2 =1.001406
𝑥3 =1.038209
5 Iteration
𝑥1 =1.994100
𝑥2 =0.990327
𝑥3 =1.001433
6 Iteration
𝑥1 =1.996537
𝑥2 =0.997905
𝑥3 =0.994951
7 Iteration
𝑥1 =2.000143
𝑥2 =1.000453
x3 = 0.998469
8 Iteration
𝑥1 =2.000406
𝑥2 =1.000478
𝑥3 =1.000210
9 Iteration
𝑥1 =2.000124
𝑥2 =1.000056
𝑥3 =1.000273
10 Iteration
𝑥1 =1.999973
𝑥2 =0.999958
𝑥3 =1.000047
11 Iteration
𝑥1 =1.999978
𝑥2 =0.999979
𝑥3 =0.999978
12 Iteration
x1 = 1.999997
x2 = 1.000000
x3 = 0.999987
Since successive iterates now agree to the desired accuracy, the final result is
x1 = 1.999997, x2 = 1.000000, x3 = 0.999987, i.e. approximately (2, 1, 1).
−𝑥1 − 𝑥2 + 10𝑥3 − 2𝑥4 = 27
result
𝑥1 = 1, 𝑥2 = 2, 𝑥3 = 3, 𝑥4 = 0,
2. Gauss Seidel Iteration method
Again we compute x1, x2, …, xn starting from an initial guess, but here the updated
values of x1, x2, …, xn are used immediately in calculating the remaining x values
within the same iteration, and the process is repeated till we obtain the desired
level of accuracy. This method converges more rapidly than the Gauss Jacobi method;
the rate of convergence of the Gauss Seidel method is roughly twice that of Gauss
Jacobi.
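A Python sketch of the Gauss Seidel iteration (names and tolerance are illustrative), applied to the example solved below:

```python
def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration: updated components are used immediately."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x

A = [[8, -3, 2], [4, 11, -1], [6, 3, 12]]
b = [20, 33, 35]
print(gauss_seidel(A, b, [0.0, 0.0, 0.0]))   # about (3.0168, 1.9859, 0.9118)
```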
Example: Solve the following system by the Gauss Seidel method.
8x1 − 3x2 + 2x3 = 20
4x1 + 11x2 − x3 = 33
6x1 + 3x2 + 12x3 = 35
The equations are written so that the coefficient matrix is diagonally dominant.
Now
x1 = (20 + 3x2 − 2x3) / 8
x2 = (33 − 4x1 + x3) / 11
x3 = (35 − 6x1 − 3x2) / 12
We begin with some initial approximation, say x2 = 0, x3 = 0.
1 iteration
x1 = (20 + 3×0 − 2×0) / 8 = 2.5
x2 = (33 − 4×2.5 + 0) / 11 = 2.0909
x3 = (35 − 6×2.5 − 3×2.0909) / 12 = 1.1439
2 iteration
x1 = (20 + 3×2.0909 − 2×1.1439) / 8 = 2.9981
x2 = (33 − 4×2.9981 + 1.1439) / 11 = 2.0138
x3 = (35 − 6×2.9981 − 3×2.0138) / 12 = 0.9142
3 iteration
x1 = (20 + 3×2.0138 − 2×0.9142) / 8 = 3.0266
x2 = (33 − 4×3.0266 + 0.9142) / 11 = 1.9825
x3 = (35 − 6×3.0266 − 3×1.9825) / 12 = 0.9077
4 iteration
x1 = (20 + 3×1.9825 − 2×0.9077) / 8 = 3.0165
x2 = (33 − 4×3.0165 + 0.9077) / 11 = 1.9856
x3 = (35 − 6×3.0165 − 3×1.9856) / 12 = 0.9120
5 iteration
x1 = (20 + 3×1.9856 − 2×0.9120) / 8 = 3.0166
x2 = (33 − 4×3.0166 + 0.9120) / 11 = 1.9860
x3 = (35 − 6×3.0166 − 3×1.9860) / 12 = 0.9119
6 iteration
x1 = (20 + 3×1.9860 − 2×0.9119) / 8 = 3.0168
x2 = (33 − 4×3.0168 + 0.9119) / 11 = 1.9859
x3 = (35 − 6×3.0168 − 3×1.9859) / 12 = 0.9118
7 iteration
x1 = (20 + 3×1.9859 − 2×0.9118) / 8 = 3.0168
x2 = (33 − 4×3.0168 + 0.9118) / 11 = 1.9859
x3 = (35 − 6×3.0168 − 3×1.9859) / 12 = 0.9118
Since the 6th and 7th approximations agree to 4 decimal places, the solution vector is
x1 = 3.0168, x2 = 1.9859, x3 = 0.9118
Practice:
2𝑥1 + 𝑥2 + 𝑥3 = 5
2𝑥1 + 𝑥2 + 4𝑥3 = 8
Practice: Solve the following system using the Jacobi and Gauss Seidel methods
x + 9y − z = 10
2x − y + 11z = 20
3. Relaxation Iterative method:
In the relaxation method we write the residual of each equation and, at each step,
reduce the residual of largest magnitude to zero by adjusting the corresponding
variable. For the system 10x − 2y + z = 12, x + 9y − z = 10, 2x − y + 11z = 20 the
residuals are
R1 = 12 − 10x + 2y − z
R2 = 10 − x − 9y + z
R3 = 20 − 2x + y − 11z
The increments in x, y, z are dx = −R1/(−10), dy = −R2/(−9) and dz = −R3/(−11).
Iterative Table:
i   x       y       z       R1        R2        R3        Increment
1   0       0       0       12        10        20        dz = −20/(−11) = 1.8182
2   0       0       1.8182  10.1818   11.8182   −0.0002   dy = −11.8182/(−9) = 1.3131
3   0       1.3131  1.8182  12.8080    0.0003    1.3129   dx = −12.8080/(−10) = 1.2808
4   1.2808  1.3131  1.8182   0        −1.2805   −1.2487   dy = −(−1.2805)/(−9) = −0.1423
5   1.2808  1.1708  1.8182  −0.2846    0.0002   −1.3910   dz = −(−1.3910)/(−11) = −0.1265
6   1.2808  1.1708  1.6917  −0.1581   −0.1263    0.0005   dx = −(−0.1581)/(−10) = −0.0158
7   1.2650  1.1708  1.6917  −0.0001   −0.1105    0.0321   dy = −(−0.1105)/(−9) = −0.0123
Therefore, the solution vector is X = [x, y, z]ᵀ ≈ [1.2650, 1.1708, 1.6917]. Ans.
Power method:
The power method is an iterative method used for determining the dominant (largest
magnitude) eigenvalue of a matrix and its corresponding eigenvector. It is implemented
using an initial starting vector X, which can be arbitrary if no suitable approximation
is available. The power method is implemented as follows:
Y = AX ………(a)
X = Y / k ………(b)
The new value of X obtained in (b) is then used in equation (a) to compute a new value
of Y, and the process is repeated until the desired level of accuracy is obtained. The
parameter k, called the scaling factor, is the element of Y with the largest magnitude.
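A Python sketch of the power method as described above (the function name and convergence test are illustrative), applied to the example worked next:

```python
def power_method(A, x0, max_iter=50, tol=1e-6):
    """Estimate the dominant eigenvalue and eigenvector by repeated multiplication."""
    x = x0[:]
    k = 1.0
    for _ in range(max_iter):
        y = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]   # Y = AX
        k_new = max(y, key=abs)                                           # scaling factor
        x_new = [v / k_new for v in y]                                    # X = Y / k
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return k_new, x_new
        k, x = k_new, x_new
    return k, x

lam, vec = power_method([[1, 2, 0], [2, 1, 0], [0, 0, -1]], [0.0, 1.0, 0.0])
print(lam, vec)     # approximately 3 and [1, 1, 0]
```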
Example: Find the largest eigenvalue λ and the corresponding eigenvector of the
following matrix using the power method.
A = [1 2  0]
    [2 1  0]
    [0 0 −1]
Solution: Let X = [0, 1, 0]ᵀ be the initial guess for the eigenvector.
Iteration 1: Y = AX = [2, 1, 0]ᵀ,      k = 2,     X = Y/k = [1, 0.5, 0]ᵀ
Iteration 2: Y = [2, 2.5, 0]ᵀ,         k = 2.5,   X = [0.8, 1, 0]ᵀ
Iteration 3: Y = [2.8, 2.6, 0]ᵀ,       k = 2.8,   X = [1, 0.929, 0]ᵀ
Iteration 4: Y = [2.858, 2.928, 0]ᵀ,   k = 2.928, X = [0.976, 1, 0]ᵀ
Iteration 5: Y = [2.976, 2.952, 0]ᵀ,   k = 2.976, X = [1, 0.992, 0]ᵀ
Iteration 6: Y = [2.984, 2.992, 0]ᵀ,   k = 2.992, X = [0.997, 1, 0]ᵀ
Iteration 7: Y = [2.997, 2.994, 0]ᵀ,   k = 2.997, X = [1, 0.999, 0]ᵀ
Iteration 8: Y = [2.998, 2.999, 0]ᵀ,   k = 2.999, X = [1, 1, 0]ᵀ
Iteration 9: Y = [3, 3, 0]ᵀ,           k = 3,     X = [1, 1, 0]ᵀ
Since X is the same for the 8th and 9th iterations, the dominant eigenvalue is λ = 3
and the corresponding eigenvector is X = [1, 1, 0]ᵀ.
Practice: Determine the numerically largest eigenvalue and the corresponding
eigenvector of the following matrix using the power method.
[25 1  2]
[ 1 3  0]
[ 2 0 −4]
Answer: eigenvalue ≈ 25.18, eigenvector ≈ [1, 0.04508, 0.06854]ᵀ
Chapter 5: Solution of ordinary differential equations
Examples:
1. Kirchhoff's law: L di/dt + iR = v
2. m dv/dt = F
3. m d²y/dt² + a dy/dt + ky = 0
Here
• If the equation contains only one independent variable, it is called an ordinary
differential equation.
• If the equation contains more than one independent variable, it is called a partial
differential equation, e.g.
∂²u/∂x² + ∂²u/∂y² = f(x, y)
Order of equation:
The highest derivative that appears in the equation is called order. If there is only
first derivative then it called first order differential equation.
Degree of equation:
The degree of differential equation is the power of the highest order derivative
𝑥𝑦 " + 𝑥 2 𝑦 ′ = 2𝑦 + 3 𝑜𝑟𝑑𝑒𝑟 = 2, 𝑑𝑒𝑔𝑟𝑒𝑒 = 1
(𝑦 "′ )2 + 5𝑦 ′ = 2𝑦 + 3 𝑜𝑟𝑑𝑒𝑟 = 3, 𝑑𝑒𝑔𝑟𝑒𝑒 = 2
Example
y″ = f(x, y, y′), y(a) = A, y(b) = B, where a and b are two different points.
1. Taylor series method
Here y0′, y0″, y0‴, … can be found using the given equation y′ = f(x, y) and its
successive differentiation at x = x0. The Taylor series
y1 = y(x0 + h) = y0 + h y0′ + (h²/2!) y0″ + (h³/3!) y0‴ + …
can be truncated at any stage if h is small. Having obtained y1, we can calculate
y1′, y1″, y1‴ from the differential equation at x = x0 + h and repeat the process.
If a Taylor series is truncated while there are still non-zero derivatives of higher
order, the truncated power series will not be exact; when the series is truncated
after the nth term the error is of order h^(n+1).
Example:
Using the Taylor series method, solve dy/dx = x² − y, y(0) = 1 at x = 0.1, 0.2, 0.3 and 0.4.
Solution
Given y′ = x² − y, y(0) = 1. Differentiating successively,
y′ = x² − y     so y0′ = 0 − 1 = −1
y″ = 2x − y′    so y0″ = 0 + 1 = 1
y‴ = 2 − y″     so y0‴ = 2 − 1 = 1
y⁗ = −y‴        so y0⁗ = −1
By the Taylor series with h = 0.1,
y1 = y(0.1) = 1 + 0.1(−1) + (0.1²/2)(1) + (0.1³/6)(1) + (0.1⁴/24)(−1) = 0.90516
Now, evaluating the derivatives again at x = 0.1, y = 0.90516 and expanding the Taylor
series about x = 0.1,
y2 = y(0.2) = 0.821266
Repeating the same procedure gives y(0.3) and then y(0.4); in the last step, for
example, y3⁗ = −y3‴ = −0.740849.
2. Euler’s method:
Euler’s method is the simplest one step method. It has limited application because
of its low accuracy. From Taylor’s theorem we have
y(x) = y(x0) + (x − x0) y′(x0)/1! + (x − x0)² y″(x0)/2! + …
Retaining only the first two terms and writing h = x1 − x0, we get
y1 = y0 + h f(x0, y0)
Similarly,
y2 = y1 + h f(x1, y1)
In general
𝑦𝑖+1 = 𝑦𝑖 + ℎ𝑓(𝑥𝑖 , 𝑦𝑖 )
This formula is known as Euler’s method and can be used recursively to evaluate
y1, y2 … starting from the initial condition 𝑦0 = 𝑦(𝑥0 )
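A short Python sketch of Euler's method (the small tolerance in the loop bound is only a floating point precaution; the function name is illustrative):

```python
def euler(f, x0, y0, h, x_end):
    """March y' = f(x, y) from (x0, y0) to x_end with step h using Euler's formula."""
    x, y = x0, y0
    while x < x_end - 1e-12:
        y = y + h * f(x, y)       # y_{i+1} = y_i + h f(x_i, y_i)
        x = x + h
    return y

f = lambda x, y: 3 * x ** 2 + 1
print(euler(f, 1.0, 2.0, 0.5, 2.0))    # 7.875
print(euler(f, 1.0, 2.0, 0.25, 2.0))   # about 8.906
```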
Example: Solve y′(x) = f(x, y) = 3x² + 1 with y(1) = 2 to estimate y(2), taking
(a) h = 0.5 and (b) h = 0.25.
Here y(x0) = y0 with x0 = 1, y0 = 2.
We know that yi+1 = yi + h f(xi, yi).
a. h = 0.5
y1 = y(1 + 0.5) = y(1.5) = y0 + h f(x0, y0) = 2 + 0.5 × (3 × 1² + 1) = 4
y2 = y(1.5 + 0.5) = y(2.0) = y1 + h f(x1, y1) = 4 + 0.5 × (3 × 1.5² + 1) = 7.875
∴ y(2) = 7.875
b. h = 0.25
y(1) = 2
y1 = y(1.25) = y0 + h f(x0, y0) = 2 + 0.25 × (3 × 1² + 1) = 3
y2 = y(1.5)  = y1 + h f(x1, y1) = 3 + 0.25 × (3 × 1.25² + 1) = 4.4218
y3 = y(1.75) = y2 + h f(x2, y2) = 4.4218 + 0.25 × (3 × 1.5² + 1) = 6.3593
y4 = y(2.0)  = y3 + h f(x3, y3) = 6.3593 + 0.25 × (3 × 1.75² + 1) = 8.9061
∴ y(2.0) = 8.9061
3. Heun's method:
Euler's method is the simplest of all one step methods and is easy to implement on
computers, but one of its major weaknesses is a large truncation error, because it
uses only the first two terms of the Taylor series. Heun's method, also called the
improved Euler's method, reduces this error.
In Euler’s method the slope at the beginning of the interval is used to extrapolate
yi to yi+1 over the entire interval, thus 𝑦𝑖+1 = 𝑦𝑖 + 𝑚1 ℎ. . . . . . . . 𝑎 where m1 is the
slope at(xi,yi).
Alternative is to use the line which is parallel to the tangent at the point
[𝑥𝑖+1 , 𝑦(𝑥𝑖+1 )] to extrapolate from 𝑦𝑖 𝑡𝑜 𝑦𝑖+1
𝑦𝑖+1 = 𝑦𝑖 + 𝑚2 ℎ. . . . . . . . 𝑏
Where, m2 is the slope at [𝑥𝑖+1 , 𝑦(𝑥𝑖+1 )]. Note that the estimate appears to be
overestimated.
Now a third approach is to use a line whose slope is the average of the slopes at
the end points of the interval, i.e
yi+1 = yi + ((m1 + m2)/2) h ……..c
This gives the better approximation to 𝑦𝑖+1 , this approach is known as Heun’s
method.
For y′(x) = f(x, y) we can obtain:
m1 = y′(xi) = f(xi, yi)
m2 = f(xi + h, yi + m1 h)    (the slope at the point predicted by Euler's formula)
yi+1 = yi + ((m1 + m2)/2) h
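A Python sketch of Heun's method in this form (names are illustrative), applied to the example solved next:

```python
def heun(f, x0, y0, h, x_end):
    """Heun's (improved Euler) method: average of the slopes at both ends of the step."""
    x, y = x0, y0
    while x < x_end - 1e-12:
        m1 = f(x, y)                    # slope at the start of the interval
        m2 = f(x + h, y + m1 * h)       # slope at the predicted end point
        y = y + 0.5 * (m1 + m2) * h
        x = x + h
    return y

print(heun(lambda x, y: 2 * y / x, 1.0, 2.0, 0.25, 2.0))   # about 7.86 (exact value is 8)
```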
Example: Given the equation y′(x) = 2y/x with y(1) = 2, estimate y(2) using
1) Euler's method and 2) Heun's method, and compare the results. Take h = 0.25.
I. Euler's method
h = 0.25, y(1) = 2
y(1.25) = y1 = y0 + h f(x0, y0) = 2 + 0.25 × (2×2/1) = 3
y(1.5)  = y2 = y1 + h f(x1, y1) = 3 + 0.25 × (2×3/1.25) = 4.2
y(1.75) = y3 = y2 + h f(x2, y2) = 4.2 + 0.25 × (2×4.2/1.5) = 5.6
y(2)    = y4 = y3 + h f(x3, y3) = 5.6 + 0.25 × (2×5.6/1.75) = 7.2
II. Heun's method
Iteration 1:
m1 = f(x0, y0) = f(1, 2) = 2×2/1 = 4
m2 = f(x0 + h, y0 + m1 h) = f(1.25, 3) = 2×3/1.25 = 4.8
y(1.25) = y0 + ((m1 + m2)/2) h = 2 + ((4 + 4.8)/2) × 0.25 = 3.1
Iteration 2:
y(1.5) = y2 = y1 + ((m1 + m2)/2) h
m1 = f(x1, y1) = f(1.25, 3.1) = 2×3.1/1.25 = 4.96
m2 = f(x1 + h, y1 + m1 h) = f(1.5, 4.34) = 2×4.34/1.5 = 5.7867
y(1.5) = 3.1 + ((4.96 + 5.7867)/2) × 0.25 = 4.44
Iteration 3:
y(1.75) = y3 = y2 + ((m1 + m2)/2) h
m1 = f(x2, y2) = f(1.5, 4.44) = 2×4.44/1.5 = 5.92
m2 = f(x2 + h, y2 + m1 h) = f(1.75, 5.92) = 2×5.92/1.75 = 6.77
y(1.75) = 4.44 + ((5.92 + 6.77)/2) × 0.25 = 6.03
Iteration 4:
y(2) = y4 = y3 + ((m1 + m2)/2) h
m1 = f(x3, y3) = f(1.75, 6.03) = 2×6.03/1.75 = 6.89
m2 = f(x3 + h, y3 + m1 h) = f(1.75 + 0.25, 6.03 + 6.89×0.25) = f(2, 7.75) = 2×7.75/2 = 7.75
y(2) = 6.03 + ((6.89 + 7.75)/2) × 0.25 = 7.86
2 2
The same problem can also be solved using the predictor–corrector form of Heun's
formula shown below; it is the same method written differently, so use whichever form
you find easier.
Iteration 1:
We know that
yi+1 = yi + (h/2)[f(xi, yi) + f(xi+1, yᵉi+1)]
where yᵉi+1 = yi + h f(xi, yi) is the Euler (predictor) estimate.
f(x0, y0) = f(1, 2) = 2×2/1 = 4
yᵉ(1.25) = y(1) + h f(x0, y(1)) = 2 + 0.25 × 4 = 3
y(1.25) = y(1) + (h/2)[f(1, 2) + f(1.25, 3)] = 2 + (0.25/2)[4 + 4.8] = 3.1
Iteration 2:
yᵉ(1.5) = y(1.25) + h f(1.25, 3.1) = 3.1 + 0.25 × 4.96 = 4.34
y(1.5) = y(1.25) + (h/2)[f(1.25, 3.1) + f(1.5, 4.34)] = 3.1 + (0.25/2)[4.96 + 5.7867] = 4.44
The remaining steps proceed exactly as before and again give y(2) ≈ 7.86.
4. Runge Kutta methods
Runge Kutta (RK) methods achieve higher accuracy without computing higher derivatives.
They have the general form
yi+1 = yi + m h
where m represents a slope that is a weighted average of the slopes at various points
in the interval h. RK methods are known by their order: an RK method is called an
r-order Runge Kutta method when slopes at r points are used to construct the weighted
average slope m. Euler's method is the first order RK method because it uses only one
slope, at (xi, yi), to estimate yi+1.
Fourth order Runge Kutta method (classical fourth order Runge Kutta method)
The classical fourth order Runge Kutta method is given as:
yi+1 = yi + ((m1 + 2m2 + 2m3 + m4)/6) h
where
m1 = f(xi, yi)
m2 = f(xi + h/2, yi + m1 h/2)
m3 = f(xi + h/2, yi + m2 h/2)
m4 = f(xi + h, yi + m3 h)
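A Python sketch of the classical RK4 step (illustrative names), applied to the example solved next:

```python
def rk4(f, x0, y0, h, steps):
    """Classical fourth-order Runge-Kutta method for y' = f(x, y)."""
    x, y = x0, y0
    for _ in range(steps):
        m1 = f(x, y)
        m2 = f(x + h / 2, y + m1 * h / 2)
        m3 = f(x + h / 2, y + m2 * h / 2)
        m4 = f(x + h, y + m3 * h)
        y += (m1 + 2 * m2 + 2 * m3 + m4) * h / 6
        x += h
    return y

print(rk4(lambda x, y: x ** 2 + y ** 2, 0.0, 0.0, 0.2, 2))   # y(0.4) about 0.02136
```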
Example: Use the classical RK method to estimate y(0.4) when y′(x) = x² + y² with
y(0) = 0. Assume h = 0.2.
Solution
Given y(0) = 0 and f(x, y) = x² + y². We know that
yi+1 = yi + ((m1 + 2m2 + 2m3 + m4)/6) h
where
m1 = f(xi, yi)
m2 = f(xi + h/2, yi + m1 h/2)
m3 = f(xi + h/2, yi + m2 h/2)
m4 = f(xi + h, yi + m3 h)
Iteration 1:
y(0.2) = y0 + ((m1 + 2m2 + 2m3 + m4)/6) h
m1 = f(x0, y0) = f(0, 0) = 0
m2 = f(x0 + h/2, y0 + m1 h/2) = f(0.1, 0) = 0.1² + 0² = 0.01
m3 = f(x0 + h/2, y0 + m2 h/2) = f(0.1, 0.001) = 0.1² + 0.001² ≈ 0.01
m4 = f(x0 + h, y0 + m3 h) = f(0.2, 0.002) = 0.2² + 0.002² ≈ 0.04
y1 = y(0.2) = 0 + ((0 + 0.02 + 0.02 + 0.04)/6) × 0.2 ≈ 0.00267
Iteration 2 starts from (x1, y1) = (0.2, 0.00267) and repeats the same steps, giving
y(0.4) ≈ 0.02136.
For a pair of simultaneous equations dy/dx = f1(x, y, z), dz/dx = f2(x, y, z), the
fourth order RK method uses
k1 = h f1(x0, y0, z0)                        l1 = h f2(x0, y0, z0)
k2 = h f1(x0 + h/2, y0 + k1/2, z0 + l1/2)    l2 = h f2(x0 + h/2, y0 + k1/2, z0 + l1/2)
k3 = h f1(x0 + h/2, y0 + k2/2, z0 + l2/2)    l3 = h f2(x0 + h/2, y0 + k2/2, z0 + l2/2)
k4 = h f1(x0 + h, y0 + k3, z0 + l3)          l4 = h f2(x0 + h, y0 + k3, z0 + l3)
k = (k1 + 2k2 + 2k3 + k4)/6
l = (l1 + 2l2 + 2l3 + l4)/6
y1 = y0 + k,  z1 = z0 + l
Example: Solve dy/dx = yz + x, dz/dx = xz + y with y(0) = 1, z(0) = −1 for y(0.1) and
z(0.1), taking h = 0.1.
Solution:
Here, f1(x, y, z) = yz + x ; f2(x, y, z) = xz + y, with x0 = 0, y0 = 1, z0 = −1, h = 0.1.
k1 = 0.1 f1(0, 1, −1) = 0.1(1 × (−1) + 0) = −0.1
l1 = 0.1 f2(0, 1, −1) = 0.1(0 × (−1) + 1) = 0.1
k2 = 0.1 f1(0.05, 0.95, −0.95) = 0.1(0.95 × (−0.95) + 0.05) = −0.08525
l2 = 0.1 f2(0.05, 0.95, −0.95) = 0.1(0.05 × (−0.95) + 0.95) = 0.09025
k3 = 0.1 f1(0.05, 0.95738, −0.95488) = 0.1(0.95738 × (−0.95488) + 0.05) ≈ −0.08642
l3 = 0.1 f2(0.05, 0.95738, −0.95488) = 0.1(0.05 × (−0.95488) + 0.95738) ≈ 0.09095
k4 = 0.1 f1(x0 + h, y0 + k3, z0 + l3) = 0.1 f1(0.1, 0.91358, −0.90905) ≈ −0.07303
l4 = 0.1 f2(0.1, 0.91358, −0.90905) ≈ 0.08224
Then
k = (k1 + 2k2 + 2k3 + k4)/6 ≈ −0.0861
l = (l1 + 2l2 + 2l3 + l4)/6 ≈ 0.0908
y(0.1) = y0 + k ≈ 0.9139,  z(0.1) = z0 + l ≈ −0.9092
Higher order differential equations
A higher order initial value problem
d^m y/dx^m = f(x, y, dy/dx, d²y/dx², …) ……..a
with m initial conditions y(x0) = a1, y′(x0) = a2, …, y^(m−1)(x0) = am
can be replaced by a system of first order equations as follows. Let us denote
y = y1, dy/dx = y2, d²y/dx² = y3, …
Then
dy1/dx = y2,            y1(x0) = y1,0 = a1
dy2/dx = y3,            y2(x0) = y2,0 = a2
…
dy(m−1)/dx = ym,        y(m−1)(x0) = y(m−1),0 = a(m−1)
dym/dx = f(x, y1, y2, …, ym),   ym(x0) = ym,0 = am
This system is similar to a system of first order equations with
fi = yi+1, i = 1, 2, …, m − 1
fm = f(x, y1, y2, …, ym)
In particular, for a second order equation we let dy/dx = z, so that d²y/dx² = dz/dx.
Example : Solve 𝑦 " = 𝑥𝑦 ′ − 𝑦, 𝑦(0) = 3, 𝑦 ′ (0) = 0 to approximate y(0.1).
Given:
𝑦 ′ = 𝑧 = 𝑓1 (𝑥, 𝑦, 𝑧)
𝑦 " = 𝑧 ′ = 𝑥𝑦 ′ − 𝑦 = 𝑥𝑧 − 𝑦 = 𝑓2 (𝑥, 𝑦, 𝑧)
Now, with x0 = 0, y0 = 3, z0 = 0 and h = 0.1:
k1 = 0.1 f1(0, 3, 0) = 0.1 × 0 = 0
l1 = 0.1 f2(0, 3, 0) = 0.1(0 × 0 − 3) = −0.3
k2 = 0.1 f1(0.05, 3, −0.15) = 0.1 × (−0.15) = −0.015
l2 = 0.1 f2(0.05, 3, −0.15) = 0.1(0.05 × (−0.15) − 3) ≈ −0.3008
k3 = 0.1 f1(0.05, 2.9925, −0.1504) = 0.1 × (−0.1504) ≈ −0.0150
l3 = 0.1 f2(0.05, 2.9925, −0.1504) = 0.1(0.05 × (−0.1504) − 2.9925) ≈ −0.3000
k4 = 0.1 f1(0.1, 2.9850, −0.3000) = 0.1 × (−0.3000) = −0.0300
l4 = 0.1 f2(0.1, 2.9850, −0.3000) = 0.1(0.1 × (−0.3000) − 2.9850) ≈ −0.3015
k = (k1 + 2k2 + 2k3 + k4)/6 = (0 − 0.030 − 0.030 − 0.030)/6 ≈ −0.0150
l = (l1 + 2l2 + 2l3 + l4)/6 = (−0.3 − 0.6016 − 0.6000 − 0.3015)/6 ≈ −0.3005
Hence y(0.1) = y0 + k ≈ 2.985 and y′(0.1) = z(0.1) = z0 + l ≈ −0.3005.
Picard's method of successive approximation
Integrating y′ = f(x, y) from x0 to x,
y − y0 = ∫[x0 to x] f(x, y) dx
y = y0 + ∫[x0 to x] f(x, y) dx
y(x) = y(x0) + ∫[x0 to x] f(x, y) dx
Since y appears under the integral sign on the right, the integration cannot be
performed directly. The dependent variable is therefore replaced by either a constant
or a function of x: since we know the initial value of y at x = x0, we use it as a
first approximation to the solution, and the result is used on the right hand side to
obtain the next approximation.
y1 = y0 + ∫[x0 to x] f(x, y0) dx
…
yn = y0 + ∫[x0 to x] f(x, y(n−1)) dx
The process is to be stopped when two values of y, are same to desired degree of
accuracy
Example: Solve dy/dx = 1 + xy with y(0) = 1 by Picard's method and estimate y(0.1).
Given
dy/dx = 1 + xy ; y(0) = 1
f(x, y) = 1 + xy, y0 = 1, x0 = 0
First approximation
y1 = y0 + ∫[0 to x] f(x, y0) dx = 1 + ∫[0 to x] (1 + x) dx = 1 + x + x²/2
Second approximation
y2 = y0 + ∫[0 to x] f(x, y1) dx = 1 + ∫[0 to x] (1 + x(1 + x + x²/2)) dx
   = 1 + x + x²/2 + x³/3 + x⁴/8
Third approximation
y3 = 1 + ∫[0 to x] (1 + x y2) dx = 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15 + x⁶/48
Fourth approximation
y4 = 1 + ∫[0 to x] (1 + x y3) dx
   = 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15 + x⁶/48 + x⁷/105 + x⁸/384
Now at x=0.1
Y1=1.1050
Y2=1.1053
Y3=1.1053
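The successive approximations can also be generated symbolically. Below is a small sketch using the sympy library (assuming it is available; the dummy variable t is introduced only to carry out the integration):

```python
import sympy as sp

x, t = sp.symbols('x t')
y = sp.Integer(1)                      # y0 = 1 is the starting approximation
for n in range(4):
    # y_{n+1}(x) = y0 + integral from 0 to x of f(t, y_n(t)) dt, with f = 1 + t*y
    y = 1 + sp.integrate(1 + t * y.subs(x, t), (t, 0, x))
    print(sp.expand(y))

print(y.subs(x, 0.1))                  # about 1.1053 at x = 0.1
```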
Shooting method
This method is called the shooting method because it resembles an artillery problem.
The given boundary value problem is first converted into an equivalent initial value
problem and then solved using any of the methods discussed in the previous sections.
Writing y″ = f(x, y, y′), y(a) = A, y(b) = B as the pair
y′ = z, y(a) = A
z′ = f(x, y, z), z(a) = M (a guessed slope) ……..a
equation a can be solved for y and z, using any one step method with steps of h, until
the solution at x = b is reached. Let the estimated value of y at x = b be B1. If
B1 = B we have obtained the required solution; in practice it is very unlikely that
our initial guess z(a) = M1 is correct.
If B1 ≠ B we obtain the solution with another guess, say z(a) = M2, and let the new
estimate of y at x = b be B2. If B2 is not equal to B, the process is continued till
we obtain the correct estimate of y(b). However, the procedure can be accelerated by
using an improved guess for z(a) after the estimates B1 and B2 are obtained.
Let us assume that z(a) = M3 leads to the value y(b) = B. If we assume that the values
of M and B are linearly related, then
(M3 − M2)/(B − B2) = (M2 − M1)/(B2 − B1)
M3 = M2 + ((B − B2)/(B2 − B1)) (M2 − M1)
   = M2 − ((B2 − B)/(B2 − B1)) (M2 − M1)
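A Python sketch of the whole shooting procedure for the example solved below, using Heun's method for the integration (function and variable names are illustrative):

```python
def shoot(M, f2, a, b, ya, h):
    """Integrate y'' = f2(x, y, z) from x=a to x=b with y(a)=ya, y'(a)=M
    using Heun's method; return the y value reached at x=b."""
    x, y, z = a, ya, M
    while x < b - 1e-12:
        m1y, m1z = z, f2(x, y, z)                               # slopes at the start
        m2y = z + m1z * h                                       # slope of y at predicted end
        m2z = f2(x + h, y + m1y * h, z + m1z * h)               # slope of z at predicted end
        y += 0.5 * (m1y + m2y) * h
        z += 0.5 * (m1z + m2z) * h
        x += h
    return y

f2 = lambda x, y, z: 6 * x             # the example below: y'' = 6x
a, b, ya, B, h = 1.0, 2.0, 2.0, 9.0, 0.5
M1, M2 = 2.0, 4.0                      # two trial slopes
B1, B2 = shoot(M1, f2, a, b, ya, h), shoot(M2, f2, a, b, ya, h)
M3 = M2 - (B2 - B) / (B2 - B1) * (M2 - M1)   # linear interpolation for the next guess
print(B1, B2, shoot(M3, f2, a, b, ya, h))    # 7.75, 9.75, 9.0
```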
Example: Solve the boundary value problem y″ = 6x with y(1) = 2 and y(2) = 9 by the
shooting method, taking h = 0.5.
Solution
By transformation,
dy/dx = z = f1(x, y, z), y(1) = 2
dz/dx = 6x = f2(x, y, z)
Let us assume z(1) = y′(1) = 2 (= M1, say). Applying Heun's method with h = 0.5:
Iteration 1:
m1(1) = f1(1, 2, 2) = z0 = 2
m1(2) = f2(1, 2, 2) = 6 × 1 = 6
m2(1) = f1(x0 + h, y0 + m1(1)h, z0 + m1(2)h) = f1(1.5, 3, 5) = 5
m2(2) = f2(1.5, 3, 5) = 6 × 1.5 = 9
m(1) = (m1(1) + m2(1))/2 = (2 + 5)/2 = 3.5
m(2) = (m1(2) + m2(2))/2 = (6 + 9)/2 = 7.5
y(1.5) = y(1) + m(1)h = 2 + 3.5 × 0.5 = 3.75
z(1.5) = z(1) + m(2)h = 2 + 7.5 × 0.5 = 5.75
Iteration 2:
m1(1) = f1(x1, y1, z1) = z1 = 5.75
m1(2) = f2(1.5, 3.75, 5.75) = 6 × 1.5 = 9
m2(1) = f1(2, 3.75 + 5.75×0.5, 5.75 + 9×0.5) = f1(2, 6.625, 10.25) = 10.25
m2(2) = f2(2, 6.625, 10.25) = 6 × 2 = 12
m(1) = (5.75 + 10.25)/2 = 8
m(2) = (9 + 12)/2 = 10.5
y(x2) = y(2) = y(1.5) + m(1)h = 3.75 + 8 × 0.5 = 7.75, so B1 = 7.75
Now let us assume z(1) = y′(1) = 4 (= M2) and again estimate y(2).
Iteration 1:
yi+1 = yi + ((m1 + m2)/2) h
m1(1) = f1(x0, y0, z0) = f1(1, 2, 4) = z0 = 4
m1(2) = f2(1, 2, 4) = 6
m2(1) = f1(1.5, 2 + 4×0.5, 4 + 6×0.5) = f1(1.5, 4, 7) = 7
m2(2) = f2(1.5, 4, 7) = 9
m(1) = (4 + 7)/2 = 5.5
m(2) = (6 + 9)/2 = 7.5
y(1.5) = 2 + 5.5 × 0.5 = 4.75
z(1.5) = 4 + 7.5 × 0.5 = 7.75
Iteration 2:
m1(1) = f1(x1, y1, z1) = z1 = 7.75
m1(2) = f2(1.5, 4.75, 7.75) = 9
m2(1) = f1(2, 4.75 + 7.75×0.5, 7.75 + 9×0.5) = f1(2, 8.625, 12.25) = 12.25
m2(2) = f2(2, 8.625, 12.25) = 12
m(1) = (7.75 + 12.25)/2 = 10
m(2) = (9 + 12)/2 = 10.5
y(x2) = y(2) = y(1.5) + m(1)h = 4.75 + 10 × 0.5 = 9.75, so B2 = 9.75
This is greater than B = 9, so we take a third estimate z(1) = M3 using the relationship
M3 = M2 − ((B2 − B)/(B2 − B1)) (M2 − M1)
   = 4 − ((9.75 − 9)/(9.75 − 7.75)) (4 − 2) = 3.25
Iteration 1 (with z(1) = 3.25):
m1(1) = f1(x0, y0, z0) = f1(1, 2, 3.25) = 3.25
m1(2) = f2(1, 2, 3.25) = 6
m2(1) = f1(1.5, 2 + 3.25×0.5, 3.25 + 6×0.5) = f1(1.5, 3.625, 6.25) = 6.25
m2(2) = f2(1.5, 3.625, 6.25) = 9
m(1) = (3.25 + 6.25)/2 = 4.75
m(2) = (6 + 9)/2 = 7.5
y(1.5) = 2 + 4.75 × 0.5 = 4.375
z(x1) = z(1.5) = z(1) + m(2)h = 3.25 + 7.5 × 0.5 = 7
Iteration 2:
m1(1) = f1(x1, y1, z1) = z1 = 7
m1(2) = f2(1.5, 4.375, 7) = 9
m2(1) = f1(2, 4.375 + 7×0.5, 7 + 9×0.5) = f1(2, 7.875, 11.5) = 11.5
m2(2) = f2(2, 7.875, 11.5) = 12
m(1) = (7 + 11.5)/2 = 9.25
m(2) = (9 + 12)/2 = 10.5
y(x2) = y(2) = y(1.5) + m(1)h = 4.375 + 9.25 × 0.5 = 9
which equals the required boundary value B = 9, so y′(1) = 3.25 gives the solution.
Practice:
1. Solve dy/dx = 1 − y, y(0) = 0 in the range 0 ≤ x ≤ 0.3, using
   a. Euler's method   b. Heun's method
2. Solve dy/dx = y − 2x/y, y(0) = 1 in the range 0 ≤ x ≤ 0.2, using
   a. Euler's method   b. Heun's method
3. Using the Runge Kutta method of fourth order, solve for y(0.1), y(0.2) and y(0.3)
   given that y′ = xy + y², y(0) = 1.
4. Solve the following equation by Picard's method: y′(x) = x² + y², y(0) = 0;
   estimate y(0.1) and y(0.2).
5. Applying the shooting method, solve the boundary value problem
   y″ = y(x), y(0) = 0 and y(1) = 1, i.e. find y′(0).
Chapter 6: Solution of partial differential equations
Many physical phenomena in applied science and engineering, when formulated into
mathematical models, fall into a category of systems known as partial differential
equations. A partial differential equation is a differential equation involving more
than one independent variable. A general second order PDE in two variables can be
written as
a ∂²f/∂x² + b ∂²f/∂x∂y + c ∂²f/∂y² = F(x, y, ∂f/∂x, ∂f/∂y) ……….1
Depending on the sign of b² − 4ac, equation 1 is classified as elliptic (b² − 4ac < 0),
parabolic (b² − 4ac = 0) or hyperbolic (b² − 4ac > 0).
Figure: two-dimensional finite difference grid
𝑥𝑖+1 = 𝑥𝑖 + ℎ
𝑦𝑖+1 = 𝑦𝑖 + ℎ
We have already discussed that if the function f(x) has a continuous fourth
derivative, then it’s first and second derivatives are given by the following central
difference approximation.
f′(xi) = [f(xi + h) − f(xi − h)] / 2h
fi′ = (fi+1 − fi−1) / 2h ……….2
f″(xi) = [f(xi + h) − 2f(xi) + f(xi − h)] / h²
fi″ = (fi+1 − 2fi + fi−1) / h² ……….3
The subscripts on f indicate the x value at which the function is evaluated. When f is
a function of two variables x and y, the partial derivatives of f with respect to x or
y are the ordinary derivatives of f with respect to x or y when y or x does not change.
We can use equations 2 and 3 in the x direction to determine derivatives with respect
to x and in the y direction to determine derivatives with respect to y; thus we have
∂f(xi, yj)/∂x = fx(xi, yj) = [f(xi+1, yj) − f(xi−1, yj)] / 2h
∂f(xi, yj)/∂y = fy(xi, yj) = [f(xi, yj+1) − f(xi, yj−1)] / 2k
Elliptical equations
Elliptical equations are governed by condition on the boundary of closed domain.
We consider here the two most commonly encountered elliptical equations.
a. Laplace’s equation
b. Poisson’s equation
Laplace’s equation
Any equation of the form ∇²f = 0 is called Laplace's equation, where
∇² = ∂²/∂x² + ∂²/∂y² ……….4
∴ ∂²f/∂x² + ∂²f/∂y² = ∇²f = 0 ……….5
This corresponds to a = 1, b = 0, c = 1 and F(x, y, ∂f/∂x, ∂f/∂y) = 0 in equation 1.
∇² is called the Laplacian operator, and the equation can also be written as
Uxx + Uyy = 0
Replacing the second order derivatives by their finite difference equivalents at the
point (xi, yj) we get
∇²fij = (fi+1,j − 2fij + fi−1,j)/h² + (fi,j+1 − 2fij + fi,j−1)/k² = 0 ……….6
If we assume for simplicity h = k, then we get
fi+1,j + fi−1,j + fi,j+1 + fi,j−1 − 4fij = 0 ……….7
This equation contains the four neighbouring points around the central point (xi, yj)
as shown in the figure, and is known as the five point difference formula for
Laplace's equation.
figure: Grid for Laplace equation
Written as a five point stencil,
                 [   1    ]
∇²fij = (1/h²) × [ 1 −4 1 ] fij = 0 ……….8
                 [   1    ]
From this equation we can show that the function value at the grid point (xi, yj) is
the average of the values at the four adjoining points, i.e.
fij = (fi+1,j + fi−1,j + fi,j+1 + fi,j−1) / 4 ……….9
Example: Consider a steel plate of size 15 cm × 15 cm. If two of the sides are held at
100°C and the other two sides are held at 0°C, what are the steady state temperatures
at the interior points, assuming a grid of size 5 cm × 5 cm?
Note: A problem with values known on each boundary is said to have Dirichlet boundary
conditions. This problem is illustrated below.
Labelling the four interior points f1, f2, f3, f4 (with the two sides at 100°C adjacent
to point 1), the system of equations from formula 7 is as follows:
Point 1: 100 + 100 + f2 + f3 − 4f1 = 0 ……..1
Point 2: f1 + 100 + f4 + 0 − 4f2 = 0 ……..2
Point 3: 100 + f1 + 0 + f4 − 4f3 = 0 ……..3
Point 4: f3 + f2 + 0 + 0 − 4f4 = 0 ……..4
Note: to solve this system use any of the iterative methods learned in chapter 4; the
result is f1 = 75, f2 = f3 = 50, f4 = 25.
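A Python sketch of solving such a plate problem by Gauss Seidel (Liebmann) iteration; the orientation of the hot and cold sides is an assumption consistent with the equations above, and the names are illustrative:

```python
top, bottom, left, right = 100.0, 0.0, 100.0, 0.0   # assumed placement of the 100/0 sides
n = 2                                               # 2 x 2 interior points for a 5 cm grid
f = [[0.0] * (n + 2) for _ in range(n + 2)]
for j in range(n + 2):
    f[0][j], f[n + 1][j] = top, bottom              # top and bottom boundary rows
for i in range(n + 2):
    f[i][0], f[i][n + 1] = left, right              # left and right boundary columns

for _ in range(50):                                 # Gauss-Seidel sweeps
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            # each interior value becomes the average of its four neighbours (eq. 9)
            f[i][j] = 0.25 * (f[i - 1][j] + f[i + 1][j] + f[i][j - 1] + f[i][j + 1])

print([f[i][1:n + 1] for i in range(1, n + 1)])     # about [[75, 50], [50, 25]]
```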
Poisson's equation
If we put a = 1, b = 0, c = 1 in equation 1 and take F(x, y, f, fx, fy) = g(x, y), then
a ∂²f/∂x² + b ∂²f/∂x∂y + c ∂²f/∂y² = F(x, y, ∂f/∂x, ∂f/∂y)
becomes
∂²f/∂x² + ∂²f/∂y² = g(x, y)
∇²f = g(x, y) ……..a
Equation a is called Poisson's equation. Using the notation gij = g(xi, yj), the
Laplace difference formula may be modified to solve equation a; the finite difference
formula for Poisson's equation then takes the form
fi+1,j + fi−1,j + fi,j+1 + fi,j−1 − 4fij = h² gij ……..b
Example: Solve the Poisson equation ∇²f = 2x²y² over the square 0 ≤ x ≤ 3, 0 ≤ y ≤ 3
with f = 0 on the boundary, taking h = 1 (so there are four interior points f1, f2,
f3, f4).
Applying equation b at each interior grid point (with h = 1):
Point 1: 0 + 0 + f3 + f2 − 4f1 = 1² g(1,2), i.e. f2 + f3 − 4f1 = 8 ……..1
Point 2: f1 + 0 + 0 + f4 − 4f2 = 1² g(2,2), i.e. f1 − 4f2 + f4 = 32 ……..2
Point 3: 0 + f1 + 0 + f4 − 4f3 = 1² g(1,1), i.e. f1 − 4f3 + f4 = 2 ……..3
Point 4: f3 + f2 + 0 + 0 − 4f4 = 1² g(2,1), i.e. f2 + f3 − 4f4 = 8 ……..4
On solving we get
f1 = −22/4, f2 = −43/4, f3 = −13/4, f4 = −22/4
These are the values at the interior points.
Note: we can use any problem-solving methods already discussed for solving the
values of fi
Parabolic equations
The elliptic equations studied previously describe problems that are time independent;
such problems are known as steady state problems. We also come across problems that
are not steady state, where the function depends on both space and time. Parabolic
equations, for which b² − 4ac = 0, describe such problems.
A popular case of the parabolic type is the study of heat flow in one dimension in an
insulated rod; such problems are governed by both boundary and initial conditions.
Let f represent the temperature at any point of the rod whose distance from the left
end is x. Heat flows from left to right under the influence of the temperature
gradient. The temperature f(x, t) at position x and time t is governed by the heat
equation
k1 ∂²f/∂x² = k2 k3 ∂f/∂t ……..a
where k1 is the coefficient of thermal conductivity, k2 is the specific heat and k3 is
the density of the material.
Equation a can be written as
k ∂²f/∂x² = ∂f/∂t ……..b
where k = k1/(k2 k3).
The initial condition will be the initial temperatures at all points along the rod .
𝑓(𝑥, 0) = 𝑓(𝑥) 0 ≤ 𝑥 ≤ 𝐿
The boundary conditions f(0,t) and f(L,t) describes the temperature at each end
of the rod as function of time, if they are held at constant then
𝑓 (0, 𝑡) = 𝑐1 0 ≤ 𝑡 ≤ ∞
𝑓(𝐿, 𝑡) = 𝑐2 0 ≤ 𝑡 ≤ ∞
We can solve the heat equation b using the finite difference formulas given below.
ft(x, t) = [f(x, t + τ) − f(x, t)] / τ = (1/τ)(fi,j+1 − fi,j) ……..c
fxx(x, t) = [f(x − h, t) − 2f(x, t) + f(x + h, t)] / h² = (1/h²)(fi−1,j − 2fi,j + fi+1,j) ……..d
Substituting c and d in equation b we obtain
(1/τ)(fi,j+1 − fi,j) = (k/h²)(fi−1,j − 2fij + fi+1,j) ……..e
Solving for fi,j+1,
fi,j+1 = (1 − 2τk/h²) fij + (τk/h²)(fi−1,j + fi+1,j)
       = (1 − 2γ) fij + γ (fi−1,j + fi+1,j) ……..f
where γ = τk/h².
Bender Schmidt method
The recurrence equation f allows us to evaluate f at each point x and at any time t.
If we choose the step sizes Δt and Δx such that
1 − 2γ = 1 − 2τk/h² = 0 ……..g
equation f simplifies to
fi,j+1 = (fi+1,j + fi−1,j) / 2 ……..h
Equation h is known as the Bender Schmidt recurrence equation. It determines the value
of f at x = xi at time t = tj + τ as the average of the values to the right and left
of xi at time tj. Choosing
τ = h²/(2k)
gives equation h; more generally, equation f is stable if and only if the step size τ
satisfies the condition
τ ≤ h²/(2k)
Example: Solve the parabolic equation 2 fxx(x, t) = ft(x, t), given the initial and
boundary conditions
f(x, 0) = 50(4 − x), 0 ≤ x ≤ 4
f(0, t) = 0
f(4, t) = 0, 0 ≤ t ≤ 1.5
taking h = 1.
Solution
Here k = 2, so τ ≤ h²/(2k) = 1²/(2 × 2) = 0.25.
Taking τ = 0.25, we have
fi,j+1 = (fi−1,j + fi+1,j) / 2
From the left boundary condition, f(0, t) = 0, i.e. f(0, j) = 0 for all values of j.
From the right boundary condition, f(4, t) = 0, i.e. f(4, j) = 0 for all values of j.
From the initial condition, f(x, 0) = 50(4 − x), so f(1,0) = 150, f(2,0) = 100 and
f(3,0) = 50.
Now, from the Bender Schmidt recursive formula,
fi,j+1 = (fi−1,j + fi+1,j) / 2 ………(A)
f1,1 = (f0,0 + f2,0)/2 = (0 + 100)/2 = 50
f2,1 = (f1,0 + f3,0)/2 = (150 + 50)/2 = 100
f3,1 = (f2,0 + f4,0)/2 = (100 + 0)/2 = 50
Again take j = 1 and i = 1, 2, 3 respectively to find f1,2, f2,2, f3,2, and similarly
for j = 2, 3, 4 and 5 find the corresponding values of i = 1, 2 and 3 for each j.
Using the formula we can generate f(x, t) successively; the estimates are recorded in
the following table. At each interior point the temperature is just the average of the
values at the adjacent points of the previous time level.
x        0     1     2     3     4
t
0.00     0   150   100    50    0
0.25     0    50   100    50    0
0.50     0    50    50    50    0
0.75     0    25    50    25    0
1.00     0    25    25    25    0
1.25     0   12.5   25   12.5   0
1.50     0   12.5  12.5  12.5   0
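A Python sketch of the Bender Schmidt recurrence that reproduces the table above (names are illustrative; the end values simply stay at the boundary value 0):

```python
def bender_schmidt(f0, steps):
    """Advance the rod temperatures one time level at a time; with the step sizes
    chosen so that (tau*k)/h^2 = 1/2, each new interior value is the average of its
    two neighbours at the previous level."""
    rows = [f0[:]]
    for _ in range(steps):
        prev = rows[-1]
        new = prev[:]                            # boundary entries are left unchanged
        for i in range(1, len(prev) - 1):
            new[i] = 0.5 * (prev[i - 1] + prev[i + 1])
        rows.append(new)
    return rows

initial = [0] + [50 * (4 - x) for x in range(1, 4)] + [0]    # f(x,0) = 50(4 - x), ends at 0
for t, row in zip([0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5], bender_schmidt(initial, 6)):
    print(t, row)
```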
Hyperbolic equations
Hyperbolic equations model the vibration of structures such as buildings, beams and
machines. Here we consider the case of a vibrating string that is fixed at both ends.
The lateral displacement f of the string varies with time t and distance x along the
string. The displacement f(x, t) is governed by the wave equation
T ∂²f/∂x² = ρ ∂²f/∂t²
where T is the tension in the string and ρ is its mass per unit length. The boundary
and initial conditions are
f(0, t) = 0, 0 ≤ t ≤ b
f(L, t) = 0, 0 ≤ t ≤ b
f(x, 0) = f(x), 0 ≤ x ≤ L
ft(x, 0) = g(x), 0 ≤ x ≤ L
Figure: Grid for solving the hyperbolic equation
Using the central difference approximations for fxx(x, t) and ftt(x, t) in the wave
equation gives the recurrence
fi,j+1 = −fi,j−1 + 2(1 − Tτ²/(ρh²)) fij + (Tτ²/(ρh²)) (fi+1,j + fi−1,j)
If we choose τ so that 1 − Tτ²/(ρh²) = 0, then we have
fi,j+1 = fi−1,j + fi+1,j − fi,j−1
That is, the value of f at x = xi and t = tj + τ is equal to the sum of the values of
f at the points x = xi − h and x = xi + h at the previous time t = tj, minus the value
of f at x = xi at time t = tj − τ. From the figure we can say fA = fB + fD − fC.
Starting values
We need two rows of starting values in order to compute the values of the third row.
The first row is obtained from the condition
f(x, 0) = f(x)
The second row is obtained using the second initial condition ft(x, 0) = g(x). We know
that
ft(x, 0) = (fi,1 − fi,−1) / 2τ = gi
so fi,−1 = fi,1 − 2τ gi (for t = 0 only). Substituting this into the recurrence at the
first time level gives the second row as
fi,1 = (fi−1,0 + fi+1,0)/2 + τ gi ……..e
Example: Solve the wave equation for a string of length 5, fixed at both ends, with
T/ρ = 4 and initial values
f(x, 0) = f(x) = x(5 − x)
ft(x, 0) = g(x) = 0
Solution
Let h = 1. Given T/ρ = 4 and assuming 1 − 4τ²/1² = 0, we get τ = 1/2.
The first row is f(x, 0) = x(5 − x); the second row follows from the starting formula
fi,1 = (fi−1,0 + fi+1,0)/2 (since g = 0); every later row follows from
fi,j+1 = fi−1,j + fi+1,j − fi,j−1. The estimated values are:
x      0    1     2    3    4    5
t
0.0    0    4     6    6    4    0
0.5    0    3*    5**  5    3    0
1.0    0    1     2    2    1    0
1.5    0   −1*** −2   −2   −1    0
2.0    0   −3    −5   −5   −3    0
2.5    0   −4    −6   −6   −4    0
* = (0 + 6)/2     ** = (4 + 6)/2     *** = 2 + 0 − 3