
Abstract

In this project, we present real-life applications of mathematics in different fields and from different
perspectives. In the first chapter, we introduce the topics that are developed in the subsequent
chapters. The Gaussian elimination method, the Jacobi iterative method, the Gauss-Seidel method,
the Successive Over-Relaxation (SOR) technique, and the LU decomposition method are used to solve
real-life applications of systems of linear equations. We then discuss Euler's method and the
Runge-Kutta methods for applications posed as initial value problems, and the Shooting method and
the Finite-Difference method for applications posed as boundary value problems. Finally, we discuss
the power method, which illustrates the use of the dominant eigenvalue and its corresponding
eigenvector in real life.
Contents
Chapter 01 ..................................................................................................................................... 5
Introduction ................................................................................................................................... 5
Chapter 02 ..................................................................................................................................... 7
Gaussian Elimination for Solving a System of Linear Equations ............................................ 7
2.1.Introduction: .......................................................................................................................... 7
2.2.Gaussian Elimination Method: ............................................................................................. 7
2.3.Forward Elimination of Unknowns: ..................................................................................... 7
2.4.Back Substitution: ................................................................................................................. 9
2.5.Technique for improving the Naïve Gaussian elimination method: ..................................... 9
2.6.Gaussian elimination with partial pivoting: .......................................................................... 9
2.7.Application of Gaussian elimination method: .................................................................... 10
2.7.1.Formation of the problem : .............................................................................................. 10
2.7.2.Solution of the problem: .................................................................................................. 10
2.7.3.MATLAB Code for Gaussian Elimination Method: ....................................................... 10
Chapter 03 ................................................................................................................................... 12
Jacobi Iterative Method for Solving a System of Linear Equations ...................................... 12
3.1.Introduction: ........................................................................................................................ 12
3.2.Derivation of Jacobi’s Method: .......................................................................................... 12
3.3.Application of Jacobi iterative method : ............................................................................. 14
3.3.1.Formation of the problem : .............................................................................................. 14
3.3.2.Solution of the problem: .................................................................................................. 14
3.3.3.MATLAB Code for Jacobi Iterative Method: ................................................................. 15
Chapter 04 ................................................................................................................................... 16
Gauss-Seidel Method for Solving a System of Linear Equations ........................................... 16
4.1.Introduction: ........................................................................................................................ 16
4.2.Derivation of Gauss-Seidel method: ................................................................................... 16
4.3.Application of Gauss-Seidel method : ................................................................ 18
4.3.1.Formation of the problem : .............................................................................................. 18
4.3.2.Solution of the problem: .................................................................................................. 18
4.3.3.MATLAB Code for Gauss-Seidel method: ...................................................... 19
Chapter 05 ................................................................................................................................... 20
Successive Over Relaxation (SOR) Technique for Solving a System of Linear Equations. 20

5.1.Introduction: ........................................................................................................................ 20
5.2.SOR Technique: .................................................................................................................. 20
5.3.Derivation of Successive Over Relaxation Technique: ..................................................... 20
5.4.Application of Successive Over Relaxation Technique: ................................................... 23
5.4.1.Formation of the problem : .............................................................................................. 23
5.4.2.Solution of the problem: .................................................................................................. 23
5.4.3. MATLAB Code for Successive Over Relaxation Technique: ....................................... 24
Chapter 06 ................................................................................................................................... 25
LU Decomposition Method for Solving a system of Linear Equations .................................. 25
6.1.Introduction: ........................................................................................................................ 25
6.2.Derivation of LU Decomposition Method : ........................................................................ 25
6.3.Application of LU Decomposition Method: ....................................................................... 26
6.3.1.Formation of the problem : .............................................................................................. 26
6.3.2.Solution of the problem: .................................................................................................. 26
6.3.3. MATLAB Code for LU Decomposition Method : ......................................................... 27
Comparison Table: ..................................................................................................................... 28
Comments on Observation :..................................................................................................... 28
Chapter 07 ................................................................................................................................... 29
Euler’s Method for Solving Ordinary Differential Equations................................................ 29
7.1.Introduction: ........................................................................................................................ 29
7.2.Definition of Euler’s method: ............................................................................................. 29
7.3.Derivation of Euler’s method: ............................................................................................ 29
Figure 7.3.1.: Graphical interpretation of the first step of Euler’s method. ............................. 30
Figure 7.3.2. :General graphical interpretation of Euler’s method. .......................................... 31
7.4.Error Analysis in Euler’s Method ....................................................................................... 31
7.5.Application of Euler’s Method : ......................................................................................... 32
7.5.1.Solution of the problem: .................................................................................................. 32
7.5.2.Temperature vs Time Graph: ........................................................................................... 32
7.5.3.MATLAB code for Euler’s Method : ............................................................................. 33
Chapter 08 ................................................................................................................................... 34
Runge-Kutta 2nd Order Method for Ordinary Differential Equations ................................ 34
8.1.Introduction: ........................................................................................................................ 34
8.2.Definition of Runge-Kutta 2nd order method:.................................................................... 34
8.3.Derivation of Runge-Kutta 2nd order method: .................................................................... 34

8.4.Error Analysis in the Runge-Kutta Method of Order 2: ..................................................... 35
8.5.1.Heun’s Method: ............................................................................................................... 35
Figure 8.5.1.1.: Runge-Kutta 2nd order method (Heun’s method). ........................................ 36
8.5.2.Midpoint Method: ............................................................................................................ 36
8.5.3.Ralston’s Method: ............................................................................................................ 37
8.6.Application of Runge-Kutta 2nd order method: ................................................................. 38
8.6.1.Solution of the problem: .................................................................................................. 38
8.6.2. Temperature vs Time Graph: .......................................................................................... 38
8.6.3.MATLAB Code for Runge-Kutta 2nd order method: ..................................................... 39
Chapter 09 ................................................................................................................................... 40
Runge-Kutta 4th Order Method for Ordinary Differential Equations ................................ 40
9.1.Introduction: ........................................................................................................................ 40
9.2.Definition of Runge-Kutta 4th order method: .................................................................... 40
9.3.Derivation of Runge-Kutta 4th order method: .................................................................... 40
9.4.Error Analysis in the Runge-Kutta Method of Order 4: ..................................................... 41
9.5.Application of Runge-Kutta 4th order method : ................................................................. 43
9.5.1.Solution of the problem: .................................................................................................. 43
9.5.2.Temperature vs Time Graph: ........................................................................................... 43
9.5.3.MATLAB Code of Runge-Kutta 4th order method:........................................................ 44
Comparison Table : .................................................................................................................... 45
Comments on Observation :...................................................................................................... 45
Chapter 10 ................................................................................................................................... 46
Shooting Method ......................................................................................................................... 46
10.1.Introduction: ...................................................................................................................... 46
10.2.Theorem (Boundary Value Problem): .............................................................................. 48
10.3.Corollary (linear Boundary Value Problem)..................................................................... 48
10.4.Linear Shooting Method ................................................................................................... 48
10.4.1.Simple Boundary Conditions: ........................................................................................ 49
10.4.2.General Boundary Condition at x = b ............................................................................ 49
10.4.3.General Separated Boundary Conditions ....................................................... 50
10.5.Application of Linear Shooting method for Linear ODE: ................................................ 50
10.5.1.Solution of the problem: ................................................................................................ 52
10.5.2.Temperature vs Radius Graph ....................................................................................... 52
10.5.3.MATLAB Code for Linear Shooting method for Linear ODE: .................................... 53

Chapter 11 ................................................................................................................................... 55
Finite-Difference Methods for Linear Problems ...................................................................... 55
11.1.Introduction: ...................................................................................................................... 55
11.2.Discrete Approximation:................................................................... 55
11.3.Application of Finite Difference method for Linear ODE: .............................................. 57
11.3.1.Solution of the problem: ................................................................................................ 58
11.3.2.Temperature vs Radius Graph: ...................................................................................... 58
11.3.3.MATLAB Code for Finite Difference method for Linear ODE: ................................... 59
Comparison Table : .................................................................................................................... 61
Comment on observation: ......................................................................................................... 61
Chapter 12 ................................................................................................................................... 62
The Power Method ...................................................................................................................... 62
12.1.Introduction: ...................................................................................................................... 62
12.2.Derivation of Power Method: ........................................................................................... 62
12.3.Application of Power Method:.......................................................................................... 64
12.3.1.Solution of the Problem: ................................................................................................ 64
12.3.2.MATLAB Code for Power Method: .............................................................................. 65
Conclusion ................................................................................................................................... 66
References .................................................................................................................................... 67

Chapter 01

Introduction

Mathematics is applied in every sector of modern science and technology. In physics and
engineering, and even in the biological sciences, we find applications of mathematics on a large scale.

In chapters 2 to 6, we use five different methods, namely the Gaussian elimination method, the Jacobi
iterative method, the Gauss-Seidel method, the Successive Over-Relaxation (SOR) technique, and the LU
decomposition method, to solve an application of a system of linear equations. In the table after
chapter 6 we compare the accuracy of these methods to determine which one serves us best.

There are several ways to solve an initial value problem. For example, some researchers use Euler's
method, others use the second-order Runge-Kutta method, and still others use the fourth-order
Runge-Kutta method. For any of these methods it is important that the reader understands the initial
conditions and how the system behaves.

These methods solve a differential equation with initial values by approximating points on the
solution graph over a specific interval. A limitation of these numerical techniques is that the
farther the approximation moves from the initial value point, the farther it drifts from the actual
value. Euler's method has a larger error than the Runge-Kutta methods, but it is useful as a stepping
stone to more sophisticated techniques.

In chapters 7 to 9, numerical methods for ordinary differential equations, namely Euler's method,
the second-order Runge-Kutta method, and the fourth-order Runge-Kutta method, are discussed, and the
error analysis of each method is also covered. The same initial value problem is solved as an
application of each method, and the exact solution, approximate solution, and error terms are found.
In the table after chapter 9, the values of the IVP obtained by Euler's method, the second-order
Runge-Kutta method, and the fourth-order Runge-Kutta method are compared.

To solve differential equations with boundary conditions we use two methods, the Shooting method
and the Finite-Difference method. In chapters 10 and 11, we discuss the Shooting method and the
Finite-Difference method for solving boundary value problems numerically, and we also examine the
errors of these two methods. The same boundary value problem is taken as an application of each
method, and the exact solution, approximate solution, and error terms are found here as well. In the
table after chapter 11, the values of the BVP obtained with the Shooting method and the
Finite-Difference method are compared.

In the last chapter we discuss the power method and its implementation in real life.

Computer programs in MATLAB are developed to implement the numerical schemes, and experiments are
performed to verify their qualitative behavior for various parameters.

Chapter 02

Gaussian Elimination for Solving a System of Linear Equations

2.1.Introduction:

One of the most popular techniques for solving simultaneous linear equations is the Gaussian
elimination method. The approach is designed to solve a general set of n equations and n
unknowns
a11 x1  a12 x2  a13 x3  ...  a1n xn  b1
a21x1  a22 x2  a23 x3  ...  a2 n xn  b2
. . . . .
. . . . .
. . . . .
an1 x1  an 2 x2  an3 x3  ...  ann xn  bn

2.2.Gaussian Elimination Method:

Gaussian elimination consists of two steps


1. Forward Elimination of Unknowns: In this step, unknowns are eliminated from the equations
below each pivot equation, starting with the first. This reduces the system to a triangular
form in which the last equation contains only one unknown.
2. Back Substitution: In this step, starting from the last equation, each of the unknowns is
found.

2.3.Forward Elimination of Unknowns:

In the first step of forward elimination, the first unknown, x1, is eliminated from all rows below
the first row. The first equation is selected as the pivot equation to eliminate x1. So, to
eliminate x1 in the second equation, one divides the first equation by a11 (hence called the
pivot element) and then multiplies it by a21. This is the same as multiplying the first equation
by a21/a11 to give

a21 x1 + (a21/a11) a12 x2 + ... + (a21/a11) a1n xn = (a21/a11) b1

Now, this equation can be subtracted from the second equation to give

(a22 − (a21/a11) a12) x2 + ... + (a2n − (a21/a11) a1n) xn = b2 − (a21/a11) b1

or

a′22 x2 + ... + a′2n xn = b′2

where

a′22 = a22 − (a21/a11) a12
  ⋮
a′2n = a2n − (a21/a11) a1n

and b′2 = b2 − (a21/a11) b1.

This procedure of eliminating x1 is now repeated for the third equation through the nth equation
to reduce the set of equations to

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
        a′22 x2 + a′23 x3 + ... + a′2n xn = b′2
        a′32 x2 + a′33 x3 + ... + a′3n xn = b′3
          ⋮
        a′n2 x2 + a′n3 x3 + ... + a′nn xn = b′n

This is the end of the first step of forward elimination. For the second step of forward
elimination, we start with the second equation as the pivot equation and a′22 as the pivot
element. So, to eliminate x2 in the third equation, one divides the second equation by a′22 (the
pivot element) and then multiplies it by a′32. This is the same as multiplying the second equation
by a′32/a′22 and subtracting the result from the third equation, which makes the coefficient of x2
zero in the third equation. The same procedure is now repeated for the fourth equation through the
nth equation to give

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
        a′22 x2 + a′23 x3 + ... + a′2n xn = b′2
                 a″33 x3 + ... + a″3n xn = b″3
                   ⋮
                 a″n3 x3 + ... + a″nn xn = b″n

The next steps of forward elimination are conducted by using the third equation as the pivot
equation, and so on. That is, there are a total of n − 1 steps of forward elimination. At the
end of the n − 1 steps, we get a set of equations of the form

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
        a′22 x2 + a′23 x3 + ... + a′2n xn = b′2
                 a″33 x3 + ... + a″3n xn = b″3
                   ⋮
                 ann^(n−1) xn = bn^(n−1)

where the superscript (n − 1) denotes the coefficients obtained after n − 1 steps of elimination.

2.4.Back Substitution:

Now the equations are solved starting from the last equation, as it has only one unknown:

xn = bn^(n−1) / ann^(n−1)

Then the second-to-last equation, that is the (n − 1)th equation, has two unknowns, xn and x(n−1),
but xn is already known. This reduces the (n − 1)th equation to one unknown as well. Back
substitution can hence be represented for all equations by the formulas

xn = bn^(n−1) / ann^(n−1)

and

xi = ( bi^(i−1) − Σ (j = i+1 to n) aij^(i−1) xj ) / aii^(i−1),   for i = n−1, n−2, ..., 1

2.5.Technique for improving the Naïve Gaussian elimination method:

Round-off errors are larger when, say, five significant digits are carried as opposed to six. One
way of decreasing the round-off error is to use more significant digits, that is, to use double or
quad precision for representing the numbers. However, this would not avoid possible division-by-zero
errors in the Naïve Gauss elimination method. To avoid division by zero as well as to reduce (not
eliminate) round-off error, Gaussian elimination with partial pivoting is the method of choice.

2.6.Gaussian elimination with partial pivoting:

The two methods are the same, except that at the beginning of each step of forward elimination a
row switch is done based on the following criterion. If there are n equations, then there are
n − 1 forward elimination steps. At the beginning of the kth step of forward elimination, one
finds the maximum of

|akk|, |a(k+1),k|, ..., |ank|

Then, if the maximum of these values is |apk| in the pth row, k ≤ p ≤ n, rows p and k are switched.
The other steps of forward elimination are the same as in the Naïve Gauss elimination method, and
the back substitution steps stay exactly the same.

2.7.Application of Gaussian elimination method:

An application of Kirchhoff’s Rules to the circuit shown results in the following system of
equations:

I3 = I1 + I2
8 = 4 I3 + 6 I2
8 I1 = 4 + 6 I2

Find the currents I1, I2, I3 using the Gaussian elimination method.

2.7.1.Formation of the problem :

The system reduces to :


[1  1 −1] [I1]   [0]
[0  6  4] [I2] = [8]
[8 −6  0] [I3]   [4]

2.7.2.Solution of the problem:

I1 = 0.8462
I2 = 0.4615
I3 = 1.3077
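These currents can be cross-checked in exact rational arithmetic. The short Python script below (illustrative only, and independent of the MATLAB program in 2.7.3) carries out the same elimination with Python's fractions module and reproduces the values above as 11/13, 6/13, and 17/13:

```python
# Cross-check of the circuit currents using exact fractions, so no round-off
# enters the comparison with the quoted 4-decimal values.
from fractions import Fraction as F

# Augmented system: I1 + I2 - I3 = 0, 6*I2 + 4*I3 = 8, 8*I1 - 6*I2 = 4
A = [[F(1), F(1), F(-1)], [F(0), F(6), F(4)], [F(8), F(-6), F(0)]]
b = [F(0), F(8), F(4)]

n = 3
for k in range(n - 1):
    # Partial pivoting: bring the largest pivot candidate to row k
    p = max(range(k, n), key=lambda r: abs(A[r][k]))
    A[k], A[p] = A[p], A[k]
    b[k], b[p] = b[p], b[k]
    for i in range(k + 1, n):
        m = A[i][k] / A[k][k]
        A[i] = [A[i][j] - m * A[k][j] for j in range(n)]
        b[i] = b[i] - m * b[k]

I = [F(0)] * n
for i in range(n - 1, -1, -1):
    s = sum(A[i][j] * I[j] for j in range(i + 1, n))
    I[i] = (b[i] - s) / A[i][i]

print([float(x) for x in I])   # I1, I2, I3
```

The exact solution is I1 = 11/13 ≈ 0.8462, I2 = 6/13 ≈ 0.4615, I3 = 17/13 ≈ 1.3077, matching the values quoted above.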

2.7.3.MATLAB Code for Gaussian Elimination Method:

A= [1 1 -1 0
0 6 4 8
8 -6 0 4];
[n,m]=size(A);

for i=1:n-1
for r=i+1:n
if(abs(A(i,i))<abs(A(r,i)))

for c=1:n+1
t(c)=A(i,c);
A(i,c)=A(r,c);
A(r,c)=t(c);
end
end
end
end

for r=1:n+1
b(1,r)=A(1,r);
end
for i=1:n-1
for r=i+1:n
for c=1:n+1
b(r,c)=A(r,c)-A(r,i)*A(i,c)/A(i,i);
end
end
for r=1:n
for c=1:n+1
A(r,c)=b(r,c);
end
end
end

x(n)=b(n,n+1)/b(n,n);

for r=n-1:-1:1
s=0;
for c=r+1:n
s=s+b(r,c)*x(c);
end
x(r)=(b(r,n+1)-s)/b(r,r);
end
fprintf(2,'Solution is=\n %8.4f\n %8.4f\n %8.4f\n',x')

disp('Exact roots are')


syms x y z
s=solve(1*x+ 1*y-1*z== 0, 0*x+ 6*y+ 4*z== 8,8*x -6*y+ 0*z== 4);
s1=vpa(s.x,5)
s2=vpa(s.y,5)
s3=vpa(s.z,5)

Chapter 03

Jacobi Iterative Method for Solving a System of Linear Equations

3.1.Introduction:

Iterative techniques are seldom used for solving linear systems of small dimension, since the
time required for sufficient accuracy exceeds that required for direct techniques such as
Gaussian elimination. For large systems with a high percentage of zero entries, however, these
techniques are efficient in terms of both computer storage and computation. Systems of this type
arise frequently in circuit analysis and in the numerical solution of boundary value problems
and partial differential equations.

An iterative technique for solving the n × n linear system Ax = b starts with an initial
approximation x^(0) to the solution x and generates a sequence of vectors {x^(k)}, k = 0, 1, 2, ...,
that converges to x.

3.2.Derivation of Jacobi’s Method:

The Jacobi iterative method is obtained by solving the ith equation of Ax = b for xi, to obtain
(provided aii ≠ 0)

xi = Σ (j = 1 to n, j ≠ i) (− aij xj / aii) + bi/aii,   for i = 1, 2, ..., n

For each k ≥ 1, the components xi^(k) of x^(k) are generated from the components of x^(k−1) by

xi^(k) = (1/aii) [ Σ (j = 1 to n, j ≠ i) (− aij xj^(k−1)) + bi ],   for i = 1, 2, ..., n

In general, iterative techniques for solving linear systems involve a process that converts the
system Ax = b into an equivalent system of the form x = Tx + c for some fixed matrix T and
vector c. After the initial vector x^(0) is selected, the sequence of approximate solution
vectors is generated by computing

x^(k) = Tx^(k−1) + c,   for each k = 1, 2, 3, ...

This should be reminiscent of fixed-point iteration.
The Jacobi method can be written in the form x^(k) = Tx^(k−1) + c by splitting A into its
diagonal and off-diagonal parts. To see this, let D be the diagonal matrix whose diagonal entries
are those of A, let −L be the strictly lower-triangular part of A, and let −U be the strictly
upper-triangular part of A. With this notation,

A = [ a11 a12 ⋯ a1n
      a21 a22 ⋯ a2n
       ⋮   ⋮       ⋮
      an1 an2 ⋯ ann ]

is split into

A = [ a11  0  ⋯  0  ]   [  0    ⋯     ⋯     0 ]   [ 0 −a12  ⋯    −a1n     ]
    [  0  a22  ⋱  ⋮  ] − [ −a21  ⋱           ⋮ ] − [ ⋮   ⋱    ⋱     ⋮      ]
    [  ⋮   ⋱   ⋱  0  ]   [  ⋮    ⋱     ⋱     ⋮ ]   [ ⋮         ⋱ −a(n−1),n ]
    [  0   ⋯   0 ann ]   [ −an1  ⋯ −a(n,n−1) 0 ]   [ 0   ⋯    ⋯     0      ]

that is,

A = D − L − U

The equation Ax = b, or (D − L − U)x = b, is then transformed into

Dx = (L + U)x + b

and, if D^(−1) exists, that is, if aii ≠ 0 for each i, then

x = D^(−1)(L + U)x + D^(−1) b

This results in the matrix form of the Jacobi iterative technique:

x^(k) = D^(−1)(L + U)x^(k−1) + D^(−1) b,   for k = 1, 2, 3, ...

Introducing the notation Tj = D^(−1)(L + U) and cj = D^(−1) b gives the Jacobi technique the form

x^(k) = Tj x^(k−1) + cj

Stopping criterion: iterate until

‖x^(k) − x^(k−1)‖ / ‖x^(k)‖ < tolerance
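Component-wise, the matrix iteration x^(k) = Tj x^(k−1) + cj reduces to the update formula derived at the start of this section. The following Python sketch is an illustrative translation with our own names, not the project's MATLAB program in 3.3.3; it assumes aii ≠ 0 and a splitting for which the iteration converges:

```python
# Illustrative sketch of the Jacobi iteration with the relative-change
# stopping criterion (max norm).
def jacobi(A, b, x0, tol=1e-3, max_iter=100):
    n = len(A)
    x = x0[:]
    for _ in range(max_iter):
        # Every component of the new iterate uses only the previous iterate x,
        # i.e. x_new = D^{-1}(L + U) x + D^{-1} b written entry by entry.
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # Stop when ||x^(k) - x^(k-1)|| / ||x^(k)|| < tol
        if max(abs(x_new[i] - x[i]) for i in range(n)) <= tol * max(abs(v) for v in x_new):
            return x_new
        x = x_new
    return x
```

For example, on the diagonally dominant system 2x1 + x2 = 3, x1 + 2x2 = 3 with initial guess (0, 0), the iterates converge to (1, 1).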

3.3.Application of Jacobi iterative method:

An application of Kirchhoff’s Rules to the circuit shown results in the following system of
equations:

I3 = I1 + I2
8 = 4 I3 + 6 I2
8 I1 = 4 + 6 I2

Find the currents I1, I2, I3 using the Jacobi iterative method. Use

[I1]   [1]
[I2] = [1]
[I3]   [1]
as the initial guess.

3.3.1.Formation of the problem :

The system reduces to :


[1  1 −1] [I1]   [0]
[0  6  4] [I2] = [8]
[8 −6  0] [I3]   [4]

3.3.2.Solution of the problem:

I1 = 0.8456
I2 = 0.4613
I3 = 1.3069

3.3.3.MATLAB Code for Jacobi Iterative Method:

A= [1 1 -1 0
0 6 4 8
8 -6 0 4];
x= [1
1
1];
n=rank(A);
for i=1:n-1
for r=i:n
if sum(abs(A(i,1:n)))>2*abs(A(i,i))
tmp= A(i,:);
A(i,:)=A(r,:);
A(r,:)=tmp;
end
end
end
b=A(:,n+1);
A=A(:,1:n);
A

D=diag(diag(A));
L=-tril(A,-1);
U=-triu(A,1);

for i=1:9999
x1=inv(D)*(L+U)*x+inv(D)*b;
if norm(x1-x,2)<10^(-3);
fprintf(2,'\nNo. of Iteration= %2i\n',i )
fprintf(2,'Solution is=\n %8.4f\n %8.4f\n %8.4f\n',x)
break
else
x=x1;
[i,x']
end
end

disp('Exact roots are')


syms x y z
s=solve(1*x+ 1*y-1*z== 0, 0*x+ 6*y+ 4*z== 8,8*x -6*y+ 0*z== 4);
s1=vpa(s.x,5)
s2=vpa(s.y,5)
s3=vpa(s.z,5)

Chapter 04

Gauss-Seidel Method for Solving a System of Linear Equations

4.1.Introduction:

In certain cases, such as when a system of equations is large, iterative methods of solving
equations are more advantageous. Elimination methods, such as Gaussian elimination, are prone to
large round-off errors for a large set of equations. Iterative methods, such as the Gauss-Seidel
method, give the user control over the round-off error. Also, if the physics of the problem is well
known, the initial guesses needed in iterative methods can be chosen more judiciously, leading to
faster convergence.

4.2.Derivation of Gauss-Seidel method:

Given a general set of n equations in n unknowns, we have

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = c1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = c2
  .
  .
  .
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = cn
If the diagonal elements are non-zero, each equation is rewritten for the corresponding unknown;
that is, the first equation is rewritten with x1 on the left-hand side, the second equation with
x2 on the left-hand side, and so on, as follows:

x1 = (c1 − a12 x2 − a13 x3 − ⋯ − a1n xn) / a11

x2 = (c2 − a21 x1 − a23 x3 − ⋯ − a2n xn) / a22

  ⋮

x(n−1) = (c(n−1) − a(n−1),1 x1 − a(n−1),2 x2 − ⋯ − a(n−1),(n−2) x(n−2) − a(n−1),n xn) / a(n−1),(n−1)

xn = (cn − an1 x1 − an2 x2 − ⋯ − a(n,n−1) x(n−1)) / ann
These equations can be rewritten in summation form as

x1 = ( c1 − Σ (j = 1 to n, j ≠ 1) a1j xj ) / a11

x2 = ( c2 − Σ (j = 1 to n, j ≠ 2) a2j xj ) / a22

  ⋮

x(n−1) = ( c(n−1) − Σ (j = 1 to n, j ≠ n−1) a(n−1),j xj ) / a(n−1),(n−1)

xn = ( cn − Σ (j = 1 to n, j ≠ n) anj xj ) / ann

Hence, for any row i,

xi = ( ci − Σ (j = 1 to n, j ≠ i) aij xj ) / aii,   i = 1, 2, ..., n.
Now to find xi ’s, one assumes an initial guess for the xi ’s and then uses the rewritten equations
to calculate the new estimates. Remember, one always uses the most recent estimates to
calculate the next estimates, xi . At the end of each iteration, one calculates the absolute
relative approximate error for each xi as
\left| \varepsilon_a \right|_i = \left| \frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}} \right| \times 100
where xinew is the recently obtained value of xi , and xiold is the previous value of xi .
When the absolute relative approximate error for each xi is less than the pre-specified tolerance,
the iterations are stopped.
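The update rule and stopping test above can be sketched in a few lines. The following is a minimal Python illustration (the project's own programs, given below, are in MATLAB); the 2×2 test system mentioned in the usage note is a hypothetical diagonally dominant example, not the circuit problem of the next section.

```python
def gauss_seidel(A, c, x0, tol=1e-3, max_iter=100):
    """Solve Ax = c by Gauss-Seidel iteration; assumes nonzero diagonal.

    tol is the pre-specified tolerance on the absolute relative
    approximate error |eps_a|_i, in percent."""
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        max_err = 0.0
        for i in range(n):
            # The most recent estimates are already stored in x.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (c[i] - s) / A[i][i]
            if new != 0.0:
                max_err = max(max_err, abs((new - x[i]) / new) * 100)
            x[i] = new
        if max_err < tol:
            break
    return x
```

For example, on the diagonally dominant system 4x_1 + x_2 = 1, 2x_1 + 3x_2 = 2 the iteration converges quickly to x_1 = 0.1, x_2 = 0.6.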

4.3.Application of Gauss Seidel method :

An application of Kirchhoff’s Rules to the circuit shown results in the following system of
equations :
I3 = I1 + I2
8=4 I3 +6 I2
8 I1 =4+6 I2

Find the currents I_1, I_2, I_3 using the Gauss-Seidel method. Use

\begin{bmatrix} I_1 \\ I_2 \\ I_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}

as the initial guess.

4.3.1.Formation of the problem :

The system reduces to :


1 1 −1 𝐼1 0
[0 6 4 ] [𝐼2 ] = [8]
8 −6 0 𝐼3 4

4.3.2.Solution of the problem:

I_1 = 0.8456
I_2 = 0.4616
I_3 = 1.3072

4.3.3.MATLAB Code for Gauss Seidel method:

A= [1 1 -1 0
0 6 4 8
8 -6 0 4];
x= [1
1
1];
n=rank(A);
for i=1:n-1
for r=i:n
if sum(abs(A(i,1:n)))>2*abs(A(i,i))
tmp= A(i,:);
A(i,:)=A(r,:);
A(r,:)=tmp;
end
end
end
A
b=A(:,n+1);
A=A(:,1:n);

D=diag(diag(A));
L=-tril(A,-1);
U=-triu(A,1);

for i=1:9999
x1=inv(D-L)*U*x+inv(D-L)*b;
if norm(x1-x,Inf)<10^(-3);
fprintf(2,'\nNo. of Iteration= %2i\n',i)
fprintf(2,'Solution is=\n %8.4f\n %8.4f\n %8.4f\n',x)
break
else
x=x1;
[i x']
end
end

disp('Exact roots are')


syms x y z
s=solve(1*x+ 1*y-1*z== 0, 0*x+ 6*y+ 4*z== 8,8*x -6*y+ 0*z== 4);
s1=vpa(s.x,5)
s2=vpa(s.y,5)
s3=vpa(s.z,5)

Chapter 05

Successive Over Relaxation (SOR) Technique for Solving a System of Linear Equations

5.1.Introduction:

The rate of convergence of an iterative technique depends on the spectral radius of the matrix
associated with the method. One way to select a procedure to accelerate convergence is to
choose a method whose associated matrix has minimal spectral radius. Before describing a
procedure for selecting such a method, we need to introduce a new means of measuring the
amount by which an approximation to the solution to a linear system differs from the true
solution to the system. The method makes use of the vector described in the following
definition.

5.2.SOR Technique:

Definiton: Suppose 𝐱̃ ∈𝐑n is an approximation to the solution of the linear system defined by
Ax = b.
The residual vector for 𝐱̃ with respect to this system is r = b − A𝐱̃.

5.3.Derivation of Successive Over Relaxation Technique:

In procedures such as the Jacobi or Gauss-Seidel methods, a residual vector is associated


with each calculation of an approximate component to the solution vector. The true objective
is to generate a sequence of approximations that will cause the residual vectors to converge
rapidly to zero. Suppose we let
\mathbf{r}_i^{(k)} = (r_{1i}^{(k)}, r_{2i}^{(k)}, \ldots, r_{ni}^{(k)})^t

denote the residual vector for the Gauss-Seidel method corresponding to the approximate
solution vector \mathbf{x}_i^{(k)} defined by

\mathbf{x}_i^{(k)} = (x_1^{(k)}, x_2^{(k)}, \ldots, x_{i-1}^{(k)}, x_i^{(k-1)}, \ldots, x_n^{(k-1)})^t.

The mth component of \mathbf{r}_i^{(k)} is

r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i}^{n} a_{mj} x_j^{(k-1)}   (5.1)

or, equivalently,

r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i+1}^{n} a_{mj} x_j^{(k-1)} - a_{mi} x_i^{(k-1)},

for each m = 1, 2, \ldots, n.

In particular, the ith component of \mathbf{r}_i^{(k)} is

r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)},

so

a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}.   (5.2)

Recall, however, that in the Gauss-Seidel method, x_i^{(k)} is chosen to be

x_i^{(k)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]   (5.3)

So Equation (5.2) can be rewritten as

a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = a_{ii} x_i^{(k)}.

Consequently, the Gauss-Seidel method can be characterized as choosing x_i^{(k)} to satisfy

x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}.   (5.4)
We can derive another connection between the residual vectors and the Gauss-Seidel
technique. Consider the residual vector \mathbf{r}_{i+1}^{(k)} associated with the vector
\mathbf{x}_{i+1}^{(k)} = (x_1^{(k)}, \ldots, x_i^{(k)}, x_{i+1}^{(k-1)}, \ldots, x_n^{(k-1)})^t. By Equation (5.1) the ith component of \mathbf{r}_{i+1}^{(k)} is

r_{i,i+1}^{(k)} = b_i - \sum_{j=1}^{i} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}

= b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k)}.

By the manner in which x_i^{(k)} is defined in Equation (5.3), we see that r_{i,i+1}^{(k)} = 0. In a sense, then, the Gauss-Seidel technique is characterized by choosing each x_{i+1}^{(k)} in such a way that the ith component of \mathbf{r}_{i+1}^{(k)} is zero.

Choosing x_{i+1}^{(k)} so that one coordinate of the residual vector is zero, however, is not
necessarily the most efficient way to reduce the norm of the vector \mathbf{r}_{i+1}^{(k)}. If we modify the
Gauss-Seidel procedure, as given by Eq. (5.4), to

x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}},   (5.5)
then for certain choices of a positive ω we can reduce the norm of the residual vector and
obtain significantly faster convergence.
Methods involving Eq. (5.5) are called relaxation methods. For choices of ω with
0 < ω < 1, the procedures are called under-relaxation methods. We will be interested
in choices of ω with 1 < ω, and these are called over-relaxation methods. They are
used to accelerate the convergence for systems that are convergent by the Gauss-Seidel
technique. The methods are abbreviated SOR, for Successive Over-Relaxation, and are
particularly useful for solving the linear systems that occur in the numerical solution of
certain partial-differential equations.
Before illustrating the advantages of the SOR method, we note that by using Eq. (5.2),
we can reformulate Eq. (5.5) for calculation purposes as

x_i^{(k)} = (1 - \omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right].

To determine the matrix form of the SOR method, we rewrite this as

a_{ii} x_i^{(k)} + \omega \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} = (1 - \omega) a_{ii} x_i^{(k-1)} - \omega \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + \omega b_i,

so that, in vector form, we have

(D - \omega L) \mathbf{x}^{(k)} = [(1 - \omega) D + \omega U] \mathbf{x}^{(k-1)} + \omega \mathbf{b}.

That is,

\mathbf{x}^{(k)} = (D - \omega L)^{-1} [(1 - \omega) D + \omega U] \mathbf{x}^{(k-1)} + \omega (D - \omega L)^{-1} \mathbf{b}.   (5.6)

Letting T_\omega = (D - \omega L)^{-1} [(1 - \omega) D + \omega U] and \mathbf{c}_\omega = \omega (D - \omega L)^{-1} \mathbf{b} gives the SOR technique the form

\mathbf{x}^{(k)} = T_\omega \mathbf{x}^{(k-1)} + \mathbf{c}_\omega.

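The componentwise form of Eq. (5.5) translates directly into code. Below is a minimal Python sketch (the project's own program, given below, is in MATLAB); the 2×2 symmetric positive definite test system is a hypothetical example chosen so that SOR converges for 0 < ω < 2.

```python
def sor(A, b, x0, w, tol=1e-6, max_iter=500):
    """Successive Over Relaxation for Ax = b with relaxation factor w.

    w = 1 reproduces Gauss-Seidel; 1 < w < 2 is over-relaxation."""
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s1 = sum(A[i][j] * x[j] for j in range(i))         # updated entries
            s2 = sum(A[i][j] * x[j] for j in range(i + 1, n))  # old entries
            new = (1.0 - w) * x[i] + w * (b[i] - s1 - s2) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            break
    return x
```

Calling `sor` with w = 1.1 on the system 4x_1 + x_2 = 1, x_1 + 3x_2 = 2 converges to the exact solution (1/11, 7/11).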
5.4.Application of Successive Over Relaxation Technique:

An application of Kirchhoff’s Rules to the circuit shown results in the following system of
equations :
I3 = I1 + I2
8=4 I3 +6 I2
8 I1 =4+6 I2

Find the currents I_1, I_2, I_3 using the SOR technique. Use

\begin{bmatrix} I_1 \\ I_2 \\ I_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}

as the initial guess.

5.4.1.Formation of the problem :

The system reduces to :


1 1 −1 𝐼1 0
[0 6 4 ] [𝐼2 ] = [8]
8 −6 0 𝐼3 4

5.4.2.Solution of the problem:

I_1 = 0.8456
I_2 = 0.4620
I_3 = 1.3077

5.4.3. MATLAB Code for Successive Over Relaxation Technique:

A= [1 1 -1 0
0 6 4 8
8 -6 0 4];
x= [1
1
1];
n=rank(A);
for i=1:n-1
for r=i:n
if sum(abs(A(i,1:n)))>2*abs(A(i,i))
tmp= A(i,:);
A(i,:)=A(r,:);
A(r,:)=tmp;
end
end
end
A
b=A(:,n+1);
A=A(:,1:n);

D=diag(diag(A));
L=-tril(A,-1);
U=-triu(A,1);

w=input('Enter w=');

for i=1:9999
x1=inv(D-w*L)*((1-w)*D+w*U)*x+w*inv(D-w*L)*b;
if norm(x1-x,Inf)<10^(-3);
fprintf(2,'\nNo. of Iteration= %2i\n',i )
fprintf(2,'Solution is=\n %8.4f\n %8.4f\n %8.4f\n',x)
break
else
x=x1;
[i,x']
end
end

disp('Exact roots are')


syms x y z
s=solve(1*x+ 1*y-1*z== 0, 0*x+ 6*y+ 4*z== 8,8*x -6*y+ 0*z== 4);
s1=vpa(s.x,5)
s2=vpa(s.y,5)
s3=vpa(s.z,5)

Chapter 06

LU Decomposition Method for Solving a system of Linear Equations

6.1.Introduction:
For the direct solution of linear systems of equations, Gaussian elimination with partial pivoting is one
of the principal tools. Here we will see that the steps used to solve a system of the form Ax=b can
be used to factor a matrix. The factorization is particularly useful when it has the form A=LU.
That is, LU decomposition is used as a method to solve a set of simultaneous linear equations.

6.2.Derivation of LU Decomposition Method :

We need to learn another method, LU decomposition, because it could be a better choice
than the Gauss elimination techniques in some cases. Let us first discuss what LU
decomposition is about.

For a nonsingular matrix [A] on which one can successfully conduct the naive Gauss
elimination forward elimination steps, one can always write it as

[A] = [L][U]

where

[L] = lower triangular matrix
[U] = upper triangular matrix

Then if one is solving a set of equations

[A][X] = [C],

then

[L][U][X] = [C]   as [A] = [L][U]

Multiplying both sides by [L]^{-1},

[L]^{-1}[L][U][X] = [L]^{-1}[C]
[I][U][X] = [L]^{-1}[C]   as [L]^{-1}[L] = [I]
[U][X] = [L]^{-1}[C]   as [I][U] = [U]

Let

[L]^{-1}[C] = [Z]

then

[L][Z] = [C]   (6.1)

and

[U][X] = [Z]   (6.2)

So we can solve Equation (6.1) first for [Z] by forward substitution and then use
Equation (6.2) to calculate the solution vector [X] by back substitution.

[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}

The [U] matrix is the same as the one found at the end of the forward elimination steps of the
naive Gauss elimination method.
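The factor-then-substitute procedure of Equations (6.1) and (6.2) can be sketched as follows. This is a minimal Python illustration of Doolittle's factorization without pivoting (the project's own program, given below, uses MATLAB's built-in `lu`); it assumes the forward elimination succeeds, i.e. no zero pivot appears.

```python
def lu_solve(A, c):
    """Doolittle LU factorization (no pivoting), then solve LZ = C, UX = Z."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0  # unit diagonal, as in the [L] matrix above
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    # Forward substitution: [L][Z] = [C], Eq. (6.1)
    z = [0.0] * n
    for i in range(n):
        z[i] = c[i] - sum(L[i][k] * z[k] for k in range(i))
    # Back substitution: [U][X] = [Z], Eq. (6.2)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

For the small test system 2x_1 + x_2 = 3, 4x_1 + 3x_2 = 7 this returns the exact solution (1, 1).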

6.3.Application of LU Decomposition Method:

An application of Kirchhoff’s Rules to the circuit shown results in the following system of
equations :
I3 = I1 + I2
8=4 I3 +6 I2
8 I1 =4+6 I2

Find the currents I1 , I2 , I3 using LU Decomposition Method.

6.3.1.Formation of the problem :

The system reduces to :


1 1 −1 𝐼1 0
[0 𝐼
6 4 ] [ 2 ] = [8]
8 −6 0 𝐼3 4

6.3.2.Solution of the problem:

I_1 = 0.8462
I_2 = 0.4615
I_3 = 1.3077

6.3.3. MATLAB Code for LU Decomposition Method :

A= [1 1 -1
0 6 4
8 -6 0];
b=[0
8
4];
[L U]=lu(A);
y=inv(L)*b;
x=inv(U)*y;
fprintf(2,'Solution is=\n %8.4f\n %8.4f\n %8.4f\n',x)

disp('Exact roots are')


syms x y z
s=solve(1*x+ 1*y-1*z== 0, 0*x+ 6*y+ 4*z== 8,8*x -6*y+ 0*z== 4);
s1=vpa(s.x,5)
s2=vpa(s.y,5)
s3=vpa(s.z,5)

Comparison Table:
A Table Showing the Error Percentage Among the Numerical Methods With the Exact Solutions:

Name of Method                       Error % in x1   Error % in x2   Error % in x3
Gaussian Elimination Method          0.011818        0.000000        0.00000
Jacobi Iterative Method              0.059094        0.043336        0.06117
Gauss-Seidel Method                  0.059094        0.021668        0.03823
Successive Over Relaxation Method    0.059094        0.108342        0.00000
LU Decomposition Method              0.011818        0.000000        0.00000

Comments on Observation :

The best methods are the Gaussian Elimination Method and the LU Decomposition Method,
which give the minimum error percentage.

Chapter 07

Euler’s Method for Solving Ordinary Differential Equations

7.1.Introduction:

Euler’s method is the most elementary approximation technique for solving initial-value
problems. Although it is seldom used in practice, the simplicity of its derivation can be
used to illustrate the techniques involved in the construction of some of the more advanced
techniques, without the cumbersome algebra that accompanies these constructions. The use of
elementary difference methods to approximate the solution to differential equation was one of
the numerous mathematical topics that was first presented to the mathematical public by the
most prolific of mathematicians, Leonhard Euler (1707–1783).

7.2.Definition of Euler’s method:

Euler's method is a numerical technique to solve ordinary differential equations of the form

\frac{dy}{dx} = f(x, y), \qquad y(0) = y_0   (7.1)
So only first order ordinary differential equations can be solved by using Euler’s method. In
another chapter we will discuss how Euler’s method is used to solve higher order ordinary
differential equations or coupled (simultaneous) differential equations.

7.3.Derivation of Euler’s method:

At x = 0, we are given the value y = y_0. Let us call x = 0 as x_0. Now since we know the
slope of y with respect to x, that is, f(x, y), then at x = x_0 the slope is f(x_0, y_0). Both x_0
and y_0 are known from the initial condition y(x_0) = y_0.

Figure 7.3.1: Graphical interpretation of the first step of Euler's method.

So the slope at x = x_0, as shown in Figure 7.3.1, is

\text{Slope} = \frac{\text{Rise}}{\text{Run}} = \frac{y_1 - y_0}{x_1 - x_0} = f(x_0, y_0)

From here,

y_1 = y_0 + f(x_0, y_0)(x_1 - x_0)

Calling x_1 - x_0 the step size h, we get

y_1 = y_0 + f(x_0, y_0) h   (7.2)

We can now use the value of y_1 (an approximate value of y at x = x_1) to calculate y_2, the
predicted value at x_2, given by

y_2 = y_1 + f(x_1, y_1) h
x_2 = x_1 + h

Based on the above equations, if we now know the value of y = y_i at x_i, then

y_{i+1} = y_i + f(x_i, y_i) h   (7.3)

This formula is known as Euler's method and is illustrated graphically in Figure 7.3.2. In some
books, it is also called the Euler-Cauchy method.

Figure 7.3.2: General graphical interpretation of Euler's method.
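Formula (7.3) is a one-line update. The following minimal Python sketch (the project's own programs are in MATLAB) advances an IVP y' = f(x, y) through n Euler steps; the test equation y' = y in the usage note is a hypothetical example, not the cooling problem of Section 7.5.

```python
def euler(f, x0, y0, h, n):
    """Advance y' = f(x, y), y(x0) = y0 through n Euler steps of size h.

    Returns the lists of grid points and approximate values."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)  # Eq. (7.3): y_{i+1} = y_i + f(x_i, y_i) h
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys
```

For y' = y, y(0) = 1 with h = 0.1, ten steps give (1.1)^10 ≈ 2.5937 against the exact e ≈ 2.7183, the expected first order accuracy.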

7.4.Error Analysis in Euler’s Method

Many of the methods are based on the Taylor series expansion of y(t) about the point t = t_0.
We know that in Euler's method t_i = t_0 + ih. The nth degree Taylor polynomial has the form

P_n(t_i) = y(t_0) + y'(t_0) ih + \frac{y''(t_0)}{2!} i^2 h^2 + \cdots + \frac{y^{(n)}(t_0)}{n!} i^n h^n

The formula uses the same step size h as Euler's method. In the Taylor series, n goes to
infinity; the nth Taylor polynomial truncates the series at the (n+1)st term. When the
series is truncated there is some error between the Taylor polynomial and y(t). The error is
called the remainder term, or truncation error:

R_n(t_i) = \frac{y^{(n+1)}(\xi(t))}{(n+1)!} i^{n+1} h^{n+1}

This term helps us determine the error of a numerical method. In Euler's method, since it uses
the Taylor polynomial of order 1, the remainder term is

R_1(t_i) = \frac{y''(\xi(t))}{2!} i^2 h^2

Decreasing the step size h decreases the remainder term, and thus the error, as long as y''(t) is
bounded by M on [a, b]. This is one of the ways to decrease the error. Since the global error is
on the order of h, we say that Euler's method is of order h, denoted by O(h).
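The O(h) behavior can be checked numerically. A small Python experiment on the hypothetical test problem y' = -y, y(0) = 1 (chosen because the exact solution e^{-t} is known) shows the error at t = 1 roughly halving when h is halved:

```python
import math

def euler_final(h):
    """Euler approximation of y' = -y, y(0) = 1, evaluated at t = 1."""
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y += h * (-y)
    return y

exact = math.exp(-1.0)
err_h = abs(euler_final(0.10) - exact)
err_h2 = abs(euler_final(0.05) - exact)
ratio = err_h / err_h2  # close to 2, consistent with order-h error
```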

7.5.Application of Euler’s Method :

A ball at 1200 K is allowed to cool down in air at an ambient temperature of 300K . Assuming
heat is lost only due to radiation, the differential equation for the temperature of the ball is
given by

\frac{d\theta}{dt} = -2.2067 \times 10^{-12} (\theta^4 - 81 \times 10^8), \qquad \theta(0) = 1200 \text{ K}

where \theta is in K and t is in seconds. Solve the IVP using Euler's method. Assume a step size
of h = 50 seconds.

7.5.1.Solution of the problem:

t Appr. Sol. Exact sol. Error percent


--------------------------------------------------------------
0.000 1200.000 1200.000 0.000
50.000 972.103 1032.841 5.881
100.000 874.468 931.897 6.163
150.000 810.843 861.782 5.911
200.000 764.043 809.108 5.570
250.000 727.337 767.497 5.233
300.000 697.352 733.454 4.922
350.000 672.153 704.874 4.642
400.000 650.526 680.401 4.391
450.000 631.660 659.114 4.165
500.000 614.989 640.358 3.962

7.5.2.Temperature vs Time Graph:

7.5.3.MATLAB code for Euler’s Method :

clc;clear all;clf;

a=0;
b=500;
h=50;
alpha=1200;
f=@(t,y)(-2.2067*10^(-12)*(y^4-81*10^8));

n=(b-a)/h;
t(1)=a;
w(1)=alpha;

for i=2:n+1
t(i)=t(i-1)+h;
w(i)=w(i-1)+h*f(t(i-1),w(i-1));
end

[t,ex]=ode45(f,[a:h:b],alpha);

for i=1:n+1
err(i)=abs((ex(i)-w(i))/ex(i))*100;
end

p=[t';w;ex';err];
fprintf('For h=%3i\n',h)
fprintf(2,' t Appr. Sol. Exact sol. Error percent\n');
fprintf(' %6.3f %8.3f %8.3f %6.3f\n',p);

plot(t,w,'b:')
hold on
plot(t,ex,'r')
legend('Appr. Sol.','Exact Sol.')
title('Temperature vs Time ')
xlabel('Time, t (s)')
ylabel('Temperature, theta (K)')

Chapter 08

Runge-Kutta 2nd Order Method for Ordinary Differential Equations

8.1.Introduction:

In numerical analysis, the Runge–Kutta methods are a family of implicit and explicit iterative
methods, which include the well-known Euler's method, used in temporal
discretization for the approximate solutions of ordinary differential equations. They are
motivated by the dependence of the Taylor methods on the specific IVP. These new methods
do not require derivatives of the right-hand side function f in the code, and are therefore
general-purpose initial value problem solvers. Runge-Kutta methods are among the most
popular ODE solvers. They were first studied by Carle Runge and Martin Kutta around
1900. Modern developments are mostly due to John Butcher in the 1960s.

8.2.Definition of Runge-Kutta 2nd order method:

The Runge-Kutta 2nd order method is a numerical technique used to solve an ordinary
differential equation of the form

\frac{dy}{dx} = f(x, y), \qquad y(0) = y_0
Only first order ordinary differential equations can be solved by using the Runge-Kutta 2nd
order method. In other sections, we will discuss how the Euler and Runge-Kutta methods are
used to solve higher order ordinary differential equations or coupled (simultaneous) differential
equations.

8.3.Derivation of Runge-Kutta 2nd order method:

Euler's method is given by

y_{i+1} = y_i + f(x_i, y_i) h   (8.1)

where

x_0 = 0
y_0 = y(x_0)
h = x_{i+1} - x_i

To understand the Runge-Kutta 2nd order method, we need to derive Euler's method from the
Taylor series:

y_{i+1} = y_i + \left. \frac{dy}{dx} \right|_{x_i, y_i} (x_{i+1} - x_i) + \frac{1}{2!} \left. \frac{d^2 y}{dx^2} \right|_{x_i, y_i} (x_{i+1} - x_i)^2 + \frac{1}{3!} \left. \frac{d^3 y}{dx^3} \right|_{x_i, y_i} (x_{i+1} - x_i)^3 + \cdots

= y_i + f(x_i, y_i)(x_{i+1} - x_i) + \frac{1}{2!} f'(x_i, y_i)(x_{i+1} - x_i)^2 + \frac{1}{3!} f''(x_i, y_i)(x_{i+1} - x_i)^3 + \cdots   (8.2)

As we can see, the first two terms of the Taylor series,

y_{i+1} = y_i + f(x_i, y_i) h,

are Euler's method, which can hence be considered to be the Runge-Kutta 1st order method.

8.4.Error Analysis in the Runge-Kutta Method of Order 2:

The true error in the approximation is given by

E_t = \frac{f'(x_i, y_i)}{2!} h^2 + \frac{f''(x_i, y_i)}{3!} h^3 + \cdots   (8.3)

So what would a 2nd order method formula look like? It would include one more term of the
Taylor series, as follows:

y_{i+1} = y_i + f(x_i, y_i) h + \frac{1}{2!} f'(x_i, y_i) h^2   (8.4)

However, we already see the difficulty of having to find f'(x, y) in the above method. What
Runge and Kutta did was write the 2nd order method as

y_{i+1} = y_i + (a_1 k_1 + a_2 k_2) h   (8.5)

where

k_1 = f(x_i, y_i)
k_2 = f(x_i + p_1 h, y_i + q_{11} k_1 h)   (8.6)

This form allows one to take advantage of the 2nd order method without having to calculate
f'(x, y).

So how do we find the unknowns a_1, a_2, p_1 and q_{11}? Without proof, equating Equations
(8.4) and (8.5) gives three equations:

a_1 + a_2 = 1
a_2 p_1 = \frac{1}{2}
a_2 q_{11} = \frac{1}{2}

Since we have 3 equations and 4 unknowns, we can assume the value of one of the unknowns;
the other three will then be determined from the three equations. Generally the value of a_2 is
chosen to evaluate the other three constants. The three values generally used for a_2 are 1/2, 1
and 2/3, and these choices are known as Heun's method, the midpoint method and Ralston's
method, respectively.

8.5.1.Heun's Method:

Here a_2 = \frac{1}{2} is chosen, giving

a_1 = \frac{1}{2}, \quad p_1 = 1, \quad q_{11} = 1

resulting in

y_{i+1} = y_i + \left( \frac{1}{2} k_1 + \frac{1}{2} k_2 \right) h   (8.7)

where

k_1 = f(x_i, y_i)   (8.7.1)
k_2 = f(x_i + h, y_i + k_1 h)   (8.7.2)

This method is explained graphically in Figure 8.5.1.1.

Figure 8.5.1.1: Runge-Kutta 2nd order method (Heun's method).
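One step of Heun's method, Eqs. (8.7)-(8.7.2), can be sketched as a short Python function (a minimal illustration; the project's own programs are in MATLAB):

```python
def heun_step(f, x, y, h):
    """One Runge-Kutta 2nd order step with a1 = a2 = 1/2 (Heun's method)."""
    k1 = f(x, y)                    # slope at the left end, Eq. (8.7.1)
    k2 = f(x + h, y + k1 * h)       # slope at the predicted right end, Eq. (8.7.2)
    return y + 0.5 * (k1 + k2) * h  # Eq. (8.7)
```

For the hypothetical test equation y' = x, y(0) = 0, one step of size h = 1 gives 0.5, which is exact since the solution x^2/2 is quadratic.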

8.5.2.Midpoint Method:

Here a_2 = 1 is chosen, giving

a_1 = 0, \quad p_1 = \frac{1}{2}, \quad q_{11} = \frac{1}{2}

resulting in

y_{i+1} = y_i + k_2 h   (8.8)

where

k_1 = f(x_i, y_i)   (8.8.1)
k_2 = f\left( x_i + \frac{1}{2} h, y_i + \frac{1}{2} k_1 h \right)   (8.8.2)

8.5.3.Ralston's Method:

Here a_2 = \frac{2}{3} is chosen, giving

a_1 = \frac{1}{3}, \quad p_1 = \frac{3}{4}, \quad q_{11} = \frac{3}{4}

resulting in

y_{i+1} = y_i + \left( \frac{1}{3} k_1 + \frac{2}{3} k_2 \right) h   (8.9)

where

k_1 = f(x_i, y_i)   (8.9.1)
k_2 = f\left( x_i + \frac{3}{4} h, y_i + \frac{3}{4} k_1 h \right)   (8.9.2)
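Since all three choices satisfy the same constraint equations (a_1 + a_2 = 1, a_2 p_1 = a_2 q_{11} = 1/2), a single generic step function covers Heun's, the midpoint and Ralston's methods. A minimal Python sketch, with a_2 as a parameter (an illustrative packaging; the methods are usually written out separately as above):

```python
def rk2_step(f, x, y, h, a2):
    """Generic Runge-Kutta 2nd order step: a1 = 1 - a2, p1 = q11 = 1/(2 a2).

    a2 = 1/2 gives Heun's method, a2 = 1 the midpoint method,
    a2 = 2/3 Ralston's method."""
    a1 = 1.0 - a2
    p1 = 1.0 / (2.0 * a2)
    k1 = f(x, y)
    k2 = f(x + p1 * h, y + p1 * k1 * h)
    return y + (a1 * k1 + a2 * k2) * h
```

All three variants agree through the h^2 Taylor term; for y' = y, y(0) = 1, h = 0.1 each returns 1 + h + h^2/2 = 1.105.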

8.6.Application of Runge-Kutta 2nd order method:

A ball at 1200 K is allowed to cool down in air at an ambient temperature of 300 K. Assuming
heat is lost only due to radiation, the differential equation for the temperature of the ball is
given by

\frac{d\theta}{dt} = -2.2067 \times 10^{-12} (\theta^4 - 81 \times 10^8), \qquad \theta(0) = 1200 \text{ K}

where \theta is in K and t is in seconds. Solve the IVP using the Runge-Kutta 2nd order method.
Assume a step size of h = 50 seconds.

8.6.1.Solution of the problem:

t approx. sol. exact sol. Error percentage


-------------------------------------------------------------
0.00 1200.000 1200.000 0.000
50.00 1047.391 1032.841 1.409
100.00 945.914 931.897 1.504
150.00 873.708 861.782 1.384
200.00 819.142 809.108 1.240
250.00 776.017 767.497 1.110
300.00 740.782 733.454 0.999
350.00 711.257 704.874 0.906
400.00 686.023 680.401 0.826
450.00 664.113 659.114 0.759
500.00 644.843 640.358 0.700

8.6.2. Temperature vs Time Graph:

8.6.3.MATLAB Code for Runge-Kutta 2nd order method:

clc;clear all;

f=@(t,y)(-2.2067*10^(-12)*(y^4-81*10^8));
a=0;b=500;alpha=1200;

h=50;
n=(b-a)/h;
t(1)=a;
w(1)=alpha;
for i=1:n
w(i+1)=w(i)+h*f(t(i)+h/2,w(i)+h/2*f(t(i),w(i)));
t(i+1)=a+i*h;
end

[t,yex]=ode45(f,[a:h:b],alpha);
err=abs((yex-w')./yex).*100;

p=[t,w',yex,err];
fprintf(2,'For h=%5.3f\n',h)
fprintf(2,' t approx. sol. exact sol. Error percentage\n');
fprintf(' %5.2f %8.3f %8.3f %8.3f\n',p')

plot(t,w,':b')
hold on
plot(t,yex,'r')
hold off
legend('Appr. Sol.','Exact Sol.')
title('Temperature vs Time ')
xlabel('Time, t (s)')
ylabel('Temperature, theta (K)')

Chapter 09

Runge-Kutta 4th Order Method for Ordinary Differential Equations

9.1.Introduction:

In the last section it was shown that using two estimates of the slope (i.e., Second Order Runge
Kutta; using slopes at the beginning and midpoint of the time step, or using the slopes at
the beginning and end of the time step) gave an approximation with greater accuracy than
using just a single slope (i.e., First Order Runge Kutta; using only the slope at the beginning
of the interval). It seems reasonable, then, to assume that using even more estimates of the
slope would result in even more accuracy. It turns out this is true, and leads to higher-order
Runge-Kutta methods. Third order methods can be developed (but are not discussed here).
Instead we will restrict ourselves to the much more commonly used Fourth Order Runge-Kutta
technique, which uses four approximations to the slope. It is important to understand these
lower order methods before starting on the fourth order method.

9.2.Definition of Runge-Kutta 4th order method:

The Runge-Kutta 4th order method is a numerical technique used to solve an ordinary differential
equation of the form

\frac{dy}{dx} = f(x, y), \qquad y(0) = y_0
So only first order ordinary differential equations can be solved by using the Runge-Kutta 4th
order method. In other sections, we have discussed how Euler and Runge-Kutta methods are
used to solve higher order ordinary differential equations or coupled (simultaneous) differential
equations.
9.3.Derivation of Runge-Kutta 4th order method:

The Runge-Kutta 4th order method is based on the following:

y_{i+1} = y_i + (a_1 k_1 + a_2 k_2 + a_3 k_3 + a_4 k_4) h   (9.1)

where, knowing the value of y = y_i at x_i, we can find the value of y = y_{i+1} at x_{i+1}, and

h = x_{i+1} - x_i

Equation (9.1) is equated to the first five terms of the Taylor series:

y_{i+1} = y_i + \left. \frac{dy}{dx} \right|_{x_i, y_i} (x_{i+1} - x_i) + \frac{1}{2!} \left. \frac{d^2 y}{dx^2} \right|_{x_i, y_i} (x_{i+1} - x_i)^2 + \frac{1}{3!} \left. \frac{d^3 y}{dx^3} \right|_{x_i, y_i} (x_{i+1} - x_i)^3 + \frac{1}{4!} \left. \frac{d^4 y}{dx^4} \right|_{x_i, y_i} (x_{i+1} - x_i)^4   (9.2)

Knowing that \frac{dy}{dx} = f(x, y) and x_{i+1} - x_i = h,

y_{i+1} = y_i + f(x_i, y_i) h + \frac{1}{2!} f'(x_i, y_i) h^2 + \frac{1}{3!} f''(x_i, y_i) h^3 + \frac{1}{4!} f'''(x_i, y_i) h^4   (9.3)

Based on equating Equation (9.2) and Equation (9.3), one of the popular solutions used is

y_{i+1} = y_i + \frac{1}{6} (k_1 + 2 k_2 + 2 k_3 + k_4) h   (9.4)

where

k_1 = f(x_i, y_i)   (9.4.1)
k_2 = f\left( x_i + \frac{1}{2} h, y_i + \frac{1}{2} k_1 h \right)   (9.4.2)
k_3 = f\left( x_i + \frac{1}{2} h, y_i + \frac{1}{2} k_2 h \right)   (9.4.3)
k_4 = f(x_i + h, y_i + k_3 h)   (9.4.4)
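The classical formulas (9.4)-(9.4.4) translate directly into code. A minimal Python sketch of one step (the project's own program below is in MATLAB):

```python
def rk4_step(f, x, y, h):
    """One classical Runge-Kutta 4th order step, Eqs. (9.4)-(9.4.4)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + k1 * h / 2)
    k3 = f(x + h / 2, y + k2 * h / 2)
    k4 = f(x + h, y + k3 * h)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) * h / 6
```

For the hypothetical test equation y' = y, y(0) = 1, one step of h = 0.1 reproduces e^{0.1} to about 10^{-7}, matching the method's high local accuracy.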

9.4.Error Analysis in the Runge-Kutta Method of Order 4:

As in Euler's method, Runge-Kutta methods are based on the Taylor series polynomial. The fourth
order Runge-Kutta method uses the fourth order Taylor polynomial, so the method's remainder term is

R_n(t_i) = \frac{y^{(4)}(\xi(t))}{4!} i^4 h^4

The error is O(h^4), clearly better than all the other methods presented, as long as the derivative
y^{(4)} is bounded by M on [a, b]. The fourth order Runge-Kutta method achieves a better error
than Euler's method even with a much larger step size. To illustrate this, we compare
problem (2.3) solved with Euler's method, Taylor's method and the fourth order Runge-Kutta
method with step size h = 0.5; even with the same step size, the
fourth order Runge-Kutta method has the smallest error. It should also be said that the fourth order
Runge-Kutta method needs fewer computations than the other methods to attain a similar error:
since the step size can be larger, the computer runs through the method fewer times
to get accurate estimates.

We are comparing the exact results with Euler's method (Runge-Kutta 1st order method),
Heun's method (Runge-Kutta 2nd order method), and the Runge-Kutta 4th order method.
The formula described in this chapter was developed by Runge. It is the same as
Simpson's 1/3 rule if f(x, y) is a function of x alone. There are other versions of the 4th
order method, just as there are several versions of the second order methods. The formula
developed by Kutta is

y_{i+1} = y_i + \frac{1}{8} (k_1 + 3 k_2 + 3 k_3 + k_4) h   (9.5)

where

k_1 = f(x_i, y_i)   (9.5.1)
k_2 = f\left( x_i + \frac{1}{3} h, y_i + \frac{1}{3} h k_1 \right)   (9.5.2)
k_3 = f\left( x_i + \frac{2}{3} h, y_i - \frac{1}{3} h k_1 + h k_2 \right)   (9.5.3)
k_4 = f(x_i + h, y_i + h k_1 - h k_2 + h k_3)   (9.5.4)

This formula is the same as Simpson's 3/8 rule if f(x, y) is a function of x alone.
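Kutta's 3/8-rule variant, Eqs. (9.5)-(9.5.4), differs from the classical step only in its coefficients. A minimal Python sketch:

```python
def rk4_38_step(f, x, y, h):
    """One step of Kutta's 3/8-rule 4th order method, Eqs. (9.5)-(9.5.4)."""
    k1 = f(x, y)
    k2 = f(x + h / 3, y + h * k1 / 3)
    k3 = f(x + 2 * h / 3, y - h * k1 / 3 + h * k2)
    k4 = f(x + h, y + h * k1 - h * k2 + h * k3)
    return y + (k1 + 3 * k2 + 3 * k3 + k4) * h / 8
```

Both 4th order variants agree with the Taylor expansion through the h^4 term; on the hypothetical test equation y' = y, y(0) = 1, h = 0.1 this step also reproduces e^{0.1} to about 10^{-7}.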

9.5.Application of Runge-Kutta 4th order method :

A ball at 1200 K is allowed to cool down in air at an ambient temperature of 300 K. Assuming
heat is lost only due to radiation, the differential equation for the temperature of the ball is
given by

\frac{d\theta}{dt} = -2.2067 \times 10^{-12} (\theta^4 - 81 \times 10^8), \qquad \theta(0) = 1200 \text{ K}

where \theta is in K and t is in seconds. Solve the IVP using the Runge-Kutta 4th order method.
Assume a step size of h = 50 seconds.

9.5.1.Solution of the problem:

t approx. sol. exact sol. Error percentage


------------------------------------------------------------------------
0.00 1200.000 1200.000 0.00000
50.00 1032.659 1032.841 0.01758
100.00 931.766 931.897 0.01399
150.00 861.686 861.782 0.01119
200.00 809.033 809.108 0.00927
250.00 767.436 767.497 0.00789
300.00 733.403 733.454 0.00686
350.00 704.832 704.874 0.00606
400.00 680.365 680.401 0.00542
450.00 659.081 659.114 0.00490
500.00 640.330 640.358 0.00447

9.5.2.Temperature vs Time Graph:

9.5.3.MATLAB Code of Runge-Kutta 4th order method:

clc;clear all;

f=@(t,y)(-2.2067*10^(-12)*(y^4-81*10^8));
a=0;b=500;alpha=1200;

h=50;
n=(b-a)/h;
t(1)=a;
w(1)=alpha;
for i=1:n
k1=h*f(t(i),w(i));
k2=h*f(t(i)+h/2,w(i)+k1/2);
k3=h*f(t(i)+h/2,w(i)+k2/2);
k4=h*f(t(i)+h,w(i)+k3);
w(i+1)=w(i)+(k1+2*k2+2*k3+k4)/6;
t(i+1)=a+i*h;
end

[t,yex]=ode45(f,[a:h:b],alpha);
err=abs((yex-w')./yex).*100;

p=[t';w;yex';err'];
fprintf(2,'For h=%5.3f\n',h)
fprintf(2,' t approx. sol. exact sol. Error percentage\n');
fprintf(' %5.2f %8.3f %8.3f %8.5f\n',p)

plot(t,w,'*b')
hold on
plot(t,yex,'r')
hold off
legend('Appr. Sol.','Exact Sol.')
title('Temperature vs Time ')
xlabel('Time, t (s)')
ylabel('Temperature, theta (K)')

Comparison Table :

A Table Showing the Error Percentage Among the Numerical Methods:

t      Error % in Euler   Error % in RK-2   Error % in RK-4
0      0.000              0.000             0.000
50     5.881              1.409             0.017
100    6.163              1.504             0.013
150    5.911              1.384             0.011
200    5.570              1.240             0.009
250    5.233              1.110             0.007
300    4.922              0.999             0.006
350    4.642              0.906             0.006
400    4.391              0.826             0.005
450    4.165              0.759             0.004
500    3.962              0.700             0.004

Comments on Observation :
The best method is the Runge-Kutta 4th order method, since it has the minimum error percentage.

Chapter 10

Shooting Method

10.1.Introduction:

In mathematics, in the field of differential equations, an initial value problem (IVP) is an

ordinary differential equation (ODE), which frequently occurs in mathematical models that arise in
many branches of science, engineering and economics, together with a specified value, called the
initial condition, of the unknown function at a given point in the domain of the solution.

𝑦 ′ = 𝑓(𝑡, 𝑦) (10.1)
𝑦(𝑡0) = 𝑦0 (10.2)

There is also another case to consider for an ordinary differential equation: we require
the solution on an interval [a,b], with some conditions given at a and the rest at b (although
more complicated situations involving three or more points are possible). We call this a boundary
value problem (BVP):
−𝑦 ′′ + 𝑟(𝑡)𝑦 = 𝑓(𝑡), 𝑎 ≤ 𝑡 ≤ 𝑏 (10.3)
with the boundary conditions

𝑦(𝑎) = 𝐴 , 𝑦(𝑏) = 𝐵 (10.4)


For analytical solutions of IVPs and BVPs, there exist many different methods.

The numerical solution of such problems is a subject that can be treated separately, and several
methods have been derived for it. The numerical methods for the solution of IVPs for ODEs are
classified in two major groups: one-step methods and multi-step methods. The one-step
methods are the Taylor methods, Euler's method and the Runge-Kutta methods. The linear
multistep methods are the implicit Euler method, the trapezium rule method, the Adams-Bashforth
method, the Adams-Moulton method and the predictor-corrector methods. These methods were
treated in the preceding chapters.

Similarly, for the numerical study of boundary value problems there exist methods such as the
shooting method for linear and nonlinear BVPs and the finite-difference method for linear and
nonlinear BVPs.

In this project, our aim is to study the shooting method for the numerical solution of second order
BVPs, both for the linear and the nonlinear case.

The algorithms of these methods are presented to show how the method works, and some examples
are given to show its performance and advantages. The plan of this chapter is as follows: first we
give a definition of the shooting method, where we use it and for which kind of problems it is
used. We then give an explanation of linear shooting, followed by nonlinear shooting. Before the
conclusion we solve some examples in the application part using our method.

Ordinary differential equations are given either with initial conditions or with boundary
conditions. The shooting method uses the same methods that were used in solving initial value
problems. This is done by assuming the initial values that would have been given if the ordinary
differential equation were an initial value problem. The boundary value obtained is then compared
with the actual boundary value. Using trial and error or some scientific approach, one tries to get
as close to the boundary value as possible. The central idea of the method is to replace the
boundary value problem under consideration by an initial value problem of the form

-y'' + r(t) y = f(t),   a \le t \le b   (10.5)
y(a) = A,   y'(a) = S   (10.6)

where S is to be chosen in such a way that y(b) = B. This can be thought of as the problem of
determining the angle of inclination arctan S of a loaded gun so that, when the bullet is shot from
height A at the point t = a, it hits the target placed at height B at the point t = b. Hence the
name "shooting method".

Once the boundary value problem has been transformed into such an 'equivalent' initial value
problem, any of the methods for the numerical solution of initial value problems can be applied to

find a numerical solution. The following theorem gives general conditions that ensure that the
solution to a second-order boundary value problem exists and is unique.

10.2.Theorem (Boundary Value Problem):


Suppose the function f in the boundary value problem

y'' = f(t, y, y'),   a \le t \le b,   y(a) = A,   y(b) = B

is continuous on the set D = \{(t, y, y') : a \le t \le b, -\infty < y < \infty, -\infty < y' < \infty\},
and that \partial f / \partial y and \partial f / \partial y' are also continuous on D. If

(i) \frac{\partial f}{\partial y}(t, y, y') > 0 for all (t, y, y') \in D, and

(ii) a constant M exists with \left| \frac{\partial f}{\partial y'}(t, y, y') \right| \le M for all (t, y, y') \in D,

then the boundary value problem has a unique solution.

10.3.Corollary (linear Boundary Value Problem)

Assume that f in Theorem 10.2 has the form

f(t, y, y') = p(t) y' + q(t) y + r(t)

and that f and its partial derivatives \partial f / \partial y = q(t) and \partial f / \partial y' = p(t)
are continuous on D. If there exists a constant M > 0 for which p(t) and q(t) satisfy

q(t) > 0 for all t \in [a, b]

and

|p(t)| \le M = \max_{a \le t \le b} \{ |p(t)| \},

then the linear boundary value problem

y'' = p(t) y' + q(t) y + r(t)   with   y(a) = A and y(b) = B

has a unique solution y = y(t) over a \le t \le b.

10.4.Linear Shooting Method


A linear two-point boundary value problem can be solved by forming a linear combination
of the solutions to two initial value problems. The form of the IVPs depends on the form of the
boundary conditions. We begin with the simplest case, Dirichlet boundary conditions, in
which the value of the function is given at each end of the interval.
We then consider some more general boundary conditions.

10.4.1.Simple Boundary Conditions:

Suppose the two-point boundary value problem is linear, i.e., of the form
y″ = p(t)y′ + q(t)y + r(t) ,  a ≤ t ≤ b ,   (10.7.1)
with boundary conditions y(a) = A, y(b) = B. The approach is to solve the two IVPs
u″ = p(t)u′ + q(t)u + r(t)  with  u(a) = A , u′(a) = 0 ,   (10.7.2)
v″ = p(t)v′ + q(t)v  with  v(a) = 0 , v′(a) = 1.   (10.7.3)
If v(b) ≠ 0, the solution of the original two-point BVP is given by
y(t) = u(t) + [(B − u(b)) / v(b)] v(t).
This mirrors the classical way of solving a linear ODE by finding a general solution of the
homogeneous equation (expressed as the ODE for v) and a particular solution of the
nonhomogeneous equation (expressed as the ODE for u). The arbitrary constant C that would
appear in the solution y(t) = u(t) + C v(t) is found from the requirement that
y(b) = u(b) + C v(b) = B, which yields
C = (B − u(b)) / v(b).

In order to approximate the solution of the linear ODE-BVP

y″ = p(t)y′ + q(t)y + r(t)  with boundary conditions y(a) = A, y(b) = B, using the linear
shooting method, we must convert the problem to a system of four first-order ODE-IVPs, which
we write as Z′ = f(t, Z). The variables Z₁ and Z₂ are u and u′, respectively, where u
satisfies the ODE-IVP
u″ = p(t)u′ + q(t)u + r(t) , with initial conditions u(a) = A, u′(a) = 0.
The variables Z₃ and Z₄ are v and v′, respectively, where v satisfies the ODE-IVP
v″ = p(t)v′ + q(t)v , with initial conditions v(a) = 0 , v′(a) = 1.
Define the variables Z₁, Z₂, Z₃ and Z₄:
Z₁ = u , Z₂ = u′ , Z₃ = v , Z₄ = v′.

Define the ODEs

Z₁′ = Z₂
Z₂′ = p(t)Z₂ + q(t)Z₁ + r(t)
Z₃′ = Z₄
Z₄′ = p(t)Z₄ + q(t)Z₃

Define the initial conditions
Z₁(a) = A , Z₂(a) = 0 , Z₃(a) = 0 , Z₄(a) = 1.
The original problem has been converted into the appropriate system of first-order ODE-IVPs
as described above.
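As a sketch of the complete procedure, the following Python code (an illustrative translation, not the project's MATLAB implementation) integrates the four-equation system Z′ = f(t, Z) with classical RK4 and forms y = u + Cv for the tube-wall problem of Section 10.5, where p(t) = −1/t, q = r = 0, and the exact solution is T(r) = 100 ln(150/r)/ln 1.5.

```python
import math

# Tube-wall problem of Section 10.5: T'' = -(1/r) T',
# i.e. p(t) = -1/t, q(t) = 0, r(t) = 0, with T(100) = 100 and T(150) = 0.
p = lambda t: -1.0/t
q = lambda t: 0.0
rf = lambda t: 0.0
a, b, A, B, h = 100.0, 150.0, 100.0, 0.0, 5.0

def f(t, Z):
    """Right-hand side of the four-equation system: Z1=u, Z2=u', Z3=v, Z4=v'."""
    return [Z[1],
            p(t)*Z[1] + q(t)*Z[0] + rf(t),
            Z[3],
            p(t)*Z[3] + q(t)*Z[2]]

n = int(round((b - a)/h))
Z = [A, 0.0, 0.0, 1.0]           # u(a)=A, u'(a)=0, v(a)=0, v'(a)=1
traj = [Z]
t = a
for _ in range(n):               # classical RK4 on the whole system
    k1 = f(t, Z)
    k2 = f(t + h/2, [z + h/2*k for z, k in zip(Z, k1)])
    k3 = f(t + h/2, [z + h/2*k for z, k in zip(Z, k2)])
    k4 = f(t + h,   [z + h*k   for z, k in zip(Z, k3)])
    Z = [z + h/6*(c1 + 2*c2 + 2*c3 + c4)
         for z, c1, c2, c3, c4 in zip(Z, k1, k2, k3, k4)]
    t += h
    traj.append(Z)

C = (B - traj[-1][0]) / traj[-1][2]        # C = (B - u(b)) / v(b)
y = [u + C*v for u, _, v, _ in traj]       # y(t) = u(t) + C v(t)

exact = [100*math.log(150/(a + i*h))/math.log(1.5) for i in range(n + 1)]
max_err = max(abs(yi - ei) for yi, ei in zip(y, exact))
print(max_err)
```

By construction the combined solution hits both boundary values exactly; the interior error comes only from the RK4 discretization.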
10.4.2.General Boundary Condition at t = b

Suppose that the linear ODE

y″ = p(t)y′ + q(t)y + r(t)   (10.8)

has boundary conditions consisting of the value of y given at t = a,
but the condition at t = b involves a linear combination of y(b) and y′(b):

y(a) = A ,  y′(b) + c y(b) = B.   (10.9)
As in the previous discussion, the approach is to solve the two IVPs:
u″ = p(t)u′ + q(t)u + r(t)  with  u(a) = A , u′(a) = 0 ,   (10.10)
v″ = p(t)v′ + q(t)v  with  v(a) = 0 , v′(a) = 1.   (10.11)
The linear combination y = u + d v satisfies the condition at t = a, since y(a) = A. We now need
to find d (if possible) so that y satisfies
y′(b) + c y(b) = B.   (10.12)
If v′(b) + c v(b) ≠ 0, there is a unique solution, given by

y(t) = u(t) + [(B − u′(b) − c u(b)) / (v′(b) + c v(b))] v(t).   (10.13)

10.4.3.General Separated Boundary Conditions

Suppose that the linear ODE

y″ = p(t)y′ + q(t)y + r(t)   (10.14)

has mixed boundary conditions at both t = a and t = b, i.e.,
y′(a) + c₁ y(a) = A ;  y′(b) + c₂ y(b) = B.   (10.15)
As in the previous discussion, the approach is to solve two IVPs; however, the appropriate
forms are now
u″ = p(t)u′ + q(t)u + r(t)  with  u(a) = 0 , u′(a) = A ,   (10.16)

v″ = p(t)v′ + q(t)v  with  v(a) = 1 , v′(a) = −c₁.   (10.17)

The linear combination y = u + d v then satisfies

y′(a) + c₁ y(a) = A for every d, and we need to find d (if possible) such that y satisfies
y′(b) + c₂ y(b) = B.
If v′(b) + c₂ v(b) ≠ 0,
there is a unique solution, given by

y(t) = u(t) + [(B − u′(b) − c₂ u(b)) / (v′(b) + c₂ v(b))] v(t).   (10.18)

10.5.Application of Linear Shooting method for Linear ODE:

Find the temperature profile inside the tube wall. Hot fluid flows inside the tube such
that the inside temperature is 100℃. The differential equation for the temperature
distribution is given by

d²T/dr² + (1/r) dT/dr = 0.

The boundary conditions are T(100) = 100℃ and T(150) = 0℃. Use the linear shooting
technique with h = 5.
10.5.1.Solution of the problem:

r approx. sol. exact sol. Error percentage


----------------------------------------------------------------------
100.000 100.000 100.000 0.000
105.000 88.048 87.967 0.092
110.000 76.630 76.494 0.178
115.000 65.699 65.530 0.257
120.000 55.216 55.034 0.331
125.000 45.146 44.966 0.400
130.000 35.457 35.293 0.465
135.000 26.122 25.985 0.525
140.000 17.115 17.016 0.582
145.000 8.414 8.361 0.636
150.000 0.000 0.000 0.000

10.5.2.Temperature vs Radius Graph

10.5.3.MATLAB Code for Linear Shooting method for Linear ODE:

p=@(x)(-1/x);
q=@(x)(0);
r=@(x)(0);
a=100;
b=150;
alpha=100;
beta=0;
h=5;    % step size specified in the problem statement

n=(b-a)/h;
u(1,1)=alpha;
u(2,1)=0;
v(1,1)=0;
v(2,1)=1;

% RK4 on the IVPs for u and v
for i=1:n
x=a+(i-1)*h;   % left endpoint of the current RK4 step
k11=h*u(2,i);
k12=h*(p(x)*u(2,i)+q(x)*u(1,i)+r(x));
k21=h*(u(2,i)+k12/2);
k22=h*(p(x+h/2)*(u(2,i)+k12/2)+q(x+h/2)*(u(1,i)+k11/2)+r(x+h/2));
k31=h*(u(2,i)+k22/2);
k32=h*(p(x+h/2)*(u(2,i)+k22/2)+q(x+h/2)*(u(1,i)+k21/2)+r(x+h/2));
k41=h*(u(2,i)+k32);
k42=h*(p(x+h)*(u(2,i)+k32)+q(x+h)*(u(1,i)+k31)+r(x+h));

u(1,i+1)=u(1,i)+1/6*(k11+2*k21+2*k31+k41);
u(2,i+1)=u(2,i)+1/6*(k12+2*k22+2*k32+k42);

m11=h*v(2,i);
m12=h*(p(x)*v(2,i)+q(x)*v(1,i));
m21=h*(v(2,i)+m12/2);
m22=h*(p(x+h/2)*(v(2,i)+m12/2)+q(x+h/2)*(v(1,i)+m11/2));
m31=h*(v(2,i)+m22/2);
m32=h*(p(x+h/2)*(v(2,i)+m22/2)+q(x+h/2)*(v(1,i)+m21/2));
m41=h*(v(2,i)+m32);
m42=h*(p(x+h)*(v(2,i)+m32)+q(x+h)*(v(1,i)+m31));

v(1,i+1)=v(1,i)+1/6*(m11+2*m21+2*m31+m41);
v(2,i+1)=v(2,i)+1/6*(m12+2*m22+2*m32+m42);

end
w(1,1)=alpha;

w(2,1)=(beta-u(1,n+1))/v(1,n+1);
W1(1)=w(1,1);
W2(1)=w(2,1);
x(1)=a;

for i=2:n+1
W1(i)=u(1,i)+w(2,1)*v(1,i);
W2(i)=u(2,i)+w(2,1)*v(2,i);
x(i)=x(i-1)+h;
end

% reference solution computed with bvp4c (twoode and bc are user-supplied
% function files for the ODE system and the boundary conditions)
solinit=bvpinit(a:h:b,[1,0]);
sol=bvp4c(@twoode,@bc,solinit);
x=a:h:b;
y=deval(sol,x);
yex=y(1,:);

err=abs((yex-W1)./yex).*100;

p=[x;W1;yex;err];
fprintf(2,'For h= %5.3f\n',h)
fprintf(2,' r approx. sol. exact sol. Error percentage\n');
fprintf(' %6.3f %8.3f %8.3f %8.3f\n',p)

plot(x,W1,':b')
hold on
plot(x,yex,'r')
hold off
legend('Approx. Sol.','Exact sol.')
title('Temperature , T(r) vs Radius , r')
xlabel('Radius , r')
ylabel('Temperature , T(r)')

Chapter 11

Finite-Difference Methods for Linear Problems

11.1.Introduction:

The linear and nonlinear shooting methods for boundary value problems can present problems
of instability. The methods in this section have better stability characteristics, but they
generally require more computation to obtain a specified accuracy.
Methods involving finite differences for solving boundary value problems replace each of the
derivatives in the differential equation with an appropriate difference-quotient approximation
of the type considered earlier. The particular difference quotient and step size h are chosen to
maintain a specified order of truncation error. However, h cannot be chosen too small because
of the general instability of the derivative approximations.

11.2.Discrete Approximation:

The finite difference method for the linear second-order boundary-value problem

𝑦 ′′ = 𝑝(𝑥)𝑦 ′ + 𝑞(𝑥)𝑦 + 𝑟(𝑥) , 𝑓𝑜𝑟 𝑎 ≤ 𝑥 ≤ 𝑏 𝑤𝑖𝑡ℎ 𝑦(𝑎) = 𝛼 𝑎𝑛𝑑 𝑦(𝑏) = 𝛽 , (11.1)


requires that difference-quotient approximations be used to approximate both y′ and y″. First,
we select an integer N > 0 and divide the interval [a, b] into (N + 1) equal subintervals whose
endpoints are the mesh points xᵢ = a + ih, for i = 0, 1, …, N + 1, where h = (b − a)/(N + 1).

Choosing the step size h in this manner facilitates the application of a matrix algorithm which
solves a linear system involving an N × N matrix.
At the interior mesh points xᵢ, for i = 1, 2, …, N, the differential equation to be approximated is
y″(xᵢ) = p(xᵢ)y′(xᵢ) + q(xᵢ)y(xᵢ) + r(xᵢ).   (11.2)
Expanding y in a third-degree Taylor polynomial about xᵢ evaluated at xᵢ₊₁ and xᵢ₋₁, we have,
assuming that y ∈ C⁴[xᵢ₋₁, xᵢ₊₁],

y(xᵢ₊₁) = y(xᵢ + h) = y(xᵢ) + h y′(xᵢ) + (h²/2) y″(xᵢ) + (h³/6) y‴(xᵢ) + (h⁴/24) y⁽⁴⁾(ξᵢ⁺),

for some ξᵢ⁺ in (xᵢ, xᵢ₊₁), and

y(xᵢ₋₁) = y(xᵢ − h) = y(xᵢ) − h y′(xᵢ) + (h²/2) y″(xᵢ) − (h³/6) y‴(xᵢ) + (h⁴/24) y⁽⁴⁾(ξᵢ⁻),

for some ξᵢ⁻ in (xᵢ₋₁, xᵢ). If these equations are added, we have

y(xᵢ₊₁) + y(xᵢ₋₁) = 2y(xᵢ) + h² y″(xᵢ) + (h⁴/24)[y⁽⁴⁾(ξᵢ⁺) + y⁽⁴⁾(ξᵢ⁻)],

and solving for y″(xᵢ) gives

y″(xᵢ) = (1/h²)[y(xᵢ₊₁) − 2y(xᵢ) + y(xᵢ₋₁)] − (h²/24)[y⁽⁴⁾(ξᵢ⁺) + y⁽⁴⁾(ξᵢ⁻)].

The Intermediate Value Theorem can be used to simplify the error term to give

y″(xᵢ) = (1/h²)[y(xᵢ₊₁) − 2y(xᵢ) + y(xᵢ₋₁)] − (h²/12) y⁽⁴⁾(ξᵢ),   (11.3)

for some ξᵢ in (xᵢ₋₁, xᵢ₊₁).
This is called the centered-difference formula for y″(xᵢ).

A centered-difference formula for y′(xᵢ) is obtained in a similar manner, resulting in

y′(xᵢ) = (1/2h)[y(xᵢ₊₁) − y(xᵢ₋₁)] − (h²/6) y‴(ηᵢ),   (11.4)

for some ηᵢ in (xᵢ₋₁, xᵢ₊₁).
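A quick numerical check of the two centered-difference formulas can be sketched as follows (an illustrative sketch; f(x) = sin x at x₀ = 1 is an arbitrary smooth test function). Since both error terms are O(h²), halving h should reduce each error by a factor of about 4.

```python
import math

x0 = 1.0
f = math.sin
d1, d2 = math.cos(x0), -math.sin(x0)   # true y'(x0) and y''(x0) for y = sin

def cd1(h):
    """Centered difference for y'(x0): [y(x0+h) - y(x0-h)] / (2h)."""
    return (f(x0 + h) - f(x0 - h)) / (2*h)

def cd2(h):
    """Centered difference for y''(x0): [y(x0+h) - 2y(x0) + y(x0-h)] / h^2."""
    return (f(x0 + h) - 2*f(x0) + f(x0 - h)) / h**2

e1_h,  e2_h  = abs(cd1(0.1) - d1),  abs(cd2(0.1) - d2)
e1_h2, e2_h2 = abs(cd1(0.05) - d1), abs(cd2(0.05) - d2)
print(e1_h / e1_h2, e2_h / e2_h2)      # both ratios should be close to 4
```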

The use of these centered-difference formulas in equation (11.2) results in the equation

[y(xᵢ₊₁) − 2y(xᵢ) + y(xᵢ₋₁)]/h² = p(xᵢ)[y(xᵢ₊₁) − y(xᵢ₋₁)]/(2h) + q(xᵢ)y(xᵢ) + r(xᵢ)
  − (h²/12)[2p(xᵢ)y‴(ηᵢ) − y⁽⁴⁾(ξᵢ)].

A finite-difference method with truncation error of order O(h²) results by using this equation
together with the boundary conditions y(a) = α and y(b) = β to define the system of linear
equations
ω₀ = α ,  ω_{N+1} = β
and

(−ω_{i+1} + 2ωᵢ − ω_{i−1})/h² + p(xᵢ)(ω_{i+1} − ω_{i−1})/(2h) + q(xᵢ)ωᵢ = −r(xᵢ) ,   (11.5)

for each i = 1, 2, …, N.
In the form we will consider, equation (11.5) is rewritten as

−(1 + (h/2)p(xᵢ))ω_{i−1} + (2 + h²q(xᵢ))ωᵢ − (1 − (h/2)p(xᵢ))ω_{i+1} = −h²r(xᵢ) ,

and the resulting system of equations is expressed in the tridiagonal N × N matrix form
A w = b , where   (11.6)


    ⎡ 2 + h²q(x₁)       −1 + (h/2)p(x₁)                                            0        ⎤
    ⎢ −1 − (h/2)p(x₂)    2 + h²q(x₂)      −1 + (h/2)p(x₂)                                   ⎥
A = ⎢                        ⋱                 ⋱                  ⋱                          ⎥
    ⎢                                          ⋱                  ⋱      −1 + (h/2)p(x_{N−1}) ⎥
    ⎣       0                             −1 − (h/2)p(x_N)                2 + h²q(x_N)       ⎦

w = (w₁, w₂, …, w_{N−1}, w_N)ᵀ ,  and

b = ( −h²r(x₁) + (1 + (h/2)p(x₁))w₀ ,  −h²r(x₂) ,  … ,  −h²r(x_{N−1}) ,
      −h²r(x_N) + (1 − (h/2)p(x_N))w_{N+1} )ᵀ.

The following theorem gives conditions under which the tridiagonal linear system (11.6) has a
unique solution.

Suppose that p, q and r are continuous on [a, b] and that q(x) ≥ 0 on [a, b]. Then the tridiagonal
linear system (11.6) has a unique solution provided that h < 2/L, where
L = max over a ≤ x ≤ b of |p(x)|.
It should be noted that these hypotheses guarantee a unique solution to the boundary value
problem (11.1), but they do not guarantee that y ∈ C⁴[a, b]. We need to establish that y⁽⁴⁾ is
continuous on [a, b] to ensure that the truncation error has order O(h²).
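The assembly and solution of the tridiagonal system (11.6) can be sketched as follows. This is an illustrative Python version (not the project's MATLAB code) applied to the tube-wall problem of Section 11.3, where p(x) = −1/x, q = r = 0; the exact solution T(r) = 100 ln(150/r)/ln 1.5 is used to measure the error.

```python
import math

# Tube-wall problem of Section 11.3: p(x) = -1/x, q(x) = 0, r(x) = 0,
# T(100) = 100, T(150) = 0; exact solution T(r) = 100 ln(150/r)/ln 1.5.
p = lambda x: -1.0/x
q = lambda x: 0.0
rf = lambda x: 0.0
a, b, alpha, beta, h = 100.0, 150.0, 100.0, 0.0, 5.0

N = int(round((b - a)/h)) - 1             # number of interior mesh points
xs = [a + i*h for i in range(N + 2)]      # mesh points x_0 .. x_{N+1}

# Coefficients of the tridiagonal system (11.6), rows i = 1..N.
diag = [2 + h**2*q(xs[i]) for i in range(1, N + 1)]
sup  = [-1 + h/2*p(xs[i]) for i in range(1, N + 1)]   # superdiagonal
sub  = [-1 - h/2*p(xs[i]) for i in range(1, N + 1)]   # subdiagonal
d    = [-h**2*rf(xs[i]) for i in range(1, N + 1)]
d[0]  += (1 + h/2*p(xs[1]))*alpha         # known boundary value w_0
d[-1] += (1 - h/2*p(xs[N]))*beta          # known boundary value w_{N+1}

# Thomas algorithm: forward elimination, then back substitution.
for i in range(1, N):
    m = sub[i]/diag[i - 1]
    diag[i] -= m*sup[i - 1]
    d[i]    -= m*d[i - 1]
w = [0.0]*N
w[-1] = d[-1]/diag[-1]
for i in range(N - 2, -1, -1):
    w[i] = (d[i] - sup[i]*w[i + 1])/diag[i]

T = [alpha] + w + [beta]                  # full temperature profile
exact = [100*math.log(150/x)/math.log(1.5) for x in xs]
max_err = max(abs(Ti - Ei) for Ti, Ei in zip(T, exact))
print(max_err)
```

With h = 5 the O(h²) truncation error keeps the profile within a few thousandths of a degree of the exact solution everywhere on the mesh.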

11.3.Application of Finite Difference method for Linear ODE:

Find the temperature profile inside the tube wall. Hot fluid flows inside the tube such
that the inside temperature is 100℃. The differential equation for the temperature
distribution is given by

d²T/dr² + (1/r) dT/dr = 0.

The boundary conditions are T(100) = 100℃ and T(150) = 0℃. Use the finite-difference
technique with h = 5.

11.3.1.Solution of the problem:

r approx. sol. exact sol. Error percentage


-------------------------------------------------------------------
100.00 100.000 100.000 0.000
105.00 87.252 87.967 0.012
110.00 76.108 76.494 0.105
115.00 65.396 65.530 0.205
120.00 55.081 55.034 0.086
125.00 44.531 44.966 0.268
130.00 35.518 35.293 0.338
135.00 25.821 25.985 0.431
140.00 17.150 17.016 0.488
145.00 8.311 8.361 0.598
150.00 0.000 0.000 0.000

11.3.2.Temperature vs Radius Graph:

11.3.3.MATLAB Code for Finite Difference method for Linear ODE:

clc;clear all;

p=@(x)(-1/x);
q=@(x)(0);
r=@(x)(0);
aa=100;
bb=150;
alpha=100;
beta=0;
h=5;

n=(bb-aa)/h-1;               % n = number of interior mesh points

% assemble the tridiagonal system (11.6):
% a = diagonal, b = superdiagonal, c = subdiagonal, d = right-hand side
x=aa+h;
a(1)=2+h^2*q(x);
b(1)=-1+h/2*p(x);
d(1)=-h^2*r(x)+(1+h/2*p(x))*alpha;

for i=2:n-1
x=aa+i*h;
a(i)=2+h^2*q(x);
b(i)=-1+h/2*p(x);
c(i)=-1-h/2*p(x);
d(i)=-h^2*r(x);
end

x=bb-h;
a(n)=2+h^2*q(x);
c(n)=-1-h/2*p(x);
d(n)=-h^2*r(x)+(1-h/2*p(x))*beta;

% Thomas algorithm: forward elimination
l(1)=a(1);
u(1)=b(1)/a(1);
z(1)=d(1)/l(1);

for i=2:n-1
l(i)=a(i)-c(i)*u(i-1);
u(i)=b(i)/l(i);
z(i)=(d(i)-c(i)*z(i-1))/l(i);
end

l(n)=a(n)-c(n)*u(n-1);
z(n)=(d(n)-c(n)*z(n-1))/l(n);

% back substitution; w(i+1) holds the approximation at x = aa+i*h
w(1)=alpha;
w(n+2)=beta;
w(n+1)=z(n);

for i=n-1:-1:1
w(i+1)=z(i)-u(i)*w(i+2);
end

for i=1:n+2
x(i)=aa+(i-1)*h;
end

% reference solution computed with bvp4c (twoode and bc are user-supplied
% function files for the ODE system and the boundary conditions)
solinit=bvpinit(aa:h:bb,[1,0]);
sol=bvp4c(@twoode,@bc,solinit);
x=aa:h:bb;
y=deval(sol,x);
yex=y(1,:);

err=abs((yex-w)./yex).*100;

p=[x;w;yex;err];
fprintf(2,'For h= %5.3f\n',h)
fprintf(2,' x approx. sol. exact sol. Error percentage\n');
fprintf(' %5.2f %8.3f %8.3f %8.3f\n',p)

plot(x,w,':b')
hold on
plot(x,yex,'r')
hold off
legend('Approx. Sol.','Exact sol.')
title('Temperature , T(r) vs Radius , r')
xlabel('Radius , r')
ylabel('Temperature , T(r)')

Comparison Table :
A Table Showing the Error Percentage Among the Numerical Methods:

  r      Error % in Linear Shooting      Error % in Finite Difference
100 0.000 0.000
105 0.092 0.012
110 0.178 0.105
115 0.257 0.205
120 0.331 0.086
125 0.400 0.268
130 0.465 0.338
135 0.525 0.431
140 0.582 0.488
145 0.636 0.598
150 0.000 0.000

Comment on observation:
For this problem the finite-difference method is preferable, since its error percentage is
smaller than that of the linear shooting method at every interior mesh point.

Chapter 12

The Power Method


12.1.Introduction:

The Power method is an iterative technique used to determine the dominant eigenvalue of a
matrix, that is, the eigenvalue with the largest magnitude. By modifying the method slightly, it
can also be used to determine other eigenvalues. One useful feature of the Power method is that
it produces not only an eigenvalue but also an associated eigenvector. In fact, the Power method
is often applied to find an eigenvector for an eigenvalue that has been determined by some other
means.

12.2.Derivation of Power Method:

To apply the Power method, we assume that the n × n matrix A has n eigenvalues
λ₁, λ₂, …, λₙ with an associated collection of linearly independent eigenvectors
{v⁽¹⁾, v⁽²⁾, v⁽³⁾, …, v⁽ⁿ⁾}. Moreover, we assume that A has precisely one eigenvalue,
λ₁, that is largest in magnitude, so that
|λ₁| > |λ₂| ≥ |λ₃| ≥ ⋯ ≥ |λₙ| ≥ 0.
An n × n matrix need not have n linearly independent eigenvectors; when it does not, the
Power method may still be successful, but it is not guaranteed to be.
If x is any vector in ℝⁿ, the fact that {v⁽¹⁾, v⁽²⁾, …, v⁽ⁿ⁾} is linearly independent
implies that constants β₁, β₂, …, βₙ exist with

x = Σ_{j=1}^{n} βⱼ v⁽ʲ⁾.
Multiplying both sides of this equation by A, A², …, Aᵏ, … gives

Ax = Σ_{j=1}^{n} βⱼ A v⁽ʲ⁾ = Σ_{j=1}^{n} βⱼ λⱼ v⁽ʲ⁾ ,
A²x = Σ_{j=1}^{n} βⱼ λⱼ A v⁽ʲ⁾ = Σ_{j=1}^{n} βⱼ λⱼ² v⁽ʲ⁾ ,

and generally, Aᵏx = Σ_{j=1}^{n} βⱼ λⱼᵏ v⁽ʲ⁾.

If λ₁ᵏ is factored from each term on the right side of the last equation, then

Aᵏx = λ₁ᵏ Σ_{j=1}^{n} βⱼ (λⱼ/λ₁)ᵏ v⁽ʲ⁾.

Since |λ₁| > |λⱼ| for all j = 2, 3, …, n, we have lim_{k→∞} (λⱼ/λ₁)ᵏ = 0, and

lim_{k→∞} Aᵏx = lim_{k→∞} λ₁ᵏ β₁ v⁽¹⁾.   (9.1)

The sequence in Eq. (9.1) converges to 0 if |λ₁| < 1 and diverges if |λ₁| > 1, provided,
of course, that β₁ ≠ 0. As a consequence, the entries in Aᵏx will grow with k if |λ₁| > 1
and will go to 0 if |λ₁| < 1, perhaps resulting in overflow or underflow. To take care of that
possibility, we scale the powers of Aᵏx in an appropriate manner to ensure that the limit in Eq.
(9.2) is finite and nonzero. The scaling begins by choosing x to be a unit vector x⁽⁰⁾ relative to
the ∞-norm, and choosing a component x⁽⁰⁾_{p₀} of x⁽⁰⁾ with

x⁽⁰⁾_{p₀} = 1 = ||x⁽⁰⁾||∞.

Let y⁽¹⁾ = Ax⁽⁰⁾ and define µ⁽¹⁾ = y⁽¹⁾_{p₀}. Then

µ⁽¹⁾ = y⁽¹⁾_{p₀} / x⁽⁰⁾_{p₀}
     = [β₁λ₁v⁽¹⁾_{p₀} + Σ_{j=2}^{n} βⱼλⱼv⁽ʲ⁾_{p₀}] / [β₁v⁽¹⁾_{p₀} + Σ_{j=2}^{n} βⱼv⁽ʲ⁾_{p₀}]
     = λ₁ [β₁v⁽¹⁾_{p₀} + Σ_{j=2}^{n} βⱼ(λⱼ/λ₁)v⁽ʲ⁾_{p₀}] / [β₁v⁽¹⁾_{p₀} + Σ_{j=2}^{n} βⱼv⁽ʲ⁾_{p₀}].

Let p₁ be the least integer such that

|y⁽¹⁾_{p₁}| = ||y⁽¹⁾||∞ ,

and define x⁽¹⁾ by

x⁽¹⁾ = y⁽¹⁾ / y⁽¹⁾_{p₁} = Ax⁽⁰⁾ / y⁽¹⁾_{p₁} ,

so that

x⁽¹⁾_{p₁} = 1 = ||x⁽¹⁾||∞.

Now define

y⁽²⁾ = Ax⁽¹⁾ = A²x⁽⁰⁾ / y⁽¹⁾_{p₁}

and

µ⁽²⁾ = y⁽²⁾_{p₁} / x⁽¹⁾_{p₁}
     = [β₁λ₁²v⁽¹⁾_{p₁} + Σ_{j=2}^{n} βⱼλⱼ²v⁽ʲ⁾_{p₁}] / [β₁λ₁v⁽¹⁾_{p₁} + Σ_{j=2}^{n} βⱼλⱼv⁽ʲ⁾_{p₁}]
     = λ₁ [β₁v⁽¹⁾_{p₁} + Σ_{j=2}^{n} βⱼ(λⱼ/λ₁)²v⁽ʲ⁾_{p₁}] / [β₁v⁽¹⁾_{p₁} + Σ_{j=2}^{n} βⱼ(λⱼ/λ₁)v⁽ʲ⁾_{p₁}].

Let p₂ be the smallest integer with

|y⁽²⁾_{p₂}| = ||y⁽²⁾||∞

and define

x⁽²⁾ = y⁽²⁾ / y⁽²⁾_{p₂} = Ax⁽¹⁾ / y⁽²⁾_{p₂} = A²x⁽⁰⁾ / (y⁽¹⁾_{p₁} y⁽²⁾_{p₂}).

In a similar manner, define sequences of vectors {x⁽ᵐ⁾}_{m=0}^{∞} and {y⁽ᵐ⁾}_{m=1}^{∞} and a
sequence of scalars {µ⁽ᵐ⁾}_{m=1}^{∞} inductively by

y⁽ᵐ⁾ = Ax⁽ᵐ⁻¹⁾ ,

µ⁽ᵐ⁾ = y⁽ᵐ⁾_{p_{m−1}}
     = λ₁ [β₁v⁽¹⁾_{p_{m−1}} + Σ_{j=2}^{n} βⱼ(λⱼ/λ₁)ᵐ v⁽ʲ⁾_{p_{m−1}}]
        / [β₁v⁽¹⁾_{p_{m−1}} + Σ_{j=2}^{n} βⱼ(λⱼ/λ₁)^{m−1} v⁽ʲ⁾_{p_{m−1}}] ,   (9.2)

and

x⁽ᵐ⁾ = y⁽ᵐ⁾ / y⁽ᵐ⁾_{p_m} = Aᵐx⁽⁰⁾ / Π_{k=1}^{m} y⁽ᵏ⁾_{p_k} ,

where at each step, p_m is used to represent the smallest integer for which

|y⁽ᵐ⁾_{p_m}| = ||y⁽ᵐ⁾||∞.

By examining Eq. (9.2), we see that since |λⱼ/λ₁| < 1 for each j = 2, 3, …, n,
lim_{m→∞} µ⁽ᵐ⁾ = λ₁, provided that x⁽⁰⁾ is chosen so that β₁ ≠ 0. Moreover, the sequence of
vectors {x⁽ᵐ⁾}_{m=0}^{∞} converges to an eigenvector associated with λ₁ that has l∞ norm equal
to one.
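The scaled iteration above can be sketched in plain Python (an illustrative sketch, not the project's MATLAB code), using the brand-switching matrix of the next section. Its columns each sum to 1, so the dominant eigenvalue is 1 with eigenvector proportional to (3000, 4500, 4500)ᵀ, i.e. (2/3, 1, 1)ᵀ after scaling to ∞-norm 1.

```python
# Brand-switching matrix from Section 12.3; each column sums to 1,
# so lambda_1 = 1 is the dominant eigenvalue.
A = [[0.4, 0.3, 0.1],
     [0.3, 0.5, 0.3],
     [0.3, 0.2, 0.6]]

def matvec(M, v):
    return [sum(row[j]*v[j] for j in range(len(v))) for row in M]

x = [2000.0, 6000.0, 4000.0]                     # initial distribution P0
x = [xi/max(abs(xj) for xj in x) for xi in x]    # scale so ||x||_inf = 1

mu = 0.0
for _ in range(200):
    y = matvec(A, x)
    mu = max(y, key=abs)              # component where |y| attains its inf-norm
    x_new = [yi/mu for yi in y]       # rescale so the largest component is 1
    if max(abs(u - v) for u, v in zip(x_new, x)) < 1e-10:
        x = x_new
        break
    x = x_new

print(mu)   # estimate of the dominant eigenvalue (here 1)
print(x)    # eigenvector scaled to inf-norm 1, proportional to (3000, 4500, 4500)
```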

12.3.Application of Power Method:

Let 𝑃0 = (𝑥 (0) , 𝑦 (0) , 𝑧 (0) )𝑇 record the number of people in a certain city who use brands
X, Y, and Z, respectively.
Each month people decide to keep using the same brand or switch brands.
The probability that a user of brand X will switch to brand Y or Z is 0.3 and 0.3, respectively.
The probability that a user of brand Y will switch to brand X or Z is 0.3 and 0.2, respectively.
The probability that a user of brand Z will switch to brand X or Y is 0.1 and 0.3, respectively.
The transition matrix for this process is P_{k+1} = A·P_k, or

  ( x^(k+1) )   ( 0.4 0.3 0.1 ) ( x^(k) )
  ( y^(k+1) ) = ( 0.3 0.5 0.3 ) ( y^(k) )
  ( z^(k+1) )   ( 0.3 0.2 0.6 ) ( z^(k) )
Assume that the initial distribution is 𝑃0 = (2000,6000,4000)𝑇
(a). Verify that 𝜆1 = 1 is the dominant eigenvalue of A.
(b). Verify that a corresponding eigenvector is 𝑣1 = (3000,4500,4500)𝑇 .
(c). Conclude that the limiting distribution is lim_{k→∞} P_k = v₁.

12.3.1.Solution of the Problem:

Dominant eigenvalue= 1.000


Dominant eigenvector=(0.6668,1.0000,0.9999)𝑇
Limiting distribution=(3000.4,4500.02,4499.58)𝑇

12.3.2.MATLAB Code for Power Method:

clc;clear all;

A=[0.4 0.3 0.1


0.3 0.5 0.3
0.3 0.2 0.6];
x0=[2000
6000
4000];
t=10^(-4);
x1=x0;
for i=1:9999
y=A*x0;
m=max(y);             % scaling factor: largest component of y
x=y./m;               % scaled iterate with largest component 1
y1=A*x1;              % unscaled iteration tracking the distribution P_k
r=max(abs(A*x-m*x));  % infinity-norm residual ||Ax - mx||

if r<t
fprintf('No. of steps= %3i\n',i)
fprintf('Dominant eigenvalue=%6.3f\n',m)
fprintf('Dominant eigenvector=\n%8.4f\n%8.4f\n%8.4f\n',x)
fprintf('Limiting distribution=\n%8.2f\n%8.2f\n%8.2f\n',y1)
break
else
x0=x;
x1=y1;
end
end

Conclusion

In this project, we focused on system-solving methods that yield the solutions of electric-circuit
models; using these methods we obtained solutions that are very close to the exact ones. For the
example from thermodynamics, a first-order differential equation with an initial condition, we
used Euler's and Runge-Kutta methods to find the pointwise solution, and the comparison of their
accuracy indicates which method is preferable. For solving the higher-order differential equation
of thermodynamics with boundary conditions, we used the shooting and finite-difference methods
to obtain the best approximation. We also focused on the power method, which is useful in
real-life settings where one may select or reject a product depending on interest: the dominant
eigenvalue and the corresponding eigenvector give the expected limiting result. Here we have
tried to show the real-life use of mathematics for various purposes. Though we have some
limitations, this is a succinct account of real-life applications of mathematics.

References

[01] Numerical analysis by Richard L. Burden & J. Douglas Faires, Ninth Edition.
[02] Pre-Calculus by Michael Sullivan.
[03] Differential Equations by Shepley L. Ross, Wiley, 3rd Edition.
[04] Gaussian Elimination – from Wolfram MathWorld.
http://mathworld.wolfram.com/GaussianElimination.html
[05] Gauss Elimination Method – nptel.ac.in
https://nptel.ac.in
[06] System of Linear Equations – Elimination Practice Problems Online
https://brilliant.org/practice/systems-of-linear-equations-elimination/
[07] System of Linear Equations – Math Is Fun
https://www.mathisfun.com/algebra/systems-linear-equations.html
[08] Solving Linear Equations : Practice Problems – study.com
https://study.com/academy/lesson/solving-linear-equations-practice-problems.html
[09] Differential Equations – Linear Equations
http://tutorial.math.lamar.edu/Class/DE/Linear.aspx
[10] What are the real life applications of first order linear differential equation ? – Quora.com
https://www.quora.com
[11] Differential equations – Practical applications of first ODE – Math Stack Exchange
https://math.stack.exchange.com/questions
[12] Second – Order Linear ODE – Oregon State University
http://math.oregonstate.edu/home/programs/undergrad/CalculusQuestStudyGuides/ode/second/s
o_lin/so_lin.html
[13] Application of Second Order Differential Equations in Mechanical Engineering
http://www.engr.sjsu.edu/trhsu
[14] The Power Method for Eigenvectors
http://mathfaculty.fullerton.edu/mathnews/n2003/PowerMethodMod.html
