Numerical Methods in Civil Engineering
Veegay T. Kibete
BSCE III
January 2024
TABLE OF CONTENTS
I. LINEAR EQUATION SYSTEM
A. SMALL SCALE
1. GRAPHICAL METHOD
2. CRAMER'S RULE
3. ELIMINATION OF UNKNOWNS
B. LARGE SCALE
1. GAUSSIAN ELIMINATION WITH PIVOTING
2. NAIVE GAUSSIAN ELIMINATION
3. LU DECOMPOSITION
4. ITERATION METHODS
a. JACOBI METHOD
b. GAUSS SEIDEL
II. ROOTS OF EQUATIONS
A. OPEN METHOD
1. FIXED-POINT ITERATION
2. NEWTON-RAPHSON METHOD
3. SECANT METHOD
B. BRACKETING METHOD
1. BISECTION METHOD
2. REGULA-FALSI METHOD (FALSE-POSITION METHOD)
INTRODUCTION
Numerical methods are mathematical techniques used to solve problems that may not have
exact analytical solutions. These methods involve representing continuous mathematical models
with discrete approximations and using computational algorithms to obtain numerical solutions.
Common applications include solving equations, integration, differentiation, and simulating
physical systems. Numerical methods are crucial in fields such as engineering, physics, computer
science, and finance where analytical solutions may be impractical or impossible to find.
Numerical methods in the field of engineering encompass a set of computational approaches and
algorithms utilized to address mathematical problems that often lack exact analytical solutions or
involve complex, real-world scenarios. In essence, these methods bridge the gap between
mathematical models and practical engineering applications by employing techniques that
provide numerical approximations to intricate problems.
Numerical methods play a vital role in solving complex problems that arise in the design,
analysis, and optimization of systems. Engineers often encounter situations where analytical
solutions are challenging or impractical. Numerical methods help address these challenges by
providing approximate solutions through computational techniques.
In the context of algebra, numerical methods are employed for finding roots or solutions to
equations where analytical solutions may be elusive or impractical. One common numerical
method for root-finding is the Newton-Raphson method. This iterative technique starts with an
initial guess and refines it through successive calculations, converging toward the actual root.
Numerical methods for root-finding are particularly valuable when dealing with complex or
transcendental equations, offering a practical approach to obtaining numerical approximations
for solutions. While these methods don't guarantee exact solutions, they provide a
computationally efficient means of finding roots in situations where direct algebraic solutions are
challenging.
The goal of numerical methods is to offer efficient and accurate computational techniques for
approximating solutions to mathematical problems, especially when analytical solutions are
challenging or unavailable. These methods strive to strike a balance between precision and
computational efficiency, providing robust and versatile approaches applicable across a range of
mathematical models and engineering scenarios. The key objectives include achieving accuracy,
convergence to solutions through iterative processes, robustness in handling diverse problem
types, and stability to ensure reliability in the face of input variations. Ultimately, numerical
methods aim to deliver practical solutions for complex problems in algebra and other
mathematical domains.
I. LINEAR EQUATION SYSTEM
A. SMALL SCALE
Graphical Method
Graphical methods are often used in numerical analysis to visually represent data,
relationships, and solutions to problems. These methods can provide insights, aid in
understanding complex phenomena, and facilitate problem-solving.
The term "graphical method in numerical" generally refers to the use of graphical techniques
and visual representations to analyze and solve numerical problems. In the context of
numerical analysis, graphical methods are often employed to gain insights into mathematical
relationships, visualize data, and facilitate problem-solving.
The graphical method involves the utilization of visual representations, such as graphs,
charts, and plots, to analyze numerical data, understand mathematical relationships, and solve
problems in various fields of science, engineering, and mathematics.
Overall, the graphical method serves as a powerful complement to numerical techniques,
providing a visual framework for understanding and interpreting numerical information. It
enhances the clarity and accessibility of numerical analysis, making it a valuable tool in
scientific and engineering disciplines.
Example:
Solve graphically.
0.5x1-x2=-9.5
1.02x1-2x2=-18.8
Solution:
0.5x1 - x2 = -9.5

x1   | 0   | 2     | 4     | 6     | 8     | 10   | 12    | 14    | 16    | 18
x2   | 9.5 | 10.5  | 11.5  | 12.5  | 13.5  | 14.5 | 15.5  | 16.5  | 17.5  | 18.5

1.02x1 - 2x2 = -18.8

x1   | 0   | 2     | 4     | 6     | 8     | 10   | 12    | 14    | 16    | 18
x2   | 9.4 | 10.42 | 11.44 | 12.46 | 13.48 | 14.5 | 15.52 | 16.54 | 17.56 | 18.58
[Graph: both lines plotted on the x1-x2 plane for 0 ≤ x1 ≤ 20; they intersect at the point (10, 14.5).]
Therefore, the solutions for the unknowns are x1 = 10 and x2 = 14.5, which will
satisfy the given equations.
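The graphical result can be double-checked by tabulating both lines in code. This is a sketch of my own (the function names `line1` and `line2` are not from the report), using the same rearranged equations as the tables above.

```python
def line1(x1):
    # 0.5*x1 - x2 = -9.5  ->  x2 = 0.5*x1 + 9.5
    return 0.5 * x1 + 9.5

def line2(x1):
    # 1.02*x1 - 2*x2 = -18.8  ->  x2 = 0.51*x1 + 9.4
    return 0.51 * x1 + 9.4

# Scan the same x1 range used in the tables and report where the lines meet.
for x1 in range(0, 20, 2):
    if abs(line1(x1) - line2(x1)) < 1e-9:
        print(f"intersection at x1 = {x1}, x2 = {line1(x1)}")
```

Running the scan reports the intersection at x1 = 10, x2 = 14.5, matching the graph.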
Cramer's Rule
Cramer's Rule is a method used to solve a system of linear equations with as many equations
as unknowns, assuming that the system's determinant is nonzero. It provides a formula for
finding the individual unknowns by expressing the solution in terms of determinants.
It's important to note that Cramer's Rule has some limitations and may not be the most
efficient method for solving large systems of equations due to the computational cost of
calculating determinants. Additionally, the method requires the determinant of the coefficient
matrix A to be nonzero for a unique solution to exist.
Example:
x1+x2-x3= -3
6x1+2x2+2x3= 2
-3x1+4x2+x3= 1
Solution:
In augmented form:

|  1   1  -1 | -3 |
|  6   2   2 |  2 |
| -3   4   1 |  1 |

    |  1   1  -1 |
D = |  6   2   2 | = 1[2(1) - 2(4)] - 1[6(1) - 2(-3)] + (-1)[6(4) - 2(-3)]
    | -3   4   1 |   = 1(-6) - 1(12) - 1(30)
                    = -48

      | -3   1  -1 |
Dx1 = |  2   2   2 | = -3[2(1) - 2(4)] - 1[2(1) - 2(1)] + (-1)[2(4) - 2(1)]
      |  1   4   1 |   = -3(-6) - 1(0) - 1(6)
                       = 12

x1 = Dx1/D = 12/(-48) = -1/4

      |  1  -3  -1 |
Dx2 = |  6   2   2 | = 1[2(1) - 2(1)] + 3[6(1) - 2(-3)] + (-1)[6(1) - 2(-3)]
      | -3   1   1 |   = 1(0) + 3(12) - 1(12)
                       = 24

x2 = Dx2/D = 24/(-48) = -1/2

      |  1   1  -3 |
Dx3 = |  6   2   2 | = 1[2(1) - 2(4)] - 1[6(1) - 2(-3)] + (-3)[6(4) - 2(-3)]
      | -3   4   1 |   = 1(-6) - 1(12) - 3(30)
                       = -108

x3 = Dx3/D = (-108)/(-48) = 9/4

Therefore x1 = -1/4, x2 = -1/2, and x3 = 9/4, which satisfy all three equations.
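The hand calculation above translates directly into code. The helper names `det3` and `cramer3` below are my own; the cofactor expansion mirrors the expansion along the first row used in the worked solution.

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    """Solve Ax = b for a 3x3 system; requires det(A) != 0."""
    D = det3(A)
    xs = []
    for col in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][col] = b[r]          # replace column `col` with b
        xs.append(det3(Ai) / D)
    return xs

A = [[1, 1, -1], [6, 2, 2], [-3, 4, 1]]
b = [-3, 2, 1]
print(cramer3(A, b))   # -> [-0.25, -0.5, 2.25]
```

The printed values are the decimal forms of -1/4, -1/2, and 9/4 found above.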
Elimination of Unknowns
The "elimination of unknowns" is a general concept used in various mathematical contexts,
not just limited to systems of linear equations. It often involves manipulating equations or
expressions to eliminate one or more variables, making it easier to solve a problem or analyze a
system.
Elimination of unknowns is a process used to solve systems of linear equations. The primary
goal is to transform the system into a simpler form that makes it easier to find the values of the
variables. There are several methods for eliminating unknowns, but one common approach is
Gaussian Elimination.
The elimination of unknowns is a versatile concept that arises in different mathematical
scenarios, and the specific techniques used depend on the nature of the problem at hand. It
involves systematic manipulations or substitutions to simplify expressions or equations, making
them more amenable to analysis or solution.
Example:
x1 + x2 - x3 = -3
6x1 + 2x2 + 2x3 = 2
-3x1 + 4x2 + x3 = 1
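The example stops at the system statement, so here is one possible elimination sequence for it, worked as a sketch of my own. The combined equations are shown as comments; the final values match the Cramer's Rule solution of the same system.

```python
# Eliminate x1 from the last two equations:
#   eq2 - 6*eq1:  -4x2 + 8x3 = 20   ->  -x2 + 2x3 = 5   (after dividing by 4)
#   eq3 + 3*eq1:   7x2 - 2x3 = -8
# Adding the two reduced equations eliminates x3: 6x2 = -3.

x2 = -3 / 6            # from 6x2 = -3
x3 = (5 + x2) / 2      # back-substitute into -x2 + 2x3 = 5
x1 = -3 - x2 + x3      # back-substitute into x1 + x2 - x3 = -3
print(x1, x2, x3)      # -> -0.25 -0.5 2.25
```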
B. LARGE SCALE
• Gaussian Elimination with pivoting
Gaussian Elimination with pivoting is an extension of the Gaussian Elimination method
used for solving systems of linear equations. The basic idea behind Gaussian Elimination is to
transform the system of linear equations into an equivalent upper triangular system, making it
easier to solve. However, when dealing with matrices that may have zero or very small pivots,
pivoting becomes necessary to avoid numerical instability and division by small numbers.
Gaussian Elimination is a method used to solve systems of linear equations by
transforming the augmented matrix (a matrix that includes both the coefficient matrix and the
column of constants) into its row-echelon or reduced row-echelon form. The goal is to simplify
the system and make it easier to find the values of the variables.
Example 1:
6x1-2x2+2x3+4x4= 16
12x1-8x2+6x3+10x4= 26
3x1-13x2+9x3+3x4= -19
-6x1+ 4x2+x3-18x4= -34
Solution:
Augmented matrix, with the elimination multipliers noted at the right:

[  6   -2    2    4 |  16 ]
[ 12   -8    6   10 |  26 ]   m21 = 12/6 = 2
[  3  -13    9    3 | -19 ]   m31 = 3/6 = 0.5
[ -6    4    1  -18 | -34 ]   m41 = -6/6 = -1

After eliminating x1 (R2 - 2R1, R3 - 0.5R1, R4 + R1):

[  6   -2    2    4 |  16 ]
[  0   -4    2    2 |  -6 ]
[  0  -12    8    1 | -27 ]   m32 = -12/(-4) = 3
[  0    2    3  -14 | -18 ]   m42 = 2/(-4) = -0.5

After eliminating x2 (R3 - 3R2, R4 + 0.5R2):

[  6   -2    2    4 |  16 ]
[  0   -4    2    2 |  -6 ]
[  0    0    2   -5 |  -9 ]
[  0    0    4  -13 | -21 ]   m43 = 4/2 = 2

After eliminating x3 (R4 - 2R3):

[  6   -2    2    4 |  16 ]
[  0   -4    2    2 |  -6 ]
[  0    0    2   -5 |  -9 ]
[  0    0    0   -3 |  -3 ]

Back substitution:
-3x4 = -3                          x4 = 1
2x3 - 5(1) = -9                    x3 = (-9 + 5)/2 = -2
-4x2 + 2(-2) + 2(1) = -6           x2 = (-6 + 4 - 2)/(-4) = 1
6x1 - 2(1) + 2(-2) + 4(1) = 16     x1 = (16 + 2 + 4 - 4)/6 = 3

Therefore x1 = 3, x2 = 1, x3 = -2, x4 = 1.

Example 2:
2x1+4x2-2x3-2x4 = -4
x1+2x2+4x3-3x4 = 5
-3x1-3x2+8x3-2x4 = 7
-x1+x2+6x3-3x4 = 7
Solution:
Augmented matrix, with the elimination multipliers noted at the right:

[  2    4   -2   -2 |  -4 ]
[  1    2    4   -3 |   5 ]   m21 = 1/2 = 0.5
[ -3   -3    8   -2 |   7 ]   m31 = -3/2 = -1.5
[ -1    1    6   -3 |   7 ]   m41 = -1/2 = -0.5

After eliminating x1 (R2 - 0.5R1, R3 + 1.5R1, R4 + 0.5R1):

[  2    4   -2   -2 |  -4 ]
[  0    0    5   -2 |   7 ]
[  0    3    5   -5 |   1 ]
[  0    3    5   -4 |   5 ]

The second pivot is zero, so swap the second and fourth rows of the augmented matrix:

[  2    4   -2   -2 |  -4 ]
[  0    3    5   -4 |   5 ]
[  0    3    5   -5 |   1 ]   m32 = 3/3 = 1
[  0    0    5   -2 |   7 ]

After eliminating x2 (R3 - R2):

[  2    4   -2   -2 |  -4 ]
[  0    3    5   -4 |   5 ]
[  0    0    0   -1 |  -4 ]
[  0    0    5   -2 |   7 ]

The third pivot is zero, so swap the third and fourth rows of the augmented matrix:

[  2    4   -2   -2 |  -4 ]
[  0    3    5   -4 |   5 ]
[  0    0    5   -2 |   7 ]
[  0    0    0   -1 |  -4 ]

Back substitution:
-x4 = -4                            x4 = 4
5x3 - 2(4) = 7                      x3 = (7 + 8)/5 = 3
3x2 + 5(3) - 4(4) = 5               x2 = (5 - 15 + 16)/3 = 2
2x1 + 4(2) - 2(3) - 2(4) = -4       x1 = (-4 - 8 + 6 + 8)/2 = 1

Therefore x1 = 1, x2 = 2, x3 = 3, x4 = 4.
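The procedure in this section can be written compactly in code. This is a sketch under my own naming (`gauss_pivot` is not from the report); it uses partial pivoting, which selects the largest available pivot in each column and therefore also handles the zero pivots encountered in Example 2.

```python
def gauss_pivot(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix so row swaps carry the constants along.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n - 1):
        # Partial pivoting: bring the largest |entry| in column k to the pivot row.
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        if p != k:
            M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    # Back substitution on the upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

A = [[2, 4, -2, -2], [1, 2, 4, -3], [-3, -3, 8, -2], [-1, 1, 6, -3]]
b = [-4, 5, 7, 7]
print([round(v, 6) for v in gauss_pivot(A, b)])   # -> [1.0, 2.0, 3.0, 4.0]
```

Note that partial pivoting swaps rows by magnitude rather than only when a pivot is exactly zero, so the intermediate matrices differ from the hand calculation, but the solution is the same.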
• LU Decomposition
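The LU Decomposition heading above is not followed by a worked example in this draft, so here is a minimal Doolittle-style sketch of my own (the name `lu_doolittle` is assumed; no pivoting is performed, so nonzero pivots are required). It factors Example 1's coefficient matrix.

```python
def lu_doolittle(A):
    """Factor A = L*U with 1s on the diagonal of L. Assumes nonzero pivots."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):                       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for r in range(i + 1, n):                   # column i of L
            L[r][i] = (A[r][i] - sum(L[r][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
L, U = lu_doolittle(A)
# L holds the elimination multipliers (2, 0.5, -1, ...) from the Gaussian
# elimination example, and U is the final upper triangular matrix.
```

Once A = LU is available, solving Ax = b reduces to one forward substitution (Ly = b) and one back substitution (Ux = y), which is the main advantage when the same matrix must be solved against many right-hand sides.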
Iterative Methods for solving Linear Systems
Jacobi Method
The Jacobi method is an iterative numerical technique used for solving systems of linear
equations. It's particularly suitable for large, sparse matrices. The method is named after the
German mathematician Carl Gustav Jacob Jacobi. The Jacobi method aims to find the solution to
the linear system Ax=b by iteratively refining an initial guess until a sufficiently accurate
solution is obtained.
Here's a basic outline of the Jacobi method:
1. System of Equations: Given a system of linear equations Ax=b, where A is a square
matrix, x is the column vector of unknowns, and b is the column vector of constants.
2. Decompose Matrix A: Decompose matrix A into its diagonal (D), lower triangular (L),
and upper triangular (U) components: A=D+L+U where:
D is the diagonal part of A,
L is the strictly lower triangular part of A,
U is the strictly upper triangular part of A.
3. Iteration: The Jacobi iteration formula is given by: x(k+1)=D−1(b−(L+U)x(k)) where x(k) is
the current approximation, D−1 is the inverse of the diagonal matrix D, and k is the
iteration index.
4. Convergence Criteria: Repeat the iteration until a convergence criterion is met.
Common criteria include a specified number of iterations, a desired level of accuracy, or
when the change in the solution between iterations is sufficiently small.
It's important to note that the Jacobi method has some limitations. It may converge slowly
for certain types of matrices, and its convergence is influenced by the spectral radius of the
iteration matrix. For systems with diagonally dominant or symmetric positive definite matrices,
the Jacobi method can be effective. However, for more general cases or when faster convergence
is needed, other iterative methods like the Gauss-Seidel method or more advanced methods like
Conjugate Gradient may be considered.
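The four-step outline above can be sketched in a few lines of Python. The function name `jacobi` is my own; the demonstration system is diagonally dominant, so the iteration is guaranteed to converge.

```python
def jacobi(A, b, x0, iterations):
    """Jacobi iteration for Ax = b, starting from the guess x0."""
    n = len(A)
    x = x0[:]
    for _ in range(iterations):
        x_new = [0.0] * n
        for i in range(n):
            # x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new            # every component was computed from the PREVIOUS iterate
    return x

# Diagonally dominant example, where Jacobi converges quickly:
A = [[10, 1, 1], [1, 10, 1], [1, 1, 10]]
b = [12, 12, 12]
print(jacobi(A, b, [0, 0, 0], 25))   # each component tends to 1.0
```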
Example 1:
6x1 - 2x2 + 2x3 + 4x4 = 16       x1 = (16 + 2x2 - 2x3 - 4x4)/6
12x1 - 8x2 + 6x3 + 10x4 = 26     x2 = (-26 + 12x1 + 6x3 + 10x4)/8
3x1 - 13x2 + 9x3 + 3x4 = -19     x3 = (-19 - 3x1 + 13x2 - 3x4)/9
-6x1 + 4x2 + x3 - 18x4 = -34     x4 = (34 - 6x1 + 4x2 + x3)/18
Assumption:
x1=0 x3=0
x2=0 x4=0
n x1 x2 x3 x4
0 0 0 0 0
1 2.66667 -3.25000 -2.11111 1.88889
2 1.02778 1.52778 -8.32407 0.16049
3 5.84362 -7.75077 -0.30041 1.42335
4 -0.76569 7.06932 -15.72900 -1.79807
5 11.46482 -18.44287 8.95471 2.84124
Note that the iterates in the table are growing rather than settling down: this coefficient matrix is not diagonally dominant (in the second row, |-8| < |12| + |6| + |10|), so the Jacobi iteration diverges for this system.
Gauss-Seidel Method
The Gauss-Seidel method refines the Jacobi iteration by substituting each newly computed value immediately within the same sweep, which generally accelerates convergence when the iteration converges at all. The following example uses this update pattern.
Example:
2x1 + 4x2 - 2x3 - 2x4 = -4      x1 = (-4 - 4x2 + 2x3 + 2x4)/2
x1 + 2x2 + 4x3 - 3x4 = 5        x2 = (5 - x1 - 4x3 + 3x4)/2
-3x1 - 3x2 + 8x3 - 2x4 = 7      x3 = (7 + 3x1 + 3x2 + 2x4)/8
-x1 + x2 + 6x3 - 3x4 = 7        x4 = (-7 - x1 + x2 + 6x3)/3

Assumption (initial guess): x1 = 0, x2 = 0, x3 = 0, x4 = 0

First iteration, substituting each new value as soon as it is found:
x1 = (-4 - 4(0) + 2(0) + 2(0))/2 = -2
x2 = (5 - (-2) - 4(0) + 3(0))/2 = 3.5
x3 = (7 + 3(-2) + 3(3.5) + 2(0))/8 = 1.4375
x4 = (-7 - (-2) + (3.5) + 6(1.4375))/3 = 2.375
n x1 x2 x3 x4
0 0 0 0 0
1 -2.00000 3.50000 1.43750 2.37500
2 -5.18750 5.78125 1.69141 4.70573
3 -7.16536 9.75846 3.02384 9.35563
4 -9.13745 15.05448 5.43280 16.59624
5 -10.07994 21.56873 9.33236 26.88094
6 -8.92417 28.61878 14.98071 40.14240
7 -4.11444 34.80940 22.42121 55.48371
8 6.28612 37.74008 31.25575 70.66282
9 24.43841 33.76352 40.36643 81.50790
10 52.34728 17.85534 47.57796 81.32527
11 91.19254 -16.26428 49.30441 60.45655
12 140.28954 -75.56877 40.25943 6.23275
13 195.62971 -166.48458 13.36261 -96.31287
14 248.01890 -292.70399 -39.96012 -262.49454
15 280.95330 -451.79822 -128.81548 -504.21480
16 268.56616 -630.47432 -260.89426 -823.80201
17 174.25236 -798.54068 -439.18362 -1204.96493
18 -49.06719 -902.04655 -657.03388 -1600.72755
19 -455.66833 -856.68940 -891.44104 -1918.88910
20 -1098.95134 -543.47590 -1094.75749 -2006.68983
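The worked sweep above uses each updated value immediately; that pattern can be sketched as follows (the function name `gauss_seidel` is my own). Note that for this system the iteration still diverges, as the table shows, because the matrix is neither diagonally dominant nor symmetric positive definite; the sketch therefore reproduces only the first sweep.

```python
def gauss_seidel(A, b, x0, iterations):
    """Gauss-Seidel iteration for Ax = b, starting from the guess x0."""
    n = len(A)
    x = x0[:]
    for _ in range(iterations):
        for i in range(n):
            # Unlike Jacobi, x[j] for j < i already holds this sweep's value.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Reproduce the first sweep of the example above:
A = [[2, 4, -2, -2], [1, 2, 4, -3], [-3, -3, 8, -2], [-1, 1, 6, -3]]
b = [-4, 5, 7, 7]
print(gauss_seidel(A, b, [0, 0, 0, 0], 1))   # -> [-2.0, 3.5, 1.4375, 2.375]
```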
II. ROOTS OF EQUATIONS
A. OPEN METHOD
1. Fixed-Point Iteration
Fixed-point iteration is an iterative numerical method used to find the fixed points of a given
function. In the context of solving equations, it's often applied to find the roots or solutions of
nonlinear equations. The method is based on the concept of repeatedly applying a function to an
initial guess until convergence to a fixed point is achieved.
Fixed-Point Iteration Method:
Given a nonlinear equation f(x)=0, the fixed-point iteration aims to find a fixed point of
the function g(x) such that g(x)=x. This fixed point is also a solution to the original equation. The
iteration formula is given by:
xn+1=g(xn)
where:
xn is the current approximation at iteration n,
xn+1 is the next approximation,
g(x) is a chosen function that satisfies g(x)=x.
Algorithm:
1. Choose an Initial Guess: Select an initial guess x0.
2. Iteration: Apply the iteration formula xn+1=g(xn) repeatedly until convergence. The
process stops when a certain convergence criterion is met.
3. Convergence Criteria: Common convergence criteria include reaching a specified number
of iterations, achieving a desired level of accuracy, or when the change in the solution
between iterations is sufficiently small.
It's important to note that the success of the fixed-point iteration method depends on the
choice of the function g(x) and the initial guess. Additionally, the method may not converge for
all functions or initial guesses, and convergence properties should be carefully considered for
each specific case.
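The algorithm above can be sketched in a few lines. The name `fixed_point` and the tolerance defaults are assumptions of mine; g(x) is the cube-root rearrangement used in the example that follows.

```python
def fixed_point(g, x0, tol=1e-5, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:    # stopping criterion: small change per step
            return x_next
        x = x_next
    return x

# f(x) = x^3 - x - 1 rewritten as x = (x + 1)^(1/3):
g = lambda x: (x + 1) ** (1.0 / 3.0)
root = fixed_point(g, 1.5)
print(round(root, 5))   # -> 1.32472
```

Convergence here is reliable because |g'(x)| < 1 near the root; a different rearrangement of the same equation (for example g(x) = x^3 - 1) would diverge from the same starting point.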
Example:
f(x) = x^3 - x - 1

x    | 0  | 1  | 2
f(x) | -1 | -1 | 5

The sign change between x = 1 and x = 2 brackets the root, so take a = 1 and b = 2.
Rewrite x^3 - x - 1 = 0 as x^3 = x + 1, so x = (x + 1)^(1/3) and g(x) = (x + 1)^(1/3).
Initial guess: x0 = (a + b)/2 = (1 + 2)/2 = 1.5
n x0 x1= g(x0)
0 1.50000 1.35721
1 1.35721 1.33086
2 1.33086 1.32588
3 1.32588 1.32494
4 1.32494 1.32476
5 1.32476 1.32473
6 1.32473 1.32472
7 1.32472 1.32472
8 1.32472 1.32472
9 1.32472 1.32472
10 1.32472 1.32472
2. Newton-Raphson Method
The Newton-Raphson method, also known as the Newton method, is an iterative numerical
technique for finding successively better approximations to the roots (or zeros) of a real-valued
function. It is particularly efficient for finding roots of differentiable functions and is commonly
used for solving nonlinear equations.
Newton-Raphson Method:
Given a continuous and differentiable function f(x), the Newton-Raphson iteration formula is
defined as:
xn+1 = xn - f(xn) / f'(xn)
where:
xn is the current approximation at iteration n,
xn+1 is the next approximation,
f(xn) is the function value at xn,
f′(xn) is the derivative of the function with respect to x evaluated at xn.
Algorithm:
1. Choose an Initial Guess: Select an initial guess x0.
2. Iteration: Apply the Newton-Raphson iteration formula repeatedly until convergence.
The process stops when a certain convergence criterion is met.
3. Convergence Criteria: Common convergence criteria include reaching a specified
number of iterations, achieving a desired level of accuracy, or when the change in the
solution between iterations is sufficiently small.
The Newton-Raphson method is known for its quadratic convergence, which means that the
number of correct digits roughly doubles with each iteration when the method is converging.
However, it's important to note that the method may not converge for all functions or initial
guesses, and convergence properties should be carefully considered for each specific case.
Additionally, it requires the availability of the derivative f′(x) for the given function.
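The iteration formula above translates directly into code. This is a sketch under my own naming (`newton_raphson`, with `fp` for the derivative), applied to Example 1's polynomial.

```python
def newton_raphson(f, fp, x0, tol=1e-8, max_iter=50):
    """Newton-Raphson iteration; fp must be the derivative of f."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # |f(x)| small enough: accept x as the root
            return x
        x = x - fx / fp(x)         # x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x

# Example 1's polynomial: f(x) = 4x^3 - 15x^2 + 17x - 6
f = lambda x: 4 * x**3 - 15 * x**2 + 17 * x - 6
fp = lambda x: 12 * x**2 - 30 * x + 17
print(newton_raphson(f, fp, 1.5))   # -> 0.75
```

A single step from x0 = 1.5 already lands on the root 0.75, matching the first row of Example 1's table.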
Example 1:
f(x) = 4x^3 - 15x^2 + 17x - 6
f'(x) = 12x^2 - 30x + 17

x    | 0  | 1 | 2 | 3
f(x) | -6 | 0 | 0 | 18
n x0 f(x0) f'(x0) x1
0 1.50000 -0.75000 -1.00000 0.75000
1 0.75000 0.00000 23.75000 0.75000
2 0.75000 0.00000 23.75000 0.75000
3 0.75000 0.00000 23.75000 0.75000
4 0.75000 0.00000 23.75000 0.75000
5 0.75000 0.00000 23.75000 0.75000
Example 2:
f(x) = xe^(-x)
f'(x) = e^(-x) - xe^(-x)

x    | -1       | 0 | 1        | 2        | 3
f(x) | -2.71828 | 0 | 0.367879 | 0.270671 | 0.149361
n x0 f(x0) f'(x0) x1
0 -0.50000 -0.82436 2.47308 -0.16667
1 -0.16667 -0.19689 1.37825 -0.02381
2 -0.02381 -0.02438 1.04848 -0.00055
3 -0.00055 -0.00055 1.00111 0.00000
4 0.00000 0.00000 1.00000 0.00000
5 0.00000 0.00000 1.00000 0.00000
Example 3:
f(x) = 3e^x - 4cos(x)
f'(x) = 3e^x + 4sin(x)

x    | 0  | 1 | 2  | 3
f(x) | -1 | 6 | 24 | 64
n x0 f(x0) f'(x0) x1
0 0.50000 1.43583 6.86387 0.29081
1 0.29081 0.18050 5.15947 0.25583
2 0.25583 0.00478 4.88679 0.25485
3 0.25485 0.00000 4.87921 0.25485
4 0.25485 0.00000 4.87921 0.25485
5 0.25485 0.00000 4.87921 0.25485
3. Secant Method
The Secant Method is an iterative numerical technique used for finding the roots of a real-
valued function. It is similar to the Newton-Raphson method but doesn't require the evaluation of
the derivative of the function. Instead of using the derivative, the Secant Method approximates
the derivative using finite differences.
Secant Method:
Given a continuous function f(x), the Secant Method iteration formula is defined as:
xn+1 = xn - f(xn)(xn - xn-1) / (f(xn) - f(xn-1))
where:
xn is the current approximation at iteration n,
xn−1 is the previous approximation,
xn+1 is the next approximation,
f(xn) is the function value at xn,
f(xn−1) is the function value at xn−1.
Algorithm:
1. Choose Initial Guesses: Select initial guesses x0 and x1.
2. Iteration: Apply the Secant Method iteration formula repeatedly until convergence. The
process stops when a certain convergence criterion is met.
3. Convergence Criteria: Common convergence criteria include reaching a specified
number of iterations, achieving a desired level of accuracy, or when the change in the
solution between iterations is sufficiently small.
The Secant Method is generally less computationally expensive than the Newton-
Raphson method because it doesn't require the computation of the derivative. However, its
convergence may be slower compared to Newton's method, and it may not converge for certain
functions or initial guesses. As with any numerical method, care should be taken to choose
appropriate initial guesses and to check for convergence.
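The secant iteration above can be sketched as follows (the name `secant` and the tolerance defaults are my own); it is applied here to Example 3's function, starting from the same neighborhood used in the Newton-Raphson table.

```python
import math

def secant(f, x0, x1, tol=1e-8, max_iter=50):
    """Secant iteration: replaces f'(x) with a finite-difference slope."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1 - f0) < 1e-15:          # guard against division by ~zero
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example 3's function: f(x) = 3e^x - 4cos(x)
f = lambda x: 3 * math.exp(x) - 4 * math.cos(x)
print(round(secant(f, 0.0, 0.5), 5))   # -> 0.25485
```

Unlike Newton-Raphson, two starting points are required, but no derivative is needed; the root found agrees with the Newton-Raphson result to five decimals.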
Example 1:
f(x) = 4x^3 - 15x^2 + 17x - 6

x    | 0  | 1 | 2 | 3
f(x) | -6 | 0 | 0 | 18

Example 2:
f(x) = xe^(-x)

x    | -1       | 0       | 1       | 2
f(x) | -2.71828 | 0.00000 | 0.36788 | 0.27067

Example 3:
f(x) = 3e^x - 4cos(x)

x    | 0        | 1       | 2        | 3
f(x) | -1.00000 | 5.99364 | 23.83176 | 64.21658
B. BRACKETING METHOD
• REGULA-FALSI METHOD (FALSE-POSITION METHOD)

Example 1:
f(x) = 4x^3 - 15x^2 + 17x - 6

x    | 0  | 1 | 2 | 3  | 4
f(x) | -6 | 0 | 0 | 18 | 78

Example 2:
f(x) = xe^(-x)

Example 3:
f(x) = 3e^x - 4cos(x)

x    | -1         | 0  | 1        | 2
f(x) | -1.0575709 | -1 | 5.993636 | 23.83176
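The false-position method keeps a sign-change bracket [a, b] like bisection, but takes the x-intercept of the chord through (a, f(a)) and (b, f(b)) as the next estimate. A sketch under my own naming, applied to Example 3's function on the bracket [0, 1]:

```python
import math

def regula_falsi(f, a, b, tol=1e-8, max_iter=100):
    """False-position iteration; requires f(a) and f(b) to have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # x-intercept of the chord
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:                    # root lies in [a, c]
            b, fb = c, fc
        else:                              # root lies in [c, b]
            a, fa = c, fc
    return c

f = lambda x: 3 * math.exp(x) - 4 * math.cos(x)
print(round(regula_falsi(f, 0.0, 1.0), 5))   # -> 0.25485
```

Because the bracket is never lost, the method always converges for a continuous function with a sign change, though typically more slowly than the open methods above.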