Numerical Methods in Civil Engineering

The document discusses numerical methods for solving systems of linear equations. It covers small-scale methods like graphical methods, Cramer's rule, and elimination of unknowns. It also covers large-scale methods like Gaussian elimination with pivoting, naive Gaussian elimination, LU decomposition, and iterative methods like Jacobi and Gauss-Seidel.

College of Engineering and Architecture

Civil Engineering Department


Brgy. Bajumpandan, Dumaguete City

ENM 331 – NUMERICAL METHODS AND ANALYSIS


MWF 5:30 PM - 7:30 PM

NUMERICAL METHODS AND ANALYSIS COMPILATION

Veegay T. Kibete
BSCE III

Engr. Irismay T. Jumawan MBA, MEngCE


Instructor

January 2024
TABLE OF CONTENTS
I. LINEAR EQUATION SYSTEM
A. SMALL SCALE
1. GRAPHICAL METHOD
2. CRAMER'S RULE
3. ELIMINATION OF UNKNOWNS
B. LARGE SCALE
1. GAUSSIAN ELIMINATION WITH PIVOTING
2. NAIVE GAUSSIAN ELIMINATION
3. LU DECOMPOSITION
4. ITERATION METHODS
a. JACOBI METHOD
b. GAUSS SEIDEL

II. CASE STUDIES: LINEAR ALGEBRAIC EQUATIONS

III. SYSTEMS OF NON-LINEAR EQUATIONS


A. OPEN METHOD
1. FIXED-POINT ITERATION
2. NEWTON-RAPHSON METHOD
3. SECANT METHOD

B. BRACKETING METHOD
1. BISECTION METHOD
2. REGULA-FALSI METHOD (FALSE-POSITION METHOD)
INTRODUCTION

Numerical methods are mathematical techniques used to solve problems that may not have
exact analytical solutions. These methods involve representing continuous mathematical models
with discrete approximations and using computational algorithms to obtain numerical solutions.
Common applications include solving equations, integration, differentiation, and simulating
physical systems. Numerical methods are crucial in fields such as engineering, physics, computer
science, and finance where analytical solutions may be impractical or impossible to find.
Numerical methods in the field of engineering encompass a set of computational approaches and
algorithms utilized to address mathematical problems that often lack exact analytical solutions or
involve complex, real-world scenarios. In essence, these methods bridge the gap between
mathematical models and practical engineering applications by employing techniques that
provide numerical approximations to intricate problems.
Numerical methods play a vital role in solving complex problems that arise in the design,
analysis, and optimization of systems. Engineers often encounter situations where analytical
solutions are challenging or impractical. Numerical methods help address these challenges by
providing approximate solutions through computational techniques.
In the context of algebra, numerical methods are employed for finding roots or solutions to
equations where analytical solutions may be elusive or impractical. One common numerical
method for root-finding is the Newton-Raphson method. This iterative technique starts with an
initial guess and refines it through successive calculations, converging toward the actual root.
Numerical methods for root-finding are particularly valuable when dealing with complex or
transcendental equations, offering a practical approach to obtaining numerical approximations
for solutions. While these methods don't guarantee exact solutions, they provide a
computationally efficient means of finding roots in situations where direct algebraic solutions are
challenging.
The goal of numerical methods is to offer efficient and accurate computational techniques for
approximating solutions to mathematical problems, especially when analytical solutions are
challenging or unavailable. These methods strive to strike a balance between precision and
computational efficiency, providing robust and versatile approaches applicable across a range of
mathematical models and engineering scenarios. The key objectives include achieving accuracy,
convergence to solutions through iterative processes, robustness in handling diverse problem
types, and stability to ensure reliability in the face of input variations. Ultimately, numerical
methods aim to deliver practical solutions for complex problems in algebra and other
mathematical domains.
I. LINEAR EQUATION SYSTEM

A. SMALL SCALE

 Graphical Method
Graphical methods are often used in numerical analysis to visually represent data,
relationships, and solutions to problems. These methods can provide insights, aid in
understanding complex phenomena, and facilitate problem-solving.
The term "graphical method in numerical" generally refers to the use of graphical techniques
and visual representations to analyze and solve numerical problems. In the context of
numerical analysis, graphical methods are often employed to gain insights into mathematical
relationships, visualize data, and facilitate problem-solving.
The graphical method involves the utilization of visual representations, such as graphs,
charts, and plots, to analyze numerical data, understand mathematical relationships, and solve
problems in various fields of science, engineering, and mathematics.
Overall, the graphical method serves as a powerful complement to numerical techniques,
providing a visual framework for understanding and interpreting numerical information. It
enhances the clarity and accessibility of numerical analysis, making it a valuable tool in
scientific and engineering disciplines.

Example:
Solve graphically.

0.5x1-x2=-9.5
1.02x1-2x2=-18.8

Solution:
0.5x1 - x2 = -9.5
x1   0    2     4     6     8     10   12    14    16    18
x2   9.5  10.5  11.5  12.5  13.5  14.5 15.5  16.5  17.5  18.5

1.02x1 - 2x2 = -18.8
x1   0    2     4     6     8     10   12    14    16    18
x2   9.4  10.42 11.44 12.46 13.48 14.5 15.52 16.54 17.56 18.58

(Plot of the two lines 0.5x1 - x2 = -9.5 and 1.02x1 - 2x2 = -18.8: they intersect at the point (10, 14.5).)
Therefore, the solutions for the unknowns are x1 = 10 and x2 = 14.5, which will
satisfy the given equations.
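The intersection read off the graph can be double-checked algebraically; the following is a minimal pure-Python sketch using Cramer's rule for a 2x2 system (the helper name solve_2x2 is illustrative, not from the text):

```python
def solve_2x2(a11, a12, b1, a21, a22, b2):
    # Solves a11*x1 + a12*x2 = b1 and a21*x1 + a22*x2 = b2 by Cramer's rule.
    det = a11 * a22 - a12 * a21          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("lines are parallel or coincident")
    x1 = (b1 * a22 - a12 * b2) / det
    x2 = (a11 * b2 - b1 * a21) / det
    return x1, x2

# The system plotted above: 0.5x1 - x2 = -9.5 and 1.02x1 - 2x2 = -18.8
x1, x2 = solve_2x2(0.5, -1.0, -9.5, 1.02, -2.0, -18.8)
print(x1, x2)  # approximately 10.0 and 14.5, matching the graph
```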
 Cramer's Rule
Cramer's Rule is a method used to solve a system of linear equations with as many equations
as unknowns, assuming that the system's determinant is nonzero. It provides a formula for
finding the individual unknowns by expressing the solution in terms of determinants.
It's important to note that Cramer's Rule has some limitations and may not be the most
efficient method for solving large systems of equations due to the computational cost of
calculating determinants. Additionally, the method requires the determinant of the coefficient
matrix A to be nonzero for a unique solution to exist.
Example:
x1+x2-x3= -3
6x1+2x2+2x3= 2
-3x1+4x2+x3= 1

Solution:

The system in augmented form:

x1 + x2 - x3 = -3        [  1  1 -1 | -3 ]
6x1 + 2x2 + 2x3 = 2      [  6  2  2 |  2 ]
-3x1 + 4x2 + x3 = 1      [ -3  4  1 |  1 ]

    |  1  1 -1 |
D = |  6  2  2 | = 1[2(1)-4(2)] - 1[6(1)-(-3)(2)] + (-1)[6(4)-(-3)(2)]
    | -3  4  1 |   = 1(-6) - 1(12) - 1(30) = -48

      | -3  1 -1 |
Dx1 = |  2  2  2 | = -3[2(1)-4(2)] - 1[2(1)-(1)(2)] + (-1)[2(4)-(1)(2)]
      |  1  4  1 |   = -3(-6) - 1(0) + (-1)(6) = 12

x1 = Dx1/D = 12/(-48) = -1/4

      |  1 -3 -1 |
Dx2 = |  6  2  2 | = 1[2(1)-1(2)] + 3[6(1)-(-3)(2)] + (-1)[6(1)-(-3)(2)]
      | -3  1  1 |   = 1(0) + 3(12) + (-1)(12) = 24

x2 = Dx2/D = 24/(-48) = -1/2

      |  1  1 -3 |
Dx3 = |  6  2  2 | = 1[2(1)-4(2)] - 1[6(1)-(-3)(2)] + (-3)[6(4)-(-3)(2)]
      | -3  4  1 |   = 1(-6) - 1(12) + (-3)(30) = -108

x3 = Dx3/D = -108/(-48) = 9/4

Therefore: x1 = -1/4, x2 = -1/2, x3 = 9/4
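The determinant expansions above can be packaged into a short routine; the following is an illustrative pure-Python sketch (the names det3 and cramer3 are not from the text):

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    D = det3(A)
    xs = []
    for col in range(3):
        # Replace column `col` of A with b, then take the determinant
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][col] = b[r]
        xs.append(det3(Ai) / D)
    return xs

A = [[1, 1, -1], [6, 2, 2], [-3, 4, 1]]
b = [-3, 2, 1]
print(cramer3(A, b))  # [-0.25, -0.5, 2.25], i.e. x1 = -1/4, x2 = -1/2, x3 = 9/4
```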
 Elimination of Unknowns
The "elimination of unknowns" is a general concept used in various mathematical contexts,
not just limited to systems of linear equations. It often involves manipulating equations or
expressions to eliminate one or more variables, making it easier to solve a problem or analyze a
system.
Elimination of unknowns is a process used to solve systems of linear equations. The primary
goal is to transform the system into a simpler form that makes it easier to find the values of the
variables. There are several methods for eliminating unknowns, but one common approach is
Gaussian Elimination.
The elimination of unknowns is a versatile concept that arises in different mathematical
scenarios, and the specific techniques used depend on the nature of the problem at hand. It
involves systematic manipulations or substitutions to simplify expressions or equations, making
them more amenable to analysis or solution.
Example:
x1+x2-x3 = -3
6x1+2x2+2x3 = 2
-3x1+4x2+x3 = 1
B. LARGE SCALE
• Gaussian Elimination with pivoting
Gaussian Elimination with pivoting is an extension of the Gaussian Elimination method
used for solving systems of linear equations. The basic idea behind Gaussian Elimination is to
transform the system of linear equations into an equivalent upper triangular system, making it
easier to solve. However, when dealing with matrices that may have zero or very small pivots,
pivoting becomes necessary to avoid numerical instability and division by small numbers.
Gaussian Elimination is a method used to solve systems of linear equations by
transforming the augmented matrix (a matrix that includes both the coefficient matrix and the
column of constants) into its row-echelon or reduced row-echelon form. The goal is to simplify
the system and make it easier to find the values of the variables.

Example 1:
6x1-2x2+2x3+4x4= 16
12x1-8x2+6x3+10x4= 26
3x1-13x2+9x3+3x4= -19
-6x1+ 4x2+x3-18x4= -34

Solution:

Augmented matrix:

[  6  -2   2    4 |  16 ]
[ 12  -8   6   10 |  26 ]
[  3 -13   9    3 | -19 ]
[ -6   4   1  -18 | -34 ]

R2 - 2R1, R3 - 0.5R1, R4 + R1:

[  6  -2   2    4 |  16 ]
[  0  -4   2    2 |  -6 ]
[  0 -12   8    1 | -27 ]
[  0   2   3  -14 | -18 ]

R3 - 3R2, R4 + 0.5R2:

[  6  -2   2    4 |  16 ]
[  0  -4   2    2 |  -6 ]
[  0   0   2   -5 |  -9 ]
[  0   0   4  -13 | -21 ]

R4 - 2R3:

[  6  -2   2    4 |  16 ]
[  0  -4   2    2 |  -6 ]
[  0   0   2   -5 |  -9 ]
[  0   0   0   -3 |  -3 ]

Back-substitution:
-3x4 = -3                      →  x4 = 1
2x3 - 5(1) = -9                →  x3 = (-9 + 5)/2 = -2
-4x2 + 2(-2) + 2(1) = -6       →  x2 = (-6 + 4 - 2)/(-4) = 1
6x1 - 2(1) + 2(-2) + 4(1) = 16 →  x1 = (16 + 2 + 4 - 4)/6 = 3

Therefore: x1 = 3, x2 = 1, x3 = -2, x4 = 1

Example 2:
2x1+4x2-2x3-2x4 = -4
x1+2x2+4x3-3x4 = 5
-3x1-3x2+8x3-2x4 = 7
-x1+x2+6x3-3x4 = 7

Solution:

Augmented matrix:

[  2   4  -2  -2 | -4 ]
[  1   2   4  -3 |  5 ]
[ -3  -3   8  -2 |  7 ]
[ -1   1   6  -3 |  7 ]

R2 - 0.5R1, R3 + 1.5R1, R4 + 0.5R1:

[  2   4  -2  -2 | -4 ]
[  0   0   5  -2 |  7 ]
[  0   3   5  -5 |  1 ]
[  0   3   5  -4 |  5 ]

The pivot in position (2,2) is zero, so swap the second and fourth rows of the augmented matrix:

[  2   4  -2  -2 | -4 ]
[  0   3   5  -4 |  5 ]
[  0   3   5  -5 |  1 ]
[  0   0   5  -2 |  7 ]

R3 - R2:

[  2   4  -2  -2 | -4 ]
[  0   3   5  -4 |  5 ]
[  0   0   0  -1 | -4 ]
[  0   0   5  -2 |  7 ]

The pivot in position (3,3) is zero, so swap the third and fourth rows:

[  2   4  -2  -2 | -4 ]
[  0   3   5  -4 |  5 ]
[  0   0   5  -2 |  7 ]
[  0   0   0  -1 | -4 ]

Back-substitution:
-x4 = -4                          →  x4 = 4
5x3 - 2(4) = 7                    →  x3 = (7 + 8)/5 = 3
3x2 + 5(3) - 4(4) = 5             →  x2 = (5 + 16 - 15)/3 = 2
2x1 + 4(2) - 2(3) - 2(4) = -4     →  x1 = (-4 + 8 + 6 - 8)/2 = 1

Therefore: x1 = 1, x2 = 2, x3 = 3, x4 = 4
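The forward-elimination and back-substitution steps worked through above can be condensed into one routine; the following is an illustrative pure-Python sketch with partial pivoting (not the compilation's own code):

```python
def gauss_pivot(A, b):
    # Gaussian elimination with partial pivoting, then back-substitution.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for k in range(n - 1):
        # Pivot: move the row with the largest |entry| in column k to row k
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]          # elimination multiplier
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 1 above
A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
b = [16, 26, -19, -34]
print(gauss_pivot(A, b))  # approximately [3.0, 1.0, -2.0, 1.0]
```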

• Naive Gaussian Elimination


Naive Gaussian Elimination is a method for solving systems of linear equations by
transforming an augmented matrix into its row-echelon form and then back-substituting to find
the solutions. This method is called "naive" because it doesn't employ any optimizations or
pivoting strategies, making it more susceptible to numerical instability and potential division by
zero.
In the context of numerical linear algebra, Naive Gaussian Elimination is a direct method
for solving systems of linear equations by transforming the coefficient matrix into an upper
triangular form using elementary row operations. The method is called "naive" because it doesn't
incorporate pivoting strategies to deal with potential division by zero or to improve numerical
stability.

Example:
6x1-2x2+2x3+4x4= 16
12x1-8x2+6x3+10x4= 26
3x1-13x2+9x3+3x4= -19
-6x1+ 4x2+x3-18x4= -34

Solution:

Eliminate x1 from the second, third, and fourth equations using the multipliers
a21/a11 = 12/6 = 2, a31/a11 = 3/6 = 0.5, and a41/a11 = -6/6 = -1:

R2 - 2R1:    -4x2 + 2x3 + 2x4 = -6
R3 - 0.5R1:  -12x2 + 8x3 + x4 = -27
R4 + R1:     2x2 + 3x3 - 14x4 = -18

Eliminate x2 from the last two of these using a32/a22 = -12/(-4) = 3 and
a42/a22 = 2/(-4) = -0.5:

R3 - 3R2:    2x3 - 5x4 = -9
R4 + 0.5R2:  4x3 - 13x4 = -21

The reduced system is

6x1 - 2x2 + 2x3 + 4x4 = 16
-4x2 + 2x3 + 2x4 = -6
2x3 - 5x4 = -9
4x3 - 13x4 = -21

One further step (R4 - 2R3 gives -3x4 = -3) and back-substitution recover the same
solution as with pivoting: x1 = 3, x2 = 1, x3 = -2, x4 = 1.

• LU Decomposition
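The compilation gives no worked example for this section. As a hedged sketch only: a Doolittle-style factorization A = LU stores the Gaussian elimination multipliers in a unit-lower-triangular L and the resulting upper triangular system in U (illustrative code, assuming nonzero pivots as in naive elimination):

```python
def lu_doolittle(A):
    # Doolittle factorization: A = L*U with 1s on the diagonal of L.
    # No pivoting, so it assumes all pivots are nonzero (like naive elimination).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

# The matrix from the Gaussian elimination examples above
A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
L, U = lu_doolittle(A)
# L now holds the elimination multipliers (2, 0.5, -1, 3, -0.5, 2) and
# U is the upper triangular matrix produced by forward elimination.
```

Once L and U are known, Ax = b is solved by forward substitution (Ly = b) followed by back-substitution (Ux = y), which is cheap to repeat for many right-hand sides b.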
Iterative Methods for solving Linear Systems
 Jacobi Method
The Jacobi method is an iterative numerical technique used for solving systems of linear
equations. It's particularly suitable for large, sparse matrices. The method is named after the
German mathematician Carl Gustav Jacob Jacobi. The Jacobi method aims to find the solution to
the linear system Ax=b by iteratively refining an initial guess until a sufficiently accurate
solution is obtained.
Here's a basic outline of the Jacobi method:
1. System of Equations: Given a system of linear equations Ax=b, where A is a square
matrix, x is the column vector of unknowns, and b is the column vector of constants.
2. Decompose Matrix A: Decompose matrix A into its diagonal (D), lower triangular (L),
and upper triangular (U) components: A=D+L+U where:
 D is the diagonal part of A,
 L is the strictly lower triangular part of A,
 U is the strictly upper triangular part of A.
3. Iteration: The Jacobi iteration formula is given by: x(k+1)=D−1(b−(L+U)x(k)) where x(k) is
the current approximation, D−1 is the inverse of the diagonal matrix D, and k is the
iteration index.
4. Convergence Criteria: Repeat the iteration until a convergence criterion is met.
Common criteria include a specified number of iterations, a desired level of accuracy, or
when the change in the solution between iterations is sufficiently small.
It's important to note that the Jacobi method has some limitations. It may converge slowly
for certain types of matrices, and its convergence is influenced by the spectral radius of the
iteration matrix. For systems with diagonally dominant or symmetric positive definite matrices,
the Jacobi method can be effective. However, for more general cases or when faster convergence
is needed, other iterative methods like the Gauss-Seidel method or more advanced methods like
Conjugate Gradient may be considered.

Example 1:

6x1 - 2x2 + 2x3 + 4x4 = 16    →  x1 = (16 + 2x2 - 2x3 - 4x4)/6
12x1 - 8x2 + 6x3 + 10x4 = 26  →  x2 = (-26 + 12x1 + 6x3 + 10x4)/8
3x1 - 13x2 + 9x3 + 3x4 = -19  →  x3 = (-19 - 3x1 + 13x2 - 3x4)/9
-6x1 + 4x2 + x3 - 18x4 = -34  →  x4 = (34 - 6x1 + 4x2 + x3)/18

Assumption (initial guess): x1 = x2 = x3 = x4 = 0
n x1 x2 x3 x4
0 0 0 0 0
1 2.66667 -3.25000 -2.11111 1.88889
2 1.02778 1.52778 -8.32407 0.16049
3 5.84362 -7.75077 -0.30041 1.42335
4 -0.76569 7.06932 -15.72900 -1.79807
5 11.46482 -18.44287 8.95471 2.84124

6 -8.36002 24.21481 -33.51949 -5.53365


7 25.60053 -47.84671 37.49706 8.19444
8 -31.24422 73.51664 -82.48802 -15.19405
9 64.79759 -130.97490 119.55902 24.05799
10 -96.88330 213.68813 -220.91561 -42.17367
11 175.65036 -366.97875 352.90185 69.39649
12 -283.55786 611.64754 -613.87381 -118.60641
13 490.24472 -1037.25015 1015.43454 198.22575
14 -813.71206 1741.47517 -1729.85149 -335.61302
15 1383.51757 -2940.72299 2896.46139 564.01787
16 -2319.07337 4949.39472 -4899.00057 -951.86311
17 3920.04050 -8345.93937 8237.32676 1602.61214
18 -6593.49680 14058.07100 -13898.24108 -2701.81521

19 11122.64750 -23694.44502 23402.42879 4551.61243


20 -18730.69956 39922.05838 -39452.17390 -7670.95757

First iteration:
x1 = (16 + 2(0) - 2(0) - 4(0))/6 = 2.66667
x2 = (-26 + 12(0) + 6(0) + 10(0))/8 = -3.25
x3 = (-19 - 3(0) + 13(0) - 3(0))/9 = -2.11111
x4 = (34 - 6(0) + 4(0) + (0))/18 = 1.88889
Example 2:

2x1 + 4x2 - 2x3 - 2x4 = -4  →  x1 = (-4 - 4x2 + 2x3 + 2x4)/2
x1 + 2x2 + 4x3 - 3x4 = 5    →  x2 = (5 - x1 - 4x3 + 3x4)/2
-3x1 - 3x2 + 8x3 - 2x4 = 7  →  x3 = (7 + 3x1 + 3x2 + 2x4)/8
-x1 + x2 + 6x3 - 3x4 = 7    →  x4 = (-7 - x1 + x2 + 6x3)/3

Assumption (initial guess): x1 = x2 = x3 = x4 = 0

First iteration:
x1 = (-4 - 4(0) + 2(0) + 2(0))/2 = -2
x2 = (5 - (0) - 4(0) + 3(0))/2 = 2.5
x3 = (7 + 3(0) + 3(0) + 2(0))/8 = 0.875
x4 = (-7 - (0) + (0) + 6(0))/3 = -2.33333
n x1 x2 x3 x4
0 0.00000 0.00000 0.00000 0.00000
1 -2.00000 2.50000 0.87500 -2.33333
2 -8.45833 -1.75000 0.47917 0.91667
3 2.89583 7.14583 -2.72396 0.86111
4 -18.15451 7.79167 4.85590 -6.36458
5 -19.09201 -7.68142 -4.60221 16.02720
6 24.78783 45.29123 -5.15824 -7.73423
7 -105.47493 -11.17878 25.22109 -5.81535
8 39.76331 -3.92773 -44.32398 79.54090
9 41.07239 190.57766 34.19857 -105.54498
10 -454.50172 -244.75079 61.35752 115.89889
11 664.75800 280.88415 -232.36997 190.29869
12 -605.83959 420.30897 403.06548 -595.03123
13 -1034.58369 -1393.25801 -217.45679 1145.84715
14 3712.90637 2673.47614 -623.10385 -556.80501
15 -6528.86114 -1442.95300 2256.56719 -1595.01778
16 3545.45541 -3638.73048 -3387.30975 6206.10376
17 10094.25497 14313.54743 1517.42279 -9171.68146
18 -36283.35353 -21836.99525 6860.88054 4438.94306
19 54971.81410 11080.83028 -20684.52003 18534.88050
20 -24313.30009 41687.95375 29404.33677 -56001.70132
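A single Jacobi sweep can be reproduced in a few lines; the following is an illustrative pure-Python sketch. Note that the growing values in the tables above indicate divergence, which is consistent with these coefficient matrices not being diagonally dominant:

```python
def jacobi_step(A, b, x):
    # One Jacobi sweep: every component is computed from the PREVIOUS iterate only.
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

# Example 1 above
A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
b = [16, 26, -19, -34]
x = [0.0, 0.0, 0.0, 0.0]
x = jacobi_step(A, b, x)
print(x)  # first sweep, matching row n = 1 of the table:
          # approximately [2.66667, -3.25, -2.11111, 1.88889]
```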

 Gauss Seidel Method


The Gauss-Seidel method is an iterative numerical technique used for solving systems of
linear equations. Like the Jacobi method, it's particularly well-suited for large, sparse matrices.
The method is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig
von Seidel.
The Gauss-Seidel method generally converges faster than the Jacobi method for many types
of matrices. However, its effectiveness can still be influenced by the properties of the matrix A,
and in some cases, it may exhibit slow convergence or even divergence. For more robust
convergence, especially in the context of solving linear systems arising from partial differential
equations, techniques like Successive Over-Relaxation (SOR) may be employed.

Example 1:

6x1 - 2x2 + 2x3 + 4x4 = 16    →  x1 = (16 + 2x2 - 2x3 - 4x4)/6
12x1 - 8x2 + 6x3 + 10x4 = 26  →  x2 = (-26 + 12x1 + 6x3 + 10x4)/8
3x1 - 13x2 + 9x3 + 3x4 = -19  →  x3 = (-19 - 3x1 + 13x2 - 3x4)/9
-6x1 + 4x2 + x3 - 18x4 = -34  →  x4 = (34 - 6x1 + 4x2 + x3)/18

Assumption (initial guess): x1 = x2 = x3 = x4 = 0

First iteration (each update immediately uses the newest values):
x1 = (16 + 2(0) - 2(0) - 4(0))/6 = 2.66667
x2 = (-26 + 12(2.66667) + 6(0) + 10(0))/8 = 0.75
x3 = (-19 - 3(2.66667) + 13(0.75) - 3(0))/9 = -1.91667
x4 = (34 - 6(2.66667) + 4(0.75) + (-1.91667))/18 = 1.06019
n x1 x2 x3 x4
0 0 0 0 0
1 2.66667 0.75000 -1.91667 1.06019
2 2.84877 0.91088 -2.09838 1.02514
3 2.98633 0.93713 -2.09463 0.98533
4 3.02037 0.94124 -2.08678 0.97533
5 3.02578 0.94276 -2.08305 0.97407
6 3.02589 0.94413 -2.08068 0.97447
7 3.02529 0.94551 -2.07862 0.97509
8 3.02465 0.94687 -2.07665 0.97572
9 3.02403 0.94820 -2.07473 0.97633
10 3.02343 0.94950 -2.07286 0.97692
11 3.02284 0.95077 -2.07104 0.97750
12 3.02227 0.95200 -2.06926 0.97806
13 3.02171 0.95320 -2.06752 0.97861
14 3.02117 0.95437 -2.06583 0.97915
15 3.02064 0.95551 -2.06418 0.97967
16 3.02012 0.95663 -2.06258 0.98018
17 3.01962 0.95771 -2.06101 0.98068
18 3.01912 0.95877 -2.05948 0.98116
19 3.01865 0.95981 -2.05799 0.98163
20 3.01818 0.96081 -2.05654 0.98209
21 3.01772 0.96179 -2.05513 0.98254
22 3.01728 0.96275 -2.05374 0.98298
23 3.01685 0.96368 -2.05240 0.98340
24 3.01643 0.96459 -2.05109 0.98382
25 3.01601 0.96548 -2.04981 0.98422
26 3.01561 0.96634 -2.04856 0.98462
27 3.01522 0.96719 -2.04734 0.98500
28 3.01484 0.96801 -2.04616 0.98538
29 3.01447 0.96881 -2.04500 0.98575
30 3.01411 0.96959 -2.04388 0.98610
31 3.01375 0.97035 -2.04278 0.98645
32 3.01341 0.97109 -2.04171 0.98679
33 3.01307 0.97182 -2.04066 0.98712
34 3.01275 0.97252 -2.03964 0.98744
35 3.01243 0.97321 -2.03865 0.98776
36 3.01212 0.97388 -2.03768 0.98806
37 3.01181 0.97454 -2.03674 0.98836
38 3.01152 0.97517 -2.03582 0.98865
39 3.01123 0.97580 -2.03492 0.98894
40 3.01095 0.97640 -2.03405 0.98922

Example 2:

2x1 + 4x2 - 2x3 - 2x4 = -4  →  x1 = (-4 - 4x2 + 2x3 + 2x4)/2
x1 + 2x2 + 4x3 - 3x4 = 5    →  x2 = (5 - x1 - 4x3 + 3x4)/2
-3x1 - 3x2 + 8x3 - 2x4 = 7  →  x3 = (7 + 3x1 + 3x2 + 2x4)/8
-x1 + x2 + 6x3 - 3x4 = 7    →  x4 = (-7 - x1 + x2 + 6x3)/3

Assumption (initial guess): x1 = x2 = x3 = x4 = 0

First iteration:
x1 = (-4 - 4(0) + 2(0) + 2(0))/2 = -2
x2 = (5 - (-2) - 4(0) + 3(0))/2 = 3.5
x3 = (7 + 3(-2) + 3(3.5) + 2(0))/8 = 1.4375
x4 = (-7 - (-2) + (3.5) + 6(1.4375))/3 = 2.375

n x1 x2 x3 x4
0 0 0 0 0
1 -2.00000 3.50000 1.43750 2.37500
2 -5.18750 5.78125 1.69141 4.70573
3 -7.16536 9.75846 3.02384 9.35563
4 -9.13745 15.05448 5.43280 16.59624
5 -10.07994 21.56873 9.33236 26.88094
6 -8.92417 28.61878 14.98071 40.14240
7 -4.11444 34.80940 22.42121 55.48371
8 6.28612 37.74008 31.25575 70.66282
9 24.43841 33.76352 40.36643 81.50790
10 52.34728 17.85534 47.57796 81.32527
11 91.19254 -16.26428 49.30441 60.45655
12 140.28954 -75.56877 40.25943 6.23275
13 195.62971 -166.48458 13.36261 -96.31287
14 248.01890 -292.70399 -39.96012 -262.49454
15 280.95330 -451.79822 -128.81548 -504.21480
16 268.56616 -630.47432 -260.89426 -823.80201
17 174.25236 -798.54068 -439.18362 -1204.96493
18 -49.06719 -902.04655 -657.03388 -1600.72755
19 -455.66833 -856.68940 -891.44104 -1918.88910
20 -1098.95134 -543.47590 -1094.75749 -2006.68983
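The only change from Jacobi is that each component is updated in place, so later components in the same sweep already see the new values. An illustrative pure-Python sketch, applied to Example 1, reproduces row n = 1 of that table; continued sweeps drift toward the exact solution x = (3, 1, -2, 1) found by elimination:

```python
def gauss_seidel(A, b, x0, sweeps):
    # Gauss-Seidel: new values are used immediately within the same sweep.
    n = len(A)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Example 1 of the Gauss-Seidel section
A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
b = [16, 26, -19, -34]
print(gauss_seidel(A, b, [0.0] * 4, 1))
# first sweep: approximately [2.66667, 0.75, -1.91667, 1.06019]
```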

II. CASE STUDIES: LINEAR ALGEBRAIC EQUATIONS


III. SYSTEMS OF NON-LINEAR EQUATION

A. OPEN METHOD

1. Fixed-Point Iteration
Fixed-point iteration is an iterative numerical method used to find the fixed points of a given
function. In the context of solving equations, it's often applied to find the roots or solutions of
nonlinear equations. The method is based on the concept of repeatedly applying a function to an
initial guess until convergence to a fixed point is achieved.
Fixed-Point Iteration Method:
Given a nonlinear equation f(x)=0, the fixed-point iteration aims to find a fixed point of
the function g(x) such that g(x)=x. This fixed point is also a solution to the original equation. The
iteration formula is given by:
xn+1=g(xn)
where:
 xn is the current approximation at iteration n,
 xn+1 is the next approximation,
 g(x) is a chosen function that satisfies g(x)=x.
Algorithm:
1. Choose an Initial Guess: Select an initial guess x0.
2. Iteration: Apply the iteration formula xn+1=g(xn) repeatedly until convergence. The
process stops when a certain convergence criterion is met.
3. Convergence Criteria: Common convergence criteria include reaching a specified number
of iterations, achieving a desired level of accuracy, or when the change in the solution
between iterations is sufficiently small.

It's important to note that the success of the fixed-point iteration method depends on the
choice of the function g(x) and the initial guess. Additionally, the method may not converge for
all functions or initial guesses, and convergence properties should be carefully considered for
each specific case.

Example:
f(x) = x^3 - x - 1

x     0   1   2
f(x) -1  -1   5

Rearranging x^3 = x + 1 gives x = (x + 1)^(1/3), so take g(x) = (x + 1)^(1/3).
Since f(1) < 0 and f(2) > 0, the root lies in [a, b] = [1, 2]; start from the midpoint:

x0 = (a + b)/2 = (1 + 2)/2 = 1.5

n x0 x1= g(x0)

0 1.50000 1.35721

1 1.35721 1.33086
2 1.33086 1.32588

3 1.32588 1.32494

4 1.32494 1.32476

5 1.32476 1.32473

6 1.32473 1.32472

7 1.32472 1.32472

8 1.32472 1.32472

9 1.32472 1.32472

10 1.32472 1.32472
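The loop behind the table is short; the following is an illustrative pure-Python sketch using the same g(x) and starting point (the stopping rule on successive iterates is an assumption, since the table simply runs a fixed number of steps):

```python
def fixed_point(g, x0, tol=1e-6, max_iter=100):
    # Repeatedly apply g until successive iterates agree to within tol.
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# g(x) = (x + 1)^(1/3) rearranges f(x) = x^3 - x - 1 = 0 into the form x = g(x)
root = fixed_point(lambda x: (x + 1) ** (1.0 / 3.0), 1.5)
print(round(root, 5))  # 1.32472, matching the table
```

Convergence here is guaranteed locally because |g'(x)| is well below 1 near the root; a different rearrangement of the same f(x) (e.g. g(x) = x^3 - 1) would diverge from the same starting point.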

2. Newton-Raphson Method
The Newton-Raphson method, also known as the Newton method, is an iterative numerical
technique for finding successively better approximations to the roots (or zeros) of a real-valued
function. It is particularly efficient for finding roots of differentiable functions and is commonly
used for solving nonlinear equations.
Newton-Raphson Method:
Given a continuous and differentiable function f(x), the Newton-Raphson iteration formula is
defined as:
xn+1 = xn - f(xn) / f'(xn)

where:
 xn is the current approximation at iteration n,
 xn+1 is the next approximation,
 f(xn) is the function value at xn,
 f′(xn) is the derivative of the function with respect to x evaluated at xn.
Algorithm:
1. Choose an Initial Guess: Select an initial guess x0.
2. Iteration: Apply the Newton-Raphson iteration formula repeatedly until convergence.
The process stops when a certain convergence criterion is met.
3. Convergence Criteria: Common convergence criteria include reaching a specified
number of iterations, achieving a desired level of accuracy, or when the change in the
solution between iterations is sufficiently small.
The Newton-Raphson method is known for its quadratic convergence, which means that the
number of correct digits roughly doubles with each iteration when the method is converging.
However, it's important to note that the method may not converge for all functions or initial
guesses, and convergence properties should be carefully considered for each specific case.
Additionally, it requires the availability of the derivative f′(x) for the given function.

Example 1:
f(x) = 4x^3 - 15x^2 + 17x - 6
f'(x) = 12x^2 - 30x + 17

x     0   1   2   3
f(x) -6   0   0  18

n x0 f(x0) f'(x0) x1
0 1.50000 -0.75000 -1.00000 0.75000
1 0.75000 0.00000 23.75000 0.75000
2 0.75000 0.00000 23.75000 0.75000
3 0.75000 0.00000 23.75000 0.75000
4 0.75000 0.00000 23.75000 0.75000
5 0.75000 0.00000 23.75000 0.75000

Example 2:
x    -1        0   1         2         3
f(x) -2.71828  0   0.367879  0.270671  0.149361

f(x) = xe^(-x)
f'(x) = e^(-x) - xe^(-x)

n x0 f(x0) f'(x0) x1
0 -0.50000 -0.82436 2.47308 -0.16667
1 -0.16667 -0.19689 1.37825 -0.02381
2 -0.02381 -0.02438 1.04848 -0.00055
3 -0.00055 -0.00055 1.00111 0.00000
4 0.00000 0.00000 1.00000 0.00000
5 0.00000 0.00000 1.00000 0.00000

Example 3:
f(x) = 3e^x - 4cos(x)
f'(x) = 3e^x + 4sin(x)

x     0   1   2   3
f(x) -1   6  24  64

n x0 f(x0) f'(x0) x1
0 0.50000 1.43583 6.86387 0.29081
1 0.29081 0.18050 5.15947 0.25583
2 0.25583 0.00478 4.88679 0.25485
3 0.25485 0.00000 4.87921 0.25485
4 0.25485 0.00000 4.87921 0.25485
5 0.25485 0.00000 4.87921 0.25485
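The iteration behind these tables is only a few lines; the following is an illustrative pure-Python sketch applied to Example 3 (the tolerance on the step size is an assumed stopping rule, since the tables run a fixed number of iterations):

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=50):
    # Classic Newton-Raphson iteration: x <- x - f(x)/f'(x)
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example 3 above: f(x) = 3e^x - 4cos(x), f'(x) = 3e^x + 4sin(x)
f = lambda x: 3 * math.exp(x) - 4 * math.cos(x)
df = lambda x: 3 * math.exp(x) + 4 * math.sin(x)
print(round(newton(f, df, 0.5), 5))  # 0.25485, matching the table
```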
3. Secant Method

The Secant Method is an iterative numerical technique used for finding the roots of a real-
valued function. It is similar to the Newton-Raphson method but doesn't require the evaluation of
the derivative of the function. Instead of using the derivative, the Secant Method approximates
the derivative using finite differences.
Secant Method:
Given a continuous function f(x), the Secant Method iteration formula is defined as:

xn+1 = xn - f(xn)(xn - xn-1) / (f(xn) - f(xn-1))

where:
 xn is the current approximation at iteration n,
 xn−1 is the previous approximation,
 xn+1 is the next approximation,
 f(xn) is the function value at xn,
 f(xn−1) is the function value at xn−1.
Algorithm:
1. Choose Initial Guesses: Select initial guesses x0 and x1.
2. Iteration: Apply the Secant Method iteration formula repeatedly until convergence. The
process stops when a certain convergence criterion is met.
3. Convergence Criteria: Common convergence criteria include reaching a specified
number of iterations, achieving a desired level of accuracy, or when the change in the
solution between iterations is sufficiently small.
The Secant Method is generally less computationally expensive than the Newton-
Raphson method because it doesn't require the computation of the derivative. However, its
convergence may be slower compared to Newton's method, and it may not converge for certain
functions or initial guesses. As with any numerical method, care should be taken to choose
appropriate initial guesses and to check for convergence.

Example 1:
f(x) = 4x^3 - 15x^2 + 17x - 6

x     0   1   2   3
f(x) -6   0   0  18

n x0 f(x0) x1 f(x1) x2 f(x2)


0 0 -6 3 18 0.75 0

Example 2:
f(x) = xe^(-x)

x    -1        0   1         2         3
f(x) -2.71828  0   0.367879  0.270671  0.149361

n x0 f(x0) x1 f(x1) x2 f(x2)


0 -1.00000 -2.71828 1.00000 0.36788 0.76159 0.35560
1 1.00000 0.36788 0.76159 0.35560 -6.14512 -2866.28682
2 0.76159 0.35560 -6.14512 -2866.28682 0.76074 0.35551
3 -6.14512 -2866.28682 0.76074 0.35551 0.75988 0.35541
4 0.76074 0.35551 0.75988 0.35541 -2.41173 -26.89862
5 0.75988 0.35541 -2.41173 -26.89862 0.71852 0.35026
6 -2.41173 -26.89862 0.71852 0.35026 0.67828 0.34422
7 0.71852 0.35026 0.67828 0.34422 -1.61521 -8.12280
8 0.67828 0.34422 -1.61521 -8.12280 0.58504 0.32592
9 -1.61521 -8.12280 0.58504 0.32592 0.50017 0.30332
10 0.58504 0.32592 0.50017 0.30332 -0.63892 -1.21038
11 0.50017 0.30332 -0.63892 -1.21038 0.27192 0.20718
12 -0.63892 -1.21038 0.27192 0.20718 0.13880 0.12081
13 0.27192 0.20718 0.13880 0.12081 -0.04741 -0.04971
14 0.13880 0.12081 -0.04741 -0.04971 0.00687 0.00683
15 -0.04741 -0.04971 0.00687 0.00683 0.00032 0.00032
16 0.00687 0.00683 0.00032 0.00032 0.00000 0.00000
17 0.00032 0.00032 0.00000 0.00000 0.00000 0.00000
18 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
19 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
20 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

Example 3:
f(x) = 3e^x - 4cos(x)

x     0   1   2   3
f(x) -1   6  24  64

n x0 f(x0) x1 f(x1) x2 f(x2)


0 0 -1 1 5.993636 0.142987 -0.49803411
1 1.00000 5.99364 0.14299 -0.49803 0.20874 -0.21681
2 0.14299 -0.49803 0.20874 -0.21681 0.25943 0.02241
3 0.20874 -0.21681 0.25943 0.02241 0.25468 -0.00084
4 0.25943 0.02241 0.25468 -0.00084 0.25485 0.00000
5 0.25468 -0.00084 0.25485 0.00000 0.25485 0.00000
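The finite-difference update can be sketched in pure Python (illustrative code; the stopping rule on successive iterates is an assumption). Applied to Example 3 with the same starting pair, it reproduces the first iterate x2 = 0.14299 and converges to the same root as Newton-Raphson:

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=100):
    # The derivative is replaced by the slope of the chord through the
    # two most recent points, so no f'(x) is required.
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break  # chord is horizontal; cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

f = lambda x: 3 * math.exp(x) - 4 * math.cos(x)
print(round(secant(f, 0.0, 1.0), 5))  # 0.25485, matching the table
```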
B. BRACKETING METHOD

1. BISECTION METHOD

1. f(x) = 4x^3 - 15x^2 + 17x - 6

x     0   1   2   3
f(x) -6   0   0  18

Iteration xL xU xR f(xL) f(xU) f(xR)


0 0.00000 3.00000 1.50000 -6.00000 18.00000 -0.75000
1 1.50000 3.00000 2.25000 -0.75000 18.00000 1.87500
2 1.50000 2.25000 1.87500 -0.75000 1.87500 -0.49219
3 1.87500 2.25000 2.06250 -0.49219 1.87500 0.34863
4 1.87500 2.06250 1.96875 -0.49219 0.34863 -0.14758
5 1.96875 2.06250 2.01563 -0.14758 0.34863 0.08034
6 1.96875 2.01563 1.99219 -0.14758 0.08034 -0.03852
7 1.99219 2.01563 2.00391 -0.03852 0.08034 0.01967
8 1.99219 2.00391 1.99805 -0.03852 0.01967 -0.00973
9 1.99805 2.00391 2.00098 -0.00973 0.01967 0.00489
10 1.99805 2.00098 1.99951 -0.00973 0.00489 -0.00244
11 1.99951 2.00098 2.00024 -0.00244 0.00489 0.00122
12 1.99951 2.00024 1.99988 -0.00244 0.00122 -0.00061
13 1.99988 2.00024 2.00006 -0.00061 0.00122 0.00031
14 1.99988 2.00006 1.99997 -0.00061 0.00031 -0.00015
15 1.99997 2.00006 2.00002 -0.00015 0.00031 0.00008
16 1.99997 2.00002 1.99999 -0.00015 0.00008 -0.00004
17 1.99999 2.00002 2.00000 -0.00004 0.00008 0.00002
18 1.99999 2.00000 2.00000 -0.00004 0.00002 -0.00001
19 2.00000 2.00000 2.00000 -0.00001 0.00002 0.00000
20 2.00000 2.00000 2.00000 -0.00001 0.00000 0.00000

2. f(x) = xe^(-x)

x -1 0 1 2
F(x) -2.71828 0.00000 0.36788 0.27067

Iteration xL xU xR f(xL) f(xU) f(xR)


0 -1.00000 1.00000 0.00000 -2.71828 0.36788 0.00000
1 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

3. f(x) = 3e^x - 4cos(x)

x 0 1 2 3
F(x) -1.00000 5.99364 23.83176 64.21658

Iteration xL xU xR f(xL) f(xU) f(xR)


0 0.00000 1.00000 0.50000 -1.00000 5.99364 1.43583
1 0.00000 0.50000 0.25000 -1.00000 1.43583 -0.02357
2 0.25000 0.50000 0.37500 -0.02357 1.43583 0.64294
3 0.25000 0.37500 0.31250 -0.02357 0.64294 0.29424
4 0.25000 0.31250 0.28125 -0.02357 0.29424 0.13152
5 0.25000 0.28125 0.26563 -0.02357 0.13152 0.05302
6 0.25000 0.26563 0.25781 -0.02357 0.05302 0.01449
7 0.25000 0.25781 0.25391 -0.02357 0.01449 -0.00460
8 0.25391 0.25781 0.25586 -0.00460 0.01449 0.00493
9 0.25391 0.25586 0.25488 -0.00460 0.00493 0.00016
10 0.25391 0.25488 0.25439 -0.00460 0.00016 -0.00222
11 0.25439 0.25488 0.25464 -0.00222 0.00016 -0.00103
12 0.25464 0.25488 0.25476 -0.00103 0.00016 -0.00044
13 0.25476 0.25488 0.25482 -0.00044 0.00016 -0.00014
14 0.25482 0.25488 0.25485 -0.00014 0.00016 0.00001
15 0.25482 0.25485 0.25484 -0.00014 0.00001 -0.00006
16 0.25484 0.25485 0.25484 -0.00006 0.00001 -0.00003
17 0.25484 0.25485 0.25485 -0.00003 0.00001 -0.00001
18 0.25485 0.25485 0.25485 -0.00001 0.00001 0.00000
19 0.25485 0.25485 0.25485 -0.00001 0.00000 0.00000
20 0.25485 0.25485 0.25485 0.00000 0.00000 0.00000
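The bracketing loop behind these tables can be sketched as follows (illustrative pure Python; the interval-width tolerance is an assumed stopping rule, since the tables run a fixed 20 iterations). The cubic and bracket are taken from the first example above:

```python
def bisection(f, xl, xu, tol=1e-6, max_iter=100):
    # Requires a sign change on [xl, xu]; the bracket is halved each step.
    fl = f(xl)
    for _ in range(max_iter):
        xr = (xl + xu) / 2.0
        fr = f(xr)
        if fr == 0 or (xu - xl) / 2.0 < tol:
            return xr
        if fl * fr < 0:
            xu = xr          # root lies in the left half
        else:
            xl, fl = xr, fr  # root lies in the right half
    return xr

f = lambda x: 4 * x**3 - 15 * x**2 + 17 * x - 6
print(round(bisection(f, 0.0, 3.0), 5))  # 2.0, matching the first table
```

Note that f also has roots at 0.75 and 1 inside [0, 3]; bisection converges to 2 here only because the midpoint sequence happens to bracket that root, as the table shows.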

2. REGULA-FALSI METHOD (FALSE-POSITION METHOD)

1. f(x) = 4x^3 - 15x^2 + 17x - 6

x     0   1   2   3   4
f(x) -6   0   0  18  78

n   x0       f(x0)     x1       f(x1)     x2       f(x2)
0   0.00000  -6.00000  3.00000  18.00000  0.75000  0.00000

Since f(x2) = 0, the root x = 0.75 is found in the first iteration.

2. f(x) = xe^(-x)

n X0 f(x0) x1 f(x1) x2 f(x2)


0 -1.00000 -2.71828 1.00000 0.36788 0.76159 0.35560
1 -1.00000 -2.71828 0.76159 0.35560 0.55780 0.31932
2 -1.00000 -2.71828 0.55780 0.31932 0.39404 0.26571
3 -1.00000 -2.71828 0.39404 0.26571 0.26991 0.20606
4 -1.00000 -2.71828 0.26991 0.20606 0.18043 0.15064
5 -1.00000 -2.71828 0.18043 0.15064 0.11844 0.10521
6 -1.00000 -2.71828 0.11844 0.10521 0.07677 0.07109
7 -1.00000 -2.71828 0.07677 0.07109 0.04932 0.04695
8 -1.00000 -2.71828 0.04932 0.04695 0.03151 0.03053
9 -1.00000 -2.71828 0.03151 0.03053 0.02005 0.01965
10 -1.00000 -2.71828 0.02005 0.01965 0.01273 0.01257
11 -1.00000 -2.71828 0.01273 0.01257 0.00807 0.00800
12 -1.00000 -2.71828 0.00807 0.00800 0.00511 0.00508
13 -1.00000 -2.71828 0.00511 0.00508 0.00323 0.00322
14 -1.00000 -2.71828 0.00323 0.00322 0.00204 0.00204
15 -1.00000 -2.71828 0.00204 0.00204 0.00129 0.00129
16 -1.00000 -2.71828 0.00129 0.00129 0.00082 0.00082
17 -1.00000 -2.71828 0.00082 0.00082 0.00052 0.00052
18 -1.00000 -2.71828 0.00052 0.00052 0.00033 0.00033
19 -1.00000 -2.71828 0.00033 0.00033 0.00021 0.00021
20 -1.00000 -2.71828 0.00021 0.00021 0.00013 0.00013
21 -1.00000 -2.71828 0.00013 0.00013 0.00008 0.00008
22 -1.00000 -2.71828 0.00008 0.00008 0.00005 0.00005
23 -1.00000 -2.71828 0.00005 0.00005 0.00003 0.00003
24 -1.00000 -2.71828 0.00003 0.00003 0.00002 0.00002
25 -1.00000 -2.71828 0.00002 0.00002 0.00001 0.00001
26 -1.00000 -2.71828 0.00001 0.00001 0.00001 0.00001
27 -1.00000 -2.71828 0.00001 0.00001 0.00001 0.00001
28 -1.00000 -2.71828 0.00001 0.00001 0.00000 0.00000
29 -1.00000 -2.71828 0.00000 0.00000 0.00000 0.00000
30 -1.00000 -2.71828 0.00000 0.00000 0.00000 0.00000

3. f(x) = 3e^x - 4cos(x)
x -1 0 1 2
f(x) -1.0575709 -1 5.993636 23.83176

n X0 f(x0) x1 f(x1) x2 f(x2)


0 0.00000 -1.00000 1.00000 5.99364 0.14299 -0.49803
1 0.14299 -0.49803 1.00000 5.99364 0.20874 -0.21681
2 0.20874 -0.21681 1.00000 5.99364 0.23636 -0.08890
3 0.23636 -0.08890 1.00000 5.99364 0.24752 -0.03555
4 0.24752 -0.03555 1.00000 5.99364 0.25196 -0.01408
5 0.25196 -0.01408 1.00000 5.99364 0.25371 -0.00555
6 0.25371 -0.00555 1.00000 5.99364 0.25440 -0.00219
7 0.25440 -0.00219 1.00000 5.99364 0.25467 -0.00086
8 0.25467 -0.00086 1.00000 5.99364 0.25478 -0.00034
9 0.25478 -0.00034 1.00000 5.99364 0.25482 -0.00013
10 0.25482 -0.00013 1.00000 5.99364 0.25484 -0.00005
11 0.25484 -0.00005 1.00000 5.99364 0.25485 -0.00002
12 0.25485 -0.00002 1.00000 5.99364 0.25485 -0.00001
13 0.25485 -0.00001 1.00000 5.99364 0.25485 0.00000
14 0.25485 0.00000 1.00000 5.99364 0.25485 0.00000
15 0.25485 0.00000 1.00000 5.99364 0.25485 0.00000
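Regula falsi replaces the bisection midpoint with the x-intercept of the chord joining the two bracket endpoints. An illustrative pure-Python sketch (the tolerance on successive estimates is an assumed stopping rule), applied to Example 3, reproduces the first estimate x2 = 0.14299 and the one-sided convergence visible in the table, where x1 stays fixed at 1:

```python
import math

def regula_falsi(f, xl, xu, tol=1e-10, max_iter=200):
    # False position: the new estimate is the root of the secant line
    # through (xl, f(xl)) and (xu, f(xu)); the sign test keeps the bracket.
    fl, fu = f(xl), f(xu)
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        xr = xu - fu * (xl - xu) / (fl - fu)
        fr = f(xr)
        if abs(xr - xr_old) < tol:
            break
        if fl * fr < 0:
            xu, fu = xr, fr
        else:
            xl, fl = xr, fr
    return xr

f = lambda x: 3 * math.exp(x) - 4 * math.cos(x)
print(round(regula_falsi(f, 0.0, 1.0), 5))  # 0.25485, matching the table
```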
