Lecture Notes: Error Analysis in Numerical Methods
1 Introduction to Error Analysis
Numerical methods approximate exact mathematical operations using finite precision
arithmetic. Errors are inevitable, and understanding their types, causes, and propagation
is crucial for designing accurate and reliable algorithms.
2 Types of Errors in Numerical Computation
2.1 True Error (Absolute Error)
E = |x − x̂|
Example 1:
Exact value x = √2 ≈ 1.414213562
Approximate value x̂ = 1.4142
E = |1.414213562 − 1.4142| = 0.000013562
Example 2:
Exact value π = 3.1415926535
Approximate value π̂ = 3.14
E = |3.1415926535 − 3.14| = 0.0015926535
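Both examples can be checked directly in Python (a small sketch; the rounded approximations are taken from the text):

```python
import math

# Example 1: sqrt(2) approximated by 1.4142
E1 = abs(math.sqrt(2) - 1.4142)
print(E1)  # ≈ 1.36e-05

# Example 2: pi approximated by 3.14
E2 = abs(math.pi - 3.14)
print(E2)  # ≈ 1.59e-03
```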
2.2 Relative Error
ϵ = |x − x̂| / |x|
Example 1:
Using x = √2 and x̂ = 1.4142,
ϵ = 0.000013562 / 1.414213562 ≈ 9.59 × 10⁻⁶
Example 2:
True population: 10,000,000
Approximate population: 9,800,000
ϵ = (10,000,000 − 9,800,000) / 10,000,000 = 0.02 (2%)
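The same two examples in code (the helper name `relative_error` is ours):

```python
import math

def relative_error(exact, approx):
    """Relative error |x - x_hat| / |x|; assumes the exact value is nonzero."""
    return abs(exact - approx) / abs(exact)

print(relative_error(math.sqrt(2), 1.4142))   # ≈ 9.6e-06
print(relative_error(10_000_000, 9_800_000))  # 0.02
```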
2.3 Approximation Error
Error from approximating infinite processes.
Example 1:
Approximating sin(x) ≈ x for x = 0.1:
| sin(0.1) − 0.1| = |0.0998334 − 0.1| = 0.0001666
Example 2:
Numerical derivative approximation for f(x) = eˣ, x = 1, h = 0.1:
f′(1) ≈ (e^1.1 − e^1) / 0.1 = (3.00417 − 2.71828) / 0.1 = 2.8589
Exact derivative f′(1) = e ≈ 2.71828, so the error is 2.8589 − 2.71828 ≈ 0.1406.
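Shrinking h reduces this truncation error, which the following sketch illustrates (function names are ours):

```python
import math

def forward_diff(f, x, h):
    """One-sided (forward) difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

exact = math.e  # f(x) = e^x, so f'(1) = e
for h in (0.1, 0.01, 0.001):
    approx = forward_diff(math.exp, 1.0, h)
    print(f"h={h}: approx={approx:.5f}, error={abs(approx - exact):.5f}")
```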
2.4 Round-off Error
Finite digits in computers cause rounding.
Example 1:
Representing π with 4 decimals: 3.1416 (error ≈ 7.35 × 10⁻⁶).
Example 2:
Floating point addition in Python:
>>> 0.1 + 0.2 == 0.3
False
>>> 0.1 + 0.2
0.30000000000000004
3 Finite Arithmetic and Finite Digits
Computers use floating-point formats with limited precision, leading to round-off errors
and unexpected results.
Example: Adding 0.1 fifty times may not exactly equal 5.0 due to floating-point
representation errors.
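This can be demonstrated directly (a sketch):

```python
total = 0.0
for _ in range(50):
    total += 0.1

print(total)         # close to, but not exactly, 5.0
print(total == 5.0)  # False: the accumulated representation error survives
```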
4 Propagation of Error Through Computations
4.1 Error Propagation Model
If y = f (x) and x has uncertainty ∆x, then
∆y ≈ |f ′ (x)|∆x
Example 1:
For y = x2 , x = 2, and ∆x = 0.01:
∆y ≈ 4 × 0.01 = 0.04
Example 2:
For y = sin(x), x = π/4, ∆x = 0.01:
∆y ≈ cos(π/4) × 0.01 = 0.707 × 0.01 = 0.00707
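Both propagation examples follow the same one-line estimate (the helper name is ours):

```python
import math

def propagated_error(fprime, x, dx):
    """First-order propagated uncertainty: dy ≈ |f'(x)| * dx."""
    return abs(fprime(x)) * dx

# y = x^2 at x = 2: f'(x) = 2x
print(propagated_error(lambda x: 2 * x, 2.0, 0.01))   # 0.04
# y = sin(x) at x = pi/4: f'(x) = cos(x)
print(propagated_error(math.cos, math.pi / 4, 0.01))  # ≈ 0.00707
```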
4.2 Catastrophic Cancellation
Loss of significant digits when subtracting nearly equal numbers.
Example 1:
1.000001 − 1.000000 = 0.000001, but rounding errors in inputs can swamp this small
difference.
Example 2:
Calculate 1 − cos(0.001) directly vs. using 2 sin²(0.001/2).
Direct subtraction loses precision, but the trigonometric form is more stable and
accurate.
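A quick comparison of the two forms (a sketch; the cancellation grows worse as x shrinks):

```python
import math

x = 0.001
direct = 1.0 - math.cos(x)           # subtracts two nearly equal numbers
stable = 2.0 * math.sin(x / 2) ** 2  # algebraically identical, no subtraction

print(direct, stable)  # both ≈ 5.0e-07; the stable form keeps more correct digits
```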
5 Causes of Error in Numerical Methods
• Modeling Error: Simplifying real-world problems.
• Truncation Error: Approximating infinite sums or derivatives.
• Round-off Error: Due to finite precision.
• Algorithmic Error: Using unstable algorithms.
• Data Error: Inaccurate inputs.
6 Minimizing and Optimizing Errors
6.1 Use Higher Precision Arithmetic
Use higher precision data types (e.g., float64) or arbitrary precision libraries for critical
calculations.
6.2 Avoid Subtraction of Nearly Equal Numbers
Rewrite expressions to reduce cancellation.
Example:
√(a² + b²) − a = b² / (√(a² + b²) + a)
For a = 10⁶, b = 1, this form avoids loss of precision.
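Comparing the two forms numerically (a sketch using the values from the text):

```python
import math

a, b = 1e6, 1.0
naive = math.sqrt(a**2 + b**2) - a            # subtracts nearly equal numbers
stable = b**2 / (math.sqrt(a**2 + b**2) + a)  # rearranged form, no cancellation

print(naive, stable)  # true value ≈ 5.0e-07; the stable form reaches full precision
```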
6.3 Algorithm Choice: Stability & Conditioning
Use numerically stable algorithms like partial pivoting in Gaussian elimination.
6.4 Error Estimation and Step Control
Adaptive methods adjust step sizes to maintain error bounds (e.g., adaptive Runge-Kutta
methods).
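A minimal sketch of the idea using Euler's method with step doubling (not a production adaptive Runge-Kutta pair; the function name, parameters, and tolerance are ours): one full step of size h is compared against two half steps, the difference estimates the local error, and h is grown or shrunk accordingly.

```python
import math

def euler_adaptive(f, t, y, t_end, h=0.1, tol=1e-6):
    """Integrate y' = f(t, y) from t to t_end, adapting h by step doubling."""
    while t < t_end:
        h = min(h, t_end - t)
        y_big = y + h * f(t, y)                            # one step of size h
        y_half = y + (h / 2) * f(t, y)                     # two steps of size h/2
        y_small = y_half + (h / 2) * f(t + h / 2, y_half)
        err = abs(y_small - y_big)                         # local error estimate
        if err <= tol:
            t, y = t + h, y_small                          # accept, try a bigger step
            h *= 2
        else:
            h /= 2                                         # reject, retry smaller
    return y

# y' = y, y(0) = 1 has exact solution e^t
approx = euler_adaptive(lambda t, y: y, 0.0, 1.0, 1.0)
print(approx, abs(approx - math.e))
```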
6.5 Use Analytical Simplification
Simplify expressions before computation to avoid unnecessary error.
Example:
(a² − b²) / (a − b) = a + b
6.6 Compensated Summation (Kahan Summation)
Reduces rounding error when summing large sequences of numbers.
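A sketch of Kahan's compensated summation:

```python
def kahan_sum(values):
    """Sum a sequence while carrying the low-order bits each addition loses."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y  # the part of y that did not make it into t
        total = t
    return total

data = [0.1] * 50
print(sum(data), kahan_sum(data))  # naive vs. compensated
```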
7 Tricks and Practical Tips
• Scale the problem to keep values in reasonable ranges.
• Use iterative refinement to improve solutions of linear systems.
• Symbolic computation can avoid numerical differentiation errors.
• Monitor residuals in iterative solvers to control accuracy.
• Use logarithmic transformations for large powers and products.
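The last tip in action (a sketch): the product of ten factors of 10¹⁰⁰ overflows a double, but the sum of their logarithms stays comfortably in range.

```python
import math

factors = [1e100] * 10

prod = 1.0
for f in factors:
    prod *= f
print(prod)  # inf: 10^1000 overflows double precision

log_prod = sum(math.log(f) for f in factors)
print(log_prod)  # ≈ 2302.585, i.e. the product is 10^1000
```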
8 Homework Problems
1. Absolute and Relative Error
(a) Calculate the absolute and relative errors for the approximation √3 ≈ 1.73 (exact ≈ 1.73205).
(b) Compute relative error if a rod’s true length is 150 cm but measured as 148.5
cm.
2. Approximation Error in Numerical Differentiation
Approximate f ′ (x) for f (x) = ln(x), x = 2, h = 0.01 using
f′(x) ≈ (f(x + h) − f(x)) / h
and calculate the absolute error given the exact value f′(x) = 1/x.
3. Round-off Error in Floating Point
Sum 0.1 added 100 times using standard floating-point addition. Compare with
exact 10 and find the error.
4. Propagation of Error
For y = sin(x2 ), estimate error in y when x = 1.5 has uncertainty ∆x = 0.02.
5. Catastrophic Cancellation
Evaluate f (x) = 1 − cos(x) at x = 0.0001:
(a) Directly.
(b) Using f(x) = 2 sin²(x/2).
Compare accuracy and explain.
6. Minimizing Error by Formula Rearrangement
Rewrite √(a² + b²) − a to avoid precision loss when b ≪ a. Calculate for a = 10⁶, b = 1.
7. Kahan Summation Algorithm
Write Python code summing 10⁶ and 1, then subtracting 10⁶ using:
(a) Simple arithmetic.
(b) Kahan summation.
Compare the results and explain the difference.
8. Error Estimation in Numerical Integration
Approximate
∫₀¹ e^(−x²) dx
using the trapezoidal rule with n = 4 subintervals. Estimate the error using the trapezoidal error bound.
9. Stability of Algorithms
Solve the system
0.0001x + y = 1.0001
x + y = 2
using
(a) Naive Gaussian elimination.
(b) Gaussian elimination with partial pivoting.
Compare and discuss stability.
10. Iterative Refinement
Solve Ax = b with
A = | 4 1 |,   b = | 1 |
    | 1 3 |        | 2 |
Find approximate solution x0 , then improve it by one iterative refinement step.
Calculate residual before and after.