Lecture 1
Basic Concepts in Numerical Analysis
Key concepts in this lecture:
• Backward Error, Forward Error
• Rounding Error, Truncation Error
• Propagated Error, Computational Error
• Conditioning, Well/Ill-Posed, Well/Ill-Conditioned
• Unstable, Conditionally Stable
• Linear Error Growth, Exponential Error Growth
• Accuracy/Precision
• Algorithm, Convergence
Backward vs. forward errors
Suppose we want to compute $y = f(x)$, where $f: \mathbb{R} \to \mathbb{R}$, but obtain the approximate value $\hat{y}$.

Forward error: $\Delta y = \hat{y} - y = \hat{f}(x) - f(x)$
Backward error: $\Delta x = \hat{x} - x$, where $f(\hat{x}) = \hat{y}$

[Diagram: the exact map $f$ sends the input $x$ to $y = f(x)$; the computed map $\hat{f}$ sends $x$ to $\hat{y} = \hat{f}(x) = f(\hat{x})$. The backward error is measured on the input side ($x$ vs. $\hat{x}$), the forward error on the output side ($y$ vs. $\hat{y}$).]
Backward vs. forward errors – Example
As an approximation to $y = \sqrt{2}$, $\hat{y} = 1.4$ has absolute forward error
$$|\Delta y| = |\hat{y} - y| = |1.4 - 1.41421\ldots| \approx 0.0142,$$
or a relative forward error of about 1 percent.
Since $\sqrt{1.96} = 1.4$, the absolute backward error is
$$|\Delta x| = |\hat{x} - x| = |1.96 - 2| = 0.04,$$
or a relative backward error of 2 percent.
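As a quick check, here is a minimal Python sketch (not from the slides) that reproduces these numbers:

```python
import math

x = 2.0            # true input
y = math.sqrt(x)   # true output y = f(x)
y_hat = 1.4        # approximate output

# Forward error: discrepancy in the output.
fwd = abs(y_hat - y)
# Backward error: perturb the input so the approximate output is exact,
# i.e. find x_hat with sqrt(x_hat) = y_hat.
x_hat = y_hat ** 2          # 1.96
bwd = abs(x_hat - x)

print(f"forward error  = {fwd:.4f}  (relative {fwd / y:.1%})")
print(f"backward error = {bwd:.4f}  (relative {bwd / x:.1%})")
```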
• Backward error analysis

[Diagram: the true operation maps the input $x$ to the output $y$; the approximate operation maps $x$ to $\hat{y}$. The backward error perturbs the input ($x \to \hat{x}$), while the forward error measures the discrepancy in the output ($y$ vs. $\hat{y}$).]
Backward error analysis
• How much must the original problem change to give
the result actually obtained?
• How much data error in the input would explain all
error in computed result?
• Approximate solution is good if it is the exact solution
to a nearby problem.
• Backward error is often easier to estimate than
forward error.
Sources of numerical errors
• Before computation (cannot be controlled):
  – modeling approximations
  – empirical measurements, human errors
  – previous computations
• During computation (can be controlled through error analysis):
  – truncation or discretization
  – rounding errors
• Accuracy depends on both, but we can only control the second part.
• Uncertainty in the input may be amplified by the problem being solved.
• Perturbations during computation may be amplified by the algorithm.
Sources of numerical errors
• Rounding vs. truncation error
  – Rounding error: introduced by finite-precision calculations in the computer arithmetic.
  – Truncation error: introduced by the algorithm via problem simplification, e.g. series truncation, iterative process truncation, etc.
  Computational error = truncation error + rounding error
• Propagated vs. computational error (illustrated in the sketch below)
  – $x$ = exact value, $\hat{x}$ = approximate value
  – $f$ = exact function, $\hat{f}$ = its approximation
$$\hat{f}(\hat{x}) - f(x) = [\hat{f}(\hat{x}) - f(\hat{x})] + [f(\hat{x}) - f(x)]$$
  Total error = computational error (affected by the algorithm) + propagated data error (not affected by the algorithm)
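To make the split concrete, here is a small illustrative sketch (my own example, not from the slides): approximating a derivative by a forward difference has $O(h)$ truncation error, while rounding contributes roughly $\varepsilon/h$, so the total computational error is smallest at an intermediate $h$.

```python
import math

def forward_diff(f, x, h):
    """Approximate f'(x) by the forward difference (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)   # true derivative of sin at x = 1

# Truncation error shrinks with h; rounding error grows like eps / h,
# so the total error first decreases and then increases again.
for h in [1e-1, 1e-4, 1e-8, 1e-12]:
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = {h:.0e}   total error = {err:.2e}")
```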
Well-Posed Problems
• Well-posed problem: a mathematical problem having the properties that
  – a solution exists,
  – the solution is unique,
  – the solution depends continuously on the data.
• Problems that are not well-posed are ill-posed.
Example of Well-Posed Problem
Evaluate $f(x) = x^2 + x - 1150$ at $x = 2$:
$$f(2) = -1144$$
If the error in $x$ is $\Delta x$, and $x + \Delta x = 2.4$, then
$$|\Delta x| = 0.4, \qquad f(2.4) = -1141.84$$
The absolute error in $f(2)$: $|f(x + \Delta x) - f(x)| = 2.16$
The relative error in $f(2)$: $|f(x + \Delta x) - f(x)| / |f(x)| = 2.16 / 1144 \approx 1.89 \times 10^{-3}$
A well-posed problem: we are evaluating away from $f(x) = 0$, so the relative error remains small.
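A two-line check of these numbers (mine, not part of the slides):

```python
def f(x):
    return x**2 + x - 1150

x, dx = 2.0, 0.4
abs_err = abs(f(x + dx) - f(x))
print(abs_err)                  # ~2.16
print(abs_err / abs(f(x)))      # ~1.888e-3
```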
Example of Ill-posed Problem: The Butterfly Effect

If a butterfly flaps its wings in Beijing in March, then, by August, hurricane patterns in the Atlantic will be completely different.

/* ill-posed problem */
Here is an example of this theory:
If the butterfly in Beijing flapped its wings, it would create tiny
wind patterns that would affect a passing breeze, which would in
turn affect local wind patterns, which in turn might turn a storm
slightly east. Since the storm's original direction was two degrees
more to the south, the storm will no longer hit a particular
mountain range and dissipate. Instead, it continues over a grassy
plain and lasts longer than it otherwise might have, which causes
more water to hit the plain. Over the next month, more water
evaporates from the plain than normal, which leads to higher
humidity, which increases the temperature across the continent.
As summer hits, hurricanes begin. Due to the warmer
temperature, the hurricanes last longer than they would have at a
lower temperature, and a hurricane that would have dissipated
before it even got close to Florida ends up wreaking havoc on
New Orleans and the Gulf of Mexico.
All because one little butterfly flapped its wings one day.
Example of Ill-Posed Problem
$$\begin{aligned} x_1 + \tfrac{1}{2}x_2 + \tfrac{1}{3}x_3 &= \tfrac{11}{6} \\ \tfrac{1}{2}x_1 + \tfrac{1}{3}x_2 + \tfrac{1}{4}x_3 &= \tfrac{13}{12} \\ \tfrac{1}{3}x_1 + \tfrac{1}{4}x_2 + \tfrac{1}{5}x_3 &= \tfrac{47}{60} \end{aligned}$$
Solution: $x_1 = 1, \; x_2 = 1, \; x_3 = 1$

After rounding the coefficients to 2 significant digits:
$$\begin{aligned} x_1 + 0.50x_2 + 0.33x_3 &= 1.8 \\ 0.50x_1 + 0.33x_2 + 0.25x_3 &= 1.1 \\ 0.33x_1 + 0.25x_2 + 0.20x_3 &= 0.78 \end{aligned}$$
Solution: $x_1 = -6.22, \; x_2 = 38.25, \; x_3 = -33.65$
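A short sketch reproducing both solutions (assuming NumPy is available; the snippet is mine, not part of the slides):

```python
import numpy as np

# Exact coefficients (a 3x3 Hilbert-type matrix) and right-hand side.
A = np.array([[1.0, 1/2, 1/3],
              [1/2, 1/3, 1/4],
              [1/3, 1/4, 1/5]])
b = np.array([11/6, 13/12, 47/60])

# The same system with every entry rounded to 2 significant digits.
A2 = np.array([[1.00, 0.50, 0.33],
               [0.50, 0.33, 0.25],
               [0.33, 0.25, 0.20]])
b2 = np.array([1.8, 1.1, 0.78])

print(np.linalg.solve(A, b))    # ~ [ 1.    1.    1.  ]
print(np.linalg.solve(A2, b2))  # ~ [-6.22  38.25 -33.65]
```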
Possible Project Topic on Ill-Posed Problems
The Lorenz attractor is generated by the following nonlinear differential equations:
$$\frac{dx}{dt} = s(y - x), \qquad \frac{dy}{dt} = rx - y - xz, \qquad \frac{dz}{dt} = xy - bz$$
with parameters $r = 28$, $s = 10$, $b = 8/3$ and initial data $x = 0.0$, $y = 1.0$, $z = 0.0$.
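For the project, a minimal integration sketch (my own; the classical RK4 stepper, the step size 0.01, and the 1e-8 perturbation are all my choices, not from the slides) shows how quickly two nearly identical initial states separate:

```python
r, s, b = 28.0, 10.0, 8.0 / 3.0

def f(u):
    """Right-hand side of the Lorenz system."""
    x, y, z = u
    return (s * (y - x), r * x - y - x * z, x * y - b * z)

def rk4_step(u, dt):
    """One classical fourth-order Runge-Kutta step."""
    def add(a, c, k):
        return tuple(ai + c * ki for ai, ki in zip(a, k))
    k1 = f(u)
    k2 = f(add(u, dt / 2, k1))
    k3 = f(add(u, dt / 2, k2))
    k4 = f(add(u, dt, k3))
    return tuple(u[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

dt = 0.01
u = (0.0, 1.0, 0.0)            # initial data from the slide
v = (0.0, 1.0 + 1e-8, 0.0)     # same, perturbed in y by 1e-8
for step in range(1, 4001):
    u, v = rk4_step(u, dt), rk4_step(v, dt)
    if step % 1000 == 0:
        sep = sum((a - c) ** 2 for a, c in zip(u, v)) ** 0.5
        print(f"t = {step * dt:5.1f}   separation = {sep:.3e}")
```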
Conditioning
• Well-conditioned (insensitive) problem: small relative
change in input gives commensurate relative change in the
solution.
• Ill-conditioned (sensitive): relative change in output is
much larger than that in the input data.
• Condition number = measure of sensitivity
$$\text{cond} = \frac{|\text{relative change in solution}|}{|\text{relative change in input data}|} = \frac{\left|[f(\hat{x}) - f(x)] / f(x)\right|}{\left|(\hat{x} - x)/x\right|} = \frac{|\Delta y / y|}{|\Delta x / x|}$$
Condition number = |rel. forward error| / |rel. backward error| = amplification factor
Conditioning
Evaluating the function $f$ for the approximate input $\hat{x} = x + \Delta x$ instead of the true input $x$ gives

Absolute forward error: $f(x + \Delta x) - f(x) \approx f'(x)\,\Delta x$

Relative forward error: $\dfrac{f(x + \Delta x) - f(x)}{f(x)} \approx \dfrac{f'(x)\,\Delta x}{f(x)}$

Condition number: $\text{cond} \approx \left|\dfrac{f'(x)\,\Delta x / f(x)}{\Delta x / x}\right| = \left|\dfrac{x\,f'(x)}{f(x)}\right|$

The relative error in the function value can be much larger or smaller than that in the input, depending on the particular $f$ and $x$.
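A tiny sketch of this last formula (my own example functions, not from the slides), comparing a well-conditioned and an ill-conditioned evaluation point:

```python
import math

def cond(f, fprime, x):
    """Relative condition number |x * f'(x) / f(x)| of evaluating f at x."""
    return abs(x * fprime(x) / f(x))

# exp is well-conditioned for moderate x: cond = |x|.
print(cond(math.exp, math.exp, 1.0))                      # 1.0

# tan is ill-conditioned near pi/2, where f' blows up.
x = math.pi / 2 - 0.01
print(cond(math.tan, lambda t: 1 / math.cos(t) ** 2, x))  # ~1.6e2, growing as x -> pi/2
```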
Evaluate the integrals
$$I_n = \frac{1}{e}\int_0^1 x^n e^x \, dx, \qquad n = 0, 1, 2, \ldots$$

Method I: Integration by parts gives $I_n = 1 - n\,I_{n-1}$, starting from
$$I_0 = \frac{1}{e}\int_0^1 e^x \, dx = 1 - \frac{1}{e} \approx 0.63212056 \quad (\text{denoted } I_0^*)$$
with initial error $|E_0| = |I_0 - I_0^*| \leq 0.5 \times 10^{-8}$.

Since $e^0 \leq e^x \leq e$ on $[0, 1]$,
$$\frac{1}{e}\int_0^1 x^n \, dx < I_n < \frac{1}{e}\int_0^1 x^n e \, dx, \qquad \text{i.e.} \qquad \frac{1}{e(n+1)} < I_n < \frac{1}{n+1}$$

$I_1^* = 1 - 1 \cdot I_0^* \approx 0.36787944$
… … … …
$I_{10}^* = 1 - 10\,I_9^* \approx 0.08812800$
$I_{11}^* = 1 - 11\,I_{10}^* \approx 0.03059200$
$I_{12}^* = 1 - 12\,I_{11}^* \approx 0.63289600$ ?
$I_{13}^* = 1 - 13\,I_{12}^* \approx -7.2276480$ ??
$I_{14}^* = 1 - 14\,I_{13}^* \approx 102.18707$ ?!
$I_{15}^* = 1 - 15\,I_{14}^* \approx -1531.8061$ !!
What happened?!
To Analyze the Error
$$|E_n| = |I_n - I_n^*| = |(1 - n I_{n-1}) - (1 - n I_{n-1}^*)| = n\,|E_{n-1}| = \cdots = n!\,|E_0|$$
The initial perturbation $|E_0| \leq 0.5 \times 10^{-8}$ is amplified by a factor of $n!$, so the error grows explosively.
/* unstable algorithm */
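The blow-up is easy to reproduce; a minimal sketch (mine, not from the slides), rounding $I_0$ to eight decimal digits as above:

```python
import math

# Start from I_0 = 1 - 1/e rounded to 8 decimal digits, as on the slide.
I = round(1 - 1 / math.e, 8)        # 0.63212056, error |E_0| <= 0.5e-8
for n in range(1, 16):
    I = 1 - n * I                   # forward recursion I_n = 1 - n * I_{n-1}
    if n >= 11:
        print(f"I_{n}* = {I:.7f}")

# The tiny initial error is multiplied by n! and swamps the true value:
# I_12* is already wrong, and I_15* is absurd (the true I_n lies in (0, 1/(n+1))).
```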
Method II: Rewrite $I_n = 1 - n\,I_{n-1}$ as the backward recursion
$$I_{n-1} = \frac{1 - I_n}{n}$$
Since
$$\frac{1}{e(N+1)} < I_N < \frac{1}{N+1},$$
take as the starting value
$$I_N^* = \frac{1}{2}\left[\frac{1}{e(N+1)} + \frac{1}{N+1}\right]$$
As $N \to \infty$, $E_N = I_N - I_N^* \to 0$. For $N = 15$:
$$I_{15}^* = \frac{1}{2}\left[\frac{1}{16e} + \frac{1}{16}\right] \approx 0.042746233$$
We just got lucky?
$I_{14}^* = \frac{1}{15}(1 - I_{15}^*) \approx 0.063816918$
$I_{13}^* = \frac{1}{14}(1 - I_{14}^*) \approx 0.066870220$
$I_{12}^* = \frac{1}{13}(1 - I_{13}^*) \approx 0.071779214$
$I_{11}^* = \frac{1}{12}(1 - I_{12}^*) \approx 0.077351732$
$I_{10}^* = \frac{1}{11}(1 - I_{11}^*) \approx 0.083877115$
… … … …
$I_1^* = \frac{1}{2}(1 - I_2^*) \approx 0.36787944$
$I_0^* = \frac{1}{1}(1 - I_1^*) \approx 0.63212056$
To Analyze the Error
$$|E_{N-1}| = \left|\frac{1 - I_N}{N} - \frac{1 - I_N^*}{N}\right| = \frac{1}{N}\,|E_N|$$
Similarly,
$$|E_{N-2}| = \left|\frac{1 - I_{N-1}}{N-1} - \frac{1 - I_{N-1}^*}{N-1}\right| = \frac{1}{N-1}\,|E_{N-1}|$$
and in general
$$|E_n| = \frac{1}{N(N-1)\cdots(n+1)}\,|E_N|$$
The error decreases rapidly for $n \ll N$.
/* stable algorithm */
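A matching sketch (mine, not from the slides) of the backward recursion, starting from the bracket midpoint at $N = 15$:

```python
import math

N = 15
# Start from the midpoint of the bracket 1/(e(N+1)) < I_N < 1/(N+1).
I = 0.5 * (1 / (math.e * (N + 1)) + 1 / (N + 1))   # I_15* ~= 0.042746233
for n in range(N, 0, -1):
    I = (1 - I) / n          # backward recursion I_{n-1} = (1 - I_n) / n
print(f"I_0* = {I:.8f}")      # ~= 0.63212056
print(f"true = {1 - 1 / math.e:.8f}")   # I_0 = 1 - 1/e
```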
Stability
• Algorithm is stable if small changes in the initial data
produce correspondingly small changes in the final results.
• Stability of algorithms is analogous to conditioning of
problems.
• From point of view of backward error analysis, algorithm
is stable if result produced is the exact solution to a nearby
problem.
• For stable algorithm, effect of computational error is no
worse than effect of small data error in input.
• An algorithm is unstable if it is not stable.
• An algorithm is conditionally stable if it is stable only for certain choices of initial data.
• Suppose an error with magnitude $E_0$ is introduced at some stage in the calculations, and the magnitude of the error after $n$ subsequent operations is denoted by $E_n$.
• If $E_n \approx C\,n\,E_0$, where $C$ is a constant independent of $n$, the growth of the error is said to be linear.
• If $E_n \approx C^n E_0$ for some $C > 1$, the growth of the error is called exponential.
Example: The recursive equation
$$p_n = \frac{10}{3}\,p_{n-1} - p_{n-2}, \qquad n = 2, 3, \ldots$$
has the solution
$$p_n = c_1\left(\frac{1}{3}\right)^n + c_2\,3^n$$
for any constants $c_1$ and $c_2$.

If $p_0 = 1$ and $p_1 = \frac{1}{3}$, we have $c_1 = 1$, $c_2 = 0$, and $p_n = \left(\frac{1}{3}\right)^n$.

Suppose that five-digit rounding is used to compute $p_n$. Then $\hat{p}_0 = 1.0000$ and $\hat{p}_1 = 0.33333$, which modifies the constants to $\hat{c}_1 = 1.0000$ and $\hat{c}_2 = -0.12500 \times 10^{-5}$. Therefore
$$\hat{p}_n = 1.0000\left(\frac{1}{3}\right)^n - 0.12500 \times 10^{-5} \cdot 3^n$$
The round-off error
$$p_n - \hat{p}_n = 0.12500 \times 10^{-5}\,(3^n)$$
grows exponentially with $n$.
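A rough simulation (my own; as a simplification it rounds only the result of each step to five significant digits, not every intermediate operation) makes the $3^n$ mode visible:

```python
def rnd(x):
    """Round x to five significant digits, mimicking 5-digit arithmetic."""
    return float(f"{x:.4e}")

# p_n = (10/3) p_{n-1} - p_{n-2}; with p_0 = 1, p_1 = 1/3 the exact
# solution is (1/3)^n, but rounding excites the hidden 3^n mode.
p_prev, p = rnd(1.0), rnd(1.0 / 3.0)          # 1.0000, 0.33333
for n in range(2, 15):
    p_prev, p = p, rnd(rnd(10.0 / 3.0) * p - p_prev)
    print(f"n = {n:2d}   computed = {p: .5e}   exact = {(1/3)**n: .5e}")
```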
• Linear growth of error is usually unavoidable, and
when C and E0 are small the results are generally
acceptable.
• Exponential growth of error should be avoided, since the term $C^n$ becomes large for even relatively small values of $n$. This leads to unacceptable inaccuracies, regardless of the size of $E_0$.
• As a consequence, an algorithm that exhibits linear
growth of error is stable, whereas an algorithm
exhibiting exponential error growth is unstable.
Accuracy
• Accuracy: closeness of computed solution to the true
solution of a problem
• Stability alone does not guarantee accurate results
• Accuracy depends on conditioning of the problem as well
as the stability of the algorithm.
• Inaccuracy can result from applying stable algorithm to
ill-conditioned problem or unstable algorithm to well-
conditioned problem
• Applying stable algorithm to well-conditioned problem
yields accurate solution.
Precision
• Precision: closeness of computed solutions to each other after repeated sampling.

[Figure: four targets illustrating the combinations: neither precise nor accurate; precise but not accurate; accurate but not precise; both precise and accurate.]
Algorithm
Algorithm: An algorithm is a procedure that
describes, in an unambiguous manner,
a finite sequence of steps
to be performed in a specified order.
The objective of an algorithm:
• to implement a procedure; ( direct )
• to solve a problem ( ultimate )
or approximate a solution to the problem
Example: An algorithm to compute
$$\sum_{i=1}^{N} x_i = x_1 + x_2 + \cdots + x_N,$$
where $N$ and the numbers $x_1, x_2, \ldots, x_N$ are given, is described by the following:

INPUT $N$, $x_1, x_2, \ldots, x_N$.
OUTPUT $SUM = \sum_{i=1}^{N} x_i$.
Step 1  Set $SUM = 0$.  (Initialize the accumulator.)
Step 2  For $i = 1, 2, \ldots, N$ do: set $SUM = SUM + x_i$.  (Add the next term.)
Step 3  OUTPUT $SUM$; STOP.
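The same three steps, transcribed into Python as a quick sanity check (a direct transcription, not part of the slides):

```python
def summation(xs):
    """Compute SUM = x_1 + x_2 + ... + x_N following the three steps."""
    total = 0.0            # Step 1: initialize the accumulator
    for x in xs:           # Step 2: add the next term
        total += x
    return total           # Step 3: output SUM

print(summation([1.0, 2.0, 3.0, 4.0]))   # 10.0
```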
• Very often we use iterative techniques involving sequences. In general, we would like the technique to converge as rapidly as possible. The following definition is used to compare the convergence rates of various methods.
• Clearly, the sequences $\{1/n\}_{n=1}^{\infty}$ and $\{1/n^2\}_{n=1}^{\infty}$ both converge to zero.
• It should be observed that $\{1/n^2\}$ converges to zero more rapidly than $\{1/n\}$.
• Definition: Suppose $\{\beta_n\}_{n=1}^{\infty}$ is a sequence known to converge to zero, and $\{\alpha_n\}_{n=1}^{\infty}$ converges to a number $\alpha$. If a positive constant $K$ exists with
$$|\alpha_n - \alpha| \leq K\,|\beta_n| \quad \text{for large } n,$$
then we say that $\{\alpha_n\}_{n=1}^{\infty}$ converges to $\alpha$ with rate of convergence $O(\beta_n)$.

Any sequence $\{\beta_n\}_{n=1}^{\infty}$ can be used for the comparison. However, in nearly every situation we use
$$\beta_n = \frac{1}{n^p}$$
for some number $p > 0$. We are generally interested in the largest value of $p$ with $\alpha_n = \alpha + O(1/n^p)$.
Example: Suppose that, for $n \geq 1$,
$$\alpha_n = \frac{n+1}{n^2} \qquad \text{and} \qquad \hat{\alpha}_n = \frac{n+3}{n^3}$$
If we let $\beta_n = \frac{1}{n}$ and $\hat{\beta}_n = \frac{1}{n^2}$, we see that
$$|\alpha_n - 0| = \frac{n+1}{n^2} \leq \frac{n+n}{n^2} = 2 \cdot \frac{1}{n} = 2\beta_n$$
$$|\hat{\alpha}_n - 0| = \frac{n+3}{n^3} \leq \frac{n+3n}{n^3} = 4 \cdot \frac{1}{n^2} = 4\hat{\beta}_n$$
So
$$\alpha_n = 0 + O\!\left(\frac{1}{n}\right) \qquad \text{and} \qquad \hat{\alpha}_n = 0 + O\!\left(\frac{1}{n^2}\right)$$
The rate of convergence of $\alpha_n$ to zero is similar to the convergence of $1/n$ to zero.
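A quick numerical check (mine, not from the slides): $n\,\alpha_n$ and $n^2\,\hat{\alpha}_n$ stay bounded, which is exactly what the two O-statements assert:

```python
# n * a_n is bounded by 2 and n**2 * a_hat_n by 4 for all n >= 1,
# matching the constants K = 2 and K = 4 derived above.
for n in [10, 100, 1000, 10000]:
    a_n = (n + 1) / n**2
    a_hat_n = (n + 3) / n**3
    print(f"n = {n:6d}   n*a_n = {n * a_n:.4f}   n^2*a_hat_n = {n**2 * a_hat_n:.4f}")
```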
• Definition: Suppose that $\lim_{h \to 0} G(h) = 0$ and $\lim_{h \to 0} F(h) = L$. If a positive constant $K$ exists with
$$|F(h) - L| \leq K\,|G(h)| \quad \text{for sufficiently small } h,$$
then we write $F(h) = L + O(G(h))$.

Similarly, the functions we use for comparison generally have the form $G(h) = h^p$, where $p > 0$. We are interested in the largest value of $p$ for which $F(h) = L + O(h^p)$.
Example: The third Taylor polynomial gives
$$\cos h = 1 - \frac{1}{2}h^2 + \frac{1}{24}h^4 \cos \xi(h)$$
for some $\xi(h)$ between zero and $h$. Consequently,
$$\cos h + \frac{1}{2}h^2 = 1 + \frac{1}{24}h^4 \cos \xi(h)$$
This implies that
$$\cos h + \frac{1}{2}h^2 = 1 + O(h^4),$$
since
$$\left|\cos h + \frac{1}{2}h^2 - 1\right| = \left|\frac{1}{24}\cos \xi(h)\right| h^4 \leq \frac{1}{24}h^4$$
The implication is that as $h \to 0$, $\cos h + \frac{1}{2}h^2$ converges to its limit, 1, about as fast as $h^4$ converges to 0.
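A quick numerical check (mine, not from the slides) of the $O(h^4)$ claim:

```python
import math

# (cos h + h^2/2 - 1) / h^4 should approach 1/24 ~ 0.0416667 as h -> 0,
# since cos(xi(h)) -> 1; this confirms the O(h^4) statement above.
for h in [0.5, 0.1, 0.05, 0.01]:
    ratio = (math.cos(h) + h ** 2 / 2 - 1) / h ** 4
    print(f"h = {h:5.2f}   ratio = {ratio:.7f}")
```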