Lecture: Polynomial Interpolation (Annotated)
Polynomial Interpolation
In the first half of the course: numerical linear algebra (NLA) [discrete].
In the second half: numerical approximation of functions [continuous].
Motivating question: for a function $f$, can we find a simpler function $g$ that approximates it?
Why? In practice we may only know $f$ at a discrete number of points.
Simpler $\Rightarrow$ interpretable, efficient to compute with.
This lecture: polynomial interpolation.
Task: given data $(x_i, f(x_i))$, $0 \le i \le n$, the goal is to find $g$ such that $g(x_i) = f(x_i)$.
How? Simple functions can be constructed by linear combination of simple basis functions:
$$g(x) = \sum_{j=0}^{n} c_j \phi_j(x) = c_0\phi_0(x) + c_1\phi_1(x) + \cdots + c_n\phi_n(x).$$
Here:
① $\phi_j(x)$, $0 \le j \le n$, are called basis functions (pre-determined);
② $c_j$, $0 \le j \le n$, are scalars determined by the data.
How many basis functions? Typically $n+1$, so that # unknowns = # data points. We also make an important assumption on the basis functions (see the remark on linear independence below).
How do we find the $c_j$? Let $g(x) = \sum_{j=0}^{n} c_j \phi_j(x)$. We know $g(x_i) = f(x_i)$, i.e. $\sum_{j=0}^{n} c_j \phi_j(x_i) = f(x_i)$ for each $i$. In matrix form this is the linear system $Ac = f$:
$$\begin{bmatrix} \phi_0(x_0) & \cdots & \phi_n(x_0) \\ \vdots & & \vdots \\ \phi_0(x_n) & \cdots & \phi_n(x_n) \end{bmatrix}
\begin{bmatrix} c_0 \\ \vdots \\ c_n \end{bmatrix}
= \begin{bmatrix} f(x_0) \\ \vdots \\ f(x_n) \end{bmatrix}.$$

Remark: if the $\phi_j$, $0 \le j \le n$, are linearly independent (as vectors of values at the nodes), then $A$ is invertible (non-singular), because $Ac = 0 \Rightarrow c = 0$.
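A minimal Python sketch of this general framework (the function names, basis choice, and data are illustrative, not from the notes): build $A_{ij} = \phi_j(x_i)$ column by column and solve $Ac = f$.

```python
import numpy as np

def fit_coefficients(basis, x_nodes, f_vals):
    """Solve A c = f, where A[i, j] = basis[j](x_i)."""
    A = np.column_stack([phi(x_nodes) for phi in basis])
    return np.linalg.solve(A, f_vals)

# Illustrative usage with the monomial basis {1, x, x^2}:
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]
x_nodes = np.array([0.0, 1.0, 2.0])
f_vals = np.array([1.0, 2.0, 5.0])
print(fit_coefficients(basis, x_nodes, f_vals))   # coefficients c_0, c_1, c_2
```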
Today: $\phi_j(x) = x^j$ (monomials). In this case, let us denote the interpolant by $p_n$. Given $p_n(x_i) = f(x_i)$, we have $Ac = f$ with
$$A = \begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & & & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{bmatrix},$$
the Vandermonde matrix.
Cost of the approach: solving $Ac = f$ by Gaussian elimination costs $\tfrac{2}{3}n^3 + O(n^2)$ flops. This gives us $c$. We then evaluate $p_n$ by nested multiplication (Horner's rule):
$$p_n(x) = (\cdots((c_n x + c_{n-1})x + c_{n-2})x + c_{n-3})\cdots,$$
at a cost of about $2n$ flops per evaluation point.
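A minimal Python sketch of this monomial-basis approach (the helper names and test data are illustrative): build the Vandermonde matrix, solve for the coefficients, then evaluate with Horner's rule.

```python
import numpy as np

def monomial_interpolant(x_nodes, f_vals):
    """Solve the Vandermonde system A c = f for the monomial coefficients c."""
    A = np.vander(x_nodes, increasing=True)   # A[i, j] = x_i**j
    return np.linalg.solve(A, f_vals)         # ~(2/3) n^3 flops (Gaussian elimination / LU)

def horner(c, x):
    """Evaluate p_n(x) = c[0] + c[1] x + ... + c[n] x^n by nested multiplication (~2n flops)."""
    p = c[-1]
    for coeff in reversed(c[:-1]):
        p = p * x + coeff
    return p

# Illustrative usage (hypothetical data):
x_nodes = np.array([0.0, 1.0, 2.0, 3.0])
f_vals = np.sin(x_nodes)
c = monomial_interpolant(x_nodes, f_vals)
print(horner(c, 1.5), np.sin(1.5))            # interpolant vs. true value
```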
What is the problem with this algorithm?
① Fact: the Vandermonde matrix is very ill-conditioned! (Just take two nodes $x_i$, $x_j$ very close together; solving for $c$ is then not very stable. See the sketch after ② below.)
② The value of $c_j$ is not very informative about the behavior of $p_n$ and, in particular, about the data. E.g., if we change $f(x_i)$ by a little, how does $c$ change? Not easy to answer in the above framework.
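A quick numerical check of point ① (a sketch; the node counts and interval are illustrative): the condition number of the Vandermonde matrix on equispaced nodes grows rapidly with $n$.

```python
import numpy as np

# Condition number of the Vandermonde matrix on n+1 equispaced nodes in [-1, 1]
for n in (5, 10, 15, 20):
    x = np.linspace(-1.0, 1.0, n + 1)
    A = np.vander(x, increasing=True)
    print(n, np.linalg.cond(A))   # grows roughly exponentially in n
```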
How to improve? Idea: can we find basis functions such that
$$p_n(x) = \sum_{j=0}^{n} f(x_j)\,\phi_j(x)\,?$$
Q: can we find such $\phi_j$? Since we need $p_n(x_i) = f(x_i)$, we can take
$$\phi_j(x) = \begin{cases} 1, & x = x_j, \\ 0, & x = x_i,\ i \ne j. \end{cases}$$
Since $\phi_j$ is a polynomial with this property,
$$\phi_j(x) = \frac{(x - x_0)\cdots(x - x_{j-1})(x - x_{j+1})\cdots(x - x_n)}{(x_j - x_0)\cdots(x_j - x_{j-1})(x_j - x_{j+1})\cdots(x_j - x_n)}.$$
This $\phi_j$ is called the Lagrange polynomial, often denoted by $L_j$. The polynomial interpolant is then
$$p_n(x) = \sum_{j=0}^{n} f(x_j)\,L_j(x). \qquad (*)$$
Algorithm: ① compute the Lagrange polynomials $L_j$, $0 \le j \le n$; ② evaluate equation $(*)$ to get $p_n(x)$.
Advantages: (a) no need to solve an ill-conditioned linear system; (b) once some $f(x_i)$ changes, $p_n$ can be updated easily; (c) the idea is simple to implement.
Both steps ① and ② can be understood as evaluation cost. Here: computing each $L_j(x)$ costs $O(n)$, so the total cost of evaluating $p_n(x)$ is $O(n^2)$ per point (see the sketch below).
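A sketch of this basic Lagrange algorithm in Python (function names are illustrative): each evaluation point costs $O(n^2)$ because every $L_j(x)$ is rebuilt from scratch.

```python
def lagrange_eval(x_nodes, f_vals, x):
    """Evaluate p_n(x) = sum_j f(x_j) L_j(x); O(n^2) work per evaluation point."""
    p = 0.0
    n = len(x_nodes)
    for j in range(n):
        # L_j(x) = prod_{i != j} (x - x_i) / (x_j - x_i)
        Lj = 1.0
        for i in range(n):
            if i != j:
                Lj *= (x - x_nodes[i]) / (x_nodes[j] - x_nodes[i])
        p += f_vals[j] * Lj
    return p
```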
A refinement: denote
$$L(x) = \prod_{i=0}^{n} (x - x_i), \qquad w_j = \frac{1}{\prod_{i \ne j} (x_j - x_i)}.$$
Then $L_j(x) = L(x)\,\dfrac{w_j}{x - x_j}$, and
$$p_n(x) = L(x) \sum_{j=0}^{n} \frac{w_j}{x - x_j}\, f(x_j).$$
[This makes the shared part $L(x)$ only computed once!]
Cost: pre-computing $w_j$, $0 \le j \le n$, is $O(n^2)$ (done once). After that, evaluating $L(x)$ and the weighted sum costs about $5n$ flops per point, i.e. $O(n)$ evaluation cost (see the sketch below).
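A sketch of this improved formula (sometimes called the first barycentric form), again with illustrative names and assuming NumPy arrays as inputs: the weights are precomputed once in $O(n^2)$, and every subsequent evaluation costs $O(n)$.

```python
import numpy as np

def barycentric_weights(x_nodes):
    """Precompute w_j = 1 / prod_{i != j} (x_j - x_i); O(n^2), done once."""
    n = len(x_nodes)
    w = np.ones(n)
    for j in range(n):
        for i in range(n):
            if i != j:
                w[j] /= (x_nodes[j] - x_nodes[i])
    return w

def improved_lagrange_eval(x_nodes, f_vals, w, x):
    """Evaluate p_n(x) = L(x) * sum_j w_j f(x_j) / (x - x_j); O(n) per point."""
    diffs = x - x_nodes
    if np.any(diffs == 0.0):                      # x hits a node exactly
        return f_vals[np.argmin(np.abs(diffs))]
    L = np.prod(diffs)
    return L * np.sum(w * f_vals / diffs)
```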
Polynomial Interpolation
Goal: For a function $f$, find a simpler function $g \approx f$.
Linear Combinations of Basis Functions

Interpolants
Polynomial interpolation

We consider polynomial interpolation
$$v(x) = \sum_{j=0}^{n} c_j x^j = c_0 + c_1 x + \cdots + c_n x^n.$$
Unique interpolating polynomial and its disadvantages

Disadvantages:
- The calculated coefficients $c_j$ are not directly indicative of the interpolated function, and they may completely change if we wish to slightly modify the interpolation problem.
- The Vandermonde matrix $X$ is often ill-conditioned, so the coefficients thus determined are prone to inaccuracies.
- This approach requires about $\frac{2}{3}n^3$ operations (flops) to carry out Gaussian elimination for the construction stage.
Lagrange interpolation

It would be nice if a polynomial basis is found such that $c_j = f(x_j)$:
$$p_n(x) = \sum_{j=0}^{n} f(x_j)\,\phi_j(x).$$
An Improved Lagrange Formula

Denote
$$L(x) = (x - x_0)(x - x_1)\cdots(x - x_n) = \prod_{i=0}^{n} (x - x_i).$$
Then
$$L_j(x) = L(x)\,\frac{w_j}{x - x_j},$$
and
$$p_n(x) = \sum_{j=0}^{n} L_j(x)\,f(x_j) = L(x) \sum_{j=0}^{n} \frac{w_j}{x - x_j}\, f(x_j).$$
Summary

$\phi_j(x) = x^j$ (monomial basis)
Existence of an adaptive interpolant

Consider
$$p_n(x) = c_0 + c_1(x - x_0) + \cdots + c_n(x - x_0)(x - x_1)\cdots(x - x_{n-1}).$$
The basis functions satisfy
$$\phi_j(x_i) = 0, \quad i = 0, 1, \cdots, j - 1, \qquad \phi_j(x_j) \ne 0.$$
The coefficient matrix is lower triangular and the elements on its diagonal are nonzero, and thus the coefficient matrix is non-singular:
$$\begin{bmatrix}
1 & & & \cdots & 0 \\
1 & x_1 - x_0 & & & \\
1 & x_2 - x_0 & (x_2 - x_0)(x_2 - x_1) & & \vdots \\
\vdots & \vdots & & \ddots & \\
1 & x_n - x_0 & \cdots & \cdots & \prod_{j=0}^{n-1}(x_n - x_j)
\end{bmatrix}$$
Representation in terms of divided differences

Determine $c_0$ using the condition $p_n(x_0) = f(x_0)$: this gives $c_0 = f(x_0)$.
Newton divided difference interpolation formula

$$p_n(x) = f[x_0] + f[x_0, x_1](x - x_0) + \cdots + f[x_0, x_1, \ldots, x_n](x - x_0)(x - x_1)\cdots(x - x_{n-1})$$

Newton's form
Example

Suppose $f(x_0) = 1$, $f(x_1) = 3$, $f(x_2) = 3$, where $(x_0, x_1, x_2) = (1, 2, 4)$. Then
$$f[x_0, x_1] = \frac{3 - 1}{2 - 1} = 2, \quad f[x_1, x_2] = \frac{3 - 3}{4 - 2} = 0, \quad f[x_0, x_1, x_2] = \frac{0 - 2}{4 - 1} = -\frac{2}{3},$$
so
$$p_2(x) = 1 + (x - 1)\left(2 - \frac{2}{3}(x - 2)\right).$$
If we wish to add another data point, say, $(x_3, f(x_3)) = (5, 4)$, we calculate
$$f[x_3] = 4, \quad f[x_2, x_3] = \frac{4 - 3}{5 - 4} = 1, \quad f[x_1, x_2, x_3] = \frac{1 - 0}{5 - 2} = \frac{1}{3},$$
$$f[x_0, x_1, x_2, x_3] = \frac{(1/3) - (-2/3)}{5 - 1} = \frac{1}{4}.$$
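A sketch of the divided-difference construction in Python (the function names are illustrative): on the example data it reproduces the coefficients $1, 2, -2/3$ above, and appending the extra point $(5, 4)$ only adds one new coefficient, $1/4$.

```python
import numpy as np

def divided_differences(x, f):
    """Return the Newton coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    c = np.array(f, dtype=float)
    n = len(x)
    for k in range(1, n):
        # overwrite in place: c[j] becomes f[x_{j-k}, ..., x_j]
        for j in range(n - 1, k - 1, -1):
            c[j] = (c[j] - c[j - 1]) / (x[j] - x[j - k])
    return c

def newton_eval(x_nodes, c, x):
    """Evaluate the Newton form by nested multiplication."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (x - x_nodes[k]) + c[k]
    return p

print(divided_differences([1, 2, 4], [1, 3, 3]))         # [1, 2, -2/3]
print(divided_differences([1, 2, 4, 5], [1, 3, 3, 4]))   # [1, 2, -2/3, 1/4]
```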
Note: FLOPs complexity of Newton

Constructing the divided-difference table costs $O(n^2)$ flops; evaluating the Newton form by nested multiplication costs $O(n)$ flops per point.
The error in polynomial interpolation
Runge's phenomenon

Consider the Runge function:
$$f(x) = (1 + 25x^2)^{-1}.$$
Recall that
$$e(x) = f(x) - p(x) = \frac{f^{(n+1)}(\xi_x)}{(n + 1)!} \prod_{i=0}^{n} (x - x_i).$$
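A sketch illustrating the phenomenon (the degrees and sample grid are illustrative, and SciPy's barycentric interpolator stands in for the formulas above): interpolating the Runge function at equispaced nodes makes the maximum error grow with $n$, worst near the interval endpoints.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)    # Runge function
xx = np.linspace(-1.0, 1.0, 2001)           # fine grid to estimate the max error

for n in (5, 10, 15, 20):
    nodes = np.linspace(-1.0, 1.0, n + 1)   # equispaced interpolation nodes
    p = BarycentricInterpolator(nodes, f(nodes))
    print(n, np.max(np.abs(f(xx) - p(xx)))) # error grows with n
```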
Chebyshev interpolation

Suppose we are free to choose the $n + 1$ data abscissae, $x_0, x_1, \cdots, x_n$: how should we choose these points?

Minimizing $\max_x \bigl|\prod_{i=0}^{n} (x - x_i)\bigr|$ leads to the choice of Chebyshev points. These points are defined on the interval $[-1, 1]$ by
$$x_i = \cos\left(\frac{2i + 1}{2(n + 1)}\,\pi\right), \qquad i = 0, \ldots, n.$$
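A companion sketch to the Runge example above (again with illustrative degrees and SciPy's barycentric interpolator): replacing the equispaced nodes with Chebyshev points makes the maximum error decrease as $n$ grows.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)    # Runge function
xx = np.linspace(-1.0, 1.0, 2001)

def chebyshev_points(n):
    """x_i = cos((2i+1) pi / (2(n+1))), i = 0..n, on [-1, 1]."""
    i = np.arange(n + 1)
    return np.cos((2 * i + 1) * np.pi / (2 * (n + 1)))

for n in (5, 10, 20, 40):
    nodes = chebyshev_points(n)
    p = BarycentricInterpolator(nodes, f(nodes))
    print(n, np.max(np.abs(f(xx) - p(xx)))) # error shrinks as n grows
```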