Numerical Analysis

Polynomial Interpolation

Instructor: Yifan Chen

Courant Institute, New York University

Spring
Course overview: the first half of the course covered numerical linear algebra (NLA), which is discrete; the second half covers numerical approximation of functions, which is continuous.

Motivating question: given a function f, can we use a simpler function g to approximate it? Why? Because in practice we may only know a discrete number of values f(x_i), 0 <= i <= n, and a simpler g is interpretable and efficient to compute with.

This lecture: polynomial interpolation.

Task: given (x_i, f(x_i)), 0 <= i <= n, the goal is to find g such that g(x_i) = f(x_i).

How? Simple functions can be constructed by linear combinations of simple basis functions:

    g(x) = sum_{j=0}^{n} c_j phi_j(x) = c_0 phi_0(x) + c_1 phi_1(x) + ... + c_n phi_n(x).

Here:
(1) phi_j(x), 0 <= j <= n, are called basis functions; they are pre-determined.
(2) c_j, 0 <= j <= n, are scalars determined by the data.

How many basis functions? Typically n + 1 (# unknowns = # data).

We have an important assumption: phi_j(x), 0 <= j <= n, are linearly independent. That is, sum_j c_j phi_j(x) = 0 for all x implies c_j = 0 for all 0 <= j <= n.

How do we find the c_j? Let g(x) = sum_j c_j phi_j(x). We know g(x_i) = f(x_i), i.e. sum_j c_j phi_j(x_i) = f(x_i), which is the linear system A c = b with A_{ij} = phi_j(x_i) and b_i = f(x_i):

    [ phi_0(x_0) ... phi_n(x_0) ]   [ c_0 ]   [ f(x_0) ]
    [    ...            ...     ] . [ ... ] = [  ...   ]
    [ phi_0(x_n) ... phi_n(x_n) ]   [ c_n ]   [ f(x_n) ]

Remark: {phi_j} linearly independent implies A is invertible (non-singular), because A c = 0 forces c = 0.

Today: phi_j(x) = x^j (polynomials). In this case, denote

    p_n(x) = c_n x^n + c_{n-1} x^{n-1} + ... + c_1 x + c_0.

Given p_n(x_i) = f(x_i), the matrix A becomes the Vandermonde matrix with rows [1, x_i, x_i^2, ..., x_i^n].

Cost of the approach: solving A c = b by Gaussian elimination takes (2/3) n^3 + O(n^2) flops. This gives us c.

Evaluation cost: writing p_n(x) = c_n x^n + c_{n-1} x^{n-1} + ... + c_1 x + c_0 in nested (Horner) form,

    p_n(x) = (...((c_n x + c_{n-1}) x + c_{n-2}) x + ...) x + c_0,

evaluation costs about 2n flops.

What is the problem with this algorithm?
(1) Fact: the Vandermonde matrix is very ill-conditioned (just take x_i and x_j close; the matrix becomes very close to singular). Numerically, the algorithm is not very stable.
(2) The value of c_j is not very informative about the behavior of p_n and, in particular, about the data. E.g., if we change f(x_i) a little, how do the c_j change? Not easy to answer in the above framework.

How to improve? Idea: can we find basis functions such that the matrix A in "A c = b" is the identity? This means c_i = b_i = f(x_i) for 0 <= i <= n, so

    p_n(x) = sum_{j=0}^{n} f(x_j) phi_j(x).

Q: can we find such phi_j? Since p_n(x_i) = f(x_i), we can take phi_j with

    phi_j(x_i) = 1 if i = j,  0 if i != j.

Since phi_j is a polynomial,

    phi_j(x) = [(x - x_0) ... (x - x_{j-1}) (x - x_{j+1}) ... (x - x_n)] / [(x_j - x_0) ... (x_j - x_{j-1}) (x_j - x_{j+1}) ... (x_j - x_n)].

This is the Lagrange polynomial, often denoted L_j. The polynomial interpolant is

    p_n(x) = sum_{j=0}^{n} f(x_j) L_j(x),   (*)

which satisfies p_n(x_i) = f(x_i) for 0 <= i <= n.

Algorithm: (1) compute the Lagrange polynomials L_j, 0 <= j <= n; (2) evaluate equation (*) to get p_n(x).

Advantages: (a) no need to solve an ill-conditioned linear system; (b) once some f(x_i) changes, we can update p_n easily.

What is the cost? We have a cost C_1 to determine the Lagrange polynomials and an evaluation cost C_2. In a simple implementation, both C_1 and C_2 can be understood as evaluation cost. Here: computing one L_j(x) is O(n), so the total cost of evaluating p_n at a point is O(n^2).

Improved idea: denote

    L(x) = prod_{i=0}^{n} (x - x_i),    w_j = 1 / prod_{i != j} (x_j - x_i).

Then L_j(x) = L(x) w_j / (x - x_j), and

    p_n(x) = sum_j f(x_j) L_j(x) = L(x) sum_{j=0}^{n} w_j f(x_j) / (x - x_j),

which makes the shared part L(x) computed only once!

Cost: pre-compute w_j, 0 <= j <= n: O(n^2). Evaluating L(x): 2n; evaluating the weighted sum: 3n. Total evaluation cost: about 5n, i.e. O(n) per evaluation point.
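The nested (Horner) evaluation above can be sketched in code. This is a minimal illustration, not from the lecture; the function name `horner` and the coefficient ordering [c_0, ..., c_n] are my own choices.

```python
def horner(coeffs, x):
    """Evaluate p_n(x) = c_0 + c_1 x + ... + c_n x^n via Horner's nested form.

    coeffs is [c_0, c_1, ..., c_n]; the loop does one multiply and one add
    per coefficient, so the cost is about 2n flops.
    """
    result = 0.0
    for c in reversed(coeffs):  # start from the leading coefficient c_n
        result = result * x + c
    return result
```

For example, `horner([1, 2, 3], 2)` computes 1 + 2*2 + 3*4 = 17.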
Polynomial Interpolation

Goal: For a function f, find a simpler function g ≈ f.

Particular task: interpolation. Given a set of data points {(x_i, f(x_i))} for 1 <= i <= N, we aim to find g(x) that interpolates the data, i.e., g(x_i) = f(x_i) for 1 <= i <= N.
Linear Combinations of Basis Functions

We assume a linear form for interpolating functions g(x):

    g(x) = sum_{j=0}^{n} c_j phi_j(x) = c_0 phi_0(x) + ... + c_n phi_n(x)

where {c_j} are unknown coefficients or parameters determined from the data, and {phi_j(x)} are predetermined basis functions.
- We assume that the number of basis functions equals the number of data points.
- These basis functions are further assumed to be linearly independent, which means that if coefficients {c_j} are found such that g(x) = 0 for all x, then all these coefficients themselves must vanish: c_j = 0.
Interpolants

The resulting linear system of equations is

    [ phi_0(x_0)  phi_1(x_0)  ...  phi_n(x_0) ]   [ c_0 ]   [ f(x_0) ]
    [ phi_0(x_1)  phi_1(x_1)  ...  phi_n(x_1) ]   [ c_1 ]   [ f(x_1) ]
    [     ...         ...              ...    ] . [ ... ] = [  ...   ]
    [ phi_0(x_n)  phi_1(x_n)  ...  phi_n(x_n) ]   [ c_n ]   [ f(x_n) ]

Polynomial interpolation: the simplest form of representing a polynomial implies the choice of a monomial basis: phi_j(x) = x^j.

Trigonometric interpolation: phi_n(x) = e^{inx}.
Polynomial interpolation

We consider polynomial interpolation

    v(x) = sum_{j=0}^{n} c_j x^j = c_0 + c_1 x + ... + c_n x^n,

which leads to a system of (n + 1) linear equations in the (n + 1) unknowns c_0, c_1, ..., c_n, given by

    [ 1  x_0  x_0^2  ...  x_0^n ]   [ c_0 ]   [ f(x_0) ]
    [ 1  x_1  x_1^2  ...  x_1^n ]   [ c_1 ]   [ f(x_1) ]
    [ ...                       ] . [ ... ] = [  ...   ]
    [ 1  x_n  x_n^2  ...  x_n^n ]   [ c_n ]   [ f(x_n) ]

The coefficient matrix X is called a Vandermonde matrix. For distinct nodes,

    det(X) = prod_{0 <= i < j <= n} (x_j - x_i) != 0.
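As an illustration, the Vandermonde system above can be assembled and solved with NumPy. The data values here are made up for the sketch; `np.vander(..., increasing=True)` produces exactly the rows [1, x_i, x_i^2, ..., x_i^n] shown above.

```python
import numpy as np

# Hypothetical data points (x_i, f(x_i)) for the sketch.
x = np.array([0.0, 1.0, 2.0, 3.0])
f = np.array([1.0, 2.0, 0.0, 5.0])

X = np.vander(x, increasing=True)  # Vandermonde matrix, rows [1, x_i, ..., x_i^n]
c = np.linalg.solve(X, f)          # coefficients c_0, ..., c_n

# The interpolant reproduces the data at the nodes
# (np.polyval expects the highest-degree coefficient first, hence c[::-1]).
assert np.allclose(np.polyval(c[::-1], x), f)
```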
Unique interpolating polynomial and its disadvantages

Polynomial Interpolant Unique Existence: For any real data points {(x_i, f(x_i))} with distinct abscissae x_i, there exists a unique polynomial v(x) of degree at most n which satisfies the interpolation conditions v(x_i) = f(x_i), i = 0, 1, ..., n.

Disadvantages:
- The calculated coefficients c_j are not directly indicative of the interpolated function, and they may completely change if we wish to slightly modify the interpolation problem.
- The Vandermonde matrix X is often ill-conditioned, so the coefficients thus determined are prone to inaccuracies.
- This approach requires about (2/3) n^3 operations (flops) to carry out Gaussian elimination for the construction stage.
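The ill-conditioning is easy to observe numerically. This small sketch (my own, not from the slides) prints the condition number of the Vandermonde matrix on equispaced nodes in [0, 1], which grows rapidly with n.

```python
import numpy as np

# Condition number of the Vandermonde matrix on n + 1 equispaced nodes
# in [0, 1]; larger condition numbers mean the computed coefficients
# lose more accuracy in floating-point arithmetic.
for n in [5, 10, 20]:
    x = np.linspace(0.0, 1.0, n + 1)
    cond = np.linalg.cond(np.vander(x, increasing=True))
    print(n, cond)
```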
Lagrange interpolation

It would be nice if a polynomial basis were found such that c_j = f(x_j):

    p_n(x) = sum_{j=0}^{n} f(x_j) phi_j(x).

Such a polynomial basis is provided by Lagrange interpolation.
- For this we define the Lagrange polynomials L_j(x), which are polynomials of degree n that satisfy

      L_j(x_i) = 0 if i != j,  1 if i = j.

- Given data f(x_i) as before, the unique polynomial interpolant of degree at most n can now be written as

      p_n(x) = sum_{j=0}^{n} f(x_j) L_j(x).
Properties of Lagrange polynomials

In the general case where n + 1 data locations x_i are specified, the Lagrange polynomials uniquely exist, because they are nothing other than polynomial interpolants. We also have an explicit formula:

    L_j(x) = [(x - x_0) ... (x - x_{j-1}) (x - x_{j+1}) ... (x - x_n)] / [(x_j - x_0) ... (x_j - x_{j-1}) (x_j - x_{j+1}) ... (x_j - x_n)]
           = prod_{i=0, i != j}^{n} (x - x_i) / (x_j - x_i).

With this, we have

    p_n(x) = sum_{j=0}^{n} f(x_j) L_j(x).

Lagrange interpolation leads to a formula requiring O(n^2) flops per evaluation point.
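The O(n^2)-per-evaluation formula can be sketched directly. The helper name `lagrange_eval` is my own; this is not the slides' code.

```python
def lagrange_eval(xs, fs, x):
    """Evaluate p_n(x) = sum_j f(x_j) L_j(x) at a single point x.

    Each L_j(x) is a product of n factors, so it costs O(n); summing over
    the n + 1 basis polynomials gives O(n^2) per evaluation point.
    """
    total = 0.0
    for j, xj in enumerate(xs):
        Lj = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                Lj *= (x - xi) / (xj - xi)
        total += fs[j] * Lj
    return total
```

With the lecture's example data (1, 1), (2, 3), (4, 3), the interpolant reproduces the data at the nodes and gives p_2(3) = 11/3.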
An Improved Lagrange Formula

Denote

    L(x) = (x - x_0)(x - x_1) ... (x - x_n) = prod_{i=0}^{n} (x - x_i)

and the barycentric weights as

    w_j = ( prod_{i != j} (x_j - x_i) )^{-1}.

Then

    L_j(x) = L(x) w_j / (x - x_j),

and

    p_n(x) = sum_{j=0}^{n} L_j(x) f(x_j) = L(x) sum_{j=0}^{n} w_j f(x_j) / (x - x_j).

FLOPs complexity: construction of the weights: O(n^2). Evaluation: O(n).
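The improved formula can be sketched as follows; the helper names `barycentric_weights` and `barycentric_eval` are hypothetical, and the node-coincidence check is an implementation detail not discussed on the slide.

```python
def barycentric_weights(xs):
    """Precompute w_j = 1 / prod_{i != j} (x_j - x_i); O(n^2), done once."""
    ws = []
    for j, xj in enumerate(xs):
        w = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                w *= (xj - xi)
        ws.append(1.0 / w)
    return ws

def barycentric_eval(xs, fs, ws, x):
    """Evaluate p_n(x) = L(x) * sum_j w_j f(x_j) / (x - x_j); O(n) per point."""
    L = 1.0
    s = 0.0
    for xj, fj, wj in zip(xs, fs, ws):
        if x == xj:  # exactly at a node: avoid 0/0, return the data value
            return fj
        L *= (x - xj)
        s += wj * fj / (x - xj)
    return L * s
```

The shared factor L(x) is accumulated once per evaluation point, which is what brings the cost down from O(n^2) to O(n).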
Summary

Standard monomial basis:

    phi_j(x) = x^j

Lagrange polynomial basis:

    phi_j(x) = L_j(x) = prod_{i=0, i != j}^{n} (x - x_i) / (x_j - x_i)

Newton polynomial basis (next class):

    phi_j(x) = prod_{i=0}^{j-1} (x - x_i)

    Basis name | phi_j(x) | Construction cost | Evaluation cost | Selling feature
    Monomial   | x^j      | (2/3) n^3         | 2n              | simple
    Lagrange   | L_j(x)   | n^2               | 5n              | c_j = f(x_j), most stable
Existence of an adaptive interpolant

Consider

    p_n(x) = c_0 + c_1 (x - x_0) + ... + c_n (x - x_0)(x - x_1) ... (x - x_{n-1}).

Given the form of the basis functions, we have

    phi_j(x_i) = 0 for i = 0, 1, ..., j - 1,  and  phi_j(x_j) != 0.

The coefficient matrix is lower triangular and the elements on its diagonal are nonzero, and thus the coefficient matrix is non-singular:

    [ 1                                                        0 ]
    [ 1  (x_1 - x_0)                                             ]
    [ 1  (x_2 - x_0)  (x_2 - x_0)(x_2 - x_1)                     ]
    [ ...                  ...                                   ]
    [ 1  (x_n - x_0)      ...      prod_{j=0}^{n-1} (x_n - x_j)  ]
Representation in terms of divided differences

Determine c_0 using the condition p_n(x_0) = f(x_0):

    f(x_0) = p_n(x_0) = c_0 + 0 + ... + 0  =>  c_0 = f(x_0).

Next, determine c_1 using the condition p_n(x_1) = f(x_1):

    f(x_1) = p_n(x_1) = c_0 + c_1 (x_1 - x_0)  =>  c_1 = (f(x_1) - f(x_0)) / (x_1 - x_0).

Next, impose the condition p_n(x_2) = f(x_2). The result can be arranged as

    c_2 = [ (f(x_2) - f(x_1)) / (x_2 - x_1) - (f(x_1) - f(x_0)) / (x_1 - x_0) ] / (x_2 - x_0).

Continue the process until all the c_j have been determined. The coefficient c_j in Newton's form is called the jth divided difference, denoted f[x_0, x_1, ..., x_j]:

    f[x_0] = c_0,  f[x_0, x_1] = c_1,  ...,  f[x_0, x_1, ..., x_n] = c_n.
Newton divided difference interpolation formula

Thus, the Newton divided difference interpolation formula is given by

    p_n(x) = f[x_0] + f[x_0, x_1] (x - x_0) + f[x_0, x_1, x_2] (x - x_0)(x - x_1)
             + ... + f[x_0, x_1, ..., x_n] (x - x_0)(x - x_1) ... (x - x_{n-1})
           = sum_{j=0}^{n} f[x_0, x_1, ..., x_j] ( prod_{i=0}^{j-1} (x - x_i) ).

The divided difference coefficients satisfy the recursive formula

    f[x_0, x_1, ..., x_j] = ( f[x_1, x_2, ..., x_j] - f[x_0, x_1, ..., x_{j-1}] ) / (x_j - x_0).
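The recursion can be sketched as an in-place divided difference computation together with nested evaluation of Newton's form. The function names are my own; the test data (1, 1), (2, 3), (4, 3), (5, 4) are the points used in the lecture's worked example.

```python
def divided_differences(xs, fs):
    """Return [f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]] via the recursion.

    Column l of the divided difference table is built in place, updating
    from the bottom up so earlier entries are still available.
    """
    coeffs = list(fs)
    n = len(xs)
    for l in range(1, n):
        for j in range(n - 1, l - 1, -1):
            coeffs[j] = (coeffs[j] - coeffs[j - 1]) / (xs[j] - xs[j - l])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Nested (Horner-like) evaluation of Newton's form; about 2n flops."""
    result = coeffs[-1]
    for c, xj in zip(reversed(coeffs[:-1]), reversed(xs[:-1])):
        result = result * (x - xj) + c
    return result
```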
Newton's form

Newton's representation is evolutionary, or recursive: having determined p_{n-1}(x), which interpolates at the first n data points, the polynomial p_n(x), which interpolates all previous data as well as (x_n, f(x_n)), can be easily constructed.

The recursion means that we do not have to build the coefficients from scratch each time we add a new interpolation point. Thus, we do not need to know all data points ahead of time; rather, we can proceed adaptively.
Example

Suppose f(x_0) = 1, f(x_1) = 3, f(x_2) = 3, where (x_0, x_1, x_2) = (1, 2, 4). Then

    f[x_0, x_1] = (3 - 1)/(2 - 1) = 2,  f[x_1, x_2] = (3 - 3)/(4 - 2) = 0,
    f[x_0, x_1, x_2] = (0 - 2)/(4 - 1) = -2/3,

    p_2(x) = 1 + (x - 1) ( 2 - (2/3)(x - 2) ).

If we wish to add another data point, say (x_3, f(x_3)) = (5, 4), we calculate

    f[x_3] = 4,  f[x_2, x_3] = (4 - 3)/(5 - 4) = 1,  f[x_1, x_2, x_3] = (1 - 0)/(5 - 2) = 1/3,

    f[x_0, x_1, x_2, x_3] = ( 1/3 - (-2/3) ) / (5 - 1) = 1/4.

    p_3(x) = p_2(x) + f[x_0, x_1, x_2, x_3] (x - x_0)(x - x_1)(x - x_2)
           = 1 + (x - 1) ( 2 - (2/3)(x - 2) ) + (1/4)(x - 1)(x - 2)(x - 4).

In nested form, this reads

    p_3(x) = 1 + (x - 1) ( 2 + (x - 2) ( -2/3 + (1/4)(x - 4) ) ).
Algorithms comparison

    Basis name | phi_j(x)                   | Construction cost | Evaluation cost | Selling feature
    Monomial   | x^j                        | (2/3) n^3         | 2n              | simple
    Lagrange   | L_j(x)                     | n^2               | 5n              | c_j = f(x_j), most stable
    Newton     | prod_{i=0}^{j-1} (x - x_i) | (3/2) n^2         | 2n              | adaptive
Note: FLOPs complexity of Newton

How do we use all of these recurrences and formulas in practice? Note that in order to compute gamma_{n,n} = f[x_0, x_1, ..., x_n] we must compute all of

    gamma_{j,l} = f[x_{j-l}, x_{j-l+1}, ..., x_j],  0 <= l <= j <= n.

Thus, we construct a divided difference table, which is a lower triangular array:

    i | x_i | f[x_i] | f[x_{i-1}, x_i]                         | f[x_{i-2}, x_{i-1}, x_i] | ... | f[x_{i-n}, ..., x_i]
    0 | x_0 | f(x_0) |
    1 | x_1 | f(x_1) | (f[x_1] - f[x_0]) / (x_1 - x_0)         |
    2 | x_2 | f(x_2) | (f[x_2] - f[x_1]) / (x_2 - x_1)         | f[x_0, x_1, x_2]         |
    ...
    n | x_n | f(x_n) | (f[x_n] - f[x_{n-1}]) / (x_n - x_{n-1}) | f[x_{n-2}, x_{n-1}, x_n] | ... | f[x_0, x_1, ..., x_n]
The error in polynomial interpolation

Theorem: Let f be a function in C^{n+1}[a, b], and let p be a polynomial of degree n that interpolates the function f at n + 1 distinct points x_0, x_1, ..., x_n in [a, b]. Then for each x in [a, b] there exists a point xi_x in [a, b] such that

    f(x) - p(x) = ( f^{(n+1)}(xi_x) / (n + 1)! ) prod_{i=0}^{n} (x - x_i).
Runge's phenomenon

Consider the Runge function:

    f(x) = (1 + 25 x^2)^{-1}

Figure: Polynomial interpolation at uniformly spaced abscissae can become worse with increasing polynomial degree n.
Runge's phenomenon

Recall that

    e(x) = f(x) - p(x) = ( f^{(n+1)}(xi_x) / (n + 1)! ) prod_{i=0}^{n} (x - x_i).

Runge's phenomenon is the consequence of two properties:
- The magnitude of the nth-order derivatives of this particular function grows quickly as n increases.
- The uniformly spaced abscissae lead to an increase of prod_{i=0}^{n} (x - x_i) as n increases.
Chebyshev interpolation

Suppose we are free to choose the n + 1 data abscissae x_0, x_1, ..., x_n: how should we choose these points?

Minimizing prod_{i=0}^{n} (x - x_i) leads to the choice of Chebyshev points. These points are defined on the interval [-1, 1] by

    x_i = cos( (2i + 1) pi / (2(n + 1)) ),  i = 0, ..., n.

For a general interval [a, b] we apply the affine transformation that maps [-1, 1] onto [a, b] to shift and scale the Chebyshev points. By the transformation, we redefine the interpolation abscissae as

    x_i = a + ( (b - a) / 2 ) (t_i + 1),  t_i in [-1, 1].
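As a rough numerical check (my own sketch, assuming NumPy's `polyfit`/`polyval` for the fitting step, which is not the construction the slides advocate), one can compare uniform and Chebyshev nodes on the Runge function:

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x ** 2)

def max_interp_error(nodes):
    """Fit the interpolant through the nodes (np.polyfit on exactly n + 1
    points interpolates) and measure the max error on a fine grid."""
    coeffs = np.polyfit(nodes, runge(nodes), len(nodes) - 1)
    grid = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(np.polyval(coeffs, grid) - runge(grid)))

n = 20
uniform = np.linspace(-1.0, 1.0, n + 1)
chebyshev = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
print(max_interp_error(uniform), max_interp_error(chebyshev))
```

At degree 20 the uniform-node error on [-1, 1] is large near the endpoints, while the Chebyshev-node error is small, matching the figures.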
Chebyshev interpolation

Figure: Polynomial interpolation at Chebyshev points. Results are much improved, especially for larger n.

You might also like