
Jim Lambers

MAT 772
Fall Semester 2010-11
Lecture 6 Notes

These notes correspond to Sections 6.4 and 6.5 in the text.

Hermite Interpolation
Suppose that the interpolation points are perturbed so that two neighboring points 𝑥𝑖 and 𝑥𝑖+1 ,
0 ≤ 𝑖 < 𝑛, approach each other. What happens to the interpolating polynomial? In the limit, as
𝑥𝑖+1 → 𝑥𝑖 , the interpolating polynomial 𝑝𝑛 (𝑥) not only satisfies 𝑝𝑛 (𝑥𝑖 ) = 𝑦𝑖 , but also the condition
    𝑝′𝑛 (𝑥𝑖 ) = lim_{𝑥𝑖+1 →𝑥𝑖 } [𝑦𝑖+1 − 𝑦𝑖 ]/[𝑥𝑖+1 − 𝑥𝑖 ].

It follows that in order to ensure uniqueness, the data must specify the value of the derivative of
the interpolating polynomial at 𝑥𝑖 .
In general, the inclusion of an interpolation point 𝑥𝑖 𝑘 times within the set 𝑥0 , . . . , 𝑥𝑛 must be
accompanied by specification of 𝑝𝑛^(𝑗) (𝑥𝑖 ), 𝑗 = 0, . . . , 𝑘 − 1, in order to ensure a unique solution.
These values are used in place of divided differences of identical interpolation points in Newton
interpolation.
Interpolation with repeated interpolation points is called osculatory interpolation, since it can
be viewed as the limit of distinct interpolation points approaching one another, and the term
“osculatory” is based on the Latin word for “kiss”.
In the case where each of the interpolation points 𝑥0 , 𝑥1 , . . . , 𝑥𝑛 is repeated exactly once, the
interpolating polynomial for a differentiable function 𝑓 (𝑥) is called the Hermite polynomial of 𝑓 (𝑥),
and is denoted by 𝐻2𝑛+1 (𝑥), since this polynomial must have degree 2𝑛 + 1 in order to satisfy the
2𝑛 + 2 constraints

    𝐻2𝑛+1 (𝑥𝑖 ) = 𝑓 (𝑥𝑖 ),    𝐻′2𝑛+1 (𝑥𝑖 ) = 𝑓 ′ (𝑥𝑖 ),    𝑖 = 0, 1, . . . , 𝑛.

To satisfy these constraints, we define, for 𝑖 = 0, 1, . . . , 𝑛,

𝐻𝑖 (𝑥) = [𝐿𝑖 (𝑥)]2 (1 − 2𝐿′𝑖 (𝑥𝑖 )(𝑥 − 𝑥𝑖 )),


𝐾𝑖 (𝑥) = [𝐿𝑖 (𝑥)]2 (𝑥 − 𝑥𝑖 ),

where, as before, 𝐿𝑖 (𝑥) is the 𝑖th Lagrange polynomial for the interpolation points 𝑥0 , 𝑥1 , . . . , 𝑥𝑛 .
It can be verified directly that these polynomials satisfy, for 𝑖, 𝑗 = 0, 1, . . . , 𝑛,

𝐻𝑖 (𝑥𝑗 ) = 𝛿𝑖𝑗 , 𝐻𝑖′ (𝑥𝑗 ) = 0,

𝐾𝑖 (𝑥𝑗 ) = 0, 𝐾𝑖′ (𝑥𝑗 ) = 𝛿𝑖𝑗 ,
where 𝛿𝑖𝑗 is the Kronecker delta: 𝛿𝑖𝑗 = 1 if 𝑖 = 𝑗, and 𝛿𝑖𝑗 = 0 if 𝑖 ≠ 𝑗.
It follows that

    𝐻2𝑛+1 (𝑥) = ∑_{𝑖=0}^{𝑛} [𝑓 (𝑥𝑖 )𝐻𝑖 (𝑥) + 𝑓 ′ (𝑥𝑖 )𝐾𝑖 (𝑥)]

is a polynomial of degree 2𝑛 + 1 that satisfies the above constraints.


To prove that this is the unique polynomial of degree 2𝑛 + 1 satisfying these constraints, suppose
that there is another polynomial 𝐻̃2𝑛+1 of degree 2𝑛 + 1 that satisfies them. Because 𝐻2𝑛+1 (𝑥𝑖 ) =
𝐻̃2𝑛+1 (𝑥𝑖 ) = 𝑓 (𝑥𝑖 ) for 𝑖 = 0, 1, . . . , 𝑛, the difference 𝐻2𝑛+1 − 𝐻̃2𝑛+1 has at least 𝑛 + 1 zeros. It
follows from Rolle’s Theorem that 𝐻′2𝑛+1 − 𝐻̃′2𝑛+1 has 𝑛 zeros that lie within the intervals (𝑥𝑖−1 , 𝑥𝑖 )
for 𝑖 = 1, 2, . . . , 𝑛. Furthermore, because 𝐻′2𝑛+1 (𝑥𝑖 ) = 𝐻̃′2𝑛+1 (𝑥𝑖 ) = 𝑓 ′ (𝑥𝑖 ) for 𝑖 = 0, 1, . . . , 𝑛, the
difference 𝐻′2𝑛+1 − 𝐻̃′2𝑛+1 has 𝑛 + 1 additional zeros, for a total of at least 2𝑛 + 1 zeros. However,
𝐻′2𝑛+1 − 𝐻̃′2𝑛+1 is a polynomial of degree at most 2𝑛, and the only way that such a polynomial can
have 2𝑛 + 1 distinct zeros is if it is identically zero. Therefore 𝐻2𝑛+1 − 𝐻̃2𝑛+1 is a constant, and
since this constant vanishes at 𝑥0 , we conclude that 𝐻2𝑛+1 = 𝐻̃2𝑛+1 ; the Hermite polynomial is unique.
Using a similar approach as for the Lagrange interpolating polynomial, combined with ideas
from the proof of the uniqueness of the Hermite polynomial, the following result can be proved.
Theorem Let 𝑓 be 2𝑛 + 2 times continuously differentiable on [𝑎, 𝑏], and let 𝐻2𝑛+1 denote the
Hermite polynomial of 𝑓 with interpolation points 𝑥0 , 𝑥1 , . . . , 𝑥𝑛 in [𝑎, 𝑏]. Then there exists a point
𝜉(𝑥) ∈ [𝑎, 𝑏] such that

    𝑓 (𝑥) − 𝐻2𝑛+1 (𝑥) = [𝑓 (2𝑛+2) (𝜉(𝑥))/(2𝑛 + 2)!] (𝑥 − 𝑥0 )² (𝑥 − 𝑥1 )² ⋯ (𝑥 − 𝑥𝑛 )².

The representation of the Hermite polynomial in terms of Lagrange polynomials and their
derivatives is not practical, because of the difficulty of differentiating and evaluating these polyno-
mials. Instead, one can construct the Hermite polynomial using a Newton divided-difference table,
in which each entry corresponding to two identical interpolation points is filled with the value of
𝑓 ′ (𝑥) at the common point. Then, the Hermite polynomial can be represented using the Newton
divided-difference formula.
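
As an illustration of this construction, the following Python sketch (our own, not part of the notes; the name hermite_newton and the sin example are chosen only for illustration) builds such a table, with each interpolation point repeated once and the first divided differences of identical points replaced by 𝑓 ′ (𝑥𝑖 ), and then evaluates the resulting Newton form by nested multiplication.

import numpy as np

def hermite_newton(x, fx, dfx):
    """Hermite interpolant via a Newton divided-difference table in which each
    node appears twice and first differences of identical nodes are f'(x_i)."""
    x, fx, dfx = map(np.asarray, (x, fx, dfx))
    n = len(x)
    z = np.repeat(x, 2)                      # each interpolation point twice
    q = np.zeros((2 * n, 2 * n))
    q[:, 0] = np.repeat(fx, 2)               # zeroth divided differences
    for i in range(1, 2 * n):                # first divided differences
        if z[i] == z[i - 1]:
            q[i, 1] = dfx[i // 2]            # identical points: use f'(x_i)
        else:
            q[i, 1] = (q[i, 0] - q[i - 1, 0]) / (z[i] - z[i - 1])
    for j in range(2, 2 * n):                # higher-order divided differences
        for i in range(j, 2 * n):
            q[i, j] = (q[i, j - 1] - q[i - 1, j - 1]) / (z[i] - z[i - j])
    coef = np.diag(q)                        # Newton-form coefficients

    def H(t):
        """Evaluate the Hermite interpolant (degree 2*len(x) - 1) by nested multiplication."""
        result = coef[-1]
        for k in range(2 * n - 2, -1, -1):
            result = result * (t - z[k]) + coef[k]
        return result

    return H

# Example: Hermite interpolation of sin(x) at x = 0, 1, 2, using derivative data cos(x).
xs = np.array([0.0, 1.0, 2.0])
H = hermite_newton(xs, np.sin(xs), np.cos(xs))
print(H(1.5), np.sin(1.5))                   # the two values agree closely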

Differentiation
We now discuss how polynomial interpolation can be applied to help solve a fundamental prob-
lem from calculus that frequently arises in scientific applications, the problem of computing the
derivative of a given function 𝑓 (𝑥).

Finite Difference Approximations
Recall that the derivative of 𝑓 (𝑥) at a point 𝑥0 , denoted 𝑓 ′ (𝑥0 ), is defined by

    𝑓 ′ (𝑥0 ) = lim_{ℎ→0} [𝑓 (𝑥0 + ℎ) − 𝑓 (𝑥0 )]/ℎ.
This definition suggests a method for approximating 𝑓 ′ (𝑥0 ). If we choose ℎ to be a small positive
constant, then
    𝑓 ′ (𝑥0 ) ≈ [𝑓 (𝑥0 + ℎ) − 𝑓 (𝑥0 )]/ℎ.

This approximation is called the forward difference formula.
To estimate the accuracy of this approximation, we note that if 𝑓 ′′ (𝑥) exists on [𝑥0 , 𝑥0 + ℎ],
then, by Taylor’s Theorem, 𝑓 (𝑥0 + ℎ) = 𝑓 (𝑥0 ) + 𝑓 ′ (𝑥0 )ℎ + 𝑓 ′′ (𝜉)ℎ²/2, where 𝜉 ∈ [𝑥0 , 𝑥0 + ℎ]. Solving
for 𝑓 ′ (𝑥0 ), we obtain

    𝑓 ′ (𝑥0 ) = [𝑓 (𝑥0 + ℎ) − 𝑓 (𝑥0 )]/ℎ − [𝑓 ′′ (𝜉)/2] ℎ,
so the error in the forward difference formula is 𝑂(ℎ). We say that this formula is first-order
accurate.
The forward-difference formula is called a finite difference approximation to 𝑓 ′ (𝑥0 ), because it
approximates 𝑓 ′ (𝑥) using values of 𝑓 (𝑥) at points that have a small, but finite, distance between
them, as opposed to the definition of the derivative, that takes a limit and therefore computes the
derivative using an “infinitely small” value of ℎ. The forward-difference formula, however, is just
one example of a finite difference approximation. If we replace ℎ by −ℎ in the forward-difference
formula, where ℎ is still positive, we obtain the backward-difference formula

    𝑓 ′ (𝑥0 ) ≈ [𝑓 (𝑥0 ) − 𝑓 (𝑥0 − ℎ)]/ℎ.

Like the forward-difference formula, the backward difference formula is first-order accurate.
If we average these two approximations, we obtain the centered-difference formula

    𝑓 ′ (𝑥0 ) ≈ [𝑓 (𝑥0 + ℎ) − 𝑓 (𝑥0 − ℎ)]/(2ℎ).
To determine the accuracy of this approximation, we assume that 𝑓 ′′′ (𝑥) exists on the interval
[𝑥0 − ℎ, 𝑥0 + ℎ], and then apply Taylor’s Theorem again to obtain

    𝑓 (𝑥0 + ℎ) = 𝑓 (𝑥0 ) + 𝑓 ′ (𝑥0 )ℎ + [𝑓 ′′ (𝑥0 )/2] ℎ² + [𝑓 ′′′ (𝜉+ )/6] ℎ³,
    𝑓 (𝑥0 − ℎ) = 𝑓 (𝑥0 ) − 𝑓 ′ (𝑥0 )ℎ + [𝑓 ′′ (𝑥0 )/2] ℎ² − [𝑓 ′′′ (𝜉− )/6] ℎ³,

where 𝜉+ ∈ [𝑥0 , 𝑥0 + ℎ] and 𝜉− ∈ [𝑥0 − ℎ, 𝑥0 ]. Subtracting the second equation from the first and
solving for 𝑓 ′ (𝑥0 ) yields

    𝑓 ′ (𝑥0 ) = [𝑓 (𝑥0 + ℎ) − 𝑓 (𝑥0 − ℎ)]/(2ℎ) − [(𝑓 ′′′ (𝜉+ ) + 𝑓 ′′′ (𝜉− ))/12] ℎ².
By the Intermediate Value Theorem, 𝑓 ′′′ (𝑥) must assume every value between 𝑓 ′′′ (𝜉− ) and 𝑓 ′′′ (𝜉+ )
on the interval (𝜉− , 𝜉+ ), including the average of these two values. Therefore, we can simplify this
equation to
    𝑓 ′ (𝑥0 ) = [𝑓 (𝑥0 + ℎ) − 𝑓 (𝑥0 − ℎ)]/(2ℎ) − [𝑓 ′′′ (𝜉)/6] ℎ²,
where 𝜉 ∈ [𝑥0 −ℎ, 𝑥0 +ℎ]. We conclude that the centered-difference formula is second-order accurate.
This is due to the cancellation of the terms involving 𝑓 ′′ (𝑥0 ).
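
The difference in accuracy is easy to observe numerically. The following short Python sketch (ours, not from the notes; the test function 𝑒^𝑥 and the point 𝑥0 = 1 are arbitrary choices) compares the forward and centered formulas as ℎ is halved.

import numpy as np

# Compare forward and centered differences for f(x) = e^x at x0 = 1, f'(x0) = e.
f, x0, exact = np.exp, 1.0, np.exp(1.0)
for h in [0.1, 0.05, 0.025, 0.0125]:
    fwd = (f(x0 + h) - f(x0)) / h
    ctr = (f(x0 + h) - f(x0 - h)) / (2 * h)
    print(f"h = {h:7.4f}   forward error = {abs(fwd - exact):.2e}   "
          f"centered error = {abs(ctr - exact):.2e}")
# Halving h roughly halves the forward error (first order) but divides the
# centered error by about four (second order).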
Example Consider the function

    𝑓 (𝑥) = sin²(√(𝑥² + 𝑥)/(cos 𝑥 − 𝑥)) / sin((√𝑥 − 1)/√(𝑥² + 1)).


Our goal is to compute 𝑓 ′ (0.25). Differentiating, using the Quotient Rule and the Chain Rule, we
obtain
    𝑓 ′ (𝑥) = [2 sin(√(𝑥² + 𝑥)/(cos 𝑥 − 𝑥)) cos(√(𝑥² + 𝑥)/(cos 𝑥 − 𝑥)) ((2𝑥 + 1)/(2√(𝑥² + 𝑥)(cos 𝑥 − 𝑥)) + √(𝑥² + 𝑥)(sin 𝑥 + 1)/(cos 𝑥 − 𝑥)²)] / sin((√𝑥 − 1)/√(𝑥² + 1))
             − [sin²(√(𝑥² + 𝑥)/(cos 𝑥 − 𝑥)) cos((√𝑥 − 1)/√(𝑥² + 1)) (1/(2√𝑥 √(𝑥² + 1)) − 𝑥(√𝑥 − 1)/(𝑥² + 1)^(3/2))] / sin²((√𝑥 − 1)/√(𝑥² + 1)).

Evaluating this monstrous function at 𝑥 = 0.25 yields 𝑓 ′ (0.25) = −9.066698770.


An alternative approach is to use a centered difference approximation,

    𝑓 ′ (𝑥) ≈ [𝑓 (𝑥 + ℎ) − 𝑓 (𝑥 − ℎ)]/(2ℎ).
Using this formula with 𝑥 = 0.25 and ℎ = 0.005, we obtain the approximation

    𝑓 ′ (0.25) ≈ [𝑓 (0.255) − 𝑓 (0.245)]/0.01 = −9.067464295,
which has absolute error 7.7 × 10⁻⁴. While this complicated function must be evaluated twice to
obtain this approximation, that is still much less work than using differentiation rules to compute
𝑓 ′ (𝑥), and then evaluating 𝑓 ′ (𝑥), which is much more complicated than 𝑓 (𝑥). □
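
This computation is easy to reproduce. The following Python sketch (ours; the definition of f below is our transcription of the function in this example) applies the centered difference formula with ℎ = 0.005.

import numpy as np

def f(x):
    # f(x) = sin^2( sqrt(x^2 + x) / (cos x - x) ) / sin( (sqrt(x) - 1) / sqrt(x^2 + 1) )
    return (np.sin(np.sqrt(x**2 + x) / (np.cos(x) - x)) ** 2
            / np.sin((np.sqrt(x) - 1) / np.sqrt(x**2 + 1)))

x0, h = 0.25, 0.005
approx = (f(x0 + h) - f(x0 - h)) / (2 * h)
print(approx)                        # approximately -9.067464
print(abs(approx - (-9.066698770)))  # absolute error of about 7.7e-4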

While Taylor’s Theorem can be used to derive formulas with higher-order accuracy simply by
evaluating 𝑓 (𝑥) at more points, this process can be tedious. An alternative approach is to compute
the derivative of the interpolating polynomial that fits 𝑓 (𝑥) at these points. Specifically, suppose
we want to compute the derivative at a point 𝑥0 using the data

(𝑥−𝑗 , 𝑦−𝑗 ), . . . , (𝑥−1 , 𝑦−1 ), (𝑥0 , 𝑦0 ), (𝑥1 , 𝑦1 ), . . . , (𝑥𝑘 , 𝑦𝑘 ),

where 𝑗 and 𝑘 are known nonnegative integers, 𝑥−𝑗 < 𝑥−𝑗+1 < ⋅ ⋅ ⋅ < 𝑥𝑘−1 < 𝑥𝑘 , and 𝑦𝑖 = 𝑓 (𝑥𝑖 ) for
𝑖 = −𝑗, . . . , 𝑘. Then, a finite difference formula for 𝑓 ′ (𝑥0 ) can be obtained by analytically computing
the derivatives of the Lagrange polynomials ℒ𝑛,𝑖 (𝑥), 𝑖 = −𝑗, . . . , 𝑘, for these points, where 𝑛 = 𝑗 + 𝑘,
and the values of these derivatives at 𝑥0 are the proper weights for the function values 𝑦−𝑗 , . . . , 𝑦𝑘 .
If 𝑓 (𝑥) is 𝑛 + 1 times continuously differentiable on [𝑥−𝑗 , 𝑥𝑘 ], then we obtain an approximation of
the form
    𝑓 ′ (𝑥0 ) = ∑_{𝑖=−𝑗}^{𝑘} 𝑦𝑖 ℒ′𝑛,𝑖 (𝑥0 ) + [𝑓 (𝑛+1) (𝜉)/(𝑛 + 1)!] ∏_{𝑖=−𝑗, 𝑖≠0}^{𝑘} (𝑥0 − 𝑥𝑖 ),

where 𝜉 ∈ [𝑥−𝑗 , 𝑥𝑘 ].
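
This recipe can be carried out numerically. In the following Python sketch (ours, not from the notes; the helper name fd_weights is hypothetical), each Lagrange polynomial is built from its roots, differentiated, and evaluated at 𝑥0 to produce the weights ℒ′𝑛,𝑖 (𝑥0 ).

import numpy as np
from numpy.polynomial import polynomial as P

def fd_weights(nodes, x0):
    """Weights w_i = L'_{n,i}(x0), so that f'(x0) is approximately sum_i w_i * f(nodes[i])."""
    nodes = np.asarray(nodes, dtype=float)
    weights = []
    for i, xi in enumerate(nodes):
        others = np.delete(nodes, i)
        # coefficients of L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        coeffs = P.polyfromroots(others) / np.prod(xi - others)
        weights.append(P.polyval(x0, P.polyder(coeffs)))
    return np.array(weights)

# For the nodes -h, 0, h with h = 0.1, this recovers the centered-difference
# weights -1/(2h), 0, 1/(2h):
print(fd_weights([-0.1, 0.0, 0.1], 0.0))     # approximately [-5.  0.  5.]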
Among the best-known finite difference formulas that can be derived using this approach is the
second-order-accurate three-point formula

    𝑓 ′ (𝑥0 ) = [−3𝑓 (𝑥0 ) + 4𝑓 (𝑥0 + ℎ) − 𝑓 (𝑥0 + 2ℎ)]/(2ℎ) + [𝑓 ′′′ (𝜉)/3] ℎ²,    𝜉 ∈ [𝑥0 , 𝑥0 + 2ℎ],
which is useful when there is no information available about 𝑓 (𝑥) for 𝑥 < 𝑥0 . If there is no
information available about 𝑓 (𝑥) for 𝑥 > 𝑥0 , then we can replace ℎ by −ℎ in the above formula to
obtain a second-order-accurate three-point formula that uses the values of 𝑓 (𝑥) at 𝑥0 , 𝑥0 − ℎ and
𝑥0 − 2ℎ.
Another formula is the five-point formula

    𝑓 ′ (𝑥0 ) = [𝑓 (𝑥0 − 2ℎ) − 8𝑓 (𝑥0 − ℎ) + 8𝑓 (𝑥0 + ℎ) − 𝑓 (𝑥0 + 2ℎ)]/(12ℎ) + [𝑓 (5) (𝜉)/30] ℎ⁴,    𝜉 ∈ [𝑥0 − 2ℎ, 𝑥0 + 2ℎ],
which is fourth-order accurate. The reason it is called a five-point formula, even though it uses
the value of 𝑓 (𝑥) at four points, is that it is derived from the Lagrange polynomials for the five
points 𝑥0 − 2ℎ, 𝑥0 − ℎ, 𝑥0 , 𝑥0 + ℎ, and 𝑥0 + 2ℎ. However, 𝑓 (𝑥0 ) is not used in the formula because
ℒ′4,0 (𝑥0 ) = 0, where ℒ4,0 (𝑥) is the Lagrange polynomial that is equal to one at 𝑥0 and zero at the
other four points.
If we do not have any information about 𝑓 (𝑥) for 𝑥 < 𝑥0 , then we can use the following five-point
formula that actually uses the values of 𝑓 (𝑥) at five points,

    𝑓 ′ (𝑥0 ) = [−25𝑓 (𝑥0 ) + 48𝑓 (𝑥0 + ℎ) − 36𝑓 (𝑥0 + 2ℎ) + 16𝑓 (𝑥0 + 3ℎ) − 3𝑓 (𝑥0 + 4ℎ)]/(12ℎ) + [𝑓 (5) (𝜉)/5] ℎ⁴,

where 𝜉 ∈ [𝑥0 , 𝑥0 + 4ℎ]. As before, we can replace ℎ by −ℎ to obtain a similar formula that
approximates 𝑓 ′ (𝑥0 ) using the values of 𝑓 (𝑥) at 𝑥0 , 𝑥0 − ℎ, 𝑥0 − 2ℎ, 𝑥0 − 3ℎ, and 𝑥0 − 4ℎ.
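
As a quick sanity check (ours, not part of the notes; the test function sin 𝑥 and the point 𝑥0 = 1 are arbitrary), the centered five-point formula can be applied numerically; halving ℎ should reduce the error by roughly a factor of 16.

import numpy as np

f, fp, x0 = np.sin, np.cos, 1.0
for h in [0.1, 0.05, 0.025]:
    approx = (f(x0 - 2*h) - 8*f(x0 - h) + 8*f(x0 + h) - f(x0 + 2*h)) / (12*h)
    print(f"h = {h:6.3f}   error = {abs(approx - fp(x0)):.3e}")
# Each halving of h reduces the error by roughly a factor of 16, consistent with O(h^4).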
The strategy of differentiating Lagrange polynomials to approximate derivatives can be used
to approximate higher-order derivatives. For example, the second derivative can be approximated
using a centered difference formula,
    𝑓 ′′ (𝑥0 ) ≈ [𝑓 (𝑥0 + ℎ) − 2𝑓 (𝑥0 ) + 𝑓 (𝑥0 − ℎ)]/ℎ²,
which is second-order accurate.
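
A minimal numerical check of this second-derivative formula (again ours; the test function cos 𝑥 and the point 𝑥0 = 0.5 are arbitrary) is shown below.

import numpy as np

f, x0 = np.cos, 0.5
exact = -np.cos(x0)                                    # f''(x0) for f(x) = cos(x)
e1 = abs((f(x0 + 0.1) - 2*f(x0) + f(x0 - 0.1)) / 0.1**2 - exact)
e2 = abs((f(x0 + 0.05) - 2*f(x0) + f(x0 - 0.05)) / 0.05**2 - exact)
print(e1, e2, e1 / e2)   # the ratio is close to 4, consistent with O(h^2)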
Example We will construct a formula for approximating 𝑓 ′ (𝑥) at a given point 𝑥0 by interpolating
𝑓 (𝑥) at the points 𝑥0 , 𝑥0 + ℎ, and 𝑥0 + 2ℎ using a second-degree polynomial 𝑝2 (𝑥), and then
approximating 𝑓 ′ (𝑥0 ) by 𝑝′2 (𝑥0 ). Since 𝑝2 (𝑥) should be a good approximation of 𝑓 (𝑥) near 𝑥0 ,
especially when ℎ is small, its derivative should be a good approximation to 𝑓 ′ (𝑥) near this point.
Using Lagrange interpolation, we obtain
𝑝2 (𝑥) = 𝑓 (𝑥0 )ℒ2,0 (𝑥) + 𝑓 (𝑥0 + ℎ)ℒ2,1 (𝑥) + 𝑓 (𝑥0 + 2ℎ)ℒ2,2 (𝑥),
where ℒ2,𝑗 (𝑥), 𝑗 = 0, 1, 2, are the Lagrange polynomials for the points 𝑥0 , 𝑥1 = 𝑥0 + ℎ and 𝑥2 = 𝑥0 + 2ℎ.
Recall that these polynomials satisfy
    ℒ2,𝑗 (𝑥𝑘 ) = 𝛿𝑗𝑘 = 1 if 𝑗 = 𝑘, and 0 otherwise.
Using the formula for the Lagrange polynomials,
    ℒ2,𝑗 (𝑥) = ∏_{𝑖=0, 𝑖≠𝑗}^{2} (𝑥 − 𝑥𝑖 )/(𝑥𝑗 − 𝑥𝑖 ),

we obtain
    ℒ2,0 (𝑥) = [(𝑥 − (𝑥0 + ℎ))(𝑥 − (𝑥0 + 2ℎ))]/[(𝑥0 − (𝑥0 + ℎ))(𝑥0 − (𝑥0 + 2ℎ))]
             = [𝑥² − (2𝑥0 + 3ℎ)𝑥 + (𝑥0 + ℎ)(𝑥0 + 2ℎ)]/(2ℎ²),

    ℒ2,1 (𝑥) = [(𝑥 − 𝑥0 )(𝑥 − (𝑥0 + 2ℎ))]/[(𝑥0 + ℎ − 𝑥0 )(𝑥0 + ℎ − (𝑥0 + 2ℎ))]
             = [𝑥² − (2𝑥0 + 2ℎ)𝑥 + 𝑥0 (𝑥0 + 2ℎ)]/(−ℎ²),

    ℒ2,2 (𝑥) = [(𝑥 − 𝑥0 )(𝑥 − (𝑥0 + ℎ))]/[(𝑥0 + 2ℎ − 𝑥0 )(𝑥0 + 2ℎ − (𝑥0 + ℎ))]
             = [𝑥² − (2𝑥0 + ℎ)𝑥 + 𝑥0 (𝑥0 + ℎ)]/(2ℎ²).

It follows that
    ℒ′2,0 (𝑥) = [2𝑥 − (2𝑥0 + 3ℎ)]/(2ℎ²),
    ℒ′2,1 (𝑥) = −[2𝑥 − (2𝑥0 + 2ℎ)]/ℎ²,
    ℒ′2,2 (𝑥) = [2𝑥 − (2𝑥0 + ℎ)]/(2ℎ²).
We conclude that 𝑓 ′ (𝑥0 ) ≈ 𝑝′2 (𝑥0 ), where

    𝑝′2 (𝑥0 ) = 𝑓 (𝑥0 )ℒ′2,0 (𝑥0 ) + 𝑓 (𝑥0 + ℎ)ℒ′2,1 (𝑥0 ) + 𝑓 (𝑥0 + 2ℎ)ℒ′2,2 (𝑥0 )
             = −3/(2ℎ) 𝑓 (𝑥0 ) + 2/ℎ 𝑓 (𝑥0 + ℎ) − 1/(2ℎ) 𝑓 (𝑥0 + 2ℎ)
             = [−3𝑓 (𝑥0 ) + 4𝑓 (𝑥0 + ℎ) − 𝑓 (𝑥0 + 2ℎ)]/(2ℎ).
Using Taylor’s Theorem to write 𝑓 (𝑥0 + ℎ) and 𝑓 (𝑥0 + 2ℎ) in terms of Taylor polynomials centered
at 𝑥0 , it can be shown that the error in this approximation is 𝑂(ℎ²), and that this formula is exact
when 𝑓 (𝑥) is a polynomial of degree 2 or less. □
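
The same derivation can be reproduced symbolically. The following sketch (ours, using the sympy library; the symbols f0, f1, f2 stand for 𝑓 (𝑥0 ), 𝑓 (𝑥0 + ℎ), 𝑓 (𝑥0 + 2ℎ)) constructs 𝑝2 (𝑥) from the Lagrange basis, differentiates it, and evaluates the derivative at 𝑥0 .

import sympy as sp

x, x0, h, f0, f1, f2 = sp.symbols('x x0 h f0 f1 f2')
nodes, fvals = [x0, x0 + h, x0 + 2*h], [f0, f1, f2]

p2 = 0
for i, (xi, fi) in enumerate(zip(nodes, fvals)):
    Li = sp.Integer(1)
    for j, xj in enumerate(nodes):
        if j != i:
            Li *= (x - xj) / (xi - xj)   # Lagrange basis polynomial L_{2,i}(x)
    p2 += fi * Li

print(sp.simplify(sp.diff(p2, x).subs(x, x0)))   # (-3*f0 + 4*f1 - f2)/(2*h)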
In a practical implementation of finite difference formulas, it is essential to note that roundoff
error in evaluating 𝑓 (𝑥) is bounded independently of the spacing ℎ between points at which 𝑓 (𝑥)
is evaluated. It follows that the roundoff error in the approximation of 𝑓 ′ (𝑥) actually increases
as ℎ decreases, because the errors incurred by evaluating 𝑓 (𝑥) are divided by ℎ. Therefore, one
must choose ℎ sufficiently small so that the finite difference formula can produce an accurate
approximation, and sufficiently large so that this approximation is not too contaminated by roundoff
error.
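
The tradeoff described above can be seen directly in double-precision arithmetic. In the following sketch (ours; the test function sin 𝑥 and the range of ℎ are arbitrary), the error of the centered difference formula first decreases and then increases as ℎ shrinks.

import numpy as np

f, fp, x0 = np.sin, np.cos, 1.0
for h in [1e-1, 1e-3, 1e-5, 1e-7, 1e-9, 1e-11]:
    err = abs((f(x0 + h) - f(x0 - h)) / (2 * h) - fp(x0))
    print(f"h = {h:.0e}   error = {err:.2e}")
# The error first decreases like h^2, bottoms out near h of about 1e-5 or 1e-6,
# and then grows again as cancellation in the numerator is amplified by the 1/(2h).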
