Numerical Integration
(Quadrature)
Sachin Shanbhag
Dept. Scientific Computing
(based on material borrowed from Dennis Duke, Samir Al-Amer,
David Kofke, Holistic Numerical Methods Institute)
Numerical Integration
Why do we need it?
- many integrals cannot be evaluated analytically
- even if you can, you might need to check your answer
- even if you can, numerical evaluation of the answer can be bothersome
Examples:
$$\int_0^\infty \frac{x}{\cosh x}\,dx = 2\sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)^2}$$
(an example of an integral whose analytical answer needs checking)
$$\int_a^b e^{-x^2}\,dx$$
(the error function: no elementary antiderivative)
Possible Issues
- the integrand is some sort of table of numbers
  - regularly spaced
  - irregularly spaced
  - contaminated with noise (experimental data)
- the integrand is computable everywhere in the range of integration, but there may be
  - an infinite range of integration
  - local discontinuities
Considerations
- time to compute the integral
- estimate of the error due to truncation, round-off, and noise in tabulated values
Integral as Riemann sum
In the differential limit, an integral is equivalent to a summation operation:
$$\int_a^b f(x)\,dx = \lim_{n\to\infty}\sum_{i=0}^{n} f(x_i)\,\Delta x \approx \sum_{i=0}^{N-1} f(x_i)\,\Delta x$$
Approximate methods for determining integrals are mostly based on the idea of the area between the integrand and the axis.
Let's try a simple example, using left-endpoint rectangles:

intervals      dx          error
    2       0.785398    -0.340759
    4       0.392699    -0.183465
    8       0.196350    -0.094960
   16       0.098175    -0.048284
   32       0.049087    -0.024343
   64       0.024544    -0.012222
  128       0.012272    -0.006123
  256       0.006136    -0.003065
  512       0.003068    -0.001533
 1024       0.001534    -0.000767

Analytically,
$$\int_0^{\pi/2}\cos x\,dx = \sin x\,\Big|_0^{\pi/2} = 1$$
Note that the error is decreasing by a factor of 2, just like our discretization interval dx.
Question: Why is the error = I(exact) - I(calc) negative?
Instead of having the top of the rectangle hit the left (or right) edge, we could also have it hit the function at the midpoint of each interval:
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{N-1} f\!\left(\frac{x_i + x_{i+1}}{2}\right)\Delta x$$

intervals      dx            error
    2       0.785398    -0.026172153
    4       0.392699    -0.006454543
    8       0.196350    -0.001608189
   16       0.098175    -0.000401708
   32       0.049087    -0.000100406
   64       0.024544    -0.000025100
  128       0.012272    -0.000006275
  256       0.006136    -0.000001569
  512       0.003068    -0.000000392
 1024       0.001534    -0.000000098

Now the error is falling by a factor of 4 with each halving of the interval dx.
Note that the lines at the top of the
rectangles can have any slope
whatsoever and we will always get
the same answer.
Question: Why is the error smaller?
Answer:
One reason is that in the mid-point rule, the maximum distance over which we
extrapolate our knowledge of f(x) is halved.
Different integration schemes result from what we think the function is doing
between evaluation points.
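The two rectangle rules above can be compared directly. A minimal Python sketch (the function names are mine), reproducing the n = 2 rows of the two tables for $\int_0^{\pi/2}\cos x\,dx$:

```python
import math

def left_riemann(f, a, b, n):
    # Left-endpoint rectangle rule with n equal intervals
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def midpoint(f, a, b, n):
    # Midpoint rectangle rule with n equal intervals
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

exact = 1.0  # integral of cos(x) over [0, pi/2]
for n in (2, 4, 8):
    e_left = exact - left_riemann(math.cos, 0.0, math.pi / 2, n)
    e_mid = exact - midpoint(math.cos, 0.0, math.pi / 2, n)
    print(n, e_left, e_mid)  # left error halves, midpoint error quarters
```

Doubling n roughly halves the left-endpoint error but quarters the midpoint error, matching the tables.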
Link between interpolation and numerical integration
Orientation
Newton-Cotes Methods
Use interpolating polynomials. The trapezoid rule, Simpson's 1/3 and 3/8 rules, and Bode's rule are the special cases in which 1st, 2nd, 3rd, and 4th order polynomials are used, respectively.
Romberg Integration (Richardson Extrapolation)
use knowledge of error estimates to build a recursive higher order
scheme
Gauss Quadrature
Like Newton-Cotes, but instead of a regular grid, choose a set of nodes that gives higher order accuracy
Monte Carlo Integration
Use randomly selected grid points. Useful for higher dimensional
integrals (d>4)
Newton-Cotes Methods
In Newton-Cotes methods, the function is approximated by a polynomial of order n. To do this, we use ideas learnt from interpolation. Computing the integral of a polynomial is easy.
We approximate the function f(x) in the interval [a,b] by interpolation:
$$f(x) \approx a_0 + a_1 x + \dots + a_n x^n$$
so that
$$\int_a^b f(x)\,dx \approx \int_a^b \left(a_0 + a_1 x + \dots + a_n x^n\right)dx = a_0(b-a) + a_1\frac{b^2-a^2}{2} + \dots + a_n\frac{b^{n+1}-a^{n+1}}{n+1}$$
b
Newton-Cotes Methods
Trapezoid Method (a first order polynomial is used):
$$I = \int_a^b f(x)\,dx \approx \int_a^b (a_0 + a_1 x)\,dx$$
The linear interpolant through (a, f(a)) and (b, f(b)) is
$$f(a) + \frac{f(b)-f(a)}{b-a}(x-a)$$
so
$$I \approx \int_a^b \left[f(a) + \frac{f(b)-f(a)}{b-a}(x-a)\right]dx
= \left[f(a) - a\,\frac{f(b)-f(a)}{b-a}\right]x\,\Bigg|_a^b + \frac{f(b)-f(a)}{b-a}\,\frac{x^2}{2}\Bigg|_a^b
= (b-a)\,\frac{f(a)+f(b)}{2}$$
Multi-step Trapezoid Method
If the interval is divided into n segments (not necessarily equal),
$$a = x_0 \le x_1 \le x_2 \le \dots \le x_n = b$$
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n-1} \frac{1}{2}\,(x_{i+1}-x_i)\big(f(x_{i+1})+f(x_i)\big)$$
Special case (equally spaced base points): $x_{i+1} - x_i = h$ for all i, and
$$\int_a^b f(x)\,dx \approx h\left[\frac{1}{2}\big(f(x_0)+f(x_n)\big) + \sum_{i=1}^{n-1} f(x_i)\right]$$
Multi-step Trapezoid Method
For $\int_0^{\pi/2}\cos x\,dx = \sin x\,\big|_0^{\pi/2} = 1$:

intervals      dx           error
    2       0.78539816   0.05194055
    4       0.39269908   0.01288420
    8       0.19634954   0.00321483
   16       0.09817477   0.00080332
   32       0.04908739   0.00020081
   64       0.02454369   0.00005020
  128       0.01227185   0.00001255
  256       0.00613592   0.00000314
  512       0.00306796   0.00000078
 1024       0.00153398   0.00000020

Now the error is again decreasing by a factor of 4, i.e. like dx². In fact, it can be shown that
$$\text{Error} \le \frac{b-a}{12}\,h^2 \max_{x\in[a,b]} |f''(x)|$$
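The composite trapezoid rule is short to implement; here is a sketch (the function name is mine) that verifies the factor-of-4 error decay on the same test integral:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n equal segments
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

exact = 1.0  # integral of cos(x) over [0, pi/2]
prev = None
for n in (2, 4, 8, 16, 32):
    err = exact - trapezoid(math.cos, 0.0, math.pi / 2, n)
    if prev is not None:
        print(n, err, prev / err)  # ratio approaches 4, confirming O(h^2)
    prev = err
```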
Newton-Cotes Methods
Simpson's 1/3 Rule (a second order polynomial is used):
$$\int_a^b f(x)\,dx \approx \int_a^b \left(a_0 + a_1 x + a_2 x^2\right)dx, \qquad h=(b-a)/2$$
Simpson's 3/8 Rule (a third order polynomial is used):
$$\int_a^b f(x)\,dx \approx \int_a^b \left(a_0 + a_1 x + a_2 x^2 + a_3 x^3\right)dx, \qquad h=(b-a)/3$$
Newton-Cotes Methods
These are called closed formulae because we use function evaluations at the end-points of the interval. There are open formulae which don't evaluate f(a) and f(b), but we won't discuss them here.
(figure: wikipedia.org)
Romberg Integration
The trapezoid formula with an interval h gives an error of order O(h²). Can we combine two trapezoid estimates with intervals 2h and h to get a better estimate?
For a multistep trapezoidal rule, the error is
$$E_t = \frac{(b-a)^3}{12\,n^2}\,\frac{\sum_{i=1}^{n} f''(\xi_i)}{n}, \qquad \xi_i \in [\,a+(i-1)h,\ a+ih\,]$$
Think of $\frac{1}{n}\sum_{i=1}^{n} f''(\xi_i)$ as an approximate average value of f''(x) in [a,b]. Then
$$E_t \approx \frac{C}{n^2}$$
Romberg Integration
How good is this approximation? Consider the vertical distance covered by a rocket between t = 8 and t = 30 seconds:
$$x = \int_8^{30}\left[2000\ln\!\left(\frac{140000}{140000-2100t}\right) - 9.8\,t\right]dt$$

  n    Value     Et
  1    11868    807
  2    11266    205
  3    11153    91.4
  4    11113    51.5
  5    11094    33.0
  6    11084    22.9
  7    11078    16.8
  8    11074    12.9

Exact value: x = 11061 meters.
Romberg Integration
The true error gets approximately quartered as the number of segments is doubled. This information is used to get a better approximation of the integral, and is the basis of Romberg integration (or Richardson extrapolation):
$$E_t \approx \frac{C}{n^2}$$
where C is approximately constant. If $I_{true}$ = true value and $I_n$ = approximate value of the integral with n segments,
$$I_{true} = I_n + E_t(n)$$
$$E_t(n) \approx C/n^2 \approx I_{true} - I_n$$
$$E_t(2n) \approx C/(4n^2) \approx I_{true} - I_{2n}$$
Therefore, eliminating $C/n^2$ between these two equations,
$$I_{true} \approx I_{true,est} = I_{2n} + \frac{I_{2n} - I_n}{3}$$
Note: what we calculate is still an approximation for $I_{true}$.
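A single Richardson step is easy to check numerically. A sketch (function name mine) on $\int_0^{\pi/2}\cos x\,dx$, combining trapezoid estimates with n = 4 and 2n = 8 segments:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n equal segments
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

f, a, b = math.cos, 0.0, math.pi / 2
I_n = trapezoid(f, a, b, 4)
I_2n = trapezoid(f, a, b, 8)
I_est = I_2n + (I_2n - I_n) / 3.0  # Richardson extrapolation
print(I_n, I_2n, I_est)  # I_est is far closer to the exact value 1
```

The two trapezoid estimates have errors of about 1e-2 and 3e-3; the extrapolated value is accurate to roughly 1e-5.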
Example
The vertical distance covered by a rocket from t = 8 to t = 30 seconds is given by
$$x = \int_8^{30}\left[2000\ln\!\left(\frac{140000}{140000-2100t}\right) - 9.8\,t\right]dt$$
1. Use Richardson's rule to find the distance covered (use the table for the multistep trapezoidal rule).
2. Find the true error, Et, for part (1).
Exact value: 11061 meters.
  n    Value     Et     RelErr (%)
  1    11868    807      7.296
  2    11266    205      1.854
  3    11153    91.4     0.8265
  4    11113    51.5     0.4655
  5    11094    33.0     0.2981
  6    11084    22.9     0.2070
  7    11078    16.8     0.1521
  8    11074    12.9     0.1165

Multistep trapezoidal rule
Solution
$I_2 = 11266\ \mathrm{m}$, $I_4 = 11113\ \mathrm{m}$.
Using Richardson's extrapolation formula for the trapezoidal rule, choosing n = 2:
$$I_{true} \approx I_{2n} + \frac{I_{2n}-I_n}{3} = 11113 + \frac{11113-11266}{3} = 11062\ \mathrm{m}\ \ (I_{true,est})$$
$$E_t = I_{exact} - I_{true,est} = 11061 - 11062 = -1\ \mathrm{m}$$
$$\epsilon_t = \frac{11061 - 11062}{11061}\times 100 = -0.009\%$$
Solution
$$x = \int_8^{30}\left[2000\ln\!\left(\frac{140000}{140000-2100t}\right) - 9.8\,t\right]dt$$

  n   Trapezoidal   et (%) for      Richardson's    et (%) for
      Rule          Trapezoidal     Extrapolation   Richardson's
  1     11868        7.296              --              --
  2     11266        1.854            11065           0.03616
  4     11113        0.4655           11062           0.009041
  8     11074        0.1165           11061           0.0000
Usually much better estimates
Romberg Integration: Successive Refinement
A general expression for Romberg integration can be written as
$$I_{2n}^{(k)} = \frac{4^{k-1}\,I_{2n}^{(k-1)} - I_n^{(k-1)}}{4^{k-1}-1}, \qquad k \ge 2$$
The index k represents the order of extrapolation. $I_n^{(1)}$ represents the values obtained from the regular trapezoidal rule with n intervals. k = 2 represents values obtained by treating the true error as O(h²). $I_n^{(k)}$ has an error of the order $1/n^{2k}$.
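The recursion above can be organized as a triangular table built from trapezoid estimates with 1, 2, 4, ... segments. A sketch (function names mine):

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n equal segments
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def romberg(f, a, b, levels):
    # R[i][0]: trapezoid with 2**i segments; later columns extrapolate:
    # R[i][k] = (4**k * R[i][k-1] - R[i-1][k-1]) / (4**k - 1)
    R = [[trapezoid(f, a, b, 2 ** i)] for i in range(levels)]
    for i in range(1, levels):
        for k in range(1, i + 1):
            R[i].append((4 ** k * R[i][k - 1] - R[i - 1][k - 1]) / (4 ** k - 1))
    return R[-1][-1]

print(romberg(math.cos, 0.0, math.pi / 2, 5))  # near machine-accurate with only 16 segments
```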
Romberg Integration: Successive Iteration
For our particular example:

 segments   Trapezoid   First Order   Second Order   Third Order
            (k=1)       (k=2)         (k=3)          (k=4)
    1        11868
    2        11266       11065
    4        11113       11062         11062
    8        11074       11061         11061          11061
Questions from last class:
1. What is the error in Romberg integration?
$$E_t \approx \frac{C_1}{n^2} + \frac{C_2}{n^4} + \frac{C_3}{n^6} + \dots$$
The first extrapolation,
$$I_{true,est} = I_{2n} + \frac{I_{2n}-I_n}{3},$$
has an error of order $O(1/n^4)$, which is identical to Simpson's rule. In fact, this is how Numerical Recipes (Press et al.) implements Simpson's rule.
Successive iterations,
$$I_{2n}^{(k)} = \frac{4^{k-1}\,I_{2n}^{(k-1)} - I_n^{(k-1)}}{4^{k-1}-1}, \qquad k \ge 2,$$
have an error of the order $1/n^{2k}$.
Questions from last class:
2. Is Romberg better than Simpson's?
Successive iterations,
$$I_{2n}^{(k)} = \frac{4^{k-1}\,I_{2n}^{(k-1)} - I_n^{(k-1)}}{4^{k-1}-1}, \qquad k \ge 2,$$
have an error of the order $1/n^{2k}$. So usually, yes! To evaluate an integral to the same degree of accuracy, you need fewer function evaluations with Romberg.
Numerical Recipes example:
$$\int_0^2 x^4 \log\!\left(x+\sqrt{x^2+1}\right)dx$$
Simpson's rule makes 8 times as many function calls.
Romberg Integration
Questions:
1. Do I have to use $I_n$ and $I_{2n}$?
2. Is this true only for the trapezoidal rule?
No! But you have to derive new relationships in lieu of
$$I_{2n}^{(k)} = \frac{4^{k-1}\,I_{2n}^{(k-1)} - I_n^{(k-1)}}{4^{k-1}-1}, \qquad k \ge 2$$
Note that doing so may destroy the recursive structure used in the expression above to minimize function calls.
Gauss Quadrature
Motivation: the multistep trapezoid method,
$$\int_a^b f(x)\,dx = h\left[\sum_{i=1}^{n-1} f(x_i) + \frac{1}{2}\big(f(x_0)+f(x_n)\big)\right],$$
can be expressed as
$$\int_a^b f(x)\,dx = \sum_{i=0}^{n} c_i\,f(x_i), \qquad
c_i = \begin{cases} h & i = 1,2,\dots,n-1 \\ 0.5\,h & i = 0 \text{ and } n \end{cases}$$
Gauss Quadrature
$$\int_a^b f(x)\,dx = \sum_i c_i\,f(x_i)$$
$c_i$: weights; $x_i$: nodes.
Problem: how do we select $c_i$ and $x_i$ so that the formula gives a better (higher order) approximation of the integral?
Approximate function with Polynomial
$$\int_a^b f(x)\,dx \approx \int_a^b P_n(x)\,dx$$
where $P_n(x)$ is a polynomial that interpolates f(x) at the nodes $x_0, x_1, \dots, x_n$:
$$\int_a^b f(x)\,dx \approx \int_a^b P_n(x)\,dx = \int_a^b \left[\sum_{i=0}^{n} l_i(x)\,f(x_i)\right]dx$$
$$\Rightarrow\ \int_a^b f(x)\,dx \approx \sum_{i=0}^{n} c_i\,f(x_i), \qquad c_i = \int_a^b l_i(x)\,dx$$
If the points $x_i$ are chosen on a uniform grid, this is exactly Newton-Cotes.
Newton-Cotes: for a uniform grid $\{x_i\}$, $P_n(x)$ is exact if f(x) is a polynomial of degree $\le n$.
Gaussian quadrature: choose the n+1 grid points $\{x_i\}$ so that the approximation is exact if f(x) is a polynomial of degree $\le 2n+1$.
How do we get nodes and weights?
Example: can we select nodes and weights so that (n+1) = 2 nodes allow us to write a formula that is exact for polynomials of degree (2n+1) = 3?
$$\int_{-1}^{1} f(x)\,dx = c_0\,f(x_0) + c_1\,f(x_1)$$
Brute force: set up equations for all polynomials of degree 0 through 2n+1 and solve for $c_i$ and $x_i$:
$$f(x)=1:\quad c_0 + c_1 = \int_{-1}^1 1\,dx = 2$$
$$f(x)=x:\quad c_0 x_0 + c_1 x_1 = \int_{-1}^1 x\,dx = 0$$
$$f(x)=x^2:\quad c_0 x_0^2 + c_1 x_1^2 = \int_{-1}^1 x^2\,dx = 2/3$$
$$f(x)=x^3:\quad c_0 x_0^3 + c_1 x_1^3 = \int_{-1}^1 x^3\,dx = 0$$
Solving simultaneously, we get
$$c_0 = c_1 = 1, \qquad x_0 = -1/\sqrt{3},\quad x_1 = 1/\sqrt{3}$$
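These nodes and weights are easy to verify numerically. A sketch (function name mine) checking that the 2-point rule integrates monomials up to degree 3 exactly on [-1, 1], but not degree 4:

```python
import math

# 2-point Gauss-Legendre nodes and weights on [-1, 1]
x0, x1 = -1 / math.sqrt(3), 1 / math.sqrt(3)
c0, c1 = 1.0, 1.0

def gauss2(f):
    return c0 * f(x0) + c1 * f(x1)

# Exact for polynomials up to degree 3 (compare with analytic integrals over [-1, 1])
for k, exact in [(0, 2.0), (1, 0.0), (2, 2 / 3), (3, 0.0)]:
    assert abs(gauss2(lambda x: x ** k) - exact) < 1e-14

# Degree 4 is no longer exact: rule gives 2/9, true value is 2/5
print(gauss2(lambda x: x ** 4), 2 / 5)
```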
Nodes and weights for larger n: [table of nodes $x_i$ and weights $c_i$, wikipedia.org]
What if my limits are not [-1,1]?
For a range of integration other than [-1,1], use a change of variables:
$$\int_a^b f(y)\,dy = \frac{b-a}{2}\int_{-1}^{1} f\!\left(\frac{b-a}{2}\,x + \frac{a+b}{2}\right)dx
= \frac{b-a}{2}\sum_{i=1}^{n} c_i\, f\!\left(\frac{b-a}{2}\,x_i + \frac{a+b}{2}\right)$$
Example
$$\int_0^1 e^{-x^2}\,dx = \frac{1}{2}\int_{-1}^{1} e^{-(0.5t+0.5)^2}\,dt
\approx \frac{1}{2}\left[e^{-\left(-0.5\frac{1}{\sqrt3}+0.5\right)^2} + e^{-\left(0.5\frac{1}{\sqrt3}+0.5\right)^2}\right] \quad \text{(2 points)}$$
Advantages/Disadvantages
1. For functions that are smooth or approximately polynomial, it beats Newton-Cotes in accuracy. For example, for
$$\mathrm{erf}(1) = \frac{2}{\sqrt{\pi}}\int_0^1 e^{-x^2}\,dx,$$
with n = 3 we get 5 correct significant places.
2. It is not easy to get error bounds (we need to know the derivative $f^{(2n+2)}$).
3. Unlike Romberg integration, we cannot successively refine (Gauss-Kronrod rules try to overcome that).
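The change of variables and the erf(1) example can be checked with library-supplied nodes and weights; `numpy.polynomial.legendre.leggauss` returns the Gauss-Legendre nodes and weights on [-1, 1]. A sketch (the wrapper function name is mine):

```python
import math
import numpy as np

def gauss_legendre(f, a, b, n):
    # n-point Gauss-Legendre on [a, b] via the change of variables to [-1, 1]
    x, c = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    y = 0.5 * (b - a) * x + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(c * f(y))

approx = (2 / math.sqrt(math.pi)) * gauss_legendre(lambda x: np.exp(-x ** 2), 0.0, 1.0, 3)
print(approx, math.erf(1.0))  # agree to about 5 significant figures with only 3 nodes
```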
Gauss Quadrature: Generalization
What we just looked at was a special case of
$$\int_a^b w(x)\,f(x)\,dx = \sum_{i=1}^{n} c_i\,f(x_i)$$
with w(x) = 1. This is called Gauss-Legendre.
There are other forms of Gauss quadrature (not only Gauss-Legendre) which are useful when:
1. there are discontinuities,
2. the range of integration is not finite,
3. the weight w(x) can help the function look more polynomial,
4. etc.
Generalization
The fundamental theorem of Gaussian quadrature states that the optimal nodes $x_i$ of the n-point Gaussian quadrature formulas are precisely the roots of the orthogonal polynomial for the same interval and weighting function:
$$\int_a^b w(x)\,f(x)\,dx = \sum_{i=1}^{n} c_i\,f(x_i)$$
[table of intervals, weight functions w(x), and the corresponding orthogonal polynomials: wikipedia]
Gauss-Legendre
All we do is look for the zeros of $P_n(x)$ in [-1,1]. These are our $x_i$'s. The $c_i$'s can be obtained from
$$c_i = \frac{2}{\left(1-x_i^2\right)\left[P_n'(x_i)\right]^2}$$
Generalization
In practice,
1. Gauss-Legendre is the most widely used Gauss quadrature formula.
2. We look at the limits and the weighting function w(x) for the integral we want to evaluate and decide what quadrature formula might be best.
3. We don't calculate the nodes and weights ourselves. Instead, we look them up for a given n, and simply carry out the weighted sum.
http://www.efunda.com/math/num_integration/num_int_gauss.cfm
4. Note that this may require a change of variables.
Monte Carlo Integration
(Adapting notes from David Kofke's Molecular Simulation class.)
One-Dimensional Integrals
Methodical approaches (trapezoid rule, Simpson's rule, Gauss quadrature) sum the areas of shapes approximating the shape of the curve f(x). Evaluating the general integral $I = \int_a^b f(x)\,dx$ with n uniformly separated points, the quadrature formula is
$$I \approx \Delta x \sum_{i=1}^{n} f(x_i) = \frac{b-a}{n}\sum_{i=1}^{n} f(x_i)$$
Monte Carlo Integration
Stochastic approach: same quadrature formula, different selection of points,
$$I \approx \frac{b-a}{n}\sum_{i=1}^{n} f(x_i)$$
with the n points selected from a uniform distribution $\rho(x)$.
http://www.eng.buffalo.edu/~kofke/ce530/Applets/applets.html
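The stochastic version of the quadrature formula is only a few lines. A sketch (function name mine), applied to the running example $\int_0^{\pi/2}\cos x\,dx$:

```python
import math
import random

def mc_integrate(f, a, b, n, seed=0):
    # Monte Carlo estimate: (b - a) times the average of f at n uniform random points
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

est = mc_integrate(math.cos, 0.0, math.pi / 2, 100000)
print(est)  # close to 1, with statistical error ~ n**(-1/2)
```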
Random Number Generation
Random number generators
- subroutines that provide a new random deviate with each call
- basic generators give a value on (0,1) with uniform probability
- use a deterministic algorithm (of course), usually involving multiplication and truncation of the leading bits of a number
$$X_{n+1} = (aX_n + c) \bmod m \qquad \text{(linear congruential sequence)}$$
They return a set of numbers that meets many statistical measures of randomness:
- the histogram is uniform
- no systematic correlation of deviates (plot of successive deviates $(X_n, X_{n+1})$)
- no idea what the next value will be from knowledge of the present value (without knowing the generation algorithm)
- but eventually, the series must end up repeating
There are some famous failures, so be careful to use a good quality generator.
Random Number Generation
RANDU: a linear congruential sequence developed in the 1960s at IBM. Not so random!
http://www.purinchu.net/wp/2009/02/06/the-randu-pseudo-random-number-generator/
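A linear congruential generator is a handful of lines; a minimal sketch (class name mine), using the constants from Numerical Recipes' quick-and-dirty generator (RANDU itself used a = 65539, c = 0, m = 2³¹):

```python
class LCG:
    """Minimal linear congruential generator: X_{n+1} = (a X_n + c) mod m."""

    def __init__(self, seed=1, a=1664525, c=1013904223, m=2 ** 32):
        self.x, self.a, self.c, self.m = seed, a, c, m

    def next(self):
        # Deterministic update; the same seed always gives the same sequence
        self.x = (self.a * self.x + self.c) % self.m
        return self.x / self.m  # uniform deviate on [0, 1)

gen = LCG(seed=42)
print([round(gen.next(), 4) for _ in range(5)])
```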
Errors in Random vs. Methodical Sampling
Comparison of errors (for a methodical rule with error $\propto \Delta x^2$):
- methodical approach: $\Delta I \sim \Delta x^2 \sim n^{-2}$
- Monte Carlo integration: $\Delta I \sim n^{-1/2}$
The MC error vanishes much more slowly for increasing n. For one-dimensional integrals, MC offers no advantage.
This conclusion changes as the dimension d of the integral increases. With $\Delta x = L/n^{1/d}$:
- methodical approach: $\Delta I \sim n^{-2/d}$
- MC integration: $\Delta I \sim n^{-1/2}$, independent of dimension!
MC wins at about d = 4.
[figure: d = 2, 36 points, $36^{1/2}$ = 6 in each row]
Shape of High-Dimensional Regions
Two (and higher) dimensional shapes can be complex. How do we construct and weight points in a grid that covers the region R?
- hard to formulate a methodical algorithm for a complex boundary
- usually we do not have an analytic expression for the position of the boundary
- the complexity of the shape can increase unimaginably as the dimension of the integral grows
Example: mean-square distance from the origin,
$$\langle r^2 \rangle = \frac{\iint_R (x^2+y^2)\,dx\,dy}{\iint_R dx\,dy}$$
High-Dimensional Integrals
Sample integral from statistical physics:
$$\langle U \rangle = \frac{1}{Z_N}\,\frac{1}{N!}\int dr^N\, U(r^N)\, e^{-\beta U(r^N)}$$
a 3N-dimensional integral, where N is the number of particles. N = 100 is modest (a course project); in 3D this is a 300-dimensional integral. Say 10 grid points in each dimension (very coarse):
- # function evaluations: 10^300 (assume 1 flop each)
- IBM BlueGene/L-system: 300 Tflop
- Total time: 10^300/10^15 ~ 10^285 s = 10^277 years
- Age of the universe: ~10^14
- # atoms on earth: ~10^50
High-Dimensional Integrals
But we routinely compute such properties using MC!
Integrate Over a Simple Shape? 1.
Modify the integrand to cast the integral into a simple shaped region: define a function indicating whether we are inside or outside R,
$$s(x,y) = \begin{cases} 1 & \text{inside } R \\ 0 & \text{outside } R \end{cases}$$
$$\langle r^2 \rangle = \frac{\int_{-0.5}^{+0.5} dx \int_{-0.5}^{+0.5} dy\,(x^2+y^2)\,s(x,y)}{\int_{-0.5}^{+0.5} dx \int_{-0.5}^{+0.5} dy\,s(x,y)}$$
Difficult problems remain:
- the grid must be fine enough to resolve the shape
- many points lie outside the region of interest
- too many quadrature points for our high-dimensional integrals (see applet again)
http://www.eng.buffalo.edu/~kofke/ce530/Applets/applets.html
Integrate Over a Simple Shape? 2.
Statistical-mechanics integrals typically have significant contributions from minuscule regions of the integration space:
$$\langle U \rangle = \frac{1}{Z_N}\,\frac{1}{N!}\int dr^N\, U(r^N)\, e^{-\beta U(r^N)}$$
Contributions come only when no spheres overlap $(e^{-\beta U} \ne 0)$; e.g., for 100 spheres at freezing, that fraction of the space is 10^-260.
Evaluation of the integral is possible only if it is restricted to the region important to the integral, so we must contend with the complex shape of that region. MC methods are highly suited to this importance sampling.
Importance Sampling
Put more quadrature points in regions where the integral receives its greatest contributions. Return to the 1-dimensional example
$$I = \int_0^1 3x^2\,dx, \qquad f(x) = 3x^2$$
Most of the contribution comes from the region near x = 1, so choose quadrature points not uniformly, but according to a distribution $\rho(x)$; a linear form is one possibility:
$$\rho(x) = 2x$$
How do we revise the integral to remove the bias?
The Importance-Sampled Integral
Consider a rectangle-rule quadrature with unevenly spaced abscissas:
$$I \approx \sum_{i=1}^{n} f(x_i)\,\Delta x_i$$
The spacing between points is the reciprocal of the local number of points per unit length:
$$\Delta x_i = \frac{b-a}{n}\,\frac{1}{\rho(x_i)}$$
Greater ρ(x) means more points and smaller spacing. The importance-sampled rectangle rule is then
$$I \approx \frac{b-a}{n}\sum_{i=1}^{n} \frac{f(x_i)}{\rho(x_i)}$$
The same formula holds for MC sampling, choosing the points $x_i$ according to $\rho(x)$.
The Importance-Sampled Integral
The error in MC is related to the variance:
$$\sigma^2 \propto \langle f^2\rangle - \langle f\rangle^2$$
We can't control the $n^{-1/2}$ dependence, but if f = constant, the numerator, and hence the error, vanish. With importance sampling,
$$\sigma^2 \approx \frac{\left\langle \left(\frac{f}{\rho}\right)^2\right\rangle - \left\langle \frac{f}{\rho}\right\rangle^2}{n}$$
Choose ρ to make f/ρ approximately constant; then we can make the error go to zero even if f is not constant.
Generating Nonuniform Random Deviates
Probability theory says: given a probability distribution u(z), if x is a function x(z), then the distribution ρ(x) obeys
$$\rho(x) = u(z)\left|\frac{dz}{dx}\right|$$
Prescription for ρ(x):
- solve this equation for x(z)
- generate z from the uniform random generator
- compute x(z)
Example: we want $\rho(x) = ax$ on x = (0,1). With u(z) = 1,
$$z = \tfrac{1}{2}a x^2 + c = x^2$$
(a = 2 and c = 0 follow from the boundary conditions), so $x = z^{1/2}$: taking the square root of a uniform deviate gives linearly distributed values.
Generating ρ(x) requires knowledge of $\int \rho(x)\,dx$.
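This inverse-transform prescription can be checked by sampling. A sketch (function name mine) for ρ(x) = 2x on (0, 1), whose mean is $\int_0^1 2x\cdot x\,dx = 2/3$:

```python
import random

rng = random.Random(0)

def linear_deviate():
    # rho(x) = 2x on (0, 1): solving rho(x) = u(z) dz/dx with u = 1 gives z = x**2,
    # so x = sqrt(z) for z uniform on (0, 1)
    return rng.random() ** 0.5

samples = [linear_deviate() for _ in range(100000)]
print(sum(samples) / len(samples))  # sample mean should approach 2/3
```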
Generating Nonuniform Random Deviates
Example: generate x from linearly distributed random numbers between [a,b). If ρ(x) is normalized, then
$$\rho(x) = \frac{2x}{b^2 - a^2}$$
If we have u(z), a uniform random number on [0,1),
$$\rho(x) = \frac{2x}{b^2-a^2} = 1\cdot\frac{dz}{dx}$$
$$\int_a^x \frac{2x'\,dx'}{b^2-a^2} = \int_0^z dz' \quad\Rightarrow\quad x = \sqrt{a^2 + (b^2-a^2)\,z}$$
Choosing a Good Weighting Function
MC importance-sampling quadrature formula:
$$I \approx \frac{1}{n}\sum_{i=1}^{n}\frac{f(x_i)}{\rho(x_i)}$$
We do not want ρ(x) to be too much smaller or too much larger than f(x):
- too small leads to a significant contribution from a poorly sampled region
- too large means that too much sampling is done in a region that is not (now) contributing much
[figure: f(x) = 3x² compared with ρ(x) = 2x and ρ(x) = 3x⁴]
Variance in Importance Sampling Integration
Choose ρ to minimize the variance in the average. For $f(x) = 3x^2$:
$$\sigma_I^2 = \frac{1}{n}\left[\int_0^1 \left(\frac{f(x)}{\rho(x)}\right)^2 \rho(x)\,dx - \left(\int_0^1 \frac{f(x)}{\rho(x)}\,\rho(x)\,dx\right)^2\right]$$

  ρ(x)    σ_I²     ΔI (n = 100)   ΔI (n = 1000)
   1      4/(5n)      0.09            0.03
   2x     1/(8n)      0.04            0.01
   4x³    1/(8n)      0.04            0.01

The smallest variance in the average corresponds to ρ(x) = c f(x) = 3x², but this is not a viable choice: the constant here is selected to normalize ρ(x), and if we can normalize ρ(x) we can evaluate $\int \rho(x)\,dx$, which is equivalent to solving the desired integral of f(x).
http://www.eng.buffalo.edu/~kofke/ce530/Applets/applets.html
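The variance reduction for $I = \int_0^1 3x^2\,dx = 1$ can be demonstrated directly. A sketch (function names mine) comparing plain MC (ρ = 1) with importance sampling under ρ(x) = 2x, where points are drawn as x = √z and each sample contributes f(x)/ρ(x) = 3x/2:

```python
import random

def mc_plain(n, rng):
    # rho(x) = 1: I ≈ (1/n) sum of f(x_i), with f(x) = 3x^2
    return sum(3.0 * rng.random() ** 2 for _ in range(n)) / n

def mc_importance(n, rng):
    # rho(x) = 2x: sample x = sqrt(z), average the weights f(x)/rho(x) = 3x/2
    return sum(1.5 * rng.random() ** 0.5 for _ in range(n)) / n

rng = random.Random(1)
print(mc_plain(1000, rng), mc_importance(1000, rng))
# both estimates are near 1; the importance-sampled one scatters less
```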
Summary
Monte Carlo methods use a stochastic process to answer a non-stochastic question:
- generate a random sample from an ensemble
- compute properties as an ensemble average
- permits more flexibility to design the sampling algorithm
Monte Carlo integration
- good for high-dimensional integrals: better error properties, better suited for integrating over a complex shape
Importance sampling
- focuses the selection of points on the region contributing most to the integral
- selection of the weighting function is important
- choosing the perfect weight function is the same as solving the integral
Extra Slides
Approximate function with Polynomial
Recall that the interpolating polynomial depends on the chosen grid points:
$$P_n(x) = \sum_{i=0}^{n} l_i(x)\,f(x_i)$$
The Lagrange interpolants can be written as
$$l_i(x) = \frac{\psi(x)}{(x-x_i)\,\psi'(x_i)}$$
$$\psi(x) = (x-x_0)(x-x_1)\cdots(x-x_n) = \prod_{i=0}^{n}(x-x_i), \qquad
\psi'(x_i) = \prod_{\substack{j=0 \\ j\ne i}}^{n}(x_i - x_j)$$
Note that here
$$\lim_{x\to x_i} l_i(x) = \lim_{x\to x_i} \frac{\psi(x)}{(x-x_i)\,\psi'(x_i)} = 1$$
Theorem (Gauss)
Let P(x) be a nontrivial polynomial of degree n+1 such that it is orthogonal to all polynomials of lesser degree:
$$\int_a^b x^k\,P(x)\,dx = 0, \qquad 0 \le k \le n$$
If $x_0, x_1, x_2, \dots, x_n$ are the zeros of P(x), and
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} c_i\,f(x_i) \quad\text{where}\quad c_i = \int_a^b l_i(x)\,dx,$$
then this approximation is exact for all polynomials of degree less than or equal to 2n+1.
Method 2:
In practice, we use Gauss' theorem and well-studied classes of orthogonal polynomials. Here, Legendre polynomials (hence the name Gauss-Legendre quadrature):
$$\int_{-1}^{1} P_m(x)\,P_n(x)\,dx = \frac{2}{2n+1}\,\delta_{nm}$$
All we do is look for the zeros of $P_n(x)$ in [-1,1]. These are our $x_i$'s. The $c_i$'s can be obtained from
$$c_i = \frac{2}{\left(1-x_i^2\right)\left[P_n'(x_i)\right]^2}$$