
Stats 2MB3, Tutorial 6

Feb 27th, 2015


Maximum Likelihood Estimation
• The likelihood function is the joint density of all the observations X_1, \ldots, X_n: f(x_1, \ldots, x_n; \theta_1, \ldots, \theta_m), where \theta_1, \ldots, \theta_m are the parameters.
• We need to find \hat{\theta}_1, \ldots, \hat{\theta}_m such that

f(x_1, \ldots, x_n; \hat{\theta}_1, \ldots, \hat{\theta}_m) = \max_{\theta_1, \ldots, \theta_m} f(x_1, \ldots, x_n; \theta_1, \ldots, \theta_m)

over all \theta_1, \ldots, \theta_m. These estimators are called maximum likelihood estimators (MLEs).
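As a concrete illustration of this definition, the sketch below evaluates a one-parameter log-likelihood on a grid of candidate values and keeps the argmax. The exponential density and the toy data are assumptions made only for this example; for the exponential, the closed-form MLE is the sample mean, which the grid search should recover.

```python
import numpy as np

# A minimal sketch of the MLE idea: scan a grid of candidate parameter
# values and keep the one that maximizes the log-likelihood.
# The exponential pdf f(x; theta) = (1/theta) exp(-x/theta) and the toy
# data below are illustrative assumptions, not part of the tutorial.

def exp_log_likelihood(theta, x):
    """Log-likelihood of a sample from an exponential with mean theta."""
    return -len(x) * np.log(theta) - np.sum(x) / theta

x = np.array([1.2, 0.7, 2.5, 0.9, 1.8])        # toy data
grid = np.linspace(0.1, 5.0, 1000)             # candidate theta values
log_lik = np.array([exp_log_likelihood(t, x) for t in grid])

theta_hat = grid[np.argmax(log_lik)]
print(theta_hat, np.mean(x))                   # both close to 1.42
```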
Ex 15, page 255
• Let X_1, X_2, \ldots, X_n represent a random sample from a Rayleigh distribution with pdf

f(x; \theta) = \frac{x}{\theta} e^{-x^2/(2\theta)}, \quad x > 0

• a) It can be shown that E(X^2) = 2\theta. Use this fact to construct an unbiased estimator of \theta based on \sum X_i^2 (and use rules of expected value to show that it is unbiased).
• b) Estimate θ from the following n=10
observations on vibratory stress of a turbine
blade under specified conditions:
16.88, 10.23, 4.59, 6.66, 13.68, 14.23, 19.87,
9.40, 6.51, 10.95.

Solution:
a) Since E(X_i^2) = 2\theta for each i, the rules of expected value give

E\left(\frac{\sum_i X_i^2}{2n}\right) = \frac{1}{2n}\sum_i E(X_i^2) = \frac{n \cdot 2\theta}{2n} = \theta

so \hat{\theta} = \frac{\sum_i X_i^2}{2n} is an unbiased estimator of \theta.

• b) \sum_i x_i^2 = 1490.1058, so \hat{\theta} = \frac{1490.1058}{20} = 74.505.
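A quick numerical check of part b), assuming only NumPy; the data are the ten observations listed above:

```python
import numpy as np

# The n = 10 vibratory-stress observations from Ex 15 b).
x = np.array([16.88, 10.23, 4.59, 6.66, 13.68,
              14.23, 19.87, 9.40, 6.51, 10.95])

sum_sq = np.sum(x**2)               # sum of x_i^2 = 1490.1058
theta_hat = sum_sq / (2 * len(x))   # unbiased estimator sum(x_i^2)/(2n)
print(sum_sq, theta_hat)            # 1490.1058 74.505...
```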
Ex 28, page 265
• Let X1 , X2 ,…, Xn represent a random sample
from a Rayleigh distribution with density
function given in Ex 15. Determine
• a) The maximum likelihood estimator of \theta, and then calculate the estimate for the vibratory stress data. Is this estimator the same as the unbiased estimator suggested in Ex 15?
• b) The MLE of the median of the vibratory stress distribution.

• Solution:
a) The likelihood function is

L(\theta; x) = \prod_{i=1}^{n} \frac{x_i}{\theta} \exp\!\left(-\frac{x_i^2}{2\theta}\right) = \frac{\prod_{i=1}^{n} x_i}{\theta^n} \exp\!\left(-\frac{\sum_i x_i^2}{2\theta}\right)

Taking the logarithm of both sides,

l(\theta; x) = \log L(\theta; x) = \sum_{i=1}^{n} \log x_i - n\log\theta - \frac{\sum_i x_i^2}{2\theta}
• Take the derivative with respect to \theta and set it equal to zero:

\frac{\partial l}{\partial \theta} = -\frac{n}{\theta} + \frac{\sum_i x_i^2}{2\theta^2} = 0 \;\Rightarrow\; \hat{\theta} = \frac{\sum_i x_i^2}{2n}

which is the same expression as the unbiased estimator in Ex 15, so for the vibratory stress data the estimate is again \hat{\theta} = 74.505.
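As a sanity check on this derivation, the closed-form MLE can be compared against a direct numerical maximization of the log-likelihood. The sketch below assumes SciPy is available; the search bound of 500 is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Vibratory stress data from Ex 15 b).
x = np.array([16.88, 10.23, 4.59, 6.66, 13.68,
              14.23, 19.87, 9.40, 6.51, 10.95])

def neg_log_lik(theta):
    # Negative of l(theta; x) = sum(log x_i) - n log(theta) - sum(x_i^2)/(2 theta)
    return -(np.sum(np.log(x)) - len(x) * np.log(theta)
             - np.sum(x**2) / (2 * theta))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 500.0), method="bounded")
print(res.x)                        # numerical maximizer, about 74.505
print(np.sum(x**2) / (2 * len(x)))  # closed-form MLE agrees
```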
• b) Let m denote the median, so P(X < m) = 1/2:

\int_0^m \frac{x}{\theta} \exp\!\left(-\frac{x^2}{2\theta}\right) dx = 1 - \exp\!\left(-\frac{m^2}{2\theta}\right) = \frac{1}{2} \;\Rightarrow\; m = \sqrt{2\theta \log 2}

By the invariance principle, the MLE of the median is

\hat{m} = \sqrt{2\hat{\theta}\log 2} = \sqrt{(2\log 2)\frac{\sum_i x_i^2}{2n}} = \sqrt{\frac{(\log 2)\sum_i x_i^2}{n}}

Plugging in the actual data (with log the natural logarithm, as in the log-likelihood) gives \hat{m} = \sqrt{2(74.505)\log 2} \approx 10.16.
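A minimal numeric check, assuming NumPy; np.log is the natural logarithm, matching the derivation:

```python
import numpy as np

theta_hat = 74.505                              # MLE of theta from part a)
m_hat = np.sqrt(2 * np.log(2) * theta_hat)      # invariance principle
print(m_hat)                                    # about 10.16

# The fitted Rayleigh CDF at m_hat should be exactly 1/2.
print(1 - np.exp(-m_hat**2 / (2 * theta_hat)))  # 0.5
```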
Exercise 3
• Consider a random sample of random
variables X1 , X2 ,…, Xn from a negative
binomial population with a known parameter
r and unknown parameter p.

• a) Derive the maximum likelihood estimator of p.
• b) A numerical sample of size 6 from this population was gathered, resulting in the values
7, 2, 10, 3, 5, 14.
Calculate the maximum likelihood estimate of p arising from this particular numerical sample.
• a) The likelihood function is

L(p; x, r) = \prod_{i=1}^{n} \binom{x_i + r - 1}{r - 1} p^r (1-p)^{x_i} = \left[\prod_{i=1}^{n} \binom{x_i + r - 1}{r - 1}\right] p^{nr} (1-p)^{\sum_i x_i}

and taking the logarithm,

l(p; x, r) = \log L(p; x, r) = \sum_{i=1}^{n} \log\binom{x_i + r - 1}{r - 1} + nr\log p + \left(\sum_i x_i\right)\log(1-p)

Take the derivative with respect to p and set it equal to zero:

\frac{\partial l}{\partial p} = \frac{nr}{p} - \frac{\sum_i x_i}{1-p} = 0 \;\Rightarrow\; \hat{p} = \frac{nr}{nr + \sum_i x_i}
• b) Plugging in the observed values (n = 6, \sum_i x_i = 41),

\hat{p} = \frac{nr}{nr + \sum_i x_i} = \frac{6r}{6r + 41}

where r is the known parameter.
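Since r is known to the experimenter but not stated here, a runnable check has to assume a value; the r = 3 below is purely hypothetical, chosen only to make the snippet executable.

```python
# Sample from part b); r = 3 is a hypothetical value for illustration only.
x = [7, 2, 10, 3, 5, 14]
n, total = len(x), sum(x)   # n = 6, sum of x_i = 41

def p_hat(r):
    """MLE of p for a negative binomial sample with known r: nr/(nr + sum x_i)."""
    return n * r / (n * r + total)

print(p_hat(3))             # 18/59, about 0.305, for the hypothetical r = 3
```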
