HPD Explained From 17 Slides
David B. Hitchcock
E-Mail: hitchcock@stat.sc.edu
Spring 2022
Likelihood:
\[
f(y \mid \lambda) = \frac{\lambda^{y} e^{-\lambda}}{y!}
\]
Prior:
\[
f(\lambda) = \frac{r^{s}}{\Gamma(s)}\,\lambda^{s-1} e^{-r\lambda}, \quad \lambda > 0.
\]
⇒ Posterior:
\[
f(\lambda \mid y) \propto \lambda^{\sum y_i + s - 1}\, e^{-(n+r)\lambda}, \quad \lambda > 0.
\]
⇒ f(λ|y) is gamma(Σ yᵢ + s, n + r). (Conjugate!)
\[
\hat{\lambda}_B = \frac{\sum y_i + s}{n + r}
= \frac{\sum y_i}{n + r} + \frac{s}{n + r}
= \frac{n}{n + r}\,\frac{\sum y_i}{n} + \frac{r}{n + r}\,\frac{s}{r}
\]
▶ Again, the data get weighted more heavily as n → ∞.
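A minimal numerical sketch of this Poisson-gamma update (the function name, data, and prior settings s = 2, r = 1 are illustrative, not from the slides):

```python
import numpy as np

def poisson_gamma_posterior(y, s, r):
    """Gamma(shape=s, rate=r) prior with Poisson counts y -> gamma posterior."""
    y = np.asarray(y)
    post_shape = y.sum() + s          # sum(y_i) + s
    post_rate = y.size + r            # n + r
    return post_shape, post_rate

# Illustrative data and prior; the posterior mean equals the weighted average above.
y = [3, 1, 4, 2, 5]
s, r = 2.0, 1.0
a, b = poisson_gamma_posterior(y, s, r)
n, ybar = len(y), np.mean(y)
print(a / b)                                           # 2.8333...
print((n / (n + r)) * ybar + (r / (n + r)) * (s / r))  # same: 2.8333...
```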
▶ If θ_L* is the α/2 posterior quantile for θ, and θ_U* is the 1 − α/2 posterior quantile for θ, then (θ_L*, θ_U*) is a 100(1 − α)% credible interval for θ.
Note: P[θ < θ_L* | y] = α/2 and P[θ > θ_U* | y] = α/2.
\[
\begin{aligned}
\Rightarrow P\{\theta \in (\theta_L^*, \theta_U^*) \mid y\}
&= 1 - P\{\theta \notin (\theta_L^*, \theta_U^*) \mid y\} \\
&= 1 - \left(P[\theta < \theta_L^* \mid y] + P[\theta > \theta_U^* \mid y]\right) \\
&= 1 - \alpha.
\end{aligned}
\]
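For a gamma posterior, the equal-tail credible interval is just a pair of gamma quantiles. A minimal sketch, reusing the illustrative gamma(17, rate 6) posterior from the sketch above:

```python
from scipy import stats

a, b = 17.0, 6.0        # illustrative gamma posterior: shape sum(y)+s, rate n+r
alpha = 0.05

# theta_L* and theta_U*: the alpha/2 and 1 - alpha/2 posterior quantiles
lower = stats.gamma.ppf(alpha / 2, a, scale=1 / b)
upper = stats.gamma.ppf(1 - alpha / 2, a, scale=1 / b)
print(f"95% equal-tail credible interval for lambda: ({lower:.3f}, {upper:.3f})")
```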
[Figure: posterior density f(λ|y) on λ ∈ (0, 8), with the central region of area 0.95 shaded and tail areas of 0.025 on each side.]
Example (binomial data, uniform prior): suppose Y ∼ Bin(10, θ), with prior p(θ) = 1, 0 ≤ θ ≤ 1.
\[
p(\theta \mid y) \propto p(\theta)\, L(\theta \mid y)
= (1)\binom{10}{y}\theta^{y}(1-\theta)^{10-y}
\propto \theta^{y}(1-\theta)^{10-y}, \quad 0 \le \theta \le 1.
\]
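This kernel is that of a Beta(y + 1, 11 − y) density. A minimal sketch with an illustrative observed count y = 7:

```python
from scipy import stats

y, n = 7, 10                            # illustrative: 7 successes in 10 trials
post = stats.beta(y + 1, n - y + 1)     # uniform prior -> Beta(y+1, n-y+1) posterior
print(post.mean())                      # posterior mean (y+1)/(n+2) = 0.6667
print(post.interval(0.95))              # equal-tail 95% credible interval for theta
```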
[Figure: posterior density f(λ|y) on λ ∈ (0, 8), with a region of area 0.90 shaded.]
[Figure: posterior density f(λ|y) on λ ∈ (0, 8).]
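The shaded regions in the figures can be computed numerically. Below is a sketch of one common way to get a highest posterior density (HPD) interval for a unimodal posterior: search over all intervals with the required probability and keep the shortest. The function and settings are illustrative, not from the slides.

```python
import numpy as np
from scipy import stats

def hpd_interval(dist, prob=0.90, n_cand=2000):
    """Shortest interval with the given posterior probability (HPD for a
    unimodal density): scan intervals (q_p, q_{p+prob}) and keep the shortest."""
    ps = np.linspace(1e-4, 1 - prob - 1e-4, n_cand)
    lowers = dist.ppf(ps)
    uppers = dist.ppf(ps + prob)
    i = np.argmin(uppers - lowers)
    return lowers[i], uppers[i]

# Illustrative gamma posterior (shape 17, rate 6) from the Poisson-gamma example
post = stats.gamma(17.0, scale=1 / 6.0)
print(hpd_interval(post, prob=0.90))    # shorter than the equal-tail interval
```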
\[
\begin{aligned}
p(\mu \mid y) &\propto \exp\left\{-\frac{1}{2\sigma^2\tau^2}\left[\tau^2\sum Y_i^2 - 2\tau^2\mu n\bar{y} + n\mu^2\tau^2 + \sigma^2\mu^2 - 2\sigma^2\mu\delta + \sigma^2\delta^2\right]\right\} \\
&= \exp\left\{-\frac{1}{2\sigma^2\tau^2}\left[\mu^2(\sigma^2 + n\tau^2) - 2\mu(\delta\sigma^2 + \tau^2 n\bar{y}) + \delta^2\sigma^2 + \tau^2\sum Y_i^2\right]\right\} \\
&\propto \exp\left\{-\frac{1}{2}\left[\mu^2\left(\frac{1}{\tau^2} + \frac{n}{\sigma^2}\right) - 2\mu\left(\frac{\delta}{\tau^2} + \frac{n\bar{y}}{\sigma^2}\right) + k\right]\right\}
\end{aligned}
\]
(where k is some constant)
Hence
\[
\begin{aligned}
p(\mu \mid y) &\propto \exp\left\{-\frac{1}{2}\left(\frac{1}{\tau^2} + \frac{n}{\sigma^2}\right)\left[\mu^2 - 2\mu\,\frac{\frac{\delta}{\tau^2} + \frac{n\bar{y}}{\sigma^2}}{\frac{1}{\tau^2} + \frac{n}{\sigma^2}}\right] + k\right\} \\
&\propto \exp\left\{-\frac{1}{2}\left(\frac{1}{\tau^2} + \frac{n}{\sigma^2}\right)\left[\mu - \frac{\frac{\delta}{\tau^2} + \frac{n\bar{y}}{\sigma^2}}{\frac{1}{\tau^2} + \frac{n}{\sigma^2}}\right]^2\right\},
\end{aligned}
\]
so µ|y is normal with mean
\[
\frac{1/\tau^2}{1/\tau^2 + n/\sigma^2}\,\delta + \frac{n/\sigma^2}{1/\tau^2 + n/\sigma^2}\,\bar{y}
\]
and variance (1/τ² + n/σ²)⁻¹.
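A minimal numerical sketch of this normal-normal update with known σ² (the data, δ, τ², and σ² values are illustrative):

```python
import numpy as np

def normal_known_var_posterior(y, sigma2, delta, tau2):
    """N(delta, tau2) prior on mu, known sampling variance sigma2:
    precisions add, and the posterior mean is a precision-weighted average."""
    y = np.asarray(y)
    n, ybar = y.size, y.mean()
    prec = 1 / tau2 + n / sigma2                 # posterior precision
    mean = (delta / tau2 + n * ybar / sigma2) / prec
    return mean, 1 / prec                        # posterior mean and variance

print(normal_known_var_posterior([9.8, 10.4, 10.1, 9.6],
                                 sigma2=1.0, delta=8.0, tau2=4.0))
```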
\[
p(\sigma^2 \mid y) \propto (\sigma^2)^{-\left(\alpha + \frac{n}{2} + 1\right)}\, e^{-\frac{\beta + \frac{n}{2}w}{\sigma^2}}
\]
▶ Hence the posterior is clearly an IG(α + n/2, β + (n/2)w) distribution, where w = (1/n) Σ(Yᵢ − µ)².
Conjugate!
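A minimal sketch of this inverse-gamma update for σ² with µ known (the data and hyperparameters are illustrative):

```python
import numpy as np
from scipy import stats

def invgamma_posterior(y, mu, alpha, beta):
    """IG(alpha, beta) prior on sigma^2, normal data with known mean mu:
    posterior is IG(alpha + n/2, beta + (n/2) * w), w = mean((y - mu)^2)."""
    y = np.asarray(y)
    w = np.mean((y - mu) ** 2)
    return alpha + y.size / 2, beta + y.size * w / 2

a_post, b_post = invgamma_posterior([9.8, 10.4, 10.1, 9.6], mu=10.0,
                                    alpha=3.0, beta=2.0)
post = stats.invgamma(a_post, scale=b_post)     # scipy's IG(shape, scale=beta)
print(post.mean(), post.interval(0.95))         # posterior mean and 95% interval
```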
A Conjugate analysis with Normal Data (mean known)
\[
\alpha = \frac{[E(\sigma^2)]^2}{\operatorname{var}(\sigma^2)} + 2
\qquad\text{and}\qquad
\beta = E(\sigma^2)\left(\frac{[E(\sigma^2)]^2}{\operatorname{var}(\sigma^2)} + 1\right)
\]
\[
p(\sigma^2) \propto (\sigma^2)^{-(\alpha+1)}\, e^{-\beta/\sigma^2}
\]
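A minimal sketch of this elicitation step, turning a stated prior mean and variance for σ² into (α, β) (the numbers are illustrative):

```python
def invgamma_hyperparams(prior_mean, prior_var):
    """Match an IG(alpha, beta) prior to a stated mean and variance for sigma^2."""
    ratio = prior_mean ** 2 / prior_var
    return ratio + 2, prior_mean * (ratio + 1)   # alpha, beta

# e.g., prior guess E(sigma^2) = 2 and var(sigma^2) = 4  ->  alpha = 3, beta = 4
print(invgamma_hyperparams(2.0, 4.0))
```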
\[
p(\mu \mid \sigma^2) \propto (\sigma^2)^{-\frac{1}{2}}\, e^{-\frac{(\mu - \delta)^2}{2\sigma^2/s_0}}
\]
⇒ the joint posterior is
\[
p(\mu, \sigma^2 \mid y) \propto (\sigma^2)^{-\alpha - \frac{n}{2} - \frac{3}{2}}\,
e^{-\frac{\beta}{\sigma^2} - \frac{1}{2\sigma^2}\left(\sum Y_i^2 - 2n\bar{y}\mu + n\mu^2\right) - \frac{1}{2\sigma^2/s_0}\left(\mu^2 - 2\mu\delta + \delta^2\right)}
\]
\[
= (\sigma^2)^{-\alpha - \frac{n}{2} - \frac{1}{2}}\, e^{-\frac{\beta}{\sigma^2} - \frac{1}{2\sigma^2}\left(\sum Y_i^2 - n\bar{y}^2\right)}
\times (\sigma^2)^{-1}\, e^{-\frac{1}{2\sigma^2}\left\{(n + s_0)\mu^2 - 2(n\bar{y} + \delta s_0)\mu + (n\bar{y}^2 + s_0\delta^2)\right\}}
\]
\[
\mu \mid \sigma^2, y \sim N\!\left(\frac{n\bar{y} + \delta s_0}{n + s_0},\; \frac{\sigma^2}{n + s_0}\right)
\]
▶ Note as s₀ → 0, µ|σ², y is approximately N(ȳ, σ²/n).
▶ Note also the conditional posterior mean is
\[
\frac{n}{n + s_0}\,\bar{y} + \frac{s_0}{n + s_0}\,\delta.
\]
▶ The relative sizes of n and s0 determine the weighting of the
sample mean ȳ and the prior mean δ.
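A minimal sketch of this conditional update, with s₀ playing the role of a prior sample size (the data and settings are illustrative):

```python
import numpy as np

def conditional_mu_posterior(y, sigma2, delta, s0):
    """mu | sigma^2 prior N(delta, sigma^2/s0): the conditional posterior is
    N((n*ybar + delta*s0)/(n + s0), sigma^2/(n + s0))."""
    y = np.asarray(y)
    n, ybar = y.size, y.mean()
    return (n * ybar + delta * s0) / (n + s0), sigma2 / (n + s0)

# s0 = 2 weights the prior mean delta like two extra observations
print(conditional_mu_posterior([9.8, 10.4, 10.1, 9.6],
                               sigma2=1.0, delta=8.0, s0=2.0))
```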
Letting A = 2β + (s₀ + n)(µ − δ)² and z = A/(2σ²), so that σ² = A/(2z) and dσ² = −A/(2z²) dz,
\[
\begin{aligned}
p(\mu \mid y) &\propto \int_0^\infty \left(\frac{A}{2z}\right)^{-\alpha - \frac{n}{2} - \frac{3}{2}} e^{-z}\, \frac{A}{2z^2}\, dz \\
&= \int_0^\infty \left(\frac{A}{2z}\right)^{-\alpha - \frac{n}{2} - \frac{1}{2}} \frac{1}{z}\, e^{-z}\, dz \\
&\propto A^{-\alpha - \frac{n}{2} - \frac{1}{2}} \int_0^\infty z^{\alpha + \frac{n}{2} + \frac{1}{2} - 1}\, e^{-z}\, dz
\end{aligned}
\]
\[
p(\mu \mid y) \propto A^{-\alpha - \frac{n}{2} - \frac{1}{2}}
= \left[2\beta + (s_0 + n)(\mu - \delta)^2\right]^{-\frac{2\alpha + n + 1}{2}}
\propto \left[1 + \frac{(s_0 + n)(\mu - \delta)^2}{2\beta}\right]^{-\frac{2\alpha + n + 1}{2}}
\]
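This final kernel has the form of a shifted and scaled t density in µ with 2α + n degrees of freedom. A minimal numerical check under illustrative values of α, β, s₀, n, δ (an added check, not from the slides):

```python
import numpy as np
from scipy import stats

alpha, beta, s0, n, delta = 3.0, 4.0, 2.0, 10, 1.5   # illustrative values
nu = 2 * alpha + n                                    # implied degrees of freedom
scale = np.sqrt(2 * beta / (nu * (s0 + n)))           # implied t scale

mu = np.linspace(delta - 3, delta + 3, 2001)
kernel = (1 + (s0 + n) * (mu - delta) ** 2 / (2 * beta)) ** (-(2 * alpha + n + 1) / 2)
kernel /= kernel.sum() * (mu[1] - mu[0])              # normalize on the grid

t_pdf = stats.t.pdf(mu, df=nu, loc=delta, scale=scale)
print(np.max(np.abs(kernel - t_pdf)))                 # near zero up to grid error
```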