
Nonlife Actuarial Models

Chapter 2
Claim-Severity Distribution
Learning Objectives

• Continuous and mixed distributions

• Exponential, gamma, Weibull and Pareto distributions

• Mixture distributions

• Tail weights, limiting ratios and conditional tail expectation

• Coverage modification and claim-severity distribution

2.1 Review of Statistics

2.1.1 Survival function and hazard function

• Survival function: The survival function of a random variable X, also called the decumulative function, denoted by SX(x), is the complement of the df, i.e.,

SX(x) = 1 − FX(x) = Pr(X > x). (2.1)

• The following properties hold:

fX(x) = dFX(x)/dx = −dSX(x)/dx. (2.2)

• The sf SX(x) is monotonic nonincreasing.
• Also, we have FX (−∞) = SX (∞) = 0 and FX (∞) = SX (−∞) = 1.

• If X is nonnegative, then FX (0) = 0 and SX (0) = 1.

• The hazard function of a nonnegative random variable X, denoted by hX(x), is defined as

hX(x) = fX(x)/SX(x). (2.3)

• We have

hX(x) dx = fX(x) dx/SX(x)
= Pr(x ≤ X < x + dx)/Pr(X > x)
= Pr(x ≤ X < x + dx and X > x)/Pr(X > x)
= Pr(x < X < x + dx | X > x). (2.4)
• Thus, hX (x) dx can be interpreted as the conditional probability of
X taking value in the infinitesimal interval (x, x + dx) given X > x.

• To derive the sf given the hf, we note that

hX(x) = −(1/SX(x)) dSX(x)/dx = −d log SX(x)/dx, (2.5)

so that

hX(x) dx = −d log SX(x). (2.6)

Integrating both sides of the equation, we obtain

∫0^x hX(s) ds = −∫0^x d log SX(s) = −log SX(s)]0^x = −log SX(x), (2.7)

as log SX(0) = log(1) = 0. Thus, we have

SX(x) = exp[−∫0^x hX(s) ds]. (2.8)
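Equation (2.8) can be checked numerically: for a constant hazard λ the cumulative hazard is λx, so the recovered sf should be e^(−λx). A minimal stdlib sketch (the helper names and λ = 0.5 are my own choices):

```python
import math

def hazard(s, lam=0.5):
    return lam  # constant hf of the exponential distribution E(lam)

def sf_from_hf(x, n=10000):
    # trapezoidal approximation of the cumulative hazard on [0, x]
    h = x / n
    cum = 0.5 * (hazard(0.0) + hazard(x)) * h
    cum += sum(hazard(i * h) for i in range(1, n)) * h
    return math.exp(-cum)

print(sf_from_hf(3.0))       # numeric recovery of the sf
print(math.exp(-0.5 * 3.0))  # closed form e^(-lam*x)
```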
• Example 2.1: Let X be a uniformly distributed random variable
in the interval [0, 100], denoted by U(0, 100). Compute the pdf, df,
sf and hf of X.

• Solution: The pdf, df and sf of X are, for x ∈ [0, 100],

fX (x) = 0.01,

FX (x) = 0.01x,
and
SX (x) = 1 − 0.01x.
From equation (2.3) we obtain the hf as

hX(x) = fX(x)/SX(x) = 0.01/(1 − 0.01x),

which increases with x. In particular, hX(x) → ∞ as x → 100.
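The solution above can be sketched directly (the helper names are mine; this is only an illustration):

```python
# X ~ U(0, 100): pdf, sf and hf of Example 2.1
def pdf(x):
    return 0.01 if 0 <= x <= 100 else 0.0

def sf(x):
    return 1 - 0.01 * x

def hf(x):
    # hX(x) = fX(x) / SX(x) = 0.01 / (1 - 0.01 x)
    return pdf(x) / sf(x)

for x in (0, 50, 90, 99.9):
    print(x, hf(x))  # the hazard grows without bound as x approaches 100
```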
2.1.2 Mixed distribution

• A random variable X is said to be of the mixed type if its df FX(x) is continuous and differentiable except for some values x belonging to a countable set ΩX.

• Thus, if X has a mixed distribution, there exists a function fX(x) such that

FX(x) = Pr(X ≤ x) = ∫−∞^x fX(x) dx + Σ{xi ∈ ΩX, xi ≤ x} Pr(X = xi). (2.9)

• Using the Stieltjes integral we write, for any constants a and b,

Pr(a ≤ X ≤ b) = ∫a^b dFX(x), (2.10)

which is equal to

∫a^b fX(x) dx, if X is continuous, (2.11)

Σ{xi ∈ ΩX, a ≤ xi ≤ b} Pr(X = xi), if X is discrete with support ΩX, (2.12)

and

∫a^b fX(x) dx + Σ{xi ∈ ΩX, a ≤ xi ≤ b} Pr(X = xi), if X is mixed. (2.13)

• Expectation of a function of X: The expected value of g(X), denoted by E[g(X)], is defined as the Stieltjes integral

E[g(X)] = ∫−∞^∞ g(x) dFX(x). (2.14)
• If X is continuous and nonnegative, and g(·) is a nonnegative, monotonic and differentiable function, the following result holds

E[g(X)] = ∫0^∞ g(x) dFX(x) = g(0) + ∫0^∞ g′(x)[1 − FX(x)] dx, (2.17)

where g′(x) is the derivative of g(x) with respect to x.

• Defining g(x) = x, so that g(0) = 0 and g′(x) = 1, the mean of X can be evaluated by

E(X) = ∫0^∞ [1 − FX(x)] dx = ∫0^∞ SX(x) dx. (2.18)

• Example 2.2: Let X ∼ U(0, 100). Define a random variable Y as follows

Y = 0, for X ≤ 20,
    X − 20, for X > 20.

Determine the df of Y, and its density and mass function.

[Figure: distribution functions of the loss variables X and Y]
2.1.3 Distribution of functions of random variables

• Let g(·) be a continuous and differentiable function, and X be a continuous random variable with pdf fX(x). We define Y = g(X).

• Theorem 2.1: Let X be a continuous random variable taking values in [a, b] with pdf fX(x), and let g(·) be a continuous and differentiable one-to-one transformation. Denote α = g(a) and β = g(b). The pdf of Y = g(X) is

fY(y) = fX(g^(−1)(y)) |dg^(−1)(y)/dy|, if y ∈ [α, β],
        0, otherwise. (2.19)
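Theorem 2.1 can be illustrated with a transform of my own choosing: for X ∼ U(0, 1) and Y = X², g^(−1)(y) = √y and |dg^(−1)(y)/dy| = 1/(2√y), so fY(y) = 1/(2√y) on (0, 1]:

```python
import math

# fX = 1 on [0, 1]; Theorem 2.1 gives fY(y) = 1 * |d sqrt(y)/dy| = 1/(2 sqrt(y))
def pdf_y(y):
    return 1.0 / (2.0 * math.sqrt(y))

def df_y_numeric(y, h=1e-6):
    # FY(y) = FX(sqrt(y)) = sqrt(y); differentiate numerically as a cross-check
    return (math.sqrt(y + h) - math.sqrt(y - h)) / (2 * h)

print(pdf_y(0.25), df_y_numeric(0.25))  # both equal 1.0
```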
2.2 Some Continuous Distributions

2.2.1 Exponential Distribution

• A random variable X has an exponential distribution with parameter


λ, denoted by E(λ), if its pdf is

fX (x) = λe−λx , for x ≥ 0. (2.21)

• The df and sf of X are

FX (x) = 1 − e−λx , (2.22)

and
SX (x) = e−λx . (2.23)

Thus, the hf of X is

hX(x) = fX(x)/SX(x) = λ, (2.24)

which is a constant, irrespective of the value of x. The mean and variance of X are

E(X) = 1/λ and Var(X) = 1/λ². (2.25)

The mgf of X is

MX(t) = λ/(λ − t). (2.26)
2.2.2 Gamma distribution

• X is said to have a gamma distribution with parameters α and β (α > 0 and β > 0), denoted by G(α, β), if its pdf is

fX(x) = (1/(Γ(α)β^α)) x^(α−1) e^(−x/β), for x ≥ 0. (2.27)

The function Γ(α) is called the gamma function, defined by

Γ(α) = ∫0^∞ y^(α−1) e^(−y) dy, (2.28)

which exists (i.e., the integral converges) for α > 0.

• For α > 1, Γ(α) satisfies the following recursion

Γ(α) = (α − 1)Γ(α − 1). (2.29)

In addition, if α is a positive integer, we have

Γ(α) = (α − 1)!. (2.30)

• The mean and variance of X are

E(X) = αβ and Var(X) = αβ 2 , (2.31)

and its mgf is

MX(t) = 1/(1 − βt)^α, for t < 1/β. (2.32)

2.2.3 Weibull distribution

• A random variable X has a 2-parameter Weibull distribution with parameters α and λ, denoted by W(α, λ), if its pdf is

fX(x) = (α/λ)(x/λ)^(α−1) exp[−(x/λ)^α], for x ≥ 0, (2.34)
where α is the shape parameter and λ is the scale parameter.

• The mean and variance of X are

E(X) = μ = λΓ(1 + 1/α) and Var(X) = λ²Γ(1 + 2/α) − μ². (2.35)

• The df of X is

FX(x) = 1 − exp[−(x/λ)^α], for x ≥ 0. (2.36)

2.2.4 Pareto distribution

• A random variable X has a Pareto distribution with parameters α > 0 and γ > 0, denoted by P(α, γ), if its pdf is

fX(x) = αγ^α/(x + γ)^(α+1), for x ≥ 0. (2.37)

• The df of X is

FX(x) = 1 − [γ/(x + γ)]^α, for x ≥ 0. (2.38)

[Figure: pdf of the standard Weibull loss variable for α = 0.5, 1 and 2]
• The kth moment of X exists for k < α. For α > 2, the mean and variance of X are

E(X) = γ/(α − 1) and Var(X) = αγ²/[(α − 1)²(α − 2)]. (2.40)

16
Table A.2: Some continuous distributions

Distribution, notation and support | pdf fX(x) | mgf MX(t) | Mean | Variance
Exponential E(λ), x ∈ [0, ∞) | λe^(−λx) | λ/(λ − t) | 1/λ | 1/λ²
Gamma G(α, β), x ∈ [0, ∞) | x^(α−1) e^(−x/β)/(Γ(α)β^α) | 1/(1 − βt)^α | αβ | αβ²
Pareto P(α, γ), x ∈ [0, ∞) | αγ^α/(x + γ)^(α+1) | Does not exist | γ/(α − 1) | αγ²/[(α − 1)²(α − 2)]
Weibull W(α, λ), x ∈ [0, ∞) | (α/λ)(x/λ)^(α−1) e^(−(x/λ)^α) | Not presented | μ = λΓ(1 + 1/α) | λ²Γ(1 + 2/α) − μ²
2.3 Creating New Distributions

2.3.1 Transformation of random variables

• Scaling: Let X ∼ W(α, λ). Consider the scaling of X by the scale parameter λ and define

Y = X/λ. (2.41)

Then Y has a standard Weibull distribution.

• Power transformation: Assume X ∼ E(λ) and define Y = X 1/α


for an arbitrary constant α > 0. Then Y ∼ W(α, β) ≡ W(α, 1/λ1/α ).

• Exponential transformation: Let X be normally distributed with mean μ and variance σ², denoted by X ∼ N(μ, σ²). A new random variable may be created by taking the exponential of X. Thus, we define Y = e^X, so that x = log y.

• The pdf of X is

fX(x) = (1/(√(2π)σ)) exp[−(x − μ)²/(2σ²)]. (2.48)

• The pdf of Y is

fY(y) = (1/(√(2π)σy)) exp[−(log y − μ)²/(2σ²)]. (2.50)

• A random variable Y with pdf given by equation (2.50) is said to have a lognormal distribution with parameters μ and σ², denoted by L(μ, σ²).
• In other words, if log Y ∼ N (μ, σ 2 ), then Y ∼ L(μ, σ 2 ). The mean
and variance of Y ∼ L(μ, σ 2 ) are given by

E(Y) = exp(μ + σ²/2), (2.51)

and

Var(Y) = exp(2μ + σ²)[exp(σ²) − 1]. (2.52)

2.3.2 Mixture distribution

• Let X be a continuous random variable with pdf fX (x | λ), which


depends on the parameter λ.

• We allow λ to be the realization of a random variable Λ with support ΩΛ and pdf fΛ(λ | θ), where θ is the parameter determining the distribution of Λ, sometimes called the hyperparameter.
• A new random variable Y may then be created by mixing the pdfs fX(x | λ) over λ to form the pdf

fY(y | θ) = ∫{λ ∈ ΩΛ} fX(y | λ) fΛ(λ | θ) dλ. (2.54)

• Example 2.4: Assume X ∼ E(λ), and let the parameter λ be


distributed as G(α, β). Determine the mixture distribution.

• Solution: We have

fX(x | λ) = λe^(−λx),

and

fΛ(λ | α, β) = (1/(Γ(α)β^α)) λ^(α−1) e^(−λ/β).

Thus,

∫0^∞ fX(x | λ) fΛ(λ | α, β) dλ = ∫0^∞ λe^(−λx) (1/(Γ(α)β^α)) λ^(α−1) e^(−λ/β) dλ

= ∫0^∞ λ^α exp[−λ(x + 1/β)]/(Γ(α)β^α) dλ

= [Γ(α + 1)/(Γ(α)β^α)] [β/(βx + 1)]^(α+1).

If we let γ = 1/β, the above expression can be written as

[Γ(α + 1)/(Γ(α)β^α)] [β/(βx + 1)]^(α+1) = αγ^α/(x + γ)^(α+1),

which is the pdf of P(α, γ). Thus, the gamma-exponential mixture has a Pareto distribution. We also see that the distribution of the mixture depends on α and β (or α and γ).

• Another important result is that the gamma-Poisson mixture has a negative-binomial distribution. See Q2.27 in NAM.
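The gamma-exponential result can be cross-checked numerically; the parameter values α = 2 and β = 0.5 below are my own illustrative choices, for which the mixture pdf should match the P(2, 2) pdf:

```python
import math

alpha, beta = 2.0, 0.5       # illustrative choices; gamma = 1/beta = 2
gamma_par = 1.0 / beta

def mixture_pdf(x, upper=40.0, n=50000):
    # trapezoidal integration of f_X(x | lam) * f_Lambda(lam) over lam in (0, upper]
    h = upper / n
    total = 0.0
    for i in range(1, n + 1):
        lam = i * h
        f_x = lam * math.exp(-lam * x)                     # E(lam) pdf at x
        f_lam = (lam**(alpha - 1) * math.exp(-lam / beta)
                 / (math.gamma(alpha) * beta**alpha))      # G(alpha, beta) pdf
        w = 0.5 if i == n else 1.0
        total += w * f_x * f_lam * h
    return total

def pareto_pdf(x):
    return alpha * gamma_par**alpha / (x + gamma_par)**(alpha + 1)

print(mixture_pdf(1.0), pareto_pdf(1.0))  # both close to 8/27
```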

• The example below illustrates the computation of the mean and variance of a continuous mixture using rules for conditional expectation. For the mean, we use the following result

E(X) = E[E(X | Λ)]. (2.56)

For the variance, we use the result

Var(X) = E [Var(X | Λ)] + Var [E(X | Λ)] . (2.57)

• Example 2.5: Assume X | Λ ∼ E(Λ), and let the parameter Λ be distributed as G(α, β). Calculate the unconditional mean and variance of X using rules for conditional expectation.

• Solution: As the conditional distribution of X is E(Λ), from Table A.2 we have

E(X | Λ) = 1/Λ.
Thus, from equation (2.56) we have

E(X) = E(1/Λ)
= ∫0^∞ (1/λ)(1/(Γ(α)β^α)) λ^(α−1) e^(−λ/β) dλ
= Γ(α − 1)β^(α−1)/(Γ(α)β^α)
= 1/[(α − 1)β].

Also, from Table A.2 we have

Var(X | Λ) = 1/Λ²,

so that using equation (2.57) we have

Var(X) = E(1/Λ²) + Var(1/Λ) = 2E(1/Λ²) − [E(1/Λ)]².
As

E(1/Λ²) = ∫0^∞ (1/λ²)(1/(Γ(α)β^α)) λ^(α−1) e^(−λ/β) dλ
= Γ(α − 2)β^(α−2)/(Γ(α)β^α)
= 1/[(α − 1)(α − 2)β²],

we conclude

Var(X) = 2/[(α − 1)(α − 2)β²] − 1/[(α − 1)β]² = α/[(α − 1)²(α − 2)β²].

• The above results can be obtained directly from the mean and variance of a Pareto distribution.
2.3.3 Splicing

• Splicing is a technique to create a new distribution from standard distributions, using different standard pdfs in different parts of the support. Suppose there are k pdfs, denoted by f1(x), ..., fk(x), defined on the support ΩX = [0, ∞). A new pdf fX(x) can be defined as follows

fX(x) = p1 f1*(x), for x ∈ [0, c1),
        p2 f2*(x), for x ∈ [c1, c2),
        ...
        pk fk*(x), for x ∈ [ck−1, ∞), (2.58)

where pi ≥ 0 for i = 1, ..., k with Σ(i=1 to k) pi = 1, c0 = 0 < c1 < c2 < ... < ck−1 < ∞ = ck, and fi*(x) is a legitimate pdf based on fi(x) in the interval [ci−1, ci) for i = 1, ..., k.
• Example 2.6: Let X1 ∼ E(0.5), X2 ∼ E(2) and X3 ∼ P(2, 3), with
corresponding pdf fi (x) for i = 1, 2 and 3. Construct a spliced dis-
tribution using f1 (x) in the interval [0, 1), f2 (x) in the interval [1, 3)
and f3 (x) in the interval [3, ∞), so that each interval has a proba-
bility content of one third. Also, determine the spliced distribution
so that its pdf is continuous, without imposing equal probabilities
for the three segments.

• See Figure 2.4.
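A sketch of the equal-probability splicing in Example 2.6 (the helper names are mine): each fi is renormalized on its interval and weighted by pi = 1/3.

```python
import math

def df1(x): return 1 - math.exp(-0.5 * x)      # E(0.5) df
def df2(x): return 1 - math.exp(-2.0 * x)      # E(2) df
def df3(x): return 1 - (3.0 / (x + 3.0))**2    # P(2, 3) df

def pdf1(x): return 0.5 * math.exp(-0.5 * x)
def pdf2(x): return 2.0 * math.exp(-2.0 * x)
def pdf3(x): return 2.0 * 3.0**2 / (x + 3.0)**3

def spliced_pdf(x):
    # renormalize each pdf on its interval, then weight by 1/3
    third = 1.0 / 3.0
    if x < 1:
        return third * pdf1(x) / (df1(1) - df1(0))
    if x < 3:
        return third * pdf2(x) / (df2(3) - df2(1))
    return third * pdf3(x) / (1 - df3(3))
```

By construction each of the three segments carries probability exactly one third.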

[Figure 2.4: pdf of the spliced distribution under the equal-probability restriction and the continuity restriction]
2.4 Tail Properties

• A severity distribution with high probability of heavy loss is said to


have a fat tail, heavy tail or thick tail.

• To compare the tail behavior of two distributions we may take the


limiting ratio of their sf. The faster the sf approaches zero, the
thinner is the tail.

• If S1(x) and S2(x) are the sf of the random variables X1 and X2, respectively, with corresponding pdf f1(x) and f2(x), we have

lim(x→∞) S1(x)/S2(x) = lim(x→∞) S1′(x)/S2′(x) = lim(x→∞) f1(x)/f2(x). (2.61)

• Example 2.7: Let f1(x) be the pdf of a P(α, γ) distribution, and f2(x) be the pdf of a G(θ, β) distribution. Determine the limiting ratio of these distributions, and suggest which distribution has a thicker tail.

• Solution: The limiting ratio of the Pareto versus the gamma distribution is

lim(x→∞) f1(x)/f2(x) = lim(x→∞) [αγ^α/(x + γ)^(α+1)] / [x^(θ−1) e^(−x/β)/(Γ(θ)β^θ)]

= αγ^α Γ(θ)β^θ lim(x→∞) e^(x/β)/[(x + γ)^(α+1) x^(θ−1)],

which tends to infinity as x tends to infinity.

• Thus, we conclude that the Pareto distribution has a thicker tail than the gamma distribution.
• The quantile function (qf) is the inverse of the df. Thus, if

FX (xδ ) = δ, (2.64)

then
xδ = FX−1 (δ). (2.65)

• FX−1 (·) is called the quantile function and xδ is the δ-quantile (or
the 100δ-percentile) of X. Equation (2.65) assumes that for any
0 < δ < 1 a unique value xδ exists.

• Example 2.8: Let X ∼ E(λ) and Y ∼ L(μ, σ 2 ). Derive the


quantile functions of X and Y . If λ = 1, μ = −0.5 and σ 2 = 1,
compare the quantiles of X and Y for δ = 0.95 and 0.99.

• Solution: We have

FX(xδ) = 1 − e^(−λxδ) = δ,

so that e^(−λxδ) = 1 − δ, implying

xδ = −log(1 − δ)/λ.
For Y we have

δ = Pr(Y ≤ yδ)
= Pr(log Y ≤ log yδ)
= Pr(N(μ, σ²) ≤ log yδ)
= Pr(Z ≤ (log yδ − μ)/σ),

where Z follows the standard normal distribution. Thus,

(log yδ − μ)/σ = Φ^(−1)(δ),

where Φ^(−1)(·) is the quantile function of the standard normal. Hence,

yδ = exp[μ + σΦ^(−1)(δ)].
For X, given the parameter value λ = 1, E(X) = Var(X) = 1 and
x0.95 = − log(0.05) = 2.9957.
For Y with μ = −0.5 and σ 2 = 1, from equations (2.51) and (2.52)
we have E(Y ) = 1 and Var(Y ) = exp(1) − 1 = 1.7183.
Hence, X and Y have the same mean, while Y has a larger variance.
For the quantile of Y we have Φ^(−1)(0.95) = 1.6449, so that

y0.95 = exp[μ + σΦ^(−1)(0.95)] = exp(1.6449 − 0.5) = 3.1421.

Similarly, we obtain x0.99 = 4.6052 and y0.99 = 6.2109.


Thus, Y has larger quantiles for δ = 0.95 and 0.99, indicating it has
a thicker upper tail.
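The quantile computations in this example can be reproduced with the standard library alone (a sketch; the helper names are mine):

```python
import math
from statistics import NormalDist

lam, mu, sigma = 1.0, -0.5, 1.0  # Example 2.8 parameter values

def q_exp(delta):
    # quantile of E(lam): x_delta = -log(1 - delta) / lam
    return -math.log(1 - delta) / lam

def q_lognormal(delta):
    # quantile of L(mu, sigma^2): y_delta = exp(mu + sigma * PhiInv(delta))
    return math.exp(mu + sigma * NormalDist().inv_cdf(delta))

for delta in (0.95, 0.99):
    print(delta, round(q_exp(delta), 4), round(q_lognormal(delta), 4))
```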
• The quantile xδ indicates the loss which will be exceeded with probability 1 − δ. However, it does not provide information about how bad the loss might be if the loss exceeds this threshold.
• To address this issue, we may compute the expected loss conditional
on the threshold being exceeded. We call this the conditional tail
expectation (CTE) with tolerance probability 1 − δ, denoted by
CTEδ , which is defined as
CTEδ = E(X | X > xδ ). (2.66)

• CTEδ is computed by

CTEδ = ∫xδ^∞ x fX|X>xδ(x) dx

= ∫xδ^∞ x [fX(x)/SX(xδ)] dx

= [∫xδ^∞ x fX(x) dx]/(1 − δ). (2.68)
• Example 2.9: For the loss distributions X and Y given in Example 2.8, calculate CTE0.95.
• Solution: We first consider X. As fX(x) = λe^(−λx), the numerator of the last line of equation (2.68) is

∫xδ^∞ λx e^(−λx) dx = −∫xδ^∞ x de^(−λx)

= −x e^(−λx)]xδ^∞ + ∫xδ^∞ e^(−λx) dx

= xδ e^(−λxδ) + e^(−λxδ)/λ,

which, for δ = 0.95 and λ = 1, is equal to

3.9957 e^(−2.9957) = 0.1997866.

Thus, CTE0.95 of X is

0.1997866/0.05 = 3.9957.
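By the memoryless property of the exponential, CTEδ = xδ + 1/λ, which agrees with the value above; a numeric sketch (the helper names are mine):

```python
import math

lam, delta = 1.0, 0.95
x_d = -math.log(1 - delta) / lam  # 0.95-quantile of E(1)

def cte_numeric(upper=40.0, n=100000):
    # trapezoidal integration of x * f(x) over [x_d, upper], divided by 1 - delta
    h = (upper - x_d) / n
    total = 0.0
    for i in range(n + 1):
        x = x_d + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * x * lam * math.exp(-lam * x) * h
    return total / (1 - delta)

print(cte_numeric(), x_d + 1 / lam)  # both close to 3.9957
```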

• The pdf of the lognormal distribution is given in equation (2.50). To compute the numerator of (2.68) we need to calculate

∫yδ^∞ (1/(√(2π)σ)) exp[−(log x − μ)²/(2σ²)] dx.

• To do this, we define the transformation

z = (log x − μ)/σ − σ.
• As

exp[−(log x − μ)²/(2σ²)] = exp[−(z + σ)²/2]
= exp(−z²/2) exp(−σz − σ²/2),

and

dx = σx dz = σ exp(μ + σ² + σz) dz,

we have

∫yδ^∞ (1/(√(2π)σ)) exp[−(log x − μ)²/(2σ²)] dx = exp(μ + σ²/2) ∫z*^∞ (1/√(2π)) exp(−z²/2) dz

= exp(μ + σ²/2)[1 − Φ(z*)],

where Φ(·) is the df of the standard normal and

z* = (log yδ − μ)/σ − σ.
• Now we substitute μ = −0.5 and σ² = 1 to obtain

z* = log y0.95 − 0.5 = log(3.1421) − 0.5 = 0.6449,

so that the CTE0.95 of Y is

CTE0.95 = e^0 [1 − Φ(0.6449)]/0.05 = 5.1900,

which is larger than that of X. Thus, Y gives rise to more extreme losses compared to X, whether we measure the extreme events by the upper quantiles or the CTE.
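The closed-form CTE of the lognormal derived above can be evaluated directly (a stdlib sketch; variable names are mine):

```python
import math
from statistics import NormalDist

mu, sigma, delta = -0.5, 1.0, 0.95  # Example 2.9 values
Phi = NormalDist().cdf

y_d = math.exp(mu + sigma * NormalDist().inv_cdf(delta))  # 0.95-quantile of Y
z_star = (math.log(y_d) - mu) / sigma - sigma
cte = math.exp(mu + sigma**2 / 2) * (1 - Phi(z_star)) / (1 - delta)
print(round(cte, 4))  # about 5.19, larger than the exponential's 3.9957
```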

2.5 Coverage Modifications

• To reduce risks and/or control problems of moral hazard, insurance


companies often modify the policy coverage.

• Examples of such modifications are deductibles, policy limits and


coinsurance.

• We need to distinguish between a loss event and a payment event.


A loss event occurs whenever there is a loss, while a payment event
occurs only when the insurer is liable to pay for (some or all of) the
loss.

• We define the following notation:
1. X = amount paid in a loss event when there is no coverage modification

2. XL = amount paid in a loss event when there is coverage modification

3. XP = amount paid in a payment event when there is coverage modification

• Thus, X and XP are positive and XL is nonnegative.
2.5.1 Deductibles

• An insurance policy with a per-loss deductible of d will not pay the


insured if the loss X is less than or equal to d, and will pay the
insured X − d if the loss X exceeds d.

• Thus, the amount paid in a loss event, XL, is given by

XL = 0, if X ≤ d,
     X − d, if X > d. (2.69)

• If we adopt the notation

x+ = 0, if x ≤ 0,
     x, if x > 0, (2.70)

then XL may also be defined as

XL = (X − d)+. (2.71)
[Figure 2.6: distribution functions of the loss variable X, the loss in a loss event XL, and the loss in a payment event XP]
• Note that Pr(XL = 0) = FX (d). Thus, XL is a mixed-type random
variable. It has a probability mass at point 0 of FX (d) and a density
function of
fXL (x) = fX (x + d), for x > 0. (2.72)

• The random variable XP , called the excess-loss variable, is defined


only when there is a payment, i.e., when X > d. It is a conditional
random variable, defined as XP = X − d | X > d.

• Figure 2.6 plots the df of X, XL and XP .

• The mean of XL can be computed as follows

E(XL) = ∫0^∞ x fXL(x) dx

= ∫d^∞ (x − d) fX(x) dx

= −∫d^∞ (x − d) dSX(x)

= −(x − d)SX(x)]d^∞ + ∫d^∞ SX(x) dx

= ∫d^∞ SX(x) dx. (2.76)

• The mean of XP, called the mean excess loss, is given by the following formula

E(XP) = ∫0^∞ x fXP(x) dx

= ∫0^∞ x [fX(x + d)/SX(d)] dx

= [∫0^∞ x fX(x + d) dx]/SX(d)

= [∫d^∞ (x − d) fX(x) dx]/SX(d)

= E(XL)/SX(d). (2.77)
• Using conditional expectation, we have

E(XL) = E(XL | XL > 0) Pr(XL > 0) + E(XL | XL = 0) Pr(XL = 0)
= E(XL | XL > 0) Pr(XL > 0)
= E(XP) Pr(XL > 0), (2.78)

which implies

E(XP) = E(XL)/Pr(XL > 0) = E(XL)/SXL(0) = E(XL)/SX(d), (2.79)

as proved in equation (2.77).

• Also, from the fourth line of equation (2.77), we have

E(XP) = [∫d^∞ x fX(x) dx − d ∫d^∞ fX(x) dx]/SX(d)

= [∫d^∞ x fX(x) dx − d SX(d)]/SX(d)

= CTEδ − d, where δ = FX(d). (2.81)

• Example 2.10: For the loss distributions X and Y given in Examples 2.8 and 2.9, assume there is a deductible of d = 0.25. Calculate E(XL), E(XP), E(YL) and E(YP).

• Solution: For X, we compute E(XL) from equation (2.76) as follows

E(XL) = ∫0.25^∞ e^(−x) dx = e^(−0.25) = 0.7788.

Now SX(0.25) = e^(−0.25) = 0.7788. Thus, from equation (2.77), E(XP) = 1. For Y, we use the results in Example 2.9. First, we have

E(YL) = ∫d^∞ (y − d) fY(y) dy = ∫d^∞ y fY(y) dy − d SY(d).

Replacing yδ in Example 2.9 by d, the first term of the above expression becomes

∫d^∞ y fY(y) dy = ∫d^∞ (1/(√(2π)σ)) exp[−(log y − μ)²/(2σ²)] dy = exp(μ + σ²/2)[1 − Φ(z*)],

where

z* = (log d − μ)/σ − σ = log(0.25) − 0.5 = −1.8863.

As Φ(−1.8863) = 0.0296, we have

∫d^∞ y fY(y) dy = (e^(−0.5+0.5))[1 − 0.0296] = 0.9704.

Now,

SY(d) = Pr(Z > (log(d) − μ)/σ) = Pr(Z > −0.8863) = 0.8123.

Hence,

E(YL) = 0.9704 − (0.25)(0.8123) = 0.7673,

and

E(YP) = 0.7673/0.8123 = 0.9446.
• The computation of E(YL) for Y ∼ L(μ, σ²) is summarized below.

• Theorem 2.2: Let Y ∼ L(μ, σ²). Then for a positive constant d,

E[(Y − d)+] = exp(μ + σ²/2)[1 − Φ(z*)] − d[1 − Φ(z* + σ)], (2.82)

where

z* = (log d − μ)/σ − σ. (2.83)
σ
• The expected reduction in loss due to the deductible is

E(X) − E[(X − d)+] = E(X) − E(XL). (2.87)
We define the loss elimination ratio with deductible d, denoted by LER(d), as the ratio of the expected reduction in loss due to the deductible to the expected loss without the deductible, which is given by

LER(d) = [E(X) − E(XL)]/E(X). (2.88)
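For X ∼ E(λ), E(XL) = e^(−λd)/λ, so LER(d) = 1 − e^(−λd); a small illustrative sketch (the helper name is mine):

```python
import math

def ler_exponential(lam, d):
    # LER(d) = [E(X) - E(X_L)] / E(X) with E(X) = 1/lam, E(X_L) = e^(-lam*d)/lam
    mean = 1 / lam
    mean_xl = math.exp(-lam * d) / lam
    return (mean - mean_xl) / mean

print(round(ler_exponential(1.0, 0.25), 4))  # 1 - e^(-0.25)
```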

2.5.2 Policy limit

• For an insurance policy with a policy limit, the insurer compensates


the insured up to a pre-set amount, say, u, called the maximum
covered loss.

• We denote the amount paid for a policy with a policy limit by XU.
• If we define the binary operation ∧ as the minimum of two quantities, so that

a ∧ b = min{a, b}, (2.91)

then

XU = X ∧ u. (2.92)
• XU defined above is called the limited-loss variable.

• For any arbitrary constant q, the following identity holds

X = (X ∧ q) + (X − q)+ . (2.94)

• The LER can be written as

LER(d) = {E(X) − E[(X − d)+]}/E(X) = {E(X) − [E(X) − E(X ∧ d)]}/E(X) = E(X ∧ d)/E(X). (2.95)
• From (2.94) we have

(X − q)+ = X − (X ∧ q),

which implies

E[(X − q)+] = E(X) − E(X ∧ q).

As E(X ∧ q) is tabulated in the Exam C Tables for commonly used distributions of X, the above equation is a convenient way to calculate E[(X − q)+].

• The above equation also implies, for any positive rv X,

E(X ∧ q) = E(X) − E[(X − q)+]
= ∫0^∞ SX(x) dx − ∫q^∞ SX(x) dx
= ∫0^q SX(x) dx.
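The identity E(X ∧ q) = ∫0^q SX(x) dx can be verified numerically for, say, an exponential loss (an illustrative sketch; names are mine):

```python
import math

lam, q = 0.5, 3.0  # illustrative parameter values

def limited_mean_numeric(n=100000):
    # trapezoidal integration of S_X(x) = e^(-lam*x) over [0, q]
    h = q / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-lam * i * h) * h
    return total

closed_form = (1 - math.exp(-lam * q)) / lam  # E[X ^ q] for E(lam)
print(limited_mean_numeric(), closed_form)
```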

2.5.3 Coinsurance

• An insurance policy may specify that the insurer and insured share
the loss in a loss event, which is called coinsurance.

• We consider a simple coinsurance in which the insurer pays the in-


sured a fixed percentage c of the loss in a loss event, where 0 < c < 1.

• We denote XC as the payment made by the insurer. Thus,

XC = cX, (2.96)

where X is the loss without policy modification. The pdf of XC is

fXC(x) = (1/c) fX(x/c). (2.97)
• Now we consider a policy which has a deductible of amount d, a
policy limit of amount u (u > d) and a coinsurance factor c (0 <
c < 1).

• We denote the loss random variable in a loss event by XT , which is


given by

XT = c [(X ∧ u) − (X ∧ d)] = c [(X − d)+ − (X − u)+ ] . (2.99)

• It can be checked that XT defined above satisfies

XT = 0, for X < d,
     c(X − d), for d ≤ X < u,
     c(u − d), for X ≥ u. (2.100)
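The equivalence of the compact form (2.99) and the piecewise form (2.100) can be spot-checked (the parameter values below are illustrative):

```python
c, d, u = 0.8, 0.25, 4.0  # coinsurance factor, deductible, maximum covered loss

def xt_compact(x):
    # equation (2.99): c[(X ^ u) - (X ^ d)]
    return c * (min(x, u) - min(x, d))

def xt_piecewise(x):
    # equation (2.100)
    if x < d:
        return 0.0
    if x < u:
        return c * (x - d)
    return c * (u - d)

for x in (0.1, 0.25, 1.0, 4.0, 10.0):
    print(x, xt_compact(x), xt_piecewise(x))
```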

• From equation (2.99) we have

E(XT) = c {E[(X − d)+] − E[(X − u)+]}, (2.101)
which can be computed using equation (2.76).

• Example 2.12: For the exponential loss distribution X and lognormal loss distribution Y given in Examples 2.8 through 2.11, assume there is a deductible of d = 0.25, maximum covered loss of u = 4 and coinsurance factor of c = 0.8. Calculate the mean loss in a loss event for these two distributions.

• Solution: We use equation (2.101) to calculate E(XT) and E(YT). Note that E[(X − d)+] and E[(Y − d)+] are computed in Example 2.10 as 0.7788 and 0.7673, respectively. We now compute E[(X − u)+] and E[(Y − u)+] using the method in Example 2.10, with u replacing d. For X, we have

E[(X − u)+] = ∫u^∞ e^(−x) dx = e^(−4) = 0.0183.

For Y, we have z* = log(4) − 0.5 = 0.8863, so that Φ(z*) = 0.8123, and

SY(u) = Pr(Z > (log(u) − μ)/σ) = Pr(Z > 1.8863) = 0.0296.

Thus,

E[(Y − u)+] = (1 − 0.8123) − (4)(0.0296) = 0.0693.

Therefore, from equation (2.101), we have

E(XT) = (0.8)(0.7788 − 0.0183) = 0.6084,

and E(YT) = (0.8)(0.7673 − 0.0693) = 0.5584.
2.5.4 Effects of inflation

• While loss distributions are specified based on current experience and data, inflation may cause increases in the costs. On the other hand, policy specifications will remain unchanged for the policy period.

• We consider a one-period insurance policy and assume the rate of


price increase in the period to be r. We use a tilde to denote inflation
adjusted losses.

• Thus, the inflation adjusted loss distribution is denoted by X̃, which is equal to (1 + r)X. For an insurance policy with deductible d, the loss in a loss event and the loss in a payment event with inflation adjustment are denoted by X̃L and X̃P, respectively.
• As the deductible is not inflation adjusted, we have

X̃L = (X̃ − d)+ = X̃ − (X̃ ∧ d), (2.106)

and

X̃P = X̃ − d | X̃ − d > 0 = X̃L | X̃L > 0. (2.107)

• Thus, the mean inflation adjusted loss is given by

E(X̃L) = E[(X̃ − d)+]
= E[(1 + r)(X − d/(1 + r))+]
= (1 + r) E[(X − d/(1 + r))+]. (2.109)

• Also,

E(X̃P) = E(X̃L | X̃L > 0) = E(X̃L)/Pr(X̃L > 0). (2.111)

As

Pr(X̃L > 0) = Pr(X̃ > d) = Pr(X > d/(1 + r)) = SX(d/(1 + r)), (2.112)

we conclude

E(X̃P) = E(X̃L)/SX(d/(1 + r)). (2.113)
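For an exponential loss these formulas have closed forms, since E[(X − a)+] = e^(−λa)/λ and SX(a) = e^(−λa); a compact numeric illustration (the parameter values are my own):

```python
import math

lam, d, r = 1.0, 0.25, 0.05  # illustrative loss rate, deductible, inflation rate

a = d / (1 + r)                                        # deflated deductible
mean_xl_tilde = (1 + r) * math.exp(-lam * a) / lam     # equation (2.109)
mean_xp_tilde = mean_xl_tilde / math.exp(-lam * a)     # equation (2.113): divide by S_X(a)
print(round(mean_xl_tilde, 4), round(mean_xp_tilde, 4))
```

Note that for the exponential loss the mean payment per payment event works out to (1 + r)/λ, i.e. the uninflated mean excess loss scaled by inflation.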

Table 2.2: Some Excel functions for the computation of the pdf fX(x) and df FX(x) of continuous random variable X

X | Excel function | Example input | Example output
E(λ) | EXPONDIST(x1,x2,ind), x1 = x, x2 = λ | EXPONDIST(4,0.5,FALSE) | 0.0677
 | | EXPONDIST(4,0.5,TRUE) | 0.8647
G(α, β) | GAMMADIST(x1,x2,x3,ind), x1 = x, x2 = α, x3 = β | GAMMADIST(4,1.2,2.5,FALSE) | 0.0966
 | | GAMMADIST(4,1.2,2.5,TRUE) | 0.7363
W(α, λ) | WEIBULL(x1,x2,x3,ind), x1 = x, x2 = α, x3 = λ | WEIBULL(10,2,10,FALSE) | 0.0736
 | | WEIBULL(10,2,10,TRUE) | 0.6321
N(0, 1) | NORMSDIST(x1), x1 = x; output is Pr(N(0, 1) ≤ x) | NORMSDIST(1.96) | 0.9750
N(μ, σ²) | NORMDIST(x1,x2,x3,ind), x1 = x, x2 = μ, x3 = σ | NORMDIST(3.92,1.96,1,FALSE) | 0.0584
 | | NORMDIST(3.92,1.96,1,TRUE) | 0.9750
L(μ, σ²) | LOGNORMDIST(x1,x2,x3), x1 = x, x2 = μ, x3 = σ; output is Pr(L(μ, σ²) ≤ x) | LOGNORMDIST(3.1424,-0.5,1) | 0.9500
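Several Table 2.2 entries have stdlib near-equivalents in Python (a sketch; the gamma df needs the incomplete gamma function, which the standard library lacks, so it is omitted):

```python
import math
from statistics import NormalDist

exp_pdf = 0.5 * math.exp(-0.5 * 4)             # EXPONDIST(4,0.5,FALSE)
exp_df = 1 - math.exp(-0.5 * 4)                # EXPONDIST(4,0.5,TRUE)
a, lam, x = 2.0, 10.0, 10.0
weib_pdf = (a / lam) * (x / lam)**(a - 1) * math.exp(-(x / lam)**a)  # WEIBULL(10,2,10,FALSE)
weib_df = 1 - math.exp(-(x / lam)**a)          # WEIBULL(10,2,10,TRUE)
std_norm_df = NormalDist().cdf(1.96)           # NORMSDIST(1.96)
logn_df = NormalDist(-0.5, 1).cdf(math.log(3.1424))  # LOGNORMDIST(3.1424,-0.5,1)
print(round(exp_pdf, 4), round(exp_df, 4), round(weib_pdf, 4),
      round(weib_df, 4), round(std_norm_df, 4), round(logn_df, 4))
```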

Table 2.3: Some Excel functions for the computation of the inverse of the df FX^(−1)(δ) of continuous random variable X

X | Excel function | Example input | Example output
G(α, β) | GAMMAINV(x1,x2,x3), x1 = δ, x2 = α, x3 = β | GAMMAINV(0.8,2,2) | 5.9886
N(0, 1) | NORMSINV(x1), x1 = δ | NORMSINV(0.9) | 1.2816
N(μ, σ²) | NORMINV(x1,x2,x3), x1 = δ, x2 = μ, x3 = σ | NORMINV(0.99,1.2,2.5) | 7.0159
