
MIT 2.853/2.854
Introduction to Manufacturing Systems

Markov Processes and Queues

Stanley B. Gershwin
Laboratory for Manufacturing and Productivity
Massachusetts Institute of Technology
gershwin@mit.edu


Stochastic processes

• t is time.

• X(·) is a stochastic process if X(t) is a random variable for every t.

• t is a scalar — it can be discrete or continuous.

• X(t) can be discrete or continuous, scalar or vector.

Stochastic processes
Markov processes

• A Markov process is a stochastic process in which the probability of finding X at some value at time t + δt depends only on the value of X at time t.

• Or, let x(s), s ≤ t, be the history of the values of X before time t and let A be a possible value of X. Then

P{X(t + δt) = A | X(s) = x(s), s ≤ t} = P{X(t + δt) = A | X(t) = x(t)}

Stochastic processes
Markov processes

• In words: if we know what X was at time t, we don't gain any more useful information about X(t + δt) by also knowing what X was at any time earlier than t.

• This is ONLY the definition of a class of mathematical models. It is NOT a statement about reality!! That is, not everything is a Markov process.

Markov processes
Example

Example:
• I have $100 at time t = 0.

• At every time t ≥ 1, I have $N(t).

  ◦ A (possibly biased) coin is flipped.
  ◦ If it lands with H showing, N(t + 1) = N(t) + 1.
  ◦ If it lands with T showing, N(t + 1) = N(t) − 1.

N(t) is a Markov process. Why?

Discrete state, discrete time


States and transitions

• States can be numbered 0, 1, 2, 3, ... (or with multiple indices if that is more convenient).

• Time can be numbered 0, 1, 2, 3, ... (or 0, ∆, 2∆, 3∆, ... if more convenient).

• The probability of a transition from j to i in one time unit is often written Pij, where

Pij = P{X(t + 1) = i | X(t) = j}

States and transitions


Transition graph

[Transition graph: seven states; state 4 has outgoing transitions with probabilities P14, P24, P64, an incoming transition P45 from state 5, and self-loop probability 1 − P14 − P24 − P64]

Pij is a probability. Note that Pii = 1 − Σm≠i Pmi. This is the self-loop probability.

States and transitions


Transition graph

Example: H(t) is the number of Hs after t coin flips. Assume the probability of H is p.

[Transition graph: states 0, 1, 2, 3, 4, ...; each state n goes to n + 1 with probability p and back to itself with probability 1 − p]

This is a system with an infinite state space.

States and transitions


Transition graph

Example: the coin-flip bets of the earlier example. Assume the probability of H is p.

[Transition graph: states ..., 96, 97, 98, 99, 100, 101, 102, 103, ...; each state n goes to n + 1 with probability p and to n − 1 with probability 1 − p]

Markov processes
Notation

• {X(t) = i} is the event that the random quantity X(t) has value i.

  ◦ Example: X(t) is any state in the transition graph above; i is a particular state.

• Define πi(t) = P{X(t) = i}.

• Normalization equation: Σi πi(t) = 1.

Markov processes
Transition equations

Transition equations: an application of the Law of Total Probability.

(Detail of the transition graph above: state 4 with incoming probability P45 and self-loop probability 1 − P14 − P24 − P64.)

π4(t + 1) = π5(t)P45 + π4(t)(1 − P14 − P24 − P64)

(Remember that P45 = P{X(t + 1) = 4 | X(t) = 5} and
P44 = P{X(t + 1) = 4 | X(t) = 4} = 1 − P14 − P24 − P64.)

Markov processes
Transition equations
(Transition graph as above.)

P{X(t + 1) = 2}
= P{X(t + 1) = 2 | X(t) = 1}P{X(t) = 1}
+ P{X(t + 1) = 2 | X(t) = 2}P{X(t) = 2}
+ P{X(t + 1) = 2 | X(t) = 4}P{X(t) = 4}
+ P{X(t + 1) = 2 | X(t) = 5}P{X(t) = 5}

Markov processes
Transition equations

• Define Pij = P{X(t + 1) = i | X(t) = j}.

• Transition equations: πi(t + 1) = Σj Pij πj(t), an application of the Law of Total Probability.

• Normalization equation: Σi πi(t) = 1.

Markov processes
Transition equations

(Transition graph as above.) Since

Pij = P{X(t + 1) = i | X(t) = j} and πi(t) = P{X(t) = i},

we can write

π2(t + 1) = P21π1(t) + P22π2(t) + P24π4(t) + P25π5(t).

Note that P22 = 1 − P52.

Markov processes
Transition equations — Matrix-Vector Form
For an n-state system:

• Define

π(t) = (π1(t), π2(t), ..., πn(t))^T, the vector of state probabilities,

P = [Pij], the n × n matrix of transition probabilities,

ν = (1, 1, ..., 1)^T.

• Transition equations: π(t + 1) = Pπ(t)

• Normalization equation: ν^T π(t) = 1

• Other facts:
  ◦ ν^T P = ν^T (each column of P sums to 1)
  ◦ π(t) = P^t π(0) — see the sketch below
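A minimal numerical sketch of these facts (the two-state chain and its probabilities are made up for illustration):

    import numpy as np

    # Made-up example: P[i, j] = P{X(t+1) = i | X(t) = j}; columns sum to 1.
    P = np.array([[0.9, 0.2],
                  [0.1, 0.8]])
    pi = np.array([1.0, 0.0])   # pi(0): start in state 0 with certainty

    for _ in range(50):         # apply pi(t+1) = P pi(t) repeatedly
        pi = P @ pi

    print(pi)                   # P^50 pi(0), near the steady-state distribution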


Markov processes
Steady state
[Plot: state probabilities vs. t for the system in the transition graph above]

Markov processes
Steady state

• Steady state: πi = lim(t→∞) πi(t), if it exists; equivalently π = lim(t→∞) P^t π(0).

• Steady-state transition equations: πi = Σj Pij πj.

• Alternatively, steady-state balance equations: πi Σm≠i Pmi = Σj≠i Pij πj.

• Normalization equation: Σi πi = 1.

Markov processes
Balance equations

(Detail of the transition graph above.) Balance equation:

(P14 + P24 + P64)π4 = P45π5

in steady state only.

Intuitive meaning: the average number of transitions into the circle per unit time equals the average number of transitions out of the circle per unit time.

Markov processes
Steady state
How to calculate the steady-state probability distribution π:
• Assume that the system has N states, where N is finite.
• Assume that there is a unique steady-state probability distribution.
• The transition equations form a set of N linear equations in N unknowns.
• The normalization equation is also a linear equation.
• Problem? We have N + 1 equations in N unknowns.
• No problem: there is one redundant equation, because each column of P sums to 1.
• Delete one transition equation and replace it with the normalization equation.
• Solve the resulting system of N linear equations in N unknowns, as in the sketch below.
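A sketch of this recipe in Python. The two-state transition matrix uses made-up values p = 0.01 and r = 0.1 for the unreliable machine that appears later:

    import numpy as np

    def steady_state(P):
        """Solve pi = P pi and sum(pi) = 1 by replacing one (redundant)
        transition equation with the normalization equation."""
        n = P.shape[0]
        A = P - np.eye(n)      # transition equations: (P - I) pi = 0
        A[-1, :] = 1.0         # replace the last equation by normalization
        b = np.zeros(n)
        b[-1] = 1.0
        return np.linalg.solve(A, b)

    # Two-state unreliable machine (state 0 = down, state 1 = up)
    p, r = 0.01, 0.1
    P = np.array([[1 - r, p],
                  [r, 1 - p]])
    print(steady_state(P))     # [p/(r+p), r/(r+p)] ≈ [0.0909, 0.9091]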

Markov processes
Steady state
• A system that has a unique steady-state solution is called ergodic. The probability distribution approaches that limit no matter what the initial probability distribution was.
• For systems that have more than one steady-state solution, the limiting distribution depends on the initial probability distribution.
• The balance equations can be used to find the limiting distribution instead of the transition equations. As before, one equation has to be replaced by the normalization equation.
• If a system has an infinite number of states and it has a steady-state probability distribution, there are two possibilities for finding it:
  ◦ It might be possible to solve the equations analytically. We will see an example of that.
  ◦ Truncate the system. That is, solve a system with a large but finite subset of the states. If you understand the system, you can guess which are the highest-probability states. Keep those. This provides an approximate solution.

Markov processes
Geometric distribution

Consider a two-state system. The system can go from 1 to 0, but not from 0 to 1.

[Transition graph: state 1 goes to state 0 with probability p and stays at 1 with probability 1 − p; state 0 is absorbing]

Let p be the conditional probability that the system is in state 0 at time t + 1, given that it is in state 1 at time t. Then

p = P[α(t + 1) = 0 | α(t) = 1].

Markov processes
Geometric distribution — Transition equations
Let π(α, t) be the probability of being in state α at time t. Since

π(0, t + 1) = P[α(t + 1) = 0 | α(t) = 1] P[α(t) = 1]
            + P[α(t + 1) = 0 | α(t) = 0] P[α(t) = 0],

we have

π(0, t + 1) = pπ(1, t) + π(0, t),

π(1, t + 1) = (1 − p)π(1, t),

and the normalization equation

π(1, t) + π(0, t) = 1.

Markov processes
Geometric distribution — transient probability
distribution

Assume that π(1, 0) = 1. Then the solution is

π(0, t) = 1 − (1 − p)^t,

π(1, t) = (1 − p)^t.

Markov processes
Geometric distribution — transient probability
distribution
[Plot: the transient probabilities π(0, t) and π(1, t) vs. t]

Markov processes
Geometric distribution
We have shown that the probability that the state goes from 1 to 0 at time t is

P(t) = (1 − p)^(t−1) p.

The mean time for the state to go from 1 to 0 is then

t̄ = Σ(t=1..∞) t P(t) = Σ(t=1..∞) t (1 − p)^(t−1) p.

It is not hard to show that

t̄ = 1/p.
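One way to see this is to differentiate the geometric series Σ(t≥0) q^t = 1/(1 − q) term by term, which gives Σ(t≥1) t q^(t−1) = 1/(1 − q)². Setting q = 1 − p yields t̄ = p · 1/p² = 1/p.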


Markov processes
Unreliable machine

1 = up; 0 = down.

[Transition graph: state 1 (up) goes to 0 with probability p and stays at 1 with probability 1 − p; state 0 (down) goes to 1 with probability r and stays at 0 with probability 1 − r]

Mean up time = Mean time to fail = MTTF = 1/p

Mean down time = Mean time to repair = MTTR = 1/r

Markov processes
Unreliable machine — transient probability
distribution

The probability distribution satisfies

π(0, t + 1) = π(0, t)(1 − r) + π(1, t)p,

π(1, t + 1) = π(0, t)r + π(1, t)(1 − p).

Markov processes
Unreliable machine — transient probability
distribution
It is not hard to show that

π(0, t) = π(0, 0)(1 − p − r)^t + [1 − (1 − p − r)^t] p/(r + p),

π(1, t) = π(1, 0)(1 − p − r)^t + [1 − (1 − p − r)^t] r/(r + p).

Markov processes
Unreliable machine — transient probability
distribution

[Plot: the transient probabilities π(0, t) and π(1, t) vs. t]

Markov processes
Unreliable machine — steady-state probability
distribution
As t → ∞,

π(0, t) → p/(r + p),

π(1, t) → r/(r + p),

which is the solution of

π(0) = π(0)(1 − r) + π(1)p,

π(1) = π(0)r + π(1)(1 − p).

Markov processes
Unreliable machine — efficiency
If a machine makes one part per time unit when it is operational, its average production rate is

π(1) = r/(r + p).

This quantity is the efficiency (e) of the machine. If the machine makes one part per τ time units when it is operational, its average production rate is

P = (1/τ) · r/(r + p).

Note that we can also write

e = MTTF/(MTTF + MTTR).
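As an illustrative check with made-up numbers: if p = 0.01 and r = 0.1, then MTTF = 100 and MTTR = 10, so e = 0.1/0.11 = 100/110 ≈ 0.91.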


Discrete state, continuous time


States and transitions

• States can be numbered 0, 1, 2, 3, ... (or with multiple indices if that is more convenient).

• Time is a real number, defined on (−∞, ∞) or a smaller interval.

• The probability of a transition from j to i during [t, t + δt] is approximately λij δt, where δt is small, and

λij δt ≈ P{X(t + δt) = i | X(t) = j} for i ≠ j

Discrete state, continuous time


States and transitions

More precisely,

λij δt = P{X(t + δt) = i | X(t) = j} + o(δt) for i ≠ j,

where o(δt) is a function that satisfies

lim(δt→0) o(δt)/δt = 0.

This implies that for small δt, o(δt) ≪ δt.

Discrete state, continuous time


States and transitions
Transition graph

[Transition graph: the same seven states as the discrete-time graph, with rates λ14, λ24, λ64 out of state 4 and λ45 into state 4; there are no self-loops]

λij is a probability rate. λij δt is a probability.

Compare with the discrete-time graph above.

Discrete state, continuous time


States and transitions

One of the transition equations: define πi(t) = P{X(t) = i}. Then

π5(t + δt) ≈ (1 − λ25δt − λ45δt − λ65δt)π5(t)
+ λ52δtπ2(t) + λ53δtπ3(t) + λ56δtπ6(t) + λ57δtπ7(t)

Discrete state, continuous time


States and transitions

Or,

π5(t + δt) ≈ π5(t) − (λ25 + λ45 + λ65)π5(t)δt

+ (λ52π2(t) + λ53π3(t) + λ56π6(t) + λ57π7(t))δt

Discrete state, continuous time


States and transitions

Or,

lim(δt→0) [π5(t + δt) − π5(t)]/δt = dπ5(t)/dt

= −(λ25 + λ45 + λ65)π5(t) + λ52π2(t) + λ53π3(t) + λ56π6(t) + λ57π7(t)

Discrete state, continuous time


States and transitions

Define for convenience

λ55 = −(λ25 + λ45 + λ65).

Then

dπ5(t)/dt = λ55π5(t) + λ52π2(t) + λ53π3(t) + λ56π6(t) + λ57π7(t)

Discrete state, continuous time


States and transitions

• Define πi(t) = P{X(t) = i}.

• It is convenient to define λii = −Σj≠i λji. ***

• Transition equations: dπi(t)/dt = Σj λij πj(t).

• Normalization equation: Σi πi(t) = 1.

*** Often confusing!!!

Discrete state, continuous time


Transition equations — Matrix-Vector Form
• Define π(t) and ν as before. Define Λ = [λij], the n × n matrix of transition rates.

• Transition equations: dπ(t)/dt = Λπ(t).

• Normalization equation: ν^T π = 1.

• Other facts:
  ◦ ν^T Λ = 0 (each column of Λ sums to 0)
  ◦ π(t) = e^(Λt) π(0) — see the sketch below
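A numerical sketch of the transient and steady-state solutions, using the continuous-time unreliable machine that appears below with made-up rates (scipy's matrix exponential computes e^(Λt)):

    import numpy as np
    from scipy.linalg import expm

    # Two-state machine: state 0 = down, state 1 = up (made-up rates).
    p, r = 0.01, 0.1
    Lam = np.array([[-r, p],     # columns sum to 0
                    [r, -p]])
    pi0 = np.array([0.0, 1.0])   # start in the up state

    # Transient solution pi(t) = e^(Lambda t) pi(0), here at t = 30
    print(expm(Lam * 30.0) @ pi0)

    # Steady state: 0 = Lambda pi, one equation replaced by nu^T pi = 1
    A = Lam.copy()
    A[-1, :] = 1.0
    print(np.linalg.solve(A, np.array([0.0, 1.0])))   # [p/(r+p), r/(r+p)]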


Discrete state, continuous time


Steady State

• Steady state: πi = lim(t→∞) πi(t), if it exists.

• Steady-state transition equations: 0 = Σj λij πj.

• Alternatively, steady-state balance equations: πi Σm≠i λmi = Σj≠i λij πj.

• Normalization equation: Σi πi = 1.

Discrete state, continuous time


Steady State — Matrix-Vector Form

• Steady state: π = lim(t→∞) π(t), if it exists.

• Steady-state transition equations: 0 = Λπ.

• Normalization equation: ν^T π = 1.

Discrete state, continuous time


Sources of confusion in continuous time models

• Never draw self-loops in continuous-time Markov process graphs.

• Never write 1 − λ14 − λ24 − λ64. Write
  ◦ 1 − (λ14 + λ24 + λ64)δt, or
  ◦ −(λ14 + λ24 + λ64)

• λii = −Σj≠i λji is NOT a rate and NOT a probability. It is ONLY a convenient notation.

Discrete state, continuous time


Exponential distribution

Exponential random variable T: the time to move from state 1 to state 0.

[Transition graph: state 1 goes to state 0 at rate µ]

Discrete state, continuous time


Exponential distribution
π(0, t + δt) = P[α(t + δt) = 0 | α(t) = 1] P[α(t) = 1]
             + P[α(t + δt) = 0 | α(t) = 0] P[α(t) = 0],

or

π(0, t + δt) = µδtπ(1, t) + π(0, t) + o(δt),

or

dπ(0, t)/dt = µπ(1, t).

Discrete state, continuous time


Exponential distribution
π(1, t + δt) = P[α(t + δt) = 1 | α(t) = 1] P[α(t) = 1]
             + P[α(t + δt) = 1 | α(t) = 0] P[α(t) = 0],

or

π(1, t + δt) = (1 − µδt)π(1, t) + (0)π(0, t) + o(δt),

or

dπ(1, t)/dt = −µπ(1, t).

Discrete state, continuous time


Exponential distribution

Transition equations:

dπ(0, t)/dt = µπ(1, t)

dπ(1, t)/dt = −µπ(1, t)

If π(0, 0) = 0 and π(1, 0) = 1, then

π(1, t) = e^(−µt)

and

π(0, t) = 1 − e^(−µt).

Discrete state, continuous time


Exponential distribution
The probability that the transition takes place at some T ∈ [t, t + δt] is

f(t)δt = P[α(t + δt) = 0 and α(t) = 1]

≈ P[α(t + δt) = 0 | α(t) = 1] P[α(t) = 1]

= (µδt)(e^(−µt))

The exponential density function is therefore f(t) = µe^(−µt) for t ≥ 0 and 0 for t < 0.

The time of the transition from 1 to 0 is said to be exponentially distributed with rate µ.

The expected transition time is 1/µ = ∫(0..∞) t µe^(−µt) dt.

Discrete state, continuous time


Exponential distribution

• f(t) = µe^(−µt) for t ≥ 0; f(t) = 0 otherwise.
  F(t) = 1 − e^(−µt) for t ≥ 0; F(t) = 0 otherwise.

• ET = 1/µ, VT = 1/µ². Therefore σ = ET, so cv = 1.

[Plots: the density f(t), starting at µ and decaying, and the distribution F(t), rising toward 1; the t axes are marked in units of 1/µ]

Markov processes
Exponential

Density function

[Plot: exponential density function and a small number of samples]

Discrete state, continuous time


Exponential distribution: some properties

• Memorylessness: P(T > t + x | T > x) = P(T > t)

• P(t ≤ T ≤ t + δt | T ≥ t) ≈ µδt for small δt.
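The first property is a one-line consequence of F(t): P(T > t + x | T > x) = e^(−µ(t+x))/e^(−µx) = e^(−µt) = P(T > t).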


Discrete state, continuous time


Exponential distribution: some properties
• If T1, ..., Tn are independent exponentially distributed random variables with parameters µ1, ..., µn, and

• T = min(T1, ..., Tn), then

• T is an exponentially distributed random variable with parameter µ = µ1 + ... + µn.

• Consequently, the time that the system stays in any state is exponentially distributed.
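A quick check of the minimum property, using independence and P(Ti > t) = e^(−µi t): P(T > t) = P(T1 > t) · · · P(Tn > t) = e^(−µ1 t) · · · e^(−µn t) = e^(−(µ1 + ... + µn)t).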


Discrete state, continuous time


Unreliable machine

Continuous-time unreliable machine.

[Transition graph: up → down at rate p; down → up at rate r]

Discrete state, continuous time


Unreliable machine
From the Law of Total Probability:

P({the machine is up at time t + δt}) =

P({the machine is up at time t + δt} | {the machine was up at time t}) ×
P({the machine was up at time t})

+ P({the machine is up at time t + δt} | {the machine was down at time t}) ×
P({the machine was down at time t})

+ o(δt),

and similarly for P({the machine is down at time t + δt}).

Discrete state, continuous time


Unreliable machine

Probability distribution notation and dynamics:

π(1, t) = the probability that the machine is up at time t.

π(0, t) = the probability that the machine is down at time t.

P(the machine is up at time t + δt | the machine was up at time t) = 1 − pδt

P(the machine is up at time t + δt | the machine was down at time t) = rδt

Discrete state, continuous time


Unreliable machine

Therefore

π(1, t + δt) = (1 − pδt)π(1, t) + rδtπ(0, t) + o(δt)

Similarly,

π(0, t + δt) = pδtπ(1, t) + (1 − rδt)π(0, t) + o(δt)

Discrete state, continuous time


Unreliable machine

or,

π(1, t + δt) − π(1, t) = −pδtπ(1, t) + rδtπ(0, t) + o(δt)

or,

[π(1, t + δt) − π(1, t)]/δt = −pπ(1, t) + rπ(0, t) + o(δt)/δt

Discrete state, continuous time

or,

dπ(1, t)/dt = π(0, t)r − π(1, t)p

dπ(0, t)/dt = −π(0, t)r + π(1, t)p

Markov processes
Unreliable machine
Solution:

π(0, t) = p/(r + p) + [π(0, 0) − p/(r + p)] e^(−(r+p)t)

π(1, t) = 1 − π(0, t).

As t → ∞,

π(0, t) → p/(r + p),

π(1, t) → r/(r + p).

Markov processes
Unreliable machine

Note that MTTF = 1/p and MTTR = 1/r. Units are natural time units, not operation times.

If the machine makes µ parts per time unit on the average when it is operational, the steady-state average production rate is

µπ(1) = µ r/(r + p) = µ MTTF/(MTTF + MTTR) = µe

Discrete state, continuous time


Poisson Process

[Timeline: event epochs at T1, T1 + T2, T1 + T2 + T3, T1 + T2 + T3 + T4, ...]

• Let Ti, i = 1, ..., be a set of independent exponentially distributed random variables with parameter λ. Each random variable may represent the time between occurrences of a repeating event.

  ◦ Examples: customer arrivals, clicks of a Geiger counter

• Then Σ(i=1..n) Ti is the time required for n such events.

Discrete state, continuous time


Poisson Process

• Informally: N(t) is the number of events that occur between 0 and t.

• Formally: define N(t) = 0 if T1 > t, and otherwise N(t) = n such that Σ(i=1..n) Ti ≤ t and Σ(i=1..n+1) Ti > t.

• Then N(t) is a Poisson process with parameter λ.
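A simulation sketch of this definition (the rate and horizon are made-up values): sum exponential inter-event times and count the epochs that land in [0, t].

    import random

    def poisson_path(lam, horizon, seed=0):
        """Return the event epochs T1, T1+T2, ... that fall in [0, horizon]."""
        rng = random.Random(seed)
        epochs, t = [], 0.0
        while True:
            t += rng.expovariate(lam)   # each T_i is exponential with rate lam
            if t > horizon:
                return epochs
            epochs.append(t)

    events = poisson_path(lam=2.0, horizon=10.0)
    print(len(events))                  # N(10); E[N(10)] = lam * 10 = 20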


Queueing theory
M/M/1 Queue
Number of events N(t) during [0, t]

[Plot: N(t) as a staircase, increasing by 1 at each event epoch]

Queueing theory
M/M/1 Queue
[Diagram: a single-server queue with arrival rate λ and service rate µ]

• The simplest model is the M/M/1 queue:

  ◦ Exponentially distributed inter-arrival times — the mean is 1/λ; λ is the arrival rate (customers/time). (Poisson arrival process.)
  ◦ Exponentially distributed service times — the mean is 1/µ; µ is the service rate (customers/time).
  ◦ The arrival and service processes are independent.
  ◦ 1 server.
  ◦ Infinite waiting area.

• Define the utilization ρ = λ/µ.

Queueing theory
M/M/1 Queue

Number of customers in the system as a function of time for an M/M/1 queue.

[Plot: a sample path of n(t)]

Queueing theory
D/D/1 Queue

Number of customers in the system as a function of time for a D/D/1 queue.

[Plot: a sample path of n(t)]

Queueing theory
Sample path
• Suppose customers arrive in a Poisson process with average inter-arrival time 1/λ = 1 minute, and that the service time is exponentially distributed with average service time 1/µ = 54 seconds.

  ◦ The average number of customers in the system is 9.

[Plot: queue behavior over a short time interval — initial transient]

Queueing theory
Sample path

[Plot: queue behavior over a long time interval]

Queueing theory
M/M/1 Queue

State space

[Transition graph: states 0, 1, 2, ..., n − 1, n, n + 1, ...; rate λ from each state n to n + 1, and rate µ from each state n + 1 to n]

Queueing theory
M/M/1 Queue
Let π(n, t) be the probability that there are n parts in the system at time t. Then, for n > 0,

π(n, t + δt) = π(n − 1, t)λδt + π(n + 1, t)µδt + π(n, t)(1 − (λδt + µδt)) + o(δt),

and

π(0, t + δt) = π(1, t)µδt + π(0, t)(1 − λδt) + o(δt).

Queueing theory
M/M/1 Queue
Or,

dπ(n, t)/dt = π(n − 1, t)λ + π(n + 1, t)µ − π(n, t)(λ + µ), n > 0,

dπ(0, t)/dt = π(1, t)µ − π(0, t)λ.

If a steady-state distribution exists, it satisfies

0 = π(n − 1)λ + π(n + 1)µ − π(n)(λ + µ), n > 0,

0 = π(1)µ − π(0)λ.

Why "if"?

Queueing theory
M/M/1 Queue – Steady State

Let ρ = λ/µ. These equations are satisfied by

π(n) = (1 − ρ)ρ^n, n ≥ 0,

if ρ < 1.

The average number of parts in the system is

n̄ = Σ(n=0..∞) n π(n) = ρ/(1 − ρ) = λ/(µ − λ).
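A discrete-event simulation sketch that checks this formula. The parameters match the sample-path example above (λ = 1 per minute, 1/µ = 0.9 minutes), for which n̄ = ρ/(1 − ρ) = 9; the function name and horizon are made up:

    import random

    def simulate_mm1(lam, mu, horizon=1_000_000.0, seed=0):
        """Estimate the time-average number in an M/M/1 queue."""
        rng = random.Random(seed)
        t, n, area = 0.0, 0, 0.0        # area accumulates the integral of n(t) dt
        next_arrival = rng.expovariate(lam)
        next_departure = float("inf")   # no departure scheduled while empty
        while t < horizon:
            t_next = min(next_arrival, next_departure)
            area += n * (t_next - t)
            t = t_next
            if next_arrival <= next_departure:   # an arrival
                n += 1
                next_arrival = t + rng.expovariate(lam)
                if n == 1:
                    next_departure = t + rng.expovariate(mu)
            else:                                # a departure
                n -= 1
                next_departure = t + rng.expovariate(mu) if n > 0 else float("inf")
        return area / t

    print(simulate_mm1(lam=1.0, mu=1 / 0.9))     # close to 9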


Queueing theory
Little’s Law

• True for most systems of practical interest (not just M/M/1).

• Steady state only.

• L = the average number of customers in a system.

• W = the average delay experienced by a customer in the system.

L = λW

In the M/M/1 queue, L = n̄ and

W = 1/(µ − λ).
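As an illustrative check with the sample-path numbers above: λ = 1 customer/minute and 1/µ = 0.9 minutes give W = 1/(1/0.9 − 1) = 9 minutes, so L = λW = 9, matching the average of 9 customers stated there.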

Queueing theory
M/M/1 Queue capacity

[Plot: W vs. λ for µ = 1, with W growing without bound as λ approaches µ]

• µ is the capacity of the system.

• If λ < µ, the system is stable and the waiting time remains bounded.

• If λ > µ, the waiting time grows over time.

Queueing theory
M/M/1 Queue capacity

[Plot: W vs. λ for µ = 1 and µ = 2]

• To increase capacity, increase µ.

• To decrease delay for a given λ, increase µ.

Queueing theory
Other Single-Stage Models
Things get more complicated when:

• There are multiple servers.

• There is finite space for queueing.

• The arrival process is not Poisson.

• The service process is not exponential.

Closed formulas and approximations exist for some, but not all, cases.

Queueing theory
M/M/s Queue

[Diagram: an s-server queue with arrival rate λ and service rate µ per server; here s = 3]

Queueing theory
M/M/s Queue

[Transition graph: states 0, 1, 2, ..., s − 1, s, s + 1, ...; arrival rate λ everywhere; departure rate kµ from state k for k ≤ s, and sµ for k > s]

• The departure rate when there are k > s customers in the system is sµ, since all s servers are busy.

• The departure rate when there are k ≤ s customers in the system is kµ, since only k of the servers are busy.

Queueing theory
M/M/s Queue

P(k) = π(0) s^k ρ^k / k!, for k ≤ s

P(k) = π(0) s^s ρ^k / s!, for k > s

where

ρ = λ/(sµ) < 1, and π(0) is chosen so that Σk P(k) = 1.
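A sketch that evaluates this model numerically. The closed forms for π(0) and the mean queue length below are the standard M/M/s (Erlang C) expressions, and the test point corresponds to the (µ, s) = (1, 4) curve of the plot that follows:

    from math import factorial

    def mms_W(lam, mu, s):
        """Average time in system W for an M/M/s queue (assumes rho < 1)."""
        a = lam / mu                  # offered load; note a = s * rho
        rho = a / s
        if rho >= 1:
            return float("inf")       # unstable: W grows without bound
        # pi(0): sum P(k) for k < s plus the geometric tail for k >= s
        inv_pi0 = sum(a**k / factorial(k) for k in range(s)) \
            + a**s / (factorial(s) * (1 - rho))
        pi0 = 1.0 / inv_pi0
        Lq = pi0 * a**s * rho / (factorial(s) * (1 - rho)**2)  # mean number waiting
        return Lq / lam + 1.0 / mu    # waiting time by Little's law, plus service

    print(mms_W(lam=3.0, mu=1.0, s=4))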


Queueing theory
M/M/s Queue
W vs. λ for sµ = 4

[Plot: W vs. λ for (µ, s) = (.5, 8), (1, 4), (2, 2), and (4, 1)]

Queueing theory
M/M/1 Queue capacity
To increase capacity or reduce delay,

• increase µ, or

• add servers in parallel ... but that will not reduce delay as much.

[Plot: W vs. λ for µ = 1; µ = 1, s = 2; and µ = 2]

Queueing theory
Networks of Queues

• Set of queues where customers can go to another queue after completing service at a queue.

• Open network: customers enter and leave the system. λ is known and we must find L and W.

• Closed network: the population of the system is constant. L is known and we must find λ and W.

Queueing theory
Networks of Queues
Examples of open networks

• internet traffic

• emergency room (arrive, triage, waiting room, treatment, tests, exit or hospital admission)

• food court

• airport (arrive, ticket counter, security, passport control, gate, board plane)

• factory with no centralized material flow control after material enters

[Diagram: a food court as an open network — people enter, visit food stations (Sbarro's Pizza, McDonald's, TCBY Frozen Yogurt), carry trays to the tables, and exit]

Queueing theory
Networks of Queues

Examples of closed networks

• factory with limited fixtures or pallets

[Diagram: a loop in which raw parts are loaded onto pallets from an empty pallet buffer, pass through the system, and release their pallets at the finished part output]

• factory with material controlled by keeping the number of items constant (CONWIP)

Queueing theory
Jackson Networks

Queueing networks are often modeled as Jackson networks.

• Relatively easy to compute performance measures (capacity, average time in system, average queue lengths).

• Easily provides intuition.

• Easy to optimize and to use for design.

• Valid (or a good approximation) for a large class of systems ...

Queueing theory
Jackson Networks

• ... but not all. Storage areas must be assumed to be infinite (which means blocking is assumed not to happen).

  ◦ This assumption leads to bad results for systems with bottlenecks at locations other than the first station.

Queueing theory
Open Jackson Networks

[Diagram: a node — arrivals (A) enter a buffer (B) that feeds a machine (M), producing departures (D)]

Queueing theory
Open Jackson Networks
• Items arrive from outside the system to node i according to a Poisson process with rate αi.

• αi > 0 for at least one i.

• When an item's service at node i is finished, it goes to node j next with probability pij.

• If pi0 = 1 − Σj pij > 0, items depart from the network from node i.

• pi0 > 0 for at least one i.

• We will focus on the special case in which each node has a single server with exponential processing time. The service rate of node i is µi.

Queueing theory
Open Jackson Networks

Goals of analysis:

• to determine if the system is feasible

• to determine how much inventory is in this system (on the average) and how it is distributed

• to determine the average waiting time at each node and the average time a part spends in the system.

Queueing theory
Open Jackson Networks
• Define λi as the total arrival rate of items to node i. This includes items entering the network at i and items coming from all other nodes.

• pji λj is the portion of the flow arriving at node j that goes to node i.

• Then λi = αi + Σj pji λj.

• In matrix form, let λ be the vector of λi, α be the vector of αi, and P be the matrix of pij. Then

λ = α + P^T λ

• Solving for λ,

λ = (I − P^T)^(−1) α

Queueing theory
Open Jackson Networks
Probability distribution:

• If λi < µi for each i, define ρi = λi/µi and

πi(ni) = (1 − ρi)ρi^(ni)

• This is the solution of an M/M/1 queue with the arrival rate λi calculated above and the service rate µi specified by the given problem data.

• If λi ≥ µi for some i, the demand is not feasible. The system cannot handle the demand placed on it.

Queueing theory
Open Jackson Networks
Solution:

• Define π(n1, n2, ..., nk) to be the steady-state probability that there are ni items at node i, i = 1, ..., k.

• Then the probability distribution for the entire system is

π(n1, n2, ..., nk) = Πi πi(ni)

• At each node i,

n̄i = Eni = ρi/(1 − ρi)
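A small end-to-end sketch of this analysis with made-up routing data (three nodes; all numbers are illustrative):

    import numpy as np

    alpha = np.array([1.0, 0.0, 0.0])   # external arrivals enter only at node 0
    p = np.array([[0.0, 0.8, 0.2],      # p[i, j] = routing probability i -> j
                  [0.0, 0.0, 0.5],      # node 1 row sums to 0.5: half depart
                  [0.3, 0.0, 0.0]])     # node 2: 70% depart the network
    mu = np.array([2.0, 2.0, 1.5])      # service rates

    lam = np.linalg.solve(np.eye(3) - p.T, alpha)   # lam = alpha + P^T lam
    rho = lam / mu
    assert (rho < 1).all(), "demand is not feasible at some node"

    nbar = rho / (1 - rho)              # mean number at each node (M/M/1)
    W = nbar.sum() / alpha.sum()        # Little's law: average time in system
    print(lam, rho, nbar, W)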


Queueing theory
Open Jackson Networks
• The solution is product form. It says that the probability of the system being in a given state is the product of the probabilities of the queue at each node being in the corresponding state.

• This exact analytic formula is the reason that the Jackson network model is of interest. It is relatively easy to use to calculate the performance of a complex system.

• The product-form solution holds for some more general cases.

• However, it is restricted to models of systems with unlimited storage space. Consequently, it cannot model blocking.

  ◦ It is a good approximation for systems where blocking is rare, for example when the arrival rate of material is much less than the capacity of the system.

  ◦ It will not work so well where blocking occurs often.

Queueing theory
Closed Jackson Networks
• Consider an extension in which

  ◦ αi = 0 for all nodes i.
  ◦ pi0 = 1 − Σj pij = 0 for all nodes i.

• Then

  ◦ Since nothing is entering and nothing is departing from the network, the number of items in the network is constant. That is, Σi ni(t) = N for all t.

  ◦ λi = Σj pji λj does not have a unique solution.

  ◦ This means that a different solution approach is needed to analyze the system. It is used in the example that follows.

Queueing theory
Closed Jackson Network model of an FMS

Solberg's "CANQ" model.

[Diagram: a transport station M routes parts to Stations 1, ..., M − 1 with probabilities q1, ..., q(M−1); after each operation a part returns to the transport station; one station is the load/unload station]

Let {pij} be the set of routing probabilities, as defined above for open networks:

piM = 1 if i ≠ M

pMj = qj if j ≠ M

pij = 0 otherwise

The service rate at Station i is µi.

Queueing theory
Closed Jackson Network model of an FMS
• Input data: M, N, qj, µj, sj (j = 1, ..., M)

  ◦ M = number of stations, including the transportation system
  ◦ N = number of pallets
  ◦ qj = fraction of parts going from the transportation system to Station j
  ◦ µj = processing rate of machines at Station j
  ◦ sj = number of machines at Station j

• Output data: P, W, ρj (j = 1, ..., M) — see the sketch below

  ◦ P = production rate
  ◦ W = average time a part spends in the system
  ◦ ρj = utilization per machine of Station j
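A sketch of one standard way to evaluate such a closed network — Buzen's convolution algorithm — specialized to single-server stations; the multi-server stations of the CANQ model would need a load-dependent extension, and all numbers below are made up:

    import numpy as np

    def closed_network(v, mu, N):
        """Throughputs and utilizations of a closed network of single-server
        exponential stations, by Buzen's convolution algorithm.
        v[i] is any solution of the visit-ratio equations v_j = sum_i v_i p_ij."""
        x = np.asarray(v, float) / np.asarray(mu, float)   # relative demands
        G = np.zeros(N + 1)
        G[0] = 1.0
        for xi in x:                      # fold in one station at a time
            for n in range(1, N + 1):
                G[n] += xi * G[n - 1]
        ratio = G[N - 1] / G[N]
        X = np.asarray(v, float) * ratio  # station throughputs
        return X, x * ratio               # utilizations rho_i = X_i / mu_i

    # Illustrative data: 3 machines fed by a transport station (listed last)
    q = [0.3, 0.3, 0.4]                   # routing fractions from the transport
    v = q + [1.0]                         # one transport visit per operation
    mu = [0.5, 0.4, 0.6, 2.0]
    X, rho = closed_network(v, mu, N=10)
    print("throughputs:", X, "utilizations:", rho)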


Queueing theory
Closed Jackson Network model of an FMS

For the following graphs, the base input data are M, N, qj, µj, sj (j = 1, ..., M) with

• M = 5

• N = 10

• qj = .1, .2, .2, .25, .25

• 1/µj = 3., 4., 3.44, 1.41, 5.

• sj = 2, 1, 2, 1, 15

We see the effect of one of the variables on the performance measures in the following graphs.

Queueing theory
Closed Jackson Network model of an FMS
[Plot: production rate P vs. number of pallets]

Queueing theory
Closed Jackson Network model of an FMS
[Plot: average time in system vs. number of pallets]

Queueing theory
Closed Jackson Network model of an FMS
[Plot: Station 2 utilization vs. number of pallets]

Queueing theory
Closed Jackson Network model of an FMS
[Plot: Station 2 utilization vs. Station 2 operation time]

Queueing theory
Closed Jackson Network model of an FMS
[Plot: production rate P vs. Station 2 operation time]

Queueing theory
Closed Jackson Network model of an FMS
[Plot: average time in system vs. Station 2 operation time]