Introduction to Manufacturing Systems
Stanley B. Gershwin
Laboratory for Manufacturing and Productivity
Massachusetts Institute of Technology
gershwin@mit.edu
Stochastic processes
• t is time.
Markov processes
Example:
• I have $100 at time t=0.
Assume probability of H is p.
[Figure: transition graph on states 0, 1, 2, . . .; each state moves to the next with probability p and stays put with probability 1 − p.]
Notation
Transition equations
(Remember that

P45 = P{X(t + 1) = 4 | X(t) = 5},
P44 = P{X(t + 1) = 4 | X(t) = 4} = 1 − P14 − P24 − P64.

Detail of graph on slide 7.)
[Figure: transition graph. State 4 receives probability P45 from state 5, sends P14, P24, P64 to states 1, 2, and 6, and has self-loop probability 1 − P14 − P24 − P64.]
P{X (t + 1) = 2}
= P{X (t + 1) = 2|X (t) = 1}P{X (t) = 1}
+P{X (t + 1) = 2|X (t) = 2}P{X (t) = 2}
+P{X (t + 1) = 2|X (t) = 4}P{X (t) = 4}
+P{X (t + 1) = 2|X (t) = 5}P{X (t) = 5}
[Figure: transition graph, as above.]

Therefore, since

Pij = P{X(t + 1) = i | X(t) = j} and
πi(t) = P{X(t) = i},

we can write
π2 (t + 1) = P21 π1 (t) + P22 π2 (t) + P24 π4 (t) + P25 π5 (t).
Transition equations — Matrix-Vector Form
For an n-state system,
• Define

π(t) = (π1(t), π2(t), . . . , πn(t))^T, P = the n × n matrix with entries Pij, ν = (1, 1, . . . , 1)^T,

so that the transition equations become π(t + 1) = Pπ(t).
• Other facts:
  ◦ ν^T P = ν^T (Each column of P sums to 1.)
  ◦ π(t) = P^t π(0)
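As a quick sketch of these facts, one can iterate π(t + 1) = Pπ(t) directly; the 3-state chain below is illustrative (not from the lecture), with each column of P summing to 1:

```python
# Iterate the transition equations pi(t+1) = P pi(t) for a small 3-state chain.
# Convention as in the lecture: P[i][j] = P{X(t+1) = i | X(t) = j}, so each
# COLUMN of P sums to 1.

P = [
    [0.9, 0.2, 0.0],
    [0.1, 0.7, 0.3],
    [0.0, 0.1, 0.7],
]

def step(P, pi):
    """One transition: pi(t+1)_i = sum_j P_ij pi(t)_j."""
    n = len(pi)
    return [sum(P[i][j] * pi[j] for j in range(n)) for i in range(n)]

# Check the "other fact" nu^T P = nu^T: every column sums to 1.
for j in range(3):
    assert abs(sum(P[i][j] for i in range(3)) - 1.0) < 1e-12

pi = [1.0, 0.0, 0.0]        # start in state 0
for _ in range(200):        # pi(200) = P^200 pi(0)
    pi = step(P, pi)
print(pi)                   # approaches the steady-state distribution
```

For this particular P the iteration settles at (0.6, 0.3, 0.1), regardless of the initial distribution π(0).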
Steady state
• Normalization equation: Σi πi = 1.
Balance equations
[Figure: transition graph, as above.]

Balance equation:

(P14 + P24 + P64)π4 = P45 π5

in steady state only.
How to calculate the steady-state probability distribution π
• Assume that the system has N states, where N is finite.
• Assume that there is a unique steady-state probability distribution.
• The transition equations form a set of N linear equations in N
unknowns.
• The normalization equation is also a linear equation.
• Problem? We have N + 1 equations in N unknowns.
• No problem: there is one redundant equation because each column
sums to 1.
• Delete one transition equation and replace it with the normalization
equation.
• Solve the system of N linear equations in N unknowns.
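A minimal sketch of this recipe, using an illustrative 3-state chain (columns of P sum to 1) and plain Gaussian elimination; nothing here is specific to the lecture's examples:

```python
# Steady state: keep N-1 transition equations (P - I) pi = 0 and replace the
# last one with the normalization equation sum_i pi_i = 1.

P = [
    [0.9, 0.2, 0.0],
    [0.1, 0.7, 0.3],
    [0.0, 0.1, 0.7],
]
n = len(P)

A = [[P[i][j] - (1.0 if i == j else 0.0) for j in range(n)] for i in range(n - 1)]
A.append([1.0] * n)                  # normalization row replaces one equation
b = [0.0] * (n - 1) + [1.0]

# Gaussian elimination with partial pivoting.
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]

pi = [0.0] * n                       # back substitution
for r in range(n - 1, -1, -1):
    pi[r] = (b[r] - sum(A[r][c] * pi[c] for c in range(r + 1, n))) / A[r][r]
print(pi)
```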
Copyright © 2018 Stanley B. Gershwin. All rights reserved.
Discrete state, discrete time
Steady state
• A system that has a unique steady-state solution is called ergodic . The
probability distribution approaches that limit no matter what the initial
probability distribution is.
• For systems that have more than one steady-state solution, the limiting
distribution depends on the initial probability.
• The balance equations can be used to find the limiting distribution
instead of the transition equations. As before, one equation has to be
replaced by the normalization equation.
• If a system has an infinite number of states and it has a steady-state
probability distribution, there are two possibilities for finding it:
  ◦ It might be possible to solve the equations analytically. We will see an
example of that.
  ◦ Truncate the system. That is, solve a system with a large but finite subset of
the states. If you understand the system, you can guess which are the highest
probability states. Keep those. This provides an approximate solution.
Geometric distribution
[Figure: two-state chain. The state goes from 1 to 0 with probability p; state 0 is absorbing.]
Geometric distribution — Transition equations
Let π(α, t) be the probability of being in state α at time t.
π(0, t + 1) = P[α(t + 1) = 0 | α(t) = 1] P[α(t) = 1]
+ P[α(t + 1) = 0 | α(t) = 0] P[α(t) = 0].

Since P[α(t + 1) = 0 | α(t) = 1] = p and P[α(t + 1) = 0 | α(t) = 0] = 1, we have

π(0, t + 1) = pπ(1, t) + π(0, t),
π(1, t + 1) = (1 − p)π(1, t).
Geometric distribution — transient probability distribution
With the initial condition π(1, 0) = 1, the solution is

π(0, t) = 1 − (1 − p)^t ,
π(1, t) = (1 − p)^t .
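As a sketch, these closed forms can be checked against the transition equations themselves (p = 0.1 is an arbitrary choice):

```python
# Iterate pi(0,t+1) = p*pi(1,t) + pi(0,t) and pi(1,t+1) = (1-p)*pi(1,t),
# comparing each step with the closed forms above.

p = 0.1
pi0, pi1 = 0.0, 1.0                 # start in state 1: pi(1,0) = 1
for t in range(1, 101):
    pi0, pi1 = p * pi1 + pi0, (1 - p) * pi1
    assert abs(pi0 - (1 - (1 - p) ** t)) < 1e-9
    assert abs(pi1 - (1 - p) ** t) < 1e-9
print(pi0, pi1)
```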
[Figure: π(0, t) and π(1, t) versus t for 0 ≤ t ≤ 100; π(0, t) rises from 0 toward 1 while π(1, t) decays from 1 toward 0.]
We have shown that the probability that the state goes from 1 to 0 at time t is

P(t) = (1 − p)^(t−1) p,

the geometric distribution.
Unreliable machine
1=up; 0=down.
[Figure: two-state chain. The machine fails (1 → 0) with probability p and stays up with probability 1 − p; it is repaired (0 → 1) with probability r and stays down with probability 1 − r.]
Unreliable machine — transient probability distribution

The transition equations are

π(0, t + 1) = (1 − r)π(0, t) + pπ(1, t),
π(1, t + 1) = rπ(0, t) + (1 − p)π(1, t).

It is not hard to show that

π(0, t) = p/(r + p) + [π(0, 0) − p/(r + p)](1 − p − r)^t ,
π(1, t) = 1 − π(0, t).
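A sketch of the same check for the unreliable machine, with illustrative p and r and the machine assumed to start up (π(0, 0) = 0):

```python
# Iterate the unreliable-machine transition equations and compare with the
# closed form pi(0,t) = p/(r+p) + [pi(0,0) - p/(r+p)] (1-p-r)^t.

p, r = 0.01, 0.1                    # illustrative failure/repair probabilities
pi0, pi1 = 0.0, 1.0                 # machine starts up: pi(0,0) = 0
for t in range(1, 201):
    pi0, pi1 = (1 - r) * pi0 + p * pi1, r * pi0 + (1 - p) * pi1
    closed = p / (r + p) + (0.0 - p / (r + p)) * (1 - p - r) ** t
    assert abs(pi0 - closed) < 1e-9
print(pi0, pi1)                     # approaches p/(r+p), r/(r+p)
```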
[Figure: π(0, t) and π(1, t) versus t for 0 ≤ t ≤ 100, converging to their steady-state values.]
Unreliable machine — steady-state probability distribution
As t → ∞,

π(0, t) → p/(r + p),
π(1, t) → r/(r + p),

which is the solution of the steady-state equations with the normalization π(0) + π(1) = 1.
Unreliable machine — efficiency
If a machine makes one part per time unit when it is operational, its average
production rate is

π(1) = r/(r + p).

This quantity is the efficiency (e) of the machine. If the machine makes one
part per τ time units when it is operational, its average production rate is

P = (1/τ) · r/(r + p).
Discrete state, continuous time

More precisely, the probability of a transition from state j to state i (i ≠ j)
during (t, t + δt) is λij δt + o(δt), where o(δt) is a function that satisfies

lim δt→0 o(δt)/δt = 0.
[Figure: transition graph with rates. State 5 receives flow at rates λ52, λ53, λ56, λ57 from states 2, 3, 6, 7 and sends flow at rates λ25, λ45, λ65 to states 2, 4, 6.]
π5(t + δt) ≈ λ52 δtπ2(t) + λ53 δtπ3(t) + λ56 δtπ6(t) + λ57 δtπ7(t)
+ [1 − (λ25 + λ45 + λ65)δt]π5(t).

Or,

lim δt→0 [π5(t + δt) − π5(t)]/δt = dπ5/dt (t)
= −(λ25 + λ45 + λ65)π5(t) + λ52 π2(t) + λ53 π3(t) + λ56 π6(t) + λ57 π7(t).

Defining λ55 = −(λ25 + λ45 + λ65), this is

dπ5/dt (t) = λ55 π5(t) + λ52 π2(t) + λ53 π3(t) + λ56 π6(t) + λ57 π7(t).
• Transition equations: dπi(t)/dt = Σj λij πj(t).
∗ ∗ ∗ Often confusing!!!
• Transition equations: dπ(t)/dt = Λπ(t).
• Normalization equation: ν^T π = 1, i.e., Σi πi = 1.
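A sketch of these equations in action: build an illustrative rate matrix Λ (off-diagonal λij is the rate from j to i; each diagonal entry makes its column sum to 0, as with λ55 above) and integrate dπ/dt = Λπ with forward Euler:

```python
# Forward-Euler integration of dpi/dt = Lambda pi for an illustrative 3-state
# chain. Off-diagonal Lam[i][j] is the rate from j to i; each diagonal entry
# is minus the total outflow rate, so every column of Lam sums to 0.

rates = {(1, 0): 2.0, (0, 1): 1.0, (2, 1): 0.5, (1, 2): 3.0}  # (i, j): rate j -> i
n = 3
Lam = [[0.0] * n for _ in range(n)]
for (i, j), lamij in rates.items():
    Lam[i][j] = lamij
for j in range(n):
    Lam[j][j] = -sum(Lam[i][j] for i in range(n) if i != j)

pi = [1.0, 0.0, 0.0]
dt = 1e-3
for _ in range(20000):                 # integrate out to t = 20
    dpi = [sum(Lam[i][j] * pi[j] for j in range(n)) for i in range(n)]
    pi = [pi[i] + dt * dpi[i] for i in range(n)]

print(pi)                              # still sums to 1; near the steady state
```

Because the columns of Λ sum to 0, the normalization Σi πi = 1 is preserved by the dynamics.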
[Figure: two-state chain; the state goes from 1 to 0 at rate µ.]
π(0, t + δt) = P[α(t + δt) = 0 | α(t) = 1] P[α(t) = 1]
+ P[α(t + δt) = 0 | α(t) = 0] P[α(t) = 0],

or

π(0, t + δt) = µδtπ(1, t) + π(0, t) + o(δt),

or

dπ(0, t)/dt = µπ(1, t).
π(1, t + δt) = P[α(t + δt) = 1 | α(t) = 1] P[α(t) = 1]
+ P[α(t + δt) = 1 | α(t) = 0] P[α(t) = 0],

or

π(1, t + δt) = (1 − µδt)π(1, t) + (0)π(0, t) + o(δt),

or

dπ(1, t)/dt = −µπ(1, t).
Transition equations:

dπ(0, t)/dt = µπ(1, t),
dπ(1, t)/dt = −µπ(1, t).

With the initial condition π(1, 0) = 1, the solution is

π(1, t) = e^(−µt)

and

π(0, t) = 1 − e^(−µt) .
The probability that the transition occurs during (t, t + δt) is therefore approximately (µδt)(e^(−µt)).
The exponential density function is therefore f(t) = µe^(−µt) for t ≥ 0 and 0 for
t < 0.
The time of the transition from 1 to 0 is said to be exponentially distributed
with rate µ.
The expected transition time is

1/µ = ∫₀^∞ t µe^(−µt) dt.
[Figure: the density f(t) = µe^(−µt) and the distribution function F(t) plotted versus t; the mean 1/µ is marked on each time axis.]
Exponential

[Figure: exponential density function and a small number of samples.]
• Memorylessness:
P(T > t + x |T > x ) = P(T > t)
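Both properties are easy to check numerically; the sketch below draws exponential samples by inverse-transform sampling (the rate µ = 2 is arbitrary):

```python
import math
import random

random.seed(0)
mu = 2.0
# Inverse transform: if U ~ Uniform(0,1), then -ln(1-U)/mu is exponential(mu).
samples = [-math.log(1 - random.random()) / mu for _ in range(200000)]

mean = sum(samples) / len(samples)
print(mean)                              # close to the expected value 1/mu = 0.5

# Memorylessness: P(T > t + x | T > x) should match P(T > t).
t, x = 0.4, 0.7
p_uncond = sum(s > t for s in samples) / len(samples)
tail = [s for s in samples if s > x]
p_cond = sum(s > t + x for s in tail) / len(tail)
print(p_uncond, p_cond)                  # approximately equal
```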
[Figure: continuous-time unreliable machine; up → down at rate p, down → up at rate r.]
Therefore,

π(1, t + δt) = (1 − pδt)π(1, t) + rδtπ(0, t) + o(δt),

or,

[π(1, t + δt) − π(1, t)]/δt = −pπ(1, t) + rπ(0, t) + o(δt)/δt,

or,

dπ(1, t)/dt = π(0, t)r − π(1, t)p.

Similarly,

dπ(0, t)/dt = −π(0, t)r + π(1, t)p.
Unreliable machine
Solution
π(0, t) = p/(r + p) + [π(0, 0) − p/(r + p)] e^(−(r+p)t) .
As t → ∞,

π(0) → p/(r + p),
π(1) → r/(r + p).
Unreliable machine
If the machine makes µ parts per time unit on the average when it
is operational, the steady-state average production rate is

µπ(1) = µ · r/(r + p) = µ · MTTF/(MTTF + MTTR) = µe.
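A Monte Carlo sketch of the efficiency formula, using the discrete-time machine with illustrative p and r: simulate the machine and measure the fraction of time it is up.

```python
# Simulate the discrete-time unreliable machine and measure the fraction of
# time steps in which it is up; this should approach e = r/(r+p).

import random

random.seed(1)
p, r = 0.01, 0.1            # illustrative failure and repair probabilities
up, up_time, T = True, 0, 200000
for _ in range(T):
    if up:
        up_time += 1
        if random.random() < p:
            up = False
    else:
        if random.random() < r:
            up = True

frac_up = up_time / T
print(frac_up)              # near e = r / (r + p) = 0.909...
```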
[Figure: timeline marking the event times T1, T1 + T2, T1 + T2 + T3, T1 + T2 + T3 + T4.]
Queueing theory
M/M/1 Queue
Number of events N(t) during [0, t]
[Figure: staircase plot of N(t) versus t.]
[Figure: a single-server queue with arrival rate λ and service rate µ.]
[Figure: sample path of the number in the M/M/1 system versus t.]
D/D/1 Queue

[Figure: sample path of the number in the D/D/1 system versus t.]
Sample path
• Suppose customers arrive in a Poisson process with average inter-arrival
time 1/λ = 1 minute; and that service time is exponentially distributed
with average service time 1/µ = 54 seconds.
[Figure: n(t) versus t for 0 ≤ t ≤ 100.]
[Figure: n(t) versus t for 0 ≤ t ≤ 5000; over a long run the queue length fluctuates widely, here reaching nearly 30.]
State space

[Figure: birth-death chain on states 0, 1, 2, . . . From state n, arrivals (rate λ) lead to n + 1 and service completions (rate µ) lead to n − 1.]
The transition equations are
dπ(n, t)
= π(n − 1, t)λ + π(n + 1, t)µ − π(n, t)(λ + µ), n > 0
dt
dπ(0, t)
= π(1, t)µ − π(0, t)λ.
dt
M/M/1 Queue – Steady State
π(n) = (1 − ρ)ρ^n , n ≥ 0, where ρ = λ/µ,

if ρ < 1.
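A sketch checking that π(n) = (1 − ρ)ρ^n satisfies the steady-state balance λπ(n − 1) + µπ(n + 1) = (λ + µ)π(n), and computing the mean number in the system (λ and µ are illustrative):

```python
# Verify the M/M/1 steady-state distribution numerically.

lam, mu = 0.9, 1.0
rho = lam / mu
pi = lambda n: (1 - rho) * rho ** n

for n in range(1, 50):                 # balance at each interior state
    assert abs(lam * pi(n - 1) + mu * pi(n + 1) - (lam + mu) * pi(n)) < 1e-12

total = sum(pi(n) for n in range(10000))
mean_n = sum(n * pi(n) for n in range(10000))
print(total, mean_n)                   # 1 and rho/(1 - rho)
```

For ρ = 0.9 the mean number in the system is ρ/(1 − ρ) = 9, which hints at how quickly congestion grows as λ approaches µ.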
Little’s Law

n̄ = λW : the average number of items in the system equals the arrival rate times the average time an item spends in the system.
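Little's law (n̄ = λW) can be checked on a simulated M/M/1 queue; the sketch below computes the time-average number in the system by an event sweep and compares it with the observed arrival rate times the observed mean time in system (λ and µ are arbitrary):

```python
import math
import random

random.seed(2)
lam, mu = 0.5, 1.0
draw = lambda rate: -math.log(1 - random.random()) / rate  # exponential sample

N, t = 100000, 0.0
arrivals = []
for _ in range(N):                       # Poisson arrival times
    t += draw(lam)
    arrivals.append(t)

departures, free = [], 0.0               # FIFO single server
for a in arrivals:
    start = max(a, free)                 # wait if the server is busy
    free = start + draw(mu)
    departures.append(free)

T = departures[-1]
W = sum(d - a for a, d in zip(arrivals, departures)) / N  # mean time in system
lam_hat = N / T                                           # observed arrival rate

# Time-average number in system via an event sweep.
events = sorted([(a, 1) for a in arrivals] + [(d, -1) for d in departures])
area, n, last = 0.0, 0, 0.0
for time, delta in events:
    area += n * (time - last)
    n, last = n + delta, time
n_bar = area / T

print(n_bar, lam_hat * W)                # nearly identical (Little's law)
```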
M/M/1 Queue capacity
[Figure: W versus λ for µ = 1; W grows without bound as λ approaches µ.]

• µ is the capacity of the system.
• If λ < µ, the system is stable and the waiting time remains bounded.
• If λ > µ, the waiting time grows over time.
M/M/1 Queue capacity

[Figure: W versus λ for µ = 1 and µ = 2; the µ = 2 curve is lower and extends to larger λ.]

• To increase capacity, increase µ.
• To decrease delay for a given λ, increase µ.
Other Single-Stage Models
Things get more complicated when:
Closed formulas and approximations exist for some, but not all,
cases.
M/M/s Queue
[Figure: s-server queue with s = 3, arrival rate λ, and service rate µ per server.]
[Figure: birth-death chain for the M/M/s queue; arrivals occur at rate λ, and the total service rate in state k is kµ for k ≤ s and sµ for k ≥ s.]
P(k) = π(0) s^k ρ^k /k! ,  k ≤ s,
P(k) = π(0) s^s ρ^k /s! ,  k > s,

where

ρ = λ/(sµ) < 1;  π(0) chosen so that Σk P(k) = 1.
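A sketch evaluating this distribution numerically (truncating the geometric tail at a large kmax); as a sanity check, s = 1 reproduces the M/M/1 result (1 − ρ)ρ^k:

```python
# Evaluate the M/M/s steady-state distribution P(k) given above.

from math import factorial

def mms_dist(lam, mu, s, kmax=2000):
    rho = lam / (s * mu)
    assert rho < 1
    w = [s**k * rho**k / factorial(k) if k <= s else s**s * rho**k / factorial(s)
         for k in range(kmax + 1)]
    pi0 = 1.0 / sum(w)                  # normalization: sum_k P(k) = 1
    return [pi0 * wk for wk in w]

# s = 1 reduces to M/M/1: P(k) = (1 - rho) rho^k.
P1 = mms_dist(lam=0.9, mu=1.0, s=1)
for k in range(10):
    assert abs(P1[k] - 0.1 * 0.9**k) < 1e-9

P3 = mms_dist(lam=2.4, mu=1.0, s=3)
print(sum(P3), sum(k * p for k, p in enumerate(P3)))  # 1 and the mean number in system
```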
W vs. λ; sµ = 4

[Figure: W versus λ for (µ, s) = (.5, 8), (1, 4), (2, 2), (4, 1); all four systems have the same total capacity sµ = 4.]
M/M/1 Queue capacity

To increase capacity or reduce delay,
• increase µ, or
Networks of Queues
Examples of open networks
• internet traffic
• emergency room (arrive, triage, waiting room, treatment,
tests, exit or hospital admission)
• food court
• airport (arrive, ticket counter, security, passport control, gate,
board plane)
• factory with no centralized material flow control after
material enters
[Figure: a food court as a network of queues. People enter, queue at Sbarro's Pizza, McDonald's, or TCBY Frozen Yogurt, carry trays to the tables, and exit.]
Jackson Networks
Open Jackson Networks
[Figure: a single node, with arrivals A entering a buffer B, a machine M, and departures D.]
• Items arrive from outside the system to node i according to a Poisson
process with rate αi .
• We will focus on the special case in which each node has a single server
with exponential processing time. The service rate of node i is µi .
Goals of analysis:
• Define λi as the total arrival rate of items to node i. This
includes items entering the network at i and items coming
from all other nodes.
• pji λj is the portion of the flow arriving at node j that goes to
node i.
• Then λi = αi + Σj pji λj .
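The traffic equations are a linear fixed point and can be solved by simple iteration; the 3-node network below is made up for illustration:

```python
# Solve lambda_i = alpha_i + sum_j p_ji lambda_j by fixed-point iteration.

alpha = [1.0, 0.0, 0.0]          # external arrivals only at node 0
p = [                            # p[j][i] = routing probability from j to i
    [0.0, 0.6, 0.4],             # node 0 sends 60% to node 1, 40% to node 2
    [0.1, 0.0, 0.0],             # node 1 sends 10% back to node 0, 90% leave
    [0.0, 0.0, 0.0],             # node 2: everything leaves the network
]
n = len(alpha)

lam = alpha[:]
for _ in range(1000):            # the routing is subcritical, so this converges
    lam = [alpha[i] + sum(p[j][i] * lam[j] for j in range(n)) for i in range(n)]

print([round(x, 6) for x in lam])
```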
Probability distribution:

π(n1, n2, . . . , nM) = π1(n1) π2(n2) · · · πM(nM), where πi(ni) = (1 − ρi) ρi^(ni) and ρi = λi/µi .
Solution:

• At each node i,

n̄i = Eni = ρi /(1 − ρi).
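A sketch plugging made-up values of λi and µi (single-server nodes) into these formulas:

```python
# Mean queue lengths in an open Jackson network with single-server nodes.
# The lambda_i and mu_i below are hypothetical, for illustration only.

lam = [50 / 47, 30 / 47, 20 / 47]   # a hypothetical traffic-equation solution
mu = [2.0, 1.0, 0.8]                # hypothetical service rates

rho = [l / m for l, m in zip(lam, mu)]
assert all(r < 1 for r in rho)      # every node must be stable

n_bar = [r / (1 - r) for r in rho]  # mean number at each node
total = sum(n_bar)                  # mean total number in the network
print(rho, n_bar, total)
```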
• The solution is product form. It says that the probability of the system
being in a given state is the product of the probabilities of the queue at
each node being in the corresponding state.
• This exact analytic formula is the reason that the Jackson network
model is of interest. It is relatively easy to use to calculate the
performance of a complex system.
• The product form solution holds for some more general cases.
• However, it is restricted to models of systems with unlimited storage
space. Consequently, it cannot model blocking.
  ◦ It is a good approximation for systems where blocking is rare, for
example when the arrival rate of material is much less than the
capacity of the system.
  ◦ It will not work so well where blocking occurs often.
Closed Jackson Networks
• Consider an extension in which
  ◦ αi = 0 for all nodes i.
  ◦ pi0 = 1 − Σj pij = 0 for all nodes i.
• Then
  ◦ Since nothing is entering and nothing is departing from the
network, the number of items in the network is constant .
That is, Σi ni(t) = N for all t.
  ◦ λi = Σj pji λj does not have a unique solution.
  ◦ This means that a different solution approach is needed to
analyze the system. It is used in the example that follows.
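One standard approach for closed networks of single-server exponential queues is mean value analysis (MVA). The sketch below uses an invented 3-station network and is not necessarily the method used in the lecture:

```python
# Mean value analysis (MVA) for a closed network of single-server queues.
# v are visit ratios: a solution of lambda_i = sum_j p_ji lambda_j, which in a
# closed network is only determined up to a scale factor.

v = [1.0, 0.6, 0.4]        # visit ratios (illustrative)
mu = [2.0, 1.0, 0.8]       # service rates (illustrative)
M, N = len(v), 10          # stations, circulating items (pallets)

n = [0.0] * M              # mean queue lengths with 0 items in the network
for k in range(1, N + 1):
    W = [(1 + n[i]) / mu[i] for i in range(M)]     # mean time at station i
    X = k / sum(v[i] * W[i] for i in range(M))     # network throughput
    n = [X * v[i] * W[i] for i in range(M)]        # updated mean queue lengths

print(X, n)                # throughput and queue lengths with N items
```

Sweeping N from 1 upward produces exactly the kinds of curves shown below: production rate and time in system versus the number of pallets.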
Closed Jackson Network model of an FMS

[Figure: FMS with a load/unload station, a transportation system, and Stations 1, . . . , M − 1. A part goes from the transportation system to Station j with probability qj ; the service rate at Station i is µi .]
• Input data: M, N, qj , µj , sj (j = 1, ..., M)
  ◦ M = number of stations, including transportation system
  ◦ N = number of pallets
  ◦ qj = fraction of parts going from the transportation system to Station j
  ◦ µj = processing rate of machines at Station j
  ◦ sj = number of machines at Station j
[Figure: production rate P versus number of pallets, 0 to 20.]
[Figure: average time in system versus number of pallets, 0 to 20.]
[Figure: utilization of Station 2 versus number of pallets, 0 to 20.]
[Figure: utilization of Station 2; horizontal axis from 0 to 5.]
[Figure: production rate P; horizontal axis from 0 to 5.]
[Figure: average time in system; horizontal axis from 0 to 5.]