Mathematical Modelling
Part 3: Probability Model
Content of Part 3
Chapter 1: Introduction to probability models
Chapter 2: Stochastic models
Chapter 3: Simulation of probability models
Simulation of Probability Models
• Computational methods for optimization are important,
because most optimization problems are too difficult to
be solved analytically.
• For dynamic models, it is often possible to determine
steady–state behavior analytically, but the study of
transient (time–dependent) behavior requires computer
simulation.
• Probability models are even more complex. Models with no time dynamics can sometimes be solved analytically, and steady–state results are available for the simplest stochastic models, but transient behavior generally has to be studied by computer simulation.
Outline
• Monte Carlo simulation
• Markov property
Monte Carlo Simulation
• Questions involving the transient or time–dependent
behavior of stochastic models are difficult to resolve
analytically. Monte Carlo simulation is a general
modeling technique that is usually effective for such
problems.
• Monte Carlo simulation models are easy to conceptualize, easy to explain, and they are the only viable method for modeling many complex stochastic systems.
Example: Vacation problem
• Arriving on your vacation, you are dismayed to
learn that the local weather service forecasts a
50% chance of rain every day this week. What
are the chances of 3 consecutive rainy days?
Step 1: Ask a question
• What is the probability that there will be at least 3 consecutive rainy days during the 7-day week, given a 50% chance of rain on each day?
Step 2: Select a modelling approach
• We will use Monte Carlo simulation for this problem.
• Monte Carlo simulation is a technique that can be applied to
any probability model. A probability model includes a
number of random variables and must also specify the
probability distribution for each of these random variables.
• Monte Carlo simulation uses a randomizing device to assign
a value to each random variable, in accordance with its
probability distribution. Since the results of the simulation depend on random factors, subsequent repetitions of the same simulation will produce different results.
• Usually, a Monte Carlo simulation will be repeated a number
of times in order to determine an average or expected
outcome.
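To make the last two points concrete, here is a minimal Python sketch (an illustration, not part of the original slides): a pseudorandom number generator acts as the randomizing device, assigning a value to a single Bernoulli random variable ("rain today?") according to its distribution, and repeating the experiment gives an average outcome.

```python
import random

# One random variable: "rain today?", Bernoulli with P(rain) = p.
def rain_today(p=0.5):
    return random.random() < p   # the randomizing device

# Repeating the simulation and averaging estimates the expected outcome.
trials = 1000
estimate = sum(rain_today() for _ in range(trials)) / trials
print(estimate)   # close to 0.5, but different on every repetition
```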
Step 3: Formulate the model
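A minimal Python sketch of one way to formulate the model (an illustration, not necessarily the algorithm on the original slides), assuming independent days, a 7-day week, and a week counted as rainy when it contains at least 3 consecutive rainy days:

```python
import random

def rainy_week(p=0.5, days=7, run_length=3):
    """Simulate one week; return True if it contains at least
    run_length consecutive rainy days."""
    consecutive = 0
    for _ in range(days):
        if random.random() < p:          # rain on this day
            consecutive += 1
            if consecutive >= run_length:
                return True
        else:
            consecutive = 0
    return False

def count_rainy_weeks(n=100, p=0.5):
    """Monte Carlo count of rainy weeks among n simulated weeks."""
    return sum(rainy_week(p) for _ in range(n))

print(count_rainy_weeks(n=100, p=0.5))   # typically around 40 out of 100
```

For comparison, the exact probability of at least 3 consecutive rainy days in 7 independent days with 𝑝 = 0.5 works out to 47/128 ≈ 0.37, consistent with the simulated counts reported in Step 4.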
Step 4: Solve the problem
• We ran a computer implementation of the
algorithm in Step 3 with 𝑝 = 0.5, 𝑛 = 100.
• The simulation counted 43 rainy weeks out of
100.
• On this basis, we would estimate a 43%
chance of a rainy week.
• Several additional runs were made to confirm these results. In every case the simulation counted around 40 rainy weeks out of 100.
Step 5: Answer the question
Arriving on your vacation, you find that the local
weather service predicts a 50% chance of rain
every day for a week. A simulation indicates that,
if this forecast is correct, there is a 40% chance
that there will be at least 3 consecutive rainy
days this week. These results apply to sunshine
as well as rain, and so, to end on a somewhat
more optimistic note, let us point out that there is
a 50% chance of sunshine every day this week,
and a 40% chance of at least 3 consecutive
days of sunshine. Enjoy your vacation!
Sensitivity analysis
• As a simple sensitivity analysis, we ran the simulation a number of times.
• Histogram: distribution of the number of rainy weeks out of 100 across these runs.
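A short sketch of how such a histogram can be produced (an illustration, reusing the count_rainy_weeks function from the sketch above; the 200 repetitions are an arbitrary choice):

```python
from collections import Counter

# Repeat the 100-week simulation many times and tabulate how often
# each count of rainy weeks occurs (uses count_rainy_weeks from above).
counts = [count_rainy_weeks(n=100, p=0.5) for _ in range(200)]

# Crude text histogram of the number of rainy weeks out of 100.
for value, freq in sorted(Counter(counts).items()):
    print(f"{value:3d} | {'*' * freq}")
```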
Exercise
Outline
• Monte Carlo simulation
• Markov property
Markov property
• A stochastic process is said to have the Markov
property if the information contained in the current state
of the process is all that is needed to determine the
probability distribution of future states.
• Monte Carlo simulation of a stochastic process is much
simpler in the presence of the Markov property,
because it reduces the quantity of information that
needs to be stored in the computer.
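A small illustration (not from the slides): a simple random walk has the Markov property, so its simulation only needs to keep the current position in memory, never the whole history.

```python
import random

def random_walk(steps=10, x0=0):
    """Simulate a simple random walk; the distribution of the next
    state depends only on the current state x (Markov property)."""
    x = x0
    for _ in range(steps):
        x += random.choice([-1, +1])   # next state from current state only
    return x

print(random_walk())
```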
Discrete time dynamic systems
Example: Astronauts in training are required to
practice a docking maneuver under manual control.
As a part of this maneuver, it is required to bring an
orbiting spacecraft to rest relative to another orbiting
craft. The hand controls provide for variable
acceleration and deceleration, and there is a device
on board that measures the rate of closing between
the two vehicles. The following strategy has been
proposed for bringing the craft to rest.
Discrete time dynamic systems
First, look at the closing velocity. If it is zero, we are
done. Otherwise, remember the closing velocity and
look at the acceleration control. Move the acceleration
control so that it is opposite to the closing velocity
(i.e., if closing velocity is positive, we slow down, and
we speed up if it is negative) and proportional in
magnitude (i.e., we brake twice as hard if we find
ourselves closing twice as fast). After a time, look at
the closing velocity again and repeat the procedure.
Under what circumstances will this strategy be
effective?
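In code, the proposed rule is simply proportional control (a sketch; the gain 𝑘 is not specified until later):

```python
# Brake opposite to, and in proportion to, the observed closing velocity v;
# k > 0 is the control gain (e.g. twice the closing speed -> twice the braking).
def acceleration_setting(v, k):
    return -k * v
```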
Step 1: Ask a question
• We make additional assumptions about the variables 𝑎𝑛, 𝑐𝑛 and 𝑤𝑛.
• The first one: 𝑐𝑛
• The random variable that represents the most uncertainty
(greatest variance) is 𝑐𝑛 , the time it takes to make a control
adjustment.
• We will assume that it takes roughly 1 second to observe the rate
of closing, 2 seconds to calculate the adjustment, and 2 seconds
to make the adjustment. The actual time to carry out each phase
is random.
• Let 𝑅𝑛 denote the time to read the closing velocity, 𝑆𝑛 the time to
calculate the desired adjustment, and 𝑇𝑛 the time to make the
adjustment.
Step 1: Ask a question
• The first one: 𝑐𝑛
• So, we have 𝐸𝑅𝑛 = 1 second and 𝐸𝑆𝑛 = 𝐸𝑇𝑛 = 2 seconds.
• We make additional assumptions about these variables: they take nonnegative values, are mutually independent, and stay close to their means.
• So, 𝑐𝑛 = 𝑅𝑛 + 𝑆𝑛 + 𝑇𝑛 .
• The second one: 𝑎𝑛, the acceleration after this control adjustment. We assume that 𝑎𝑛 = −𝑘𝑣𝑛 + 𝜖𝑛, where 𝜖𝑛 is a random error that follows a normal distribution with mean zero and standard deviation 𝜎 = 0.05.
• The third one: 𝑤𝑛, the waiting time before the next control adjustment.
• This will depend on 𝑐𝑛 if we are trying to maintain a fixed time of 15 seconds total between control adjustments.
• We assume 𝑤𝑛 = 15 − 𝑐𝑛 + 𝐸𝑛, where 𝐸𝑛 is a small random error, normally distributed with mean zero and standard deviation 0.1 seconds.
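A short Python sketch of drawing one set of these random variables (an illustration: the means 𝐸𝑅𝑛 = 1, 𝐸𝑆𝑛 = 𝐸𝑇𝑛 = 2 and the normal errors are as stated above, but the uniform ranges for 𝑅𝑛, 𝑆𝑛 and 𝑇𝑛 are an assumption, since the text only requires nonnegative values close to the mean):

```python
import random

def sample_step(v_n, k=0.02):
    """Draw (c_n, a_n, w_n) for one control adjustment, given the
    current closing velocity v_n."""
    r = random.uniform(0.5, 1.5)               # R_n: read velocity, mean 1 s (assumed uniform)
    s = random.uniform(1.5, 2.5)               # S_n: calculate,     mean 2 s (assumed uniform)
    t = random.uniform(1.5, 2.5)               # T_n: adjust,        mean 2 s (assumed uniform)
    c = r + s + t                              # c_n = R_n + S_n + T_n
    a = -k * v_n + random.gauss(0.0, 0.05)     # a_n = -k v_n + eps_n
    w = 15.0 - c + random.gauss(0.0, 0.1)      # w_n = 15 - c_n + E_n
    return c, a, w
```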
Step 1: Ask a question
• Analysis objective: we are interested in seeing whether 𝑣𝑛 → 0.
• We assume an initial closing velocity of 50 m/s; once the closing velocity has been reduced to 0.1 m/s, the maneuver is considered successful.
• We are most interested in the total time it takes to succeed.
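One plausible way to combine these assumptions into state equations (an interpretation on our part, chosen to be consistent with the state variables 𝑇 = 𝑡𝑛, 𝑉 = 𝑣𝑛, 𝐴 = 𝑎𝑛, 𝐵 = 𝑎𝑛−1 used in Step 3) is to let the previous acceleration 𝑎𝑛−1 act during the adjustment time 𝑐𝑛 and the new acceleration 𝑎𝑛 act during the waiting time 𝑤𝑛:

t_{n+1} = t_n + c_n + w_n,    v_{n+1} = v_n + a_{n-1} c_n + a_n w_n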
Step 2: Select a modelling approach
• We will use a Monte Carlo simulation based on
the Markov property.
• The general idea is as follows. At each time step 𝑛
there is a vector 𝑋𝑛 that describes the current
state of the system. The sequence of random
vectors {𝑋𝑛 } is assumed to have the Markov
property. In other words, the current state
𝑋𝑛 contains all of the information needed to
determine the probability distribution of the
next state, 𝑋𝑛+1 .
Step 2: Select a modelling approach
• General structure of the simulation:
• First, we initialize variables and read data files. At
this stage, we must specify the initial state 𝑋0 .
• Next, we enter a loop that repeats until an end
condition is satisfied. In the loop, we use 𝑋𝑛 to
specify the distribution of 𝑋𝑛+1 , and then we use a
random number generator to determine 𝑋𝑛+1
according to that distribution. We must also
calculate and store any information needed to generate the simulation Measures of Performance (MOPs). Once the end condition occurs, we exit the loop and output the MOPs.
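A generic Python skeleton of this structure (a sketch, not from the slides; the initial state, transition rule, end condition and measure of performance are placeholders to be supplied for a particular model):

```python
import random

def simulate(initial_state, transition, end_condition, measure):
    """Generic Monte Carlo loop for a process with the Markov property.

    transition(state, rng) -> next state, drawn from the distribution
                              determined by the current state alone
    end_condition(state)   -> True when the simulation should stop
    measure(history)       -> the measure of performance (MOP)
    """
    rng = random.Random()
    state = initial_state              # specify X_0
    history = [state]                  # store what the MOPs will need
    while not end_condition(state):
        state = transition(state, rng) # X_{n+1} depends only on X_n
        history.append(state)
    return measure(history)            # output the MOPs
```

For the docking problem, the state vector would be (𝑡𝑛, 𝑣𝑛, 𝑎𝑛, 𝑎𝑛−1), matching the state variables listed in Step 3 below.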
Step 3: Formulate the model
• 𝑇 = 𝑡𝑛, 𝑉 = 𝑣𝑛, 𝐴 = 𝑎𝑛, 𝐵 = 𝑎𝑛−1 are the state variables.
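A Monte Carlo sketch of this model in Python (an illustration, not the algorithm from the original slides): it combines the assumptions from Step 1 with the state variables above, uses the interpretation that the previous acceleration acts during the adjustment time and the new one during the wait, and again assumes illustrative uniform distributions for 𝑅𝑛, 𝑆𝑛, 𝑇𝑛.

```python
import random

def docking_run(k=0.02, v0=50.0, eps_sigma=0.05, tol=0.1, seed=None):
    """One Monte Carlo run of the docking maneuver.

    Returns the total time T until the closing velocity V satisfies
    |V| < tol. State variables as in Step 3: T = t_n, V = v_n,
    A = a_n, B = a_{n-1}."""
    rng = random.Random(seed)
    T, V = 0.0, v0
    B = 0.0                                     # acceleration before the first adjustment
    while abs(V) > tol:
        R = rng.uniform(0.5, 1.5)               # read velocity, mean 1 s (assumed uniform)
        S = rng.uniform(1.5, 2.5)               # calculate,     mean 2 s (assumed uniform)
        U = rng.uniform(1.5, 2.5)               # adjust,        mean 2 s (assumed uniform)
        C = R + S + U                           # c_n
        W = 15.0 - C + rng.gauss(0.0, 0.1)      # w_n = 15 - c_n + E_n
        A = -k * V + rng.gauss(0.0, eps_sigma)  # a_n = -k v_n + eps_n
        V += B * C + A * W                      # old acceleration during C, new during W
        T += C + W
        B = A
    return T

# Example: 20 independent runs with k = 0.02, as in Step 4
times = [docking_run(k=0.02, seed=i) for i in range(20)]
print(min(times), max(times), sum(times) / len(times))
```

Because the distributions of 𝑅𝑛, 𝑆𝑛 and 𝑇𝑛 are stand-ins, the numbers produced by this sketch will not exactly reproduce the runs reported in Step 4.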
Step 4: Solve the problem
• This figure shows the results of 20 simulation runs with
𝑘 = 0.02, 𝜎 = 0.1.
Sensitivity analysis