Stochastic Process Simulation in Matlab
Student ID: SL20010001
1. INTRODUCTION
Many stochastic processes used for the modeling of financial assets and other systems in engineering
are Markovian, and this makes it relatively easy to simulate from them. Here we present a brief
introduction to the simulation of Markov chains. Our emphasis is on discrete-state chains both in
discrete and continuous time, but some examples with a general state space will be discussed too.
2. MARKOV CHAIN
Definition 1.1 A stochastic process {Xn : n ≥ 0} is called a Markov chain if, for all times n ≥ 0 and all states i0, . . ., i, j ∈ S,
P(Xn+1 = j | Xn = i, Xn−1 = in−1, . . ., X0 = i0) = P(Xn+1 = j | Xn = i) = Pij.   (1)
Pij denotes the probability that the chain, whenever in state i, moves next (one unit of time later) into
state j, and is referred to as a one-step transition probability. The square matrix P = (Pij), i, j ∈ S, is called
the one-step transition matrix, and since when leaving state i the chain must move to one of the states j
∈ S, each row sums to one (i.e., forms a probability distribution): for each i,
Σj∈S Pij = 1.
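As a quick numerical sanity check (a minimal MATLAB sketch with an illustrative matrix, not code from the report), one can verify that every row of a candidate transition matrix sums to one:
P = [0.5 0.5; 1 0];                        % illustrative transition matrix
assert(all(abs(sum(P,2) - 1) < 1e-12))     % each row must sum to one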
We are assuming that the transition probabilities do not depend on the time n, and so, in particular, using n = 0 in (1) yields Pij = P(X1 = j | X0 = i). (Formally, we are considering only time-homogeneous Markov chains, meaning that their transition probabilities are time-stationary.)
The defining property (1) can be described in words as: the future is independent of the past, given the present state. Letting n be the present time, the future after time n is {Xn+1, Xn+2, . . .}, the present
state is Xn, and the past is {X0, . . ., Xn−1}. If the value Xn = i is known, then the future evolution of the
chain only depends (at most) on i, in that it is stochastically independent of the past values Xn−1, . . .,
X0.
Conditional on the rv Xn, the future sequence of rvs { Xn+1,Xn+2, . . .} is independent of the past
sequence of rvs {X0, . . ., Xn-1}.
The defining Markov property above does not require that the state space be discrete, and in general
such a process possessing the Markov property is called a Markov chain or Markov process.
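Because the next state depends only on the current state, simulating a path is straightforward. The following is a minimal MATLAB sketch (an assumed illustration, not code from the report): at each step the next state is drawn from the row of P indexed by the current state.
% Simulate 10 steps of a chain with transition matrix P, starting from state 1;
% only the current state is needed at each step (the Markov property).
P = [0.5 0.5; 1 0];                        % any valid transition matrix
x = 1;                                     % current state
for k = 1:10
    x = find(rand <= cumsum(P(x,:)), 1);   % sample the next state from row x of P
    fprintf('step %d: state %d\n', k, x);
end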
2.1.1 Remark
A Markov chain with non-stationary transition probabilities is allowed to have a different transition matrix Pn for each time n. This means that, given the present state Xn and the present time n, the future only depends (at most) on (n, Xn) and is independent of the past.
Why does it matter? If 1% of a population have cancer, then for a screening test with 80% sensitivity and 95% specificity, Bayes' theorem gives P(cancer | positive test) = (0.80 × 0.01) / (0.80 × 0.01 + 0.05 × 0.99) ≈ 0.14, far lower than many people intuitively expect.
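The same calculation in MATLAB, as a minimal sketch of the arithmetic above:
prev = 0.01;                                 % prevalence: 1% of the population have cancer
sens = 0.80;                                 % sensitivity: P(positive | cancer)
spec = 0.95;                                 % specificity: P(negative | no cancer)
p_pos = sens*prev + (1 - spec)*(1 - prev);   % total probability of a positive test
p_cancer_given_pos = sens*prev / p_pos       % approximately 0.14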
Mixing up P[A|B] with P[B|A] is the Prosecutor's Fallacy: a small probability of the evidence given innocence need NOT mean a small probability of innocence given the evidence.
Figure 1-2. Sally Clark.
Bayes' theorem also applies to continuous variables, say systolic and diastolic blood pressure:
f(x|y) ∝ f(y|x) f(x).
This proportionality statement says that the posterior density of x given y is proportional to the likelihood times the prior.
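As a minimal sketch of this proportionality (an assumed toy example with a normal prior and a normal likelihood, not part of the original report), one can multiply prior and likelihood on a grid and normalize numerically:
x = linspace(-5, 5, 1000);                       % grid over the unknown x
prior = exp(-x.^2/(2*2^2)) / sqrt(2*pi*2^2);     % f(x): normal(0, 2^2) prior
y = 1.3;                                         % one observed value
lik  = exp(-(y - x).^2/2) / sqrt(2*pi);          % f(y|x): normal(x, 1) likelihood
post = prior .* lik;                             % unnormalized posterior f(x|y)
post = post / trapz(x, post);                    % normalize so it integrates to one
plot(x, prior, x, post)
legend('prior f(x)', 'posterior f(x|y)')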
3.1. Non-regular
First, an example of a non-regular (periodic) but irreducible Markov chain, whose powers do not converge. In this example each state can reach the other state, yet every power of the transition matrix still contains zero entries. The first step is to configure our matrices.
P = [0 1; 1 0]   (this is our state transition matrix)
P = [0 1; 1 0];                      % transition matrix of the periodic chain
t_all = [];                          % history of all entries of P^i
i_all = [];                          % matching time indices
figure(1)
clf
for i = 1:100
    t = P^i;                         % i-step transition matrix
    t_all = [t_all t(:)];            % store its four entries as a column
    i_all = [i_all ones(4,1)*i];     % record the time step for each entry
    subplot(211)
    draw_states(t,i)                 % user-defined helper that draws the two states
    subplot(212)
    plot(i_all',t_all','.-')
    xlabel('discrete time steps')
    ylabel('probability')
    title('evolution of transition probs. for each element')
    pause                            % press any key to advance to the next step
end
This code visualizes how the states are configured and lets us watch how the transition probabilities evolve over time.
Figure 3-1. Simulation interface.
In the figure we can clearly see the two states, A and B; the matrix entries are either 1 or 0, and the lower panel shows the evolution of the transition probabilities for each entry over the discrete time steps.
Figure 3-2. Iteration 5 and iteration 16.
We can observe the transition probabilities oscillating over the course of these iterations.
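The oscillation can be checked directly (a minimal sketch reusing the same P): the chain has period 2, so the powers of P simply alternate and never converge.
P = [0 1; 1 0];
disp(P^2)   % the identity matrix
disp(P^3)   % equal to P again, so P^n alternates between these two forever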
Now consider a chain whose transition matrix (TM) has zero entries initially, but which is still regular because some power of the matrix has all positive entries (here P^2 = [3/4 1/4; 1/2 1/2]):
P = [ 1/2 1/2 ;1 0]
t_all = [];
i_all = [];
figure(1)
clf
for i = 1:100
    t = P^i;                         % i-step transition matrix
    t_all = [t_all t(:)];
    i_all = [i_all ones(4,1)*i];
    subplot(211)
    draw_states(t,i)
    subplot(212)
    plot(i_all',t_all','.-')
    xlabel('discrete time steps')
    ylabel('probability')
    title('evolution of transition probs. for each element')
    pause
end
We can observe its evolution over time: the powers of P converge, so as time increases the system settles into asymptotic behavior and we can read off the final limiting matrix.
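The limit can also be computed without iterating (a minimal sketch reusing the same regular P): raise P to a large power, or take the left eigenvector of P associated with eigenvalue 1, the same eigenvector approach used later in the report.
P = [1/2 1/2; 1 0];
disp(P^50)                        % both rows approach the stationary distribution
[V, D] = eig(P');                 % left eigenvectors of P are eigenvectors of P'
[~, k] = min(abs(diag(D) - 1));   % pick the eigenvalue closest to 1
pi_stat = V(:,k) / sum(V(:,k));   % normalize to a probability vector
disp(pi_stat')                    % approximately [2/3 1/3]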
4. TRAINING LEVEL
4.1. Training: level one! Learn to read your opponent.
Our Bayesian Ninja has just learned about the sudden reappearance of an old enemy of the Bayesian Clan: the Frequentisian Ninja Clan!
The Bayesian ninja must now train to fight! One critical skill is to know your opponent, know their tendencies, and learn their patterns. The old master ninja is the last of the Bayesians to have fought the Frequentisian Ninja, and he describes the different close-combat fighting styles taught within their devious clan in terms of a Markov process.
The styles involve three attacks: 1) punch (red), 2) kick (yellow), and 3) flying falcon punch (blue).
Here we look at how, overall, the Frequentisian Ninjas will fight, given each "special attack" style.
% Each block below defines one fighting style; each assignment overwrites P,
% so run the simulation loop with whichever style is of interest.
% E. Honda style (likes to punch)
a = .9;
b = .3;
c = .2;
P = [a (1-a)/2 (1-a)/2; (1-b)/2 b (1-b)/2; (1-c)/2 (1-c)/2 c]
% style that favors the kick
a = .1;
b = .9;
c = .3;
P = [a (1-a)/2 (1-a)/2; (1-b)/2 b (1-b)/2; (1-c)/2 (1-c)/2 c]
% style that favors the flying falcon punch
a = .1;
b = .2;
c = .9;
P = [a (1-a)/2 (1-a)/2; (1-b)/2 b (1-b)/2; (1-c)/2 (1-c)/2 c]
% balanced style (all attacks equally likely)
a = .33333;
b = .333333;
c = .333333;
P = [a (1-a)/2 (1-a)/2; (1-b)/2 b (1-b)/2; (1-c)/2 (1-c)/2 c]
t_all = [];
i_all = [];
figure(1)
clf
for i = 1:100
    t = P^i;                                  % i-step transition matrix
    t_all = [t_all t(:)];
    i_all = [i_all ones(size(t_all,1),1)*i];
    subplot(211)
    draw_states3(t(:),i)
    subplot(212)
    plot(i_all',t_all','.-')
    xlabel('discrete time steps')
    ylabel('probability')
    title('evolution of transition probs. for each element')
    axis([0 max(max(i_all)) min(min(t_all))-.5 max(max(t_all))+.5])
    pause
end
Figure 4-1. Probability table 1.
Figure 4-2. Probability table 2.
Each attack was already represented in the code above by one of the three colors; each color is a different action. We can observe an extremely high probability of using the punch, higher than for the other attacks; remember that these probabilities describe how the process moves from one state to another.
According to the limiting matrix obtained from the iterated powers, there is roughly a 78% chance that our Ninja will be hit with a punch.
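This figure can be checked without iterating powers (a minimal sketch, assuming the punch-heavy style a = .9, b = .3, c = .2 defined above): the stationary distribution puts roughly 0.79 of the long-run probability on the punch state, consistent with the value quoted above.
a = .9; b = .3; c = .2;               % punch-heavy (E. Honda) style
P = [a (1-a)/2 (1-a)/2; (1-b)/2 b (1-b)/2; (1-c)/2 (1-c)/2 c];
[V, D] = eig(P');                     % left eigenvectors of P
[~, k] = min(abs(diag(D) - 1));       % eigenvalue closest to 1
pi_stat = V(:,k) / sum(V(:,k));       % stationary distribution
disp(pi_stat')                        % roughly [0.79 0.11 0.10]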
If the Master ninja can land a 3-hit combo in the specific order punch, kick, falcon punch, the Bayesian ninja will get KO'd!
Simulate this as a Markov process and see how the Bayesian will perform given his ability b to interrupt the punches and the starting state u, after T attacks (time steps).
a = .5    % probability that the opening punch lands
b = .7    % interrupt probability
% States track combo progress: 1 = no hits landed, 2 = punch landed,
% 3 = punch + kick landed, 4 = full combo (KO); state 4 resets to state 1.
P = [1-a a 0 0; b 0 1-b 0; b 0 0 1-b; 1 0 0 0]
t_all = [];
i_all = [];
figure(1)
clf
for i = 1:100
    t = P^i;                                  % i-step transition matrix
    t_all = [t_all t(:)];
    i_all = [i_all ones(size(t_all,1),1)*i];
    subplot(211)
    draw_states4(t,i)
    subplot(212)
    plot(i_all',t_all','.-')
    xlabel(['Time steps = ', num2str(i)])
    ylabel('probability')
    title('evolution of transition probs. for each element')
    pause
end
% How to solve with eigen math: the stationary distribution is the left
% eigenvector of P associated with eigenvalue 1.
[Evector, values] = eig(P');          % left eigenvectors of P
values = diag(values);                % eigenvalues as a column vector
N = 1;
[min_v, coln] = min(abs(values - N)); % find the eigenvalue closest to 1
% grab the corresponding vector and normalize; this is your stationary
% distribution!
Evector = Evector(:,coln);
fixed_row_vector = (Evector/sum(Evector))'
When we run the program, the transition probabilities settle down over time and the system becomes asymptotic. In the limiting matrix, the entry for the falcon-punch (KO) state is about 0.14, i.e. roughly a 14% chance of being KO'd.
Now, if we change the values of our matrix, for example the values of "a" and "b", the evolution of the system is different; it is still asymptotic, but the KO probability drops to only 3%.
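How strongly the long-run KO probability depends on the interrupt skill b can be explored directly (a minimal sketch reusing the combo matrix above with a fixed at .5; the exact percentages depend on the chosen a and b):
a = .5;
for b = [.3 .5 .7 .9]
    P = [1-a a 0 0; b 0 1-b 0; b 0 0 1-b; 1 0 0 0];
    [V, D] = eig(P');                  % left eigenvectors of P
    [~, k] = min(abs(diag(D) - 1));    % eigenvalue closest to 1
    pis = V(:,k) / sum(V(:,k));        % stationary distribution
    fprintf('b = %.1f -> long-run KO probability %.3f\n', b, pis(4));
end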
Figure 4-3. Time steps 21.
Figure 4-4.
5. END
Allowing approximate Bayes, one answer to what can be analyzed this way is 'almost any analysis'.
• Hierarchical modeling: one expert calls the classic frequentist version a "statistical no-man's land".
• Complex models, e.g. for messy data, measurement error, multiple sources of data: fitting them is possible under Bayesian approaches, but perhaps still not easy.
Bayesian analysis:
• Is useful in many settings, and you should know about it
• Is often not very different in practice from frequentist statistics; it is often helpful to think about
analyses from both Bayesian and non-Bayesian points of view
• Is not reserved for hard-core mathematicians, or computer scientists, or philosophers. If you find it
helpful, use it.