Markov Chains
A Markov chain, named for Andrey Markov (1856–1922), is a stochastic process that describes transitions over time from one state to another and has the property that the future is independent of the past, given the present.
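In symbols (a standard formulation, not part of the original notebook), the Markov property says the next state depends only on the current state:

```latex
P(X_{n+1}=j \mid X_n=i,\, X_{n-1}=i_{n-1},\, \dots,\, X_0=i_0) \;=\; P(X_{n+1}=j \mid X_n=i)
```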
May 13, 2017—Christopher DeFiglia
Discrete-Time Markov Chain (DTMC)
We begin with a set of states for our system.
We define our state space to be 1, 2, 3:
In[]:= dtmc = DiscreteMarkovProcess[1, {{.2, .3, .5}, {.7, .2, .1}, {.3, .3, .4}}];
s = {1, 2, 3}
Out[]= {1, 2, 3}
Let’s take a look at the transition probability matrix for these states.
Make a matrix of probabilities where each row sums to 1:
In[]:= p = {{0.2, 0.3, 0.5}, {0.7, 0.2, 0.1}, {0.3, 0.3, 0.4}}; MatrixForm[p]
Out[]=
0.2  0.3  0.5
0.7  0.2  0.1
0.3  0.3  0.4
Each entry p_(i,j) is the probability of entering state j from state i.
For this particular DTMC, notice that you can enter any state from any other state.
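As a cross-check outside the Wolfram Language, the same transition matrix can be written in plain Python and validated: each row is a probability distribution over next states, so it must sum to 1. This is an illustration, not part of the original notebook.

```python
# Transition matrix for the three-state chain above
# (rows: current state, columns: next state).
p = [
    [0.2, 0.3, 0.5],  # from state 1
    [0.7, 0.2, 0.1],  # from state 2
    [0.3, 0.3, 0.4],  # from state 3
]

# Every row must sum to 1 for p to be a valid transition matrix.
for i, row in enumerate(p, start=1):
    assert abs(sum(row) - 1.0) < 1e-9, f"row {i} does not sum to 1"
```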
Compute the probability of entering state 3 from state 2, which we do by picking out p_(2,3):
In[]:= p[[2, 3]]
Out[]= 0.1
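One indexing caveat if you mirror this in Python (a hypothetical translation, not from the notebook): Wolfram's `p[[2, 3]]` is 1-indexed, while Python lists are 0-indexed, so the same entry is `p[1][2]`.

```python
p = [
    [0.2, 0.3, 0.5],
    [0.7, 0.2, 0.1],
    [0.3, 0.3, 0.4],
]

# Wolfram's p[[2, 3]] is 1-indexed; Python is 0-indexed, so the
# probability of entering state 3 from state 2 is:
prob = p[1][2]
print(prob)  # 0.1
```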
Now let’s simulate this system for 20 time steps.
We then make a plot of how the system evolved over time:
In[]:= data1 = RandomFunction[dtmc, {0, 20}]
Out[]= TemporalData[Time: 0 to 20, Data points: 21, Paths: 1]
In[]:= ListLinePlot[data1, Filling -> Axis, Ticks -> {Automatic, {1, 2, 3}}, InterpolationOrder -> 0]
Out[]= (step plot of the simulated state, 1 through 3, over time steps 0 to 20)
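The same simulation can be sketched with Python's standard library alone, assuming the transition matrix above and an initial state of 1 (matching `DiscreteMarkovProcess[1, ...]`); this is an illustration, not the notebook's method.

```python
import random

# Transition matrix and state labels from the example above.
p = [
    [0.2, 0.3, 0.5],
    [0.7, 0.2, 0.1],
    [0.3, 0.3, 0.4],
]
states = [1, 2, 3]

random.seed(0)  # fix the seed so the run is reproducible
path = [1]      # start in state 1 at time 0
for _ in range(20):
    current = path[-1]
    # Sample the next state using the current state's row as weights.
    nxt = random.choices(states, weights=p[current - 1])[0]
    path.append(nxt)

print(path)  # 21 data points: times 0 through 20
```

Plotting `path` with a step-style line (e.g. `matplotlib`'s `drawstyle="steps-post"`) reproduces the flavor of the `ListLinePlot` output above.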
Continuous-Time Markov Chain (CTMC)
Applications