WOLFRAM|DEMONSTRATIONS PROJECT

Finite-State, Discrete-Time Markov Chains

[Controls: number of states = 4; time = 1; new transition matrix; power = 1]
The transition matrix raised to the power 1 (equal to the matrix itself):

  0.429   0.058   0.414   0.099
  0.223   0.061   0.502   0.214
  0.193   0.341   0.103   0.364
  0.171   0.100   0.395   0.334
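For experimenting outside the interactive panel, the displayed matrix can be entered directly in Wolfram Language. This is a minimal sketch using the rounded values shown above; the variable name p is illustrative, not part of the Demonstration.

(* transition matrix from the display above; entry (i,j) is the probability of
   moving from state i to state j, and each row sums to 1 up to rounding *)
p = {{0.429, 0.058, 0.414, 0.099},
     {0.223, 0.061, 0.502, 0.214},
     {0.193, 0.341, 0.103, 0.364},
     {0.171, 0.100, 0.395, 0.334}};
MatrixPower[p, 1] // MatrixForm   (* the first power is the matrix itself *)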
Consider a system that is always in one of n states, numbered 1 through n. Every time a clock ticks, the system updates itself according to an n×n matrix of transition probabilities, the (i,j)th entry of which gives the probability that the system moves from state i to state j at any clock tick. A Markov chain is a system like this, in which the next state depends only on the current state and not on previous states. Powers of the transition matrix approach a matrix with constant columns as the power increases. The number to which the entries in the ith column converge is the asymptotic fraction of time the system spends in state i.
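A short sketch of the convergence claim, reusing the illustrative matrix p entered above: a high power of p has nearly identical rows (that is, constant columns), and the same limiting distribution can be read off the dominant left eigenvector of p.

MatrixPower[p, 100] // MatrixForm   (* rows nearly identical, i.e. constant columns *)

(* asymptotic fraction of time per state, from the left eigenvector for the
   dominant eigenvalue; the displayed entries are rounded, so this is approximate *)
pi = Normalize[First[Eigenvectors[Transpose[p]]], Total]

Each row of the high power is approximately proportional to pi; for an exactly stochastic matrix the rows converge to pi itself.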
The image on the upper left shows the states of the chain, with the current state colored red; which state is current is determined by the time slider. The histogram tracks the number of visits to each state over the number of time steps set by the time slider. The transition probabilities can be changed with the new transition matrix slider. For small chains, powers of the transition matrix are shown at the bottom.
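The visit counts tracked by the histogram can be reproduced with a simple simulation. The sketch below again assumes the matrix p entered earlier (the names step, path, and counts are illustrative); each clock tick draws the next state from the current state's row and the visits are then tallied.

step[state_] := RandomChoice[p[[state]] -> Range[4]];  (* next state, weighted by the current row *)
path = NestList[step, 1, 1000];     (* start in state 1, run 1000 clock ticks *)
counts = BinCounts[path, {1, 5, 1}] (* number of visits to states 1, 2, 3, 4 *)
N[counts/Length[path]]              (* observed fraction of time in each state *)

With more steps, the observed fractions approach the asymptotic distribution pi computed above.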