A Two-State, Discrete-Time Markov Chain
Consider a system that is always in one of two states, 1 or 2. Every time a clock ticks, the system updates itself according to a 2×2 matrix of transition probabilities, the (i, j)th entry of which gives the probability that the system moves from state i to state j at any clock tick. A two-state Markov chain is a system like this, in which the next state depends only on the current state and not on previous states. Powers of the transition matrix approach a matrix with constant columns as the power increases. The number to which entries in the ith column converge is the asymptotic fraction of time the system spends in state i.
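The convergence of matrix powers can be sketched in a few lines of Python. The transition matrix P below is an assumed example (it is not taken from the demonstration): entry (i, j) is the probability of moving from state i to state j, so each row sums to 1.

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(p, n):
    """Compute the n-th power of a 2x2 matrix by repeated multiplication."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 identity
    for _ in range(n):
        result = mat_mul(result, p)
    return result

# Assumed example matrix: row i holds the probabilities of leaving state i.
P = [[0.8, 0.2],   # state 1 ("sunny"):   stay sunny, turn not sunny
     [0.4, 0.6]]   # state 2 ("not sunny"): turn sunny, stay not sunny

# For this P the stationary distribution is (2/3, 1/3), so both rows of
# P^50 are close to (2/3, 1/3) and each column is (nearly) constant.
P50 = mat_pow(P, 50)
print(P50[0])
print(P50[1])
```

Because both rows of a high power agree, each column of the limit is constant, and the constant in column i is the long-run fraction of time spent in state i.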
The state of the system at the time step given by the "time" slider is shown as the colored circle. Transition probabilities appear on the left diagram and can be changed using the "new transition matrix" slider. State 1 is colored yellow for "sunny" and state 2 is colored gray for "not sunny," in deference to the classic two-state Markov chain example. The histogram shows the number of visits to each state over the number of time steps set by the "time" slider. Powers of the transition matrix are shown at the bottom.