Steady-state probabilities in Markov Chains

Last Updated : 29 Apr, 2025

A Markov chain is a statistical model that explains how a system transitions from one state to another, with the next state depending only on the present state and not on previous states. This Markov property makes it easier to study complex systems using probabilities.

An important component of a Markov chain is the transition matrix, which specifies the probabilities of moving from one state to another. In the long run, the system can enter a steady state in which the probability of being in each state remains constant.

Understanding Steady-State Probabilities

A Markov chain models a stochastic process in which the transition from one state to another is governed by a fixed probability distribution. The steady-state probabilities of a Markov chain are the long-run probabilities of the system being in a specific state. They do not change with time, that is, after enough transitions, the system settles into a constant distribution.

Mathematically, the vector of steady-state probabilities satisfies:

\pi P = \pi

where P is the transition probability matrix. Also, the sum of all steady-state probabilities should be 1:

\sum_{i} \pi_i = 1

Together, these equations guarantee that the system reaches an equilibrium in which the probability of being in each state no longer changes over time.
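
As a quick illustration, consider a hypothetical two-state chain (the matrix below is made up for illustration and does not appear elsewhere in this article). Its steady-state vector can be worked out by hand and verified numerically:

Python
import numpy as np

# Hypothetical 2-state transition matrix (illustrative values only)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Solving pi P = pi with pi_1 + pi_2 = 1 by hand gives pi = [5/6, 1/6]
pi = np.array([5/6, 1/6])

print(np.allclose(pi @ P, pi))  # True: pi is unchanged by one more transition
print(pi.sum())                 # ~1.0: probabilities sum to one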

Markov Chains and Transition Matrices

A Markov chain consists of:

  • A state space (set of possible states).
  • A transition probability matrix P, where P_{ij} represents the probability of moving from state i to state j.

The system reaches a steady state when the probability of being in each state remains unchanged over time.

For ergodic Markov chains (both irreducible and aperiodic), a unique steady-state distribution always exists.
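
As a practical aside (a minimal sketch, not part of the methods below): a finite chain is irreducible and aperiodic exactly when some power of P has all strictly positive entries, so ergodicity can be tested numerically by examining successive powers of P:

Python
import numpy as np

def is_ergodic(P):
    """Check whether some power of P is strictly positive entrywise,
    which for finite chains is equivalent to irreducible + aperiodic."""
    n = P.shape[0]
    # Wielandt's bound: n^2 - 2n + 2 powers suffice for primitive matrices
    Q = np.eye(n)
    for _ in range(n**2 - 2*n + 2):
        Q = Q @ P
        if np.all(Q > 0):
            return True
    return False

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
print(is_ergodic(P))  # True: here every entry of P itself is already positive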

Methods for Calculating Steady-State Probabilities

There are several methods to calculate steady-state probabilities:

1. Solving the Linear System

Rewriting \pi P = \pi as (P^T - I)\pi^T = 0 gives a homogeneous linear system. Because this system is rank-deficient, we replace one equation with the constraint \sum_{i} \pi_i = 1 and solve for \pi.

Computing Steady-State Probabilities of a Markov Chain Using Linear Algebra

Python
import numpy as np

def steady_state_linear(P):
    n = P.shape[0]
    # (P^T - I) pi = 0 is rank-deficient, so replace one redundant
    # equation with the normalization constraint sum(pi) = 1
    A = P.T - np.eye(n)
    A[-1] = np.ones(n)
    b = np.zeros(n)
    b[-1] = 1

    return np.linalg.solve(A, b)

# Example Transition Matrix
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

steady_probs = steady_state_linear(P)
print("Steady-State Probabilities:", steady_probs)

Output
Steady-State Probabilities: [0.46341463 0.31707317 0.2195122 ]
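
As a quick sanity check (optional, continuing the snippet above), the returned vector should be invariant under P and sum to one:

Python
print(np.allclose(steady_probs @ P, steady_probs))  # True: pi P = pi
print(steady_probs.sum())                           # 1.0 (up to rounding)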

2. Power Method (Iterative Approach)

This method iteratively updates \pi until convergence:

  1. Begin with an initial probability vector.
  2. Update iteratively using \pi^{(t+1)} = \pi^{(t)} P.
  3. Stop when \| \pi^{(t+1)} - \pi^{(t)} \| < \epsilon.

Computing Steady-State Probabilities Using the Power Method

Python
import numpy as np

P = np.array([[0.7, 0.2, 0.1], 
              [0.3, 0.5, 0.2], 
              [0.4, 0.3, 0.3]])

def steady_state_power_method(P, tol=1e-6, max_iter=1000):
    n = P.shape[0]
    pi = np.ones(n) / n  # Start with uniform distribution
    
    for _ in range(max_iter):
        new_pi = np.dot(pi, P)
        if np.linalg.norm(new_pi - pi) < tol:
            break
        pi = new_pi
    
    return pi

# Call the function after defining P
steady_probs = steady_state_power_method(P)
print("Steady-State Probabilities (Power Method):", steady_probs)

Output
Steady-State Probabilities (Power Method): [0.52727197 0.30909138 0.16363665]
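
How quickly the power method converges depends on the spectral gap: the error typically shrinks geometrically at a rate governed by the magnitude of the second-largest eigenvalue of P. A short check of that quantity (a sketch, reusing the matrix above):

Python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])

# The largest eigenvalue magnitude is 1; the second-largest sets the
# geometric convergence rate of the power iteration (smaller is faster)
mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print("Second-largest |eigenvalue|:", mags[1])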

3. Eigenvector Method

The steady-state distribution is the eigenvector of P^T corresponding to the eigenvalue 1, that is, the solution of P^T \pi = \pi (equivalently, the dominant left eigenvector of P).

Computing Steady-State Probabilities Using the Eigenvector Method

Python
import numpy as np

P = np.array([[0.7, 0.2, 0.1], 
              [0.3, 0.5, 0.2], 
              [0.4, 0.3, 0.3]])

def steady_state_eigenvector(P):
    eigenvalues, eigenvectors = np.linalg.eig(P.T)
    
    # Find the index of the eigenvalue closest to 1
    index = np.abs(eigenvalues - 1).argmin()
    steady_state = eigenvectors[:, index].real
    
    # Normalize the vector to sum to 1
    return steady_state / np.sum(steady_state)

steady_probs = steady_state_eigenvector(P)

print("Steady-State Probabilities (Eigenvector Method):", steady_probs)

Output
Steady-State Probabilities (Eigenvector Method): [0.52727273 0.30909091 0.16363636]
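
Note that the power method and eigenvector method are applied to the same transition matrix here, so their results agree up to the iteration tolerance; the linear-system example used a different matrix, which is why its probabilities differ.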

Applications in Machine Learning

1. Reinforcement Learning (RL)- In RL, the transition probability matrix characterizes state-action transitions, and the steady-state distribution helps analyze the long-run behavior of policies.

2. Hidden Markov Models (HMMs)- HMMs use steady-state probabilities to represent long-run state distributions, supporting tasks such as speech recognition and sequence prediction.

3. PageRank Algorithm- Google's PageRank algorithm relies on the steady-state probabilities of a web-link transition matrix (see the sketch after this list).

4. Queuing Systems and Customer Behavior Modeling- Steady-state probabilities can be employed for modeling customer waiting times, load balancing in a system, and optimal resource scheduling.

5. Markov Decision Processes (MDPs)- In MDPs, steady-state probabilities are used to analyze the long-run stability of policies in decision-making models.
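
To make the PageRank connection concrete, here is a minimal sketch (the 4-page link structure and the damping factor 0.85 are assumed for illustration, not taken from this article): PageRank is the steady-state vector of a damped link-transition matrix, computed with the same power iteration as in method 2.

Python
import numpy as np

# Hypothetical 4-page web: adjacency[i][j] = 1 if page i links to page j
adjacency = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [1, 0, 0, 1],
                      [0, 0, 1, 0]], dtype=float)

# Row-normalize the links into a transition matrix
M = adjacency / adjacency.sum(axis=1, keepdims=True)

# Damped "Google matrix": follow a link with probability d,
# jump to a random page with probability 1 - d
d = 0.85
n = M.shape[0]
G = d * M + (1 - d) / n * np.ones((n, n))

# Power iteration, exactly as in method 2
pi = np.ones(n) / n
for _ in range(1000):
    new_pi = pi @ G
    if np.linalg.norm(new_pi - pi) < 1e-9:
        break
    pi = new_pi

print("PageRank scores:", pi)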

