Feb 28

Discrete-Time Markov Chains

Mindli Team

AI-Generated Content

Discrete-Time Markov Chains (DTMCs) are fundamental models for systems that evolve randomly over time while possessing a crucial "memoryless" property. From predicting weather patterns and modeling genetic inheritance to optimizing server queues and analyzing stock trends, they provide a powerful framework for understanding stochastic processes where the future depends only on the present state, not the full history of past events. Mastering DTMCs equips you to analyze long-term behavior, calculate key performance metrics, and solve complex probabilistic problems across science, engineering, and finance.

Foundational Concepts and the Transition Matrix

A Markov chain is a stochastic process that undergoes transitions from one state to another on a state space. Its defining characteristic is the Markov property, also called memorylessness: the conditional probability of moving to any future state depends only on the current state, not on the sequence of events that preceded it. Formally, for a process {X_n} and states i, j, i_0, …, i_{n−1}, this is expressed as P(X_{n+1} = j | X_n = i, X_{n−1} = i_{n−1}, …, X_0 = i_0) = P(X_{n+1} = j | X_n = i).

The dynamics of a finite-state DTMC are completely captured by its transition probability matrix, denoted P. Each entry P_ij represents the probability of moving from state i to state j in one step: P_ij = P(X_{n+1} = j | X_n = i). This matrix is a stochastic matrix, meaning each row sums to 1: Σ_j P_ij = 1 for all i. The n-step transition probabilities, the probability of going from i to j in n steps, are given by the entries of P^n, the matrix raised to the nth power.

Example: Consider a simple weather model where a day is either Sunny (S) or Rainy (R). Let:

  • If today is Sunny, tomorrow is Rainy with probability 0.4 and Sunny with probability 0.6.
  • If today is Rainy, tomorrow is Sunny with probability 0.2 and Rainy with probability 0.8.

The state space is {S, R}. The transition matrix is

P = | 0.6  0.4 |
    | 0.2  0.8 |

where row 1 (and column 1) corresponds to Sunny, and row 2/column 2 corresponds to Rainy. The probability that it will be sunny two days after a rainy day is the (2,1) entry of P², which works out to 0.2·0.6 + 0.8·0.2 = 0.28.
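The two-step probability can be verified with a few lines of plain Python (no libraries assumed); the state ordering 0 = Sunny, 1 = Rainy follows the matrix above:

```python
# Weather chain: state 0 = Sunny, state 1 = Rainy.
P = [[0.6, 0.4],
     [0.2, 0.8]]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = mat_mul(P, P)  # two-step transition probabilities, P squared
print(round(P2[1][0], 2))  # P(Sunny in 2 days | Rainy today) -> 0.28
```

The same `mat_mul` helper can be chained to get P^n for any n, matching the matrix-power statement above.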

Classification of States and Long-Run Behavior

Analyzing the structure of a Markov chain begins with classifying its states. A state j is accessible from state i if there is a non-zero probability of ever reaching j from i, denoted i → j. If i → j and j → i, the states communicate (i ↔ j), forming a communication class. A Markov chain is irreducible if all its states communicate, meaning there is only one class.

States are further characterized by their recurrence. A state is recurrent if, starting from it, the probability of eventually returning to it is 1. If this probability is less than 1, the state is transient. In a finite Markov chain, at least one state must be recurrent. A recurrent state is positive recurrent if the expected time to return is finite. Within a class, states are either all transient, all positive recurrent, or all null recurrent (a theoretical case where return is certain but the expected return time is infinite).

These classifications directly inform limiting distributions. For an irreducible, aperiodic (states do not cycle in a periodic pattern), and positive recurrent DTMC, a unique stationary distribution π exists. This is a probability vector satisfying πP = π with Σ_i π_i = 1. Once the chain's state distribution equals π, it remains π forever. Furthermore, the long-run proportion of time spent in any state i converges to π_i, regardless of the starting state. For our weather model, solving πP = π yields π = (π_S, π_R) = (1/3, 2/3). In the long run, about 33.3% of days are sunny and 66.7% are rainy.
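One practical way to find the stationary distribution is power iteration: start from any distribution and repeatedly multiply by P until it stops changing. A minimal sketch for the weather chain (the starting vector and iteration count are arbitrary choices):

```python
# Power iteration: repeatedly apply P until the distribution converges.
P = [[0.6, 0.4],
     [0.2, 0.8]]

pi = [0.5, 0.5]  # any starting distribution works for this chain
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

print([round(x, 4) for x in pi])  # -> [0.3333, 0.6667]
```

For small chains one could instead solve the linear system πP = π, Σπ_i = 1 directly; power iteration is shown here because it mirrors the "long-run proportion of time" interpretation.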

Key Computations: Absorption and Passage Times

For chains with both transient and recurrent states, two critical calculations are absorption probabilities and mean first passage times.

Absorption probabilities apply when one or more recurrent classes are absorbing—once entered, they cannot be left. The probability of being absorbed into a particular absorbing state, given a starting transient state, is found by solving a system of linear equations derived from the transition matrix. These are fundamental in models like genetics, where an absorbing state might represent a fixed gene pool, or in gambler's ruin problems, where absorption represents ruin or winning a target sum.

Example (Gambler's Ruin): A gambler starts with $2 and bets $1 per game, stopping at $0 (ruin) or $4 (goal). The states are {0, 1, 2, 3, 4}, with 0 and 4 as absorbing. The probability of reaching 4 before 0, starting from 2, is calculated by solving equations for the absorption probabilities from each non-absorbing state.
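Under the additional assumption of a fair game (win probability p = 0.5; the text does not fix this), the absorption probabilities h_i = P(reach 4 before 0 | start at i) satisfy h_0 = 0, h_4 = 1, and h_i = (1−p)·h_{i−1} + p·h_{i+1} for the interior states. A minimal sketch solving this system by fixed-point iteration:

```python
# Gambler's ruin with an assumed fair game (p = 0.5).
# h[i] = P(reach 4 before 0 | start with i dollars).
# Boundary conditions: h[0] = 0 (ruin), h[4] = 1 (goal).
p = 0.5
h = [0.0, 0.0, 0.0, 0.0, 1.0]
for _ in range(1000):
    for i in range(1, 4):
        h[i] = (1 - p) * h[i - 1] + p * h[i + 1]

print([round(x, 4) for x in h])  # -> [0.0, 0.25, 0.5, 0.75, 1.0]
```

For a fair game the answer from state 2 is 0.5, matching the known closed form h_i = i/4; the same loop handles a biased game by changing `p`.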

The mean first passage time m_ij is the expected number of steps to first reach state j starting from state i. For i ≠ j, these values are also found by solving a system of linear equations. If j is recurrent and we start in j, the mean recurrence time m_jj is the expected return time, which is the reciprocal of the stationary probability: m_jj = 1/π_j for positive recurrent states in an irreducible chain.
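For the weather chain, the first-step equations m_ij = 1 + Σ_{k≠j} P_ik · m_kj can again be solved by iteration rather than direct linear algebra; a sketch under that substitution:

```python
# Mean first passage times for the weather chain via first-step analysis:
# m[i][j] = 1 + sum over k != j of P[i][k] * m[k][j], solved by iteration.
P = [[0.6, 0.4],
     [0.2, 0.8]]
m = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(500):
    for i in range(2):
        for j in range(2):
            m[i][j] = 1 + sum(P[i][k] * m[k][j] for k in range(2) if k != j)

print(round(m[1][0], 4))  # Rainy -> Sunny: 5.0 steps on average
print(round(m[0][0], 4))  # mean recurrence of Sunny: 3.0 = 1 / pi_S
```

Note that m[0][0] = 3 confirms the reciprocal relation m_jj = 1/π_j, since π_S = 1/3 for this chain.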

Applications in Random Walks, Genetics, and Queueing

DTMCs model diverse phenomena. A simple random walk on the integers is a classic Markov chain where at each step you move +1 or -1 with given probabilities. It can model particle motion, gambling profits, or stock price changes. In genetics, DTMCs model the spread of alleles in a population under assumptions like random mating, with states representing the count of a specific gene. Absorption probabilities here answer questions about the ultimate fixation or loss of a genetic trait.
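The random walk is easy to simulate directly; a minimal sketch (the step count and seed are arbitrary illustrative choices):

```python
import random

# Simulate a simple symmetric random walk on the integers:
# at each step move +1 or -1 with equal probability.
random.seed(42)
steps = 1000
position = 0
for _ in range(steps):
    position += random.choice([1, -1])

print(position)  # net displacement after 1000 steps
```

After an even number of ±1 steps the position is always even, and its typical magnitude grows like √n, which is why such walks drift slowly despite being unbounded.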

In queueing theory, DTMCs model systems like a single-server queue. The state can be the number of customers in the system. Transitions occur with arrivals and service completions. Analyzing this chain reveals performance metrics like the long-run average queue length, the probability the server is idle (found via the stationary distribution), and waiting times. This directly informs resource allocation in networking, telecommunications, and service industries.
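A toy version of such a queue can be built and analyzed in a few lines; the per-slot arrival probability, service probability, and finite capacity below are all illustrative assumptions, not values from the text:

```python
# Discrete-time single-server queue sketch: in each time slot an arrival
# occurs with probability a and, if the queue is nonempty, a service
# completes with probability s. Capacity C; arrivals to a full queue are lost.
a, s, C = 0.3, 0.5, 10

# Build the transition matrix over states 0..C (number in system).
P = [[0.0] * (C + 1) for _ in range(C + 1)]
for n in range(C + 1):
    up = a * (1 - s) if n > 0 else a      # arrival without a departure
    down = s * (1 - a) if n > 0 else 0.0  # departure without an arrival
    if n < C:
        P[n][n + 1] = up
    else:
        P[n][n] += up                     # blocked arrival: state unchanged
    if n > 0:
        P[n][n - 1] = down
    P[n][n] += 1 - up - down              # no net change this slot

# Stationary distribution by power iteration; pi[0] is P(server idle).
pi = [1.0 / (C + 1)] * (C + 1)
for _ in range(5000):
    pi = [sum(pi[i] * P[i][j] for i in range(C + 1)) for j in range(C + 1)]

print(round(pi[0], 3))  # long-run probability the system is empty
```

From `pi` one can also read off the long-run average queue length as Σ n·π_n, the kind of performance metric mentioned above.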

Common Pitfalls

  1. Confusing Stationary and Limiting Distributions: A stationary distribution π satisfies πP = π. A limiting distribution may not exist or may depend on the starting state. They are equal only for irreducible, aperiodic, positive recurrent chains. Not every chain has a limiting distribution that is independent of the initial state, but every finite chain has at least one stationary distribution.
  2. Misclassifying States in Complex Chains: Always map communication classes carefully. A single transient state can mislead analysis. Use a state transition diagram to visually identify classes before labeling states as recurrent or transient. Remember: within a class, states are all of the same recurrence type.
  3. Incorrectly Formulating Equations for Absorption/Passage Times: When setting up linear equations for absorption probabilities or mean first passage times, a common error is mis-indexing the equations or forgetting to set the boundary conditions (e.g., the absorption probability for an absorbing state itself is 1). Always write the equation for a generic state based on the law of total probability conditioned on the first step.
  4. Assuming Irreducibility Without Verification: Many nice theorems (like unique stationary distribution) require irreducibility. Applying them to a reducible chain without adjustment leads to incorrect conclusions. Always check if the chain can reach every state from every other state.

Summary

  • A Discrete-Time Markov Chain is defined by a countable state space and the Markov property, where the future depends only on the present state, with all transition probabilities encoded in a stochastic matrix P.
  • Analyzing communication classes and classifying states as transient or recurrent is essential for understanding the chain's structure. The stationary distribution π, solving πP = π, describes long-run behavior for well-behaved (irreducible, aperiodic) chains.
  • For chains with absorbing states, absorption probabilities are found by solving systems of linear equations. Mean first passage times measure the expected steps to reach a target state.
  • DTMCs are widely applied, modeling random walks on graphs, genetic drift in population genetics, and customer flow in queueing systems, providing tools for prediction and optimization across scientific and engineering domains.
