Gibbs Sampling Is a Special Case of Metropolis–Hastings
Gibbs sampling is a computationally convenient Bayesian inference algorithm that is a special case of the Metropolis–Hastings algorithm. I discuss Gibbs sampling in the broader context of Markov chain Monte Carlo methods.
I avoided learning about Gibbs sampling for a long time. Practically, it is straightforward to implement without understanding the theory, and I assumed that learning about it would be yet another hill to climb. I found that, in fact, understanding Gibbs sampling is easy if you already understand Metropolis–Hastings. It is a special case in which the acceptance ratio is always one. In this post, I’ll outline the conceptual thread from Markov chain Monte Carlo (MCMC) methods to Metropolis–Hastings to Gibbs sampling, with links as needed for those who want more details.
Markov chain Monte Carlo
In Bayesian inference, given data $\mathbf{x}$ and parameters $\boldsymbol{\theta}$, the posterior

$$
p(\boldsymbol{\theta} \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta})}{\int p(\mathbf{x} \mid \tilde{\boldsymbol{\theta}}) \, p(\tilde{\boldsymbol{\theta}}) \, d\tilde{\boldsymbol{\theta}}}
$$

is generally unavailable in closed form, and we must rely on other methods to perform inference. MCMC methods approach this problem by simulating a Markov chain whose stationary distribution is the desired posterior, $p(\boldsymbol{\theta} \mid \mathbf{x})$. This works because an ergodic Markov chain is one in which the long-run probability of being in each state is independent of the initial state. The random walk is fated. Thus, walking an ergodic Markov chain and recording its states is, in the long run, like sampling from its stationary distribution.
If a Markov chain is irreducible, meaning any state is reachable from any other state, and if it is aperiodic, meaning the same state is not revisited at a fixed period, then it is ergodic. But how do we perform a random walk such that the implicitly constructed Markov chain's stationary distribution is the desired posterior? A sufficient condition is the reversibility constraint or detailed balance: the probability of transitioning from one state to another must equal the probability of moving in the reverse direction. If we denote the stationary distribution as $\pi$ and the transition kernel, the function that gives the probability of moving from state $a$ to state $b$, as $k(a, b)$, then the reversibility constraint is

$$
\pi(a) \, k(a, b) = \pi(b) \, k(b, a).
$$
You can find a proof of why this works in my previous post on Metropolis–Hastings or in (Chib & Greenberg, 1995). Intuitively, I think of the reversibility constraint as meaning that our proposed kernel is direction- and time-invariant; the only thing that matters is the probability of being in a given state, defined by $\pi$. In MCMC, we must propose a distribution $\pi$ and a kernel $k$ such that the constraints above hold. Then we know we are walking an ergodic Markov chain whose stationary distribution is $\pi$.
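To make detailed balance concrete, here is a small numerical check on a made-up three-state chain; the distribution $\pi$ and transition matrix below are my own and are chosen purely for illustration:

```python
import numpy as np

# A made-up three-state chain: pi is the candidate stationary distribution
# and K is a row-stochastic transition kernel chosen to satisfy detailed balance.
pi = np.array([0.2, 0.3, 0.5])
K = np.array([[0.70, 0.15, 0.15],
              [0.10, 0.60, 0.30],
              [0.06, 0.18, 0.76]])

# Detailed balance: pi(a) k(a, b) == pi(b) k(b, a) for every pair of states.
flows = pi[:, None] * K
assert np.allclose(flows, flows.T)

# It follows that pi is stationary: pi K = pi.
assert np.allclose(pi @ K, pi)

# Walking the chain and recording states recovers pi, regardless of the start state.
rng = np.random.default_rng(0)
state, counts = 0, np.zeros(3)
for _ in range(200_000):
    state = rng.choice(3, p=K[state])
    counts[state] += 1
print(counts / counts.sum())  # approximately [0.2, 0.3, 0.5]
```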
Metropolis–Hastings
The classic MCMC method is Metropolis–Hastings (Metropolis et al., 1953; Hastings, 1970). The idea is to use a proposal distribution $q(\boldsymbol{\theta}' \mid \boldsymbol{\theta})$ from which one can sample new states. At each iteration, propose a new state $\boldsymbol{\theta}'$ and accept it with probability

$$
\alpha(\boldsymbol{\theta}, \boldsymbol{\theta}') = \min\!\left(1, \frac{p(\boldsymbol{\theta}' \mid \mathbf{x}) \, q(\boldsymbol{\theta} \mid \boldsymbol{\theta}')}{p(\boldsymbol{\theta} \mid \mathbf{x}) \, q(\boldsymbol{\theta}' \mid \boldsymbol{\theta})}\right).
$$
Note that since the chain stays on state $\boldsymbol{\theta}$ with probability $1 - \alpha$, the chain is aperiodic. For a discrete state-space Markov chain, it is as if each state had a self-loop. If $q$ assigns non-zero probability to every state, the chain is irreducible and therefore also ergodic. Detailed balance is achieved because

$$
p(\boldsymbol{\theta} \mid \mathbf{x}) \, q(\boldsymbol{\theta}' \mid \boldsymbol{\theta}) \, \alpha(\boldsymbol{\theta}, \boldsymbol{\theta}')
= \min\!\big( p(\boldsymbol{\theta} \mid \mathbf{x}) \, q(\boldsymbol{\theta}' \mid \boldsymbol{\theta}),\; p(\boldsymbol{\theta}' \mid \mathbf{x}) \, q(\boldsymbol{\theta} \mid \boldsymbol{\theta}') \big)
= p(\boldsymbol{\theta}' \mid \mathbf{x}) \, q(\boldsymbol{\theta} \mid \boldsymbol{\theta}') \, \alpha(\boldsymbol{\theta}', \boldsymbol{\theta}).
$$
In other words, the acceptance ratio above was carefully constructed to ensure detailed balance. At this point you might ask: how does this help if we don't have access to $p(\boldsymbol{\theta} \mid \mathbf{x})$? Notice that

$$
\frac{p(\boldsymbol{\theta}' \mid \mathbf{x})}{p(\boldsymbol{\theta} \mid \mathbf{x})}
= \frac{p(\mathbf{x} \mid \boldsymbol{\theta}') \, p(\boldsymbol{\theta}') \big/ \int p(\mathbf{x} \mid \tilde{\boldsymbol{\theta}}) \, p(\tilde{\boldsymbol{\theta}}) \, d\tilde{\boldsymbol{\theta}}}{p(\mathbf{x} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta}) \big/ \int p(\mathbf{x} \mid \tilde{\boldsymbol{\theta}}) \, p(\tilde{\boldsymbol{\theta}}) \, d\tilde{\boldsymbol{\theta}}}
= \frac{p(\mathbf{x} \mid \boldsymbol{\theta}') \, p(\boldsymbol{\theta}')}{p(\mathbf{x} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta})}.
$$
Conveniently, the generally intractable integrals in the numerator and denominator,

$$
\int p(\mathbf{x} \mid \tilde{\boldsymbol{\theta}}) \, p(\tilde{\boldsymbol{\theta}}) \, d\tilde{\boldsymbol{\theta}} = p(\mathbf{x}),
$$
cancel out, and the right-most term above just requires computing the likelihood times the prior of our model under $\boldsymbol{\theta}'$ and $\boldsymbol{\theta}$. This leads to a special case of Metropolis–Hastings that is commonly used to infer the posterior in Bayesian inference:
Metropolis–Hastings:
for $t = 1, 2, \dots, T$ do
- Draw $\boldsymbol{\theta}' \sim q(\boldsymbol{\theta}' \mid \boldsymbol{\theta}^{(t-1)})$.
- Calculate
  $$
  \alpha = \min\!\left(1, \frac{p(\mathbf{x} \mid \boldsymbol{\theta}') \, p(\boldsymbol{\theta}') \, q(\boldsymbol{\theta}^{(t-1)} \mid \boldsymbol{\theta}')}{p(\mathbf{x} \mid \boldsymbol{\theta}^{(t-1)}) \, p(\boldsymbol{\theta}^{(t-1)}) \, q(\boldsymbol{\theta}' \mid \boldsymbol{\theta}^{(t-1)})}\right).
  $$
- Draw $u \sim \text{Uniform}(0, 1)$.
- If $u \leq \alpha$, then set $\boldsymbol{\theta}^{(t)} = \boldsymbol{\theta}'$. Otherwise, set $\boldsymbol{\theta}^{(t)} = \boldsymbol{\theta}^{(t-1)}$.
Notice that in the special case that the proposal distribution is the prior, $q(\boldsymbol{\theta}' \mid \boldsymbol{\theta}^{(t-1)}) = p(\boldsymbol{\theta}')$, the ratio reduces to a likelihood ratio,

$$
\alpha = \min\!\left(1, \frac{p(\mathbf{x} \mid \boldsymbol{\theta}')}{p(\mathbf{x} \mid \boldsymbol{\theta}^{(t-1)})}\right).
$$
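As a sketch of how this looks in code, here is a minimal random-walk Metropolis–Hastings sampler for a toy model I am assuming for illustration: a Gaussian likelihood with unknown mean and known scale, and a Gaussian prior on that mean. Names like `log_joint` and `metropolis_hastings` are my own. Note that only the likelihood times the prior is ever evaluated, since the evidence cancels in the acceptance ratio, and the symmetric proposal makes the $q$ terms cancel as well.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)  # synthetic data with known scale

def log_joint(theta):
    """Unnormalized log posterior: log p(x | theta) + log p(theta).
    The intractable evidence p(x) never appears; it cancels in the ratio."""
    log_lik = stats.norm.logpdf(x, loc=theta, scale=1.0).sum()
    log_prior = stats.norm.logpdf(theta, loc=0.0, scale=10.0)
    return log_lik + log_prior

def metropolis_hastings(n_iters=5000, step=0.5, theta_init=0.0):
    theta = theta_init
    samples = []
    for _ in range(n_iters):
        theta_prop = theta + step * rng.normal()   # symmetric proposal, so q cancels
        log_alpha = min(0.0, log_joint(theta_prop) - log_joint(theta))
        if np.log(rng.uniform()) < log_alpha:      # accept with probability alpha
            theta = theta_prop
        samples.append(theta)
    return np.array(samples)

samples = metropolis_hastings()
print(samples[1000:].mean())  # roughly the sample mean of x, close to 2.0
```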
Gibbs sampling
Gibbs sampling is a special case of Metropolis–Hastings in which the newly proposed state is always accepted with probability one. It is fairly straightforward to see this once you know the algorithm. Consider a $D$-dimensional posterior with parameters $\boldsymbol{\theta} = (\theta_1, \dots, \theta_D)$. The basic idea of Gibbs sampling is to iteratively sample from the conditional distribution $p(\theta_j \mid \boldsymbol{\theta}_{-j}, \mathbf{x})$, where $\boldsymbol{\theta}_{-j}$ is $\boldsymbol{\theta}$ without the $j$th parameter:
Gibbs sampling:
for $t = 1, 2, \dots, T$ do
- Draw $\theta_1^{(t)} \sim p(\theta_1 \mid \theta_2^{(t-1)}, \theta_3^{(t-1)}, \dots, \theta_D^{(t-1)}, \mathbf{x})$.
- Draw $\theta_2^{(t)} \sim p(\theta_2 \mid \theta_1^{(t)}, \theta_3^{(t-1)}, \dots, \theta_D^{(t-1)}, \mathbf{x})$.
- ...
- Draw $\theta_D^{(t)} \sim p(\theta_D \mid \theta_1^{(t)}, \theta_2^{(t)}, \dots, \theta_{D-1}^{(t)}, \mathbf{x})$.
To see why this works, first note that the posterior factorizes as

$$
p(\boldsymbol{\theta} \mid \mathbf{x}) = p(\theta_j \mid \boldsymbol{\theta}_{-j}, \mathbf{x}) \, p(\boldsymbol{\theta}_{-j} \mid \mathbf{x}).
$$
Ignoring the iterate notation, a Gibbs step proposes $\boldsymbol{\theta}'$ by resampling only the $j$th coordinate from $q(\boldsymbol{\theta}' \mid \boldsymbol{\theta}) = p(\theta_j' \mid \boldsymbol{\theta}_{-j}, \mathbf{x})$, so the Metropolis–Hastings acceptance probability for this transition can be written as

$$
\alpha
= \min\!\left(1, \frac{p(\boldsymbol{\theta}' \mid \mathbf{x}) \, q(\boldsymbol{\theta} \mid \boldsymbol{\theta}')}{p(\boldsymbol{\theta} \mid \mathbf{x}) \, q(\boldsymbol{\theta}' \mid \boldsymbol{\theta})}\right)
= \min\!\left(1, \frac{p(\theta_j' \mid \boldsymbol{\theta}'_{-j}, \mathbf{x}) \, p(\boldsymbol{\theta}'_{-j} \mid \mathbf{x}) \, p(\theta_j \mid \boldsymbol{\theta}'_{-j}, \mathbf{x})}{p(\theta_j \mid \boldsymbol{\theta}_{-j}, \mathbf{x}) \, p(\boldsymbol{\theta}_{-j} \mid \mathbf{x}) \, p(\theta_j' \mid \boldsymbol{\theta}_{-j}, \mathbf{x})}\right)
= 1.
$$

The terms cancel because $\boldsymbol{\theta}'_{-j} = \boldsymbol{\theta}_{-j}$: the proposal changes only the $j$th coordinate, so every factor in the numerator has a matching factor in the denominator. In other words, in each step of the Gibbs sampling algorithm, we are performing a Metropolis–Hastings-like random walk in which the proposed next state always adheres to the reversibility constraint.
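To make the cancellation concrete, here is a small numerical check on a toy two-parameter target that I am assuming for illustration: a zero-mean bivariate Gaussian with correlation $\rho$, whose conditional $p(\theta_1 \mid \theta_2)$ is Gaussian in closed form. It treats a single Gibbs update of $\theta_1$ as a Metropolis–Hastings proposal and evaluates the acceptance ratio directly:

```python
import numpy as np
from scipy import stats

rho = 0.8  # correlation of a toy zero-mean, unit-variance bivariate Gaussian
cov = np.array([[1.0, rho], [rho, 1.0]])

def log_target(theta):
    """Log density of the toy joint, standing in for log p(theta | x)."""
    return stats.multivariate_normal.logpdf(theta, mean=[0.0, 0.0], cov=cov)

def log_conditional(theta1, theta2):
    """Log p(theta_1 | theta_2): Gaussian with mean rho*theta2, variance 1 - rho^2."""
    return stats.norm.logpdf(theta1, loc=rho * theta2, scale=np.sqrt(1 - rho**2))

rng = np.random.default_rng(0)
theta = np.array([0.3, -1.2])                                   # current state
theta1_prop = rng.normal(rho * theta[1], np.sqrt(1 - rho**2))   # Gibbs proposal for theta_1
theta_prop = np.array([theta1_prop, theta[1]])                  # theta_2 is left unchanged

# Metropolis-Hastings log acceptance ratio with q set to the full conditional.
log_ratio = (log_target(theta_prop) + log_conditional(theta[0], theta_prop[1])
             - log_target(theta) - log_conditional(theta_prop[0], theta[1]))
print(np.exp(log_ratio))  # 1.0, up to floating-point error
```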
The primary advantage of Gibbs sampling is simple: proposals are always accepted. The primary disadvantage is that we need to be able to derive the conditional distributions above. This is tractable when the model is conditionally conjugate, so that each complete conditional $p(\theta_j \mid \boldsymbol{\theta}_{-j}, \mathbf{x})$ is a known distribution we can sample from directly.
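For completeness, here is a minimal Gibbs sampler for the same toy bivariate Gaussian used above; it is a stand-in for a real conditionally conjugate model, and every draw comes from an exact full conditional, so nothing is ever rejected:

```python
import numpy as np

rho = 0.8  # correlation of the toy zero-mean, unit-variance bivariate Gaussian
rng = np.random.default_rng(0)

def gibbs(n_iters=10_000):
    theta1, theta2 = 0.0, 0.0
    samples = np.empty((n_iters, 2))
    for t in range(n_iters):
        # Draw theta_1 | theta_2 ~ N(rho * theta_2, 1 - rho^2).
        theta1 = rng.normal(rho * theta2, np.sqrt(1 - rho**2))
        # Draw theta_2 | theta_1 ~ N(rho * theta_1, 1 - rho^2).
        theta2 = rng.normal(rho * theta1, np.sqrt(1 - rho**2))
        samples[t] = theta1, theta2
    return samples

samples = gibbs()
print(np.corrcoef(samples.T)[0, 1])  # approximately rho
```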
Conclusion
We can view Gibbs sampling as just a special case of the Metropolis–Hastings algorithm. In my mind, the conceptually hard part about both these algorithms is understanding the correctness of the reversibility constraint and how we can specify a distribution that is guaranteed to be the stationary distribution of an implicit Markov chain. Armed with this knowledge, we can see the correctness of Gibbs sampling with just a little algebra.
For a complete example of a Gibbs sampler, see my post on Pólya-gamma variable augmentation, particularly this section. The main idea of that work is to generate augmenting variables such that an intractable distribution becomes conditionally Gaussian. This allows for an efficient Gibbs sampler that alternates between sampling the augmenting variables and the main variables of interest.
- Chib, S., & Greenberg, E. (1995). Understanding the Metropolis–Hastings algorithm. The American Statistician, 49(4), 327–335.
- Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6), 1087–1092.
- Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1), 97–109.