I learned very early the difference between knowing the name of something and knowing something.

Richard Feynman

Standard Errors and Confidence Intervals

How do we know when a parameter estimate from a random sample is significant? I discuss the use of standard errors and confidence intervals to answer this question.

A Python Implementation of the Multivariate Skew Normal

I needed a Python implementation of the multivariate skew normal. I wrote one based on SciPy's multivariate distributions module.

Understanding Dirichlet–Multinomial Models

The Dirichlet distribution is really a multivariate beta distribution. I discuss this connection and then derive the posterior, marginal likelihood, and posterior predictive distributions for Dirichlet–multinomial models.

Fast Computation of the Multivariate Normal PDF for Multiple Parameters

For a project, I needed to compute the log PDF of a vector $\mathbf{x}$ for multiple pairs of parameters, $\{(\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1), \dots, (\boldsymbol{\mu}_n, \boldsymbol{\Sigma}_n)\}$. I discuss a fast Python implementation.

Why Shouldn't I Invert That Matrix?

A standard claim in textbooks and courses in numerical linear algebra is that one should not invert a matrix to solve for $\mathbf{x}$ in $\mathbf{Ax} = \mathbf{b}$. I explore why this is typically true.
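As a minimal illustration of the point (my own sketch in NumPy with a hypothetical well-conditioned matrix, not the post's experiments), a factorization-based solve does less work and typically leaves a smaller residual than multiplying by an explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n)) + n * np.eye(n)  # a well-conditioned test matrix
b = rng.standard_normal(n)

x_inv = np.linalg.inv(A) @ b     # explicit inverse: more flops, more rounding error
x_solve = np.linalg.solve(A, b)  # LU-factorization-based solve

print(np.linalg.norm(A @ x_inv - b))    # residual from the inverse-based solution
print(np.linalg.norm(A @ x_solve - b))  # typically smaller
```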

Inference for Hidden Markov Models

Expectation–maximization for hidden Markov models is called the Baum–Welch algorithm, and it relies on the forward–backward algorithm for efficient computation. I review HMMs and then present these algorithms in detail.

The Unscented Transform

The unscented transform, most commonly associated with the nonlinear Kalman filter, was proposed by Jeffrey Uhlmann to estimate the mean and covariance of a Gaussian random variable after a nonlinear transformation. I illustrate the main idea.

Conjugate Analysis for the Multivariate Gaussian

I work through Bayesian parameter estimation of the mean for the multivariate Gaussian.

A Python Demonstration that Mutual Information Is Symmetric

I provide a numerical demonstration that the mutual information of two random variables, the observations and latent variables in a Gaussian mixture model, is symmetric.

Proof that Mutual Information Is Symmetric

The mutual information (MI) of two random variables quantifies how much information (in bits or nats) is obtained about one random variable by observing the other. I discuss MI and show it is symmetric.

From Entropy Search to Predictive Entropy Search

In Bayesian optimization, a popular acquisition function is predictive entropy search, which is a clever reframing of another acquisition function, entropy search. I rederive the connection and explain why this reframing is useful.

A Unifying Review of EM for Gaussian Latent Factor Models

The expectation–maximization (EM) updates for several Gaussian latent factor models (factor analysis, probabilistic principal component analysis, probabilistic canonical correlation analysis, and inter-battery factor analysis) are closely related. I explore these relationships in detail.

Implementing Bayesian Online Changepoint Detection

I annotate my Python implementation of the framework in Adams and MacKay's 2007 paper, "Bayesian Online Changepoint Detection".

Entropy of the Gaussian

I derive the entropy for the univariate and multivariate Gaussian distributions.

Bayesian Inference for Beta–Bernoulli Models

I derive the posterior, marginal likelihood, and posterior predictive distributions for beta–Bernoulli models.

Antifragile Ideas

Thoughts on John Carmack's theory of antifragile idea generation.

Gaussian Process Dynamical Models

Wang and Fleet's 2008 paper, "Gaussian Process Dynamical Models for Human Motion", introduces a Gaussian process latent variable model with Gaussian process latent dynamics. I discuss this paper in detail.

Matrix Multiplication as the Sum of Outer Products

The transpose of a matrix times itself is equal to the sum of outer products created by the rows of the matrix. I prove this identity.
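A quick NumPy check of the identity (my own sketch, not the post's proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# A^T A equals the sum of the outer products of the rows of A.
outer_sum = sum(np.outer(row, row) for row in A)
print(np.allclose(A.T @ A, outer_sum))  # True
```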

From Probabilistic PCA to the GPLVM

A Gaussian process latent variable model (GPLVM) can be viewed as a generalization of probabilistic principal component analysis (PCA) in which the latent maps are Gaussian-process distributed. I discuss this relationship.

Hamiltonian Monte Carlo

The physics of Hamiltonian Monte Carlo, part 3: In the final post in this series, I discuss Hamiltonian Monte Carlo, building off previous discussions of the Euler–Lagrange equation and Hamiltonian dynamics.

Gaussian Processes with Multinomial Observations

Linderman, Johnson, and Adams's 2015 paper, "Dependent multinomial models made easy: Stick-breaking with the Pólya-gamma augmentation", introduces a Gibbs sampler for Gaussian processes with multinomial observations. I discuss this model in detail.

Summing Quadratic Forms

The sum of two expressions that are quadratic in $\mathbf{x}$ can be rewritten as a single quadratic form in $\mathbf{x}$, plus a constant. I work through this derivation in detail.
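For orientation, the result being derived, stated in my own notation (symmetric $\mathbf{A}$ and $\mathbf{B}$, with $\mathbf{A} + \mathbf{B}$ invertible):

$$
(\mathbf{x} - \mathbf{a})^{\top} \mathbf{A} (\mathbf{x} - \mathbf{a}) + (\mathbf{x} - \mathbf{b})^{\top} \mathbf{B} (\mathbf{x} - \mathbf{b}) = (\mathbf{x} - \mathbf{c})^{\top} \mathbf{C} (\mathbf{x} - \mathbf{c}) + \text{const},
$$

where $\mathbf{C} = \mathbf{A} + \mathbf{B}$ and $\mathbf{c} = \mathbf{C}^{-1}(\mathbf{A}\mathbf{a} + \mathbf{B}\mathbf{b})$.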

A Stick-Breaking Representation of the Multinomial Distribution

Following Linderman, Johnson, and Adams's 2015 paper, "Dependent multinomial models made easy: Stick-breaking with the Pólya-gamma augmentation", I show that a multinomial density can be represented as a product of binomial densities.

How I Built This Blog

I have received a number of compliments on my blog's style or theme and even more requests for details on the blogging environment. So here's how I built my blog.

Lagrangian and Hamiltonian Mechanics

The physics of Hamiltonian Monte Carlo, part 2: Building off the Euler–Lagrange equation, I discuss Lagrangian mechanics, the principle of stationary action, and Hamilton's equations.

The Euler–Lagrange Equation

The physics of Hamiltonian Monte Carlo, part 1: Lagrangian and Hamiltonian mechanics are based on the principle of stationary action, formalized by the calculus of variations and the Euler–Lagrange equation. I discuss this result.

Understanding Moments

Why are a distribution's moments called "moments"? How does the equation for a moment capture the shape of a distribution? Why do we typically only study four moments? I explore these and other questions in detail.

Gibbs Sampling Is a Special Case of Metropolis–Hastings

Gibbs sampling is a computationally convenient Bayesian inference algorithm that is a special case of the Metropolis–Hastings algorithm. I discuss Gibbs sampling in the broader context of Markov chain Monte Carlo methods.

The Log-Sum-Exp Trick

Normalizing vectors of log probabilities is a common task in statistical modeling, but it can result in under- or overflow when exponentiating large values. I discuss the log-sum-exp trick for resolving this issue.
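A small sketch of the problem and the fix (my own example, using SciPy's `logsumexp`):

```python
import numpy as np
from scipy.special import logsumexp

log_p = np.array([-1000.0, -1001.0, -1002.0])  # unnormalized log probabilities

naive = np.exp(log_p) / np.exp(log_p).sum()  # exp underflows to zero, giving nan
stable = np.exp(log_p - logsumexp(log_p))    # logsumexp shifts by the max internally

print(naive)   # [nan nan nan]
print(stable)  # [0.665... 0.245... 0.090...]
```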

Bayesian Linear Regression

Linear models, part 3. I discuss Bayesian linear regression, or classical linear regression with a prior on the parameters. Using a particular prior as an example, I provide intuition and detailed derivations for the full model.

Can Linear Models Overfit?

Linear models, part 2. Before discussing regularization, I discuss what overfitting means for linear models.

A Python Implementation of the Multivariate t-distribution

I needed a fast and numerically stable Python implementation of the multivariate t-distribution. I wrote one based on SciPy's multivariate distributions module.

Why I Keep a Research Blog

Writing has made me a better thinker and researcher. I expand on my reasons why.

Comparing Kernel Ridge with Gaussian Process Regression

The posterior mean from a Gaussian process regressor is related to the prediction of a kernel ridge regressor. I explore this connection in detail.

Classical Linear Regression

Linear models, part 1. I discuss classical linear regression with an emphasis on multiple interpretations of the model.

Random Fourier Features

Rahimi and Recht's 2007 paper, "Random Features for Large-Scale Kernel Machines", introduces a framework for randomized, low-dimensional approximations of kernel functions. I discuss this paper in detail with a focus on random Fourier features.

Implicit Lifting and the Kernel Trick

I disentangle what I call the "lifting trick" from the kernel trick as a way of clarifying what the kernel trick is and does.

Asymptotic Normality of Maximum Likelihood Estimators

Under certain regularity conditions, maximum likelihood estimators are "asymptotically efficient", meaning that they achieve the Cramér–Rao lower bound in the limit. I discuss this result.

Proof of the Cramér–Rao Lower Bound

The Cramér–Rao lower bound allows us to derive uniformly minimum-variance unbiased estimators by finding unbiased estimators that achieve this bound. I derive the main result.

The Fisher Information

I document several properties of the Fisher information, or the variance of the derivative of the log likelihood.

Proof of the Rao–Blackwell Theorem

I walk the reader through a proof of the Rao–Blackwell theorem.

Lagrange Polynomials

In numerical analysis, the Lagrange polynomial is the polynomial of least degree that exactly coincides with a set of data points. I provide the geometric intuition and proof of correctness for this idea.

Proof of the Law of Total Expectation

I discuss a straightforward proof of the law of total expectation with three standard assumptions.

Approximate Counting with Morris's Algorithm

Robert Morris's algorithm for counting large numbers using 8-bit registers is an early example of a sketch or data structure for efficiently processing a data stream. I introduce the algorithm and analyze its probabilistic behavior.
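A minimal sketch of the core idea (hypothetical parameters, not the post's analysis): keep a small register $c$, increment it with probability $2^{-c}$, and report $2^{c} - 1$.

```python
import random

def morris_count(n_events, seed=0):
    """Approximately count n_events using only a tiny register."""
    random.seed(seed)
    c = 0
    for _ in range(n_events):
        if random.random() < 2.0 ** (-c):  # increment with probability 2^{-c}
            c += 1
    return 2 ** c - 1  # estimator of the true count

print(morris_count(100_000))  # on the order of 1e5, with high variance
```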

Expectation–Maximization

For many latent variable models, maximizing the complete log likelihood is easier than maximizing the log likelihood. The expectation–maximization (EM) algorithm leverages this fact to construct and optimize a tight lower bound. I rederive EM.

Why Metropolis–Hastings Works

Many authors introduce Metropolis–Hastings through its acceptance criterion without explaining why this criterion allows us to sample from our target distribution. I provide a formal justification.

A Fast and Numerically Stable Implementation of the Multivariate Normal PDF

Naively computing the probability density function for the multivariate normal can be slow and numerically unstable. I work through SciPy's implementation.

A Romantic View of Markov Chains

A Markov chain is ergodic if and only if it has at most one recurrent class and is aperiodic. A sketch of a proof of this theorem hinges on an intuitive probabilistic idea called "coupling" that is worth understanding.

Interpreting Expectations and Medians as Minimizers

I show how several properties of the distribution of a random variable—the expectation, conditional expectation, and median—can be viewed as solutions to optimization problems.

Pólya-Gamma Augmentation

Bayesian inference for models with binomial likelihoods is hard, but in a 2013 paper, Nicholas Polson and his coauthors introduced a new method for fast Bayesian inference using Gibbs sampling. I discuss their main results in detail.

Completing the Square

Completing the square is useful in elementary algebra, but the operation also arises frequently when manipulating Gaussian random variables. I review and document both the univariate and multivariate cases.
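For reference, the univariate identity (standard algebra, stated here for orientation):

$$
ax^2 + bx + c = a\left(x + \frac{b}{2a}\right)^2 + c - \frac{b^2}{4a}, \qquad a \neq 0.
$$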

A Poisson–Gamma Mixture Is Negative-Binomially Distributed

We can view the negative binomial distribution as a Poisson distribution with a gamma prior on the rate parameter. I work through this derivation in detail.
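A quick numerical sanity check of the claim (my own sketch with hypothetical parameter values; in SciPy's parameterization, gamma shape $r$ and scale $\theta$ correspond to a negative binomial with $p = 1/(1+\theta)$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r, theta = 3.0, 2.0  # gamma shape and scale

# Poisson draws whose rate parameters are themselves gamma distributed.
lam = rng.gamma(shape=r, scale=theta, size=200_000)
x = rng.poisson(lam)

# Compare empirical frequencies to the negative binomial pmf.
ks = np.arange(10)
empirical = np.array([(x == k).mean() for k in ks])
analytic = stats.nbinom.pmf(ks, r, 1.0 / (1.0 + theta))
print(np.round(empirical, 3))
print(np.round(analytic, 3))
```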

A Practical Implementation of Gaussian Process Regression

I discuss Rasmussen and Williams's Algorithm 2.1 for an efficient implementation of Gaussian process regression.

Sampling: Two Basic Algorithms

Numerical sampling uses randomized algorithms to sample from and estimate properties of distributions. I explain two basic sampling algorithms, rejection sampling and importance sampling.

Bayesian Online Changepoint Detection

Adams and MacKay's 2007 paper, "Bayesian Online Changepoint Detection", introduces a modular Bayesian framework for online estimation of changes in the generative parameters of sequential data. I discuss this paper in detail.

Gaussian Process Regression with Code Snippets

The definition of a Gaussian process is fairly abstract: it is an infinite collection of random variables, any finite number of which are jointly Gaussian. I work through this definition with an example and provide several complete code snippets.

Laplace's Method

Laplace's method is used to approximate a distribution with a Gaussian. I explain the technique in general and work through an exercise by David MacKay.

Bayesian Inference for the Gaussian

I work through several cases of Bayesian parameter estimation of Gaussian models.

The Exponential Family

Probability distributions that are members of the exponential family have mathematically convenient properties for Bayesian inference. I provide the general form, work through several examples, and discuss several important properties.

Conjugacy in Bayesian Inference

Conjugacy is an important property in exact Bayesian inference. I work through Bishop's example of a beta conjugate prior for the binomial distribution and explore why conjugacy is useful.

Random Noise and the Central Limit Theorem

Many probabilistic models assume random noise is Gaussian distributed. I explain at least part of the motivation for this, which is grounded in the Central Limit Theorem.

The KL Divergence: From Information to Density Estimation

The KL divergence, also known as "relative entropy", is a commonly used measure for density estimation. I rederive the relationships between probabilities, entropy, and relative entropy for quantifying similarity between distributions.

Floating Point Precision with Log Likelihoods

Computing the log likelihood is a common task in probabilistic machine learning, but it can easily under- or overflow. I discuss one such issue and its resolution.

Randomized Singular Value Decomposition

Halko, Martinsson, and Tropp's 2011 paper, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions", introduces a modular framework for randomized matrix decompositions. I discuss this paper in detail with a focus on randomized SVD.

Proof of Bessel's Correction

Bessel's correction is the division of the sample variance by $N - 1$ rather than $N$. I walk the reader through a quick proof that this correction results in an unbiased estimator of the population variance.
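A quick Monte Carlo illustration (my own sketch, not the proof): averaging many small-sample variances shows that dividing by $N$ is biased low while dividing by $N - 1$ is not.

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0
samples = rng.normal(0.0, np.sqrt(true_var), size=(100_000, 5))  # many samples of size N = 5

biased = samples.var(axis=1, ddof=0).mean()    # divide by N
unbiased = samples.var(axis=1, ddof=1).mean()  # divide by N - 1 (Bessel's correction)

print(biased)    # about 3.2 = 4 * (N - 1) / N
print(unbiased)  # about 4
```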

Proof of the Singular Value Decomposition

I walk the reader carefully through Gilbert Strang's existence proof of the singular value decomposition.

Singular Value Decomposition as Simply as Possible

Singular Value Decomposition (SVD) is a powerful and ubiquitous tool for matrix factorization, but explanations often provide little intuition. My goal is to explain SVD as simply as possible before working towards the formal definition.

Woodbury Matrix Identity for Factor Analysis

In factor analysis, the Woodbury matrix identity allows us to invert the covariance matrix of our data $\mathbf{x}$ in $O(k^3)$ time rather than $O(p^3)$ time, where $k$ and $p$ are the latent and data dimensions respectively. I explain and implement the technique.
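A minimal NumPy check of the identity in the factor analysis setting (hypothetical dimensions; the covariance is $\boldsymbol{\Sigma} = \mathbf{W}\mathbf{W}^{\top} + \boldsymbol{\Psi}$ with diagonal $\boldsymbol{\Psi}$):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 200, 5  # data and latent dimensions
W = rng.standard_normal((p, k))       # factor loadings
psi = rng.uniform(0.5, 1.5, size=p)   # diagonal noise variances
Sigma = W @ W.T + np.diag(psi)        # p x p covariance of the data

# Woodbury: only a k x k system is solved, instead of inverting the p x p covariance.
Psi_inv = np.diag(1.0 / psi)
M = np.eye(k) + W.T @ Psi_inv @ W     # k x k
Sigma_inv = Psi_inv - Psi_inv @ W @ np.linalg.solve(M, W.T @ Psi_inv)

print(np.allclose(Sigma_inv, np.linalg.inv(Sigma)))  # True
```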

Modeling Repulsion with Determinantal Point Processes

Determinantal point processes are point processes characterized by the determinant of a positive semi-definite matrix, but what this means is not necessarily obvious. I explain how such a process can model repulsive systems.

A Geometrical Understanding of Matrices

My college course on linear algebra focused on systems of linear equations. I present a geometrical understanding of matrices as linear transformations, which has helped me visualize and relate concepts from the field.

Probabilistic Canonical Correlation Analysis in Detail

Probabilistic canonical correlation analysis is a reinterpretation of CCA as a latent variable model, which has benefits such as generative modeling, handling uncertainty, and composability. I define and derive its solution in detail.

Factor Analysis in Detail

Factor analysis is a statistical method for modeling high-dimensional data using a smaller number of latent variables. It is deeply related to other probabilistic models such as probabilistic PCA and probabilistic CCA. I define the model and how to fit it in detail.

Canonical Correlation Analysis in Detail

Canonical correlation analysis is conceptually straightforward, but I want to define its objective and derive its solution in detail, both mathematically and programmatically.

Dot Product: Equivalence of Definitions

The dot product has two definitions, one algebraic and one geometric. The relationship between the two may not be immediately obvious. I explain why they make sense relative to each other and then prove that they are equivalent.

An Example of Probabilistic Machine Learning

Probabilistic machine learning is a useful framework for handling uncertainty and modeling generative processes. I explore this approach by comparing two models, one with and one without a clear probabilistic interpretation.

The Reparameterization Trick

A common explanation for the reparameterization trick with variational autoencoders is that we cannot backpropagate through a stochastic node. I provide a more formal justification.

Why Backprop Goes Backward

Backpropagation is an algorithm that computes the gradient of a neural network's loss with respect to its parameters, but it may not be obvious why the algorithm uses a backward pass. The answer allows us to reconstruct backprop from first principles.

From Convolution to Neural Network

Most explanations of CNNs assume the reader understands the convolution operation and how it relates to image processing. I explore convolutions in detail and explain how they are implemented as layers in a neural network.