I learned very early the difference between knowing the name of something and knowing something.

Richard Feynman

An Intuitive Explanation of Black–Scholes

I explain the Black–Scholes formula using only basic probability theory and calculus, with a focus on the big picture and intuition over technical details.

Expectation of the Truncated Lognormal Distribution

I derive the expected value of a random variable that is left-truncated and lognormally distributed.

The Prime Factors of 999999

The prime factorization of 999999 allows us to compute repeating decimals for some common fractions. I work through this idea.
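
For a taste of the idea (a worked example of my own, not necessarily the post's): because $142857 \times 7 = 999999$, the decimal expansion of $1/7$ repeats with period six,

$$
999999 = 3^3 \cdot 7 \cdot 11 \cdot 13 \cdot 37,
\qquad
\frac{1}{7} = \frac{142857}{999999} = 0.\overline{142857}.
$$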

Simulating Geometric Brownian Motion

I work through a simple Python implementation of geometric Brownian motion and check it against the theoretical model.
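
As a sketch of what such a simulation might look like (my own minimal NumPy example under standard GBM dynamics, not the post's code):

```python
import numpy as np

def simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, T=1.0, n_steps=252, n_paths=10000, seed=0):
    """Simulate geometric Brownian motion paths using the exact discretization."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # S_{t+dt} = S_t * exp((mu - sigma^2/2) * dt + sigma * sqrt(dt) * Z)
    log_steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.cumsum(log_steps, axis=1)
    return s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

paths = simulate_gbm()
# Compare the simulated terminal mean with the theoretical value E[S_T] = S_0 * exp(mu * T).
print(paths[:, -1].mean(), 100.0 * np.exp(0.05 * 1.0))
```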

Bienaymé's Identity

In probability theory, Bienaymé's identity is a formula for the variance of a sum of random variables. I provide a little intuition for the identity and then prove it.
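
For reference, the identity in symbols is

$$
\operatorname{Var}\!\left(\sum_{i=1}^{n} X_i\right)
= \sum_{i=1}^{n} \operatorname{Var}(X_i)
+ \sum_{i \neq j} \operatorname{Cov}(X_i, X_j),
$$

which reduces to the sum of the individual variances when the $X_i$ are pairwise uncorrelated.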

Lognormal Distribution

I derive some basic properties of the lognormal distribution.
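
For example, a few standard properties (stated here for orientation; the post's own list may differ): if $X = e^{Y}$ with $Y \sim \mathcal{N}(\mu, \sigma^2)$, then

$$
\mathbb{E}[X] = e^{\mu + \sigma^2/2},
\qquad
\operatorname{median}(X) = e^{\mu},
\qquad
\operatorname{Var}(X) = \bigl(e^{\sigma^2} - 1\bigr) e^{2\mu + \sigma^2}.
$$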

High-Dimensional Variance

A useful view of a covariance matrix is that it is a natural generalization of variance to higher dimensions. I explore this idea.

Correlation and Hedging

A mean–variance optimizer will hedge correlated assets. I explain why and then work through a simple example.

The Greeks

In finance, the "Greeks" refer to the partial derivatives of an option pricing model with respect to its inputs. They are important for understanding how an option's price may change. I discuss the Black–Scholes Greeks in detail.

Deriving the VIX

The VIX is a benchmark for market-implied volatility. It is computed from a weighted sum of option prices that approximates the fair strike of a variance swap. I first derive the fair strike for a variance swap and then discuss the VIX's approximation of this formula.

Estimating ATM Option Prices

I work through a well-known approximation of the Black–Scholes price of at-the-money (ATM) options.

Proof the Binomial Model Converges to Black–Scholes

The binomial options-pricing model converges to Black–Scholes as the number of steps in fixed physical time goes to infinity. I present Chi-Cheng Hsia's 1983 proof of this result.

Binomial Options-Pricing Model

I present a simple yet useful model for pricing European-style options, called the binomial options-pricing model. It provides good intuition into pricing options without any advanced mathematics.

fortunai

I describe the process of using ChatGPT-3.5 to write a program that uses OpenAI's API. The program generates LLM fortunes à la the Unix command 'fortune'.

Problem Solving with Dimensional Analysis

Dimensional analysis is the technique of analyzing relationships through their base quantities. I demonstrate the power of this approach by approximating a Gaussian integral without calculus.

Estimating Square Roots in Your Head

I explore an ancient algorithm, sometimes called Heron's method, for estimating square roots without a calculator.
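
A minimal sketch of the iteration (my own illustration; `heron_sqrt` is just an illustrative name): repeatedly average the current guess $x$ with $S/x$.

```python
def heron_sqrt(s, x=1.0, n_iters=5):
    """Estimate sqrt(s) by repeatedly averaging the guess x with s / x."""
    for _ in range(n_iters):
        x = 0.5 * (x + s / x)
    return x

print(heron_sqrt(10))  # ~3.16228, vs. 10 ** 0.5 = 3.1622776...
```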

Carr–Madan Formula

In the options-pricing literature, the Carr–Madan formula equates a derivative's nonlinear payoff function with a portfolio of options. I describe and prove this relationship.

One-Period Binomial Model

The binomial options-pricing model is a numerical method for valuing options. I explore this model over a single time period and focus on two key ideas: the no-arbitrage condition and risk-neutral pricing.

Principal Component Analysis

Principal component analysis (PCA) is a simple, fast, and elegant linear method for data analysis. I explore PCA in detail, first with pictures and intuition, then with linear algebra and detailed derivations, and finally with code.

Matrices as Functions, Matrices as Data

I discuss two views of matrices: matrices as linear functions and matrices as data. The second view is particularly useful in understanding dimension reduction methods.

Scaling Factors for Hidden Markov Models

Inference for hidden Markov models (HMMs) is numerically unstable. A standard approach to resolving this instability is to use scaling factors. I discuss this idea in detail.

Weighted Least Squares

Weighted least squares (WLS) is a generalization of ordinary least squares in which each observation is assigned a weight, which scales the squared residual error. I discuss WLS and then derive its estimator in detail.

The Sharpe Ratio

The Sharpe ratio measures a financial strategy's performance as the ratio of its reward to its variability. I discuss this metric in detail, particularly its relationship to the information ratio and $t$-statistics.
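
For reference, the ratio in its standard ex-ante form is

$$
\text{SR} = \frac{\mathbb{E}[R_p - R_f]}{\sigma(R_p - R_f)},
$$

where $R_p$ is the portfolio or strategy return and $R_f$ is the risk-free rate.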

How Dangerous Is Biking in New York?

I estimate my probability of serious injury or death from bike commuting to work in New York, using public data from the city's Department of Transportation.

Moving Averages

I discuss moving or rolling averages, which are algorithms to compute means over different subsets of sequential data.

Square Root of Time Rule

A common heuristic for time-aggregating volatility is the square root of time rule. I discuss the big idea for this rule and then provide the mathematical assumptions underpinning it.
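
For example, under this rule a daily volatility scales to an annual one (assuming roughly 252 trading days per year) as

$$
\sigma_{\text{annual}} \approx \sigma_{\text{daily}} \sqrt{252}.
$$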

Exponential Decay

Many phenomena can be modeled as exponential decay. I discuss this model in detail, focusing on natural exponential decay (base $e$) and various useful properties.
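
For reference, the model in its natural-exponential form, with decay constant $\lambda > 0$ and half-life $t_{1/2}$, is

$$
N(t) = N_0 \, e^{-\lambda t},
\qquad
t_{1/2} = \frac{\ln 2}{\lambda}.
$$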

Factor Modeling in Finance

I discuss multi-factor modeling, which generalizes many early financial models into a common prediction and risk framework.

Research and Adventure

During my PhD, I went hiking alone in a remote region of Iceland. Over the years, I've come to view this trip as analogous to the PhD process. Graduate school was hard, but on the warm days, the views were spectacular.

Conjugate Gradient Descent

Conjugate gradient descent (CGD) is an iterative algorithm for minimizing quadratic functions. CGD uses a kind of orthogonality (conjugacy) to efficiently search for the minimum. I present CGD by building it up from gradient descent.

The Capital Asset Pricing Model

In finance, the capital asset pricing model (CAPM) was the first theory to measure systematic risk. The CAPM argues that there is a single type of risk, market risk. I derive the CAPM from the mean–variance framework of modern portfolio theory.

Generalized Least Squares

I discuss generalized least squares (GLS), which extends ordinary least squares by allowing the errors to be heteroscedastic or correlated. I prove some basic properties of GLS, particularly that it is the best linear unbiased estimator, and work through a complete example.

Understanding Positive Definite Matrices

I discuss a geometric interpretation of positive definite matrices and how it relates to their other properties, such as positive eigenvalues, positive determinants, and decomposability. I also discuss their importance in quadratic programming.

The Gauss–Markov Theorem

I discuss and prove the Gauss–Markov theorem, which states that under certain conditions, the least squares estimator is the minimum-variance linear unbiased estimator of the model parameters.

Returns and Log Returns

I discuss prices, returns, cumulative returns, and log returns, with a special focus on some nice mathematical properties of log returns.
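
One such property, stated here for reference: log returns are additive across time, since the per-period log price changes telescope,

$$
r_t = \log \frac{P_t}{P_{t-1}},
\qquad
\sum_{t=1}^{T} r_t = \log \frac{P_T}{P_0}.
$$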

Breusch–Pagan Test for Heteroscedasticity

I discuss the Breusch–Pagan test, a simple hypothesis test for heteroscedasticity in linear models. I also implement the test in Python and demonstrate that it can detect heteroscedasticity in a toy example.

OLS with Heteroscedasticity

The ordinary least squares estimator is inefficient when the homoscedasticity assumption does not hold. I provide a simple example of a nonsensical $t$-statistic from data with heteroscedasticity and discuss why this happens in general.

Consistency of the OLS Estimator

A consistent estimator converges in probability to the true value. I discuss this idea in general and then prove that the ordinary least squares estimator is consistent.

Convex Combinations as Lines

The locus defined by a convex combination of two points is the line between them. I provide some geometric intuition for this fact and then prove it.

Geometry of the Efficient Frontier

Some important financial ideas are encoded in the geometry of the efficient frontier, such as the tangency portfolio and the Sharpe ratio. The goal of this post is to re-derive these ideas geometrically, showing that they arise from the mean–variance analysis framework.

Autoregressive Model

Autoregressive (AR) models represent random processes in which each observation is a linear function of some of its previous values, plus noise. I present the main ideas behind AR models, including when they are stationary and how to fit them with the Yule–Walker equations.

Blockchain in 19 Lines of Python

I walk through a simple Python implementation of blockchain technology, commonly used for public ledgers in cryptocurrencies.

Hypothesis Testing for OLS

When can we be confident in our estimated coefficients when using OLS? We typically use a $t$-statistic to quantify whether an inferred coefficient is likely to have arisen by chance. I discuss hypothesis testing and $t$-statistics for OLS.

Solving Geometric Series

I describe a useful trick for computing geometric series without closed-form solutions.
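
One standard version of such a trick (my summary; the post's presentation may differ): multiply the partial sum $S$ by the common ratio $r$ and subtract,

$$
S = a + ar + \cdots + ar^{n}
\quad\Rightarrow\quad
S - rS = a - ar^{n+1}
\quad\Rightarrow\quad
S = \frac{a(1 - r^{n+1})}{1 - r}, \qquad r \neq 1.
$$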

Residual Sum of Squares in Terms of Pearson's Correlation

I re-derive a relationship between the residual sum of squares in simple linear regression and Pearson's correlation coefficient.

Visualizing Drawdown

Drawdown measures the decline of a time series variable from a historical peak. I explore visualizing and computing drawdown-based metrics.

Sampling Distribution of the OLS Estimator

I derive the mean and variance of the OLS estimator, as well as an unbiased estimator of the OLS estimator's variance. I then show that the OLS estimator is normally distributed if we assume the error terms are normally distributed.

Simple Linear Regression and Correlation

In simple linear regression, the slope parameter is a simple function of the correlation between the targets and predictors. I derive this result and discuss a few consequences.

Coefficient of Determination

In ordinary least squares, the coefficient of determination quantifies the variation in the dependent variables that can be explained by the model. However, this interpretation has a few assumptions which are worth understanding. I explore this metric and the assumptions in detail.

Multicollinearity

Multicollinearity is when two or more predictors are linearly dependent. This can impact the interpretability of a linear model's estimated coefficients. I discuss this phenomenon in detail.

Portfolio Theory: Why Diversification Matters

The casual investor knows that diversification matters. This intuition is grounded in the mathematics of modern portfolio theory. I define diversification and formalize how diversification helps maximize risk-adjusted returns.

Linear Independence, Basis, and the Gram–Schmidt Algorithm

I formalize and visualize several important concepts in linear algebra: linear independence and dependence, orthogonality and orthonormality, and basis. Finally, I discuss the Gram–Schmidt algorithm, which converts a basis into an orthonormal basis.

The ELBO in Variational Inference

I derive the evidence lower bound (ELBO) in variational inference and explore its relationship to the objective in expectation–maximization and the variational autoencoder.

Standard Errors and Confidence Intervals

How do we know when a parameter estimate from a random sample is significant? I discuss the use of standard errors and confidence intervals to answer this question.

A Python Implementation of the Multivariate Skew Normal

I needed a Python implementation of the multivariate skew normal. I wrote one based on SciPy's multivariate distributions module.

Understanding Dirichlet–Multinomial Models

The Dirichlet distribution is really a multivariate beta distribution. I discuss this connection and then derive the posterior, marginal likelihood, and posterior predictive distributions for Dirichlet–multinomial models.

Fast Computation of the Multivariate Normal PDF for Multiple Parameters

For a project, I needed to compute the log PDF of a vector for multiple pairs of mean and variance parameters. I discuss a fast Python implementation.

Why Shouldn't I Invert That Matrix?

A standard claim in textbooks and courses in numerical linear algebra is that one should not invert a matrix to solve for $\mathbf{x}$ in $\mathbf{Ax} = \mathbf{b}$. I explore why this is typically true.

Inference for Hidden Markov Models

Expectation–maximization for hidden Markov models is called the Baum–Welch algorithm, and it relies on the forward–backward algorithm for efficient computation. I review HMMs and then present these algorithms in detail.

The Unscented Transform

The unscented transform, most commonly associated with the nonlinear Kalman filter, was proposed by Jeffrey Uhlmann to estimate a nonlinear transformation of a Gaussian. I illustrate the main idea.

Conjugate Analysis for the Multivariate Gaussian

I work through Bayesian parameter estimation of the mean for the multivariate Gaussian.

A Python Demonstration that Mutual Information Is Symmetric

I provide a numerical demonstration that the mutual information of two random variables, the observations and latent variables in a Gaussian mixture model, is symmetric.

Proof that Mutual Information Is Symmetric

The mutual information (MI) of two random variables quantifies how much information (in bits or nats) is obtained about one random variable by observing the other. I discuss MI and show it is symmetric.

From Entropy Search to Predictive Entropy Search

In Bayesian optimization, a popular acquisition function is predictive entropy search, which is a clever reframing of another acquisition function, entropy search. I rederive the connection and explain why this reframing is useful.

A Unifying Review of EM for Gaussian Latent Factor Models

The expectation–maximization (EM) updates for several Gaussian latent factor models (factor analysis, probabilistic principal component analysis, probabilistic canonical correlation analysis, and inter-battery factor analysis) are closely related. I explore these relationships in detail.

Implementing Bayesian Online Changepoint Detection

I annotate my Python implementation of the framework in Adams and MacKay's 2007 paper, "Bayesian Online Changepoint Detection".

Entropy of the Gaussian

I derive the entropy for the univariate and multivariate Gaussian distributions.

Bayesian Inference for Beta–Bernoulli Models

I derive the posterior, marginal likelihood, and posterior predictive distributions for beta–Bernoulli models.

Antifragile Ideas

Thoughts on John Carmack's theory of antifragile idea generation.

Gaussian Process Dynamical Models

Wang and Fleet's 2008 paper, "Gaussian Process Dynamical Models for Human Motion", introduces a Gaussian process latent variable model with Gaussian process latent dynamics. I discuss this paper in detail.

Matrix Multiplication as the Sum of Outer Products

The transpose of a matrix times itself is equal to the sum of outer products created by the rows of the matrix. I prove this identity.
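
A quick numerical check of the identity (my own illustration, not the post's proof): if $a_i^\top$ is the $i$-th row of $A$, then $A^\top A = \sum_i a_i a_i^\top$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Sum of the outer products formed from the rows of A.
outer_sum = sum(np.outer(row, row) for row in A)

print(np.allclose(A.T @ A, outer_sum))  # True
```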

From Probabilistic PCA to the GPLVM

A Gaussian process latent variable model (GPLVM) can be viewed as a generalization of probabilistic principal component analysis (PCA) in which the latent maps are Gaussian-process distributed. I discuss this relationship.

Hamiltonian Monte Carlo

The physics of Hamiltonian Monte Carlo, part 3: In the final post in this series, I discuss Hamiltonian Monte Carlo, building off previous discussions of the Euler–Lagrange equation and Hamiltonian dynamics.

Gaussian Processes with Multinomial Observations

Linderman, Johnson, and Adam's 2015 paper, "Dependent multinomial models made easy: Stick-breaking with the Pólya-gamma augmentation", introduces a Gibbs sampler for Gaussian processes with multinomial observations. I discuss this model in detail.

Summing Quadratic Forms

The sum of two expressions that are quadratic in $\mathbf{x}$ is a single quadratic form in $\mathbf{x}$. I work through this derivation in detail.

A Stick-Breaking Representation of the Multinomial Distribution

Following Linderman, Johnson, and Adams's 2015 paper, "Dependent multinomial models made easy: Stick-breaking with the Pólya-gamma augmentation", I show that a multinomial density can be represented as a product of binomial densities.

How I Built This Blog

I have received a number of compliments on my blog's style or theme and even more requests for details on the blogging environment. So here's how I built my blog.

Lagrangian and Hamiltonian Mechanics

The physics of Hamiltonian Monte Carlo, part 2: Building off the Euler–Lagrange equation, I discuss Lagrangian mechanics, the principle of stationary action, and Hamilton's equations.

The Euler–Lagrange Equation

The physics of Hamiltonian Monte Carlo, part 1: Lagrangian and Hamiltonian mechanics are based on the principle of stationary action, formalized by the calculus of variations and the Euler–Lagrange equation. I discuss this result.

Understanding Moments

Why are a distribution's moments called "moments"? How does the equation for a moment capture the shape of a distribution? Why do we typically only study four moments? I explore these and other questions in detail.

Gibbs Sampling Is a Special Case of Metropolis–Hastings

Gibbs sampling is a computationally convenient Bayesian inference algorithm that is a special case of the Metropolis–Hastings algorithm. I discuss Gibbs sampling in the broader context of Markov chain Monte Carlo methods.

The Log-Sum-Exp Trick

Normalizing vectors of log probabilities is a common task in statistical modeling, but it can result in under- or overflow when exponentiating large values. I discuss the log-sum-exp trick for resolving this issue.
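
A minimal sketch of the trick (my own illustration; SciPy offers the same functionality as scipy.special.logsumexp): subtract the maximum before exponentiating, then add it back.

```python
import numpy as np

def log_normalize(log_p):
    """Normalize a vector of log probabilities without under- or overflow."""
    m = np.max(log_p)
    # log(sum_i exp(x_i)) = m + log(sum_i exp(x_i - m))
    log_z = m + np.log(np.sum(np.exp(log_p - m)))
    return log_p - log_z

log_p = np.array([-1000.0, -1001.0, -1002.0])  # exponentiating these directly underflows
print(np.exp(log_normalize(log_p)))  # [0.665, 0.245, 0.090], summing to 1
```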

Bayesian Linear Regression

I discuss Bayesian linear regression, or classical linear regression with a prior on the parameters. Using a particular prior as an example, I provide intuition and detailed derivations for the full model.

Can Linear Models Overfit?

We know that regularization is important for linear models, but what does overfitting mean in this context? I discuss this question.

A Python Implementation of the Multivariate t-distribution

I needed a fast and numerically stable Python implementation of the multivariate t-distribution. I wrote one based on SciPy's multivariate distributions module.

Why I Keep a Research Blog

Writing has made me a better thinker and researcher. I expand on my reasons why.

Comparing Kernel Ridge with Gaussian Process Regression

The posterior mean from a Gaussian process regressor is related to the prediction of a kernel ridge regressor. I explore this connection in detail.

Ordinary Least Squares

I discuss ordinary least squares, or linear regression in which the optimal coefficients minimize the residual sum of squares, and cover various properties and interpretations of this classic model.

Random Fourier Features

Rahimi and Recht's 2007 paper, "Random Features for Large-Scale Kernel Machines", introduces a framework for randomized, low-dimensional approximations of kernel functions. I discuss this paper in detail with a focus on random Fourier features.

Implicit Lifting and the Kernel Trick

I disentangle what I call the "lifting trick" from the kernel trick as a way of clarifying what the kernel trick is and does.

Asymptotic Normality of Maximum Likelihood Estimators

Under certain regularity conditions, maximum likelihood estimators are "asymptotically efficient", meaning that they achieve the Cramér–Rao lower bound in the limit. I discuss this result.

Proof of the Cramér–Rao Lower Bound

The Cramér–Rao lower bound allows us to derive uniformly minimum-variance unbiased estimators by finding unbiased estimators that achieve this bound. I derive the main result.

The Fisher Information

I document several properties of the Fisher information, or the variance of the derivative of the log likelihood.

Proof of the Rao–Blackwell Theorem

I walk the reader through a proof of the Rao–Blackwell theorem.

Lagrange Polynomials

In numerical analysis, the Lagrange polynomial is the polynomial of least degree that exactly coincides with a set of data points. I provide the geometric intuition and proof of correctness for this idea.

Proof of the Law of Total Expectation

I discuss a straightforward proof of the law of total expectation with three standard assumptions.

Approximate Counting with Morris's Algorithm

Robert Morris's algorithm for counting large numbers using 8-bit registers is an early example of a sketch or data structure for efficiently processing a data stream. I introduce the algorithm and analyze its probabilistic behavior.

Expectation–Maximization

For many latent variable models, maximizing the complete log likelihood is easier than maximizing the log likelihood. The expectation–maximization (EM) algorithm leverages this fact to construct and optimize a tight lower bound. I rederive EM.

Why Metropolis–Hastings Works

Many authors introduce Metropolis–Hastings through its acceptance criterion without explaining why that criterion allows us to sample from our target distribution. I provide a formal justification.

A Fast and Numerically Stable Implementation of the Multivariate Normal PDF

Naively computing the probability density function for the multivariate normal can be slow and numerically unstable. I work through SciPy's implementation.

Ergodic Markov Chains

A Markov chain is ergodic if and only if it has at most one recurrent class and is aperiodic. A sketch of a proof of this theorem hinges on an intuitive probabilistic idea called "coupling" that is worth understanding.

Interpreting Expectations and Medians as Minimizers

I show how several properties of the distribution of a random variable—the expectation, conditional expectation, and median—can be viewed as solutions to optimization problems.

Pólya-Gamma Augmentation

Bayesian inference for models with binomial likelihoods is hard, but in a 2013 paper, Nicholas Polson and his coauthors introduced a new method for fast Bayesian inference using Gibbs sampling. I discuss their main results in detail.

Completing the Square

Completing the square, while a staple of elementary algebra, also arises frequently when manipulating Gaussian random variables. I review and document both the univariate and multivariate cases.

A Poisson–Gamma Mixture Is Negative-Binomially Distributed

We can view the negative binomial distribution as a Poisson distribution with a gamma prior on the rate parameter. I work through this derivation in detail.

A Practical Implementation of Gaussian Process Regression

I discuss Rasmussen and Williams's Algorithm 2.1 for an efficient implementation of Gaussian process regression.

Sampling: Two Basic Algorithms

Numerical sampling uses randomized algorithms to sample from and estimate properties of distributions. I explain two basic sampling algorithms, rejection sampling and importance sampling.

Bayesian Online Changepoint Detection

Adams and MacKay's 2007 paper, "Bayesian Online Changepoint Detection", introduces a modular Bayesian framework for online estimation of changes in the generative parameters of sequential data. I discuss this paper in detail.

Gaussian Process Regression with Code Snippets

The definition of a Gaussian process is fairly abstract: it is an infinite collection of random variables, any finite number of which are jointly Gaussian. I work through this definition with an example and provide several complete code snippets.

Laplace's Method

Laplace's method is used to approximate a distribution with a Gaussian. I explain the technique in general and work through an exercise by David MacKay.

Bayesian Inference for the Gaussian

I work through several cases of Bayesian parameter estimation of Gaussian models.

The Exponential Family

Probability distributions that are members of the exponential family have mathematically convenient properties for Bayesian inference. I provide the general form, work through several examples, and discuss several important properties.

Conjugacy in Bayesian Inference

Conjugacy is an important property in exact Bayesian inference. I work through Bishop's example of a beta conjugate prior for the binomial distribution and explore why conjugacy is useful.

Random Noise and the Central Limit Theorem

Many probabilistic models assume random noise is Gaussian distributed. I explain at least part of the motivation for this, which is grounded in the Central Limit Theorem.

The KL Divergence: From Information to Density Estimation

The KL divergence, also known as "relative entropy", is a commonly used measure for density estimation. I re-derive the relationships between probabilities, entropy, and relative entropy for quantifying similarity between distributions.

Floating Point Precision with Log Likelihoods

Computing the log likelihood is a common task in probabilistic machine learning, but it can easily under- or overflow. I discuss one such issue and its resolution.

Randomized Singular Value Decomposition

Halko, Martinsson, and Tropp's 2011 paper, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions", introduces a modular framework for randomized matrix decompositions. I discuss this paper in detail with a focus on randomized SVD.

Proof of Bessel's Correction

Bessel's correction is the division of the sample variance by $N - 1$ rather than $N$. I walk the reader through a quick proof that this correction results in an unbiased estimator of the population variance.
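
For reference, the corrected estimator and the unbiasedness property being proved are

$$
s^2 = \frac{1}{N - 1} \sum_{i=1}^{N} (x_i - \bar{x})^2,
\qquad
\mathbb{E}[s^2] = \sigma^2.
$$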

Proof of the Singular Value Decomposition

I walk the reader carefully through Gilbert Strang's existence proof of the singular value decomposition.

Singular Value Decomposition as Simply as Possible

The singular value decomposition (SVD) is a powerful and ubiquitous tool for matrix factorization, but explanations often provide little intuition. My goal is to explain the SVD as simply as possible before working towards the formal definition.

Woodbury Matrix Identity for Factor Analysis

In factor analysis, the Woodbury matrix identity allows us to invert the covariance matrix of our data $\mathbf{x}$ in $O(k^3)$ time rather than $O(p^3)$ time, where $k$ and $p$ are the latent and data dimensions, respectively. I explain and implement the technique.

Modeling Repulsion with Determinantal Point Processes

Determinantal point processes are point processes characterized by the determinant of a positive semi-definite matrix, but what this means is not necessarily obvious. I explain how such a process can model repulsive systems.

A Geometrical Understanding of Matrices

My college course on linear algebra focused on systems of linear equations. I present a geometrical understanding of matrices as linear transformations, which has helped me visualize and relate concepts from the field.

Probabilistic Canonical Correlation Analysis in Detail

Probabilistic canonical correlation analysis is a reinterpretation of CCA as a latent variable model, which has benefits such as generative modeling, handling uncertainty, and composability. I define and derive its solution in detail.

Factor Analysis in Detail

Factor analysis is a statistical method for modeling high-dimensional data using a smaller number of latent variables. It is deeply related to other probabilistic models such as probabilistic PCA and probabilistic CCA. I define the model and how to fit it in detail.

Canonical Correlation Analysis in Detail

Canonical correlation analysis is conceptually straightforward, but I want to define its objective and derive its solution in detail, both mathematically and programmatically.

Two Forms of the Dot Product

The dot product is often presented as both an algebraic and a geometric operation. The relationship between these two ideas may not be immediately obvious. I prove that they are equivalent and explain why the relationship makes sense.

An Example of Probabilistic Machine Learning

Probabilistic machine learning is a useful framework for handling uncertainty and modeling generative processes. I explore this approach by comparing two models, one with and one without a clear probabilistic interpretation.

The Reparameterization Trick

A common explanation for the reparameterization trick with variational autoencoders is that we cannot backpropagate through a stochastic node. I provide a more formal justification.

Why Backprop Goes Backward

Backpropagation is an algorithm that computes the gradient of a neural network, but it may not be obvious why the algorithm uses a backward pass. The answer allows us to reconstruct backprop from first principles.

From Convolution to Neural Network

Most explanations of CNNs assume the reader understands the convolution operation and how it relates to image processing. I explore convolutions in detail and explain how they are implemented as layers in a neural network.