I learned very early the difference between knowing the name of something and knowing something.
Richard Feynman

09 December 2023
A useful view of a covariance matrix is that it is a natural generalization of variance to higher dimensions. I explore this idea.
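As a quick illustration of that view (a minimal NumPy sketch of my own, not code from the post): the diagonal of a covariance matrix holds the ordinary per-coordinate variances, and projecting the data onto any unit vector recovers a one-dimensional variance.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D data: standard normals pushed through a linear map.
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [1.0, 1.0]])

S = np.cov(X, rowvar=False)  # 2x2 sample covariance matrix

# The diagonal recovers the per-coordinate variances.
assert np.allclose(np.diag(S), X.var(axis=0, ddof=1))

# For any unit vector u, u^T S u is the variance of the data projected onto u.
u = np.array([0.6, 0.8])
assert np.isclose(u @ S @ u, (X @ u).var(ddof=1))
```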
Matrices as Functions, Matrices as Data
28 August 2022
I discuss two views of matrices: matrices as linear functions and matrices as data. The second view is particularly useful in understanding dimension reduction methods.
20 March 2022
Conjugate gradient descent (CGD) is an iterative algorithm for minimizing quadratic functions. CGD uses a kind of orthogonality (conjugacy) to efficiently search for the minimum. I present CGD by building it up from gradient descent.
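A minimal sketch of the algorithm described above, in NumPy (the function name `conjugate_gradient` is mine, for illustration): each step does an exact line search along the current direction, then builds the next direction to be conjugate to the previous one.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Minimize (1/2) x^T A x - b^T x for symmetric positive definite A."""
    x = x0.astype(float)
    r = b - A @ x          # residual = negative gradient at x
    p = r.copy()           # first search direction is steepest descent
    for _ in range(len(b)):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (p @ A @ p)     # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        beta = (r_new @ r_new) / (r @ r)  # coefficient making p_new A-conjugate to p
        p = r_new + beta * p
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2))
assert np.allclose(A @ x, b)  # minimizer solves Ax = b
```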
Understanding Positive Definite Matrices
27 February 2022
I discuss a geometric interpretation of positive definite matrices and how this relates to various properties of them, such as positive eigenvalues, positive determinants, and decomposability. I also discuss their importance in quadratic programming.
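A quick numerical check of those properties (a NumPy sketch, not code from the post), using a matrix constructed to be positive definite:

```python
import numpy as np

B = np.array([[1.0, 2.0], [0.0, 1.0]])
A = B.T @ B + np.eye(2)   # B^T B + I is positive definite by construction

assert np.all(np.linalg.eigvalsh(A) > 0)   # positive eigenvalues
assert np.linalg.det(A) > 0                # hence a positive determinant
L = np.linalg.cholesky(A)                  # decomposability: A = L L^T
assert np.allclose(L @ L.T, A)
x = np.array([3.0, -1.0])
assert x @ A @ x > 0                       # defining property, for any x != 0
```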
16 January 2022
The locus defined by a convex combination of two points is the line between them. I provide some geometric intuition for this fact and then prove it.
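A small numerical sanity check of the claim (my own sketch, not from the post): every convex combination of two points is collinear with them and sits between them.

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([4.0, -2.0])

for t in np.linspace(0.0, 1.0, 11):
    p = t * a + (1 - t) * b   # convex combination of a and b
    # p - b is parallel to a - b, so p lies on the line through a and b:
    # the 2-D cross product (determinant) of the two vectors is zero.
    cross = (p - b)[0] * (a - b)[1] - (p - b)[1] * (a - b)[0]
    assert np.isclose(cross, 0.0)
    # And t in [0, 1] keeps p between the endpoints.
    assert min(a[0], b[0]) - 1e-9 <= p[0] <= max(a[0], b[0]) + 1e-9
```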
I formalize and visualize several important concepts in linear algebra: linear independence and dependence, orthogonality and orthonormality, and basis. Finally, I discuss the Gram–Schmidt algorithm, an algorithm for converting a basis into an orthonormal basis.
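The Gram–Schmidt procedure mentioned above can be sketched in a few lines of NumPy (an illustrative implementation of mine, assuming the input columns are linearly independent):

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V via classical Gram-Schmidt."""
    Q = np.zeros_like(V, dtype=float)
    for j in range(V.shape[1]):
        v = V[:, j].astype(float)
        for i in range(j):
            # Subtract the projection onto each earlier orthonormal vector.
            v = v - (Q[:, i] @ V[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)  # normalize what remains
    return Q

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(V)
assert np.allclose(Q.T @ Q, np.eye(3))  # columns are orthonormal
```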
Why Shouldn't I Invert That Matrix?
09 December 2020
A standard claim in textbooks and courses in numerical linear algebra is that one should not invert a matrix A to solve for x in Ax = b. I explore why this is typically true.
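In NumPy, the two approaches look like this (a sketch of mine, not code from the post): both produce a solution, but the factorization-based solve avoids forming the inverse, which is both cheaper and typically more accurate.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 100))
b = rng.normal(size=100)

x_solve = np.linalg.solve(A, b)   # factorization-based solve (preferred)
x_inv = np.linalg.inv(A) @ b      # explicit inverse (discouraged)

assert np.allclose(A @ x_solve, b)
assert np.allclose(A @ x_inv, b)
```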
Matrix Multiplication as the Sum of Outer Products
17 July 2020
The transpose of a matrix times itself is equal to the sum of outer products created by the rows of the matrix. I prove this identity.
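The identity is easy to verify numerically (a quick NumPy check of my own):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))

# A^T A equals the sum of outer products formed from the rows of A.
outer_sum = sum(np.outer(row, row) for row in A)
assert np.allclose(A.T @ A, outer_sum)
```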
02 July 2020
The sum of two equations that are quadratic in x is a single quadratic form in x. I work through this derivation in detail.
18 September 2019
This operation, while useful in elementary algebra, also arises frequently when manipulating Gaussian random variables. I review and document both the univariate and multivariate cases.
Randomized Singular Value Decomposition
17 January 2019
Halko, Martinsson, and Tropp's 2011 paper, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions", introduces a modular framework for randomized matrix decompositions. I discuss this paper in detail with a focus on randomized SVD.
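The two-stage structure of the framework can be sketched compactly in NumPy (my own illustrative prototype of the randomized SVD idea, not the paper's reference code): first approximate the range of the matrix with a random test matrix, then compute an exact SVD of the small projected problem.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Rank-k randomized SVD in the spirit of Halko et al. (2011)."""
    rng = np.random.default_rng(seed)
    # Stage A: sample the range of A with a random Gaussian test matrix.
    Omega = rng.normal(size=(A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sampled range
    # Stage B: SVD of the small projected matrix, lifted back by Q.
    B = Q.T @ A
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 20)) @ rng.normal(size=(20, 100))  # rank 20
U, s, Vt = randomized_svd(A, k=20)
# With k equal to the true rank, the approximation is (numerically) exact.
assert np.allclose((U * s) @ Vt, A)
```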
Proof of the Singular Value Decomposition
20 December 2018
I walk the reader carefully through Gilbert Strang's existence proof of the singular value decomposition.
Singular Value Decomposition as Simply as Possible
10 December 2018
The singular value decomposition (SVD) is a powerful and ubiquitous tool for matrix factorization, but explanations often provide little intuition. My goal is to explain the SVD as simply as possible before working towards the formal definition.
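For concreteness (a NumPy sketch of my own): the SVD factors any matrix into an orthogonal matrix, a diagonal scaling, and another orthogonal matrix, and multiplying the factors back together recovers the original.

```python
import numpy as np

A = np.array([[3.0, 0.0], [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)

# U and V are orthogonal (rotations/reflections); s holds the scalings.
assert np.allclose(U.T @ U, np.eye(2))
assert np.allclose(Vt @ Vt.T, np.eye(2))
assert np.allclose(U @ np.diag(s) @ Vt, A)
```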
Woodbury Matrix Identity for Factor Analysis
30 November 2018
In factor analysis, the Woodbury matrix identity allows us to invert the covariance matrix of our data in O(k³) time rather than O(d³) time, where k and d are the latent and data dimensions respectively. I explain and implement the technique.
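One standard form of the identity, sketched in NumPy (my own illustration, not the post's code): for a factor-analysis covariance W Wᵀ + Ψ with diagonal Ψ, the only dense inversion needed is of a small k×k matrix.

```python
import numpy as np

def woodbury_inverse(W, psi_diag):
    """Invert W W^T + diag(psi_diag) by solving a k x k system, not d x d."""
    d, k = W.shape
    Psi_inv = np.diag(1.0 / psi_diag)        # diagonal inverse is cheap
    small = np.eye(k) + W.T @ Psi_inv @ W    # the only k x k system to solve
    return Psi_inv - Psi_inv @ W @ np.linalg.solve(small, W.T @ Psi_inv)

rng = np.random.default_rng(0)
d, k = 50, 3
W = rng.normal(size=(d, k))
psi = rng.uniform(0.5, 2.0, size=d)
Sigma = W @ W.T + np.diag(psi)
assert np.allclose(woodbury_inverse(W, psi) @ Sigma, np.eye(d))
```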
Modeling Repulsion with Determinantal Point Processes
06 November 2018
Determinantal point processes are point processes characterized by the determinant of a positive semi-definite matrix, but what this means is not necessarily obvious. I explain how such a process can model repulsive systems.
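A tiny illustration of the repulsion (a NumPy sketch of mine, using a hypothetical 3-item kernel): subsets containing similar items get smaller determinants, hence lower probability.

```python
import numpy as np

def dpp_prob(L, S):
    """Unnormalized probability that a DPP with kernel L selects subset S."""
    S = sorted(S)
    return np.linalg.det(L[np.ix_(S, S)])

# Kernel L: diagonal entries are item "qualities", off-diagonals similarities.
L = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Items 0 and 1 are highly similar; item 2 is dissimilar to both.
# Repulsion: the similar pair is much less likely than the dissimilar one.
assert dpp_prob(L, {0, 1}) < dpp_prob(L, {0, 2})  # 0.19 < 1.0
```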
A Geometrical Understanding of Matrices
24 October 2018
My college course on linear algebra focused on systems of linear equations. I present a geometrical understanding of matrices as linear transformations, which has helped me visualize and relate concepts from the field.
26 June 2018
The dot product is often presented as both an algebraic and a geometric operation. The relationship between these two ideas may not be immediately obvious. I prove that they are equivalent and explain why the relationship makes sense.
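The equivalence is easy to check numerically (my own sketch): compute each vector's angle independently, then compare the geometric formula ‖a‖‖b‖cos θ against the coordinate-wise sum.

```python
import numpy as np

a = np.array([3.0, 0.0])
b = np.array([1.0, 1.0])

algebraic = float(a @ b)  # sum of coordinate-wise products

# Measure the angle between a and b independently of the dot product.
theta = np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0])
geometric = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)

assert np.isclose(algebraic, geometric)  # both equal 3.0 here
```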