# Understanding and solving the normal equations

The normal equations arise in several branches of mathematics, from statistics to geometry. In this article, we discuss how they emerge and how to solve them.

## Emergence of the normal equations

1. The normal equations define the orthogonal projection of a vector onto a linear subspace.
2. They equivalently characterize the point of a linear subspace closest to a given vector (see the article on the Moore-Penrose solution).
3. They arise when solving linear least squares regression.
4. They equivalently arise when deriving the maximum likelihood estimator of a linear regression with Gaussian errors.
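As a numerical sanity check of items 1–3, the sketch below (using NumPy, since the article itself shows no code; the names `X`, `y`, and `beta` stand for $\inputmatrix$, $\outputvec$, and $\linparamv$) verifies that the least-squares residual is orthogonal to every column of the design matrix, which is precisely the statement of the normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))   # toy design matrix: 20 samples, 3 features
y = rng.normal(size=20)        # toy target vector

# Least-squares minimizer of ||y - X beta||_2, via NumPy's solver
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The residual y - X beta is orthogonal to each column of X:
# X^T (y - X beta) = 0, i.e. the normal equations hold at the minimizer.
residual = y - X @ beta
print(np.allclose(X.T @ residual, 0))  # True (up to floating-point error)
```

Equivalently, `X @ beta` is the orthogonal projection of `y` onto the column span of `X`, which ties the least-squares and projection viewpoints together.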

## The normal equations

Let $\inputmatrix \in \realset^{\ndataset\times \inputdim}$ be a matrix and $\outputvec \in \realset^{\ndataset}$ a vector. The normal equations are written in matrix form as follows:

$$\inputmatrix^{\top}\inputmatrix\linparamv = \inputmatrix^{\top}\outputvec.$$

As stated in the previous section, a unique solution $\linparamv$ to these equations is at the same time:

1. the Moore-Penrose solution to: $\argmin_{\linparamv} \normtwo{\outputvec - \inputmatrix\linparamv}$;
2. and the vector such that $\inputmatrix\linparamv$ is the orthogonal projection of $\outputvec$ onto $\span(\inputmatrix)$.
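The two characterizations above can be checked against each other numerically. The sketch below (a NumPy illustration, not part of the original article; `X`, `y`, `beta` stand for $\inputmatrix$, $\outputvec$, $\linparamv$) solves the normal equations directly, then computes the orthogonal projection of `y` onto $\span(\inputmatrix)$ independently from a reduced QR factorization:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 2))   # toy design matrix with full column rank
y = rng.normal(size=10)

# Solve the normal equations X^T X beta = X^T y directly
beta = np.linalg.solve(X.T @ X, X.T @ y)

# Orthogonal projection of y onto span(X), computed independently:
# the columns of Q from a reduced QR factorization span span(X).
Q, _ = np.linalg.qr(X)
projection = Q @ (Q.T @ y)

print(np.allclose(X @ beta, projection))  # True
```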

## Solving the normal equations

When the matrix $\inputmatrix^{\top}\inputmatrix$ has rank $\inputdim$, it is invertible and the normal equations admit a unique solution, expressed using the Moore-Penrose inverse of $\inputmatrix$:

$$\linparamv = (\inputmatrix^{\top}\inputmatrix)^{-1}\inputmatrix^{\top}\outputvec.$$
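In code, this closed form can be evaluated either through NumPy's Moore-Penrose inverse or, preferably, by solving the linear system without forming an explicit inverse. The sketch below (an illustration with assumed toy data; `X`, `y`, `beta` stand for $\inputmatrix$, $\outputvec$, $\linparamv$) shows that both routes agree in the full-rank case:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(15, 4))   # Gaussian entries: full column rank with probability 1
y = rng.normal(size=15)

# Unique solution via the Moore-Penrose inverse X^+ = (X^T X)^{-1} X^T
beta_pinv = np.linalg.pinv(X) @ y

# Solving the system X^T X beta = X^T y is cheaper and more
# numerically stable than forming an explicit (pseudo-)inverse.
beta_solve = np.linalg.solve(X.T @ X, X.T @ y)

print(np.allclose(beta_pinv, beta_solve))  # True
```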

When $\inputmatrix^{\top}\inputmatrix$ has rank strictly less than $\inputdim$, the normal equations form an underdetermined system and infinitely many solutions exist. As discussed in the article on the Moore-Penrose inverse, we can use an optimization algorithm such as gradient descent or stochastic gradient descent to find one numerically, or remove some columns from $\inputmatrix$ to reduce $\inputdim$.
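To illustrate the gradient-descent route on a rank-deficient problem, the sketch below (assumed toy data; the step size and iteration count are illustrative choices, not prescriptions) minimizes $\tfrac{1}{2}\normtwo{\outputvec - \inputmatrix\linparamv}^2$, whose gradient is $\inputmatrix^{\top}(\inputmatrix\linparamv - \outputvec)$, so any stationary point satisfies the normal equations:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 2))
# Third column is a linear combination of the first two -> rank 2 < 3
X = np.column_stack([A, A[:, 0] + A[:, 1]])
y = rng.normal(size=30)

# Gradient descent on 0.5 * ||y - X beta||^2; the gradient is X^T (X beta - y).
beta = np.zeros(3)
step = 1.0 / np.linalg.eigvalsh(X.T @ X).max()  # guaranteed-convergent step size
for _ in range(5000):
    beta -= step * (X.T @ (X @ beta - y))

# beta is one of the infinitely many solutions of the normal equations
print(np.allclose(X.T @ (y - X @ beta), 0, atol=1e-8))  # True
```

Which solution gradient descent reaches depends on the starting point; starting from zero, it converges to the minimum-norm solution, the same one the Moore-Penrose inverse would return.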