The normal equations arise in several branches of mathematics, from statistics to geometry. In this article, we discuss how they emerge and how to solve them.
## Emergence of the normal equations
- The normal equations define the orthogonal projection of a vector onto a linear subspace.
- They equivalently define the point of a linear subspace that is closest to a given vector (see: the Moore-Penrose solution).
- They arise when solving linear least-squares regression.
- They equivalently arise when deriving the maximum likelihood estimator of a linear regression with Gaussian errors, as sketched below.
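To make the last equivalence concrete, here is a minimal sketch, assuming i.i.d. Gaussian errors $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$ (this error model is an assumption spelled out here for illustration). For $N$ observations following the model $\vec{y} = X\vec{w} + \vec{\varepsilon}$, the log-likelihood is

$$\log L(\vec{w}) = -\frac{N}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\lVert \vec{y} - X\vec{w} \rVert^2.$$

Only the second term depends on $\vec{w}$, so maximizing the likelihood over $\vec{w}$ is exactly minimizing the squared error $\lVert \vec{y} - X\vec{w} \rVert^2$.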
## The normal equations
Let $X \in \mathbb{R}^{N \times D}$ be a matrix and $\vec{y} \in \mathbb{R}^N$ a vector. The normal equations are written in matrix form as follows:

$$X^\top(\vec{y} - X\vec{w}) = \vec{0}$$

As stated in the previous section, a solution $\vec{w}$ to these equations is simultaneously:
- the Moore-Penrose solution to $\operatorname{argmin}_{\vec{w}} \lVert \vec{y} - X\vec{w} \rVert^2$;
- and the coefficient vector for which $X\vec{w}$ is the orthogonal projection of $\vec{y}$ onto $\operatorname{span}(X)$.
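As a quick check of this equivalence, expand the squared error and differentiate with respect to $\vec{w}$ (a one-line sketch):

$$\nabla_{\vec{w}} \lVert \vec{y} - X\vec{w} \rVert^2 = -2X^\top(\vec{y} - X\vec{w}),$$

so setting the gradient to zero recovers exactly the normal equations $X^\top(\vec{y} - X\vec{w}) = \vec{0}$.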
## Solving the normal equations
When the matrix $X^\top X$ has rank $D$, it is invertible and the normal equations admit a unique solution, expressed using the Moore-Penrose inverse of $X$:

$$\vec{w} = X^\dagger \vec{y} = (X^\top X)^{-1} X^\top \vec{y}$$

When $X^\top X$ has rank $< D$, the normal equations form an underdetermined system and several solutions exist. As discussed in the article about the Moore-Penrose inverse, we can use an optimization algorithm such as gradient descent or stochastic gradient descent to find one numerically, or remove some columns from $X$ to reduce $D$.
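To illustrate both routes numerically, here is a minimal NumPy sketch (the sizes, seed, step size, and iteration count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 50, 3                      # arbitrary example sizes
X = rng.normal(size=(N, D))       # a design matrix, full rank D here
y = rng.normal(size=N)

# Closed form: w = X^+ y. pinv also returns a solution (the minimum-norm one)
# when X^T X is singular.
w_pinv = np.linalg.pinv(X) @ y

# Equivalent when X^T X has rank D: solve the normal equations directly.
w_solve = np.linalg.solve(X.T @ X, X.T @ y)

# Gradient descent on f(w) = ||y - Xw||^2, whose gradient is -2 X^T (y - Xw).
w = np.zeros(D)
lr = 1e-3                         # arbitrary step size
for _ in range(10_000):
    w += 2 * lr * X.T @ (y - X @ w)   # w <- w - lr * grad f(w)

print(np.allclose(w_pinv, w_solve))   # True: same unique solution
print(np.allclose(w, w_pinv))         # True: gradient descent converges here
```

In practice, `np.linalg.lstsq(X, y, rcond=None)` computes the same minimum-norm least-squares solution via an SVD and handles the rank-deficient case without forming $X^\top X$.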