How to assess an OLS regression?

We’ve just fitted an OLS regression to our training set. How do we assess whether it was a good model to use? We will answer this question from the point of view of both machine learning and statistics.

Recall from our article about the difference between statistics and machine learning that the two fields assess their models differently: machine learning relies on the large size of modern datasets, while statistics relies on theory.

So, how do we assess an OLS regression?

Machine learning

In machine learning, we measure a model’s performance using a loss function $L$. The fit is assessed by the training error, the average loss over the training set $\trainset$:

$$\frac{1}{|\trainset|} \sum_{\si \in \trainset} L\big(\sy_\si, \hat{\sy}_\si\big)$$

While the generalization error is assessed by computing the average loss on the test set $\testset$:

$$\frac{1}{|\testset|} \sum_{\si \in \testset} L\big(\sy_\si, \hat{\sy}_\si\big)$$

To be meaningful, these measures require enough data.
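A minimal sketch of this measurement, assuming squared loss and a simple hold-out split (the synthetic data and the 75/25 split are illustrative choices, not prescribed by the article):

```python
# Fit OLS on a training set with numpy and compare training vs. test
# mean-squared error. Data and split proportions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=n)

# Hold out 25% of the rows as a test set.
split = int(0.75 * n)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# OLS fit on the training set only.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

train_mse = np.mean((y_train - X_train @ w) ** 2)  # fit
test_mse = np.mean((y_test - X_test @ w) ** 2)     # generalization
print(train_mse, test_mse)
```

With enough data, both numbers sit close to the irreducible noise variance; a large gap between them signals overfitting.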

As for preprocessing, we can rely on automated feature-selection methods such as forward selection or backward elimination. But as we will see in the next section, the statistician’s approach provides useful graphs that can guide and speed up this process.
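Forward selection can be sketched as a greedy loop: repeatedly add the feature that most reduces the validation error, and stop when no candidate helps. This is one plausible variant (scoring on a hold-out set by MSE); other stopping rules exist.

```python
# A hedged sketch of forward selection: greedily add the feature that
# most reduces validation MSE, stopping when no feature improves it.
import numpy as np

def val_mse(cols, X_tr, y_tr, X_val, y_val):
    # OLS on the selected columns only, scored on the validation set.
    w, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
    return np.mean((y_val - X_val[:, cols] @ w) ** 2)

def forward_selection(X_tr, y_tr, X_val, y_val):
    remaining = list(range(X_tr.shape[1]))
    selected = []
    best = np.inf
    while remaining:
        score, j = min((val_mse(selected + [j], X_tr, y_tr, X_val, y_val), j)
                       for j in remaining)
        if score >= best:          # no candidate improves the error: stop
            break
        best = score
        selected.append(j)
        remaining.remove(j)
    return selected

# Illustrative data: only features 0 and 1 actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=300)
sel = forward_selection(X[:200], y[:200], X[200:], y[200:])
print(sel)
```

Backward elimination is the mirror image: start from all features and greedily drop the one whose removal hurts the validation error least.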

Statistics

In statistics, we rely on statistical theory to derive checks that tell us whether our model is suited to the data at hand.

As discussed in our article about the theory underlying an OLS regression, the regression is asymptotically optimal when the output vector $\vy$ is normally distributed and the center of its distribution is a linear function of the input vectors:

$$\vy \sim \gaussian\big(\mx\vw, \sigma^2 \mathrm{I}\big)$$

Here $\vy$ is the output vector and $\mx$ the design matrix. $\vw$ is a vector of parameters.

We can rewrite this hypothesis using an error vector $\epsilon$ whose components are i.i.d. normal, $\epsilon_\si \iid \gaussian(0, \sigma^2)$:

$$\vy = \mx\vw + \epsilon$$

The strategy is to find implications of these assumptions that we can check graphically.

Checking for linearity

Under the assumption of linearity, the residual vector should lie in the kernel of $\mx^\top$, i.e., be orthogonal to every column of the design matrix $\mx$. Hence the two following plots:

• Plot the standardised residuals $\vr$ against each feature (column of $\mx$). No systematic pattern should appear in these plots; a systematic pattern suggests that we need to add a transformation of that feature as an additional column.

In the plot below, no systematic pattern appears:

While on the plot below, we recognize a sine pattern:

• Plot the standardised residuals $\vr$ against each feature that we left out of the model. No systematic pattern should appear in these plots; a systematic pattern suggests that we left out a feature that should have been included.

This is what the plot of a feature that should have been included looks like:
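The two checks above have a direct numeric counterpart, sketched below on illustrative data: residuals of a correct OLS fit are orthogonal to every included column, while a relevant omitted feature stays visibly correlated with them.

```python
# Numeric companion to the residual-vs-feature plots. The model below
# (wrongly) omits x2, so the residuals remain correlated with it.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                  # relevant feature we leave out
y = 2.0 * x1 + 1.5 * x2 + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x1])    # design matrix with only x1
w, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ w                            # residual vector

print(np.abs(X.T @ r).max())             # ~0: orthogonal to included columns
print(np.corrcoef(r, x2)[0, 1])          # clearly nonzero: x2 is missing
```

These are exactly the quantities the plots make visible: scatter $r$ against $x_1$ and no pattern appears; scatter it against $x_2$ and the linear trend stands out.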

Checking for homoskedasticity

• Plot the standardised residuals $\sr_\si$ against the fitted values $\hat{\sy}_\si$. A random scatter should appear, with approximately constant spread of $\sr_\si$ across the values of $\hat{\sy}_\si$. “Trumpet” or “bulging” effects indicate that the noise components $\epsilon_\si$ are not i.i.d. (they have different variances).

This is what such a plot should look like:

On the plot below, the noise components have different variances: the model assumptions are not validated.
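The visual “trumpet” check can also be done numerically: split the residuals by fitted value and compare their spread in the two halves. The heteroskedastic data below is illustrative.

```python
# Numeric version of the residual-spread check: the noise standard
# deviation grows with x, producing the "trumpet" shape.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.uniform(1.0, 5.0, size=n)
y = 3.0 * x + rng.normal(scale=x)        # noise sd grows with x

X = np.column_stack([np.ones(n), x])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ w
r = (y - fitted) / np.std(y - fitted)    # crude standardisation

# Spread of residuals for small vs. large fitted values.
order = np.argsort(fitted)
low, high = r[order[: n // 2]], r[order[n // 2:]]
print(np.std(low), np.std(high))         # clearly unequal spread
```

Under homoskedasticity the two standard deviations would be close; here the second is roughly twice the first.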

Checking for normality

We can compare the distribution of the standardised residuals against a normal distribution. This can be done by plotting theoretical quantiles against empirical ones.

Such a plot is called a Q-Q plot. We should see a diagonal line at a $45$-degree angle. If the line deviates significantly from the $45$-degree line, there is evidence against the normality assumption. This easily reveals outliers, skewness and heavy tails.

Note: if we plot the empirical quantiles of the unstandardised residuals against those of a $\gaussian(0, 1)$, then the line should have slope $\mathrm{stdev}(\se)$ and intercept zero.

The line won’t be perfect for small sample sizes, so don’t over-interpret the plot.

This is what a normal Q-Q plot looks like for a small sample ($\sn = 100$):

And here is a Q-Q plot that invalidates the normality hypothesis:
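The quantities behind a Q-Q plot are easy to compute by hand; a sketch with numpy and the standard library only (the plotting positions $(i - 0.5)/n$ are one common convention):

```python
# Q-Q computation: sorted standardised residuals against theoretical
# normal quantiles. A near-unit slope and near-zero intercept of the
# fitted line support the normality assumption.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
n = 100
r = rng.normal(size=n)                   # stand-in for standardised residuals
r = (r - r.mean()) / r.std()

empirical = np.sort(r)
probs = (np.arange(1, n + 1) - 0.5) / n  # plotting positions
theoretical = np.array([NormalDist().inv_cdf(p) for p in probs])

# Line through the Q-Q points; slope ~1, intercept ~0 under normality.
slope, intercept = np.polyfit(theoretical, empirical, 1)
print(slope, intercept)
```

Scattering `theoretical` against `empirical` gives exactly the Q-Q plot described above.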

Checking for influential observations

How do we know whether a row of the design matrix has a strong influence on the fitted model? Remove the row, re-fit, and compare with the original model.

Let $\vy_{-\si}$ be the output vector with row $\si$ removed and $\mx_{-\si}$ the design matrix with its $\si$-th row removed. Write $\vw_{-\si}$ for the parameter vector estimated in this new setup.

Cook’s distance measures the scaled distance between the fitted values of the original model, $\hat{\vy} = \mx\vw$, and those obtained with the parameters of the truncated model, $\hat{\vy}_{-\si} = \mx\vw_{-\si}$:

$$\sc_\si = \frac{\lVert \hat{\vy} - \hat{\vy}_{-\si} \rVert^2}{p\, \hat{\sigma}^2}$$

where $p$ is the number of parameters and $\hat{\sigma}^2$ the estimated noise variance.

As a rule of thumb, the rows $\si$ whose Cook’s distance exceeds a conventional threshold (a common choice is $4/\sn$) should be investigated.

So let’s plot $\sc_\si$ for each value of $\si$ and compare with this threshold.
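The leave-one-out computation can be sketched directly (the $4/n$ flag threshold below is one common rule of thumb, and the planted outlier is illustrative):

```python
# Cook's distance by brute-force leave-one-out: refit without row i and
# measure the scaled shift in fitted values.
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)
y[0] += 10.0                              # plant one influential outlier

w, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ w
s2 = np.sum((y - yhat) ** 2) / (n - p)    # residual variance estimate

cook = np.empty(n)
for i in range(n):
    Xi, yi = np.delete(X, i, axis=0), np.delete(y, i)
    wi, *_ = np.linalg.lstsq(Xi, yi, rcond=None)
    # Scaled distance between fitted values with and without row i.
    cook[i] = np.sum((yhat - X @ wi) ** 2) / (p * s2)

flagged = np.where(cook > 4 / n)[0]
print(flagged)
```

In practice the same quantity is usually obtained in one pass from the hat matrix rather than by $n$ refits, but the loop above matches the definition given in the text.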