# Why there is more to classification than discrete regression

In a classification problem, the dataset $\dataset$ consists of $\ndataset$ pairs of input vectors $\ninputvec{\idataset}$ and discrete labels $\ioutputval{\idataset}$:

$$\dataset = \left\{ \left( \ninputvec{\idataset}, \ioutputval{\idataset} \right) \right\}_{\idataset = 1}^{\ndataset}$$

While in regression, output values are numerical ($\ioutputval{\idataset} \in \realset$), in classification the labels can take at most a finite number of values: $\ioutputval{\idataset} \in \{l_1, \dotsc, l_k\}$.

Assume that the labels are binary: $\ioutputval{\idataset} \in \{0, 1\}$.

As for regression, we suppose that there exists an approximate deterministic relationship $\truemodel$ such that:

$$\ioutputval{\idataset} \approx \truemodel\left( \ninputvec{\idataset} \right)$$

Our goal is to use a subset $\trainset \subseteq \dataset$ to train a model $\trainedmodel$ able to approximate this relationship.

## Classification using regression

We can try to use a regression and then binarize the predicted value: values above a given threshold are set to $1$, values under are set to $0$.

Let’s generate a dataset made of $20$ examples in each class and fit a linear least squares regression.

On the picture below:

• The orange marks are datapoints.
• Datapoints in class $0$ are at $y = 0$ and datapoints in class $1$ are at $y = 1$. We can clearly see that a point is in class $0$ if and only if $x < 5$.
• We fit a linear least squares line to this dataset. The line is drawn in blue.
• The orange dashed line shows the threshold value of $0.5$.
• If the blue line is under the orange line, the point is classified as $0$; if it is above, the point is classified as $1$.

Everything looks good so far! If the computed value $\trainedmodel(\inputvec)$ is below $0.5$ we can predict the label $0$, otherwise we predict the label $1$.
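This thresholding scheme can be sketched with plain numpy. The sampling ranges below (class $0$ uniform on $[0, 5]$, class $1$ uniform on $[5, 10]$) are assumptions chosen to match the picture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D dataset: 20 examples per class, separated at x = 5.
x = np.concatenate([rng.uniform(0, 5, 20), rng.uniform(5, 10, 20)])
y = np.concatenate([np.zeros(20), np.ones(20)])

# Linear least squares: find (slope, intercept) minimizing ||Xw - y||^2.
X = np.column_stack([x, np.ones_like(x)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Classify by thresholding the predicted value at 0.5.
y_pred = (X @ w >= 0.5).astype(int)
accuracy = (y_pred == y).mean()
```

On such a clean, balanced dataset the fitted line crosses $0.5$ near $x = 5$ and almost every point lands on the correct side of the threshold.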

The problem with this approach is that the loss function used by the regression is not at all adapted to classification. Even on an easy dataset like this one, where the separation lies at $x = 5$, the regression line can shift unexpectedly when the number of datapoints changes.

Let’s generate $800$ additional examples in class $0$ to see what happens:
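A minimal numpy sketch of this experiment follows; the sampling ranges are assumptions, with the $800$ extra class-$0$ examples drawn from the same $[0, 5]$ interval as the original ones. We track where the fitted line crosses the $0.5$ threshold, since that crossing is the decision boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Balanced dataset: 20 examples per class, separated at x = 5 (assumed setup).
x_bal = np.concatenate([rng.uniform(0, 5, 20), rng.uniform(5, 10, 20)])
y_bal = np.concatenate([np.zeros(20), np.ones(20)])

# Imbalanced dataset: 800 additional class-0 examples in the same range.
x_imb = np.concatenate([rng.uniform(0, 5, 800), x_bal])
y_imb = np.concatenate([np.zeros(800), y_bal])

def decision_boundary(x, y):
    """Fit y ~ a*x + b by least squares; return x where the line crosses 0.5."""
    X = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return (0.5 - b) / a

b_bal = decision_boundary(x_bal, y_bal)  # close to the true separation at x = 5
b_imb = decision_boundary(x_imb, y_imb)  # dragged far to the right of x = 5
```

The flood of class-$0$ points pulls the mean response toward $0$, flattening the line: the $0.5$ crossing moves well past $x = 10$, so every class-$1$ example ends up misclassified even though the dataset is still perfectly separable.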

## Polynomial regression

The stability is much better using a polynomial regression of degree $9$, as shown on the picture below.
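A sketch of the degree-$9$ fit on the imbalanced dataset (same assumed sampling setup as before) illustrates this: the extra flexibility lets the curve stay near $0$ on the class-$0$ region and rise toward $1$ beyond $x = 5$, so thresholding at $0.5$ classifies almost everything correctly despite the imbalance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced dataset (assumed setup): 820 class-0 examples below x = 5,
# 20 class-1 examples above.
x = np.concatenate([rng.uniform(0, 5, 820), rng.uniform(5, 10, 20)])
y = np.concatenate([np.zeros(820), np.ones(20)])

# Degree-9 polynomial least squares fit; Polynomial.fit rescales x
# internally to [-1, 1], which keeps the problem well conditioned.
poly = np.polynomial.Polynomial.fit(x, y, deg=9)
y_hat = poly(x)

# Threshold the predicted value at 0.5, as before.
y_pred = (y_hat >= 0.5).astype(int)
accuracy = (y_pred == y).mean()
```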

But this is still suboptimal. While it is true that a small MSE induces a small misclassification error, the converse does not hold: every example can be correctly classified while the MSE remains large.

This is because a predicted value of $0.8$ places an example in class $1$; assuming the point is indeed in class $1$, the classification error is $0$ but the corresponding squared error is still $(1 - 0.8)^2$.
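A tiny worked example, with illustrative prediction values, makes the gap concrete: every prediction is on the correct side of $0.5$, so the classification error is $0$, yet the MSE stays clearly nonzero:

```python
import numpy as np

# Perfectly classified predictions that still incur a nonzero MSE.
y_true = np.array([0, 0, 1, 1])
y_hat = np.array([0.4, 0.3, 0.8, 0.6])  # all on the correct side of 0.5

y_pred = (y_hat >= 0.5).astype(int)
classification_error = (y_pred != y_true).mean()  # 0.0: every label is right
mse = np.mean((y_hat - y_true) ** 2)              # (0.16+0.09+0.04+0.16)/4
```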

A model with MSE loss might therefore have to work much harder than necessary in order to provide a decent upper bound on the classification error.