# Introduction to hypothesis testing

We introduce the basic vocabulary required to understand hypothesis testing and define the p-value.

## Introduction

Scientists accept a theory as long as a better theory hasn’t been found. Each time a theory is adopted, we have no way to determine for sure whether it is true, but at least we know it’s better than the previous theory we had.

For instance, Newton’s laws ($\vf = \sm\va$) were widely accepted and used with success. It turned out that they were only approximately true, and a better theory was found in relativity. Has relativity found the true equations of nature? We don’t know, but it’s the model in use until we find a better one.

Hypothesis testing provides a tool to reject an existing theory when compared to a new candidate theory.

## Rejection vs acceptance

Before diving into hypothesis testing, it’s important to understand why probability theory can be used to reject a hypothesis, but not to accept one.

Hypotheses are modeled by probability distributions. Given an observation $\sy$, we can ask “how probable is it that the model with parameter $\theta_1$ has generated $\sy$?”

If this probability is high, does it mean that we should accept $\theta_1$? Not necessarily: it could be high by coincidence, and another hypothesis $\theta_2$ might yield an even higher probability.

But if this probability is very small, we don’t need a second hypothesis to suspect that $\theta_1$ is a bad model.
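As a minimal sketch of this asymmetry, consider a coin-flip model (an illustration not in the original, with hypothetical parameter values): the probability of the observed data can be small under one parameter while another parameter fits far better.

```python
from math import comb

def binom_pmf(k, n, theta):
    """Probability of k heads in n flips when P(heads) = theta."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

n, y = 10, 9
like_fair = binom_pmf(y, n, 0.5)  # ≈ 0.0098: tiny, casts doubt on theta = 0.5
like_alt = binom_pmf(y, n, 0.9)   # ≈ 0.3874: another theta fits far better
```

The small likelihood under $\theta = 0.5$ makes us suspect that model, without committing us to accepting $\theta = 0.9$.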

## The hypotheses

As always in statistics, we model all this with samples and distributions.

Let $\rvy = (\ry_1, \dotsc, \ry_\sn)$ be a sample of $\sn$ random variables. Model the source of $\rvy$ as the distribution $\ff_{\rvy}(\vy \mid \theta)$ where $\theta \in \Theta$ is an unknown parameter.

We model the existing theory with a subset $\Theta_0 \subset \Theta$ and the candidate theory with another disjoint subset $\Theta_1 \subset \Theta$. The hypotheses are:

• $H_0: \theta \in \Theta_0$ (we keep the current theory);
• $H_1: \theta \in \Theta_1$ (the new theory is better).

Given an observed sample $\vy = (\sy_1, \dotsc, \sy_\sn)$ from $\rvy$, which of the regions $\Theta_0$ and $\Theta_1$ is more plausible to contain the true value $\theta$ of the parameter?
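As a concrete example (an illustration not in the original), to test whether a coin is biased toward heads, with $\theta$ the probability of heads, we could take

$$\Theta = [0, 1], \qquad \Theta_0 = \bigl\{\tfrac{1}{2}\bigr\}, \qquad \Theta_1 = \bigl(\tfrac{1}{2}, 1\bigr].$$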

## How to decide between $H_0$ and $H_1$?

To decide whether we reject the old theory, we use a test function

$$\delta : \realset^\sn \to \{0, 1\},$$

and we keep $H_0$ when $\delta(\vy) = 0$, or we reject $H_0$ and prefer $H_1$ when $\delta(\vy) = 1$.

There exist numerous such test functions, just as there exist numerous estimators. Rather than diving into the details now, let’s discuss how to choose one.
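For the coin example, a simple sketch of a test function (an illustration not in the original; the threshold value is an arbitrary assumption) rejects $H_0$ when the sample contains many heads:

```python
def make_test(threshold):
    """Test function delta for H0: theta = 1/2 (fair coin) vs
    H1: theta > 1/2, rejecting H0 when at least `threshold` of
    the outcomes are heads (1s)."""
    def delta(y):
        return 1 if sum(y) >= threshold else 0
    return delta

delta = make_test(threshold=9)
delta([1] * 9 + [0])      # → 1: reject H0
delta([1] * 5 + [0] * 5)  # → 0: keep H0
```

Different thresholds give different test functions, which is exactly the kind of choice discussed next.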

## Quantifying errors

Since we don’t have all the possible observations from the source $\ff_{\rvy}$ but only a sample $\vy$, we might make mistakes when deciding between $H_0$ and $H_1$. And our decision might change if we collect more data.

There are two types of mistakes:

• type $1$: decide in favor of $H_1$ when $H_0$ is better;
• type $2$: decide in favor of $H_0$ when $H_1$ is better.
|              | $H_0$ better   | $H_1$ better   |
|--------------|----------------|----------------|
| Choose $H_0$ | no error       | Type $2$ error |
| Choose $H_1$ | Type $1$ error | no error       |

In practice, one type of error is more costly than the other.

For instance, if we decide in favor of $H_1$ when in fact $H_0$ is better, this means we choose the new theory when we should have kept the old one.

• This is very costly because every textbook will be updated with the new theory, only to discover a few years later that we should switch back to the old one.
• On the other hand, if we decide to keep the old theory when $H_1$ is better (type $2$ error), then there is no immediate cost and we can always re-evaluate the new theory when we have more data.

So we fix a significance level $\alpha$ to bound the probability of type $1$ errors:

$$\mathbb{P}(\text{type } 1 \text{ error}) \leq \alpha.$$

And we only consider the test functions $\delta$ that can guarantee the above threshold is respected.

In terms of the test function $\delta$, the probability of type $1$ error is written:

$$\mathbb{P}(\delta(\rvy) = 1 \mid \theta \in \Theta_0) \leq \alpha.$$
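We can estimate this probability by simulation (a sketch not in the original, reusing the hypothetical coin test that rejects $H_0$ when at least $9$ of $10$ flips are heads): draw many samples under $H_0$ and count how often the test rejects.

```python
import random

def delta(y, threshold=9):
    """Reject H0 (return 1) when at least `threshold` heads out of 10."""
    return 1 if sum(y) >= threshold else 0

random.seed(0)
n_sim = 100_000
# Draw samples under H0 (fair coin, theta = 1/2) and count rejections
rejections = sum(delta([random.random() < 0.5 for _ in range(10)])
                 for _ in range(n_sim))
type1_rate = rejections / n_sim  # close to the exact value 11/1024 ≈ 0.0107
```

Since $11/1024 \approx 0.011$, this particular test respects any significance level $\alpha \geq 0.011$.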

## The $p$-value

Let’s take a family of test functions $\{\delta_\alpha \mid \alpha \in [0, 1]\}$ such that $\delta_\alpha$ has significance level $\alpha$:

$$\mathbb{P}(\delta_\alpha(\rvy) = 1 \mid \theta \in \Theta_0) \leq \alpha.$$

Given a sample $\vy$, each test function will decide between keeping $H_0$ or rejecting $H_0$.

Recall that for a test function $\delta_\alpha$:

• $H_0$ is rejected when $\delta_\alpha(\vy) = 1$;
• And this is an error with probability at most $\alpha$.

The $p$-value is the smallest $\alpha$ such that $H_0$ is rejected:

$$p(\vy) = \min\,\{\alpha \mid \delta_\alpha(\vy) = 1\}.$$

In other words, it can loosely be regarded as the probability of making an error when rejecting $H_0$.

• When $p(\vy)$ is small, the probability that the test function is mistaken in rejecting $H_0$ is low, so we can be confident in the rejection.
• When $p(\vy)$ is large, the probability of such a mistake is high, so we shouldn’t trust the rejection.

It is used as a measure of evidence against $H_0$:

• small $p$-value provides evidence against $H_0$;
• large $p$-value provides no evidence against $H_0$.
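In the coin example (an illustration not in the original), the threshold tests are nested: lowering $\alpha$ raises the threshold, so the $p$-value of a sample with $y$ heads is the tail probability of observing at least $y$ heads under $H_0$, which is the smallest level at which some $\delta_\alpha$ rejects on that sample.

```python
from math import comb

def pvalue(y_heads, n=10, theta0=0.5):
    """p-value for H0: theta = theta0 against H1: theta > theta0,
    i.e. P(at least y_heads heads | H0): the smallest level alpha at
    which a threshold test delta_alpha rejects H0 on this sample."""
    return sum(comb(n, k) * theta0**k * (1 - theta0)**(n - k)
               for k in range(y_heads, n + 1))

p = pvalue(9)  # 11/1024 ≈ 0.0107: strong evidence against the fair coin
```

A sample with only $6$ heads would instead give a large $p$-value, providing no evidence against $H_0$.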
