# What is a statistic and why do we care?

In this article, we explain that a statistic is a way of compressing information contained in the data, and we show how it can be used for inference.

Let $\def\source{\mathcal{Y}} \def\sourcevec{\vec{\source}} \def\obs{y} \def\obsvec{\vec{\obs}} \def\param{\theta} \def\Param{\Theta} \newcommand{\est}[1]{\hat{#1}}$ $\sourcevec = (\source_1, ..., \source_n)$ be a random vector. Suppose the joint distribution of $\sourcevec$ is $F(\obsvec ; \param)$ for some unknown parameter $\param \in \Param$.

We observe a sample $\obsvec = (\obs_1, ..., \obs_n)$, a realization of $\sourcevec$. What conclusions about $\param$ can we make on the sole basis of our observations $\obsvec$? And what is the uncertainty associated with these conclusions?

We will study the sample $\obsvec$ through numerical summaries $T(\obsvec)$. Such a summary is called a statistic.

Definition: statistic
A statistic is any function $T$ of the sample that does not depend on the unknown parameters $\param$. For example, the sample average $\avg(\obsvec) = \frac{1}{n}\sum \obs_i$ is a statistic.

To understand what good a given statistic $T$ is, we need to understand its behavior when the parameter $\param$ changes. While $T(\obsvec)$ is a fixed number associated with the fixed observation $\obsvec$, we have that $T(\sourcevec)$ is a random variable. To understand how the statistic $T$ behaves when $\param$ changes, we need to study this random variable.

Definition: sampling distribution
The sampling distribution of $T$ under the distribution $F(\obsvec ; \param)$ of $\sourcevec$ is the distribution of the random variable $T(\sourcevec)$:

$$F_T(t ; \param) = P\big(T(\sourcevec) \le t ; \param\big).$$

The key observation here is that the sampling distribution of $T$ depends on the unknown parameter $\param$. The more it depends on $\param$, the more information $T$ conveys about it.
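To make this concrete, here is a small simulation sketch (the Bernoulli model and the parameter values are illustrative choices, not taken from the article) showing that the sampling distribution of the sample average shifts with $\param$, and therefore carries information about it:

```python
import random

random.seed(0)

def sampling_distribution(theta, n=20, reps=10_000):
    """Simulate draws of T(Y) = sample average for i.i.d. Bernoulli(theta)."""
    draws = []
    for _ in range(reps):
        sample = [1 if random.random() < theta else 0 for _ in range(n)]
        draws.append(sum(sample) / n)
    return draws

# The distribution of T(Y) shifts with theta: T is informative about theta.
for theta in (0.3, 0.7):
    draws = sampling_distribution(theta)
    print(f"theta={theta}: mean of T(Y) = {sum(draws) / len(draws):.3f}")
```

Plotting histograms of the two sets of draws would show two clearly separated sampling distributions, one centered near each value of $\param$.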

The result $T(\sourcevec)$ of a deterministic transformation $T$ applied to $\sourcevec$ cannot convey more information than $\sourcevec$ itself. A statistic is therefore a form of compression. How much can we compress the sample without losing interesting information about $\param$?

Let’s define a name for statistics that carry no information about the parameter.

Definition: ancillary statistic
A statistic $T$ is ancillary for the parameter $\param$ if its sampling distribution does not functionally depend on $\param$. Consequence: such statistics carry no information about $\param$.
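As an illustration (this example is mine, not from the article): for an i.i.d. $\mathcal{N}(\mu, 1)$ sample, the range $\max_i Y_i - \min_i Y_i$ is ancillary for $\mu$, since shifting every observation by $\mu$ leaves the range unchanged. A quick simulation sketch:

```python
import random

random.seed(0)

def range_statistic_draws(mu, n=10, reps=5_000):
    """Simulate the range max - min for i.i.d. N(mu, 1) samples.
    The range is ancillary for mu: adding mu to every Y_i cancels out."""
    draws = []
    for _ in range(reps):
        sample = [random.gauss(mu, 1) for _ in range(n)]
        draws.append(max(sample) - min(sample))
    return draws

# The sampling distribution of the range is the same for any mu.
for mu in (0.0, 5.0):
    d = range_statistic_draws(mu)
    print(f"mu={mu}: mean range = {sum(d) / len(d):.3f}")
```

The two printed means agree up to simulation noise, whatever values of $\mu$ we try: observing the range alone tells us nothing about $\mu$.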

So, what information is lost when we use $T$ to compress the sample? To answer this question, we need to understand which different samples $\obsvec_1$ and $\obsvec_2$ are compressed into the same value $t = T(\obsvec_1) = T(\obsvec_2)$.

Definition: level set
The level sets of $T$ are the sets: $\newcommand{\levelset}[1]{L_#1}$

$$\levelset{t} = \big\{ \obsvec : T(\obsvec) = t \big\}.$$

These sets are of interest because all the observations of $\sourcevec$ that fall in a given level set $\levelset{t}$ are equivalent as far as $T$ is concerned: they all reduce to the same value $t$.

Let’s look at the distribution $F_{\sourcevec \mid T = t}$ of $\sourcevec$ conditional on a given level set $\levelset{t}$ of $T$.

• When $F_{\sourcevec \mid T = t}$ changes depending on $\param$, we are losing the information conveyed by this dependence.
• When $F_{\sourcevec \mid T = t}$ is functionally independent of $\param$, then $\sourcevec$ contains no information about $\param$ on the set $\levelset{t}$ and we are not losing any information on this set.
• If this is true for all possible values $t$ of $T(\obsvec)$, then our statistic contains the same information about $\param$ as $\obsvec$ itself does. In other words, knowing the exact value of $\obsvec$ does not convey more information than knowing $T(\obsvec)$. Let’s define a name for this.
Definition: sufficient statistic
A statistic $T$ is said to be sufficient for the parameter $\param$ if $F_{\sourcevec \mid T(\sourcevec) = t}$ does not depend on $\param$.
Example: coin tossing
We model $n$ tosses of a biased coin using an i.i.d. sample from the $\mathrm{Bernoulli}(\param)$ distribution, where the probability $\param$ of obtaining heads is unknown. Let $T(\obsvec) = \sum_{i = 1}^{n} \obs_i$ be the number of heads among the $n$ tosses. For any sequence $\obsvec$ with $t$ heads,

$$P\big(\sourcevec = \obsvec \mid T(\sourcevec) = t\big) = \frac{\param^{t}(1-\param)^{n-t}}{\binom{n}{t}\,\param^{t}(1-\param)^{n-t}} = \frac{1}{\binom{n}{t}},$$

which does not depend on $\param$. And we see that $T$ is sufficient for $\param$: knowing which tosses came up heads is irrelevant in deciding the probability of heads. Only the number of observed heads matters.

While sufficient statistics are incredibly useful, the definition is hard to verify in practice. The Fisher-Neyman factorization theorem provides an easier way to identify sufficient statistics.

Fisher-Neyman factorization theorem
Let $\sourcevec$ be a random vector with joint density function $f(\obsvec;\param)$. A statistic $T$ is sufficient for $\param$ if and only if there exist functions $g$ and $h$ such that:

$$f(\obsvec; \param) = g\big(T(\obsvec); \param\big)\, h(\obsvec).$$
So, sufficient statistics compress data without losing information about the parameter $\param$ of interest. Still, a sufficient statistic might retain more data than necessary. How much can we compress?

Definition: minimally sufficient statistic
A statistic $T$ is said to be minimally sufficient for the parameter $\param$ if it is sufficient for $\param$ and, for any other sufficient statistic $S$, there exists a function $g(\cdot)$ such that:

$$T(\sourcevec) = g\big(S(\sourcevec)\big).$$

Since the deterministic function $g$ can only reduce the amount of conveyed information, never increase it, we see that $T$ is the sufficient statistic that contains the least information.

So, statistics compress the sample and contain information about the unknown parameter. How do we retrieve this parameter? We use a point estimator.

Let’s see an example.

### Gaussian Sufficient Statistics

Let $\sourcevec \iid \gaussian(\mu, \sigma^2)$ be a sample of size $n$. Define the following statistics:

$$\avg(\sourcevec) = \frac{1}{n}\sum_{i=1}^{n} \source_i, \qquad S^2(\sourcevec) = \frac{1}{n-1}\sum_{i=1}^{n} \big(\source_i - \avg(\sourcevec)\big)^2.$$

The pair $(\avg(\sourcevec), S^2(\sourcevec))$ is minimally sufficient for $(\mu, \sigma^2)$ and we have:

$$\avg(\sourcevec) \sim \gaussian\paren{\mu, \frac{\sigma^2}{n}}, \qquad \frac{(n-1)\,S^2(\sourcevec)}{\sigma^2} \sim \chi^2_{n-1}.$$
Using convergence results, we can conclude that as the sample size $n$ increases, $\avg(\sourcevec)$ converges to $\mu$ at the speed of $\mathcal{O}\paren{\frac{1}{\sqrt{n}}}$. Likewise, $S^2(\sourcevec)$ converges to $\sigma^2$:

$$\avg(\sourcevec) \xrightarrow{\;P\;} \mu, \qquad S^2(\sourcevec) \xrightarrow{\;P\;} \sigma^2 \quad \text{as } n \to \infty.$$
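This convergence is easy to observe numerically. A minimal sketch (the values $\mu = 2$ and $\sigma = 3$ are arbitrary illustrative choices), using the fact that `statistics.variance` computes the $\frac{1}{n-1}$ sample variance defined above:

```python
import random
from statistics import mean, variance

random.seed(0)

mu, sigma = 2.0, 3.0

# As n grows, the sample mean approaches mu (typical error sigma / sqrt(n))
# and the sample variance approaches sigma^2 = 9.
for n in (100, 10_000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    print(f"n={n}: mean={mean(sample):.3f}, variance={variance(sample):.3f}")
```

Running this shows both estimates tightening around $\mu = 2$ and $\sigma^2 = 9$ as $n$ grows.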
