# Introduction to statistical estimators

In this article we define what an estimator is. We focus on the theory for comparing and assessing estimators, rather than on how to construct one.

Note: estimators are statistics, so I suggest you read our dedicated article on statistics first.

## Context

In a typical inference situation, we have a sample $\vx$ of $\sn$ observations.

We model this sample as an observation of a random vector $\rvx = (\rx_1, \dotsc, \rx_{\sn})$ drawn from some probability distribution $F(\rvx \mid \theta)$ that depends on an unknown parameter $\theta$.

## Point estimators

The purpose of an estimator $\hat{\theta}(\rvx)$ is to use the observed sample to estimate the true value of $\theta$.

Since an estimator is a function of the sample, it is a statistic.

Definition: point estimator
Let $\Theta$ be the set of possible values for $\theta$. A point estimator of $\theta$ is a statistic $\hat{\theta}$ taking values in $\Theta$:

$$\hat{\theta}(\rvx) \in \Theta.$$

Don’t confuse the notations: $\theta$ is a fixed value while $\hat{\theta} = \hat{\theta}(\rvx)$ is a random variable and $\hat{\theta}(\vx)$ is an observation of this random variable.

### Consistency

This definition is very broad, and clearly not every estimator is interesting. Let’s narrow it down.

Definition: consistent estimator
A point estimator $\hat{\theta}$ of $\theta$ is consistent if it converges in probability to $\theta$ as the sample size $\sn$ increases:

$$\hat{\theta}(\rvx) \xrightarrow{p} \theta \quad \text{when } \sn \to \infty.$$
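Consistency can be observed numerically. Here is a minimal Python sketch (the setup and names are my own, not from a library): the sample mean estimates the mean of a Uniform(0, 1) distribution, whose true value is $0.5$, and the estimate gets closer as the sample grows.

```python
import random

# Minimal illustration of consistency: the sample mean of Uniform(0, 1)
# observations converges to the true mean 0.5 as the sample size grows.
random.seed(0)

def sample_mean(n):
    """Draw n observations and return the sample-mean estimate of the mean."""
    return sum(random.random() for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))  # the estimates approach 0.5
```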

## Precision of an estimator

To measure the precision of an estimator, we can use the mean squared error:

Definition: mean squared error
The mean squared error of an estimator is the expected squared distance between the estimator and the true value of the parameter:

$$\text{MSE}(\hat{\theta}, \theta) = \expectation\big[(\hat{\theta}(\rvx) - \theta)^2\big].$$

By Markov's inequality, the MSE bounds the concentration of $\hat{\theta}$ around the true value $\theta$:

$$\prob\big[\,|\hat{\theta}(\rvx) - \theta| \geq \varepsilon\,\big] \leq \frac{\text{MSE}(\hat{\theta}, \theta)}{\varepsilon^2} \quad \text{for all } \varepsilon > 0.$$

If $\text{MSE}(\hat{\theta}, \theta)$ converges to $0$ as $\sn$ increases, the estimator is consistent. The converse does not hold: there are consistent estimators whose MSE does not converge to $0$.
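To make the MSE concrete, here is a hedged Monte Carlo sketch (the function and parameters are my own) that estimates the MSE of the sample mean for Normal data. In this case theory gives $\text{MSE} = \sigma^2 / \sn$, so the MSE shrinks to $0$ as the sample grows.

```python
import random

# Monte Carlo estimate of the MSE of the sample mean for Normal(mu, sigma)
# observations. Theory predicts MSE = sigma^2 / n, which shrinks as n grows.
random.seed(1)

def mse_of_sample_mean(n, trials=10_000, mu=2.0, sigma=1.0):
    total = 0.0
    for _ in range(trials):
        estimate = sum(random.gauss(mu, sigma) for _ in range(n)) / n
        total += (estimate - mu) ** 2
    return total / trials

print(mse_of_sample_mean(10))   # close to sigma^2 / 10 = 0.1
print(mse_of_sample_mean(100))  # close to sigma^2 / 100 = 0.01
```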

So, how small can we make the $\text{MSE}$? Before answering this question, it will be useful to introduce the bias-variance decomposition.

Definition: bias-variance decomposition
The bias-variance decomposition expresses the MSE in terms of the bias and the variance of the estimator:

$$\text{MSE}(\hat{\theta}, \theta) = \big(\expectation[\hat{\theta}] - \theta\big)^2 + \var[\hat{\theta}].$$

This decomposition explains why unbiased estimators are so popular: when the bias is zero, the MSE reduces to the variance. Let’s turn our attention to such estimators.
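The decomposition can be checked numerically. Below is a sketch (my own toy example) using a deliberately biased estimator of a Normal mean, $\hat{\theta} = 0.9 \cdot \bar{\rx}$, and verifying that the empirical MSE equals the empirical squared bias plus the empirical variance.

```python
import random

# Numerical check of the bias-variance decomposition for a deliberately
# biased estimator of the mean: theta_hat = 0.9 * sample mean.
random.seed(2)

mu, sigma, n, trials = 1.0, 1.0, 20, 20_000
estimates = []
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    estimates.append(0.9 * sum(xs) / n)

mean_est = sum(estimates) / trials
bias_sq = (mean_est - mu) ** 2                                   # squared bias
variance = sum((e - mean_est) ** 2 for e in estimates) / trials  # variance
mse = sum((e - mu) ** 2 for e in estimates) / trials             # MSE

# The two sides agree up to floating-point rounding.
print(mse, bias_sq + variance)
```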

## Bias

Definition: unbiased estimator
An estimator $\hat{\theta}(\rvx)$ is unbiased when:

$$\expectation[\hat{\theta}(\rvx)] = \theta \quad \text{for all } \theta \in \Theta.$$

Although unbiased estimators are convenient, always remember that a biased low-variance estimator can be preferable to an unbiased high-variance one. Moreover, biased estimators can still be consistent, provided the bias vanishes as $\sn$ increases.
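A classic illustration of this trade-off, sketched below under the assumption of Normal data: the biased maximum-likelihood estimator of the variance (dividing by $\sn$) achieves a lower MSE than the unbiased estimator (dividing by $\sn - 1$).

```python
import random

# For Normal data, the biased variance estimator (divide by n) trades a small
# bias for a smaller variance, ending up with a lower MSE than the unbiased
# estimator (divide by n - 1).
random.seed(3)

true_var, n, trials = 1.0, 10, 50_000
err_biased = err_unbiased = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    err_biased += (ss / n - true_var) ** 2
    err_unbiased += (ss / (n - 1) - true_var) ** 2

mse_biased = err_biased / trials      # theory: (2n - 1) / n^2 = 0.19
mse_unbiased = err_unbiased / trials  # theory: 2 / (n - 1) ~= 0.22
print(mse_biased, mse_unbiased)
```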

What about the variance term: can we make it as small as we want?

## Variance

We do have a lower bound on the variance of unbiased estimators:

Cramér-Rao lower bound
Given some regularity conditions, any unbiased estimator $\hat{\theta}(\rvx)$ of finite variance satisfies:

$$\var[\hat{\theta}(\rvx)] \geq \frac{1}{\mathcal{I}_{\sn}(\theta)},$$

where $\mathcal{I}_{\sn}(\theta)$ is the Fisher information of the sample.

Can we achieve this bound?

Proposition
$\var[\hat{\theta}(\rvx)]$ attains the Cramér-Rao lower bound if and only if the density of $\rvx$ belongs to a one-parameter exponential family with sufficient statistic $\hat{\theta}$.
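As a concrete instance (my own example, assuming i.i.d. Bernoulli data, which does form a one-parameter exponential family): the sample mean is unbiased for $p$, the Fisher information is $\mathcal{I}_{\sn}(p) = \sn / (p(1-p))$, and the variance of the sample mean is exactly $p(1-p)/\sn$, so the bound is attained. The simulation below checks this numerically.

```python
import random

# For Bernoulli(p) samples, the Fisher information is I_n(p) = n / (p(1-p)),
# so the Cramer-Rao lower bound is p(1-p)/n -- exactly the variance of the
# sample mean, which therefore attains the bound.
random.seed(4)

p, n, trials = 0.3, 50, 20_000
bound = p * (1 - p) / n  # Cramer-Rao lower bound 1 / I_n(p)

estimates = [sum(random.random() < p for _ in range(n)) / n for _ in range(trials)]
mean_est = sum(estimates) / trials
emp_var = sum((e - mean_est) ** 2 for e in estimates) / trials

print(bound, emp_var)  # the empirical variance matches the bound
```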

And if we can’t achieve it, how can we improve our estimator? The following theorem tells us that in order to reduce the variance of our estimator, we should throw away irrelevant aspects of the data.

Rao-Blackwell theorem
Let $\hat{\theta}$ be an unbiased estimator of $\theta$ with finite variance, and let $T = T(\rvx)$ be a sufficient statistic for $\theta$. Then $\hat{\theta}^* = \expectation[\hat{\theta} \mid T]$ is also an unbiased estimator of $\theta$ and:

$$\var[\hat{\theta}^*] \leq \var[\hat{\theta}].$$

Equality is attained exactly when $\prob[\hat{\theta}^* = \hat{\theta}] = 1$.
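Here is a sketch of Rao-Blackwellization in action on a textbook example (mine, not the article's): estimating $\theta = e^{-\lambda} = \prob[\rx = 0]$ from Poisson($\lambda$) data. The naive unbiased estimator checks whether the first observation is zero; conditioning it on the sufficient statistic $T = \sum_i \rx_i$ yields the estimator $((\sn-1)/\sn)^T$, which has much lower variance.

```python
import math
import random

# Rao-Blackwellization: estimate theta = exp(-lam), the probability that a
# Poisson(lam) variable equals 0.
#   naive estimator: 1 if the first observation is 0, else 0 (unbiased).
#   improved estimator: E[naive | T] = ((n-1)/n)^T, where T = sum of the sample.
random.seed(5)

def poisson(lam):
    """Draw one Poisson(lam) variate (Knuth's multiplication method)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

lam, n, trials = 1.0, 10, 20_000
naive, rb = [], []
for _ in range(trials):
    xs = [poisson(lam) for _ in range(n)]
    naive.append(1.0 if xs[0] == 0 else 0.0)
    rb.append(((n - 1) / n) ** sum(xs))

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# Both estimators target exp(-1) ~= 0.368; the Rao-Blackwellized one has a
# much lower variance.
print(variance(naive), variance(rb))
```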

Recall that a statistic $T$ contains no more information than a statistic $S$ when there exists a function $g$ such that $T = g(S)$.

The following theorem tells us that the more irrelevant information we discard, the lower the variance of our estimator:

Let $\hat{\theta}$ be an unbiased estimator, and let $T$ and $S$ be two sufficient statistics. If there exists a function $g$ such that $T = g(S)$, then:

$$\var\big[\expectation[\hat{\theta} \mid T]\big] \leq \var\big[\expectation[\hat{\theta} \mid S]\big].$$

So the best we can do is to condition on a minimal sufficient statistic.

## Estimators in practice

Common estimators are:

• the maximum likelihood estimator, which maximizes $f_\rvx(\vx \mid \hat{\theta})$;
• the maximum a posteriori estimator, which maximizes $f_\theta(\hat{\theta} \mid \rvx = \vx)$;
• the method of moments estimator, which matches $\expectation[\rvx]$ with the sample mean $\bar{\rx}$.
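To make these concrete, here is a small sketch (the data and the Beta(2, 2) prior are my own assumptions) computing all three estimates for the parameter $p$ of a Bernoulli distribution:

```python
# The three estimators on a fixed Bernoulli sample with 7 successes out of 10.
data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
n, s = len(data), sum(data)

# Maximum likelihood: maximize p^s * (1-p)^(n-s) over p, giving s/n.
mle = s / n

# Maximum a posteriori, assuming a Beta(alpha, beta) prior on p:
# the posterior mode is (s + alpha - 1) / (n + alpha + beta - 2).
alpha, beta = 2, 2
map_estimate = (s + alpha - 1) / (n + alpha + beta - 2)

# Method of moments: match E[X] = p with the sample mean, giving s/n again.
mom = s / n

print(mle, map_estimate, mom)  # 0.7 0.666... 0.7
```

Note how the prior pulls the MAP estimate toward $1/2$; with a flat Beta(1, 1) prior it would coincide with the MLE.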