estimator

Quick Reference

A statistic used to estimate a parameter. The realized value of an estimator for a particular sample of data is called the estimate (or point estimate). For example, the sample mean is an estimator of the population mean; its value for an observed sample is an estimate.

If the expectation of the statistic is equal to the parameter, then it is described as an unbiased estimator and the realized value is referred to as an unbiased estimate. If T is an estimator of the parameter θ and the expectation of T is θ + b, where b ≠ 0, then b is called the bias and T is a biased estimator. If the bias tends to 0 as the sample size increases, the estimator is described as asymptotically unbiased.
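As a concrete illustration, the sample variance with divisor n is biased but asymptotically unbiased, while the divisor n − 1 removes the bias. The following is a minimal simulation sketch in Python (NumPy assumed; the seed, sample size, and replication count are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 10, 100_000                            # small n makes the bias visible
    samples = rng.normal(0.0, 2.0, size=(reps, n))   # true variance is 4

    biased = samples.var(axis=1, ddof=0)     # divisor n
    unbiased = samples.var(axis=1, ddof=1)   # divisor n - 1

    # E(biased) = (n - 1)/n * 4 = 3.6, so the bias b is about -0.4;
    # b tends to 0 as n grows, i.e. this estimator is asymptotically unbiased.
    print(biased.mean(), unbiased.mean())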

The efficiency of an unbiased estimator is the ratio of the Cramér–Rao lower bound (see Fisher information) to the estimator's variance; for an efficient estimator this ratio is 1. The relative efficiency of two unbiased estimators T and T′ is the inverse ratio of their variances, so the efficiency of T relative to T′ is Var(T′)/Var(T).
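For instance, under normal sampling the sample median is an unbiased but less efficient estimator of the mean than the sample mean; a simulation sketch (Python with NumPy assumed, seed and sample sizes illustrative) recovers its well-known asymptotic relative efficiency of 2/π ≈ 0.64:

    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 101, 50_000
    samples = rng.normal(0.0, 1.0, size=(reps, n))

    var_mean = samples.mean(axis=1).var()
    var_median = np.median(samples, axis=1).var()

    # Relative efficiency of the median with respect to the mean is the
    # inverse ratio of their variances: Var(mean)/Var(median) ~ 2/pi.
    print(var_mean / var_median)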

Comparisons involving biased estimators are often based on the mean squared error (MSE) defined to be

E[(T − θ)²] = Var(T) + {E(T) − θ}² = Var(T) + b²,

where E(T) and Var(T) are, respectively, the expectation and variance of T. The root mean square error (RMSE) is the square root of the mean squared error and has the same units as the original data.
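A short sketch checking the decomposition numerically (Python with NumPy assumed; it reuses the biased variance estimator with divisor n from the earlier example, so θ = 4 and b ≈ −0.4):

    import numpy as np

    rng = np.random.default_rng(2)
    theta, n, reps = 4.0, 10, 200_000
    T = rng.normal(0.0, 2.0, size=(reps, n)).var(axis=1, ddof=0)

    mse = np.mean((T - theta) ** 2)
    bias = T.mean() - theta
    print(mse, T.var() + bias ** 2)   # the two agree up to simulation noise
    print(np.sqrt(mse))               # RMSE, on the same scale as theta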

An estimator T of θ is said to be a consistent estimator if, for all positive c, P(|T − θ| > c) → 0 as n → ∞, where n is the sample size.
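A simulation sketch of consistency for the sample mean of N(θ, 1) data (Python with NumPy assumed; the value of c, the seed, and the sample sizes are illustrative): the estimated probability P(|T − θ| > c) falls toward 0 as n increases.

    import numpy as np

    rng = np.random.default_rng(3)
    theta, c, reps = 0.0, 0.1, 5_000
    for n in (10, 100, 1000):
        T = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
        print(n, np.mean(np.abs(T - theta) > c))   # shrinks toward 0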

A sufficient estimator or sufficient statistic is a statistic that encapsulates all the information in the sample about the unknown parameter. For example, for Bernoulli trials with unknown success probability, the total number of successes is a sufficient statistic.

The terms ‘biased’ and ‘unbiased’ appear in an 1897 text by Bowley. The terms ‘efficiency’, ‘estimate’, ‘estimation’, and ‘sufficiency’ were introduced by Sir Ronald Fisher in 1922. The term ‘estimator’ was introduced in a specialized sense by Pitman in 1939.

Subjects: Probability and Statistics.

