A computer-intensive resampling method for estimating the properties of a distribution while making minimal assumptions. In this respect it resembles the jackknife. The idea is simple. Suppose we have n observations x_1, x_2,…, x_n from an unknown distribution. We assume that the population being sampled has 1/n of its observations equal to x_1, 1/n equal to x_2, and so on. Using pseudo-random numbers we now select m sets of n observations from this hypothetical distribution. If we are interested in, for example, the median of the original distribution, then we can use the m ‘sample’ medians to give an overall estimate and a confidence interval for that estimate.
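The procedure can be sketched as follows; this is a minimal illustration using only the Python standard library, with the data values, the number of resamples m, and the percentile confidence interval all chosen for the example rather than taken from the text.

```python
import random
import statistics

def bootstrap_medians(data, m=1000, seed=0):
    """Draw m bootstrap resamples (sampling with replacement,
    i.e. from the hypothetical population placing mass 1/n on
    each observation) and return their sorted medians."""
    rng = random.Random(seed)
    n = len(data)
    medians = []
    for _ in range(m):
        resample = [rng.choice(data) for _ in range(n)]
        medians.append(statistics.median(resample))
    return sorted(medians)

# Hypothetical data for illustration
data = [2.1, 3.4, 1.7, 4.0, 2.8, 3.1, 2.5, 3.9, 1.9, 3.3]
meds = bootstrap_medians(data)

# Overall estimate: mean of the m 'sample' medians
estimate = statistics.mean(meds)

# Simple percentile 95% confidence interval
lo = meds[int(0.025 * len(meds))]
hi = meds[int(0.975 * len(meds))]
```

The interval here is the simple percentile interval; in practice refinements such as bias-corrected intervals are often preferred.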
In practice, bootstrap estimators are often slightly biased. However, this bias can itself be estimated—again using bootstrap methods. Estimation using this form of bias correction is referred to as the double bootstrap. The term ‘bootstrap’ was introduced by Efron in 1979.
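The basic bias estimate underlying such corrections can be sketched as below: the bias of a statistic is approximated by the mean of its bootstrap replicates minus its value on the original data, and subtracting this gives a bias-corrected estimate. This is a single-level sketch with illustrative data; the double bootstrap proper nests a second round of resampling inside each replicate to estimate the bias of the corrected estimator itself.

```python
import random
import statistics

def bootstrap_bias(data, stat, m=2000, seed=0):
    """Estimate the bias of `stat` as
    mean(bootstrap replicates of stat) - stat(original data)."""
    rng = random.Random(seed)
    n = len(data)
    reps = [stat([rng.choice(data) for _ in range(n)]) for _ in range(m)]
    return statistics.mean(reps) - stat(data)

# Hypothetical data for illustration
data = [2.1, 3.4, 1.7, 4.0, 2.8, 3.1, 2.5, 3.9, 1.9, 3.3]
theta = statistics.median(data)
bias = bootstrap_bias(data, statistics.median)

# Bias-corrected estimate of the median
corrected = theta - bias
```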