## Quick Reference

A sequence of random variables *x*_{1}, …, *x*_{n}, … converges in mean squares to a random variable *x* if *E*[*x*^{2}] and *E*[*x*_{n}^{2}] exist and the expectation of the squared (Euclidean) distance between *x*_{n} and *x* converges to zero as *n* tends to infinity:

lim_{n→∞} *E*[(*x*_{n} − *x*)^{2}] = 0.

In particular, *x* can be a constant, *x* = θ. In this case convergence of *x*_{n} to θ in mean squares is equivalent to the convergence of both the bias and the variance of *x*_{n} to zero as *n* tends to infinity. Convergence in mean squares implies convergence in probability (the converse does not hold, in general). It is a particular case of convergence in the *p*th mean (or in *L*^{p} norm), which requires that *E*[|*x*|^{p}] and *E*[|*x*_{n}|^{p}] exist and that

lim_{n→∞} *E*[|*x*_{n} − *x*|^{p}] = 0.

Convergence in the *p*th mean implies convergence in the *r*th mean for every *r* ∈ (1, *p*).
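The special case *x* = θ can be illustrated numerically. Below is a minimal sketch (not part of the original entry) using the sample mean of i.i.d. normal draws as *x*_{n}: its bias is zero and its variance is σ²/*n*, so *E*[(*x*_{n} − θ)²] = σ²/*n* → 0, i.e. the sample mean converges to θ in mean squares. The function name and parameters are illustrative, not from the source.

```python
import random

def mse_of_sample_mean(theta, sigma, n, trials, rng):
    """Estimate E[(x_bar_n - theta)^2] for the sample mean of n
    i.i.d. N(theta, sigma^2) draws, averaged over many Monte Carlo trials."""
    total = 0.0
    for _ in range(trials):
        xbar = sum(rng.gauss(theta, sigma) for _ in range(n)) / n
        total += (xbar - theta) ** 2
    return total / trials

rng = random.Random(0)  # fixed seed for reproducibility
# Theory predicts E[(x_bar_n - theta)^2] = sigma^2 / n, so the empirical
# mean squared error should shrink roughly tenfold as n grows tenfold.
for n in (10, 100, 1000):
    print(n, mse_of_sample_mean(theta=2.0, sigma=1.0, n=n, trials=2000, rng=rng))
```

The printed mean squared errors decrease towards zero as *n* grows, which is exactly the statement lim_{n→∞} *E*[(*x*_{n} − θ)²] = 0.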

*Subjects:* Economics.