
convergence in mean squares



 


Quick Reference

A sequence of random variables x_1, …, x_n, … converges in mean squares to a random variable x if E[x^2] and E[x_n^2] exist and the expectation of the squared (Euclidean) distance between x_n and x converges to zero as n tends to infinity: lim_{n→∞} E[(x_n − x)^2] = 0. In particular, x can be a constant, x = θ; in this case convergence of x_n to θ in mean squares is equivalent to the convergence of both the bias and the variance of x_n to zero as n tends to infinity. Convergence in mean squares implies convergence in probability (the converse does not hold in general). This is a particular case of convergence in the pth mean (or in L^p norm), which requires that E[|x|^p] and E[|x_n|^p] exist and lim_{n→∞} E[|x_n − x|^p] = 0. Convergence in pth mean implies convergence in rth mean for every r ∈ (1, p).
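As a sketch of why the constant case works as stated, write b_n = E[x_n] − θ for the bias of x_n (notation introduced here only for illustration). The bias–variance decomposition of the mean squared error gives

E[(x_n − θ)^2] = Var(x_n) + (E[x_n] − θ)^2 = Var(x_n) + b_n^2,

so the mean squared error tends to zero precisely when both Var(x_n) and b_n tend to zero. The implication to convergence in probability follows from Markov's inequality applied to (x_n − x)^2: for any ε > 0,

P(|x_n − x| ≥ ε) ≤ E[(x_n − x)^2] / ε^2 → 0 as n → ∞.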


Subjects: Economics.

