
EDIT: Fixed minor mistakes. An estimator of $\theta$ (call it $T_n$) is consistent if it converges in probability to $\theta$. Using your notation,

$$\operatorname{plim}_{n\to\infty} T_n = \theta,$$

that is,

$$\lim_{n\to\infty} P\left(|T_n - \theta| \geq \epsilon\right) = 0 \quad \text{for all } \epsilon > 0.$$

The easiest way to show convergence in probability (and hence consistency) is to invoke Chebyshev's Inequality, which states:

$$P\left(|T_n - \theta| \geq \epsilon\right) \leq \frac{E\left[(T_n - \theta)^2\right]}{\epsilon^2}.$$
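As a quick numerical illustration of the Chebyshev route (my own sketch, not part of the answer; the normal distribution, the sample sizes, $\sigma = 2$, and $\epsilon = 0.5$ are arbitrary choices), take $T_n = \bar{X}_n$, so $E[(T_n - \mu)^2] = \sigma^2/n$ and the bound shrinks to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, eps = 5.0, 2.0, 0.5

for n in [10, 100, 1000]:
    # 5000 replications of the sample mean T_n at sample size n
    means = rng.normal(mu, sigma, size=(5000, n)).mean(axis=1)
    freq = np.mean(np.abs(means - mu) >= eps)  # empirical P(|T_n - mu| >= eps)
    bound = min(sigma**2 / (n * eps**2), 1.0)  # Chebyshev bound E[(T_n - mu)^2] / eps^2
    print(f"n={n:5d}  empirical P={freq:.4f}  Chebyshev bound={bound:.4f}")
```

Both the empirical tail probability and the bound go to zero as $n$ grows, which is exactly the display above.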

However, $T_n$ is not a consistent estimator of $\mu$. EDIT 3: See cardinal's points in the comments below. @G.JayKerns Unbiasedness is not necessary for this. Consider

$$S_n = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \bar{X}_n\right)^2}.$$

$S_n$ is a biased estimator of the standard deviation, yet you can use the above argument to show that it is consistent.
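A small simulation (again my own sketch; normal data with $\sigma = 3$ and the tolerance $0.1$ are arbitrary assumptions) shows both the finite-sample bias of $S_n$ and its consistency:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 3.0  # true standard deviation

for n in [10, 100, 5000]:
    samples = rng.normal(0.0, sigma, size=(2000, n))
    # S_n = sqrt((1/(n-1)) * sum (X_i - Xbar_n)^2): biased for sigma, but consistent
    s_n = samples.std(axis=1, ddof=1)
    tail = np.mean(np.abs(s_n - sigma) >= 0.1)  # empirical P(|S_n - sigma| >= 0.1)
    print(f"n={n:5d}  mean of S_n={s_n.mean():.4f}  empirical P={tail:.4f}")
```

At $n = 10$ the average of $S_n$ sits noticeably below $\sigma$ (the bias), while the tail probability still collapses to zero as $n$ grows (the consistency).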

Thus, the concept of consistency extends from the sequence of estimators to the rule used to generate it. For instance, suppose that the rule is to "compute the sample mean," so that $(\bar{X}_n)_{n=1}^{\infty}$ is a sequence of sample means over samples of increasing size.
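Applying that rule along a single growing sample can be sketched as follows (an illustration I've added; exponential data with mean $1.5$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 1.5
x = rng.exponential(mu, size=100_000)

# The "rule" is: compute the sample mean. Applying it at each n
# yields the sequence Xbar_1, Xbar_2, ... over samples of increasing size.
running_mean = np.cumsum(x) / np.arange(1, len(x) + 1)

for n in [10, 1000, 100_000]:
    print(f"n={n:6d}  Xbar_n={running_mean[n - 1]:.4f}")
```

The printed values drift toward the true mean, which is the consistency of the rule, not of any single fixed-$n$ estimator.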

Consistency is a relatively weak property and is considered necessary for all reasonable estimators. This is in contrast to optimality properties such as efficiency, which state that the estimator is "best". ... The above theorem can be used to prove that $S^2$ is a consistent estimator of $\mathrm{Var}(X_i)$:

$$S^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2.$$
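That claim can be spot-checked by simulation (a sketch under assumed $N(0, 2^2)$ data, so $\mathrm{Var}(X_i) = 4$; the tolerance $0.5$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
true_var = 4.0  # Var(X_i) for N(0, 2^2)

for n in [10, 100, 5000]:
    samples = rng.normal(0.0, 2.0, size=(2000, n))
    s2 = samples.var(axis=1, ddof=1)  # S^2 with the 1/(n-1) factor
    tail = np.mean(np.abs(s2 - true_var) >= 0.5)  # empirical P(|S^2 - Var| >= 0.5)
    print(f"n={n:5d}  empirical P={tail:.4f}")
```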