2. Consistency

Consistency is a property of an estimation method as the sample size grows. The thought experiment is even more difficult than that for unbiasedness: in addition to thinking of drawing different samples at a given sample size, we now also imagine what happens as the sample size gets larger.
DEFINITION: A sequence of estimators $\{\hat{\theta}_n : n = 1, 2, \ldots\}$ is consistent (weakly consistent) for $\theta$ if

$$\operatorname{plim}_{n \to \infty} \hat{\theta}_n = \theta \quad \text{for all } \theta \in \Theta.$$

More precisely, for all $\epsilon > 0$,

$$P\bigl(|\hat{\theta}_n - \theta| > \epsilon\bigr) \to 0 \quad \text{as } n \to \infty.$$

If $\hat{\theta}_n$ (used as shorthand for the sequence) is not consistent, we say it is inconsistent, and its inconsistency is $\operatorname{plim} \hat{\theta}_n - \theta$.
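To make the definition concrete, here is a minimal Monte Carlo sketch of $P(|\hat{\theta}_n - \theta| > \epsilon)$ shrinking as $n$ grows, using the sample mean as the estimator. The exponential population, $\theta = 2$, $\epsilon = 0.1$, and the simulation sizes are illustrative choices, not from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate P(|theta_hat_n - theta| > eps) for the sample mean of an
# exponential population with mean theta = 2 (all constants illustrative).
theta, eps, reps = 2.0, 0.1, 2_000

for n in (10, 100, 1_000, 10_000):
    samples = rng.exponential(scale=theta, size=(reps, n))
    theta_hat = samples.mean(axis=1)                  # one estimate per sample
    p_far = np.mean(np.abs(theta_hat - theta) > eps)  # simulated tail probability
    print(f"n = {n:>6}: P(|theta_hat - theta| > {eps}) ~ {p_far:.3f}")
```

The printed probabilities fall toward zero as $n$ grows, which is exactly what the definition demands for each fixed $\epsilon$.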
It is (too) common that the inconsistency of an estimator is called its "asymptotic bias." In many cases we cannot compute expected values, and so we cannot even discuss asymptotic bias. In a sense, consistency is a minimal requirement of an estimation procedure: it requires that, if we have access to increasing amounts of data from the underlying population, we should be able to get closer and closer to the population parameter. Sometimes the following result is useful for establishing consistency of an estimator.
THEOREM: If $\{\hat{\theta}_n\}$ is a sequence of estimators with $\mathrm{MSE}(\hat{\theta}_n) \to 0$ as $n \to \infty$, then $\hat{\theta}_n \xrightarrow{p} \theta$.

Proof: Immediate from Chebyshev's inequality, because for any $\epsilon > 0$,

$$P\bigl(|\hat{\theta}_n - \theta| > \epsilon\bigr) \le \frac{E[(\hat{\theta}_n - \theta)^2]}{\epsilon^2} = \frac{\mathrm{MSE}(\hat{\theta}_n)}{\epsilon^2}.$$
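As a numerical companion to the theorem, the sketch below simulates both sides of the Chebyshev inequality for $\bar{X}_n$; the normal population and the constants $\mu = 5$, $\sigma = 3$, $\epsilon = 0.5$ are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# For X_bar_n from a N(mu, sigma^2) population, MSE(X_bar_n) = sigma^2/n -> 0,
# and Chebyshev bounds the tail probability by MSE/eps^2 (constants illustrative).
mu, sigma, eps, reps = 5.0, 3.0, 0.5, 20_000

for n in (10, 100, 1_000):
    x_bar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    mse = np.mean((x_bar - mu) ** 2)           # simulated MSE, close to sigma^2/n
    p_far = np.mean(np.abs(x_bar - mu) > eps)  # simulated tail probability
    print(f"n = {n:>5}: MSE ~ {mse:.4f}, bound {mse / eps**2:.4f} >= P ~ {p_far:.4f}")
```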
Remember that

$$\mathrm{MSE}(\hat{\theta}_n) = \mathrm{Var}(\hat{\theta}_n) + \bigl[\mathrm{Bias}(\hat{\theta}_n)\bigr]^2,$$

so we require both the variance and the bias to go to zero:

$$\lim_{n \to \infty} \mathrm{Var}(\hat{\theta}_n) = 0, \qquad \lim_{n \to \infty} \mathrm{Bias}(\hat{\theta}_n) = 0.$$
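The decomposition is easy to verify numerically. Below is a sketch using a deliberately biased estimator, $\hat{\theta} = \frac{n}{n+1}\bar{X}_n$; this shrinkage estimator and the $N(4, 2^2)$ population are hypothetical choices made only so the bias term is nonzero.

```python
import numpy as np

rng = np.random.default_rng(2)

# Check MSE = Var + Bias^2 for the (hypothetical) biased estimator
# theta_hat = n/(n+1) * X_bar on a N(4, 2^2) population.
mu, sigma, n, reps = 4.0, 2.0, 50, 100_000

theta_hat = (n / (n + 1)) * rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
mse = np.mean((theta_hat - mu) ** 2)
var = np.var(theta_hat)                 # ddof=0, so the identity holds exactly
bias = np.mean(theta_hat) - mu
print(f"MSE ~ {mse:.5f}, Var + Bias^2 ~ {var + bias**2:.5f}")  # the two agree
```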
Requiring consistency as a minimal feature rules out certain silly unbiased estimators: to ensure that an unbiased estimator is also consistent, its variance (if it can be computed) should shrink to zero. Consistency does not rule out all inferior estimators, however. Suppose that, for estimating the mean $\mu$ of a population with finite second moment, we use only the first $n_1 = n/2$ ($n$ even) or $n_1 = (n+1)/2$ ($n$ odd) observations of a sample of size $n$. The variance of $\bar{X}_{n_1}$ is $\sigma^2/n_1 \to 0$ as $n \to \infty$, but $\bar{X}_{n_1}$ has a larger variance than $\bar{X}_n$:

$$\frac{\mathrm{Var}(\bar{X}_{n_1})}{\mathrm{Var}(\bar{X}_n)} = \frac{n}{n_1} \to 2 \quad \text{as } n \to \infty.$$
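A short simulation makes the inefficiency visible; the Uniform(0, 1) population and the sample sizes are illustrative choices, and any finite-variance population gives the same ratio.

```python
import numpy as np

rng = np.random.default_rng(3)

# The half-sample mean is consistent but inefficient: the variance ratio
# against the full-sample mean is n/n1 -> 2 (population choice illustrative).
reps = 20_000

for n in (10, 100, 1_000):
    n1 = (n + 1) // 2                          # n/2 if n is even, (n+1)/2 if odd
    x = rng.uniform(0.0, 1.0, size=(reps, n))
    var_half = np.var(x[:, :n1].mean(axis=1))  # mean of the first n1 observations
    var_full = np.var(x.mean(axis=1))
    print(f"n = {n:>5}: Var ratio ~ {var_half / var_full:.3f} (theory: {n / n1:.3f})")
```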
An important caveat is that the variance of an unbiased, consistent estimator need not exist, and so the MSE need not be defined.

EXAMPLE: Suppose $X$ is a random variable with $E|X| < \infty$ and $E(X) = \mu$, but $E(X^2) = \infty$. Given a random sample $\{X_i : i = 1, 2, \ldots, n\}$, the sample average $\bar{X}_n$ is unbiased and, by the law of large numbers, consistent for $\mu$, even though $\mathrm{Var}(\bar{X}_n)$ does not exist.
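To illustrate the example numerically, one concrete distribution with these properties is a Student's $t$ with 1.5 degrees of freedom ($E|X| < \infty$, $E(X) = 0$, $E(X^2) = \infty$); picking it, along with $\epsilon = 0.25$ and the sample sizes, is my assumption, since the notes leave the distribution abstract.

```python
import numpy as np

rng = np.random.default_rng(4)

# t(1.5) has a finite mean (mu = 0) but infinite variance, so Var(X_bar_n)
# does not exist -- yet the sample mean still converges in probability to mu.
mu, df, eps, reps = 0.0, 1.5, 0.25, 2_000

for n in (100, 1_000, 10_000):
    x_bar = rng.standard_t(df, size=(reps, n)).mean(axis=1)
    p_far = np.mean(np.abs(x_bar - mu) > eps)  # shrinks, though slowly
    print(f"n = {n:>6}: P(|X_bar - mu| > {eps}) ~ {p_far:.3f}")
```

Convergence is slower than in the finite-variance case (the rate here is $n^{-1/3}$ rather than $n^{-1/2}$), but the probabilities still head to zero, so consistency holds without a defined MSE.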