Facts About Convergence in Probability, Boundedness in Probability, and Convergence in Distribution

(i) If $W_n \overset{p}{\to} W$ then $W_n \overset{d}{\to} W$. In practice, this is not especially useful because typically $W \equiv c$ for some constant, and then $W_n \overset{d}{\to} c$, which means the limiting distribution is degenerate: $P(W = c) = 1$.

(ii) If $W_n \overset{d}{\to} W$ then $W_n = O_p(1)$. This is very useful because it implies that if we can show a sequence converges in distribution, then it is automatically bounded in probability. We conclude immediately that if $W_n \overset{d}{\to} W$ and $g(\cdot)$ is continuous, then $g(W_n) = O_p(1)$ [because $g(W_n) \overset{d}{\to} g(W)$].

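As a quick numerical illustration of fact (ii), the following simulation sketch (not part of the original notes; the Exponential(1) design, sample sizes, and variable names are illustrative choices) checks that a sequence converging in distribution stays bounded in probability:

```python
# Sketch: W_n = sqrt(n)*(Xbar_n - 1) for Exponential(1) data, so W_n ->d N(0,1).
# If W_n = O_p(1), then P(|W_n| > M) should stay small, uniformly in n, for M large.
import numpy as np

rng = np.random.default_rng(0)
reps, M = 10_000, 3.0
for n in (10, 100, 1000):
    x = rng.exponential(scale=1.0, size=(reps, n))   # mean 1, standard deviation 1
    w_n = np.sqrt(n) * (x.mean(axis=1) - 1.0)
    print(n, np.mean(np.abs(w_n) > M))               # small for every n
```
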
(iii) The asymptotic equivalence lemma allows us to obtain the limiting distribution of one sequence if we know (a) that it is "getting close" to another sequence and (b) the limiting distribution of that other sequence. Formally, suppose $Z_n \overset{d}{\to} Z$ and $W_n - Z_n \overset{p}{\to} 0$ [or $W_n - Z_n = o_p(1)$]. Then
$$W_n \overset{d}{\to} Z.$$
The result is used routinely for large-sample approximations to estimators and test statistics.

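A simulation sketch of the lemma (not part of the original notes; the Uniform(0,1) design and the particular $o_p(1)$ remainder are illustrative): $Z_n \overset{d}{\to} N(0,1)$ and $W_n = Z_n + R_n$ with $R_n = o_p(1)$, so $W_n$ should inherit the same $N(0,1)$ limit.

```python
# Sketch: W_n = Z_n + R_n with R_n = o_p(1) should have the same N(0,1) limit as Z_n.
import numpy as np

rng = np.random.default_rng(1)
reps, n = 10_000, 2_000
x = rng.uniform(0.0, 1.0, size=(reps, n))                 # mean 1/2, variance 1/12
z_n = np.sqrt(n) * (x.mean(axis=1) - 0.5) / np.sqrt(1.0 / 12.0)
r_n = rng.standard_normal(reps) / np.sqrt(n)               # o_p(1) remainder
w_n = z_n + r_n
# Empirical quantiles of W_n track the N(0,1) quantiles (-1.645, 0, 1.645)
print([round(q, 3) for q in np.quantile(w_n, [0.05, 0.5, 0.95])])
```
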
Convergence in Distribution for Random Vectors

We can use the natural extension for random vectors. Namely, $W_n \overset{d}{\to} W$ if $F_n(w) \to F(w)$ at all $w \in \mathbb{R}^k$ where $F(\cdot)$ is continuous. But there is also a useful equivalent condition based on convergence of linear combinations.

FACT: $W_n \overset{d}{\to} W$ if and only if for all $a \in \mathbb{R}^k$ with $a'a = 1$,
$$a'W_n = a_1 W_{n1} + a_2 W_{n2} + \cdots + a_k W_{nk} \overset{d}{\to} a'W.$$

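The linear-combination characterization can be checked numerically. The sketch below (not part of the original notes; the dependent exponential design, the unit vector $a$, and all names are illustrative) verifies that $a'W_n$ with $W_n = \sqrt{n}(\bar{X}_n - \mu)$ behaves like a Normal$(0, a'\Sigma a)$ variable in large samples:

```python
# Sketch: for dependent, non-normal bivariate data, a'W_n should be
# approximately Normal(0, a'Sigma a) when n is large.
import numpy as np

rng = np.random.default_rng(2)
reps, n = 10_000, 500
e1 = rng.exponential(1.0, size=(reps, n))
e2 = rng.exponential(1.0, size=(reps, n))
x1, x2 = e1, e1 + e2                                   # dependent components
mu = np.array([1.0, 2.0])
sigma = np.array([[1.0, 1.0],                          # Var(X1)=1, Cov(X1,X2)=1
                  [1.0, 2.0]])                         # Var(X2)=2
w_n = np.sqrt(n) * np.column_stack([x1.mean(axis=1) - mu[0],
                                    x2.mean(axis=1) - mu[1]])
a = np.array([0.6, 0.8])                               # a'a = 1
lin = w_n @ a
print("simulated mean/var:", round(lin.mean(), 3), round(lin.var(), 3))
print("theoretical a'Sigma a:", a @ sigma @ a)         # = 2.6
```
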
In other words, we can establish convergence of the joint CDF for a sequence of random vectors by establishing convergence of univariate linear combinations of the random vector. This characterization is especially useful for establishing convergence to multivariate normality because linear combinations of a multivariate normal are normal.

The continuous convergence result continues to hold: if $W_n \overset{d}{\to} W$ and $g : \mathbb{R}^k \to \mathbb{R}^m$ is continuous, then $g(W_n) \overset{d}{\to} g(W)$.

In some unusual cases, it is helpful to know that continuous convergence holds if we allow $g(\cdot)$ to be discontinuous on a set of points $G$ with $P(W \in G) = 0$. For example, if $W_n$ ($2 \times 1$) converges in distribution to a bivariate standard normal, then we can conclude $W_{n1}/W_{n2} \overset{d}{\to} W_1/W_2$, where $W = (W_1, W_2)' \sim \text{Normal}(0, I_2)$. While the function $g(w_1, w_2) = w_1/w_2$ is discontinuous at all points with $w_2 = 0$, this has no effect because $P(W_2 = 0) = 0$.

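A simulation sketch of the ratio example (not part of the original notes; the data-generating choices and names are illustrative). Since the ratio of two independent N(0,1) variables is standard Cauchy, the simulated quantiles of $W_{n1}/W_{n2}$ should approach the Cauchy quantiles:

```python
# Sketch: W_n1/W_n2 ->d W1/W2, and W1/W2 is standard Cauchy when W ~ Normal(0, I_2).
import numpy as np

rng = np.random.default_rng(3)
reps, n = 10_000, 1_000
u = rng.uniform(-1.0, 1.0, size=(reps, n))             # mean 0, variance 1/3
v = rng.exponential(1.0, size=(reps, n)) - 1.0          # mean 0, variance 1
w_n1 = np.sqrt(n) * u.mean(axis=1) / np.sqrt(1.0 / 3.0)
w_n2 = np.sqrt(n) * v.mean(axis=1)
ratio = w_n1 / w_n2
# Standard Cauchy quartiles are -1, 0, 1; the tails are very heavy
print([round(q, 3) for q in np.quantile(ratio, [0.25, 0.5, 0.75])])
```
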
Continuous convergence has many applications. For example, for $A$ $m \times k$ and $b$ $m \times 1$,
$$AW_n + b \overset{d}{\to} AW + b.$$
For $C$ a $k \times k$ matrix,
$$W_n' C W_n \overset{d}{\to} W' C W$$
because the function $g(w) = w'Cw$ is continuous on $\mathbb{R}^k$. We will cover many other applications in Section 8.

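A sketch of the quadratic-form application (not part of the original notes; the exponential design, $k = 3$, and $C = I_3$ are illustrative choices). With $W_n$ the standardized vector of sample means, $W$ is Normal$(0, I_3)$ and $W'W$ is chi-square with 3 degrees of freedom:

```python
# Sketch: W_n' I_k W_n should be approximately chi-square(k) for large n.
import numpy as np

rng = np.random.default_rng(4)
reps, n, k = 10_000, 500, 3
x = rng.exponential(1.0, size=(reps, n, k))            # independent columns, mean 1, variance 1
w_n = np.sqrt(n) * (x.mean(axis=1) - 1.0)              # approx Normal(0, I_3)
q_n = np.sum(w_n ** 2, axis=1)                         # W_n' I_3 W_n
# chi-square(3) has mean 3 and variance 6
print(round(q_n.mean(), 3), round(q_n.var(), 3))
```
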
7. The Central Limit Theorem

The central limit theorem (CLT) is fundamental for establishing convergence in distribution of appropriately standardized sample averages.

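Before stating the theorem, a small simulation sketch (not part of the original notes; the chi-square(1) population and sample sizes are illustrative) shows the effect for a skewed population: the standardized averages have mean near 0, variance near 1, and shrinking skewness, consistent with a N(0,1) limit.

```python
# Sketch: standardized sample averages of skewed chi-square(1) data look
# increasingly like N(0,1) as n grows.
import numpy as np

rng = np.random.default_rng(5)
reps = 10_000
mu, sigma = 1.0, np.sqrt(2.0)                 # chi-square(1): mean 1, variance 2
for n in (5, 50, 500):
    x = rng.chisquare(df=1, size=(reps, n))
    z_n = np.sqrt(n) * (x.mean(axis=1) - mu) / sigma
    skew = np.mean(((z_n - z_n.mean()) / z_n.std()) ** 3)
    print(n, round(z_n.mean(), 3), round(z_n.var(), 3), round(skew, 3))
```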