Economics 241B

Review of Limit Theorems for Sequences of Random Variables

Convergence in Distribution

The previous definitions of convergence focus on the outcome sequences of a random variable. Convergence in distribution refers to the probability distribution of each element of the sequence directly.

Definition. A sequence of random variables $\{\bar Y_1, \bar Y_2, \ldots\}$ is said to converge in distribution to a random variable $Y$ if

$$\lim_{n \to \infty} P\left(\bar Y_n < c\right) = P\left(Y < c\right)$$

at all $c$ such that $F_Y$ is continuous. We express this as $\bar Y_n \xrightarrow{D} Y$.

An equivalent definition is

Definition. A sequence of random variables $\{\bar Y_1, \bar Y_2, \ldots\}$ is said to converge in distribution to $F_Y$ if

$$\lim_{n \to \infty} F_{\bar Y_n}(c) = F_Y(c)$$

at all $c$ such that $F_Y$ is continuous. We express this as $\bar Y_n \xrightarrow{D} F_Y$. The distribution $F_Y$ is the asymptotic (or limiting) distribution of $\bar Y_n$. Convergence in distribution is simply the pointwise convergence of $F_{\bar Y_n}$ to $F_Y$. (The requirement that $F_Y$ be continuous at all $c$ will hold for all applications in this course, with the exception of binary dependent variables.) In most cases $F_Y$ is a Gaussian distribution, for which we write $\bar Y_n \xrightarrow{D} N\left(\mu, \sigma^2\right)$.

Convergence in distribution is closely related to convergence in probability, if the limit quantity is changed. In the discussion above, a sequence of random variables was shown to converge (in probability) to a constant. One can also establish that a sequence of random variables converges in probability to a random variable. Recall, for convergence (i.p.) to a constant, the probability that $\bar Y_n$ is in an $\epsilon$-neighborhood of $\mu$ must be high:

$$P\left(\left|\bar Y_n - \mu\right| < \epsilon\right) > 1 - \delta.$$

That is,

$$P\left(\left\{\omega : \left|\bar Y_n(\omega) - \mu\right| < \epsilon\right\}\right) > 1 - \delta.$$

For convergence to a random variable $Y$, we need

$$P\left(\left\{\omega : \left|\bar Y_n(\omega) - Y(\omega)\right| < \epsilon\right\}\right) > 1 - \delta, \qquad (1)$$

that is, for large $n$ we want the probability to be high that the histogram for $\bar Y_n$ is close to the histogram for $Y$.
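The pointwise convergence of $F_{\bar Y_n}$ to $F_Y$ can be checked numerically. The sketch below (not part of the original notes; the choice $Y_i \sim \text{Uniform}(0,1)$, so that $\mu = 1/2$ and $\sigma^2 = 1/12$, is my assumption for illustration) simulates draws of the standardized sample mean $\sqrt{n}\,(\bar Y_n - \mu)/\sigma$ and compares its empirical CDF at a few points $c$ with the limiting $N(0,1)$ CDF, as the CLT predicts.

```python
import math
import random

def normal_cdf(c):
    """Phi(c), the N(0,1) CDF: the limiting distribution F_Y here."""
    return 0.5 * (1.0 + math.erf(c / math.sqrt(2.0)))

def standardized_means(n, reps=20000, seed=12345):
    """Monte Carlo draws of sqrt(n)*(Ybar_n - mu)/sigma for Y_i ~ Uniform(0,1)."""
    rng = random.Random(seed)
    mu, sigma = 0.5, math.sqrt(1.0 / 12.0)
    draws = []
    for _ in range(reps):
        ybar = sum(rng.random() for _ in range(n)) / n
        draws.append(math.sqrt(n) * (ybar - mu) / sigma)
    return draws

def empirical_cdf(draws, c):
    """F_hat(c): fraction of simulated draws below c (pointwise CDF estimate)."""
    return sum(z < c for z in draws) / len(draws)

draws = standardized_means(n=30)
for c in (-1.0, 0.0, 1.0):
    # the empirical value should be close to the normal CDF at each c
    print(c, round(empirical_cdf(draws, c), 3), round(normal_cdf(c), 3))
```

Convergence in distribution only requires this pointwise agreement of CDFs; nothing ties the simulated $\bar Y_n$ draws to a particular realization of $Y$.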
There is no natural measure of distance here, although we think of $\epsilon$ as defining the histogram bin width. If the two histograms are arbitrarily close as $n$ increases, then we write $\bar Y_n \xrightarrow{P} Y$. Because the two histograms are arbitrarily close, it should not be surprising that

$$\bar Y_n \xrightarrow{P} Y \;\Rightarrow\; \bar Y_n \xrightarrow{D} F_Y \quad \left(\text{sometimes written } \bar Y_n \xrightarrow{D} Y\right).$$

Why doesn't the reverse hold? Because of the basic workings of probability spaces, about which we rarely concern ourselves. By the fact that both $\bar Y_n$ and $Y$ are indexed by the same $\omega$ in (1), both quantities are defined on the same probability space. No such requirement is made in establishing convergence in distribution; we simply discuss a sequence of distribution functions.
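A standard counterexample makes this failure of the reverse implication concrete, and it can be simulated. The sketch below is my illustration, not from the notes: take $Y \sim N(0,1)$ and set $Y_n = -Y$ for every $n$. By symmetry each $Y_n$ has exactly the $N(0,1)$ distribution, so $Y_n \xrightarrow{D} Y$ trivially; but $|Y_n(\omega) - Y(\omega)| = 2|Y(\omega)|$ for the same $\omega$, which does not shrink as $n$ grows, so $P(|Y_n - Y| < \epsilon)$ never approaches 1.

```python
import random

rng = random.Random(0)
eps = 0.1
draws = [rng.gauss(0.0, 1.0) for _ in range(100000)]

# Y_n = -Y for every n, so |Y_n - Y| = 2|Y| and, for all n,
# P(|Y_n - Y| < eps) = P(|Y| < eps/2): a small constant, not -> 1.
prob = sum(abs(2.0 * y) < eps for y in draws) / len(draws)
print(prob)  # roughly 2*Phi(0.05) - 1, about 0.04, far from 1
```

Convergence in distribution sees only the identical marginal CDFs of $Y_n$ and $Y$; convergence in probability also sees the joint behavior on the common probability space, which is exactly where this sequence fails.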
Fall '08, Staff, Economics
