Economics 241B
Review of Limit Theorems for Sequences of Random Variables
Convergence in Distribution
The previous definitions of convergence focus on the outcome sequences of a random variable. Convergence in distribution refers directly to the probability distribution of each element of the sequence.
Definition. A sequence of random variables $\{\bar{Y}_1, \bar{Y}_2, \ldots\}$ is said to converge in distribution to a random variable $Y$ if
$$\lim_{n \to \infty} P\left(\bar{Y}_n < c\right) = P\left(Y < c\right)$$
at all $c$ at which $F_Y$ is continuous. We express this as $\bar{Y}_n \xrightarrow{D} Y$.
An equivalent definition is
Definition. A sequence of random variables $\{\bar{Y}_1, \bar{Y}_2, \ldots\}$ is said to converge in distribution to $F_Y$ if
$$\lim_{n \to \infty} F_{\bar{Y}_n}(c) = F_Y(c)$$
at all $c$ at which $F_Y$ is continuous. We express this as $\bar{Y}_n \xrightarrow{D} F_Y$.
The distribution $F_Y$ is the asymptotic (or limiting) distribution of $\bar{Y}_n$. Convergence in distribution is simply the pointwise convergence of $F_{\bar{Y}_n}$ to $F_Y$. (The requirement that $F_Y$ be continuous at all $c$ will hold for all applications in this course, with the exception of binary dependent variables.)
In most cases $F_Y$ is a Gaussian distribution, for which we write
$$\bar{Y}_n \xrightarrow{D} N\left(\mu, \sigma^2\right).$$
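As a numerical illustration (added here, not part of the original notes), the sketch below checks this pointwise CDF convergence for a concrete case: for i.i.d. Uniform(0,1) draws, the empirical CDF of the standardized sample mean approaches the standard normal CDF. The function names and the Uniform(0,1) choice are illustrative assumptions.

```python
import math
import numpy as np

def normal_cdf(c):
    # Standard normal CDF, computed via the error function
    return 0.5 * (1.0 + math.erf(c / math.sqrt(2.0)))

def empirical_cdf_standardized_mean(n, c, reps=20000, seed=0):
    # Empirical CDF at c of sqrt(n)(Ybar_n - mu)/sigma for Uniform(0,1) draws,
    # where mu = 1/2 and sigma^2 = 1/12
    rng = np.random.default_rng(seed)
    samples = rng.random((reps, n))
    z = math.sqrt(n) * (samples.mean(axis=1) - 0.5) / math.sqrt(1.0 / 12.0)
    return float(np.mean(z < c))
```

For moderate $n$ the empirical CDF is already close to the Gaussian limit at every continuity point $c$.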
Convergence in distribution is closely related to convergence in probability, if the limit quantity is changed. In the discussion above, a sequence of random variables was shown to converge (in probability) to a constant. One can also establish that a sequence of random variables converges in probability to a random variable. Recall, for convergence (i.p.) to a constant, the probability that $\bar{Y}_n$ is in an $\varepsilon$-neighborhood of $\mu$ must be high,
$$P\left(\left|\bar{Y}_n - \mu\right| < \varepsilon\right) > 1 - \delta.$$
That is
$$P\left(\omega : \left|\bar{Y}_n(\omega) - \mu\right| < \varepsilon\right) > 1 - \delta.$$
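A quick simulation of this recalled fact (an illustration added here, not from the notes): for i.i.d. Uniform(0,1) draws with $\mu = 1/2$, the probability that $\bar{Y}_n$ lands in an $\varepsilon$-neighborhood of $\mu$ rises toward 1 as $n$ grows. The function name and distribution choice are assumptions for the sketch.

```python
import numpy as np

def prob_mean_near_mu(n, eps=0.05, reps=20000, seed=1):
    # Estimate P(|Ybar_n - mu| < eps) for Uniform(0,1) draws, where mu = 1/2
    rng = np.random.default_rng(seed)
    ybar = rng.random((reps, n)).mean(axis=1)
    return float(np.mean(np.abs(ybar - 0.5) < eps))
```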
For convergence to a random variable, $Y$, we need
$$P\left(\omega : \left|\bar{Y}_n(\omega) - Y(\omega)\right| < \varepsilon\right) > 1 - \delta, \quad (1)$$
that is, for large $n$ we want the probability that the histogram for $\bar{Y}_n$ is close to the histogram for $Y$ to be high. There is no natural measure of distance here, although we think of $\varepsilon$ as defining the histogram bin width. If the two histograms are arbitrarily close as $n$ increases, then we write
$$\bar{Y}_n \xrightarrow{P} Y.$$
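Convergence in probability to a random variable can be simulated as well. In the hypothetical construction below, $\bar{Y}_n = Y + Z_n/\sqrt{n}$, with $Y$ and $Z_n$ standard normal on the same probability space (the same draw plays the role of $\omega$ for both), so the probability in (1) rises toward 1 as $n$ grows. The construction and function name are illustrative assumptions.

```python
import numpy as np

def prob_close_to_limit(n, eps=0.1, reps=50000, seed=2):
    # Y and Y_n = Y + Z/sqrt(n) share the same draws (the same omega),
    # so |Y_n(omega) - Y(omega)| = |Z(omega)|/sqrt(n) shrinks as n grows
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(reps)
    yn = y + rng.standard_normal(reps) / np.sqrt(n)
    return float(np.mean(np.abs(yn - y) < eps))
```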
Because the two histograms are arbitrarily close, it should not be surprising that
$$\bar{Y}_n \xrightarrow{P} Y \implies \bar{Y}_n \xrightarrow{D} F_Y \quad \text{(sometimes written } \bar{Y}_n \xrightarrow{D} Y\text{)}.$$
Why doesn't the reverse hold? Because of the basic workings of probability spaces, about which we rarely concern ourselves. Because both $\bar{Y}_n$ and $Y$ are indexed by the same $\omega$ in (1), both quantities are defined on the same probability space. No such requirement is made in establishing convergence in distribution; we simply discuss a sequence of distribution functions. Not only could $\bar{Y}_n$ and $Y$ be defined on different probability spaces, but for each $n$, $\bar{Y}_n$ could be defined on a different probability space.
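A standard counterexample (added here as an illustration, not from the notes) makes the one-way implication concrete: take $Y$ standard normal and set $\bar{Y}_n = -Y$ for every $n$. Each $\bar{Y}_n$ has exactly the distribution of $Y$, so $\bar{Y}_n \xrightarrow{D} Y$ trivially, yet $|\bar{Y}_n - Y| = 2|Y|$ never becomes small, so $\bar{Y}_n$ does not converge to $Y$ in probability.

```python
import numpy as np

def counterexample(reps=100000, eps=0.1, seed=3):
    # Y ~ N(0,1) and Y_n = -Y: identical distributions, but |Y_n - Y| = 2|Y|
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(reps)
    yn = -y
    # The empirical CDFs of Y_n and Y agree at every point checked...
    cdf_gap = max(abs(float(np.mean(yn < c)) - float(np.mean(y < c)))
                  for c in (-1.0, 0.0, 1.0))
    # ...but P(|Y_n - Y| < eps) = P(2|Y| < eps) stays small for every n
    p_close = float(np.mean(np.abs(yn - y) < eps))
    return cdf_gap, p_close
```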