MIT OpenCourseWare
http://ocw.mit.edu

14.30 Introduction to Statistical Methods in Economics, Spring 2009

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms

14.30 Introduction to Statistical Methods in Economics
Lecture Notes 17
Konrad Menzel
April 16, 2009

The Central Limit Theorem

Remember that last week we saw the DeMoivre-Laplace theorem for Binomial random variables, which essentially said that for large values of n, the standardization of the random variable Y ~ B(n, p),

    Z = \frac{Y - E[Y]}{\sqrt{Var(Y)}},

follows approximately a standard normal distribution. Since a binomial is a sum of i.i.d. zero/one random variables X_i (counting the number of trials resulting in a success), we can think of Y/n as the sample mean of X_1, ..., X_n. Therefore the DeMoivre-Laplace theorem is in fact also a result on the standardized mean of i.i.d. zero/one random variables. The Central Limit Theorem generalizes this to sample means of i.i.d. sequences from any other distribution with finite variance.

Theorem 1 (Central Limit Theorem) Suppose X_1, ..., X_n is a random sample of size n from a given distribution with mean μ and variance σ² < ∞. Then for any fixed number x,

    \lim_{n \to \infty} P\left( \sqrt{n}\, \frac{\bar{X}_n - \mu}{\sigma} \le x \right) = \Phi(x)

We say that √n(X̄_n − μ) converges in distribution (some people also say converges in law) to a normal with mean 0 and variance σ², or in symbols:

    \sqrt{n}\,(\bar{X}_n - \mu) \xrightarrow{d} N(0, \sigma^2)

So how does the mean converge both to a constant (according to the Law of Large Numbers) and to a random variable with variance one (according to the Central Limit Theorem) at the same time? The crucial detail here is that for the Central Limit Theorem, we blow the deviation of the sample mean from μ up by a factor of √n, which turns out to be exactly the right rate to keep the distribution from collapsing to a point (which happens for the Law of Large Numbers) or exploding to infinity.
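The theorem can be checked by simulation (this example is not part of the original notes). The sketch below draws i.i.d. Exponential(1) samples, which are strongly skewed and far from normal, forms the standardized statistic √n(X̄_n − μ)/σ many times, and checks that the resulting draws behave like a standard normal: mean near 0, variance near 1, and about 95% of the mass inside ±1.96. The sample sizes and replication counts are arbitrary choices for illustration.

```python
import math
import random
import statistics

def standardized_mean(draw, n, mu, sigma, rng):
    """Return sqrt(n) * (sample mean - mu) / sigma for n i.i.d. draws."""
    xbar = sum(draw(rng) for _ in range(n)) / n
    return math.sqrt(n) * (xbar - mu) / sigma

# Exponential(1) has mean 1 and variance 1, but is heavily right-skewed.
rng = random.Random(0)
draws = [standardized_mean(lambda r: r.expovariate(1.0), 400, 1.0, 1.0, rng)
         for _ in range(5000)]

# By the CLT these should look like N(0, 1) draws:
print("mean:    ", round(statistics.mean(draws), 3))       # near 0
print("variance:", round(statistics.pvariance(draws), 3))  # near 1
coverage = sum(abs(z) <= 1.96 for z in draws) / len(draws)
print("P(|Z| <= 1.96):", round(coverage, 3))               # near 0.95
```

Note that nothing about the exponential distribution is special here; replacing `expovariate` with any other distribution with finite variance (and adjusting μ and σ) should give the same limiting behavior.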
Why is the standard normal distribution a plausible candidate for the limiting distribution of a sample mean to start with? Remember that we argued that the sum of two independent normal random variables again follows a normal distribution (though with a different variance, but since we only look at the standardized mean, this doesn't matter), i.e. the normal family of distributions is stable with respect to convolution (i.e. addition of independent random variables from the family). Note that this is not true for most other distributions (e.g. the uniform or the exponential). Since the sample mean is a weighted sum of the individual observations, increasing the sample from n to 2n, say, amounts to adding the mean of the sequence X_{n+1}, ..., X_{2n} to the first mean and then dividing by 2. Therefore, if we postulated that even for large n, the distribution of X̄_n was not such that the sum ...

[Figure: simulated paths of the sample mean X̄_n plotted against the sample size n = 0, ..., 500.]
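The two rates of convergence discussed above can also be seen numerically (this sketch is not part of the original notes; the sample sizes are arbitrary). The variance of X̄_n itself shrinks like σ²/n, so its distribution collapses to the point μ as the Law of Large Numbers says, while the variance of the blown-up deviation √n(X̄_n − μ) stays near σ², as the Central Limit Theorem requires.

```python
import math
import random
import statistics

def sample_mean(n, rng):
    # i.i.d. Uniform(0, 1) draws: mean 1/2, variance 1/12.
    return sum(rng.random() for _ in range(n)) / n

rng = random.Random(1)
mu, sigma2 = 0.5, 1.0 / 12.0

for n in (50, 2000):
    means = [sample_mean(n, rng) for _ in range(1000)]
    # Collapses like sigma^2 / n (Law of Large Numbers):
    var_mean = statistics.pvariance(means)
    # Stays near sigma^2 (the sqrt(n) scaling of the CLT):
    var_scaled = statistics.pvariance([math.sqrt(n) * (m - mu) for m in means])
    print(n, round(var_mean, 6), round(var_scaled, 4))
```

The first variance column shrinks roughly by a factor of 40 as n goes from 50 to 2000, while the second column stays close to 1/12 ≈ 0.083 at both sample sizes.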
