ECON 103, Lecture 3: Statistics Review (contd.)
Maria Casanova
January 14th (version 0)

Requirements for this lecture: Chapter 2 and the beginning of Chapter 3 of Stock and Watson.

1. Estimators
An estimator θ̂ of a parameter θ is a function of the random variables in the sample:

θ̂ = h(X1, X2, ..., Xn)

Therefore, the estimator is itself a random variable and has a distribution. The sampling distribution will typically vary with the sample size n.

Characterizing the sampling distribution:

Finite-sample distribution: In some cases we can derive the exact sampling distribution for any sample size n.

Asymptotic distribution: Other times we can only establish what the sampling distribution looks like as n → ∞.
Figure: Sampling distribution of μ̂X for different sample sizes when X ∼ N(6, 0.09). [Four panels: (a) n=1, (b) n=10, (c) n=50, (d) n=250.]
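The pattern in the figure can be reproduced by simulation: draw many samples of each size n from N(6, 0.09) (note the variance 0.09 means σ = 0.3), compute each sample's mean, and summarize the spread of those means. A minimal NumPy sketch (the seed and repetition count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
reps = 100_000  # simulated samples per sample size

# sigma^2 = 0.09, so sigma = 0.3
for n in (1, 10, 50, 250):
    means = rng.normal(6.0, 0.3, size=(reps, n)).mean(axis=1)
    print(f"n={n:3d}: mean of X-bar = {means.mean():.3f}, "
          f"sd of X-bar = {means.std():.4f} (theory: {0.3 / np.sqrt(n):.4f})")
```

The standard deviation of the simulated means tracks σ/√n, which is why the densities in the figure grow taller and narrower as n increases.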
What are the desirable characteristics of an estimator?

Finite-sample properties:

Unbiasedness: If we drew infinitely many samples and computed an estimate from each, the average of all these estimates would equal the true value of the parameter. Formally: E(θ̂) = θ.

Efficiency: Among unbiased estimators, it is the one with the lowest variance.

Asymptotic properties:

Consistency: The probability that the estimator is close to the true value of the parameter approaches 1 as the sample size grows. Formally: lim_{n→∞} θ̂ = θ (in probability).
Case study:

You are interested in the mean height of American men aged 20 to 55 (μ). You take a random sample of men and measure their heights: X1, X2, ..., Xn. The Xi's are i.i.d. with E(Xi) = E(X) = μ and Var(Xi) = Var(X) = σ².

You consider two alternative estimators of μ:

μ̂1 is the first observation that you sample: μ̂1 = X1
μ̂2 is the sample mean: μ̂2 = X̄

Finite-sample properties

1. Unbiasedness

E(μ̂1) = E(X1) = E(X) = μ

E(μ̂2) = E(X̄) = E((1/n) Σi Xi) = (1/n) Σi E(Xi) = (1/n) · nμ = μ

Both μ̂1 and μ̂2 are unbiased estimators of μ.

2. Efficiency
Var(μ̂1) = Var(X1) = Var(X) = σ²

Var(μ̂2) = Var(X̄) = Var((1/n) Σi Xi) = (1/n²) Σi Var(Xi) = (1/n²) · nσ² = σ²/n

As n increases, μ̂2 becomes more precise, while μ̂1 does not. It can be shown that μ̂2 (i.e., X̄) is the MVLUE (Minimum Variance Linear Unbiased Estimator), also known as the BLUE (Best Linear Unbiased Estimator).
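The variance comparison can be illustrated the same way: Var(μ̂1) stays at σ² while Var(μ̂2) shrinks like σ²/n. Again, μ = 70 and σ = 4 are hypothetical illustration values:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, reps = 70.0, 4.0, 100_000  # hypothetical height parameters

for n in (10, 100, 1000):
    samples = rng.normal(mu, sigma, size=(reps, n))
    var1 = samples[:, 0].var()         # Var(mu1_hat): stays near sigma^2 = 16
    var2 = samples.mean(axis=1).var()  # Var(mu2_hat): shrinks like sigma^2 / n
    print(f"n={n:5d}: Var(mu1_hat)={var1:6.2f}, "
          f"Var(mu2_hat)={var2:7.4f}, sigma^2/n={sigma**2 / n:7.4f}")
```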
3. Finite-sample distribution

It is convenient to also know the whole distribution of the estimator, as this allows us to do statistical inference.

If Xi ∼ N(μ, σ²), then the exact distributions of μ̂1 and μ̂2 are the following:

μ̂1 ∼ N(μ, σ²)
μ̂2 ∼ N(μ, σ²/n)

If Xi is not normally distributed, μ̂1 will have the same (finite-sample) distribution as Xi. In general, the finite-sample distribution of μ̂2 will have a complicated form.
Large-sample or asymptotic properties

1. Consistency

The Law of Large Numbers states that, under general conditions, X̄ will be close to μ with very high probability when n is large.

The Law of Large Numbers implies that μ̂2 is consistent:

lim_{n→∞} μ̂2 = μ (in probability)

μ̂1, however, is not consistent:

lim_{n→∞} μ̂1 ≠ μ
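Consistency can be visualized by estimating P(|μ̂ − μ| < ε) at several sample sizes. The tolerance ε = 0.5 and the height parameters below are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, reps = 70.0, 4.0, 50_000  # hypothetical height parameters
eps = 0.5  # "close to mu" means within half an inch here

for n in (1, 10, 100, 1000):
    samples = rng.normal(mu, sigma, size=(reps, n))
    # Fraction of simulated samples where each estimator lands within eps of mu
    p1 = (np.abs(samples[:, 0] - mu) < eps).mean()
    p2 = (np.abs(samples.mean(axis=1) - mu) < eps).mean()
    print(f"n={n:5d}: P(|mu1_hat - mu| < {eps}) = {p1:.3f}, "
          f"P(|mu2_hat - mu| < {eps}) = {p2:.3f}")
```

The probability for μ̂2 climbs toward 1 as n grows, while the probability for μ̂1 does not change with n, matching the definitions above.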
Figure: Sampling distribution of μ̂1 for different sample sizes when X is Bernoulli, P(X = 1) = 0.5, P(X = 0) = 0.5 (coin toss). [Panels: (a) n=1, (b) n=250.]

Figure: Sampling distribution of μ̂2 for different sample sizes when X is Bernoulli, P(X = 1) = 0.5, P(X = 0) = 0.5 (coin toss). [Panels: (a) n=1, (b) n=5, (c) n=25, (d) n=250.]
2. Asymptotic distribution

The Central Limit Theorem says that if X1, X2, ..., Xn constitute a random sample from a population with mean μ and variance σ², then, under certain regularity conditions:

μ̂2 ∼ᴬ N(μ, σ²/n)

That is, sample means are approximately normal when n is large (generally n ≥ 30).

We can then standardize μ̂2 (= X̄):

Z = (μ̂2 − μ) / √(σ²/n) ∼ᴬ N(0, 1)
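The CLT approximation for the coin-toss example can be checked by simulation: with p = 0.5 and n = 250, roughly 95% of standardized sample means should fall in (−1.96, 1.96) if Z is approximately N(0, 1). A sketch (seed and repetition count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n, reps = 0.5, 250, 100_000
mu, var = p, p * (1 - p)  # Bernoulli: mean p, variance p(1 - p)

tosses = rng.binomial(1, p, size=(reps, n))
z = (tosses.mean(axis=1) - mu) / np.sqrt(var / n)

# Under the N(0, 1) approximation, about 95% of the z's should
# land inside (-1.96, 1.96).
coverage = (np.abs(z) < 1.96).mean()
print(f"fraction of |z| < 1.96: {coverage:.3f}")
```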
Figure: Sampling distribution of μ̂2 for n = 250 when X is Bernoulli, P(X = 1) = 0.5, P(X = 0) = 0.5 (coin toss). [Two panels, both n = 250.]
Remember: The CLT guarantees asymptotic normality for any appropriately standardized sample average. In other words, the sample average of any function of the observations will be asymptotically normally distributed.

Example: Let Yi = Xi² and Ȳ = (1/n) Σi Yi. Then:

Z = (Ȳ − E(Ȳ)) / √(Var(Ȳ)) ∼ᴬ N(0, 1)
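As a concrete check of this claim, one can take Xi standard normal, so that Yi = Xi² is chi-squared with 1 degree of freedom (E(Y) = 1, Var(Y) = 2); the choice of distribution here is an illustrative assumption, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 1000, 50_000

# X ~ N(0, 1), so Y = X^2 is chi-squared(1): E(Y) = 1, Var(Y) = 2.
x = rng.normal(0.0, 1.0, size=(reps, n))
ybar = (x ** 2).mean(axis=1)

# Standardize Y-bar using E(Y-bar) = 1 and Var(Y-bar) = 2 / n.
z = (ybar - 1.0) / np.sqrt(2.0 / n)
coverage = (np.abs(z) < 1.96).mean()
print(f"fraction of |z| < 1.96: {coverage:.3f}")
```

Even though Y itself is heavily skewed, the standardized average is close to N(0, 1) at this sample size, as the CLT promises.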