# lecture_2_slides: Review of Statistics (SW Chapter 3)


**1. Review of Statistics (SW Chapter 3)**

1. The probability framework for statistical inference (Lecture 1)
2. Estimation
3. Testing
4. Confidence intervals

**2. Estimation**

$\bar{Y}$ is the natural estimator of the mean. But:

(a) What are the properties of $\bar{Y}$?
(b) Why should we use $\bar{Y}$ rather than some other estimator, such as:

- $Y_1$ (the first observation)
- a weighted average with unequal weights (not the simple average)
- $\mathrm{median}(Y_1, \ldots, Y_n)$?

The starting point is the sampling distribution of $\bar{Y}$…

**3. (a) The sampling distribution of $\bar{Y}$**

$\bar{Y}$ is a random variable, and its properties are determined by the sampling distribution of $\bar{Y}$:

- The individuals in the sample are drawn at random.
- Thus the values of $(Y_1, \ldots, Y_n)$ are random.
- Thus functions of $(Y_1, \ldots, Y_n)$, such as $\bar{Y}$, are random: had a different sample been drawn, they would have taken on a different value.
- The distribution of $\bar{Y}$ over different possible samples of size $n$ is called the sampling distribution of $\bar{Y}$.
- The mean and variance of $\bar{Y}$ are the mean and variance of its sampling distribution, $E(\bar{Y})$ and $\mathrm{var}(\bar{Y})$.
- The concept of the sampling distribution underpins all of econometrics.

**4. The sampling distribution of $\bar{Y}$, ctd.**

Example: Suppose $Y$ takes on 0 or 1 (a Bernoulli random variable) with the probability distribution

$$\Pr[Y = 0] = .22, \qquad \Pr[Y = 1] = .78$$

Then

$$E(Y) = p \times 1 + (1 - p) \times 0 = p = .78$$

$$\sigma_Y^2 = E[Y - E(Y)]^2 = p(1 - p) \text{ [remember this?]} = .78 \times (1 - .78) = 0.1716$$

The sampling distribution of $\bar{Y}$ depends on $n$. Consider $n = 2$. The sampling distribution of $\bar{Y}$ is

$$\Pr(\bar{Y} = 0) = .22^2 = .0484$$

$$\Pr(\bar{Y} = \tfrac{1}{2}) = 2 \times .22 \times .78 = .3432$$

$$\Pr(\bar{Y} = 1) = .78^2 = .6084$$

**5. The sampling distribution of $\bar{Y}$ when $Y$ is Bernoulli ($p = .78$)**

[Figure not reproduced in this preview.]

**6. Things we want to know about the sampling distribution**

- What is the mean of $\bar{Y}$?
  - If $E(\bar{Y}) = \mu_Y = .78$ (the true mean), then $\bar{Y}$ is an unbiased estimator of $\mu$.
- What is the variance of $\bar{Y}$?
  - How does $\mathrm{var}(\bar{Y})$ depend on $n$? (See below for the formula.)
- Does $\bar{Y}$ become close to $\mu$ when $n$ is large?
  - Law of large numbers: $\bar{Y}$ is a consistent estimator of $\mu$.
- $\bar{Y} - \mu$ appears bell-shaped for $n$ large… is this generally true?
  - In fact, $\bar{Y} - \mu$ is approximately normally distributed for $n$ large (Central Limit Theorem).

**7. The mean and variance of the sampling distribution of $\bar{Y}$**

General case, that is, for $Y_i$ i.i.d. from any distribution, not just Bernoulli:

Mean:

$$E(\bar{Y}) = E\left(\frac{1}{n}\sum_{i=1}^{n} Y_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(Y_i) = \frac{1}{n}\sum_{i=1}^{n} \mu_Y = \mu_Y$$

Variance:

$$\mathrm{var}(\bar{Y}) = E[\bar{Y} - E(\bar{Y})]^2 = E[\bar{Y} - \mu_Y]^2 = E\left[\frac{1}{n}\sum_{i=1}^{n} Y_i - \mu_Y\right]^2 = E\left[\frac{1}{n}\sum_{i=1}^{n}(Y_i - \mu_Y)\right]^2$$

**8.** So

$$
\begin{aligned}
\mathrm{var}(\bar{Y}) &= E\left[\frac{1}{n}\sum_{i=1}^{n}(Y_i - \mu_Y)\right]^2 \\
&= E\left\{\left[\frac{1}{n}\sum_{i=1}^{n}(Y_i - \mu_Y)\right]\left[\frac{1}{n}\sum_{j=1}^{n}(Y_j - \mu_Y)\right]\right\} \\
&= \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} E\big[(Y_i - \mu_Y)(Y_j - \mu_Y)\big] \\
&= \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} \mathrm{cov}(Y_i, Y_j)\;\ldots
\end{aligned}
$$
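The $n = 2$ Bernoulli example from slide 4 can be verified by brute-force enumeration of all possible samples. A minimal sketch (not part of the slides; $p = .78$ is taken from the example above):

```python
from itertools import product

# Bernoulli distribution from the slides: Pr[Y=0] = .22, Pr[Y=1] = .78
p = 0.78
pmf = {0: 1 - p, 1: p}

# Enumerate every possible sample of size n = 2 and accumulate
# the probability of each value of Ybar (the sample mean).
n = 2
sampling_dist = {}
for sample in product(pmf, repeat=n):
    ybar = sum(sample) / n
    prob = 1.0
    for y in sample:
        prob *= pmf[y]
    sampling_dist[ybar] = sampling_dist.get(ybar, 0.0) + prob

for ybar in sorted(sampling_dist):
    print(ybar, round(sampling_dist[ybar], 4))
# 0.0 0.0484
# 0.5 0.3432
# 1.0 0.6084
```

The same enumeration works for any $n$, though the number of samples grows as $2^n$, which is why the analytic formulas of slides 7 and 8 are preferable for large $n$.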
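For i.i.d. draws the cross terms in the double sum above vanish ($\mathrm{cov}(Y_i, Y_j) = 0$ for $i \neq j$), which gives the standard result $\mathrm{var}(\bar{Y}) = \sigma_Y^2 / n$; the preview cuts off just before that step, so treat it here as an assumption. A minimal Monte Carlo sketch (not from the slides; the sample size $n = 100$ and replication count are arbitrary choices) checking unbiasedness and this variance formula for the Bernoulli case:

```python
import random

random.seed(0)  # reproducible draws

# Bernoulli p from the slides; n and reps are illustrative choices.
p, n, reps = 0.78, 100, 20000

# Draw `reps` samples of size n and record the sample mean of each,
# approximating the sampling distribution of Ybar.
ybars = []
for _ in range(reps):
    sample = [1 if random.random() < p else 0 for _ in range(n)]
    ybars.append(sum(sample) / n)

mean_ybar = sum(ybars) / reps
var_ybar = sum((y - mean_ybar) ** 2 for y in ybars) / reps

print(round(mean_ybar, 3))  # close to mu_Y = 0.78 (unbiasedness / LLN)
print(round(var_ybar, 5))   # close to p*(1-p)/n = 0.001716
```

A histogram of `ybars` would also show the approximately bell-shaped distribution of $\bar{Y} - \mu$ that the Central Limit Theorem bullet describes.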