# Lecture 3


## Why Use $\bar{Y}$ to Estimate $\mu_Y$?

- $\bar{Y}$ is unbiased: $E(\bar{Y}) = \mu_Y$
- $\bar{Y}$ is consistent: $\bar{Y} \xrightarrow{p} \mu_Y$
- $\bar{Y}$ is the "least squares" estimator of $\mu_Y$: $\bar{Y}$ solves

$$\min_m \sum_{i=1}^{n} (Y_i - m)^2$$

so $\bar{Y}$ minimizes the sum of squared "residuals."

Optional derivation (also see App. 3.2):

$$\frac{d}{dm} \sum_{i=1}^{n} (Y_i - m)^2 = \sum_{i=1}^{n} \frac{d}{dm}(Y_i - m)^2 = -2\sum_{i=1}^{n} (Y_i - m)$$

Set the derivative to zero and denote the optimal value of $m$ by $\hat{m}$:

$$\sum_{i=1}^{n} Y_i = \sum_{i=1}^{n} \hat{m} = n\hat{m} \quad\text{or}\quad \hat{m} = \frac{1}{n}\sum_{i=1}^{n} Y_i = \bar{Y}$$

- $\bar{Y}$ has a smaller variance than all other linear unbiased estimators: consider the estimator $\hat{\mu}_Y = \frac{1}{n}\sum_{i=1}^{n} a_i Y_i$, where the weights $\{a_i\}$ are such that $\hat{\mu}_Y$ is unbiased; then $\mathrm{var}(\bar{Y}) \le \mathrm{var}(\hat{\mu}_Y)$.

## Hypothesis Testing

The hypothesis testing problem (for the mean): make a provisional decision, based on the evidence at hand, whether a null hypothesis is true, or instead that some alternative hypothesis is true. That is, test:

- $H_0: E(Y) = \mu_{Y,0}$ vs. $H_1: E(Y) > \mu_{Y,0}$ (1-sided, >)
- $H_0: E(Y) = \mu_{Y,0}$ vs. $H_1: E(Y) < \mu_{Y,0}$ (1-sided, <)
- $H_0: E(Y) = \mu_{Y,0}$ vs. $H_1: E(Y) \neq \mu_{Y,0}$ (2-sided)

$p$-value = probability of drawing a statistic (e.g. $\bar{Y}$) at least as extreme as the one that was actually computed with your data, assuming that the null hypothesis is true.
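The least-squares property of the sample mean can be checked numerically. A minimal Python sketch, using made-up data (the dataset and function names are illustrative, not from the lecture):

```python
# Check numerically that the sample mean minimizes the sum of squared
# residuals S(m) = sum_i (Y_i - m)^2.  Data are invented for illustration.
data = [2.0, 4.0, 7.0, 1.0, 6.0]

def sum_sq_resid(m, ys):
    """Sum of squared residuals of ys around a candidate value m."""
    return sum((y - m) ** 2 for y in ys)

ybar = sum(data) / len(data)  # the least-squares solution m-hat (= 4.0 here)

# S(m) evaluated at the mean is no larger than at any nearby candidate m.
candidates = [ybar + d for d in (-0.5, -0.1, 0.0, 0.1, 0.5)]
assert min(sum_sq_resid(m, data) for m in candidates) == sum_sq_resid(ybar, data)
```

Plotting $S(m)$ against $m$ would show a parabola with its minimum at $\bar{Y}$, which is exactly what setting the derivative to zero finds.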

Calculating the p-value based on $\bar{Y}$:

$$p\text{-value} = \Pr_{H_0}\!\left[\,|\bar{Y} - \mu_{Y,0}| > |\bar{Y}^{act} - \mu_{Y,0}|\,\right]$$

where $\bar{Y}^{act}$ is the value of $\bar{Y}$ actually observed (nonrandom). To compute the $p$-value, you need to know the sampling distribution of $\bar{Y}$, which is complicated if $n$ is small. If $n$ is large, you can use the normal approximation (CLT):

$$p\text{-value} = \Pr_{H_0}\!\left[\,|\bar{Y} - \mu_{Y,0}| > |\bar{Y}^{act} - \mu_{Y,0}|\,\right] = \Pr_{H_0}\!\left[\left|\frac{\bar{Y} - \mu_{Y,0}}{\sigma_Y/\sqrt{n}}\right| > \left|\frac{\bar{Y}^{act} - \mu_{Y,0}}{\sigma_Y/\sqrt{n}}\right|\right] = \Pr_{H_0}\!\left[\left|\frac{\bar{Y} - \mu_{Y,0}}{\sigma_{\bar{Y}}}\right| > \left|\frac{\bar{Y}^{act} - \mu_{Y,0}}{\sigma_{\bar{Y}}}\right|\right] \cong \text{probability under left+right } N(0,1) \text{ tails}$$

where $\sigma_{\bar{Y}}$ = std. dev. of the distribution of $\bar{Y}$ = $\sigma_Y/\sqrt{n}$.

Calculating the p-value with $\sigma_Y$ known: for large $n$, the $p$-value is the probability that a $N(0,1)$ random variable falls outside $|(\bar{Y}^{act} - \mu_{Y,0})/\sigma_{\bar{Y}}|$.

Estimator of the variance of $Y$: in practice, $\sigma_Y$ is unknown, so it must be estimated.
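The large-$n$ calculation above reduces to two $N(0,1)$ tail probabilities beyond the standardized statistic. A minimal Python sketch, assuming $\sigma_Y$ known; the sample numbers are invented for illustration, and the standard normal CDF is built from `math.erf` via $\Phi(t) = \tfrac{1}{2}(1 + \mathrm{erf}(t/\sqrt{2}))$:

```python
import math

def p_value_two_sided(ybar_act, mu0, sigma_y, n):
    """Two-sided p-value for H0: E(Y) = mu0, using the large-n
    normal (CLT) approximation with sigma_Y known."""
    se = sigma_y / math.sqrt(n)                       # sigma_Ybar = sigma_Y / sqrt(n)
    t = abs(ybar_act - mu0) / se                      # |standardized Ybar^act|
    phi = 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # N(0,1) CDF at t
    return 2.0 * (1.0 - phi)                          # left + right tails

# Illustrative numbers: Ybar^act = 52, mu_{Y,0} = 50, sigma_Y = 10, n = 100,
# so the standardized statistic is t = 2 and the p-value is about 0.0455.
p = p_value_two_sided(52.0, 50.0, 10.0, 100)
```

A p-value near 0.0455 means a draw this extreme would occur only about 4.6% of the time if $H_0$ were true.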
$$s_Y^2 = \frac{1}{n-1}\sum_{i=1}^{n} (Y_i - \bar{Y})^2 = \text{"sample variance of } Y\text{"}$$

Fact: if $(Y_1, \dots, Y_n)$ are i.i.d. and $E(Y^4) < \infty$, then $s_Y^2 \xrightarrow{p} \sigma_Y^2$. Why does the law of large numbers apply? Because $s_Y^2$ is essentially a sample average of the i.i.d. random variables $(Y_i - \mu_Y)^2$, which have finite variance when $E(Y^4) < \infty$.
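The $n-1$ denominator in $s_Y^2$ is easy to get wrong in code. A minimal Python sketch with made-up data, cross-checked against the standard library's `statistics.variance`, which uses the same $n-1$ formula:

```python
import statistics

def sample_variance(ys):
    """s_Y^2 = (1/(n-1)) * sum_i (Y_i - Ybar)^2, the sample variance."""
    n = len(ys)
    ybar = sum(ys) / n
    return sum((y - ybar) ** 2 for y in ys) / (n - 1)

data = [2.0, 4.0, 7.0, 1.0, 6.0]  # invented data for illustration
s2 = sample_variance(data)        # 26 / 4 = 6.5 for this dataset
assert s2 == statistics.variance(data)  # matches the stdlib n-1 formula
```

The square root $s_Y$ then replaces the unknown $\sigma_Y$ in the p-value calculation above, giving $SE(\bar{Y}) = s_Y/\sqrt{n}$.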

