
note6d: Weighted Least Squares


STAT5044: Regression and ANOVA (Inyoung Kim)

Outline
1. Weighted Least Squares
Weighted Least Squares

The only problem is nonconstant variance. Linear model with heterogeneity:

$$Y_i = \beta_0 + \beta_1 x_i + \sigma \delta_i \varepsilon_i$$

Dividing both sides by $\delta_i$ gives homoscedastic errors:

$$\frac{Y_i}{\delta_i} = \frac{\beta_0}{\delta_i} + \frac{\beta_1 x_i}{\delta_i} + \sigma \varepsilon_i$$

We want to minimize

$$\sum_i \left\{ \frac{y_i}{\delta_i} - \left( \frac{\beta_0}{\delta_i} + \frac{\beta_1 x_i}{\delta_i} \right) \right\}^2$$
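As a quick sanity check on this transformation, here is a small NumPy sketch. The intercept, slope, and the choice $\delta_i = x_i$ are illustrative assumptions (not from the notes): we generate data from the heteroscedastic model and verify that dividing through by $\delta_i$ reproduces the homoscedastic form exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(1.0, 10.0, n)
beta0, beta1, sigma = 2.0, 0.5, 1.0
delta = x                        # assumed: error sd grows with x
eps = rng.standard_normal(n)

# heteroscedastic model: Y_i = b0 + b1*x_i + sigma*delta_i*eps_i
y = beta0 + beta1 * x + sigma * delta * eps

# divide through by delta_i: the error term becomes sigma*eps_i (constant variance)
lhs = y / delta
rhs = beta0 / delta + beta1 * x / delta + sigma * eps
assert np.allclose(lhs, rhs)
```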

Weighted Least Squares

$$\min_{\beta_0, \beta_1} \sum_i \left\{ \frac{Y_i}{\delta_i} - \left( \frac{\beta_0}{\delta_i} + \frac{\beta_1 x_i}{\delta_i} \right) \right\}^2 = \sum_{i=1}^n \delta_i^{-2} \{ y_i - (\beta_0 + \beta_1 x_i) \}^2 = \sum_{i=1}^n w_i \{ y_i - (\beta_0 + \beta_1 x_i) \}^2,$$

where $w_i = c \times \delta_i^{-2}$ and $c$ is a positive constant. The best weight is proportional to the inverse of the variance. This is the so-called weighted least squares criterion.
Weighted Least Squares

New response:

$$Y_{\mathrm{new}} = \begin{pmatrix} w_1^{1/2} & & \\ & \ddots & \\ & & w_n^{1/2} \end{pmatrix} \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{pmatrix} = W^{1/2} Y$$

New mean:

$$W^{1/2} \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix} = W^{1/2} X \beta$$

We minimize $\| W^{1/2} y - W^{1/2} X \beta \|^2$.
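The transformed problem is just ordinary least squares on $W^{1/2}y$ against $W^{1/2}X$. A minimal NumPy sketch, where the data, the tiny fixed perturbation, and the weights $w_i = 1/x_i^2$ are all hypothetical choices for illustration:

```python
import numpy as np

n = 30
x = np.linspace(1.0, 5.0, n)
eps = np.sin(np.arange(n))           # fixed perturbation for reproducibility
y = 1.0 + 2.0 * x + 0.001 * x * eps  # near-linear data, sd growing with x (assumed)

X = np.column_stack([np.ones(n), x])
w = 1.0 / x**2                       # hypothetical weights, w_i proportional to 1/Var(Y_i)
W_half = np.diag(np.sqrt(w))

# ordinary least squares on the transformed data: minimizes |W^{1/2}y - W^{1/2}X b|^2
beta_hat, *_ = np.linalg.lstsq(W_half @ X, W_half @ y, rcond=None)
```

Since the perturbation is tiny, `beta_hat` lands very close to the true coefficients (1, 2).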

Weighted Least Squares

$$\hat\beta = \left( (W^{1/2}X)^t (W^{1/2}X) \right)^{-1} (W^{1/2}X)^t W^{1/2} y = \left( X^t (W^{1/2})^t W^{1/2} X \right)^{-1} X^t W^{1/2} W^{1/2} y = (X^t W X)^{-1} X^t W y$$

This is a linear form of $y \sim N(X\beta, \sigma^2 W^{-1})$, so

$$\hat\beta \sim N\left( \beta, (X^t W X)^{-1} \sigma^2 \right)$$
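The closed form $(X^t W X)^{-1} X^t W y$ can be checked against the transformed-data least squares fit from the previous slide. A sketch under the same assumed setup (data and weights are again illustrative, not from the notes):

```python
import numpy as np

n = 40
x = np.linspace(1.0, 8.0, n)
eps = np.cos(np.arange(n))           # fixed "noise" for a reproducible check
y = 3.0 + 0.7 * x + 0.01 * x * eps   # assumed: variance grows with x

X = np.column_stack([np.ones(n), x])
W = np.diag(1.0 / x**2)              # w_i proportional to 1/Var(Y_i)

# closed form: beta_hat = (X^t W X)^{-1} X^t W y
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# same answer as OLS on the W^{1/2}-transformed data
W_half = np.sqrt(W)
beta_check, *_ = np.linalg.lstsq(W_half @ X, W_half @ y, rcond=None)
assert np.allclose(beta_hat, beta_check)
```

Using `np.linalg.solve` on the normal equations avoids forming the explicit inverse, which is the usual numerically preferable choice.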
Weighted Least Squares

$$\hat\sigma^2 = \frac{\sum_{i=1}^n \delta_i^{-2} (y_i - \hat y_i)^2}{n - p}$$


Weighted Least Squares. Recall:

$$Y_i = \beta_0 + \beta_1 x_i + \sigma \delta_i \varepsilon_i, \qquad \sigma \delta_i \propto w_i^{-1/2}$$

The raw residual $r_i = y_i - (\hat\beta_0 + \hat\beta_1 x_i)$ behaves like $\sigma \delta_i \varepsilon_i$, so $r_i^2$ behaves like $\sigma^2 \delta_i^2 \varepsilon_i^2$, and hence $r_i^2 / \delta_i^2$ behaves like $\sigma^2 \varepsilon_i^2$.

Remark 1: Use WLS when unequal variance (heteroscedasticity) is the only problem.
Remark 2: Give larger weight to the points with smaller variance. As a result, one gets better efficiency in $\hat\beta$ and better power in testing.
Remark 3: Diagnostic plots should be performed on the weighted residuals, $w_i^{1/2} r_i$.
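Remarks 1 and 3 can be tied together in a short sketch: fit WLS, form the weighted residuals $w_i^{1/2} r_i$ for diagnostics, and compute $\hat\sigma^2$ from the weighted residual sum of squares. The data-generating choices (trend, $\delta_i = x_i$, noise pattern) are again illustrative assumptions.

```python
import numpy as np

n, p = 40, 2
x = np.linspace(1.0, 8.0, n)
eps = np.sin(np.arange(n))
delta = x                            # assumed sd multiplier
y = 3.0 + 0.7 * x + 0.5 * delta * eps

X = np.column_stack([np.ones(n), x])
w = 1.0 / delta**2                   # w_i = delta_i^{-2}
W = np.diag(w)
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

r = y - X @ beta_hat                 # raw residuals: sd still grows like delta_i
wr = np.sqrt(w) * r                  # weighted residuals: use these in diagnostic plots

# sigma^2 estimate from the weighted residual sum of squares, df = n - p
sigma2_hat = np.sum(w * r**2) / (n - p)
```

Plotting `wr` (rather than `r`) against `x` or against the fitted values is what Remark 3 recommends, since the weighting removes the systematic fan shape.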