…and then use OLS
o Estimating the transformed equation by OLS is an example of generalized least squares (GLS)

VER. 10/23/2012. © P. KOLM

Recall: Classical Linear Regression Assumptions (Multivariable Case)
• Population model is linear in parameters:
  y = β0 + β1x1 + β2x2 + … + βk xk + u [MLR.1]
• {(xi1, xi2, …, xik, yi) : i = 1, 2, …, n} is a random sample from the population model, so that
  yi = β0 + β1xi1 + β2xi2 + … + βk xik + ui [MLR.2]
• E(u | x1, x2, …, xk) = 0, implying that all of the explanatory variables are uncorrelated with the error [MLR.3]
• None of the x's is constant, and there are no exact linear relationships among them [MLR.4]
• Homoscedasticity: Assume Var(u | x1, x2, …, xk) = σ2 [MLR.5]
• Normality: u ∼ N(0, σ2) [MLR.6] (needed for hypothesis testing, etc.)
→ MLR.1–MLR.5 are known as the Gauss-Markov assumptions
→ MLR.1–MLR.6 are called the classical linear model assumptions (CLM)

The Standard Case: Heteroscedasticity Is Known up to a Multiplicative Constant

Assume MLR.1–MLR.4 are valid, but Var(ui | xi1, …, xik) = σ2 h(xi1, …, xik)
→ MLR.5 is violated
→ The trick is to figure out what h(xi1, …, xik) ≡ hi (the "heteroscedasticity function") looks like

If we know hi, we define ui* = ui/√hi and see that Var(ui* | xi1, …, xik) = σ2, because Var(ui | xi1, …, xik) = σ2 hi
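The variance claim above is easy to verify by simulation. A minimal sketch in Python/NumPy, assuming an illustrative heteroscedasticity function h(xi) = xi (any known positive function of the regressors would do):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sigma2 = 4.0

# Illustrative assumption: the heteroscedasticity function is h(x_i) = x_i
x = rng.uniform(1.0, 10.0, size=n)
h = x

# Heteroscedastic errors: Var(u_i | x_i) = sigma^2 * h_i
u = rng.normal(0.0, np.sqrt(sigma2 * h))

# Transformed errors: u_i* = u_i / sqrt(h_i) should have constant variance sigma^2
u_star = u / np.sqrt(h)

print(np.var(u))       # inflated by the average of h_i
print(np.var(u_star))  # close to sigma^2 = 4
```

The sample variance of the raw errors reflects σ2 times the average of hi, while the transformed errors come out homoscedastic with variance σ2.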
• So, if we divided our whole model by √hi we would have a model where the error is homoscedastic!

Original model:
  yi = β0 + β1xi1 + … + βk xik + ui with Var(ui) = σ2 hi

Transformed model:
  yi/√hi = β0(1/√hi) + β1(xi1/√hi) + … + βk(xik/√hi) + ui/√hi
or
  yi* = β0 xi0* + β1 xi1* + … + βk xik* + ui*
where xi0* = 1/√hi, xij* = xij/√hi etc., and Var(ui*) = σ2

The transformed model satisfies the Gauss-Markov assumptions (MLR.1–MLR.5)

Generalized Least Squares
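The transformation above can be sketched in a few lines of NumPy: divide every variable, including the intercept column, by √hi and run plain OLS on the result. The coefficient values and the choice h(xi1) = exp(xi1) here are illustrative assumptions, not from the original notes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
beta = np.array([1.0, 2.0, -0.5])   # illustrative beta_0, beta_1, beta_2

# Regressors and an assumed known heteroscedasticity function h_i = exp(x_i1)
x1 = rng.uniform(0.0, 2.0, size=n)
x2 = rng.normal(size=n)
h = np.exp(x1)

# Original model with Var(u_i | x_i) = sigma^2 * h_i  (sigma^2 = 2 here)
u = rng.normal(0.0, np.sqrt(2.0 * h))
y = beta[0] + beta[1] * x1 + beta[2] * x2 + u

# Transform every column (including the intercept) by 1/sqrt(h_i)
w = 1.0 / np.sqrt(h)
X_star = np.column_stack([w, x1 * w, x2 * w])   # x*_{i0}, x*_{i1}, x*_{i2}
y_star = y * w

# OLS on the transformed model is the GLS estimator
beta_hat = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
print(beta_hat)   # close to [1.0, 2.0, -0.5]
```

Note that after the transformation the "intercept" column is 1/√hi rather than a column of ones, which is why the model is written with an explicit xi0*.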
• GLS will be BLUE since the transformed model satisfies MLR.1–MLR.5
• GLS is a weighted least squares (WLS) procedure where each squared residual is weighted by the inverse of Var(ui | xi)

GLS and WLS

We can use OLS on the transformed model
  yi* = β0 xi0* + β1 xi1* + … + βk xik* + ui* where Var(ui*) = σ2
Minimizing the squared residuals, we obtain
  min over β̂0, β̂1, …, β̂k of  Σi=1..n (ûi*)2
  = min Σi=1..n (yi* − β̂0 xi0* − β̂1 xi1* − … − β̂k xik*)2
  = min Σi=1..n (yi/√hi − β̂0(1/√hi) − β̂1(xi1/√hi) − … − β̂k(xik/√hi))2
  = min Σi=1..n (1/hi)(yi − β̂0 − β̂1xi1 − … − β̂k xik)2

Remarks
• The last expression is just the usual OLS problem, except each observation is weighted by 1/hi
o This is called weighted least squares (WLS)
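The equivalence in the derivation above — WLS with weights 1/hi versus OLS on the data divided through by √hi — can be checked numerically. A short sketch, with an assumed h(xi) = xi2 chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000
x = rng.uniform(1.0, 5.0, size=n)
h = x ** 2                       # assumed known heteroscedasticity function
y = 3.0 + 1.5 * x + rng.normal(0.0, np.sqrt(h))

X = np.column_stack([np.ones(n), x])
w = 1.0 / h                      # WLS weights: inverse of Var(u_i | x_i) up to sigma^2

# (a) WLS via the weighted normal equations: (X'WX)^{-1} X'Wy
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# (b) OLS on the variables divided through by sqrt(h_i)
s = np.sqrt(w)
beta_ols_star = np.linalg.lstsq(X * s[:, None], y * s, rcond=None)[0]

print(beta_wls)
print(beta_ols_star)   # same as beta_wls up to floating-point error
```

Forming the full n × n weight matrix is wasteful for large n (scaling the rows of X and y directly, as in (b), is the standard trick), but it makes the normal-equations form of the estimator explicit.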
This document was uploaded on 02/17/2014 for the course COURANT G63.2751.0 at NYU.