
# Lecture 12 Properties of OLS in the multiple regression model


Economics 326: Methods of Empirical Research in Economics
Lecture 12: Properties of OLS in the multiple regression model
Vadim Marmer, University of British Columbia
March 3, 2009

## Multiple regression and OLS

- Consider the multiple regression model with $k$ regressors:
$$Y_i = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \ldots + \beta_k X_{k,i} + U_i.$$
- Let $\hat\beta_0, \hat\beta_1, \ldots, \hat\beta_k$ be the OLS estimators: if
$$\hat U_i = Y_i - \hat\beta_0 - \hat\beta_1 X_{1,i} - \hat\beta_2 X_{2,i} - \ldots - \hat\beta_k X_{k,i},$$
then
$$\sum_{i=1}^n \hat U_i = \sum_{i=1}^n X_{1,i} \hat U_i = \ldots = \sum_{i=1}^n X_{k,i} \hat U_i = 0.$$
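The first-order conditions above can be checked numerically. The sketch below (using numpy with simulated, illustrative data; the coefficient values and sample size are assumptions, not from the lecture) fits OLS for $k = 2$ regressors and verifies that the residuals sum to zero and are orthogonal to every regressor.

```python
import numpy as np

# Simulated data for a model with k = 2 regressors (illustrative values).
rng = np.random.default_rng(0)
n = 100
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
U = rng.normal(size=n)
Y = 1.0 + 2.0 * X1 - 0.5 * X2 + U

# Design matrix with a constant column; OLS via least squares.
X = np.column_stack([np.ones(n), X1, X2])
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The fitted residuals satisfy the first-order conditions:
# sum(U_hat) = sum(X1 * U_hat) = sum(X2 * U_hat) = 0 (up to rounding).
U_hat = Y - X @ beta_hat
print(np.allclose(X.T @ U_hat, 0.0))  # prints True
```

Note that `X.T @ U_hat` stacks all $k + 1$ orthogonality conditions into one vector, so a single `allclose` check covers the constant and every regressor at once.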
## Multiple regression and OLS

- As in Lecture 10, we can write $\hat\beta_1$ as
$$\hat\beta_1 = \frac{\sum_{i=1}^n \tilde X_{1,i} Y_i}{\sum_{i=1}^n \tilde X_{1,i}^2},$$
where
  - $\tilde X_{1,i}$ are the fitted OLS residuals: $\tilde X_{1,i} = X_{1,i} - \hat\gamma_0 - \hat\gamma_2 X_{2,i} - \ldots - \hat\gamma_k X_{k,i}$.
  - $\hat\gamma_0, \hat\gamma_2, \ldots, \hat\gamma_k$ are the OLS coefficients: $\sum_{i=1}^n \tilde X_{1,i} = \sum_{i=1}^n \tilde X_{1,i} X_{2,i} = \ldots = \sum_{i=1}^n \tilde X_{1,i} X_{k,i} = 0$.
- Similarly, we can write $\hat\beta_2$ as
$$\hat\beta_2 = \frac{\sum_{i=1}^n \tilde X_{2,i} Y_i}{\sum_{i=1}^n \tilde X_{2,i}^2},$$
where
  - $\tilde X_{2,i}$ are the fitted OLS residuals: $\tilde X_{2,i} = X_{2,i} - \hat\delta_0 - \hat\delta_1 X_{1,i} - \hat\delta_3 X_{3,i} - \ldots - \hat\delta_k X_{k,i}$.
  - $\hat\delta_0, \hat\delta_1, \hat\delta_3, \ldots, \hat\delta_k$ are the OLS coefficients: $\sum_{i=1}^n \tilde X_{2,i} = \sum_{i=1}^n \tilde X_{2,i} X_{1,i} = \sum_{i=1}^n \tilde X_{2,i} X_{3,i} = \ldots = \sum_{i=1}^n \tilde X_{2,i} X_{k,i} = 0$.
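This partialling-out representation (the Frisch-Waugh-Lovell result) can be verified numerically. A minimal sketch, again on simulated data with assumed coefficient values: the slope on $X_1$ from the full multiple regression equals the ratio formula built from the residuals of $X_1$ on the other regressors.

```python
import numpy as np

# Simulated data with k = 3 regressors (illustrative values).
rng = np.random.default_rng(1)
n = 200
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
X3 = rng.normal(size=n)
Y = 0.5 + 1.5 * X1 - 1.0 * X2 + 0.3 * X3 + rng.normal(size=n)

# Full regression of Y on a constant and X1, X2, X3.
X = np.column_stack([np.ones(n), X1, X2, X3])
beta_full = np.linalg.lstsq(X, Y, rcond=None)[0]

# Partialling out: residuals of X1 from a regression on a constant, X2, X3.
Z = np.column_stack([np.ones(n), X2, X3])
gamma_hat = np.linalg.lstsq(Z, X1, rcond=None)[0]
X1_tilde = X1 - Z @ gamma_hat

# Ratio formula from the slide: sum(X1_tilde * Y) / sum(X1_tilde^2).
beta1_fwl = (X1_tilde @ Y) / (X1_tilde @ X1_tilde)
print(np.isclose(beta_full[1], beta1_fwl))  # prints True
```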

## The OLS estimators are linear

- Consider $\hat\beta_1$:
$$\hat\beta_1 = \frac{\sum_{i=1}^n \tilde X_{1,i} Y_i}{\sum_{i=1}^n \tilde X_{1,i}^2} = \sum_{i=1}^n \frac{\tilde X_{1,i}}{\sum_{l=1}^n \tilde X_{1,l}^2} Y_i = \sum_{i=1}^n w_{1,i} Y_i,$$
where
$$w_{1,i} = \frac{\tilde X_{1,i}}{\sum_{l=1}^n \tilde X_{1,l}^2}.$$
- Recall that $\tilde X_1$ are the residuals from a regression of $X_1$ against $X_2, \ldots, X_k$ and a constant, and therefore $w_{1,i}$ depends only on the $X$'s.
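The weights $w_{1,i}$ can be constructed explicitly. The sketch below (simulated data, assumed values) builds $w_{1,i}$ from the $X$'s alone, with no reference to $Y$, and confirms that $\sum_i w_{1,i} Y_i$ reproduces the multiple-regression slope on $X_1$.

```python
import numpy as np

# Simulated data with k = 2 regressors (illustrative values).
rng = np.random.default_rng(2)
n = 150
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 2.0 + 1.0 * X1 + 0.5 * X2 + rng.normal(size=n)

# Residualize X1 on a constant and X2; the weights use only the X's.
Z = np.column_stack([np.ones(n), X2])
X1_tilde = X1 - Z @ np.linalg.lstsq(Z, X1, rcond=None)[0]
w1 = X1_tilde / (X1_tilde @ X1_tilde)

# Linear-in-Y form of the estimator: beta1_hat = sum_i w_{1,i} * Y_i.
beta1_linear = w1 @ Y

# Compare with the slope from the full regression of Y on (1, X2, X1).
beta_full = np.linalg.lstsq(np.column_stack([Z, X1]), Y, rcond=None)[0]
print(np.isclose(beta1_linear, beta_full[-1]))  # prints True
```

Writing the estimator as $\sum_i w_{1,i} Y_i$ with weights fixed by the $X$'s is exactly what makes the conditional-expectation argument on the next slide go through.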
## Unbiasedness

- Suppose that
  1. $Y_i = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \ldots + \beta_k X_{k,i} + U_i$.
  2. Conditional on the $X$'s, $E(U_i) = 0$ for all $i$.
- Conditioning on the $X$'s means that we condition on $X_{1,1}, \ldots, X_{1,n}, X_{2,1}, \ldots, X_{2,n}, \ldots, X_{k,1}, \ldots, X_{k,n}$:
$$E(U_i \mid X_{1,1}, \ldots, X_{1,n}, X_{2,1}, \ldots, X_{2,n}, \ldots, X_{k,1}, \ldots, X_{k,n}) = 0.$$
- Under the above assumptions:
$$E\hat\beta_0 = \beta_0, \quad E\hat\beta_1 = \beta_1, \quad \ldots, \quad E\hat\beta_k = \beta_k.$$
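Unbiasedness is a statement about the average of $\hat\beta$ over repeated samples, which a small Monte Carlo experiment can illustrate. In the sketch below (sample size, replication count, and coefficients are assumptions chosen for illustration), the regressors are held fixed across replications, mirroring the conditioning on the $X$'s, while fresh errors with $E(U \mid X) = 0$ are drawn each time; the average of $\hat\beta_1$ across replications lands close to the true $\beta_1$.

```python
import numpy as np

# Monte Carlo check of unbiasedness (illustrative sizes and coefficients).
rng = np.random.default_rng(3)
n, reps = 200, 2000
beta = np.array([1.0, 2.0, -0.5])  # true (beta0, beta1, beta2)

# Hold the regressors fixed across replications: conditioning on the X's.
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])

estimates = np.empty(reps)
for r in range(reps):
    U = rng.normal(size=n)  # E(U | X) = 0 by construction
    Y = X @ beta + U
    estimates[r] = np.linalg.lstsq(X, Y, rcond=None)[0][1]

# The Monte Carlo average of beta1_hat should be close to the true beta1.
print(abs(estimates.mean() - beta[1]) < 0.02)  # prints True
```

Each individual $\hat\beta_1$ differs from $\beta_1$ because of the drawn errors; it is only the average over replications that settles near the truth, which is what $E\hat\beta_1 = \beta_1$ asserts.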

