LN2 Mechanics and Interpretation of OLS

Lecture Note 2: Mechanics and Interpretation of OLS
Empirical Methods II (API202A), Spring 2009, Harvard Kennedy School

1 How Do We Compute the Partial Association We Observe in the Data, i.e. the $\hat{\beta}$s?

Recall our SEQ:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_K x_K + \mu$$

• For example, K = 1: Test Scores $= \beta_0 + \beta_1\,$Student-Teacher Ratio $+\, \mu$
• For example, K = 2: Salary $= \beta_0 + \beta_1\,$Years of Schooling $+\, \beta_2\,$Experience $+\, \mu$

Goal of regression analysis: infer the value of the $\beta$ coefficients based on the partial association that we observe between our variables in the sample. The partial association that we observe between variables is represented by our SR:

$$y = \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2 + \dots + \hat\beta_K x_K + \hat\mu$$

• For example, K = 1: Test Scores $= \hat\beta_0 + \hat\beta_1\,$Student-Teacher Ratio $+\, \hat\mu$
• For example, K = 2: Salary $= \hat\beta_0 + \hat\beta_1\,$Years of Schooling $+\, \hat\beta_2\,$Experience $+\, \hat\mu$

• We use the OLS method to compute the partial association, i.e. the $\hat\beta$s.
• In LN4 we will study under what conditions we can infer a causal effect from the partial association that we observe between variables.

1.1 Computing the $\hat\beta$s by OLS

Definitions:

Fitted value $\hat y$: the part of $y$ that can be associated with the RHS variables in the SR. For K = 2:

$$\hat y = \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2$$

Thus $y = \hat y + \hat\mu$. Therefore, our sample errors $\hat\mu$ represent the part of $y$ that cannot be associated with any RHS variable:

$$\hat\mu = y - \hat y$$

What does OLS do? It finds the values of the $\hat\beta$s that minimize the sample errors $\hat\mu$. How? By minimizing what is called the sum of squared residuals (SSR):

$$\mathrm{SSR} = \sum_{i=1}^{N} \hat\mu_i^2 = \sum_{i=1}^{N} (y_i - \hat y_i)^2 = \hat\mu'\hat\mu = (Y - \hat Y)'(Y - \hat Y) \quad \text{(matrix notation)}$$

- Intuition for why we minimize the sample errors: extract as much information as possible from our explanatory variables.
- Why squared? Squaring keeps positive and negative errors from canceling out and gives a smooth objective that can be minimized with calculus.

Notation: we can write the SR either way (the notations are equivalent):(1)

$$y = \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2 + \hat\mu$$
$$y_i = \hat\beta_0 + \hat\beta_1 x_{1,i} + \hat\beta_2 x_{2,i} + \hat\mu_i$$

(1) The subindex $i$ denotes a single representative observation out of the $N$ observations in the sample.

How does OLS find the $\hat\beta$s that minimize the SSR? With optimization theory (calculus/first-order conditions).

For K = 1:

$$\min_{\hat\beta_0,\, \hat\beta_1} \mathrm{SSR} = \sum_{i=1}^{N} \hat\mu_i^2 = \sum_{i=1}^{N} \left( y_i - \hat\beta_0 - x_{1,i}\hat\beta_1 \right)^2$$

First-order conditions (FOC):

$$\frac{\partial\,\mathrm{SSR}}{\partial\hat\beta_0} = -2 \sum_{i=1}^{N} \left( y_i - \hat\beta_0 - x_{1,i}\hat\beta_1 \right) = 0$$

$$\frac{\partial\,\mathrm{SSR}}{\partial\hat\beta_1} = -2 \sum_{i=1}^{N} \left( y_i - \hat\beta_0 - x_{1,i}\hat\beta_1 \right) x_{1,i} = 0$$

Two equations and two unknowns:

$$\hat\beta_1 = \frac{\sum_{i=1}^{N} (x_{1,i} - \bar x_1)(y_i - \bar y)}{\sum_{i=1}^{N} (x_{1,i} - \bar x_1)^2} \qquad \text{and} \qquad \hat\beta_0 = \bar y - \bar x_1 \hat\beta_1$$

For any K: the presentation is simpler in matrix notation because we keep track of only one FOC:

$$\min_{\hat\beta} \mathrm{SSR} = (Y - X\hat\beta)'(Y - X\hat\beta)$$

$$\frac{\partial\,\mathrm{SSR}}{\partial\hat\beta} = -2 X'(Y - X\hat\beta) = 0 \;\longrightarrow\; X'X\hat\beta = X'Y \;\longrightarrow\; \hat\beta = (X'X)^{-1}X'Y$$
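The closed-form K = 1 formulas above are easy to verify numerically. Below is a minimal sketch in Python (not part of the original note), using NumPy and synthetic illustrative data; the variable names and the simulated coefficients are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: one regressor (K = 1) and an outcome built from
# arbitrary "true" coefficients plus noise.
N = 100
x1 = rng.normal(size=N)
y = 2.0 + 0.5 * x1 + rng.normal(size=N)

x_bar, y_bar = x1.mean(), y.mean()

# Slope: sum of cross-deviations over the sum of squared deviations of x1.
b1_hat = np.sum((x1 - x_bar) * (y - y_bar)) / np.sum((x1 - x_bar) ** 2)
# Intercept: makes the fitted line pass through the point of means.
b0_hat = y_bar - x_bar * b1_hat

y_hat = b0_hat + b1_hat * x1   # fitted values
u_hat = y - y_hat              # sample errors (residuals)
SSR = np.sum(u_hat ** 2)       # the objective that OLS minimizes

# The residuals satisfy both first-order conditions (up to rounding):
assert abs(u_hat.sum()) < 1e-8         # FOC with respect to the intercept
assert abs((u_hat * x1).sum()) < 1e-8  # FOC with respect to the slope
print(f"b0_hat = {b0_hat:.3f}, b1_hat = {b1_hat:.3f}, SSR = {SSR:.3f}")
```

The two assertions check exactly the two FOCs derived above: OLS residuals sum to zero and are uncorrelated with the regressor in the sample.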
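For the general-K matrix formula $\hat\beta = (X'X)^{-1}X'Y$, a similar sketch (again with made-up data, not from the note) solves the normal equations directly instead of forming the inverse, which is the numerically standard choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data for K = 2: a constant column plus two regressors.
N, K = 200, 2
X = np.column_stack([np.ones(N), rng.normal(size=(N, K))])
beta_true = np.array([1.0, 0.5, -0.3])   # arbitrary coefficients for the demo
Y = X @ beta_true + rng.normal(size=N)

# Solve the normal equations X'X beta = X'Y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# The single matrix FOC X'(Y - X beta_hat) = 0 holds up to rounding error.
assert np.allclose(X.T @ (Y - X @ beta_hat), 0.0, atol=1e-8)
print(beta_hat)   # estimates of (beta_0, beta_1, beta_2)
```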