1. LINEAR REGRESSION UNDER IDEAL CONDITIONS (I)

What do we learn in this section?

[1] Regression model.
• What is a "regression model"?

[2] (Strong) Assumptions.
• What assumptions are made for regression models?
• How should a sample be collected from the population?

[3] Ordinary Least Squares (OLS).
• This is the most popular estimation method for regression models.

[4] Goodness of Fit.
• Does your estimated regression model explain your sample well?

[5] Statistical Properties of the OLS Estimator.
• Is the OLS estimator unbiased and normally distributed?
What do we learn in this section? (continued)

[6] Efficiency.
• Is the OLS estimator the most reliable estimator?
• If so, in what sense?

[7] Testing Linear Hypotheses.
• How can we test hypotheses related to regression models?

[8] Forecasting.
• Can we use our regression results for forecasting?

[9] Weaker Assumptions.
• Does the OLS estimator have good properties under more realistic circumstances?
[1] What is a "Regression Model"?

• Interested in the average relation between income (y) and education (x).
• For the people with 12 years of schooling (x = 12), what is the average income, E(y | x = 12)?
• For the people with x years of schooling, what is the average income, E(y | x)?
• Regression model: y = E(y | x) + u, where u is an error term with E(u | x) = 0.
Warming-Up Probability Theory

(1) Bivariate Distributions

Consider two random variables (RVs), X and Y, with a joint probability density function (pdf):

  f(x, y) = Pr(X = x, Y = y).

Marginal (unconditional) pdf:

  f_x(x) = Σ_y f(x, y) = Pr(X = x), regardless of Y;
  f_y(y) = Σ_x f(x, y) = Pr(Y = y), regardless of X.

Conditional pdf:

  f(y | x) = Pr(Y = y, given X = x) = f(x, y) / f_x(x).
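To make these definitions concrete, here is a minimal Python sketch (not part of the original note) that computes marginal and conditional pdfs from a discrete joint pdf stored as a dictionary; the probabilities are taken from the income–consumption example worked out later in this note.

```python
# Minimal sketch: marginal and conditional pdfs from a discrete joint pdf.
# The probabilities come from the worked example later in this note
# (X = income, Y = consumption, both in units of $10,000).

joint = {                       # f(x, y) = Pr(X = x, Y = y)
    (4, 1): 1/2, (8, 1): 0,
    (4, 2): 1/4, (8, 2): 1/4,
}

def marginal_x(x):
    """f_x(x) = sum over y of f(x, y)."""
    return sum(p for (xv, _), p in joint.items() if xv == x)

def marginal_y(y):
    """f_y(y) = sum over x of f(x, y)."""
    return sum(p for (_, yv), p in joint.items() if yv == y)

def conditional_y_given_x(y, x):
    """f(y | x) = f(x, y) / f_x(x)."""
    return joint[(x, y)] / marginal_x(x)

print(marginal_x(4))                # 0.75      (= 3/4)
print(conditional_y_given_x(1, 4))  # 0.666...  (= 2/3)
```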
Conditional Mean and Variance:

X, Y: RVs with joint pdf f(x, y). (e.g., Y = income, X = education)

• Population of billions and billions: {(x^(1), y^(1)), ..., (x^(b), y^(b))}.
• Average of the y^(j): E(y).
• For the people with a specific education level x, what is the average of y?

  E(y | X = x) = Σ_y y f(y | x).

• Conditional variance:

  var(y | X = x) = E[(y − E(y | x))^2 | X = x] = Σ_y (y − E(y | x))^2 f(y | x).
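As a further illustration (again not from the original note), the same joint pdf can be used to compute a conditional mean and variance numerically; this is only a sketch of the two formulas above.

```python
# Minimal sketch: E(y | X = x) and var(y | X = x) for the discrete example
# used later in this note (X = income, Y = consumption, in units of $10,000).

joint = {
    (4, 1): 1/2, (8, 1): 0,
    (4, 2): 1/4, (8, 2): 1/4,
}
ys = {yv for (_, yv) in joint}      # support of Y

def f_y_given_x(y, x):
    fx = sum(p for (xv, _), p in joint.items() if xv == x)   # f_x(x)
    return joint[(x, y)] / fx

def cond_mean(x):
    """E(y | X = x) = sum over y of y * f(y | x)."""
    return sum(y * f_y_given_x(y, x) for y in ys)

def cond_var(x):
    """var(y | X = x) = sum over y of (y - E(y | x))^2 * f(y | x)."""
    m = cond_mean(x)
    return sum((y - m) ** 2 * f_y_given_x(y, x) for y in ys)

print(cond_mean(4), cond_var(4))   # 1.333... and 0.222...  (= 4/3 and 2/9)
```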
Regression model:

• Let u = y − E(y | x) (deviation from the conditional mean).
• Then y = E(y | x) + [y − E(y | x)] = E(y | x) + u (regression model).
• E(y | x) = the part of y explained by x.
• u = the unexplained part of y (called the disturbance term), with E(u | x) = 0.
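A small simulation can illustrate this decomposition, y = E(y | x) + u with E(u | x) = 0. The conditional mean function m(x) and the error distribution below are hypothetical choices made only for this sketch; they are not taken from the note.

```python
# Illustrative simulation (hypothetical parameters): y = E(y | x) + u, E(u | x) = 0.
import random

random.seed(0)

def m(x):
    """Assumed conditional mean E(y | x); coefficients are made up for illustration."""
    return 1 + 0.1 * x

data = []
for _ in range(100_000):
    x = random.choice([8, 12, 16])   # hypothetical education levels
    u = random.gauss(0, 1)           # disturbance with E(u | x) = 0
    y = m(x) + u                     # regression model
    data.append((x, y))

# Within each x group, the sample average of u = y - E(y | x) should be close to zero.
for x0 in (8, 12, 16):
    us = [y - m(x) for (x, y) in data if x == x0]
    print(x0, round(sum(us) / len(us), 3))
```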
EX:

• A population with X (income, in units of $10,000) and Y (consumption, in units of $10,000).
• Joint pdf f(x, y):

    Y \ X      4      8
      1       1/2     0
      2       1/4    1/4

• Graph for this population: [Figure: the population points and the regression line E(y | x), with x-axis values 4 and 8 and y-axis values 1, 4/3, and 2.]
• Marginal pdfs:

    Y \ X      4      8     f_y(y)
      1       1/2     0      1/2
      2       1/4    1/4     1/2
    f_x(x)    3/4    1/4

• Conditional probabilities f(y | x):

    Y \ X      4      8
      1       2/3     0
      2       1/3     1
• Conditional mean:

  E(y | x = 4) = Σ_y y f(y | x = 4)
               = 1 × f(y = 1 | x = 4) + 2 × f(y = 2 | x = 4)
               = 1 × (2/3) + 2 × (1/3)
               = 4/3.
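As a quick check of this arithmetic (illustrative, not part of the note), exact fractions reproduce the two conditional means, E(y | x = 4) = 4/3 and E(y | x = 8) = 2, which are the two points the regression line in the earlier graph connects.

```python
# Quick arithmetic check with exact fractions.
from fractions import Fraction as F

e_y_x4 = 1 * F(2, 3) + 2 * F(1, 3)   # E(y | x = 4), from the conditional table
e_y_x8 = 1 * F(0, 1) + 2 * F(1, 1)   # E(y | x = 8), from the conditional table
print(e_y_x4, e_y_x8)                # 4/3 2
```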