4/7/2010

Estimation - I

As before, we are going to use our random sample to learn about the population model. More specifically, we are going to use our sample to estimate the parameters β₀ and β₁. Why? Well, β₁, for example, measures the effect of changing X on Y, which is exactly what we are trying to learn.

Estimation - II

We are going to perform our estimation under 3 key assumptions:

A1) Corr(X_i, u_i) = Cov(X_i, u_i) = 0. This says that the errors u_i are uncorrelated with the regressors X_i.

A2) (X_i, Y_i) are i.i.d. (independent and identically distributed). This is the case if we have a random sample.

A3) (technical) X_i and Y_i have finite fourth moments, i.e. E[X_i⁴] < ∞ and E[Y_i⁴] < ∞. In practice, this means that large outliers, i.e. values of X_i and Y_i that are far outside the range of the data, are very unlikely.

Later, we are going to pay particular attention to Assumption A1 and the question of whether it holds. But for now, let's just assume that it does.

Estimation - III

The most common way of estimating the parameters β₀ and β₁ is called Ordinary Least Squares (OLS). The OLS estimators of β₀ and β₁, which we will call β̂₀ and β̂₁, minimize the following quantity over candidate values (b₀, b₁):

    Σ_{i=1}^{n} (Y_i − b₀ − b₁ X_i)²

In words, β̂₀ and β̂₁ minimize the sum of squared vertical distances between the values Y_i and the OLS regression line β̂₀ + β̂₁ X_i. Intuitively, this makes the regression line as close as possible to the points on the scatterplot. Graphic illustration follows.
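The minimization above can be illustrated numerically. Below is a minimal sketch (not from the lecture): it simulates data satisfying A1 and A2 (the errors u_i are drawn independently of X_i from a random sample), computes the OLS estimates via the standard closed-form solution, and checks that the fitted line attains a smaller sum of squared residuals than nearby lines. The true parameter values β₀ = 2 and β₁ = 0.5 are arbitrary choices for illustration.

```python
import random

random.seed(0)
n = 1000
beta0_true, beta1_true = 2.0, 0.5  # illustrative true parameters

# A2: (X_i, Y_i) are i.i.d. draws; A1: u_i is generated independently
# of X_i, so Corr(X_i, u_i) = 0 holds in the population.
X = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
Y = [beta0_true + beta1_true * x + e for x, e in zip(X, u)]

xbar = sum(X) / n
ybar = sum(Y) / n

# Closed-form OLS solution minimizing sum_i (Y_i - b0 - b1*X_i)^2
b1_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
          / sum((x - xbar) ** 2 for x in X))
b0_hat = ybar - b1_hat * xbar

def ssr(b0, b1):
    """Sum of squared vertical distances from the line b0 + b1*X."""
    return sum((y - b0 - b1 * x) ** 2 for x, y in zip(X, Y))

# The OLS line beats nearby candidate lines on the least-squares criterion.
assert ssr(b0_hat, b1_hat) <= ssr(b0_hat + 0.1, b1_hat)
assert ssr(b0_hat, b1_hat) <= ssr(b0_hat, b1_hat + 0.1)
```

With n = 1000 observations, the estimates (b0_hat, b1_hat) land close to the true (2.0, 0.5), previewing the consistency result that the assumptions A1-A3 deliver.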
This note was uploaded on 06/17/2010 for the course ECON 103 taught by Professor Sandrablack during the Spring '07 term at UCLA.