4/7/2010

Estimation I
• As before, we are going to use our random sample to learn about the population model.
• More specifically, we are going to use our sample to try to estimate the parameters β₀ and β₁. Why? Well, β₁, for example, measures the effect of changing X on Y, which is exactly what we are trying to learn.

Estimation II
• We are going to perform our estimation under 3 key assumptions:
  – A1) Corr(Xᵢ, uᵢ) = Cov(Xᵢ, uᵢ) = 0. This says that the errors uᵢ are uncorrelated with the regressors Xᵢ.
  – A2) (Xᵢ, Yᵢ) are i.i.d. (independent and identically distributed). This is the case if we have a random sample.
  – A3) (technical) Xᵢ and Yᵢ have finite fourth moments, i.e. E[Xᵢ⁴] < ∞ and E[Yᵢ⁴] < ∞. In practice, this means that large outliers, i.e. values of Xᵢ and Yᵢ that are far outside the range of the data, are very unlikely.
• Later, we are going to pay particular attention to Assumption A1 and the question of whether it holds. But for now, let's just assume that it does.

Estimation III
• The most common way of estimating the parameters β₀ and β₁ is called Ordinary Least Squares (OLS).
• The OLS estimators of β₀ and β₁, which we will call β̂₀ and β̂₁, minimize the following quantity over candidate values (b₀, b₁):

  Σᵢ₌₁ⁿ (Yᵢ − b₀ − b₁Xᵢ)²

• In words, β̂₀ and β̂₁ minimize the sum of squared vertical distances between the values Yᵢ and the OLS regression line β̂₀ + β̂₁Xᵢ. Intuitively, this makes the regression line "as close as possible" to the points on the scatterplot. Graphic illustration...
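For a single regressor, the minimization above has a well-known closed-form solution: β̂₁ = Σ(Xᵢ − X̄)(Yᵢ − Ȳ) / Σ(Xᵢ − X̄)², and β̂₀ = Ȳ − β̂₁X̄. A minimal Python sketch of this formula (not part of the original slides; the `ols` function name and the sample data are illustrative):

```python
def ols(x, y):
    """Return the OLS estimates (b0, b1) minimizing sum((y_i - b0 - b1*x_i)^2)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    # Numerator: sum of cross-deviations; denominator: sum of squared deviations of x
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    return b0, b1

# Usage: points generated exactly on the line Y = 2 + 3X,
# so OLS recovers intercept 2 and slope 3.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0 + 3.0 * xi for xi in x]
b0, b1 = ols(x, y)  # b0 = 2.0, b1 = 3.0
```

With noisy data the fitted line will not pass through every point; it is simply the line with the smallest sum of squared vertical distances, as described above.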
Spring '07, Sandra Black
