PAM 2100: Stock and Watson, Chapter 4

Stock and Watson Chapter 4: Linear Regression with One Regressor

The linear regression model is:

Y_i = β_0 + β_1·X_i + u_i

• The subscript i runs over observations, i = 1, …, n.
• Y_i is the dependent variable (also called the regressand, the response variable, or the left-hand-side variable).
• X_i is the independent variable (also called the regressor, the explanatory variable, or the right-hand-side variable).
• β_0 + β_1·X_i is the population regression line (or population regression function).
• β_0 is the intercept of the population regression line.
• β_1 is the slope of the population regression line.
• u_i is the error term; it contains all the factors other than X that determine the value of the dependent variable, Y, for a specific observation i.

In practice, we do not know the intercept β_0 or the slope β_1, so we use the sample regression line to estimate the population regression line.
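To make the notation concrete, here is a minimal sketch that simulates observations from a population regression line (numpy assumed; the parameter values, sample size, and error distribution are hypothetical and are not taken from these notes):

    import numpy as np

    # Hypothetical population parameters, for illustration only
    beta0, beta1 = 700.0, -2.0            # intercept and slope of the population regression line
    n = 420                               # number of observations (e.g., districts)

    rng = np.random.default_rng(0)
    X = rng.uniform(15.0, 25.0, size=n)   # regressor X_i (something like a student-teacher ratio)
    u = rng.normal(0.0, 10.0, size=n)     # error term u_i: all other factors that determine Y_i
    Y = beta0 + beta1 * X + u             # dependent variable Y_i generated by the model

Each simulated Y_i is the population line β_0 + β_1·X_i plus a draw of the error u_i; the estimation problem below is to recover β_0 and β_1 from the observed pairs (X_i, Y_i) alone.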
Table 4.1  Summary of the Distribution of Student-Teacher Ratios and Fifth-Grade Test Scores for 420 K-8 Districts in California in 1998

                              Mean (x̄)   Std. dev. (s)   10%     25%     40%     50%     60%     75%     90%
Student-teacher ratio (STR)   19.6       1.9             17.3    18.6    19.3    19.7    20.1    20.9    21.9
Test score                    665.2      19.1            630.4   640.0   649.1   654.5   659.4   666.7   679.1

Mechanics of OLS

The population regression line is:

Test Score = β_0 + β_1·STR

β_1 = ΔTest Score / ΔSTR = ??   (the slope is unknown and must be estimated)

Ordinary Least Squares (OLS) Estimator

The OLS regression line minimizes:
• the sum of the squares of the vertical distances between the data points and the regression line
• equivalently, the sum of the squared mistakes made in predicting Y given X

Let b_0 and b_1 be some estimators of β_0 and β_1. The value of Y_i predicted using this line is b_0 + b_1·X_i, so the OLS estimators are the values of b_0 and b_1 that minimize the sum of squared prediction mistakes, Σ (Y_i − b_0 − b_1·X_i)².
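As a rough illustration of this minimization idea (a sketch only; numpy assumed, and the data below are simulated and hypothetical rather than the California data), one can search over candidate lines and keep the pair (b_0, b_1) with the smallest sum of squared prediction mistakes:

    import numpy as np

    # Simulated, hypothetical data: X plays the role of STR, Y the role of a test score
    rng = np.random.default_rng(1)
    X = rng.uniform(15.0, 25.0, size=50)
    Y = 700.0 - 2.0 * X + rng.normal(0.0, 10.0, size=50)

    def ssr(b0, b1):
        # Sum of squared mistakes made in predicting Y with the line b0 + b1*X
        return float(np.sum((Y - (b0 + b1 * X)) ** 2))

    # Coarse grid search over candidate intercepts and slopes
    candidates = ((ssr(b0, b1), b0, b1)
                  for b0 in np.linspace(600.0, 800.0, 201)
                  for b1 in np.linspace(-5.0, 1.0, 121))
    best_ssr, b0_best, b1_best = min(candidates)
    print(b0_best, b1_best, best_ssr)

The closed-form OLS formulas in the next section deliver the minimizing intercept and slope directly, without any search.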
The OLS Estimator, Predicted Values, and Residuals

The OLS estimators of the slope β_1 and the intercept β_0 are:

β̂_1 = Σ (X_i − X̄)(Y_i − Ȳ) / Σ (X_i − X̄)²   (both sums run over i = 1, …, n)

β̂_0 = Ȳ − β̂_1·X̄

The OLS predicted values Ŷ_i and residuals û_i are:

Ŷ_i = β̂_0 + β̂_1·X_i,   i = 1, …, n
û_i = Y_i − Ŷ_i,   i = 1, …, n

The estimated intercept (β̂_0), slope (β̂_1), and residuals (û_i) are computed from a sample of n observations of X_i and Y_i, i = 1, …, n. These are estimates of the unknown true population intercept (β_0), slope (β_1), and error term (u_i).
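These formulas translate directly into a few lines of code. A minimal sketch, again with simulated, hypothetical data and assuming numpy:

    import numpy as np

    # Simulated, hypothetical sample of n observations (X_i, Y_i)
    rng = np.random.default_rng(2)
    X = rng.uniform(15.0, 25.0, size=100)
    Y = 700.0 - 2.0 * X + rng.normal(0.0, 10.0, size=100)

    X_bar, Y_bar = X.mean(), Y.mean()

    # OLS slope: sum of (X_i - X_bar)(Y_i - Y_bar) divided by sum of (X_i - X_bar)^2
    beta1_hat = np.sum((X - X_bar) * (Y - Y_bar)) / np.sum((X - X_bar) ** 2)
    # OLS intercept: Y_bar - beta1_hat * X_bar
    beta0_hat = Y_bar - beta1_hat * X_bar

    Y_hat = beta0_hat + beta1_hat * X    # predicted values
    u_hat = Y - Y_hat                    # residuals

    print(beta0_hat, beta1_hat)
    print(u_hat.sum())

The last line is a quick sanity check: when the regression includes an intercept, the OLS residuals sum to (numerically) zero.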