Statistics GR5205 — Fall 2016
Midterm Exam
Solution set

1. Given data $\{(x_i, y_i) : i = 1, \ldots, n\}$ and assuming the simple linear regression model $Y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, it will generally be the case that
$$\sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2 < \sum_{i=1}^{n} (y_i - b_0 - b_1 x_i)^2,$$
where $b_1$ and $b_0$ denote the least squares estimates of $\beta_1$ and $\beta_0$, respectively.
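Not part of the original exam: a minimal numerical sketch of how one might compare the two sums of squares in statement 1. The simulated data, the seed, and the "true" coefficients $\beta_0 = 1$, $\beta_1 = 2$ are all assumptions made for illustration.

# Sketch: compare the residual sum of squares at the true coefficients
# with the residual sum of squares at the least squares estimates.
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 2.0                       # assumed "true" coefficients
x = rng.uniform(0, 10, size=50)
y = beta0 + beta1 * x + rng.normal(0, 1, size=50)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()                 # least squares estimates

sse_true = np.sum((y - beta0 - beta1 * x) ** 2)   # criterion at the true betas
sse_ls = np.sum((y - b0 - b1 * x) ** 2)           # criterion at the LS estimates
print(sse_true, sse_ls)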
2. The least squares estimates of the slope and intercept in simple linear regression are the values of $b_1$ and $b_0$ that minimize the sum of the squared vertical distances between the points $(x_i, y_i)$ and the line $y = b_0 + b_1 x$ in a scatterplot of $y$ versus $x$.
3. The least squares estimates of the slope and intercept in simple linear regression are found by solving the so-called normal equations, a nonlinear system with no closed-form solution, but readily solvable with modern computers via an iterative algorithm.
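For reference (not part of the original solutions), the normal equations for the simple linear regression criterion $\sum_{i=1}^{n}(y_i - b_0 - b_1 x_i)^2$ take the standard textbook form
$$\sum_{i=1}^{n} y_i = n b_0 + b_1 \sum_{i=1}^{n} x_i, \qquad \sum_{i=1}^{n} x_i y_i = b_0 \sum_{i=1}^{n} x_i + b_1 \sum_{i=1}^{n} x_i^2,$$
obtained by setting the partial derivatives of the criterion with respect to $b_0$ and $b_1$ equal to zero.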
4. In simple linear regression, the estimation error of the least squares line is guaranteed to be zero at at least one point, the sample mean of the $x$-values in the data set. This is because, although $b_1$ and $b_0$ will not equal $\beta_1$ and $\beta_0$, they will satisfy $E(Y \mid X = \bar{x}) = \beta_0 + \beta_1 \bar{x} = b_0 + b_1 \bar{x}$.
5. In simple linear regression, the ANOVA $F$-test of no linear association and the $t$-test of $H_0\colon \beta_1 = 0$ versus $H_a\colon \beta_1 \neq 0$ will give the exact same result; in fact, the test statistics are related by $F^* = (t^*)^2$.
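Not part of the original exam: a minimal sketch (simulated data; the seed, sample size, and coefficients are arbitrary assumptions) of one way to check the relation $F^* = (t^*)^2$ numerically.

# Sketch: compute the ANOVA F statistic and the t statistic for beta1 by hand
# and compare F* with (t*)^2.
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = rng.uniform(0, 5, size=n)
y = 0.5 + 1.5 * x + rng.normal(0, 1, size=n)

Sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
b0 = y.mean() - b1 * x.mean()
yhat = b0 + b1 * x

sse = np.sum((y - yhat) ** 2)          # error sum of squares
ssr = np.sum((yhat - y.mean()) ** 2)   # regression sum of squares
mse = sse / (n - 2)                    # mean squared error

F_star = ssr / mse                     # ANOVA F statistic (1 and n-2 df)
t_star = b1 / np.sqrt(mse / Sxx)       # t statistic for H0: beta1 = 0
print(F_star, t_star ** 2)             # the two values agree up to rounding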
6. One reason the assumption of constant variance is so crucial in simple linear regression is that the least squares estimators $b_1$ and $b_0$ are no longer unbiased for $\beta_1$ and $\beta_0$, respectively, if the assumption of constant variance is violated.
7. If we think of $SSTO = \sum_{i=1}^{n} (y_i - \bar{y})^2$ as a measure of the total variation observed in our data set, and define the residual sum of squares for our model fit by $SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$, then we can reasonably interpret $SSE/SSTO$ as the proportion of that variation which is unexplained by the model which yielded the fitted values $\hat{y}_i$.
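Equivalently (a standard identity, not part of the original solutions), since $SSTO = SSR + SSE$,
$$\frac{SSE}{SSTO} = 1 - \frac{SSR}{SSTO} = 1 - R^2,$$
so this ratio is one minus the coefficient of determination.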
8. The residuals versus fitted values plot is a useful graphical tool for assessing the assumption that the error term and response variable are uncorrelated, that is, $\mathrm{Cov}(\varepsilon_i, Y_i) = 0$ for $i = 1, \ldots, n$.
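Not part of the original exam: a minimal sketch (simulated data and a hand-computed least squares fit, all chosen arbitrarily for illustration) of how such a residuals-versus-fitted plot can be produced.

# Sketch: fit a simple linear regression by hand and plot residuals vs fitted values.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
x = rng.uniform(0, 5, size=40)
y = 1.0 + 2.0 * x + rng.normal(0, 1, size=40)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
yhat = b0 + b1 * x                     # fitted values
resid = y - yhat                       # residuals

plt.scatter(yhat, resid)
plt.axhline(0, linestyle="--")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()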
