# MS&E 226 "Small" Data: In-Class Midterm Solutions (Take-Home)
**MS&E 226 In-Class Midterm Examination, "Small" Data, October 20, 2015**

**PROBLEM 1.** In this problem we will investigate some of the properties of *weighted least squares*. If you haven't already done so, you should review the notes for Discussion Section 3.

Suppose we are given $n$ observations $(Y_i, X_i)$, together with positive weights $w_i$, $i = 1, \ldots, n$. In weighted least squares, we find the coefficients $\hat{\gamma}$ that minimize the following objective function:

$$\sum_{i=1}^{n} w_i \left( Y_i - \hat{\gamma}_0 - \hat{\gamma}_1 X_{i1} - \cdots - \hat{\gamma}_p X_{ip} \right)^2. \tag{1}$$

We refer to the resulting coefficients $\hat{\gamma}$ as the weighted least squares (WLS) solution; we refer to $w = (w_1, \ldots, w_n)$ as the vector of *weights*. Weighted least squares allows us to force the fit to be closer at observations with higher weight.

(a) (5 points) The first part of this question shows there is "no free lunch": this closer fit at highly weighted observations comes at the expense of a poorer fit at observations with lower weight. Suppose that in a simple linear regression setting (i.e., only one covariate), $\hat{r}_i = Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_i$ are the residuals resulting from the ordinary least squares (OLS) solution, while $\tilde{r}_i = Y_i - \hat{\gamma}_0 - \hat{\gamma}_1 X_i$ are the residuals from the weighted least squares solution. Is it possible to have $|\tilde{r}_i| < |\hat{r}_i|$ for all $i$?

(b) (5 points) This part shows how the weights allow us to tailor the fit of our linear model. Run the following R code:

```r
X = 1:10
Y = X^2
```

This is a set of 10 observations, where $X = (1, 2, \ldots, 10)$ and $Y_i = X_i^2$ for each $i$. To run weighted least squares in R, we use `lm` with the `weights` option specifying the vector of weights. Run the following two pieces of code:

```r
w1 = c(1,1,0,0,0,0,0,0,0,0)
lm(Y ~ 1 + X, weights = w1)

w2 = c(0,0,0,0,0,0,0,0,1,1)
lm(Y ~ 1 + X, weights = w2)
```

Examine the coefficients, and explain the result.
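One way to see what part (b)'s code produces, without leaving this page, is to redo the weighted fit in Python. The sketch below is not part of the exam: `numpy.linalg.lstsq` stands in for R's `lm`, and the weighted fit is computed as OLS on $\sqrt{w}$-scaled data.

```python
# Part (b) redone in Python (a sketch; NumPy's lstsq stands in for R's lm).
# WLS with weights w is computed as OLS on sqrt(w)-scaled data.
import numpy as np

x = np.arange(1, 11, dtype=float)          # X = 1:10
y = x ** 2                                 # Y = X^2
A = np.column_stack([np.ones(10), x])      # design matrix [1, X]

def wls(w):
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef                            # (intercept, slope)

w1 = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float)
w2 = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=float)

# All the weight sits on two points, so each "fit" is simply the line
# through those two points:
print(wls(w1))  # ~ (-2, 3): the line through (1, 1) and (2, 4)
print(wls(w2))  # ~ (-90, 19): the line through (9, 81) and (10, 100)
```

With only two observations carrying nonzero weight, the two coefficients are exactly determined by those two points, which is the phenomenon part (b) asks you to explain.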

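Part (a)'s "no free lunch" claim can also be probed numerically. The following Python sketch uses made-up data (not from the exam; NumPy's `lstsq` stands in for R's `lm`): it fits OLS and a WLS that up-weights the first two observations, then compares residual magnitudes.

```python
# Numerical check of part (a): WLS can shrink residuals at heavy points,
# but can never have |r_wls_i| < |r_ols_i| for every i, since that would
# give a smaller unweighted sum of squares than OLS achieves -- a
# contradiction. (Made-up data; lstsq stands in for R's lm.)
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(0.0, 3.0, size=10)
A = np.column_stack([np.ones(10), x])

# OLS: minimize the plain sum of squared residuals.
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r_ols = y - A @ beta

# WLS (weight 10 on the first two observations, 1 elsewhere),
# computed as OLS on sqrt(w)-scaled data:
w = np.array([10, 10, 1, 1, 1, 1, 1, 1, 1, 1], dtype=float)
sw = np.sqrt(w)
gamma, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
r_wls = y - A @ gamma

print(np.abs(r_wls) < np.abs(r_ols))         # never all True
print(np.sum(r_ols**2) <= np.sum(r_wls**2))  # True: OLS minimizes the SSE
```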

(c) (5 points) Assume there is no intercept, i.e., the WLS solution is obtained by minimizing:

$$\sum_{i=1}^{n} w_i \left( Y_i - \hat{\gamma}_1 X_{i1} - \cdots - \hat{\gamma}_p X_{ip} \right)^2. \tag{2}$$

We show how the solution to WLS may be obtained via OLS. For each $i$, define $\tilde{Y}_i = \sqrt{w_i}\, Y_i$, and define $\tilde{X}_{ij} = \sqrt{w_i}\, X_{ij}$. Now let $\tilde{\beta}$ be the vector of coefficients obtained by applying ordinary least squares to the data $\tilde{X}, \tilde{Y}$. Explain why $\tilde{\beta} = \hat{\gamma}$. Use this fact to show that:

$$\hat{\gamma} = (X^\top W X)^{-1} X^\top W Y,$$

where $W = \mathrm{diag}(w_1, \ldots, w_n)$. *Hint*: Recall that the OLS solution is given by $(\tilde{X}^\top \tilde{X})^{-1} \tilde{X}^\top \tilde{Y}$.

In the remainder of the problem, we investigate one situation where weighted least squares can be useful. To simplify things, we focus on a simple linear regression setting. We will generate data using the following R code:

```r
X = rpois(1000, 20)
sd_err = 2 * X^2
Y = 3 * X + rnorm(1000, 0, sd_err)
```

Do this once and plot Y against X, so that you can see what is happening. The key feature of this data-generating process (population model) is that the variance of the error term scales with the value of $X_i$. Note that this differs from the typical population model we considered in lecture, where the error variance was constant regardless of the value of $X_i$.
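The identity in part (c) can be sanity-checked numerically. The Python sketch below (simulated data, not part of the exam; random design with no intercept, matching the setting of part (c)) compares the closed form against OLS on the scaled data.

```python
# Checking part (c): the closed form (X^T W X)^{-1} X^T W Y matches
# OLS applied to the sqrt(w)-scaled data X~, Y~. (Simulated data.)
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.normal(size=(n, p))                 # no intercept column
Y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)
w = rng.uniform(0.5, 2.0, size=n)           # positive weights
W = np.diag(w)

# Closed form: gamma = (X^T W X)^{-1} X^T W Y
gamma = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)

# OLS on the transformed data: X~_ij = sqrt(w_i) X_ij, Y~_i = sqrt(w_i) Y_i
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], Y * sw, rcond=None)

print(np.allclose(gamma, beta))  # True: the two computations agree
```

The agreement is exactly the reason the $\sqrt{w_i}$ scaling is used: squaring the scaled residuals reproduces the weighted objective in (2).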

