# Leverage, Studentized Residuals, and Cook's Distance


Changing the response at a high-leverage point shifts the fitted coefficients substantially, while changing it at a low-leverage point barely moves them:

```r
> lm(y ~ x1 + x2)

Call:
lm(formula = y ~ x1 + x2)

Coefficients:
(Intercept)           x1           x2
        3.7         -0.7          4.4

> y[1] = 20
> lm(y ~ x1 + x2)            ### point 1 has large leverage

Call:
lm(formula = y ~ x1 + x2)

Coefficients:
(Intercept)           x1           x2
      8.875       -1.375        4.625

> y[1] = 11
> y[4] = 30
> lm(y ~ x1 + x2)            ### point 4 has small leverage

Call:
lm(formula = y ~ x1 + x2)

Coefficients:
(Intercept)           x1           x2
        5.7         -0.7          4.4

> mean(x1); mean(x2)
[1] 7
[1] 3
```

Example 3:

```r
> x1 = c(5,4,3,0,-3,-4,-5,-4,-3,0,3,4)
> x2 = c(0,3,4,5,4,3,0,-3,-4,-5,-4,-3)
> par(pty="s")               ### square plot
> plot(x1, x2)
> points(0, 0, pch=3)
> X = cbind(rep(1,12), x1, x2)
> H = X %*% solve(t(X) %*% X) %*% t(X)
> lev = rep(0,12)
> for (i in 1:12) lev[i] = H[i,i]
> lev
 [1] 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25
> sum(lev)
[1] 3
```

Residuals:

$$ e = Y - \hat{Y} = (I - H)\,Y, \qquad \mathrm{Var}(e_i) = (1 - h_i)\,\sigma^2, \quad i = 1, 2, \ldots, n. $$

Studentized residuals:

$$ r_i = \frac{e_i}{s\sqrt{1 - h_i}}, \quad i = 1, 2, \ldots, n. $$

Cook's distance:

$$ D_i = \frac{r_i^2}{p}\cdot\frac{h_i}{1 - h_i} $$

measures the influence of data point $i$ on the regression equation.
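The Example 3 session can be mirrored numerically — a sketch in Python with numpy (the notes themselves use R). Because all twelve points lie on a circle of radius 5 centered at the origin, every leverage comes out to exactly 0.25, and the leverages sum to the trace of $H$, which equals $p = 3$, the number of regression coefficients.

```python
import numpy as np

# Example 3 data: 12 points lying on a circle of radius 5 around the origin.
x1 = np.array([5, 4, 3, 0, -3, -4, -5, -4, -3, 0, 3, 4], dtype=float)
x2 = np.array([0, 3, 4, 5, 4, 3, 0, -3, -4, -5, -4, -3], dtype=float)
n = 12

# Design matrix with an intercept column; hat matrix H = X (X'X)^{-1} X'.
X = np.column_stack([np.ones(n), x1, x2])
H = X @ np.linalg.inv(X.T @ X) @ X.T

# Leverages are the diagonal entries h_i = H[i, i].
lev = np.diag(H)
print(lev)        # every point has leverage 0.25 by symmetry
print(lev.sum())  # trace(H) = p = 3
```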
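The residual formulas can likewise be checked numerically. The notes give no response vector for Example 3, so the sketch below manufactures a hypothetical `y` purely to illustrate the computations; the specific numbers carry no meaning from the notes.

```python
import numpy as np

# Example 3 design (12 points on a circle of radius 5).
x1 = np.array([5, 4, 3, 0, -3, -4, -5, -4, -3, 0, 3, 4], dtype=float)
x2 = np.array([0, 3, 4, 5, 4, 3, 0, -3, -4, -5, -4, -3], dtype=float)
n, p = 12, 3

X = np.column_stack([np.ones(n), x1, x2])
H = X @ np.linalg.inv(X.T @ X) @ X.T
lev = np.diag(H)

# Hypothetical response (NOT from the notes), just to exercise the formulas.
rng = np.random.default_rng(420)
y = 2.0 + x1 - 0.5 * x2 + rng.normal(size=n)

# e = (I - H) y; s estimates sigma with n - p degrees of freedom.
e = (np.eye(n) - H) @ y
s = np.sqrt(e @ e / (n - p))

# Studentized residuals r_i = e_i / (s * sqrt(1 - h_i)).
r = e / (s * np.sqrt(1.0 - lev))

# Cook's distance D_i = (r_i^2 / p) * h_i / (1 - h_i).
D = (r ** 2 / p) * lev / (1.0 - lev)
```

With every $h_i = 0.25$ here, $D_i$ is just $r_i^2 / 9$, so the ordering of the Cook's distances matches the ordering of the squared studentized residuals.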

## This note was uploaded on 04/03/2014 for the course STAT 420 taught by Professor Stepanov during the Spring '08 term at University of Illinois, Urbana-Champaign.
