• 1 Page glim-1
    Glim-1

    School: University Of Michigan

    [Figure: probability curves P(Y = 1 | X) and P(Y = 0 | X) plotted against X.]

  • 1 Page fit-complexity
    Fit-complexity

    School: University Of Michigan

    [Figure: scatter plot of Y against X illustrating fit and model complexity.]

  • 1 Page fig-cov_int_slope
    Fig-cov_int_slope

    School: University Of Michigan

    [Figure: fitted lines on a scatter of Y against X; one underestimates the slope and overestimates the intercept, the other overestimates the slope and underestimates the intercept.]

  • 1 Page fig-convexity
    Fig-convexity

    School: University Of Michigan

    [Figure: convexity of Q; for λ between 0 and 1, Q(λv + (1 − λ)w) lies below the chord λQ(v) + (1 − λ)Q(w).]

  • 1 Page diagnostics-4
    Diagnostics-4

    School: University Of Michigan

    [Figure: normal quantile-quantile plot of sample quantiles against theoretical quantiles.]

  • 1 Page diagnostics-3
    Diagnostics-3

    School: University Of Michigan

    [Figure: studentized residuals plotted against the covariate X2.]

  • 1 Page diagnostics-1
    Diagnostics-1

    School: University Of Michigan

    [Figure: studentized residuals plotted against fitted values.]

  • 9 Pages ps06
    Ps06

    School: University Of Michigan

    Statistics 600 Problem Set 6 Due in class on November 2nd 1. Prove that as long as there is an intercept in the model, Pii ≥ 1/n, where n is the number of cases. As a hint, consider centering each column of the design matrix (except the intercept). So
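    The bound on the hat-matrix diagonal can be checked numerically. A minimal sketch (the design matrix and dimensions here are illustrative, not from the problem set) computes the leverages with NumPy and verifies that each is at least 1/n when an intercept column is present:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3

# Design matrix with an intercept in the first column
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])

# Hat (projection) matrix P = X (X'X)^{-1} X'
P = X @ np.linalg.solve(X.T @ X, X.T)
leverages = np.diag(P)

# With an intercept, every leverage P_ii is at least 1/n
print(leverages.min() >= 1 / n - 1e-12)  # True
```

    The trace of P equals the number of columns of X, so the leverages sum to p + 1 here; the 1/n bound says no single case can fall below the average share forced by the intercept.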

  • 8 Pages exam-2006
    Exam-2006

    School: University Of Michigan

    Statistics 600 Exam December 5, 2006 1. Suppose the data generating model is a simple linear model Y = α + βX + ε with E(ε|X) = 0 and cov(ε|X) = σ²I. Suppose the sample size is n, an even number, and we divide the cases into non-overlapping pairs (i1 ,

  • 4 Pages ps01
    Ps01

    School: University Of Michigan

    Statistics 600 Problem Set 1 Due in class on September 25 1. Prove that the "horizontal residuals" sum to zero based on a least squares fit of Y (as the dependent variable) on X (as the independent variable). The horizontal residuals are the lengths

  • 5 Pages exam1
    Exam1

    School: University Of Michigan

    Statistics 600 Exam 1 October 14, 2008 1. (a) Suppose we fit a simple linear regression between Y and X. Our goal is to estimate the coefficient β in the relationship E(Y|X) = α + βX with the least possible variance, where cov(Y|X) = σ²I can be assumed to

  • 13 Pages prediction
    Prediction

    School: University Of Michigan

    Prediction Ridge regression uses the minimizer of a penalized squared error loss function to estimate the regression coefficients: β̂ = argmin_β ‖Y − Xβ‖² + λβ′Dβ. Typically D is a diagonal matrix with 0 in the 1,1 position and ones on the rest of the di
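    The penalized criterion in this excerpt has a closed-form minimizer, β̂ = (X′X + λD)⁻¹X′Y. A sketch under illustrative assumptions (the data, λ = 1, and variable names are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40

X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
Y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(size=n)

lam = 1.0
# D penalizes every coefficient except the intercept (0 in the 1,1 position)
D = np.diag([0.0, 1.0, 1.0, 1.0])

# Ridge estimate: minimizer of ||Y - X b||^2 + lam * b' D b
beta_ridge = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)

# Setting lam = 0 recovers ordinary least squares
beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)
```

    Setting the gradient of the penalized loss to zero gives X′(Y − Xβ̂) = λDβ̂, which is the stationarity condition the solve enforces.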

  • 39 Pages decomposing-variance
    Decomposing-variance

    School: University Of Michigan

    Decomposing variance Pearson correlation The population Pearson correlation coefficient of two jointly distributed random variables X and Y is ρXY = cov(X, Y) / (σX σY). It is estimated by ρ̂XY = ĉov(X, Y) / (σ̂X σ̂Y) = Σ(Xi − X̄)(Yi − Ȳ) / √(Σ(Xi − X̄)² Σ(Yi − Ȳ)²).
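    The estimator in this excerpt is the usual sample correlation. A quick sketch (with simulated data; the names are illustrative) computes it from the definition and checks it against NumPy's built-in estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 0.6 * x + rng.normal(size=100)

# Sample Pearson correlation computed from its definition
xc, yc = x - x.mean(), y - y.mean()
r = (xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum())

# Agrees with NumPy's estimate
print(np.isclose(r, np.corrcoef(x, y)[0, 1]))  # True
```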

  • 29 Pages glim
    Glim

    School: University Of Michigan

    Generalized linear models The key properties of a linear model are that E(Y|X) = Xβ and var(Y|X) ∝ 1. In some cases where these conditions are not met, we can transform Y to rectify things. However it is often difficult to find a transform that simu

  • 7 Pages ps03
    Ps03

    School: University Of Michigan

    Statistics 600 Problem Set 3 Due in class on Tuesday, October 23rd 1. Suppose we have a bivariate regression model Y = α + βX + γZ + ε where the usual assumptions E(ε|X, Z) = 0 and cov(ε|X, Z) ∝ I hold. In addition, for simplicity, assume that EX = EZ = 0

  • 43 Pages diagnostics
    Diagnostics

    School: University Of Michigan

    Diagnostics Motivation When working with a linear model with design matrix X, we may optimistically suppose that EY ∈ col(X) and var(Y|X) = σ²I. Point estimates and inferences depend on these assumptions approximately holding. Inferences for sma

  • 26 Pages confound
    Confound

    School: University Of Michigan

    Model mis-specification and confounding Suppose we have a data generating model of the form Y = α + βX + γZ + ε. The usual assumptions E(ε|X, Z) = 0 and var(ε|X, Z) = σ² hold. The covariate X is observed, but Z is not observable. If we regress Y on X,
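    The bias from omitting an unobserved Z can be simulated. A sketch under assumed coefficients (α = 1, β = 2, γ = 3, and a chosen X–Z correlation, all illustrative): regressing Y on X alone yields a slope that converges to β + γ·cov(X, Z)/var(X) rather than β.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Z is an unobserved confounder correlated with X
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 + 2.0 * x + 3.0 * z + rng.normal(size=n)

# Regress Y on X alone (intercept + slope), omitting Z
X1 = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X1, y, rcond=None)[0]

# The slope is biased away from 2; its limit here is
# 2 + 3 * cov(X, Z) / var(X) = 2 + 3 * 0.8 / 1.64
print(beta[1])
```

    With this simulated design the fitted slope lands near 3.46, well away from the true β = 2, because X proxies for the omitted Z.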

  • 10 Pages ps07
    Ps07

    School: University Of Michigan

    Statistics 600 Problem Set 7 Due in class on November 14th 1. What is the relationship between measurement error and confounding? In the notation of the confounding course notes, let Z be the true covariate level, and let X be the covariate as observ

  • 9 Pages ps02
    Ps02

    School: University Of Michigan

    Statistics 600 Problem Set 2 Due in class on September 26 1. In this problem you will use simulation to better understand the sampling variation of β̂ when two covariates are strongly dependent. Suppose we can partition the design matrix as X = [1 U V

  • 1 Page syllabus
    Syllabus

    School: University Of Michigan

    Statistics 600 Applied statistics and data analysis I Instructor: Kerby Shedden Office: 461 West Hall E-mail: kshedden@umich.edu Office hours: Monday 3-4, Wednesday 11-12 Course web page: www.stat.lsa.umich.edu/kshedden/Courses/Stat600 Course description:

  • 132 Pages least-squares
    Least-squares

    School: University Of Michigan

    Least Squares Fitting and Inference Kerby Shedden August, 2008 Definitions and Motivation Independent variables (predictors, regressors, covariates): X = (X1 , . . . , Xp ) Dependent variable (response, outcome): Y The goal is to learn about an unkno

  • 11 Pages ps02
    Ps02

    School: University Of Michigan

    Statistics 600 Problem set 2 Due in class on October 2nd. 1. Suppose A and B are symmetric matrices, and A, B, and A + B are all idempotent. Show that AB = 0. Hint: use the Schur decomposition for a symmetric matrix S, S = V DV′, where D is diagonal a
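    The claimed result can be sanity-checked numerically with projections onto orthogonal subspaces, which are symmetric and idempotent and have an idempotent sum (the construction below is illustrative, not part of the problem set):

```python
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.normal(size=(6, 4)))  # orthonormal columns

# Projections onto orthogonal subspaces: A, B, and A + B are all idempotent
A = Q[:, :2] @ Q[:, :2].T
B = Q[:, 2:] @ Q[:, 2:].T

# The product of the two projections vanishes, as the problem asserts
print(np.allclose(A @ B, 0))  # True
```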
