
# regression1_article: Introduction to Linear Models


## 1 Introduction to Linear Models

### 1.1 Uses of regression analysis

Example in lab: BatchTime and the design variables of a casting mold.

What regression analysis can do:

- investigate how variables are related, while controlling for the effects of other variables
- test whether there is a relationship between variables
- predict future values of variables, using known future values of other variables
- control system output, using the estimated relationship with system inputs

**Fitting equations to data**

Two types of mathematical models:

- **Theoretical:** derived from assumptions and mathematical reasoning
- **Empirical:** derived by "fitting" an equation to data
  - typically the equation has a simple parametric form
  - the parameters are estimated by adjusting them to get the best fit to the data
- Sometimes a theoretical model has parameters that are estimated empirically.

Example: remediation of contaminated groundwater.

### 1.2 Linear models

A linear model (LM) assumes

$$E(Y \mid X_1, \dots, X_p) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p$$

so that

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \varepsilon,$$

where $\varepsilon$ is the variation of $Y$ about $E(Y \mid X_1, \dots, X_p)$.

In the lab example: $Y$ = BatchTime, and $X_1, \dots, X_p$ are the design variables.
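A minimal sketch of this setup in Python (the coefficient values and NumPy usage here are illustrative; the actual BatchTime data are not shown in these notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
beta = np.array([2.0, 0.5, -1.0])    # beta0 (intercept), beta1, beta2: made-up values

X1 = rng.uniform(0.0, 10.0, n)
X2 = rng.uniform(0.0, 10.0, n)

# E(Y | X1, X2) = beta0 + beta1*X1 + beta2*X2  (the linear model above)
EY = beta[0] + beta[1] * X1 + beta[2] * X2

# eps is the variation of Y about E(Y | X1, X2), unpredictable from X1, X2
eps = rng.normal(0.0, 1.0, n)
Y = EY + eps
```

By construction, `Y - EY` is exactly the noise term, and its sample mean is near zero because $E(\varepsilon) = 0$.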
**When is a model linear?**

From the previous page:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \varepsilon$$

Definition: a model is **linear** if $Y$ is linear in the parameters.

Key point: $X_1, \dots, X_p$ can be anything observable, directly or indirectly. In particular, they can be nonlinear functions of measured quantities.

**Nonlinear models**

Nonlinear regression is covered in more advanced courses, e.g., ORIE 473. For now, you only need to be able to recognize a nonlinear model when you see it.

Example:

$$Y = \beta_0 + \beta_1 \exp(\beta_2 X) + \varepsilon$$

This model is nonlinear because $Y$ is a nonlinear function of $\beta_2$.

### 1.3 Parameters, variables, and noise

**Variables and parameters**

- $Y$ = outcome = response = dependent variable
- $X_1, \dots, X_p$ = predictors = independent variables = covariates
- $\beta_0$ is the intercept

- $\beta_1, \dots, \beta_p$ are the regression coefficients = slopes = partial derivatives:

$$\beta_j = \frac{\partial}{\partial X_j} E(Y \mid X_1, \dots, X_p)$$

**Noise**

From a previous slide:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \varepsilon$$

- $\varepsilon$ is the unpredictable variation in $Y$
- $\varepsilon$ cannot be predicted from $X_1, \dots, X_p$, because $E(\varepsilon \mid X_1, \dots, X_p) = 0$, so $\varepsilon$ is uncorrelated with $X_1, \dots, X_p$
- $\varepsilon$ is also called the noise, error, or residual variation

## 2 Estimation

### 2.1 Data

We observe $(Y_i, X_{i,1}, \dots, X_{i,p})$ for $i = 1, \dots, n$, where $i$ is the index of the "observation" = "case" = "subject" = "row in the data spreadsheet".

So the basic model can be rewritten as

$$Y_i = \beta_0 + \beta_1 X_{i,1} + \beta_2 X_{i,2} + \cdots + \beta_p X_{i,p} + \varepsilon_i$$

Notice that $\beta_0, \beta_1, \dots, \beta_p$ do not depend on $i$. The columns of the data spreadsheet are $Y_i, X_{i,1}, \dots, X_{i,p}$.
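As a concrete (made-up) illustration of this layout, the rows $(Y_i, X_{i,1}, \dots, X_{i,p})$ can be held in an $n$-row array whose columns match the spreadsheet; note the same $\beta$ vector generates every row:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 2
beta = np.array([4.0, 1.5, -2.0])    # beta0, beta1, beta2: the same for every row i

X = rng.uniform(0.0, 1.0, size=(n, p))    # columns X_{i,1}, X_{i,2}
eps = rng.normal(0.0, 0.1, n)             # eps_i, one noise term per observation
Y = beta[0] + X @ beta[1:] + eps          # Y_i = beta0 + beta1*X_{i,1} + beta2*X_{i,2} + eps_i

# The "data spreadsheet": one row per observation i, columns Y_i, X_{i,1}, X_{i,2}
spreadsheet = np.column_stack([Y, X])
```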
### 2.2 Least-squares

**Least-squares estimation**

The estimates $\hat\beta_0, \hat\beta_1, \dots, \hat\beta_p$ minimize

$$\sum_{i=1}^{n} \Big( Y_i - \big( \beta_0 + \beta_1 X_{i,1} + \beta_2 X_{i,2} + \cdots + \beta_p X_{i,p} \big) \Big)^2$$

**Residuals and fitted values**

Fitted value: $\hat Y_i = \hat\beta_0 + \hat\beta_1 X_{i,1} + \hat\beta_2 X_{i,2} + \cdots + \hat\beta_p X_{i,p}$

Residual: $\hat\varepsilon_i = Y_i - \hat Y_i$

Least-squares makes $\sum_{i=1}^{n} \hat\varepsilon_i^2$ as small as possible.
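The least-squares fit can be sketched with NumPy's `lstsq` on simulated data (all numbers below are illustrative). Following the earlier key point, the predictors here are nonlinear functions ($x$ and $x^2$) of one measured quantity, yet the model stays linear in the $\beta$'s:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2.0, 2.0, n)
# True model: Y = 1 + 3*x - 0.5*x**2 + eps, with eps ~ N(0, 0.1**2)
Y = 1.0 + 3.0 * x - 0.5 * x**2 + rng.normal(0.0, 0.1, n)

# Design matrix: intercept column, then X_{i,1} = x_i and X_{i,2} = x_i**2
A = np.column_stack([np.ones(n), x, x**2])
bhat, *_ = np.linalg.lstsq(A, Y, rcond=None)   # minimizes sum((Y - A @ b)**2)

Yhat = A @ bhat        # fitted values  Y_hat_i
resid = Y - Yhat       # residuals      eps_hat_i
```

A useful consequence of the minimization: the residuals are orthogonal to every column of `A`, and since `A` contains a column of ones, the residuals sum to (numerically) zero.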
