mws_gen_reg_txt_straightline

Chapter 06.03 Linear Regression

After reading this chapter, you should be able to

1. define regression,
2. use several criteria for minimizing the residuals and choose the right criterion,
3. derive the constants of a linear regression model based on the least squares method criterion,
4. use the derived formulas for the constants of a linear regression model in examples, and
5. prove that the constants of the linear regression model are unique and correspond to a minimum.

Linear regression is the most popular regression model. In this model, we wish to predict the response to $n$ data points $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ by a regression model given by

  $y = a_0 + a_1 x$    (1)

where $a_0$ and $a_1$ are the constants of the regression model. A measure of goodness of fit, that is, how well $a_0 + a_1 x$ predicts the response variable $y$, is the magnitude of the residual $E_i$ at each of the $n$ data points:

  $E_i = y_i - (a_0 + a_1 x_i)$    (2)

Ideally, if all the residuals $E_i$ are zero, one may have found an equation in which all the points lie on the model. Thus, minimization of the residuals is an objective of obtaining the regression coefficients. The most popular method to minimize the residuals is the least squares method, where the estimates of the constants of the model are chosen such that the sum of the squared residuals, $\sum_{i=1}^{n} E_i^2$, is minimized.

Why minimize the sum of the squares of the residuals? Why not, for instance, minimize the sum of the residuals or the sum of the absolute values of the residuals? Alternatively, the constants of the model can be chosen such that the average residual is zero without making the individual residuals small. Will any of these criteria yield unbiased parameters with the smallest variance? All of these questions will be answered below. Look at the data in Table 1.

Table 1 Data points.
  x     y
  2.0   4.0
  3.0   6.0
  2.0   6.0
  3.0   8.0

To explain this data by a straight line regression model,

  $y = a_0 + a_1 x$    (3)

and using minimizing $\sum_{i=1}^{n} E_i$ as a criterion to find $a_0$ and $a_1$, we find that for (Figure 1)

  $y = 4x - 4$    (4)

the sum of the residuals is $\sum_{i=1}^{4} E_i = 0$, as shown in Table 2.

Table 2 The residuals at each data point for the regression model $y = 4x - 4$.

  x     y     y_predicted   E = y - y_predicted
  2.0   4.0   4.0            0.0
  3.0   6.0   8.0           -2.0
  2.0   6.0   4.0            2.0
  3.0   8.0   8.0            0.0
                            $\sum_{i=1}^{4} E_i = 0$

[Figure 1 Regression curve $y = 4x - 4$ for $y$ vs. $x$ data.]

So does this give us the smallest error? It does, as $\sum_{i=1}^{4} E_i = 0$. But it does not give unique values for the parameters of the model. A straight line of the model

  $y = 6$    (5)

also makes $\sum_{i=1}^{4} E_i = 0$, as shown in Table 3. ...
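The non-uniqueness argument above can be checked numerically. The sketch below (a minimal illustration, not code from the chapter; the function name `residual_sum` and the normal-equation formulas, which the chapter derives later, are assumptions of this sketch) evaluates $\sum E_i$ for the two models (4) and (5) on the Table 1 data, and then computes the unique least squares constants instead.

```python
# Data from Table 1
x = [2.0, 3.0, 2.0, 3.0]
y = [4.0, 6.0, 6.0, 8.0]

def residual_sum(a0, a1):
    """Sum of residuals E_i = y_i - (a0 + a1*x_i), Eq. (2)."""
    return sum(yi - (a0 + a1 * xi) for xi, yi in zip(x, y))

# Model (4): y = 4x - 4  ->  a0 = -4, a1 = 4
# Model (5): y = 6       ->  a0 =  6, a1 = 0
print(residual_sum(-4.0, 4.0))  # 0.0
print(residual_sum(6.0, 0.0))   # 0.0 as well: the criterion is not unique

# Least squares minimizes sum(E_i^2) instead; the normal equations
# (derived later in the chapter) give a unique a0, a1:
n = len(x)
Sx, Sy = sum(x), sum(y)
Sxx = sum(xi * xi for xi in x)
Sxy = sum(xi * yi for xi, yi in zip(x, y))
a1 = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)
a0 = Sy / n - a1 * Sx / n
print(a0, a1)  # the unique least squares line y = a0 + a1*x
```

Both candidate lines drive the sum of residuals to exactly zero, which is why that criterion cannot pick a single best line, while the squared-residual criterion yields one answer.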
This note was uploaded on 06/12/2011 for the course EML 3041 taught by Professor A. Kaw during the Spring '08 term at the University of South Florida - Tampa.
