Chapter Eleven

Lecture Notes Chapter Eleven: Simple Linear Regression
Randall Miller

1. Probabilistic Models

General Form of Probabilistic Models

    y = Deterministic component + Random error

where y is the variable of interest. We always assume that the mean value of the random error equals 0. This is equivalent to assuming that the mean value of y, E(y), equals the deterministic component of the model; that is,

    E(y) = Deterministic component

A First-Order (Straight-Line) Probabilistic Model

    y = β₀ + β₁x + ε

where

    y = Dependent or response variable (the variable to be modeled)
    x = Independent or predictor variable (the variable used as a predictor of y)
    E(y) = β₀ + β₁x = Deterministic component
    ε (epsilon) = Random error component
    β₀ (beta naught) = y-intercept of the line; that is, the point at which the line intersects, or cuts through, the y-axis (see Figure 11.2)
    β₁ (beta one) = Slope of the line; that is, the amount of increase (or decrease) in the deterministic component of y for every one-unit increase in x. [Note: A positive slope implies that E(y) increases by β₁ for each one-unit increase in x; a negative slope implies that E(y) decreases by β₁. (See Figure 11.2.)]

Steps in a Regression Analysis

1. Hypothesize the deterministic component of the model that relates the mean E(y) to the independent variable x (Section 11.2).
2. Use the sample data to estimate the unknown parameters in the model (Section 11.2).
3. Specify the probability distribution of the random-error term and estimate the standard deviation of this distribution (Section 11.3).
4. Statistically evaluate the usefulness of the model (Sections 11.4 and 11.5).
5. When satisfied that the model is useful, use it for prediction, estimation, and other purposes (Section 11.6).

2. Fitting the Model: The Least Squares Approach

Definition 11.1
The least squares line ŷ = β̂₀ + β̂₁x is the line that has the following two properties:
1. The sum of the errors (SE) equals zero.
2. The sum of squared errors (SSE) is smaller than that for any other straight-line model.
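The assumption that E(y) equals the deterministic component can be illustrated with a short simulation. This is only a sketch: the parameter values β₀ = 2, β₁ = 0.5, the choice x = 10, and the normal error distribution are illustrative, not from the notes.

```python
import random

random.seed(42)
beta0, beta1 = 2.0, 0.5  # illustrative parameter values

def simulate_y(x):
    """One draw from the first-order model y = beta0 + beta1*x + epsilon."""
    epsilon = random.gauss(0, 1)  # random error with mean 0
    return beta0 + beta1 * x + epsilon

# Because E(epsilon) = 0, the average of many simulated y values at x = 10
# should be close to the deterministic component beta0 + beta1*10 = 7.0.
ys = [simulate_y(10) for _ in range(100_000)]
mean_y = sum(ys) / len(ys)
```

Any error distribution with mean 0 would give the same result; the normal choice here anticipates the assumptions introduced in Section 11.3.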
Formulas for the Least Squares Estimates

    Slope: β̂₁ = SSxy / SSxx
    y-intercept: β̂₀ = ȳ - β̂₁x̄

where SSxy = Σ(xᵢ - x̄)(yᵢ - ȳ) and SSxx = Σ(xᵢ - x̄)².
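The least squares formulas above can be applied directly to a small sample. A minimal sketch (the five (x, y) pairs below are made-up illustrative data, not from the notes):

```python
# Illustrative sample data (hypothetical)
x = [1, 2, 3, 4, 5]
y = [1, 1, 2, 2, 4]

n = len(x)
x_bar = sum(x) / n  # sample mean of x
y_bar = sum(y) / n  # sample mean of y

# Sums of squares from the formulas for the least squares estimates
SS_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
SS_xx = sum((xi - x_bar) ** 2 for xi in x)

beta1_hat = SS_xy / SS_xx               # slope
beta0_hat = y_bar - beta1_hat * x_bar   # y-intercept

# Property 1 of Definition 11.1: the sum of the errors equals zero
errors = [yi - (beta0_hat + beta1_hat * xi) for xi, yi in zip(x, y)]
```

For these data, SS_xy = 7 and SS_xx = 10, giving slope 0.7 and intercept -0.1, and the residuals sum to zero (up to floating-point rounding) as Definition 11.1 requires.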