# Chapter 13: Simple Linear Regression


In regression we are trying to see how one variable relates to, or is associated with, another variable.

General terms:

1. Dependent variable – the variable that is being predicted or determined by another variable.
2. Independent variable – the variable(s) used to predict the value of the dependent variable.
3. Simple linear regression – when we have only 1 dependent variable and 1 independent variable. The relationship is approximated by a line.
4. Multiple regression – when two or more independent variables are used to estimate 1 dependent variable.

## A. Simple Linear Regression Model / Line

1. Simple linear regression model – tells us how y (the dependent variable) is related to x (the independent variable) with an error term. Mathematically:

   y = β₀ + β₁x + ε

   - β₀ = a constant term; it is the y-intercept.
   - β₁ = the slope coefficient; it tells us how much y changes for each unit change in x.
   - ε = the error term. Since our model is only an estimate, when we approximate the model with real data this term captures how much our estimate differs from the real-world data. It contains all the variability in y that cannot be explained by x.

2. Simple linear regression line – tells us how y relates to x. It is sometimes called "the line of best fit," because we find the line that best approximates the relationship between x and y for our data. Taking the expected value gives:

   E(y) = β₀ + β₁x

   also known as the line of means. Note: on average we expect E(ε) = 0.

(Graph 1, omitted here, showed a positive relationship: E(y) plotted against x, with intercept β₀ and slope β₁.)
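The model above can be illustrated with a short simulation. This is a minimal sketch using hypothetical parameter values β₀ = 5 and β₁ = 2 (chosen for illustration, not taken from the text): because E(ε) = 0, the average of many simulated y values at a fixed x approaches the line of means E(y) = β₀ + β₁x.

```python
import random

# Hypothetical parameters for illustration (not from the notes):
beta0 = 5.0   # y-intercept (B0)
beta1 = 2.0   # slope (B1): y changes by 2 for each unit change in x

random.seed(42)  # make the simulation repeatable

def simulate_y(x):
    """One draw from the model y = B0 + B1*x + e, with e ~ Normal(0, 1)."""
    epsilon = random.gauss(0, 1)   # error term; E(epsilon) = 0
    return beta0 + beta1 * x + epsilon

def expected_y(x):
    """The line of means E(y) = B0 + B1*x (no error term)."""
    return beta0 + beta1 * x

# Averaging many simulated y values at x = 3 approaches E(y) = 5 + 2*3 = 11.
draws = [simulate_y(3) for _ in range(10_000)]
avg = sum(draws) / len(draws)
print(expected_y(3))   # 11.0
print(round(avg, 1))   # approximately 11.0
```

Note how the error term disappears when we take the expected value: the simulated points scatter around the line, but their average sits on it.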

3. Estimated regression equation – this is when you actually use your sample statistics and data to estimate the equation above with the line that best fits the data. Mathematically:

   (1) ŷ = b₀ + b₁x   or   (2) ŷ = a + bx

   - ŷ ("y-hat") = the estimated value of y
   - b₀ or a = the estimated value of β₀
   - b₁ or b = the estimated value of β₁

   Notes: (a) the error term goes away, since it represents everything the data cannot explain; the fitted line captures only the portion of the relationship that the data does explain. (b) Either of the above forms may be used for the line.

## B. Least Squares Method / Regression Line

- The least squares method is what is used to determine the best-fit line; it is not simply drawn in by eye.
- Essentially, the data are plotted as a scatter diagram, and the line that minimizes the differences between the actual data points and the line is the best-fit line.

1. Least squares criterion – the mathematical statement of the idea above:

   min Σ(yᵢ − ŷᵢ)²

   where yᵢ = the actual value of the dependent variable for the ith observation and ŷᵢ = the estimated value of the dependent variable for the ith observation. We find the values of b₀ and b₁ that minimize this sum of squared residuals.
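The least squares criterion has a closed-form solution for simple linear regression: b₁ = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)² and b₀ = ȳ − b₁x̄. A minimal sketch (the function name and the sample data are illustrative, not from the text):

```python
def least_squares(xs, ys):
    """Return (b0, b1) minimizing sum((y_i - yhat_i)^2) for yhat = b0 + b1*x."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x (closed form)
    b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
         sum((x - x_bar) ** 2 for x in xs)
    # Intercept: forces the fitted line through the point (x_bar, y_bar)
    b0 = y_bar - b1 * x_bar
    return b0, b1

# Points lying exactly on y = 1 + 2x recover b0 = 1, b1 = 2:
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
b0, b1 = least_squares(xs, ys)
print(b0, b1)   # 1.0 2.0
```

With noisy data the recovered b₀ and b₁ will not match the true β₀ and β₁ exactly; they are the estimates that make the sum of squared vertical distances from the points to the line as small as possible.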

*This note was uploaded on 12/04/2011 for the course ACCT 3311 taught by Professor Smith during the Spring '10 term at University of the Incarnate Word.*
