# Notes 1: QM 670 Decision Theory (Dr. Doug Barrett)


## 1.0 Decision Theory: Introduction

Decision making involves:
1) The future: uncertainty
2) Tradeoffs

Decision making is partly non-technical: we want a better, or the best, choice, but we will not necessarily find the best choice. For example, in a linear program we are unlikely to know all the coefficients exactly.

Forecasting is predicting the future. Techniques:
1) Regression
2) Forecasting (time-series) methods

Regression involves two quantitative (measurable) variables. We want to assess the relationship between them (correlation) and use one variable to predict the other:
i. Y = response (dependent) variable
ii. X = explanatory (independent, predictor) variable

Simple linear regression uses the "method of least squares."

Steps in an analysis:
1) Identify the problem
2) Determine applicable variable(s)
3) Collect data
4) Graph the data
5) Numerical analysis (most appropriate technique)
6) Interpret output
7) Conclusion/decision
8) Communicate!

A scatter plot (scatter diagram) is an "X vs. Y plot."

[Figure: scatter plot showing a positive linear relationship]
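As a minimal sketch of steps 3–5 above (collect data, graph, numerical analysis), the code below quantifies the relationship between an x and a y with the correlation coefficient. The data values and the helper name `correlation` are illustrative assumptions, not from the notes.

```python
# Sketch of step 5 (numerical analysis): measure the linear relationship
# between an explanatory variable x and a response y.
# The data below are made-up illustration values, not from the notes.

def correlation(x, y):
    """Pearson correlation coefficient r between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / (sxx * syy) ** 0.5

x = [1, 2, 3, 4, 5]            # explanatory (independent, predictor) variable
y = [2.1, 3.9, 6.2, 8.1, 9.8]  # response (dependent) variable
r = correlation(x, y)          # near +1: a strong positive linear relationship
```

A value of r near +1 or -1 suggests a strong linear association, which is exactly what the scatter plot is meant to reveal visually first.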

[Figures: scatter plots showing a positive nonlinear relationship and no relationship]
Interpreting scatter plots:
1) Nature of the relationship (positive, negative, or neither)
2) Function of the relationship (linear, nonlinear)
3) Unusual observations (outliers, high-leverage cases)

The ideal case: a positive or negative, linear relationship.

[Figure: scatter plot showing a positive relationship with nonconstant variance (heteroscedasticity)]

a) Outlier: an unusual (x, y) combination
b) High-leverage case: an unusually large (or small) x-value
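The high-leverage idea in (b) can be quantified. Below is a minimal sketch assuming the standard simple-regression leverage formula h_i = 1/n + (x_i - x̄)²/Sxx and a common rule-of-thumb cutoff of 2p/n with p = 2; neither the formula nor the cutoff appears in the notes, and the data are hypothetical.

```python
# Sketch: flag high-leverage cases (unusually large or small x-values)
# using the standard simple-regression leverage h_i = 1/n + (x_i - xbar)^2 / Sxx.
# Hypothetical data: the last x-value is deliberately far from the rest.

def leverages(x):
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return [1 / n + (xi - xbar) ** 2 / sxx for xi in x]

x = [1, 2, 3, 4, 20]        # 20 is an unusually large x-value
h = leverages(x)
cutoff = 2 * 2 / len(x)     # rule of thumb: flag h_i > 2p/n, with p = 2 parameters
flags = [hi > cutoff for hi in h]   # only the x = 20 case is flagged
```

The leverages always sum to p (here 2), so a point with leverage near 1 is dominating the fit on its own.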

## 2.0 Regression

Regression: fitting a line to the data. Excel gives us the linear equation (the equation of a line):
- the line that "best fits the data"
- ŷ = predicted y-value

Prediction equation: ŷ = a + bx, where
- b = slope = rise/run = (# of units y increases)/(# of units x increases) = the number of units y increases as x increases by one unit
- a = y-intercept = the predicted y-value when x = 0

For each observation, the residual is e_i = y_i - ŷ_i. The smaller the error (in magnitude), the better; ideally, errors are close to 0.

[Figure: fitted line with the residual e_i shown as the vertical distance from the observed point (x_i, y_i) to the predicted value ŷ_i]

What is a "good fit"?
- Small e_i values
- Both positive and negative e_i values (positive residuals must be offset by negative residuals)
- The e_i values sum to zero

Criterion: we select the line that minimizes the sum of squared residuals.
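The least-squares criterion can be sketched in a few lines: the closed-form slope b = Sxy/Sxx and intercept a = ȳ - b·x̄ minimize the sum of squared residuals, and the resulting residuals sum to zero, as claimed above. The data here are hypothetical, and the function name `ols_fit` is my own label.

```python
# Minimal least-squares (OLS) fit of yhat = a + b*x, using the closed-form
# slope b = Sxy/Sxx and intercept a = ybar - b*xbar. Illustrative data only.

def ols_fit(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sxy / sxx        # slope: predicted change in y per one-unit increase in x
    a = ybar - b * xbar  # intercept: predicted y when x = 0
    return a, b

x = [0, 1, 2, 3, 4]
y = [1.0, 3.1, 4.9, 7.2, 8.8]
a, b = ols_fit(x, y)
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
# As the notes say, least-squares residuals sum to zero (up to rounding).
```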
This criterion gives least-squares regression, also called "ordinary least squares" (OLS).

## 2.1 Simple Linear Regression

Simple linear regression: one x, one y.

Excel output for regression:
1. Regression Statistics: measures the strength of the linear relationship (association)
2. ANOVA: hypothesis test for the slope
3. Coefficients: slope and intercept values, plus tests for the slope and intercept

Example (sales in thousands): the output gives ŷ = 72.96 + 1.47x, where x counts periods and we usually call the first period (1992) '0'.

For 2002, ŷ = 72.96 + 1.47(10) = 87.66, so predicted sales are $87,660.

Interpreting a and b:
- a = 72.96: the predicted sales for 1992 are $72,960.
- b = 1.47: for each year that passes, the predicted increase in sales is $1,470.

Predicting for other periods: predicting outside the range of the x's is extrapolation.

Regression Statistics: Multiple R and R².
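The sales example can be checked directly against the prediction equation from the Excel output; the function name `predict_sales` is my own label, but the coefficients and the coding of 1992 as period 0 come from the notes.

```python
# Applying the prediction equation from the sales example:
# yhat = 72.96 + 1.47*x, with x = years since 1992 (first period coded 0)
# and y measured in thousands of dollars.

def predict_sales(year):
    x = year - 1992            # first period (1992) is coded as x = 0
    return 72.96 + 1.47 * x    # predicted sales, in thousands of dollars

y_2002 = predict_sales(2002)   # 72.96 + 1.47*10 = 87.66, i.e. $87,660
```

Note that a year like 2020 would be extrapolation: x = 28 lies far outside the range of periods used to fit the line, so the prediction is not trustworthy.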

Goodness of fit is measured by R and R²:
- R = correlation coefficient: measures the strength of the linear association between y and x.
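A sketch of how R and R² connect: in simple linear regression, R² equals the squared correlation coefficient, and R² can also be computed as 1 - SSE/SST (residual sum of squares over total sum of squares). These standard definitions and the data below are not from the notes themselves.

```python
# Goodness-of-fit sketch: in simple linear regression, R^2 = r^2.
# R^2 is computed here as 1 - SSE/SST; data are hypothetical.

def correlation(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    syy = sum((yi - ybar) ** 2 for yi in y)
    return sxy / (sxx * syy) ** 0.5

def r_squared(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sxy / sxx
    a = ybar - b * xbar
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))  # residual SS
    sst = sum((yi - ybar) ** 2 for yi in y)                      # total SS
    return 1 - sse / sst

x = [1, 2, 3, 4, 5]
y = [2.0, 4.1, 5.9, 8.2, 9.8]
# For simple regression, r_squared(x, y) equals correlation(x, y) ** 2.
```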

## This note was uploaded on 12/02/2011 for the course QM 670, taught by Professor Dr. Keeney during the Fall '11 term at Jefferson College.
