# Regression


## Introduction

- "Data version" of best linear prediction.
- Very widely used.

### Available data

- $Y_i$ = value of the response variable for the $i$th observation
- $X_{i1},\dots,X_{ip}$ = values of predictor variables 1 through $p$ for the $i$th observation

### Goals

- to understand how $Y$ is related to $X_1,\dots,X_p$
- to model the conditional expectation of $Y$ given $X_1,\dots,X_p$
- to predict future $Y$ values from $X_1,\dots,X_p$

## Straight Line Regression

- Only one predictor variable.
- The model is
  $$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i,$$
  where
  - $\beta_0$ and $\beta_1$ are the unknown intercept and slope of the line,
  - $\epsilon_1,\dots,\epsilon_n$ are iid with mean 0 and constant variance $\sigma^2$,
  - often the $\epsilon_i$'s are assumed to be normally distributed.

## Least-squares estimation

- The least-squares estimate finds $\hat\beta_0$ and $\hat\beta_1$ to minimize
  $$\sum_{i=1}^{n} \left\{ Y_i - (\hat\beta_0 + \hat\beta_1 X_i) \right\}^2 .$$
- Using calculus, one can show that
  $$\hat\beta_1 = \frac{\sum_{i=1}^{n} (Y_i - \bar{Y})(X_i - \bar{X})}{\sum_{i=1}^{n} (X_i - \bar{X})^2}
  \quad\text{and}\quad
  \hat\beta_0 = \bar{Y} - \hat\beta_1 \bar{X}.$$
- The least-squares line is
  $$\hat{Y} = \hat\beta_0 + \hat\beta_1 X
  = \bar{Y} + \hat\beta_1 (X - \bar{X})
  = \bar{Y} + \frac{s_{xy}}{s_x^2}\,(X - \bar{X}),$$
  where
  $$s_{xy} = (n-1)^{-1} \sum_{i=1}^{n} (Y_i - \bar{Y})(X_i - \bar{X})$$
  and $s_x^2$ is the sample variance of the $X_i$'s, that is,
  $$s_x^2 = (n-1)^{-1} \sum_{i=1}^{n} (X_i - \bar{X})^2 .$$

**Exercise:** Show that if $\epsilon_1,\dots,\epsilon_n$ are iid $N(0,\sigma^2)$, then the least-squares estimates of $\beta_0$ and $\beta_1$ are also the maximum likelihood estimates.

**Example:** Some data on weekly interest rates, from Jan 1, 1970 to Dec 31, 1993, were obtained from the Federal Reserve Bank of Chicago. The URL is:
http://www.chicagofed.org/economicresearchanddata/data/index.cfm

[Scatterplot: change in "CM10 = 10-YEAR TREASURY CONSTANT MATURITY RATE (AVERAGE, NSA)" plotted against change in "AAA = MOODYS SEASONED CORPORATE AAA BOND YIELDS".]

Fitted line plot from MINITAB.
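The closed-form formulas above are easy to check numerically. The following is a minimal Python sketch using synthetic data (not the interest-rate series from the example); the true intercept and slope chosen here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the closed-form least-squares formulas above.
# Synthetic data: true intercept 1.0, true slope 0.6, iid N(0, 0.5^2) errors.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.6 * x + rng.normal(scale=0.5, size=n)

# Sample covariance s_xy and sample variance s_x^2, as defined in the slides
s_xy = np.sum((y - y.mean()) * (x - x.mean())) / (n - 1)
s2_x = np.sum((x - x.mean()) ** 2) / (n - 1)

b1 = s_xy / s2_x               # slope estimate: beta1-hat = s_xy / s_x^2
b0 = y.mean() - b1 * x.mean()  # intercept estimate: beta0-hat = Ybar - beta1-hat * Xbar

# The same line via numpy's least-squares polynomial fit
slope_np, intercept_np = np.polyfit(x, y, 1)
assert np.allclose([b1, b0], [slope_np, intercept_np])
```

With enough data the estimates land close to the true intercept and slope, and they agree exactly with any other least-squares fit of the same line, since both minimize the same sum of squares.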
## MINITAB output

Output from the fitted line plot in MINITAB:

    Regression Analysis: aaa_diff versus cm10_diff

    The regression equation is
    aaa_diff = 0.0002266 + 0.572371 cm10_diff

    S = 0.0633340   R-Sq = 68.5 %   R-Sq(adj) = 68.5 %

    Analysis of Variance
    Source        DF        SS        MS        F      P
    Regression     1   10.8950   10.8950  2716.14  0.000
    Error       1249    5.0100    0.0040
    Total       1250   15.9049

    Fitted Line Plot: aaa_diff versus cm10_dif

Here is the same analysis using "regression" in MINITAB. The first output is the estimated regression line:

    The regression equation is
    aaa_diff = 0.00023 + 0.572 cm10_diff

    1251 cases used, 1 case contains missing values

Next come the estimates, standard errors, t-statistics, and p-values:

    Predictor       Coef    SE Coef      T      P
    Constant    0.000227   0.001791   0.13  0.899
    cm10_dif     0.57237    0.01098  52.12  0.000

The next output is $S$ (the estimate of $\sigma$), $R^2$, and adjusted $R^2$:

    S = 0.06333   R-Sq = 68.5%   R-Sq(adj) = 68.5%

Finally, the analysis of variance table is printed.
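The summary statistics MINITAB prints can all be reproduced from the sums of squares. The sketch below uses synthetic data rather than the interest-rate series, so the numbers will not match the printout above; it also checks the identity that, in simple regression, the ANOVA F statistic equals the squared t statistic for the slope.

```python
import numpy as np

# Sketch of the summary statistics MINITAB reports (S, R-Sq, ANOVA table),
# computed from scratch on synthetic data; values are illustrative only.
rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
y = 0.2 + 0.57 * x + rng.normal(scale=0.4, size=n)

sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((y - y.mean()) * (x - x.mean())) / sxx
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)

ss_total = np.sum((y - y.mean()) ** 2)  # Total SS,      df = n - 1
ss_error = np.sum(resid ** 2)           # Error SS,      df = n - 2
ss_reg = ss_total - ss_error            # Regression SS, df = 1

s = np.sqrt(ss_error / (n - 2))         # S: the estimate of sigma
r_sq = ss_reg / ss_total                # R-Sq
f_stat = ss_reg / (ss_error / (n - 2))  # ANOVA F statistic

# t statistic for the slope; in simple regression, F = t^2
t_stat = b1 / (s / np.sqrt(sxx))
assert np.isclose(f_stat, t_stat ** 2)
```

The decomposition `ss_total = ss_reg + ss_error` is exactly the Analysis of Variance table: MINITAB's MS column is each SS divided by its degrees of freedom, and F is the regression MS over the error MS.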
*This note was uploaded on 04/06/2008 for the course ORIE 473, taught by Professor Anderson during the Spring '07 term at Cornell.*
