Probability and Statistics for Engineering and the Sciences (with CD-ROM and InfoTrac)


Stat 312: Lecture 21
Linear Regression II
Moo K. Chung, [email protected]
Dec 7, 2004

Concepts

1. Maximum likelihood estimation. Given the linear model

   Yj = β0 + β1 xj + εj,   εj ~ N(0, σ²),

we estimate β0 and β1 by maximum likelihood. Note that Yj ~ N(β0 + β1 xj, σ²), so the density of Yj is

   f(yj) = 1/(√(2π) σ) · exp[ −(yj − β0 − β1 xj)² / (2σ²) ].

The log-likelihood is

   L(β0, β1) = const − (1/(2σ²)) Σ_{j=1}^n (yj − β0 − β1 xj)².

Maximizing the log-likelihood is therefore equivalent to minimizing the sum of squared residuals Σ_{j=1}^n (yj − β0 − β1 xj)². Hence the MLE coincides with the LSE in linear regression.

2. The least squares estimate of β1 is

   β̂1 = Sxy / Sxx,

where Sxy = Σ_{j=1}^n (xj − x̄)(yj − ȳ) and Sxx = Σ_{j=1}^n (xj − x̄)². It can be shown that

   β̂1 = Sxy / Sxx = Σ_{j=1}^n cj Yj,   where cj = (xj − x̄)/Sxx,

so the LSE is a linear estimator. From Σ cj = 0, Σ cj xj = 1, and Σ cj² = 1/Sxx, we can show that E β̂1 = β1, i.e. β̂1 is unbiased, and that V β̂1 = σ²/Sxx. Since β̂1 is a linear combination of normal random variables, β̂1 ~ N(β1, σ²/Sxx).

3. We test the hypotheses H0: β1 = 0 vs. H1: β1 ≠ 0. Inference on the slope parameter β1 is based on the test statistic

   T = (β̂1 − β1) / S_β̂1 ~ t_{n−2},

where S_β̂1 = σ̂ / √Sxx and σ̂ = √( SSE/(n−2) ). Since SSE = Syy − Sxy²/Sxx, we get

   S_β̂1 = (1/√(n−2)) · √( Syy/Sxx − (Sxy/Sxx)² ).

We reject H0 at significance level α if |t| > t_{α/2, n−2}. We don't usually compute the test statistic by hand; use R.

Example. Continuing the example of Lecture 19:

> summary(lm(y ~ x))

Call:
lm(formula = y ~ x)

Residuals:
    Min      1Q  Median      3Q     Max
-10.908  -6.312   1.758   4.354  10.836

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)    29.48      13.23    2.22     0.06
x               0.55       0.17    3.12     0.01 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’

Residual standard error: 7.647 on 8 degrees of freedom
Multiple R-Squared: 0.5519, Adjusted R-squared: 0.4959
F-statistic: 9.854 on 1 and 8 DF, p-value: 0.01383
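The MLE = LSE equivalence in Concept 1 can be checked numerically. Below is a minimal Python sketch (Python rather than the course's R, simply so it is self-contained; the data are made up for illustration, not the Lecture 19 data). It computes the closed-form least squares estimates and confirms that the Gaussian log-likelihood, with σ² held fixed, is largest at exactly those values.

```python
import math

# Hypothetical illustrative data (NOT the Lecture 19 data).
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
Sxx = sum((xj - xbar) ** 2 for xj in x)
Sxy = sum((xj - xbar) * (yj - ybar) for xj, yj in zip(x, y))

# Closed-form least squares estimates.
b1 = Sxy / Sxx
b0 = ybar - b1 * xbar

def loglik(beta0, beta1, sigma2=1.0):
    """Gaussian log-likelihood of (beta0, beta1) with sigma^2 fixed.
    Maximizing this is equivalent to minimizing the SSE term."""
    sse = sum((yj - beta0 - beta1 * xj) ** 2 for xj, yj in zip(x, y))
    return -n / 2 * math.log(2 * math.pi * sigma2) - sse / (2 * sigma2)

# The log-likelihood at the LSE dominates nearby parameter values,
# illustrating that the MLE of (beta0, beta1) is the LSE.
for d0 in (-0.5, 0.0, 0.5):
    for d1 in (-0.2, 0.0, 0.2):
        assert loglik(b0 + d0, b1 + d1) <= loglik(b0, b1)
```

Because the log-likelihood is `const − SSE/(2σ²)`, its maximizer over (β0, β1) does not depend on σ², which is why the equivalence holds for any fixed σ².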
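The slope inference in Concepts 2 and 3 can likewise be traced step by step. The Python sketch below (again with made-up data; in practice one would just read the `summary(lm(...))` table) verifies the weight identities Σcj = 0, Σcj xj = 1, Σcj² = 1/Sxx behind unbiasedness, and checks that the two expressions for the standard error S_β̂1 agree.

```python
import math

# Hypothetical illustrative data (NOT the Lecture 19 data).
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [3.2, 4.1, 4.9, 6.3, 6.8, 8.1, 8.7, 10.2]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
Sxx = sum((xj - xbar) ** 2 for xj in x)
Syy = sum((yj - ybar) ** 2 for yj in y)
Sxy = sum((xj - xbar) * (yj - ybar) for xj, yj in zip(x, y))

# Slope estimate written as a linear combination of the responses.
c = [(xj - xbar) / Sxx for xj in x]
b1 = sum(cj * yj for cj, yj in zip(c, y))
assert abs(b1 - Sxy / Sxx) < 1e-9

# Weight identities used to prove E(b1) = beta1 and V(b1) = sigma^2/Sxx.
assert abs(sum(c)) < 1e-9
assert abs(sum(cj * xj for cj, xj in zip(c, x)) - 1.0) < 1e-9
assert abs(sum(cj ** 2 for cj in c) - 1.0 / Sxx) < 1e-9

# Standard error and t statistic for H0: beta1 = 0.
sse = Syy - Sxy ** 2 / Sxx
sigma_hat = math.sqrt(sse / (n - 2))        # residual standard error
se_b1 = sigma_hat / math.sqrt(Sxx)
t = b1 / se_b1

# Equivalent closed form from the lecture for the standard error.
se_alt = math.sqrt(Syy / Sxx - (Sxy / Sxx) ** 2) / math.sqrt(n - 2)
assert abs(se_b1 - se_alt) < 1e-9
```

The resulting `t` would then be compared against t_{α/2, n−2}; the p-value itself needs a t-distribution CDF (e.g. R's `pt`), which is why the lecture defers the computation to R.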
