Lecture_Statistics_Spring_2013b

# VI. Linear Regression: Variance of the Regression, s²


To find the best-fit line we need to minimize the function S, the sum of the squared deviations between the N observed y values, yᵢ, and the best-fit line values, ŷᵢ:

$$S = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{N} \left( y_i - \left( \beta_0 + \sum_{j=1}^{k} \beta_j x_{ij} \right) \right)^2$$

For simple linear regression, ŷ = mx + b, which is the case k = 1 with b = β₀ and m = β₁, so

$$S = \sum_{i=1}^{N} (y_i - b - m x_i)^2$$

Note: the xᵢ and yᵢ are simply the observed data values.

Minimizing S requires partial differentiation with respect to b (the intercept) and then with respect to m (the slope), setting each derivative to zero:

$$\left( \frac{\partial S}{\partial b} \right)_m = \frac{\partial}{\partial b} \sum_{i=1}^{N} (y_i - b - m x_i)^2 = \sum_{i=1}^{N} 2(y_i - b - m x_i)(-1) = 0$$

$$\left( \frac{\partial S}{\partial m} \right)_b = \frac{\partial}{\partial m} \sum_{i=1}^{N} (y_i - b - m x_i)^2 = \sum_{i=1}^{N} 2(y_i - b - m x_i)(-x_i) = 0$$

Rearranging these two conditions gives the normal equations:

$$N b + \left( \sum_{i=1}^{N} x_i \right) m = \sum_{i=1}^{N} y_i$$

$$\left( \sum_{i=1}^{N} x_i \right) b + \left( \sum_{i=1}^{N} x_i^2 \right) m = \sum_{i=1}^{N} x_i y_i$$
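As a minimal sketch (not part of the lecture), the two normal equations above can be solved directly for b and m with Cramer's rule. The function name and the sample data points are made up for illustration.

```python
def fit_line(xs, ys):
    """Solve the normal equations for simple linear regression:
         N*b       + (sum x)*m   = sum y
         (sum x)*b + (sum x^2)*m = sum x*y
    Returns (m, b), the slope and intercept of the least-squares line.
    """
    N = len(xs)
    sx = sum(xs)                               # sum of x_i
    sy = sum(ys)                               # sum of y_i
    sxx = sum(x * x for x in xs)               # sum of x_i^2
    sxy = sum(x * y for x, y in zip(xs, ys))   # sum of x_i * y_i
    # Cramer's rule on the 2x2 linear system
    det = N * sxx - sx * sx
    m = (N * sxy - sx * sy) / det
    b = (sy * sxx - sx * sxy) / det
    return m, b

# Example data lying exactly on y = 2x + 1, so the fit recovers m = 2, b = 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
m, b = fit_line(xs, ys)
```

For noisy data the same equations give the line that minimizes S; only the residuals change, not the algebra.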

## This note was uploaded on 01/26/2014 for the course CHEM 3625 taught by Professor Mrjohnson during the Spring '08 term at Virginia Tech.
