
5-classification-DA

Remaining variables are samples from a single normal population.

[Slide figures: predicted training and test error versus number of variables. "When does it blow up?"]

Statistics 503, Spring 2013, ISU

The figure shows the classification boundaries for linear regression, nearest neighbors and LDA. The training errors of these methods are: linear regression 7/249 = 0.028, nearest neighbors 3/24[...].

The R code used to compute the LDA classification rule is:

    # object name is partly garbled in the source; "example1" is a guess
    example1.lda <- lda(d.olive[d.olive[,1]!=1, c(6,7)],
                        d.olive[d.olive[,1]!=1, 1], prior=c(0.5,0.5))
    table(d.olive[d.olive[,1]!=1, 1],
          predict(example1.lda, d.olive[d.olive[,1]!=1, c(6,7)], dimen=1)$class)

More than two groups

For two groups, first consider rearranging equation 2:

    \bar{X}_1' S_{pooled}^{-1} X_0 - \frac{1}{2} \bar{X}_1' S_{pooled}^{-1} \bar{X}_1 \ge \bar{X}_2' S_{pooled}^{-1} X_0 - \frac{1}{2} \bar{X}_2' S_{pooled}^{-1} \bar{X}_2

We see that in general the rule is to allocate X_0 to the group which has the largest

    \bar{X}_k' S_{pooled}^{-1} X_0 - \frac{1}{2} \bar{X}_k' S_{pooled}^{-1} \bar{X}_k,    k = 1, ..., g

These are called the linear discriminant functions; lda calculates this quantity, the linear discriminant c_k, for group k.

Incorporating prior probabilities

If the prior probability for group k is p_k, then the discriminant functions become

    \bar{X}_k' S_{pooled}^{-1} X_0 - \frac{1}{2} \bar{X}_k' S_{pooled}^{-1} \bar{X}_k + \log p_k,    k = 1, ..., g

where log is the natural log. This shifts the boundary away from the group with the larger prior. We might use the sample sizes in the olive oils, n_Sard = 98, n_Nth = 151, to assign p_Sard = 98/249 = 0.39, p_Nth = 151/249 = 0.61.
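The allocation rule above can be sketched in base R on simulated data. This is a minimal illustration, not the course's computation: `d.olive` is not used, and the group means, seed, and point chosen for classification are made up.

```r
# Sketch of the pooled-covariance LDA allocation rule on simulated data.
set.seed(503)
n1 <- 98; n2 <- 151                      # group sizes as in the olive oils
x1 <- cbind(rnorm(n1, 0), rnorm(n1, 0))  # hypothetical group 1, mean (0, 0)
x2 <- cbind(rnorm(n2, 2), rnorm(n2, 2))  # hypothetical group 2, mean (2, 2)
xbar1 <- colMeans(x1); xbar2 <- colMeans(x2)

# Pooled covariance: S_pooled = ((n1-1) S1 + (n2-1) S2) / (n1 + n2 - 2)
Spool <- ((n1 - 1) * cov(x1) + (n2 - 1) * cov(x2)) / (n1 + n2 - 2)
Sinv  <- solve(Spool)

# Linear discriminant c_k(x0) = xbar_k' Sinv x0 - (1/2) xbar_k' Sinv xbar_k + log p_k
ck <- function(x0, xbar, p)
  drop(xbar %*% Sinv %*% x0 - 0.5 * xbar %*% Sinv %*% xbar + log(p))

# Allocate x0 to the group with the largest discriminant value
classify <- function(x0, p1 = 0.5, p2 = 0.5)
  ifelse(ck(x0, xbar1, p1) >= ck(x0, xbar2, p2), 1, 2)

classify(c(0, 0))   # a point near group 1's mean is allocated to group 1
```

With equal priors this reduces to the rearranged two-group inequality above; the `log(p)` terms only matter when the priors differ.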
The result is to change the constant (…04 + log(151/98)). The training error is slightly better, 7/249 = 0.028.

Discriminant rule: classify X_0 into the group with the largest c_k.
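The constant shift can be seen in one dimension. The sketch below uses the olive-oil priors but made-up means and variance (`m1`, `m2`, `s2` are illustrative, not estimates from the data): the cutoff with unequal priors moves by a multiple of log(p2/p1), toward the smaller-prior group.

```r
# Priors from the olive-oil sample sizes
p1 <- 98/249; p2 <- 151/249
shift <- log(p2 / p1)          # = log(151/98), approx 0.432

# 1-D two-group rule, allocating x0 to group 1 when
#   (m1 - m2)/s2 * x0 >= (m1^2 - m2^2)/(2*s2) + log(p2/p1).
# With m1 < m2, dividing by the (negative) coefficient flips the inequality:
#   x0 <= (m1 + m2)/2 + s2/(m1 - m2) * log(p2/p1)
m1 <- 0; m2 <- 2; s2 <- 1                       # assumed values
cut_equal  <- (m1 + m2) / 2                      # midpoint, equal priors
cut_priors <- cut_equal + s2 / (m1 - m2) * shift # boundary moves toward group 1
```

Here the boundary moves below the midpoint, i.e. away from group 2, the group with the larger prior, matching the statement above.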