5-classification-DA


More than two groups

For more than two groups, first consider rearranging equation (2):

$$\bar{X}_1' S_{pooled}^{-1} X_0 - \tfrac{1}{2}\bar{X}_1' S_{pooled}^{-1} \bar{X}_1 \;\ge\; \bar{X}_2' S_{pooled}^{-1} X_0 - \tfrac{1}{2}\bar{X}_2' S_{pooled}^{-1} \bar{X}_2$$

We see that in general the rule is to allocate $X_0$ to the group which has the largest

$$c_k = \bar{X}_k' S_{pooled}^{-1} X_0 - \tfrac{1}{2}\bar{X}_k' S_{pooled}^{-1} \bar{X}_k, \qquad k = 1, \dots, g$$

These are called the linear discriminant functions. Here $X_0$ is the new observation to be predicted, $\bar{X}_k$ is the sample mean for group $k$, and $S_{pooled}^{-1}$ is the inverse of the pooled variance. The R function lda calculates this quantity, the linear discriminant, for each group $k$.

Discriminant rule: classify $X_0$ to the group with the largest $c_k$.

Incorporating prior probabilities

If the prior probability for group $k$ is $p_k$, then the discriminant functions become

$$\bar{X}_k' S_{pooled}^{-1} X_0 - \tfrac{1}{2}\bar{X}_k' S_{pooled}^{-1} \bar{X}_k + \log p_k, \qquad k = 1, \dots, g$$

where log is the natural log. This shifts the boundary away from the group with the larger prior probability. We might use the sample sizes in the olive oils, $n_{Sard} = 98$ and $n_{Nth} = 151$, to assign $p_{Sard} = 98/249 = 0.39$ and $p_{Nth} = 151/249 = 0.61$. The result is to change the constant term by $\log(151/98)$. The training error is slightly better, 7/249 = 0.028.

The R code used to compute the LDA classification rule is:

    library(MASS)
    example1.lda <- lda(d.olive[d.olive[,1] != 1, c(6, 7)],
                        d.olive[d.olive[,1] != 1, 1], prior = c(0.5, 0.5))
    table(d.olive[d.olive[,1] != 1, 1],
          predict(example1.lda, d.olive[d.olive[,1] != 1, c(6, 7)], dimen = 1)$class)

[Figure: classification boundaries for linear regression, k nearest neighbors and LDA.] The error rates of these methods are: linear regression 7/249 = 0.028, nearest neighbors 3/24...
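The error counts quoted above (out of the 249 Sardinian and Northern oils) are apparent, or training, error rates: the proportion of off-diagonal entries in the confusion table of true versus predicted group. A minimal sketch of that calculation, reusing the table() call from the notes and assuming the example1.lda fit above; the object name tab is illustrative, not from the course code.

    ## Apparent (training) error: off-diagonal proportion of the confusion table
    ## (rows = true group, columns = predicted group).
    tab <- table(d.olive[d.olive[,1] != 1, 1],
                 predict(example1.lda, d.olive[d.olive[,1] != 1, c(6, 7)], dimen = 1)$class)
    1 - sum(diag(tab)) / sum(tab)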
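To use the sample-size priors discussed above instead of equal priors, the prior argument of lda can be set to the group proportions. This is only a sketch, not course code: the object names example2.lda and grp are assumptions, and computing the proportions from table(grp) keeps them in the same order as the factor levels of the grouping variable. (Note that MASS::lda uses the training-set class proportions as its default prior, so omitting the prior argument gives the same weights.)

    library(MASS)
    ## Priors proportional to group size: p_Sard = 98/249 = 0.39, p_Nth = 151/249 = 0.61
    grp <- d.olive[d.olive[,1] != 1, 1]
    example2.lda <- lda(d.olive[d.olive[,1] != 1, c(6, 7)], grp,
                        prior = as.vector(table(grp) / length(grp)))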
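Finally, to make the linear discriminant functions concrete, here is a minimal sketch (not from the original notes) that computes the scores c_k = Xbar_k' S_pooled^{-1} X_0 - (1/2) Xbar_k' S_pooled^{-1} Xbar_k + log p_k directly from the group means and the pooled covariance. The function name lda_scores and its arguments are illustrative assumptions; with the same priors, the group with the largest score should match the class chosen by predict() on an lda fit.

    ## Minimal sketch: linear discriminant functions computed "by hand".
    ## X:   n x p numeric matrix of predictors
    ## grp: group labels of length n
    ## x0:  numeric vector of length p, the new observation to classify
    lda_scores <- function(X, grp, x0, prior = NULL) {
      grp <- as.factor(grp)
      g   <- levels(grp)
      n   <- table(grp)
      if (is.null(prior)) prior <- rep(1 / length(g), length(g))  # equal priors by default

      ## Group sample means, one row per group
      means <- t(sapply(g, function(k) colMeans(X[grp == k, , drop = FALSE])))

      ## Pooled covariance: weighted average of the within-group covariances
      Sp <- Reduce(`+`, lapply(g, function(k) {
        (n[k] - 1) * cov(X[grp == k, , drop = FALSE])
      })) / (nrow(X) - length(g))
      Sp_inv <- solve(Sp)

      ## c_k = xbar_k' Sp^{-1} x0 - (1/2) xbar_k' Sp^{-1} xbar_k + log p_k
      scores <- sapply(seq_along(g), function(k) {
        m <- means[k, ]
        drop(m %*% Sp_inv %*% x0 - 0.5 * m %*% Sp_inv %*% m) + log(prior[k])
      })
      names(scores) <- g
      scores
    }

    ## Usage: classify x0 to the group with the largest discriminant value,
    ## e.g. names(which.max(lda_scores(X, grp, x0)))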

