
Example of Linear Discrimination

Fit a linear discriminant analysis to the diabetes data, with columns 1-5 as the predictors and column 6 as the group label:

> diab.ld = lda(diab[,1:5], grouping = diab[,6])
> names(diab.ld)
[1] "prior"   "counts"  "means"   "scaling" "lev"     "svd"
[7] "N"       "call"

The "scaling" component holds the discriminant coefficients (output abbreviated):

                   LD1            LD2
y          1.240248e-05  -0.0059924778
insulin   -3.895587e-03   0.0005754322
...

[Figure: the observations projected onto the first two discriminants (LD1 on the x-axis, LD2 on the y-axis), each point plotted as its class label 1-3; the three groups form largely separate clusters.]

Compare the predicted classes with the true ones (resubstitution, i.e. on the training data itself):

> table(predict(diab.ld, diab[,1:5])$class, diab[,6])

       1   2   3
  1   26   0   0
  2    5  31   3
  3    1   5  73

Cross-validation

The resubstitution table above is computed on the same data the model was fitted to, so it is optimistically biased. To get an estimate of the misclassification rate that is not biased, we use cross-validation. For LDA we usually use leave-one-out (n-fold) cross-validation: partition the data as

  X = X_1 ∪ X_2 ∪ X_3 ∪ ... ∪ X_n,

fit on all parts but one, predict the held-out part, and pool the errors over the n folds.

A small helper for confusion matrices, and a k-nearest-neighbour example on the iris data:

conf <- function(class.predict, class) {
  confusion = table(class.predict, class)
  return(confusion)
}

library(class)
train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
test  <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))

knn(train, test, cl, k = 3, prob = TRUE)

# leave-one-out cross-validated kNN on the training set, for several k
iris.knncv2 = knn.cv(train, cl, k = 2, prob = TRUE)
iris.knncv4 = knn.cv(train, cl, k = 4, prob = TRUE)
iris.knncv8 = knn.cv(train, cl, k = 8, prob = TRUE)
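To compare the values of k, apply the conf helper to each set of cross-validated predictions and turn the confusion matrix into an error rate. A minimal sketch; err.rate is a hypothetical helper, not part of the original notes:

# hypothetical helper: misclassification rate from a confusion matrix
err.rate <- function(class.predict, class) {
  confusion <- conf(class.predict, class)
  1 - sum(diag(confusion)) / sum(confusion)   # 1 - accuracy
}

err.rate(iris.knncv2, cl)   # LOO-CV error rate for k = 2
err.rate(iris.knncv4, cl)   # LOO-CV error rate for k = 4
err.rate(iris.knncv8, cl)   # LOO-CV error rate for k = 8

The k with the smallest cross-validated error rate is the one to carry forward to the test set.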
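The same leave-one-out idea is built into lda() itself: calling it with CV = TRUE classifies each observation using a model fitted to the other n - 1 observations, and the LOO-CV predictions come back in the $class component. A sketch, assuming the diab data frame from the LDA example above is loaded:

library(MASS)
# CV = TRUE: each observation is classified by an LDA fit to the remaining n-1
diab.ldcv <- lda(diab[,1:5], grouping = diab[,6], CV = TRUE)
conf(diab.ldcv$class, diab[,6])   # cross-validated confusion matrix

Comparing this table with the resubstitution table above shows how optimistic the resubstitution error is.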
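Finally, the discriminant plot shown earlier can be reproduced by projecting the observations onto the discriminants; predict() on an lda fit returns the scores in its $x component. A minimal sketch:

# project the data onto the discriminants; $x holds the LD1/LD2 scores
ld <- predict(diab.ld, diab[,1:5])$x
plot(ld[,1], ld[,2], type = "n", xlab = "LD1", ylab = "LD2")
text(ld[,1], ld[,2], labels = diab[,6])   # draw each point as its class label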