According to equation (1) above, we can conclude that b must be an eigenvector of the matrix appearing in (1). Since the constrained quantity is fixed at the constant k, the value of the objective is decided by the eigenvalue associated with b. Thus, in order to achieve the maximum, b should be the eigenvector associated with the largest eigenvalue, which means the corresponding linear combination is the first discriminant variable.
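Because the symbols of equation (1) did not survive the text extraction, the display below is only a hedged sketch of the standard canonical-discriminant argument this answer appears to follow; the within-class scatter W, the between-class scatter B, and the constraint constant k are assumed notation rather than quotations from the original:

% Assumed setup: maximize the between-class scatter of the projection b^T x
% while the within-class scatter is held at the constant k mentioned above.
\max_{b} \; b^{\top} B\, b \quad \text{subject to} \quad b^{\top} W\, b = k
% Stationarity of the Lagrangian gives a generalized eigenproblem,
B\, b = \lambda\, W\, b \quad\Longleftrightarrow\quad W^{-1} B\, b = \lambda\, b ,
% so the objective equals \lambda k at a stationary point and is maximized by
% taking b to be the eigenvector of W^{-1} B with the largest eigenvalue.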

Question 3

(a) We can conclude that as the rank of the canonical variates increases, the class centroids become less spread out. In the lower-right panel they appear almost superimposed, and the classes are most confused there.

(b) From this we can see that as the dimension increases, the training error keeps decreasing, but the test error reaches its minimum at dimension 2 and then rises again. So in this case the best test error rate is obtained at dimension 2 (see the R sketch after this excerpt).

Question 6

(a) First, simulate the raw data in R ac...
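The claim in Question 3(b), that the training error keeps falling with the number of discriminant dimensions while the test error bottoms out at dimension 2, can be checked with a short R sketch. This is only an illustration under assumptions: it assumes the vowel data used in that question are loaded as data frames vowel.train and vowel.test, each with the class label in a column named y and the predictors in the remaining columns (these object and column names are assumptions, not part of the original answer).

# Hedged sketch for Question 3(b): LDA error rate versus the number of
# discriminant dimensions used for classification.
library(MASS)

fit <- lda(y ~ ., data = vowel.train)   # assumed data frame and column names

err <- function(fit, data, k) {
  # classify using only the first k discriminant coordinates
  pred <- predict(fit, newdata = data, dimen = k)$class
  mean(pred != data$y)
}

dims <- 1:10                             # up to 10 discriminant coordinates here
train.err <- sapply(dims, function(k) err(fit, vowel.train, k))
test.err  <- sapply(dims, function(k) err(fit, vowel.test,  k))

# Expectation from part (b): train.err decreases with dimension, while
# test.err reaches its minimum around dimension 2 and then increases.
rbind(dims, train.err, test.err)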