
> slope2 = s$scaling[2]/s$scaling[1]
> intercept2 = mean(X[,2]) - slope2*mean(X[,1])
> abline(a=intercept2, b=slope2, col="red")

Plot the FLDA direction, again through the mean.

> legend(-2, 7, legend=c("PCA", "FLDA"), col=c("black", "red"), lty=1)

Labeling the lines directly on the graph makes it easier to interpret.

Distance Metric Learning VS FDA

In many fundamental machine learning problems, the Euclidean distances between data points do not represent the topology that we are trying to capture. Kernel methods address this problem by mapping the points into new spaces where Euclidean distances may be more useful. An alternative approach is to construct a Mahalanobis distance (a quadratic Gaussian metric) over the input space and use it in place of the Euclidean distance. This can be equivalently interpreted as a linear transformation of the original inputs, followed by Euclidean distance in the projected space. The approach has attracted a lot of recent interest, although some of the proposed algorithms are iterative and computationally expensive. In the paper "Distance Metric Learning VS FDA" (http://www.aaai.org/Papers/AAAI/2008/AAAI08-095.pdf), written by our instructor, the authors propose a closed-form solution to an algorithm that previously required expensive semidefinite optimization. They provide a new problem setup in which the algorithm performs as well as or better than several standard methods, but without the computational complexity. Furthermore, they show a strong relationship between these methods and Fisher Discriminant Analysis (FDA). They also extend the approach by kernelizing it, allowing for non-linear transformations of the metric.

Fisher's Discriminant Analysis (FDA) - October 9, 2009

The goal of FDA is to reduce the dimensionality of the data in order to have separable data points in the new space.
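The equivalence noted above for metric learning — a Mahalanobis distance is just a Euclidean distance after a linear map — can be checked numerically. This is an illustrative sketch in Python/NumPy rather than the course's R, with made-up data and names:

```python
import numpy as np

# Sketch (illustrative, not from the notes): a Mahalanobis distance
# d_M(x, y) = sqrt((x - y)^T M (x - y)), with M positive semidefinite,
# equals the Euclidean distance after the linear map L, where M = L^T L.

rng = np.random.default_rng(0)

# Build a positive semidefinite M from an arbitrary linear map L
L = rng.standard_normal((2, 2))
M = L.T @ L

x = np.array([1.0, 2.0])
y = np.array([-0.5, 3.0])

d_mahalanobis = np.sqrt((x - y) @ M @ (x - y))
d_euclid_mapped = np.linalg.norm(L @ x - L @ y)

print(np.isclose(d_mahalanobis, d_euclid_mapped))  # prints True
```

This is why learning the metric M and learning the linear transformation L are interchangeable views of the same problem.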
We can consider two kinds of problems: the 2-class problem and the multi-class problem.

Two-class problem

In the two-class problem, we have the prior knowledge that the data points belong to two classes. Intuitively speaking, the points of each class form a cloud around the mean of the class, with each...
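For the two-class case, the standard Fisher direction is w ∝ S_w⁻¹(m₁ − m₂), where S_w is the within-class scatter and m₁, m₂ are the class means. A minimal sketch on synthetic data (Python/NumPy rather than the course's R; all data and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic 2-D classes (illustrative data, not from the notes)
X1 = rng.standard_normal((50, 2))
X2 = rng.standard_normal((50, 2)) + np.array([3.0, 3.0])

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Within-class scatter S_w = S_1 + S_2
S_w = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Fisher direction: w proportional to S_w^{-1} (m1 - m2), normalized
w = np.linalg.solve(S_w, m1 - m2)
w = w / np.linalg.norm(w)

# Projecting onto w separates the two clouds along a single axis
p1, p2 = X1 @ w, X2 @ w
print(abs(p1.mean() - p2.mean()))  # a clear gap between projected means
```

Projecting onto this single direction is exactly the dimensionality reduction FDA aims for: the two clouds that overlap in no single original coordinate become well separated on the 1-D projection.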

