CSE 555 Spring 2010
Homework 2: Parametric Learning and Dimensionality

Jason J. Corso
Computer Science and Engineering
SUNY at Buffalo
jcorso@buffalo.edu

Date Assigned: 1 Feb 2010
Date Due: 26 Feb 2010

Homework must be submitted in class. No late work will be accepted. This homework contains both written and computer questions. You must turn in the written questions (which may be parts of the computer questions) in class, and you must submit the computer code via the CSE submit script. For the computer parts, on this homework in particular, it is highly recommended that you use Matlab (available in the department/SENS labs). However, you are free to choose your poison (C/C++ or Java). If you do so, I recommend you acquaint yourself with CLAPACK (C Linear Algebra Package) or JAMA (Java numerics, http://math.nist.gov/javanumerics/jama/doc).

Problem 1: Multivariate Gaussian MLE (15%)

Derive the equations for the maximum likelihood solution to the mean and covariance matrix of a multivariate Normal distribution. (This was assigned in class.)

Solution: The d-dimensional multivariate normal density is

\[
N(x; \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right).
\]

Given data \( D = \{x_1, x_2, \dots, x_N\} \), \( x_i \in \mathbb{R}^d \), the log-likelihood is

\[
l(\mu, \Sigma; D) = -\frac{dN}{2} \ln 2\pi - \frac{N}{2} \ln |\Sigma| - \frac{1}{2} \sum_i (x_i - \mu)^T \Sigma^{-1} (x_i - \mu).
\]

Taking the derivative with respect to \( \mu \) and setting it to zero,

\[
\frac{\partial l}{\partial \mu} = \sum_i \Sigma^{-1} (x_i - \mu) = 0
\quad\Longrightarrow\quad
\hat{\mu} = \frac{1}{N} \sum_i x_i.
\]

For \( \Sigma \), first rewrite \( l(\mu, \Sigma; D) \) in terms of \( \Sigma^{-1} \), using the fact that the scalar quadratic form equals its own trace and that the trace is invariant under cyclic permutation:

\[
l(\mu, \Sigma; D) = -\frac{dN}{2} \ln 2\pi + \frac{N}{2} \ln |\Sigma^{-1}| - \frac{1}{2} \sum_i \mathrm{tr}\!\left( \Sigma^{-1} (x_i - \mu)(x_i - \mu)^T \right).
\]

Because

\[
\frac{\partial \ln |\Sigma^{-1}|}{\partial \Sigma^{-1}} = \Sigma
\quad\text{and}\quad
\frac{\partial\, \mathrm{tr}\!\left( \Sigma^{-1} (x_i - \mu)(x_i - \mu)^T \right)}{\partial \Sigma^{-1}} = (x_i - \mu)(x_i - \mu)^T,
\]

setting the derivative to zero gives

\[
\frac{\partial l}{\partial \Sigma^{-1}} = \frac{N}{2} \Sigma - \frac{1}{2} \sum_i (x_i - \mu)(x_i - \mu)^T = 0
\quad\Longrightarrow\quad
\hat{\Sigma} = \frac{1}{N} \sum_i (x_i - \hat{\mu})(x_i - \hat{\mu})^T.
\]
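The assignment recommends Matlab, but as a quick sanity check of the closed-form estimates above, here is a minimal sketch in Python/NumPy (an illustration only, not part of the assignment): it draws samples from a known 2-D Gaussian and verifies that the sample mean and the 1/N-normalized outer-product average recover the true parameters. The specific numbers (mu_true, Sigma_true, N) are arbitrary choices for the demo.

```python
import numpy as np

# Hypothetical illustration (not part of the assignment): draw samples from a
# known 2-D Gaussian and check that the closed-form MLEs recover its parameters.
rng = np.random.default_rng(0)
mu_true = np.array([1.0, -2.0])
Sigma_true = np.array([[2.0, 0.5],
                       [0.5, 1.0]])
N = 100_000
X = rng.multivariate_normal(mu_true, Sigma_true, size=N)  # shape (N, d)

# MLE of the mean: the sample average over the data.
mu_hat = X.mean(axis=0)

# MLE of the covariance: average outer product of centered samples.
# Note the 1/N normalization from the derivation, not the unbiased 1/(N-1).
centered = X - mu_hat
Sigma_hat = centered.T @ centered / N

print(mu_hat)     # close to mu_true
print(Sigma_hat)  # close to Sigma_true
```

With this many samples the estimates agree with the true parameters to within sampling error; shrinking N makes the 1/N-vs-1/(N-1) bias of the covariance MLE visible.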
In maximum entropy estimation, we assume (as before) that the distribution is fixed but unknown, but that we know a number of related constraints on it, such as its mean, variance, etc. The maximum entropy estimate of the distribution is the one that has maximum randomness subject to the known constraints.
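To make the idea concrete, here is a minimal sketch (an illustration, not part of the assignment) of maximum entropy over a finite support {1, ..., 6} subject to a fixed mean. The maxent solution with a mean constraint has the exponential (Gibbs) form p_i proportional to exp(lambda * x_i); since the resulting mean is monotone increasing in lambda, we can solve for the Lagrange multiplier by bisection. The target mean of 4.5 is an arbitrary choice for the demo.

```python
import numpy as np

# Hypothetical illustration: maximum entropy distribution on {1,...,6}
# subject to E[X] = 4.5. The solution is p_i proportional to exp(lam * x_i);
# we find lam by bisection, since the mean is monotone increasing in lam.
x = np.arange(1, 7, dtype=float)
target_mean = 4.5

def mean_for(lam):
    """Mean of the Gibbs distribution p_i proportional to exp(lam * x_i)."""
    w = np.exp(lam * x)
    p = w / w.sum()
    return p @ x

lo, hi = -20.0, 20.0
for _ in range(100):               # bisection on the Lagrange multiplier
    mid = 0.5 * (lo + hi)
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

w = np.exp(lam * x)
p = w / w.sum()
print(p)      # the maxent distribution with mean 4.5
print(p @ x)  # approximately 4.5
```

Note that with no constraints beyond normalization the same machinery returns the uniform distribution (lambda = 0), which is the "maximum randomness" baseline; the mean constraint tilts it exponentially toward larger outcomes.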