# CSE 555 Spring 2010 Homework 2 Solution


CSE 555 Spring 2010 Homework 2: Parametric Learning and Dimensionality
Jason J. Corso, Computer Science and Engineering, SUNY at Buffalo ([email protected])
Date Assigned: 1 Feb 2010. Date Due: 26 Feb 2010.

Homework must be submitted in class. No late work will be accepted. This homework contains both written and computer questions. You must turn in the written questions (which may be parts of the computer questions) in class, and you must submit the computer code via the CSE submit script. For the computer parts, on this homework in particular, it is highly recommended that you use Matlab (available in the department/SENS labs). However, you are free to choose your poison (C/C++ or Java). If you do so, I recommend you acquaint yourself with CLAPACK (the C Linear Algebra Package) or JAMA (Java numerics, http://math.nist.gov/javanumerics/jama/doc).

## Problem 1: Multivariate Gaussian MLE (15%)

Derive the equations for the maximum likelihood solution to the mean and covariance matrix of a multivariate Normal distribution. (This was assigned in class.)

**Solution:** For a $d$-dimensional multivariate normal distribution $N(\mu, \Sigma)$:

$$
N(\mu, \Sigma) = \frac{1}{(2\pi)^{d/2} \, |\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right)
$$

So, given data $D = \{x_1, x_2, \dots, x_N\}$, $x_i \in \mathbb{R}^d$, we get the log-likelihood

$$
l(\mu, \Sigma \mid D) = -\frac{dN}{2} \ln 2\pi \;-\; \frac{N}{2} \ln |\Sigma| \;-\; \frac{1}{2} \sum_i (x_i - \mu)^T \Sigma^{-1} (x_i - \mu).
$$

Taking the derivative with respect to $\mu$ and setting it to zero:

$$
\frac{\partial l}{\partial \mu} = \sum_i \Sigma^{-1} (x_i - \mu) = 0
\quad \Longrightarrow \quad
\hat{\mu} = \frac{1}{N} \sum_i x_i.
$$

For $\Sigma$, it is easier to differentiate with respect to $\Sigma^{-1}$, so first rewrite $l(\mu, \Sigma \mid D)$ as

$$
l(\mu, \Sigma \mid D) = -\frac{dN}{2} \ln 2\pi \;+\; \frac{N}{2} \ln |\Sigma^{-1}| \;-\; \frac{1}{2} \sum_i \operatorname{tr}\!\left( \Sigma^{-1} (x_i - \mu)(x_i - \mu)^T \right),
$$

because

$$
\frac{\partial \ln |\Sigma^{-1}|}{\partial \Sigma^{-1}} = \Sigma
\qquad \text{and} \qquad
\frac{\partial \operatorname{tr}\!\left( \Sigma^{-1} (x_i - \mu)(x_i - \mu)^T \right)}{\partial \Sigma^{-1}} = (x_i - \mu)(x_i - \mu)^T.
$$

Therefore,

$$
\frac{\partial l}{\partial \Sigma^{-1}} = \frac{N}{2} \Sigma \;-\; \frac{1}{2} \sum_i (x_i - \mu)(x_i - \mu)^T = 0
\quad \Longrightarrow \quad
\hat{\Sigma} = \frac{1}{N} \sum_i (x_i - \hat{\mu})(x_i - \hat{\mu})^T.
$$

## Problem 2: Maximum Entropy Parameter Estimation (20+20%)
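As a quick numerical sanity check of the closed-form MLE equations above (this sketch is not part of the original solution; the NumPy setup, seed, and variable names are our own), the estimates can be computed directly from synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 500, 3

# Hypothetical ground-truth parameters for the synthetic data.
true_mu = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(d, d))
true_sigma = A @ A.T + d * np.eye(d)   # a random symmetric positive-definite covariance

# Draw N samples; each row of X is one observation x_i.
X = rng.multivariate_normal(true_mu, true_sigma, size=N)

# MLE of the mean: mu_hat = (1/N) * sum_i x_i
mu_hat = X.mean(axis=0)

# MLE of the covariance: Sigma_hat = (1/N) * sum_i (x_i - mu_hat)(x_i - mu_hat)^T
centered = X - mu_hat
sigma_hat = centered.T @ centered / N   # note the 1/N, not the unbiased 1/(N-1)

print(mu_hat)
print(sigma_hat)
```

With enough samples, `mu_hat` and `sigma_hat` approach `true_mu` and `true_sigma`; note that the MLE covariance uses the $1/N$ normalization, which is biased but maximizes the likelihood.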

In lecture, we covered maximum likelihood parameter estimation, maximum a posteriori parameter estimation, and Bayesian parameter estimation. A fourth form of parameter estimation is called the method of maximum entropy. In maximum entropy estimation, we assume the distribution is fixed but unknown (as before), but that we also know a number of related constraints, such as the mean, variance, etc. The maximum entropy estimate of the distribution is the one that has maximum randomness subject to the known constraints.
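To make this setup concrete, here is a sketch of the general maximum entropy formulation in our own notation (the constraint functions $f_k$ and multipliers $\lambda_k$ below are illustrative, not taken from the handout). We seek the density $p$ maximizing the differential entropy subject to $K$ known moment constraints:

$$
\max_{p} \; H(p) = -\int p(x) \ln p(x)\, dx
\quad \text{subject to} \quad
\int p(x)\, dx = 1, \qquad
\int f_k(x)\, p(x)\, dx = c_k, \quad k = 1, \dots, K.
$$

Introducing Lagrange multipliers $\lambda_0$ (for normalization) and $\lambda_k$ (one per constraint) and setting the functional derivative of the Lagrangian to zero gives

$$
-\ln p(x) - 1 + \lambda_0 + \sum_k \lambda_k f_k(x) = 0
\quad \Longrightarrow \quad
p(x) = \exp\!\left( \lambda_0 - 1 + \sum_k \lambda_k f_k(x) \right),
$$

an exponential-family form. For example, with constraints on the mean ($f_1(x) = x$) and second moment ($f_2(x) = x^2$) over $\mathbb{R}$, the maximizer is a Gaussian.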
