d density. The parameters µ and σ are the mean and standard
deviation of the distribution p, and are not known.
The maximum likelihood estimates of x, µ, σ are the maximizers of the log-likelihood function

    l(x, \mu, \sigma) = \sum_{i=1}^{m} \log p(y_i - a_i^T x)
                      = -m \log \sigma + \sum_{i=1}^{m} \log f\!\left( \frac{y_i - a_i^T x - \mu}{\sigma} \right),

where y ∈ R^m is the observed value. Show that if f is log-concave, then the maximum likelihood estimates
of x, µ, σ can be determined by solving a convex optimization problem.
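One way to see why this works (a sketch of the usual change of variables, not spelled out in the statement itself): substituting t = 1/σ, z = x/σ, v = µ/σ turns the log-likelihood into

```latex
l(z, v, t) = m \log t + \sum_{i=1}^{m} \log f\!\left( t\, y_i - a_i^T z - v \right),
```

which is concave in (z, v, t), since log t is concave, f is log-concave, and the argument of f is affine in the new variables. The original estimates are recovered as σ = 1/t, x = z/t, µ = v/t.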
6.2 Mean and covariance estimation with conditional independence constraints. Let X ∈ R^n be a
Gaussian random variable with density

    p(x) = \frac{1}{(2\pi)^{n/2} (\det S)^{1/2}} \exp\!\left( -(x - a)^T S^{-1} (x - a)/2 \right).

The conditional density of a subvector (X_i, X_j) ∈ R^2 of X, given the remaining variables, is also
Gaussian, and its covariance matrix R_{ij} is equal to the Schur complement of the 2 × 2 submatrix

    \begin{bmatrix} S_{ii} & S_{ij} \\ S_{ij} & S_{jj} \end{bmatrix}

in the covariance matrix S. The variables X_i, X_j are called conditionally independent if the
covariance matrix R_{ij} of their conditional distribution is diagonal.
Formulate the following problem as a convex optimization problem. We are given N independent
samples y1, . . . , yN ∈ R^n of X. We are also given a li...
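The statement is truncated above, but the convexity of the resulting maximum likelihood problem can be sketched independently. A standard fact (not shown in the truncated text) is that for a Gaussian, X_i and X_j are conditionally independent exactly when (S^{-1})_{ij} = 0, so parametrizing by the inverse covariance K = S^{-1} makes the independence constraints linear. The sketch below uses synthetic data and illustrative names throughout; it only checks the unconstrained maximizer, not the full constrained problem.

```python
import numpy as np

# Sketch: with the mean estimate fixed at the sample mean, the Gaussian
# log-likelihood in terms of K = S^{-1} is (up to constants and a factor N/2)
#     l(K) = log det K - tr(K Y),
# where Y is the empirical covariance of the samples. l is concave in K, and
# each conditional independence constraint is the linear equation K[i, j] = 0,
# so the constrained ML problem is convex.

rng = np.random.default_rng(0)

n, N = 4, 500
A = rng.standard_normal((n, n))                 # synthetic "true" mixing matrix
samples = rng.standard_normal((N, n)) @ A.T     # N samples y_1, ..., y_N of X

ybar = samples.mean(axis=0)                     # sample mean -> estimate of a
Y = (samples - ybar).T @ (samples - ybar) / N   # empirical covariance

def loglik(K):
    """Concave objective log det K - tr(K Y); K assumed positive definite."""
    sign, logdet = np.linalg.slogdet(K)
    assert sign > 0
    return logdet - np.trace(K @ Y)

# Without independence constraints the maximizer is K* = Y^{-1}
# (stationarity: grad log det K = K^{-1}, grad tr(K Y) = Y).
K_star = np.linalg.inv(Y)

# Concavity check: a symmetric perturbation small enough to keep K positive
# definite can only decrease the objective.
E = rng.standard_normal((n, n))
E = (E + E.T) / 2
eps = 0.1 * np.linalg.eigvalsh(K_star)[0] / np.linalg.norm(E, 2)
assert loglik(K_star) >= loglik(K_star + eps * E)
```

A full solution would maximize this concave objective subject to K[i, j] == 0 for each pair in the given list, using any solver that handles log-det objectives; the unconstrained check here is only meant to make the structure of the problem concrete.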
This note was uploaded on 09/10/2013 for the course C 231 taught by Professor F.borrelli during the Fall '13 term at University of California, Berkeley.