bv_cvxbook_extra_exercises


(e) Optimality condition for ML estimation. Let ℓθ(x1, . . . , xK) be the log-likelihood function for K IID samples, x1, . . . , xK, from the distribution or density pθ. Assuming log pθ is differentiable in θ, show that

    (1/K) ∇θ ℓθ(x1, . . . , xK) = (1/K) ∑_{i=1}^{K} c(xi) − Eθ c(x).

(The subscript on E means the expectation is taken under the distribution or density pθ.) Interpretation. The ML estimate of θ is characterized by the empirical mean of c(x) being equal to the expected value of c(x) under the density or distribution pθ. (We assume here that the maximizer of ℓ is characterized by the gradient vanishing.) A derivation sketch appears after exercise 6.4 below.

6.4 Maximum likelihood prediction of team ability. A set of n teams compete in a tournament. We model each team's ability by a number aj ∈ [0, 1], j = 1, . . . , n. When teams j and k play each other, the probability that team j wins is equal to prob(aj − ak + v > 0), where v ∼ N(0, σ²). You are given the outcomes of m past games. These are organized as (j(i), k(i), y(i)) . . .
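Derivation sketch for part (e). The earlier parts of this exercise are not shown in this excerpt; the sketch below assumes, as those parts presumably set up, that pθ is an exponential family with sufficient statistic c(x) and log-normalizer a(θ), i.e. pθ(x) = exp(θᵀ c(x) − a(θ)). Under that assumption,

\[
\ell_\theta(x_1,\dots,x_K) = \sum_{i=1}^{K} \log p_\theta(x_i)
  = \theta^T \sum_{i=1}^{K} c(x_i) - K\, a(\theta),
\]
\[
\nabla_\theta\, \ell_\theta(x_1,\dots,x_K) = \sum_{i=1}^{K} c(x_i) - K\, \nabla a(\theta)
  = \sum_{i=1}^{K} c(x_i) - K\, \mathbf{E}_\theta\, c(x),
\]

where the last step uses ∇a(θ) = Eθ c(x), obtained by differentiating a(θ) = log ∫ exp(θᵀ c(x)) dx under the integral sign. Dividing by K gives the stated condition.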
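Numerical sketch for exercise 6.4. The statement is truncated in this excerpt, but the log-likelihood of the observed outcomes is a sum of terms of the form log Φ((aj − ak)/σ), with the sign of the argument flipped for losses, which is concave in a. The sketch below solves the resulting ML problem with SciPy on synthetic data; the problem sizes, the data-generation step, and the ±1 encoding of the outcomes y(i) are illustrative assumptions, not part of the exercise data.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, m, sigma = 10, 45, 0.25                    # hypothetical sizes and noise level
a_true = rng.uniform(0.0, 1.0, size=n)        # hypothetical "true" abilities
j = rng.integers(0, n, size=m)                # first team in each game
k = (j + rng.integers(1, n, size=m)) % n      # a different opponent
# y[i] = +1 if team j[i] won game i, -1 if it lost (encoding assumed here)
y = np.where(a_true[j] - a_true[k] + sigma * rng.standard_normal(m) > 0, 1.0, -1.0)

def neg_log_likelihood(a):
    # prob(team j beats team k) = Phi((a_j - a_k)/sigma); y flips the sign for losses
    z = y * (a[j] - a[k]) / sigma
    return -norm.logcdf(z).sum()

# maximize the concave log-likelihood over the box a in [0,1]^n
res = minimize(neg_log_likelihood, x0=np.full(n, 0.5),
               bounds=[(0.0, 1.0)] * n, method="L-BFGS-B")
a_ml = res.x
print("ML estimate of team abilities:", np.round(a_ml, 2))

Note that the likelihood depends on a only through the differences aj − ak, so the estimate is determined only up to an additive constant within the box [0, 1]^n.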

