bv_cvxbook_extra_exercises


... with a probability density \phi(v) = (1/\sqrt{2\pi}) e^{-v^2/2}. As a consequence, the sensor outputs y_i are random variables with possible values \pm 1. We will denote prob(y_i = 1) as P_i(x) to emphasize that it is a function of the unknown parameter x:

    P_i(x) = prob(y_i = 1) = prob(a_i^T x + v_i \geq b_i) = \frac{1}{\sqrt{2\pi}} \int_{b_i - a_i^T x}^{\infty} e^{-t^2/2} \, dt,

    1 - P_i(x) = prob(y_i = -1) = prob(a_i^T x + v_i < b_i) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{b_i - a_i^T x} e^{-t^2/2} \, dt.

The problem is to estimate x, based on observed values \bar{y}_1, \bar{y}_2, \ldots, \bar{y}_m of the m sensor outputs. We will apply the maximum likelihood (ML) principle to determine an estimate \hat{x}. In maximum likelihood estimation, we calculate \hat{x} by maximizing the log-likelihood function

    l(x) = \log \Big( \prod_{\bar{y}_i = 1} P_i(x) \prod_{\bar{y}_i = -1} (1 - P_i(x)) \Big) = \sum_{\bar{y}_i = 1} \log P_i(x) + \sum_{\bar{y}_i = -1} \log(1 - P_i(x)).

(a) Show that the maximum likelihood estimation problem

        maximize   l(x)

    is a convex optimization problem. The variable is x. The measured vector \bar{y} and the parameters a_i and b_i are given.

(b) ...
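The exercise only asks for a convexity argument, but as a concrete illustration the log-likelihood above can also be evaluated through the standard normal CDF \Phi (since v_i ~ N(0,1), P_i(x) = \Phi(a_i^T x - b_i)) and maximized numerically. The sketch below is not part of the exercise: the problem data A, b, the simulated outputs y, and the sizes m, n are placeholder assumptions, and it uses a generic quasi-Newton solver from SciPy rather than any method prescribed by the text.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
m, n = 100, 5                           # number of sensors and dimension of x (placeholder sizes)
A = rng.standard_normal((m, n))         # rows are the a_i^T (placeholder data)
b = rng.standard_normal(m)              # thresholds b_i (placeholder data)
x_true = rng.standard_normal(n)         # "true" parameter used only to simulate outputs
y = np.sign(A @ x_true + rng.standard_normal(m) - b)   # simulated sensor outputs, values +/-1

def neg_log_likelihood(x):
    # Since v_i ~ N(0,1), P_i(x) = Phi(a_i^T x - b_i), so
    # l(x) = sum_i log Phi(ybar_i * (a_i^T x - b_i)); we minimize -l(x).
    return -norm.logcdf(y * (A @ x - b)).sum()

x_hat = minimize(neg_log_likelihood, np.zeros(n), method="BFGS").x
print("ML estimate x_hat:", x_hat)

Because -log Phi is convex, the objective passed to the solver is convex in x, so the quasi-Newton iteration converges to the global ML estimate for this sketch.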