bv_cvxbook_extra_exercises



(d) Use CVX, with the overloaded max(), abs(), and sum() functions.

(e) Use the CVX function huber().

(f) The current version of CVX handles the logarithm using an iterative procedure, which is slow and not entirely reliable. However, you can reformulate this problem as

maximize $\left( \prod_{k=1}^{m} \bigl(1 - (Ax - b)_k\bigr)\bigl(1 + (Ax - b)_k\bigr) \right)^{1/2m}$,

and use the CVX function geo_mean().

**5.5 ℓ1.5 optimization.** Optimization and approximation methods that use both an ℓ2-norm (or its square) and an ℓ1-norm are currently very popular in statistics, machine learning, and signal and image processing. Examples include Huber estimation, LASSO, basis pursuit, SVM, various ℓ1-regularized classification methods, total variation de-noising, etc. Very roughly, an ℓ2-norm corresponds to Euclidean distance (squared), or the negative log-likelihood function for a Gaussian; in contrast, the ℓ1-norm gives 'robust' approximation, i.e., reduced sensitivity to outliers, and also tends to yield sparse solutions (of whatever the argument of the norm is). (All of this is just background; you don't need to know any of this to solve the problem...
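For part (e), a point of reference: CVX's huber() is the standard Huber penalty with threshold M = 1, quadratic for small residuals and affine beyond. A minimal NumPy sketch of that penalty (the function name and the vectorized form are my own; only the penalty definition is standard):

```python
import numpy as np

def huber_penalty(r, M=1.0):
    """Huber penalty applied elementwise: r**2 where |r| <= M,
    and the affine continuation 2*M*|r| - M**2 where |r| > M."""
    a = np.abs(r)
    return np.where(a <= M, r**2, 2 * M * a - M**2)

# Quadratic inside the threshold, linear growth outside:
print(huber_penalty(np.array([0.5, 1.0, 3.0])))  # values 0.25, 1.0, 5.0
```

Because the penalty grows only linearly past the threshold, large residuals (outliers) contribute far less than they would under a pure squared loss.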
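The reformulation in part (f) works because taking the log of the geometric-mean objective gives exactly $\frac{1}{2m}\sum_k \log\bigl(1-(Ax-b)_k^2\bigr)$, a monotone transform, so the two objectives share the same maximizers. A quick NumPy check of that identity (the residual vector r standing in for Ax − b is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6
r = rng.uniform(-0.9, 0.9, size=m)  # plays the role of Ax - b, with |r_k| < 1

# Geometric-mean form of the objective from part (f)
geo = np.prod((1 - r) * (1 + r)) ** (1 / (2 * m))

# Log-sum form, rescaled by 1/(2m)
log_form = np.sum(np.log(1 - r**2)) / (2 * m)

# The two agree after taking a log, hence have the same maximizers.
print(np.isclose(np.log(geo), log_form))  # True
```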
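The robustness contrast described in the background above is visible even in the simplest scalar case: the ℓ2 minimizer of residuals to a data set is the mean, while the ℓ1 minimizer is the median, and a single outlier drags the mean far more than the median. A small NumPy illustration (the data are invented):

```python
import numpy as np

data = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 100.0])  # one gross outlier

# argmin_x sum (x - a_i)^2 is the mean (l2 fit);
# argmin_x sum |x - a_i| is the median (l1 fit).
l2_fit = data.mean()
l1_fit = np.median(data)

print(l2_fit)  # 17.5, pulled hard toward the outlier
print(l1_fit)  # 1.025, essentially ignores it
```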

## This note was uploaded on 09/10/2013 for the course C 231 taught by Professor F.borrelli during the Fall '13 term at Berkeley.
