bv_cvxbook_extra_exercises

# Scatter plot the objective value of the affine policy


The problem is to be solved many times; each time, the value of u (i.e., a sample) is given, and then the decision variable x is chosen. The mapping from u to the decision variable x(u) is called the policy, since it gives the decision variable value for each value of u. When enough time and computing hardware are available, we can simply solve the LP for each new value of u; this is an optimal policy, which we denote x⋆(u).

In some applications, however, the decision x(u) must be made very quickly, so solving the LP is not an option. Instead we seek a suboptimal policy that is affine: xaff(u) = x0 + Ku, where x0 is called the nominal decision and K ∈ R^{n×p} is called the feedback gain matrix. (Roughly speaking, x0 is our guess of x before the value of u has been revealed; Ku is our modification of this guess once we know u.) We determine the policy (i.e., suitable values for x0 and K) ahead of time; we can then evaluate the policy (that is, find xaff(u) given u) very quickly, by matrix multiplication and addition.
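As a minimal sketch of why the affine policy is cheap to evaluate, the snippet below computes xaff(u) = x0 + Ku with NumPy. The dimensions and the values of x0 and K are hypothetical placeholders, not from the exercise; in practice x0 and K would be chosen ahead of time by solving an optimization problem over sampled values of u.

```python
import numpy as np

# Hypothetical dimensions for illustration: x in R^n, u in R^p.
n, p = 3, 2
rng = np.random.default_rng(0)

# Placeholder policy parameters (in the exercise these are designed offline).
x0 = rng.standard_normal(n)       # nominal decision: guess of x before u is known
K = rng.standard_normal((n, p))   # feedback gain matrix, K in R^{n x p}

def x_aff(u):
    """Evaluate the affine policy xaff(u) = x0 + K u.
    Only one matrix-vector multiply and one add, so it is very fast."""
    return x0 + K @ u

u = rng.standard_normal(p)        # a sample of the uncertain parameter
x = x_aff(u)                      # decision for this u
```

Note that with u = 0 the policy simply returns the nominal decision x0, which matches the interpretation of Ku as the correction applied once u is revealed.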

## This note was uploaded on 09/10/2013 for the course C 231 taught by Professor F. Borrelli during the Fall '13 term at Berkeley.
