Convex Optimization Overview (cont’d)
Chuong B. Do
November 29, 2009
During last week’s section, we began our study of convex optimization, the study of mathematical optimization problems of the form,

    minimize_{x ∈ R^n}  f(x)
    subject to  x ∈ C.                                        (1)
In a convex optimization problem, x ∈ R^n is a vector known as the optimization variable, f : R^n → R is a convex function that we want to minimize, and C ⊆ R^n is a convex set describing the set of feasible solutions. From a computational perspective, convex optimization problems are interesting in the sense that any locally optimal solution will always be guaranteed to be globally optimal. Over the last several decades, general purpose methods for solving convex optimization problems have become increasingly reliable and efficient.
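This global-optimality guarantee is exactly why even very simple first-order methods are trustworthy on convex problems. As a minimal sketch (the particular function, step size, and iteration count below are my own illustrative choices, not from the notes), plain gradient descent on a convex quadratic finds the global minimizer from any starting point:

```python
# Gradient descent on the convex function f(x) = (x - 3)^2.
# Because f is convex, the local minimum it converges to is guaranteed
# to be the global minimum.

def grad_descent(grad, x0, lr=0.1, iters=200):
    """Plain gradient descent: repeatedly step x <- x - lr * grad(x)."""
    x = x0
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

grad = lambda x: 2.0 * (x - 3.0)   # f'(x) for f(x) = (x - 3)^2

# Any starting point converges to the unique global minimizer x* = 3.
x_star = grad_descent(grad, x0=-10.0)
print(x_star)   # approximately 3.0
```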
In these lecture notes, we continue our foray into the field of convex optimization. In particular, we explore a powerful concept in convex optimization theory known as Lagrange duality. We focus on the main intuitions and mechanics of Lagrange duality; in particular, we describe the concept of the Lagrangian, its relation to primal and dual problems, and the role of the Karush-Kuhn-Tucker (KKT) conditions in providing necessary and sufficient conditions for optimality of a convex optimization problem.
1  Lagrange duality
Generally speaking, the theory of Lagrange duality is the study of optimal solutions to convex optimization problems. As we saw previously in lecture, when minimizing a differentiable convex function f(x) with respect to x ∈ R^n, a necessary and sufficient condition for x* ∈ R^n to be globally optimal is that ∇_x f(x*) = 0. In the more general setting of convex optimization problems with constraints, however, this simple optimality condition does not work. One primary goal of duality theory is to characterize the optimal points of convex programs in a mathematically rigorous way.
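To see concretely why ∇_x f(x*) = 0 fails to characterize optimality once constraints are present, consider a simple example of my own choosing: f(x) = x² minimized over the convex set C = {x : x ≥ 1}. The constrained optimum sits on the boundary of C, where the gradient is nonzero:

```python
# f(x) = x^2 is convex; its unconstrained minimizer satisfies f'(x) = 0 at x = 0.
# Constrained to C = {x : x >= 1}, the minimizer moves to the boundary x = 1,
# where f'(1) = 2 != 0 -- so "gradient equals zero" no longer
# characterizes optimality in the constrained setting.

f = lambda x: x ** 2
fprime = lambda x: 2.0 * x

x_unconstrained = 0.0   # solves f'(x) = 0
x_constrained = 1.0     # minimizer of f over x >= 1 (f is increasing on [1, inf))

print(fprime(x_unconstrained))  # 0.0
print(fprime(x_constrained))    # 2.0, nonzero at the constrained optimum
```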
In these notes, we provide a brief introduction to Lagrange duality and its applications to generic differentiable convex optimization problems of the form,
    minimize_{x ∈ R^n}  f(x)
    subject to  g_i(x) ≤ 0,  i = 1, . . . , m,
                h_i(x) = 0,  i = 1, . . . , p,                (OPT)
where x ∈ R^n is the optimization variable, f : R^n → R and g_i : R^n → R are differentiable convex functions¹, and h_i : R^n → R are affine functions².
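As a concrete sketch, one way to encode an instance of (OPT) is to store f, the g_i, and the h_i as plain callables and check feasibility pointwise. The toy problem below (a quadratic objective with one inequality and one affine equality constraint) is my own illustrative example, not one from the notes:

```python
# A toy instance of (OPT):
#   minimize   f(x) = x1^2 + x2^2
#   subject to g1(x) = 1 - x1 - x2 <= 0    (convex inequality constraint)
#              h1(x) = x1 - x2      = 0    (affine equality constraint)

f  = lambda x: x[0] ** 2 + x[1] ** 2
gs = [lambda x: 1.0 - x[0] - x[1]]   # require each g_i(x) <= 0
hs = [lambda x: x[0] - x[1]]         # require each h_i(x) == 0

def is_feasible(x, tol=1e-9):
    """Check x against all inequality and equality constraints of (OPT)."""
    return all(g(x) <= tol for g in gs) and all(abs(h(x)) <= tol for h in hs)

x_opt = (0.5, 0.5)   # the optimal point for this particular instance
print(is_feasible(x_opt), f(x_opt))   # True 0.5
```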
1.1  The Lagrangian
In this section, we introduce an artificial-looking construct called the “Lagrangian” which is the basis of Lagrange duality theory. Given a convex constrained minimization problem of the form (OPT), the (generalized) Lagrangian is a function L : R^n × R^m × R^p → R, defined as
    L(x, α, β) = f(x) + Σ_{i=1}^{m} α_i g_i(x) + Σ_{i=1}^{p} β_i h_i(x).        (2)
Here, the first argument of the Lagrangian is a vector x ∈ R^n, whose dimensionality matches that of the optimization variable in the original optimization problem; by convention, we refer to x as the primal variables of the Lagrangian. The second argument of the Lagrangian is a vector α ∈ R^m with one variable α_i for each of the m convex inequality constraints in the original optimization problem. The third argument of the Lagrangian is a vector β ∈ R^p, with one variable β_i for each of the p affine equality constraints in the original optimization problem.
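Definition (2) translates directly into code: the Lagrangian adds to f(x) a weighted sum of the constraint functions, with one weight per constraint. A minimal sketch, using a one-dimensional toy problem of my own choosing for the data:

```python
# Generalized Lagrangian, following equation (2):
#   L(x, alpha, beta) = f(x) + sum_i alpha_i * g_i(x) + sum_i beta_i * h_i(x)

def lagrangian(f, gs, hs, x, alpha, beta):
    """Evaluate L(x, alpha, beta) for objective f, inequalities gs, equalities hs."""
    assert len(alpha) == len(gs) and len(beta) == len(hs)
    return (f(x)
            + sum(a * g(x) for a, g in zip(alpha, gs))
            + sum(b * h(x) for b, h in zip(beta, hs)))

# Toy data (illustrative, not from the notes):
#   f(x) = x^2, g1(x) = 1 - x, h1(x) = x - 2.
f  = lambda x: x ** 2
gs = [lambda x: 1.0 - x]
hs = [lambda x: x - 2.0]

# At x = 3, alpha = (0.5,), beta = (1.0,):
#   L = 9 + 0.5 * (1 - 3) + 1.0 * (3 - 2) = 9 - 1 + 1 = 9
print(lagrangian(f, gs, hs, 3.0, [0.5], [1.0]))   # 9.0
```

Setting all the multipliers to zero recovers the bare objective f(x), which is a quick sanity check on the implementation.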

This is the end of the preview. Sign up
to
access the rest of the document.