ISE 536–Fall03: Linear Programming and Extensions
November 24, 2003
Lecture 22: IPM, Path Following Methods
Lecturer: Fernando Ordóñez
1  A few ideas from convex optimization
For a convex function $f : \mathbb{R}^n \to \mathbb{R}$, the point that minimizes $f(x)$ satisfies the first-order condition
\[
\nabla f(x) = 0.
\]
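As a concrete instance of this condition (the quadratic $f$ below is a hypothetical example, not from the notes): for $f(x) = \tfrac{1}{2} x^{\top} Q x - b^{\top} x$ with $Q$ positive definite, the gradient is $Qx - b$, so the minimizer is the solution of the linear system $Qx = b$.

```python
import numpy as np

# Hypothetical convex quadratic: f(x) = 0.5 x^T Q x - b^T x,
# with Q positive definite so f is strictly convex.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# The first-order condition grad f(x) = Qx - b = 0 gives the minimizer.
x_star = np.linalg.solve(Q, b)
grad_at_min = Q @ x_star - b  # numerically zero at the minimizer
```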
Assume now that there is also a function $g : \mathbb{R}^n \to \mathbb{R}^m$, and that you are interested in the minimizer of $f(x)$ constrained to $g(x) = 0$. How do you find the point that solves
\[
\min f(x) \quad \text{s.t.} \quad g(x) = 0\,?
\]
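One classical route (the preview cuts off before the lecture's own answer, so this is an assumption about the intended step) is to introduce Lagrange multipliers $\lambda \in \mathbb{R}^m$ and write the first-order optimality conditions:

```latex
\nabla f(x) + \nabla g(x)^{\top} \lambda = 0, \qquad g(x) = 0.
```

This turns the constrained problem into a system of $n + m$ nonlinear equations in the unknowns $(x, \lambda)$, which is exactly the setting where a root-finding method such as Newton's method applies.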
1.1  Newton's method
To obtain the minimizer of an optimization problem we need to find $x$ such that $h(x) = 0$ for some system of equations. Newton's method does just this!
• If $x \in \mathbb{R}$, to find $x$ with $h(x) = 0$, Newton's method constructs the following iteration:
\[
x_{k+1} = x_k - \frac{h(x_k)}{h'(x_k)}.
\]
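As a minimal sketch of this scalar iteration (the test function $h(x) = x^2 - 2$ is an illustrative choice, not from the notes):

```python
def newton_scalar(h, h_prime, x0, tol=1e-10, max_iter=50):
    """Scalar Newton's method: x_{k+1} = x_k - h(x_k)/h'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = h(x) / h_prime(x)   # Newton step h(x_k)/h'(x_k)
        x -= step
        if abs(step) < tol:        # stop once the update is negligible
            break
    return x

# Example: h(x) = x^2 - 2 has the root sqrt(2); h'(x) = 2x.
root = newton_scalar(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0)
```

Starting from $x_0 = 1$, the iterates $1,\ 1.5,\ 1.4167,\ 1.41422,\ \ldots$ converge quadratically to $\sqrt{2}$.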
• For $n$ equations and $n$ unknowns, that is if $h : \mathbb{R}^n \to \mathbb{R}^n$ and $x \in \mathbb{R}^n$, then Newton's method is:
\[
x_{k+1} = x_k - J(x_k)^{-1}\, h(x_k),
\]
where $J(x_k)$ is the $n \times n$ matrix of partial derivatives, $[J(x_k)]_{ij} = \partial h_i(x_k) / \partial x_j$.
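A short sketch of the multivariate iteration, using NumPy; the example system (intersecting the unit circle with the line $x = y$) is illustrative and not from the notes. Note that in practice one solves the linear system $J(x_k)\, s = h(x_k)$ rather than forming $J(x_k)^{-1}$ explicitly:

```python
import numpy as np

# Illustrative system: h1(x, y) = x^2 + y^2 - 1,  h2(x, y) = x - y.
def h(v):
    x, y = v
    return np.array([x**2 + y**2 - 1.0, x - y])

def jacobian(v):
    # J_ij = partial h_i / partial x_j
    x, y = v
    return np.array([[2*x, 2*y],
                     [1.0, -1.0]])

def newton(h, J, x0, tol=1e-10, max_iter=50):
    """Multivariate Newton: x_{k+1} = x_k - J(x_k)^{-1} h(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), h(x))  # solve J s = h instead of inverting J
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

sol = newton(h, jacobian, [1.0, 0.5])  # converges to (1/sqrt(2), 1/sqrt(2))
```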