CSE 543T Homework 2 Solutions
Exercise 2.1.6
First of all, notice that since f(x) >= 0 for all x_i and a_i, f(x) = 0 is clearly not the maximum of f(x).
Therefore, 0 < x_i < 1 for all i. The original problem is equivalent to maximizing f(x) over the x_i s.t. 0 < x_i < 1
for all i
and Σ_{i=1}^n x_i
CSE 241 Algorithms and Data Structures
Fall 2010
Homework 4: Solutions
1. (5 pts) Thanks!
2. (15 points)
(a) (6 pts) The vertices are visited in the order S, A, C, D, E, B. In the breadth-first search tree the
root is S with children A, C, D. A has child E,
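The stated visit order can be reproduced with a standard queue-based BFS. The adjacency lists below are hypothetical, chosen only to be consistent with the visit order and tree described above (the homework's actual graph is not shown in this excerpt):

```python
from collections import deque

def bfs(adj, start):
    """Return vertices in BFS visit order and the BFS tree as a parent map."""
    order, parent = [], {start: None}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in parent:      # v not yet discovered
                parent[v] = u        # u becomes v's parent in the BFS tree
                queue.append(v)
    return order, parent

# Hypothetical adjacency lists: S's neighbors A, C, D are discovered first,
# then E (from A), then B (here reached from E).
adj = {"S": ["A", "C", "D"], "A": ["E"], "C": [], "D": [], "E": ["B"], "B": []}
order, parent = bfs(adj, "S")
```

With these lists the traversal visits S, A, C, D, E, B, and the parent map encodes the BFS tree rooted at S.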
Motivation for Gradient Methods
We have the closed-form information of f(x)
How to utilize this information
What information can we get?
Why is gradient so important?
Easy to find a descent direction locally
Hard to do so globally
The Rastrigin Function
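The Rastrigin function is a standard illustration of this local/global gap: its gradient is cheap to evaluate, yet plain gradient descent only reaches the local minimum nearest its starting point. A minimal sketch (the step size, iteration count, and starting points are illustrative choices, not from the slides):

```python
import math

def rastrigin(x):
    """Rastrigin function: global minimum 0 at the origin, many local minima."""
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def rastrigin_grad(x):
    return [2 * xi + 20 * math.pi * math.sin(2 * math.pi * xi) for xi in x]

def gradient_descent(grad, x0, step=0.001, iters=2000):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Started near the global minimum, descent converges to it ...
x_good = gradient_descent(rastrigin_grad, [0.1, -0.1])
# ... but started farther away, it gets trapped in a nearby local minimum.
x_bad = gradient_descent(rastrigin_grad, [3.2, -3.2])
```

The gradient gives an excellent descent direction at every point, but it carries no information about which basin of attraction the iterates are in.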
CSE543T: Algorithms for Nonlinear Optimization
Lecture 2: Unconstrained Optimization
Definitions and Conditions
Unconstrained Optimization
Minimize f(x)
where x is defined in X
What do we know? (Assumptions)
Minimize f(x) but f(x) is:
Unknown
Determin
CSE 241 Algorithms and Data Structures
Fall Semester, 2010
Homework 3: Solutions
1. (10 pts) The algorithm to take the union of two skiplists uses the ideas of the merge method used within mergesort
to merge two sorted lists. The algorithm described will
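A sketch of that merge step on plain sorted Python lists (the skiplist level and pointer bookkeeping from the actual solution is omitted here):

```python
def merge(a, b):
    """Merge two sorted lists into one sorted list, as in mergesort's merge step."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])   # at most one of these suffixes is nonempty
    out.extend(b[j:])
    return out
```

Each element is examined once, so merging lists of total length n takes O(n) time, which is the property the skiplist union algorithm inherits.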
CSE 241 Algorithms and Data Structures
Fall Semester, 2010
Homework 2: Solutions
1. (10 points) Hand simulation of hashing.
(a) Slots 1, 2, 3, 6, and 7 are null (i.e. an empty list). Slot 0 points to a list holding 51 followed by 22.
Slot 4 points to a li
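The hand simulation above is hashing with chaining: each slot holds a list of the keys that hash to it. A minimal sketch; the hash function h(k) = k mod m, the table size, and the keys below are assumptions for illustration (the homework's actual hash function and key sequence are not shown in this excerpt):

```python
class ChainedHashTable:
    """Hash table with separate chaining; each slot holds a Python list."""
    def __init__(self, m):
        self.m = m
        self.slots = [[] for _ in range(m)]

    def _hash(self, key):
        # Assumed hash function for illustration only.
        return key % self.m

    def insert(self, key):
        # New keys are appended at the tail of the slot's chain.
        self.slots[self._hash(key)].append(key)

table = ChainedHashTable(8)
for key in (51, 22, 12):
    table.insert(key)
# 51 % 8 == 3, 22 % 8 == 6, 12 % 8 == 4; every other slot stays an empty list.
```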
How to handle constraints
From outside the feasible set
From inside the feasible set
Any other way?
6.252 NONLINEAR PROGRAMMING
LECTURE 9: FEASIBLE DIRECTION METHODS
LECTURE OUTLINE
Conditional Gradient Method
Gradient Projection Methods
A feasible di
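A conditional gradient (Frank-Wolfe) iteration can be sketched as follows. The objective, the box feasible set, and the classical diminishing step size 2/(k+2) are illustrative assumptions, not taken from the lecture:

```python
def frank_wolfe(grad, lo, hi, x0, iters=500):
    """Conditional gradient (Frank-Wolfe) on the box lo <= x_i <= hi."""
    x = list(x0)
    for k in range(iters):
        g = grad(x)
        # Linear subproblem: minimize g . s over the box -> solution is a corner.
        s = [lo if gi > 0 else hi for gi in g]
        gamma = 2.0 / (k + 2)            # classical diminishing step size
        # Move along the feasible direction s - x; the iterate stays feasible.
        x = [xi + gamma * (si - xi) for xi, si in zip(x, s)]
    return x

# Illustrative problem: minimize ||x - (2, 0.5)||^2 over [0, 1]^2;
# the constrained minimizer is (1, 0.5).
grad = lambda x: [2 * (x[0] - 2), 2 * (x[1] - 0.5)]
x = frank_wolfe(grad, 0.0, 1.0, [0.0, 0.0])
```

The method never leaves the feasible set: each step is a convex combination of the current iterate and a feasible point, which is exactly the feasible-direction idea.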
In the last Chapter
minimize f(x)
subject to x ∈ X, where X is convex
Necessary condition for a local minimum:
∇f(x*)ᵀ(x − x*) >= 0 for any x ∈ X
Problems:
Does not work for nonconvex problems
Difficult to implement in practice
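The necessary condition can be checked numerically on a toy problem; the objective f(x) = (x − 2)² and the feasible interval X = [0, 1] below are assumptions for illustration:

```python
def check_stationarity(grad_at_xstar, xstar, points):
    """Check the necessary condition grad(x*) . (x - x*) >= 0 over sampled x in X."""
    return all(grad_at_xstar * (x - xstar) >= 0 for x in points)

# minimize f(x) = (x - 2)^2 over X = [0, 1]; the minimizer is x* = 1 (boundary point)
xstar = 1.0
grad = 2 * (1.0 - 2.0)                 # f'(1) = -2: nonzero at a constrained minimum
X = [i / 100 for i in range(101)]      # grid over [0, 1]
ok = check_stationarity(grad, xstar, X)  # -2 * (x - 1) >= 0 holds for all x <= 1
```

Note that the gradient need not vanish at x*; the condition only requires that no feasible direction is a descent direction. At an interior non-minimizer such as x = 0.5, the same check fails.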
In this Chapter
We will study a n
Summary of Lagrangian Theory
A necessary condition for constrained local
minimum
A unified treatment for equality and
inequality constraints
A dual view of constrained optimization
Constraints and variables are, in a certain sense,
interchangeable
A too
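As a small illustration of the Lagrangian conditions (this toy problem is not from the source): minimizing x² + y² subject to x + y = 1, the stationarity conditions of L(x, y, λ) = x² + y² + λ(x + y − 1) together with feasibility pin down both the primal minimizer and the multiplier:

```python
def solve_equality_example():
    """Minimize x^2 + y^2 subject to x + y = 1 via the Lagrangian conditions.

    Stationarity: dL/dx = 2x + lam = 0 and dL/dy = 2y + lam = 0  =>  x = y = -lam/2.
    Feasibility:  x + y = 1  =>  -lam = 1  =>  lam = -1.
    """
    lam = -1.0
    x = y = -lam / 2
    return x, y, lam

x, y, lam = solve_equality_example()
# The multiplier lam = -1 is the dual variable attached to the constraint x + y = 1.
```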
LEAST-SQUARES PROBLEMS

minimize f(x) = (1/2) ||g(x)||^2 = (1/2) Σ_{i=1}^m ||g_i(x)||^2

subject to x ∈ ℝ^n,

where g = (g_1, . . . , g_m), g_i : ℝ^n → ℝ^{r_i}.
Many applications:
Solution of systems of n nonlinear equations
with n unknowns
Model construction (curve fitting)
Neural
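As a concrete curve-fitting instance of the formulation above, consider fitting a line y ≈ a·t + b, so that g_i(a, b) = a·t_i + b − y_i. The minimizer solves the 2×2 normal equations; the data below are illustrative:

```python
def fit_line(ts, ys):
    """Least-squares line fit: minimize (1/2) sum_i (a*t_i + b - y_i)^2
    by solving the 2x2 normal equations for (a, b) via Cramer's rule."""
    n = len(ts)
    stt = sum(t * t for t in ts)            # sum of t_i^2
    st = sum(ts)                            # sum of t_i
    sty = sum(t * y for t, y in zip(ts, ys))  # sum of t_i * y_i
    sy = sum(ys)                            # sum of y_i
    det = stt * n - st * st
    a = (sty * n - st * sy) / det
    b = (stt * sy - st * sty) / det
    return a, b

# Data generated exactly from y = 3t + 1 is recovered exactly.
ts = [0.0, 1.0, 2.0, 3.0]
ys = [3 * t + 1 for t in ts]
a, b = fit_line(ts, ys)
```

Because each g_i is linear in (a, b), this is the special case where the least-squares problem reduces to one linear solve; nonlinear g_i require iterative methods instead.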
Conjugate Gradient Methods applied
to Nonquadratic Problems
min f(x), where f(x) is a general function
use the same algorithm:
x_{k+1} = x_k + α_k d_k
where α_k = argmin_α f(x_k + α d_k)
d_k = −g_k + β_k d_{k−1}
Approximation: d_0, d_1, … gradually lose
conjugacy. Remedies:
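One standard remedy is to restart the method periodically, resetting d_k to the steepest-descent direction −g_k (commonly every n iterations). A sketch using the Fletcher-Reeves choice of β_k, with a backtracking (Armijo) line search standing in for the exact argmin; the test function is an illustrative assumption:

```python
def nonlinear_cg(f, grad, x0, iters=200, restart=None):
    """Fletcher-Reeves nonlinear CG with periodic restart (d_k reset to -g_k)."""
    restart = restart or len(x0)           # common remedy: restart every n steps
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for k in range(iters):
        # Backtracking (Armijo) line search in place of the exact argmin over alpha.
        fx = f(x)
        slope = sum(gi * di for gi, di in zip(g, d))
        alpha = 1.0
        while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        if sum(gi * gi for gi in g_new) < 1e-18:
            break                          # gradient (numerically) zero: done
        if (k + 1) % restart == 0:
            d = [-gi for gi in g_new]      # restart: discard stale directions
        else:
            # Fletcher-Reeves: beta_k = ||g_{k+1}||^2 / ||g_k||^2
            beta = sum(gi * gi for gi in g_new) / sum(gi * gi for gi in g)
            d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Illustrative nonquadratic problem: f(x) = (x1 - 1)^4 + (x2 + 2)^2, minimized at (1, -2).
f = lambda x: (x[0] - 1) ** 4 + (x[1] + 2) ** 2
grad = lambda x: [4 * (x[0] - 1) ** 3, 2 * (x[1] + 2)]
x = nonlinear_cg(f, grad, [0.0, 0.0])
```

Without restarts the accumulated β_k d_{k−1} terms can keep the method tracking directions that are conjugate only for a quadratic model that no longer fits; the reset discards that stale history at the cost of an occasional plain steepest-descent step.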