slide8_handout-1



First Iteration (continued)

f(x⁰ + α d⁰) = f( (0, −3α) ) = 0 + 2(−3α)² + 0 + 3(−3α) = 18α² − 9α

d/dα (18α² − 9α) = 36α − 9 = 0  ⟹  α₀ = 1/4

x¹ = x⁰ + α₀ d⁰ = (0, −3/4)

Steepest Gradient Descent Example (33 / 45)

‖x⁰ − x¹‖₂ = 3/4 > ε, so we continue.

Second Iteration

∇f(x) = (2x₁ + x₂, 4x₂ + x₁ + 3)

d¹ = −∇f(x¹) = −(2(0) − 3/4, 4(−3/4) + 0 + 3) = (3/4, 0)

f(x¹ + α d¹) = f( (3α/4, −3/4) ) = (9/16)α² + 9/8 − (9/16)α − 9/4

d/dα ( (9/16)α² − (9/16)α − 9/8 ) = (9/8)α − 9/16 = 0  ⟹  α₁ = 1/2

x² = x¹ + α₁ d¹ = (3/8, −3/4)

‖x¹ − x²‖₂ = 3/8 > ε, so we would keep going.

Steepest Gradient Descent Convergence (34 / 45)

Let f : Rⁿ → R. If, for some x⁰,
- the sublevel set X₀ = {x ∈ Rⁿ : f(x) ≤ f(x⁰)} is bounded, and
- f is continuously differentiable on the convex hull of X₀,
then the steepest gradient descent algorithm initiated at x⁰ converges to a solution of the KKT conditions for f.

Recall from last week that a solution of the KKT conditions can be either a local or a global optimum. But if your optimization problem is convex, then it is guaranteed to be...
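To make the worked example concrete, here is a minimal sketch of steepest gradient descent with exact line search on this quadratic. The objective f(x) = x₁² + 2x₂² + x₁x₂ + 3x₂ is inferred from the gradient shown on the slide; the helper names (grad, steepest_descent), the tolerance eps, and the iteration cap are illustrative assumptions, not taken from the handout.

```python
import numpy as np

# Sketch only (not the handout's code): steepest gradient descent with
# exact line search on the example quadratic
#   f(x) = x1^2 + 2*x2^2 + x1*x2 + 3*x2 = 0.5 * x^T Q x + b^T x,
# where Q and b are inferred from the gradient shown on the slide.
Q = np.array([[2.0, 1.0],
              [1.0, 4.0]])
b = np.array([0.0, 3.0])


def grad(x):
    """Gradient of the quadratic: ∇f(x) = Q x + b."""
    return Q @ x + b


def steepest_descent(x0, eps=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = -grad(x)                  # steepest-descent direction d = -∇f(x)
        if np.linalg.norm(d) == 0.0:  # already stationary
            return x
        # Exact line search: for a quadratic, minimizing f(x + alpha*d)
        # over alpha gives alpha = (d^T d) / (d^T Q d).
        alpha = (d @ d) / (d @ Q @ d)
        x_new = x + alpha * d
        if np.linalg.norm(x - x_new) <= eps:  # stop when ||x_k - x_{k+1}|| <= eps
            return x_new
        x = x_new
    return x


print(steepest_descent([0.0, 0.0]))  # first iterates: (0, -3/4), (3/8, -3/4), ...
```

Starting from x⁰ = (0, 0), this reproduces the iterates above, x¹ = (0, −3/4) and x² = (3/8, −3/4), and continues toward the point (3/7, −6/7) where the gradient of this quadratic vanishes.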