Midterm07 - HACETTEPE UNIVERSITY DEPT. OF ELECTRICAL AND ELECTRONICS ENGINEERING

HACETTEPE UNIVERSITY
DEPT. OF ELECTRICAL AND ELECTRONICS ENGINEERING
ELE 704 Optimization
Midterm Examination, 17 April 2007

Name :
ID # :

Question:   1    2    3    Total
Mark:      20   55   35    110

Q1. (20pts) Thinking of the Gradient Descent, Steepest Descent and Newton's Algorithms,

(a) (10pts) What do you think the origin of these iterative methods for optimization is?

(b) (10pts) Relying on this observation, give a rough sketch of how to find the update equations for

     i. (5pts) the Gradient Descent Algorithm,
    ii. (5pts) Newton's Algorithm.

Your answers should be no longer than 3-4 equations and a few lines.

A1. (a) These three algorithms can be considered to be derived from the Taylor series expansion of a function around a point x, i.e.

    f(x + \Delta x) \approx f(x) + \nabla f(x)^T \Delta x + \frac{1}{2} \Delta x^T H(x) \Delta x + \text{residual},

where the number of terms included depends on the algorithm.

(b) i. The Gradient Descent Algorithm utilizes the linear approximation, i.e.

    f(x + \Delta x) \approx f(x) + \nabla f(x)^T \Delta x.

In order to construct an iterative algorithm which reduces the cost function at every iteration, the second term on the RHS has to be negative, i.e. \Delta x should point in a descent direction:

    \nabla f(x)^T \Delta x < 0.

Maximum descent occurs for \Delta x = -\nabla f(x). Including a step size parameter \alpha as well, the update equation becomes

    x_{k+1} = x_k + \alpha \Delta x_k = x_k - \alpha \nabla f(x_k).
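As an illustration, here is a minimal Python sketch of this update rule. The fixed step size, the gradient-norm stopping test, and the sample quadratic cost are assumptions made for the example; the exam itself does not specify them.

    import numpy as np

    def gradient_descent(grad_f, x0, alpha=0.1, tol=1e-8, max_iter=1000):
        """Iterate x_{k+1} = x_k - alpha * grad_f(x_k) until the gradient is small."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:
                break  # gradient is (numerically) zero: stationary point reached
            x = x - alpha * g
        return x

    # Sample cost f(x) = x1^2 + 2*x2^2, whose gradient is [2*x1, 4*x2].
    grad_f = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
    print(gradient_descent(grad_f, x0=[3.0, -2.0]))  # converges to [0, 0]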

ii. Newton's Algorithm utilizes the quadratic approximation, i.e.

    f(x + \Delta x) \approx f(x) + \nabla f(x)^T \Delta x + \frac{1}{2} \Delta x^T H(x) \Delta x.

The Newton step \Delta x which decreases the cost function the most can be found by setting the derivative of the RHS with respect to \Delta x to zero, which yields

    \Delta x = -[H(x)]^{-1} \nabla f(x).

Similar to the Gradient Descent Method, adding a step size parameter, the update equation becomes

    x_{k+1} = x_k + \alpha \Delta x_k = x_k - \alpha [H(x_k)]^{-1} \nabla f(x_k).
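Again as an illustration, here is a minimal Python sketch of this damped Newton update. It assumes the Hessian is supplied explicitly and is invertible at every iterate; solving a linear system instead of forming the inverse is a standard numerical choice, not something the exam prescribes.

    import numpy as np

    def newton_method(grad_f, hess_f, x0, alpha=1.0, tol=1e-10, max_iter=50):
        """Iterate x_{k+1} = x_k - alpha * H(x_k)^{-1} * grad_f(x_k)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:
                break
            # Solve H(x) dx = -g rather than explicitly inverting H(x).
            dx = np.linalg.solve(hess_f(x), -g)
            x = x + alpha * dx
        return x

    # Same sample cost f(x) = x1^2 + 2*x2^2; Newton converges in a single full step.
    grad_f = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
    hess_f = lambda x: np.diag([2.0, 4.0])
    print(newton_method(grad_f, hess_f, x0=[3.0, -2.0]))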
Q2. (55pts) Assume that f(x) : \mathbb{R}^n \to \mathbb{R} is a function which we would like to minimize using the Gradient Descent Algorithm. The positive minimum and maximum eigenvalues of the Hessian H(x) of f are m and M, respectively. Being strongly convex, the function f(x) has a unique minimum at x^\star.
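To make the roles of m and M concrete, here is a small Python sketch under an illustrative assumption not taken from the exam: for the quadratic cost f(x) = (1/2) x^T A x with A symmetric positive definite, the Hessian equals A everywhere, so m and M are simply its smallest and largest eigenvalues.

    import numpy as np

    # Illustrative strongly convex quadratic: f(x) = 0.5 * x^T A x.
    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])

    # eigvalsh returns the eigenvalues of a symmetric matrix in ascending order.
    eigvals = np.linalg.eigvalsh(A)
    m, M = eigvals[0], eigvals[-1]
    print(f"m = {m:.4f}, M = {M:.4f}")  # m*I <= H(x) <= M*I for every x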
