HACETTEPE UNIVERSITY
DEPT. OF ELECTRICAL AND ELECTRONICS ENGINEERING
ELE 704 Optimization
Midterm Examination, 17 April 2007

Name :
ID # :

Question | 1  | 2  | 3  | Total
Mark     | 20 | 55 | 35 | 110

Q1. (20pts) Thinking of the Gradient Descent, Steepest Descent and Newton's algorithms,

(a) (10pts) What do you think the origin of these iterative methods for optimization is?

(b) (10pts) Relying on this observation, give a rough sketch of how to find the update equations for

    i. (5pts) the Gradient Descent Algorithm,
    ii. (5pts) Newton's Algorithm.

Your answers should be no longer than 3-4 equations and a few lines.

A1. (a) These three algorithms can be considered to be derived from the Taylor series expansion of a function around a point $x$, i.e.

$$ f(x + \Delta x) \approx f(x) + \nabla^T f(x)\, \Delta x + \frac{1}{2} \Delta x^T H(x)\, \Delta x + \text{residual}, $$

where the number of terms included depends on the algorithm.

(b) i. The Gradient Descent Algorithm utilizes the linear approximation, i.e.

$$ f(x + \Delta x) \approx f(x) + \nabla^T f(x)\, \Delta x. $$

In order to construct an iterative algorithm which reduces the cost function at every iteration, the second term on the right-hand side has to be negative, i.e. $\Delta x$ should point in a descent direction, $\nabla^T f(x)\, \Delta x < 0$. Maximum descent occurs for $\Delta x = -\nabla f(x)$. Also including a step size parameter, the update equation becomes

$$ x_{k+1} = x_k + \alpha\, \Delta x_k = x_k - \alpha\, \nabla f(x_k). $$
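For illustration only (not part of the exam solution), here is a minimal NumPy sketch of this fixed-step update; the function names, step size, tolerance, and the quadratic test function are all assumptions:

```python
import numpy as np

def gradient_descent(grad_f, x0, alpha=0.1, tol=1e-8, max_iter=1000):
    """Iterate x_{k+1} = x_k - alpha * grad_f(x_k) until the gradient is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)                # gradient at the current iterate
        if np.linalg.norm(g) < tol:  # stationarity test
            break
        x = x - alpha * g            # fixed-step gradient descent update
    return x

# Minimize f(x) = x1^2 + 2*x2^2, whose gradient is (2*x1, 4*x2).
x_min = gradient_descent(lambda x: np.array([2 * x[0], 4 * x[1]]), [3.0, -2.0])
print(x_min)  # approaches the minimizer (0, 0)
```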
ii. Newton's Algorithm utilizes the quadratic approximation, i.e.

$$ f(x + \Delta x) \approx f(x) + \nabla^T f(x)\, \Delta x + \frac{1}{2} \Delta x^T H(x)\, \Delta x. $$

The Newton step $\Delta x$ which decreases the cost function the most is found by setting the derivative of the right-hand side with respect to $\Delta x$ to zero, which yields

$$ \Delta x = -[H(x)]^{-1} \nabla f(x). $$

Similar to the Gradient Descent Method, adding a step size parameter, the update equation becomes

$$ x_{k+1} = x_k + \alpha\, \Delta x_k = x_k - \alpha\, [H(x_k)]^{-1} \nabla f(x_k). $$
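Again for illustration only, a minimal NumPy sketch of this update under the same assumptions; solving the linear system $H\,\Delta x = \nabla f(x)$ avoids forming the inverse explicitly:

```python
import numpy as np

def newton_method(grad_f, hess_f, x0, alpha=1.0, tol=1e-10, max_iter=50):
    """Iterate x_{k+1} = x_k - alpha * H(x_k)^{-1} grad_f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        dx = np.linalg.solve(hess_f(x), g)  # solve H * dx = g rather than forming H^{-1}
        x = x - alpha * dx                  # (damped) Newton update
    return x

# Same quadratic example: the Hessian is the constant matrix diag(2, 4),
# so a full step (alpha = 1) reaches the minimizer in one iteration.
x_min = newton_method(lambda x: np.array([2 * x[0], 4 * x[1]]),
                      lambda x: np.diag([2.0, 4.0]),
                      [3.0, -2.0])
print(x_min)
```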
Q2. (55pts) Assume that $f(x) : \mathbb{R}^n \to \mathbb{R}$ is a function which we would like to minimize using the Gradient Descent Algorithm. The positive minimum and maximum eigenvalues of the Hessian $H(x)$ of $f$ are $m$ and $M$, respectively. Being strongly convex, the function $f(x)$ has a unique minimum at $x^*$, with $p^* = f(x^*)$.
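As a reading aid (a standard characterization of strong convexity, not stated in the preview), these eigenvalue bounds are equivalent to

$$ m I \preceq H(x) \preceq M I, \qquad 0 < m \le M, $$

which is the usual starting point for deriving linear convergence bounds of the form $f(x_k) - p^* \le c^k \big( f(x_0) - p^* \big)$ for the Gradient Descent Algorithm.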