Math 233: Hessians and Unconstrained Optimization (Fall 2001)

The Big Picture: Second derivatives, whether in single- or multivariable calculus, measure the rate of change of slopes (i.e., the curvature of the function f). What makes problems harder in multivariable calculus is that we have slopes in infinitely many directions (directional derivatives). So we somehow need to examine how this infinite collection of slopes changes in order to determine the curvature and shape of f near critical points. This brings to mind something like second directional derivatives.

We summarized the information about slopes by creating a vector of partial derivatives, the gradient. In a similar way, we can summarize the known information about the rate of change of slopes by creating a matrix of second partial derivatives, the Hessian. So here is what we know:

Function:  f(x, y)

Gradient:  \nabla f(x, y) = f_x \,\vec{i} + f_y \,\vec{j} = \begin{bmatrix} f_x & f_y \end{bmatrix}

Hessian:   H f(x, y) = \begin{bmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{bmatrix}

For example, take the function f(x, y) = 5xy^3. Then the gradient is

  \nabla f(x, y) = 5y^3 \,\vec{i} + 15xy^2 \,\vec{j}

and the Hessian is

  H f(x, y) = \begin{bmatrix} 0 & 15y^2 \\ 15y^2 & 30xy \end{bmatrix}

(note that f_{xy} = f_{yx}, as it almost always will be: by Clairaut's theorem the mixed partials agree wherever they are continuous).

We can evaluate these functions at specific points, for example at x = 3 and y = 2:

  \nabla f(3, 2) = 40 \,\vec{i} + 180 \,\vec{j}   (so (3, 2) isn't a critical point)

and

  H f(3, 2) = \begin{bmatrix} 0 & 60 \\ 60 & 180 \end{bmatrix}.

Some Matrix Theory: Suppose that we have an n-row by n-column (or square) matrix M. Then M is positive definite if the determinants of all of its leading principal submatrices (the submatrices made up of its first k rows and columns, for k = 1, ..., n) are all positive.
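The example above can be checked symbolically. The sketch below (a supplement, not part of the original handout; it assumes the third-party SymPy library is available) computes the gradient and Hessian of f(x, y) = 5xy^3, evaluates them at (3, 2), and applies the leading-principal-minor test for positive definiteness described in the matrix theory section:

```python
# Symbolic check of the handout's example f(x, y) = 5*x*y**3 using SymPy.
from sympy import symbols, diff, hessian, Matrix

x, y = symbols('x y')
f = 5 * x * y**3

# Gradient: the vector of first partials (f_x, f_y).
grad = Matrix([diff(f, x), diff(f, y)])    # (5*y**3, 15*x*y**2)

# Hessian: the matrix of second partials.
H = hessian(f, (x, y))                     # [[0, 15*y**2], [15*y**2, 30*x*y]]

# Evaluate at the point (3, 2) from the handout.
grad_32 = grad.subs({x: 3, y: 2})          # (40, 180) -> not a critical point
H_32 = H.subs({x: 3, y: 2})                # [[0, 60], [60, 180]]

# Positive-definiteness test: M is positive definite iff the determinant of
# every leading principal (top-left k x k) submatrix is positive.
minors = [H_32[:k, :k].det() for k in (1, 2)]

print(grad_32.T)   # Matrix([[40, 180]])
print(H_32)        # Matrix([[0, 60], [60, 180]])
print(minors)      # [0, -3600] -> H(3, 2) is not positive definite
```

Here the first minor is 0 and the second is -3600, so the minor test fails, which is consistent with the handout: (3, 2) is not even a critical point, so no definiteness classification applies there.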
FACKLER, Fall '07