Preliminary draft only: please check for final version

ARE211, Fall 2007
LECTURE #18: TUE, OCT 30, 2007   PRINT DATE: AUGUST 21, 2007   (CALCULUS3)

Contents

4. Univariate and Multivariate Differentiation (cont)
4.4. Multivariate Calculus: functions from R^n to R^m
4.5. Four graphical examples
4.6. Taylor's Theorem

4. Univariate and Multivariate Differentiation (cont)

4.4. Multivariate Calculus: functions from R^n to R^m

We'll now generalize what we did last time to a function f : R^n → R^m. In general, if you have a function from R^n to R^m, what is the notion of slope (or gradient or derivative)? Not surprisingly, it is an m × n matrix. The matrix that is the derivative of a function from R^n to R^m is called the Jacobian matrix of that function.

Note well: I tend to talk about the Jacobian of a function when what I mean is the Jacobian matrix. This is potentially confusing: the Jacobian matrix has a determinant, which is called the Jacobian determinant, and there are (respectable) books that use the unqualified word Jacobian to refer to the determinant, not the matrix. De Groot is one of these. So you need to be aware of which is which.

Example: A particular kind of function from R^n to R^n that we care about is the gradient function ∇f. Specifically, think of the gradient as n functions from R^n to R, all stacked on top of each other. The derivative of the gradient is the matrix constructed by stacking the derivatives of each of these component functions, viewed as row vectors, on top of each other. E.g., the first row will be the derivative of the first partial, i.e., ∇f_1(·)′. The derivative of the derivative of a function is called the Hessian of that function. The Hessian of f is, of course, the Jacobian of the gradient of f.
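The idea that the Jacobian collects the partials of each component function row by row can be checked numerically. The following is a minimal sketch (not from the notes): a forward-difference approximation to the Jacobian of an arbitrary f : R^n → R^m, tried on an illustrative map from R^2 to R^2 chosen here for the example.

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Forward-difference approximation to the m x n Jacobian of f at x.
    Column j holds the partials of every component of f with respect to x_j."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (np.asarray(f(x + e)) - fx) / h
    return J

# Illustrative map from R^2 to R^2: f(x, y) = (x^2 * y, 5x + sin(y)).
# The analytic Jacobian is [[2xy, x^2], [5, cos(y)]], so at (1, 0) it is [[0, 1], [5, 1]].
f = lambda v: np.array([v[0]**2 * v[1], 5 * v[0] + np.sin(v[1])])
print(np.round(jacobian(f, [1.0, 0.0]), 3))
```

Note the shape: m rows (one per component function), n columns (one per input variable), matching the m × n claim above.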
To visualize the derivative and differential associated with f : R^n → R^m, it is helpful to think, as usual, of f as a vertical stack of m functions f_i : R^n → R. It is then natural to think of the derivative of f as a vertical stack of the derivatives of the f_i's. That is,

$$
f'(\cdot) \;\equiv\; Jf(\cdot) \;=\;
\begin{bmatrix}
\nabla f_1(\cdot)' \\
\nabla f_2(\cdot)' \\
\vdots \\
\nabla f_m(\cdot)'
\end{bmatrix},
$$

where each ∇f_i(·) is a column vector consisting of the partials of f_i. In the special case of ∇f : R^n → R^n, we have

$$
\nabla f'(\cdot) \;\equiv\; J\nabla f(\cdot) \;\equiv\; Hf(\cdot) \;=\;
\begin{bmatrix}
\nabla f_1(\cdot)' \\
\nabla f_2(\cdot)' \\
\vdots \\
\nabla f_n(\cdot)'
\end{bmatrix},
$$

where each ∇f_i(·) is the gradient of the i'th partial of f. …
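The identity Hf(·) = J∇f(·) can also be verified numerically: build the gradient as a function from R^n to R^n, then stack the derivatives of each partial as rows. This is a sketch with an illustrative function and step sizes of my own choosing, not from the notes.

```python
import numpy as np

def grad(f, x, h=1e-5):
    """Central-difference gradient of a scalar function f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-5):
    """Hessian of f at x, computed as the Jacobian of the gradient:
    column j is the (numerical) derivative of grad(f) in direction x_j,
    so row i collects the derivatives of the i'th partial."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros_like(x)
        e[j] = h
        H[:, j] = (grad(f, x + e) - grad(f, x - e)) / (2 * h)
    return H

# Illustrative scalar function f(x, y) = x^2 y + y^3.
# Analytic Hessian: [[2y, 2x], [2x, 6y]], so at (1, 2) it is [[4, 2], [2, 12]].
f = lambda v: v[0]**2 * v[1] + v[1]**3
print(np.round(hessian(f, [1.0, 2.0]), 2))
```

The symmetry of the result (the (1,2) and (2,1) entries agree) is Young's theorem on the equality of cross-partials, which holds here because f is smooth.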
This note was uploaded on 08/01/2008 for the course ARE 211 taught by Professor Simon during the Fall '07 term at Berkeley.