ARE211, Fall 2007

CALCULUS3: TUE, OCT 30, 2007 PRINTED: NOVEMBER 8, 2007 (LEC# 18)

Contents

4. Univariate and Multivariate Differentiation (cont)
4.4. Multivariate Calculus: functions from R^n to R^m
4.5. Four graphical examples
4.6. Taylor's Theorem

4. Univariate and Multivariate Differentiation (cont)

4.4. Multivariate Calculus: functions from R^n to R^m

We'll now generalize what we did last time to a function f : R^n → R^m. In general, if you have a function from R^n to R^m, what is the notion of slope (or gradient or derivative)? Not surprisingly, it is an m × n matrix. The matrix that is the derivative of a function from R^n to R^m is called the Jacobian matrix of that function.

Note well: I tend to talk about the Jacobian of a function when what I mean is the Jacobian matrix. This is potentially confusing: the Jacobian matrix has a determinant, which is called the Jacobian determinant, and there are (respectable) books that use the unqualified word "Jacobian" to refer to the determinant rather than the matrix. De Groot is one of these. So you need to be aware of which is which.

Example: A particular kind of function from R^n to R^n that we care about is the gradient function ∇f. Specifically, think of the gradient as n functions from R^n to R, all stacked on top of each other. The derivative of the gradient is the matrix constructed by stacking the gradients of each of these component functions, viewed as row vectors, on top of each other; e.g., the first row will be the gradient of the first partial, i.e., ∇f_1(·)'. The derivative of the derivative of a function is called the Hessian of that function. The Hessian of f is, of course, the Jacobian of the gradient of f.

To visualize the derivative and differential associated with f : R^n → R^m, it is helpful, as usual, to think of f as a vertical stack of m functions f_i : R^n → R.
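The m × n "stack of gradients" structure of the Jacobian can be made concrete numerically: perturb one coordinate of x at a time and record how each component of f responds. The following is a rough finite-difference sketch in NumPy; the example function f : R² → R³ and the evaluation point are hypothetical, not from the notes.

```python
import numpy as np

def jacobian_fd(f, x, h=1e-6):
    """Approximate the m x n Jacobian of f: R^n -> R^m at x by
    central finite differences. Column j holds the partials of all
    m components of f with respect to x_j, so row i is (an
    approximation to) the gradient of f_i, laid out as a row vector."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    m, n = fx.size, x.size
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (np.asarray(f(x + e)) - np.asarray(f(x - e))) / (2 * h)
    return J

# Hypothetical example: f(x1, x2) = (x1*x2, x1^2, sin(x2)), so m = 3, n = 2.
f = lambda x: np.array([x[0] * x[1], x[0] ** 2, np.sin(x[1])])
J = jacobian_fd(f, np.array([1.0, 0.0]))
# Analytic Jacobian at (1, 0): [[0, 1], [2, 0], [0, 1]]
```

The differential is then just matrix multiplication: for a displacement dx, the approximate change in f is `J @ dx`, which is the formula df_x(dx) = Jf(x)·dx developed below.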
It is then natural to think of the derivative of f as a vertical stack of the derivatives of the f_i's. That is,

\[
f'(\cdot) \;\equiv\; Jf(\cdot) \;=\;
\begin{bmatrix}
\nabla f_1(\cdot)' \\
\nabla f_2(\cdot)' \\
\vdots \\
\nabla f_m(\cdot)'
\end{bmatrix},
\]

where each ∇f_i(·) is a column vector consisting of the partials of f_i. In the special case of ∇f : R^n → R^n, we have

\[
\nabla f'(\cdot) \;\equiv\; J\nabla f(\cdot) \;\equiv\; Hf(\cdot) \;=\;
\begin{bmatrix}
\nabla f_1(\cdot)' \\
\nabla f_2(\cdot)' \\
\vdots \\
\nabla f_n(\cdot)'
\end{bmatrix},
\]

where each ∇f_i(·) is now the gradient of the i'th partial of f.

Now, returning to a general function f : R^n → R^m, think of the differential of f at x, i.e., df_x(·) = Jf(x)(·), as a vertical stack consisting of the differentials of the f_i's at x, i.e., df_x(dx) = Jf(x)(dx) = Jf(x) · dx.
This note was uploaded on 08/01/2008 for the course ARE 211 taught by Professor Simon during the Fall '07 term at Berkeley.