18.338J/16.394J: The Mathematics of Infinite Random Matrices
Essentials of Finite Random Matrix Theory
Alan Edelman
Handout #6, Tuesday, September 28, 2004

This handout provides the essential elements needed to understand finite random matrix theory. A cursory observation should reveal that the tools for infinite random matrix theory are quite different from the tools for finite random matrix theory. Nonetheless, there are significantly more published applications that use finite random matrix theory as opposed to infinite random matrix theory. Our belief is that many of the results that have been historically derived using finite random matrix theory can be reformulated and answered using infinite random matrix theory. In this sense, it is worth recognizing that in many applications it is an integral of a function of the eigenvalues that is more important than the mere distribution of the eigenvalues. For finite random matrix theory, the tools that often come into play when setting up such integrals are the Matrix Jacobians, the Joint Eigenvalue Densities, and the Cauchy-Binet theorem. We describe these in subsequent sections.

Matrix and Vector Differentiation

In this section, we concern ourselves with the differentiation of matrices. Differentiating matrix and vector functions is not significantly harder than differentiating scalar functions, except that we need notation to keep track of the variables. We titled this section "matrix and vector" differentiation, but of course it is the function that we differentiate. The matrix or vector is just a notational package for the scalar functions involved. In the end, a derivative is nothing more than the "linearization" of all the involved functions. We find it useful to think of this linearization both symbolically (for manipulative purposes) as well as numerically (in the sense of small numerical perturbations). The differential notation idea captures these viewpoints very well.
We begin with the familiar product rule for scalars, d(uv) = u(dv) + v(du), from which we can derive that d(x^3) = 3x^2 dx. We refer to dx as a differential. We all unconsciously interpret the "dx" symbolically as well as numerically. Sometimes it is nice to confirm on a computer that

    ((x + ε)^3 - x^3) / ε ≈ 3x^2.    (1)

I do this by taking x to be 1 or 2 or randn(1) and ε to be 0.001 or 0.0001. The product rule holds for matrices as well: d(UV) = U(dV) + (dU)V. In the examples we will see some symbolic and numerical interpretations.

Example 1: Y = X^3

We use the product rule to differentiate Y(X) = X^3 to obtain that

    d(X^3) = X^2(dX) + X(dX)X + (dX)X^2.
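The numerical confirmation of (1) suggested above can be run directly. Here is a minimal Python sketch; the particular choices x = 2 and ε = 0.0001 are two of the values the text suggests:

```python
# Numerical check of d(x^3) = 3x^2 dx:
# the finite difference ((x + eps)^3 - x^3) / eps should be close
# to the exact derivative 3x^2, with an error of order eps.
x = 2.0
eps = 1e-4

finite_diff = ((x + eps) ** 3 - x ** 3) / eps
exact = 3 * x ** 2

print(finite_diff, exact, abs(finite_diff - exact))  # error about 6e-4 here
```

Expanding (x + ε)^3 shows why: the difference quotient equals 3x^2 + 3xε + ε^2, so the error term is 3xε to leading order.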
When I introduce undergraduate students to matrix multiplication, I tell them that matrices are like scalars, except that they do not commute. The numerical (or first-order perturbation theory) interpretation applies, but it may seem less familiar at first. Numerically, take X = randn(n) and E = randn(n), for ε = 0.001 say, and then compute

    ((X + εE)^3 - X^3) / ε ≈ X^2 E + XEX + EX^2.    (2)

This is the matrix version of (1). Holding X fixed and allowing E to vary, the right-hand side is a linear function of E.
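The matrix experiment in (2) can be sketched the same way. The following assumes NumPy as a stand-in for the handout's MATLAB-style randn(n); the values n = 4 and the seed are illustrative choices, not from the text:

```python
import numpy as np

# Numerical check of d(X^3) = X^2(dX) + X(dX)X + (dX)X^2:
# perturb X in the direction E and compare the finite difference
# against the three non-commuting terms of the differential.
rng = np.random.default_rng(0)
n, eps = 4, 1e-6
X = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))

finite_diff = (np.linalg.matrix_power(X + eps * E, 3)
               - np.linalg.matrix_power(X, 3)) / eps
exact = X @ X @ E + X @ E @ X + E @ X @ X

print(np.max(np.abs(finite_diff - exact)))  # small, of order eps
```

Note that replacing the right-hand side by 3X^2 E, as the scalar formula would suggest, makes the discrepancy large: the three terms X^2 E, XEX, and EX^2 really are distinct because X and E do not commute.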