EECS 227A: Nonlinear and Convex Optimization                              Fall 2009

Linear Algebra / Analysis and Calculus Review

Chapter 1  Linear Algebra

1.1 Matrices

1.1.1 Basics

Nullspace. The nullspace (or kernel) of an m × n matrix A is the following subspace of R^n:

    N(A) := { x ∈ R^n : Ax = 0 }.

Range and rank. The range (or image) of an m × n matrix A is defined as the following subspace of R^m:

    R(A) := { Ax : x ∈ R^n }.

The range is simply the span of the columns of A. The dimension of the range is called the rank of the matrix. As we will see later, the rank r cannot exceed either dimension of A: r ≤ min(m, n). By the rank–nullity theorem, it equals n minus the dimension of the nullspace: r = n − dim N(A).

A basic result of linear algebra states that any vector x ∈ R^n can be decomposed as x = y + z, with y ∈ N(A), z ∈ R(A^T), and y, z orthogonal. (One way to prove this is via the singular value decomposition, seen later.)

Symmetric matrices. A square matrix A ∈ R^{n×n} is symmetric if and only if A = A^T. The set of symmetric n × n matrices is denoted S^n.

Orthogonal matrices. A square n × n matrix U = [u_1, ..., u_n] is orthogonal if its columns form an orthonormal basis. The condition u_i^T u_j = 0 if i ≠ j, and 1 otherwise, translates in matrix terms to U^T U = I_n, where I_n is the n × n identity matrix.

Unitary matrices. An n × n complex matrix U is unitary if UU* = U*U = I, where U* is the conjugate transpose of U.

Normal matrices. An n × n matrix A is normal if AA* = A*A.

1.1.2 Eigenvalue decomposition

A fundamental result of linear algebra states that any symmetric matrix can be decomposed as a weighted sum of normalized dyads that are orthogonal to each other. Precisely, for every A ∈ S^n, there exist real numbers λ_1, ..., λ_n and an orthonormal basis (u_1, ..., u_n) such that

    A = Σ_{i=1}^n λ_i u_i u_i^T.

In more compact matrix notation, we have A = U Λ U^T, with Λ = diag(λ_1, ..., λ_n) and U = [u_1, ..., u_n].
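The decomposition above can be checked numerically. The following sketch (using a small symmetric matrix chosen purely for illustration) computes the eigenvalue decomposition with NumPy's eigh routine, rebuilds A as the sum of dyads λ_i u_i u_i^T, and verifies that the columns of U are orthonormal:

```python
import numpy as np

# A small symmetric matrix (illustrative values, not from the notes).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# For symmetric input, eigh returns real eigenvalues and an
# orthonormal set of eigenvectors (columns of U).
lam, U = np.linalg.eigh(A)

# Rebuild A as the weighted sum of normalized dyads lam_i * u_i u_i^T.
A_rebuilt = sum(lam[i] * np.outer(U[:, i], U[:, i]) for i in range(len(lam)))

assert np.allclose(A, A_rebuilt)        # A = U Lambda U^T
assert np.allclose(U.T @ U, np.eye(2))  # columns of U are orthonormal
```

The same reconstruction can be written more compactly as `U @ np.diag(lam) @ U.T`; the dyad form mirrors the sum in the notes.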
The numbers λ_1, ..., λ_n are called the eigenvalues of A, and are the roots of the characteristic equation

    det(λI_n − A) = 0,

where I_n is the n × n identity matrix. Eigenvalues and eigenvectors satisfy A u_i = λ_i u_i. Some other properties of eigenvalues:

• det(A) = Π_{i=1}^n λ_i
• Tr(A) = Σ_{i=1}^n λ_i

For arbitrary square matrices, eigenvalues can be complex. In the symmetric case, the eigenvalues are always real. There are only n (not necessarily distinct) solutions to the above equation.

It is interesting to see what the eigenvalue decomposition of a given symmetric matrix A tells us about the corresponding quadratic form q_A(x) := x^T A x. With A = U Λ U^T, we have

    q_A(x) = (U^T x)^T Λ (U^T x) = Σ_{i=1}^n λ_i (u_i^T x)^2.
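A quick numerical sketch of these facts, again using an illustrative 2 × 2 symmetric matrix: the determinant is the product of the eigenvalues, the trace is their sum, and the quadratic form x^T A x agrees with its spectral expansion Σ_i λ_i (u_i^T x)^2 for a random test vector.

```python
import numpy as np

# Illustrative symmetric matrix (not from the notes).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, U = np.linalg.eigh(A)

# det(A) is the product of eigenvalues; Tr(A) is their sum.
assert np.isclose(np.linalg.det(A), np.prod(lam))
assert np.isclose(np.trace(A), np.sum(lam))

# Quadratic form: x^T A x equals sum_i lam_i * (u_i^T x)^2.
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
q_direct = x @ A @ x
q_spectral = np.sum(lam * (U.T @ x) ** 2)
assert np.isclose(q_direct, q_spectral)
```

Note that eigh guarantees real eigenvalues only because A is symmetric; for a general square matrix one would use np.linalg.eig, and the eigenvalues may be complex.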