Summary of Matrix Theory

In the text, we assume you are already somewhat familiar with matrix theory and with the solution of linear systems of equations. However, for the purposes of review we present here a brief summary of matrix theory with an emphasis on the results needed in control theory. For further study, see Strang (1988) and Gantmacher (1959).

C.1 Matrix Definitions

An array of numbers arranged in rows and columns is referred to as a matrix.
If A is a matrix with m rows and n columns, an m × n (read "m by n") matrix, it is denoted by

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},   (C.1)

where the entries a_{ij} are its elements. If m = n, then the matrix is square; otherwise it is rectangular. Sometimes a matrix is simply denoted by A = [a_{ij}]. If m = 1 or n = 1, then the matrix reduces to a row vector or a column vector, respectively. A submatrix of A is the matrix with certain rows and columns removed.
C.2 Elementary Operations on Matrices

If A and B are matrices of the same dimension, then their sum is defined by

C = A + B,   (C.2)

where

c_{ij} = a_{ij} + b_{ij}.   (C.3)

That is, the addition is done element by element. It is easy to verify the following properties of matrices (the commutative and associative laws for addition):

A + B = B + A,   (C.4)

(A + B) + C = A + (B + C).   (C.5)

Two matrices can be multiplied if they are compatible. Let A = m × n and B = n × p. Then the m × p matrix

C = AB   (C.6)

is the product of the two matrices, where

c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.   (C.7)

Matrix multiplication satisfies the associative law

A(BC) = (AB)C,   (C.8)

but not the commutative law; that is, in general,

AB \neq BA.   (C.9)

C.3 Trace

The trace of a square matrix is the sum of its diagonal elements:

\mathrm{trace}\,A = \sum_{i=1}^{n} a_{ii}.   (C.10)
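As a quick numerical illustration of these operations (a NumPy sketch; the matrices here are arbitrary examples, not taken from the text):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

C = A + B          # element-by-element sum, Eq. (C.3)
P = A @ B          # matrix product, Eq. (C.7)
print(C)
print(P)
print(np.trace(A))                   # sum of diagonal elements, Eq. (C.10)
print(np.array_equal(A @ B, B @ A))  # False: multiplication is not commutative
```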
C.4 Transpose

The n × m matrix obtained by interchanging the rows and columns of A is called the transpose of matrix A:

A^T = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}.

A matrix is said to be symmetric if

A^T = A.   (C.11)

It is easy to show that

(AB)^T = B^T A^T,   (C.12)

(ABC)^T = C^T B^T A^T,   (C.13)

(A + B)^T = A^T + B^T.   (C.14)

C.5 Determinant and Matrix Inverse

The determinant of a square matrix is defined by Laplace's expansion:

\det A = \sum_{j=1}^{n} a_{ij} \gamma_{ij} \quad \text{for any } i = 1, 2, \ldots, n,   (C.15)

where

\gamma_{ij} = (-1)^{i+j} \det M_{ij}   (C.16)

is called the cofactor and the scalar det M_{ij} is called a minor. M_{ij} is the same as the matrix A except that its ith row and jth column have been removed. Note that M_{ij} is always an (n − 1) × (n − 1) matrix, and that the minors and cofactors are identical except possibly for a sign.
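Laplace's expansion can be sketched directly in code (a NumPy illustration; `det_laplace` is a hypothetical helper name, expanding along the first row):

```python
import numpy as np

def det_laplace(A):
    """Determinant by Laplace expansion along the first row, Eq. (C.15)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # minor M_1j: delete row 1 and column j
        M = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # cofactor carries the sign (-1)^{1+j}, Eq. (C.16)
        total += (-1) ** j * A[0, j] * det_laplace(M)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_laplace(A))   # agrees with np.linalg.det(A)
```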
The adjugate of a matrix is the transpose of the matrix of its cofactors:

\mathrm{adj}\,A = [\gamma_{ij}]^T.   (C.17)

It can be shown that

A \,\mathrm{adj}\,A = (\det A) I,   (C.18)

where I is called the identity matrix:

I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & 1 \end{bmatrix},

that is, with ones along the diagonal and zeros elsewhere. If det A ≠ 0, then the inverse of a matrix A is defined by

A^{-1} = \frac{\mathrm{adj}\,A}{\det A}   (C.19)

and has the property that

A A^{-1} = A^{-1} A = I.   (C.20)

Note that a matrix has an inverse (that is, it is nonsingular) if its determinant is nonzero.
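The adjugate construction of the inverse, Eqs. (C.17)–(C.20), can be checked numerically (a NumPy sketch; `adjugate` is a hypothetical helper):

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix, Eq. (C.17)."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)   # cofactor gamma_ij
    return C.T

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
inv_A = adjugate(A) / np.linalg.det(A)   # Eq. (C.19)
print(inv_A)
print(A @ inv_A)                          # identity, Eq. (C.20)
```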
The inverse of the product of two matrices is the product of the inverses of the matrices in reverse order:

(AB)^{-1} = B^{-1} A^{-1}   (C.21)

and
(ABC)^{-1} = C^{-1} B^{-1} A^{-1}.   (C.22)

C.6 Properties of the Determinant

When dealing with determinants of matrices, the following elementary (row or column) operations are useful:

1. If any row (or column) of A is multiplied by a scalar α, the resulting matrix Ā has the determinant

\det \bar{A} = \alpha \det A.   (C.23)

Hence

\det(\alpha A) = \alpha^n \det A.   (C.24)

2. If any two rows (or columns) of A are interchanged to obtain Ā, then

\det \bar{A} = -\det A.   (C.25)

3. If a multiple of a row (or column) of A is added to another to obtain Ā, then

\det \bar{A} = \det A.   (C.26)

4. It is also easy to show that

\det A = \det A^T   (C.27)

and

\det AB = \det A \det B.   (C.28)

Applying Eq. (C.28) to Eq. (C.20), we have that
\det A \det A^{-1} = 1.   (C.29)

If A and B are square matrices, then the determinant of the block triangular matrix

\det \begin{bmatrix} A & C \\ 0 & B \end{bmatrix} = \det A \det B   (C.30)

is the product of the determinants of the diagonal blocks. If A is nonsingular, then

\det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det A \det(D - C A^{-1} B).   (C.31)

Using this identity, the transfer function of a scalar system can be written in a compact form:

G(s) = H(sI - F)^{-1}G + J = \frac{\det \begin{bmatrix} sI - F & G \\ -H & J \end{bmatrix}}{\det(sI - F)}.   (C.32)
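The determinant identity in Eq. (C.32) can be verified at a single value of s (a NumPy sketch; the state-space data here are arbitrary examples):

```python
import numpy as np

# Hypothetical state-space data (F, G, H, J), chosen only for illustration
F = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
G = np.array([[0.0],
              [1.0]])
H = np.array([[1.0, 0.0]])
J = np.array([[0.0]])

s = 1.5                         # evaluate at an arbitrary point
sIF = s * np.eye(2) - F

lhs = (H @ np.linalg.inv(sIF) @ G + J)[0, 0]   # H(sI-F)^{-1}G + J
block = np.block([[sIF, G],
                  [-H, J]])
rhs = np.linalg.det(block) / np.linalg.det(sIF)  # right side of Eq. (C.32)
print(lhs, rhs)   # the two agree
```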
C.7 Inverse of Block Triangular Matrices

If A and B are square invertible matrices, then

\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} & -A^{-1} C B^{-1} \\ 0 & B^{-1} \end{bmatrix}.   (C.33)
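Equation (C.33) can be checked numerically (a NumPy sketch with arbitrary blocks):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 1.0]])
B = np.array([[3.0]])
C = np.array([[1.0],
              [2.0]])

M = np.block([[A, C],
              [np.zeros((1, 2)), B]])          # block triangular matrix
Ainv, Binv = np.linalg.inv(A), np.linalg.inv(B)
Minv = np.block([[Ainv, -Ainv @ C @ Binv],
                 [np.zeros((1, 2)), Binv]])    # Eq. (C.33)
print(M @ Minv)   # identity
```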
C.8 Special Matrices

Some matrices have special structures and are given names. We have already defined the identity matrix, which has a special form. A diagonal matrix has (possibly) nonzero elements along the main diagonal and zeros elsewhere:

A = \begin{bmatrix} a_{11} & & & 0 \\ & a_{22} & & \\ & & a_{33} & \\ 0 & & & \ddots \end{bmatrix}.   (C.34)

A matrix is said to be (upper) triangular if all the elements below the main diagonal are zeros:

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & & \vdots \\ \vdots & \ddots & \ddots & \\ 0 & \cdots & 0 & a_{nn} \end{bmatrix}.   (C.35)

The determinant of a diagonal or triangular matrix is simply the product of its
diagonal elements. A matrix is said to be in the (upper) companion form if it has the structure

A_c = \begin{bmatrix} -a_1 & -a_2 & \cdots & -a_n \\ 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 1 & 0 \end{bmatrix}.   (C.36)

Note that all the information is contained in the first row. Variants of this form are the lower, left, or right companion matrices. A Vandermonde matrix has the following structure:

A = \begin{bmatrix} 1 & a_1 & a_1^2 & \cdots & a_1^{n-1} \\ 1 & a_2 & a_2^2 & \cdots & a_2^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_n & a_n^2 & \cdots & a_n^{n-1} \end{bmatrix}.   (C.37)

C.9 Rank

The rank of a matrix is the number of its linearly independent rows or columns. If the rank of A is r, then all (r + 1) × (r + 1) submatrices of A are singular, and there is at least one r × r submatrix that is nonsingular. It is also true that

row rank of A = column rank of A.   (C.38)

C.10 Characteristic Polynomial

The characteristic polynomial of a matrix A is defined by

a(s) \triangleq \det(sI - A) = s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n,   (C.39)

where the roots of the polynomial are referred to as eigenvalues of A. We can write

a(s) = (s - \lambda_1)(s - \lambda_2) \cdots (s - \lambda_n),   (C.40)

where {λ_i} are the eigenvalues of A. The characteristic polynomial of a companion matrix [e.g., Eq. (C.36)] is

a(s) = \det(sI - A_c) = s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n.   (C.41)

C.11 Cayley–Hamilton Theorem

The Cayley–Hamilton theorem states that every square matrix A satisfies its characteristic polynomial. This means that if A is an n × n matrix with characteristic equation a(s), then

a(A) \triangleq A^n + a_1 A^{n-1} + \cdots + a_{n-1} A + a_n I = 0.   (C.42)
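Both the companion-matrix property (C.41) and the Cayley–Hamilton theorem (C.42) are easy to check numerically (a NumPy sketch; the coefficients are arbitrary):

```python
import numpy as np

# Companion matrix for a(s) = s^2 + a1*s + a2 with a1 = 3, a2 = 2, per Eq. (C.36)
a1, a2 = 3.0, 2.0
Ac = np.array([[-a1, -a2],
               [1.0, 0.0]])

# Eigenvalues of Ac are the roots of a(s) = s^2 + 3s + 2 = (s + 1)(s + 2)
print(np.linalg.eigvals(Ac))

# Cayley–Hamilton, Eq. (C.42): a(Ac) = Ac^2 + a1*Ac + a2*I = 0
residual = Ac @ Ac + a1 * Ac + a2 * np.eye(2)
print(residual)   # zero matrix
```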
C.12 Eigenvalues and Eigenvectors

Any scalar λ and nonzero vector v that satisfy

Av = \lambda v   (C.43)

are referred to as the eigenvalue and the associated (right) eigenvector of the matrix A [because v appears to the right of A in Eq. (C.43)]. By rearranging terms in Eq. (C.43) we get

(\lambda I - A)v = 0.   (C.44)

Because v is nonzero, we have

\det(\lambda I - A) = 0,   (C.45)

so λ is an eigenvalue of the matrix A as defined in Eq. (C.43). The normalization of the eigenvectors is arbitrary; that is, if v is an eigenvector, so is αv. The eigenvectors are usually normalized to have unit length; that is, \|v\|_2^2 = v^T v = 1.
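For example (a NumPy sketch; `np.linalg.eig` happens to return unit-length right eigenvectors):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are right eigenvectors

lam = eigvals[0]
v = eigvecs[:, 0]
print(np.allclose(A @ v, lam * v))    # Av = lambda*v, Eq. (C.43)
print(np.linalg.norm(v))              # unit length
```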
If w^T is a nonzero row vector such that

w^T A = \lambda w^T,   (C.46)

then w is called a left eigenvector of A [because w^T appears to the left of A in Eq. (C.46)]. Note that we can write

A^T w = \lambda w,   (C.47)

so that w is simply a right eigenvector of A^T.

C.13 Similarity Transformations

Consider the arbitrary nonsingular matrix T such that

\bar{A} = T^{-1} A T.   (C.48)

The matrix operation shown in Eq. (C.48) is referred to as a similarity transformation. If A has a full set of eigenvectors, then we can choose T to be the set of eigenvectors and Ā will be diagonal. Consider the set of equations in state-variable form:

\dot{x} = Fx + Gu.   (C.49)

If we let

T\xi = x,   (C.50)

then Eq. (C.49) becomes

T\dot{\xi} = FT\xi + Gu,   (C.51)

and premultiplying both sides by T^{-1}, we get

\dot{\xi} = T^{-1} F T \xi + T^{-1} G u = \bar{F}\xi + \bar{G}u,   (C.52)

where

\bar{F} = T^{-1} F T,
\bar{G} = T^{-1} G.   (C.53)

The characteristic polynomial of F̄ is

\det(sI - \bar{F}) = \det(sI - T^{-1} F T) = \det(sT^{-1}T - T^{-1} F T) = \det[T^{-1}(sI - F)T] = \det T^{-1} \det(sI - F) \det T.   (C.54)

Using Eq. (C.29), Eq. (C.54) becomes

\det(sI - \bar{F}) = \det(sI - F).   (C.55)

From Eq. (C.55) we can see that F̄ and F both have the same characteristic polynomial, giving us the important result that a similarity transformation does not change the eigenvalues of a matrix. From Eq. (C.50), a new state made up of a linear combination of the old state has the same eigenvalues as the old set.
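The invariance of the eigenvalues under a similarity transformation can be checked numerically (a NumPy sketch; F and T are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
T = rng.standard_normal((2, 2))           # an arbitrary nonsingular transformation
Fbar = np.linalg.inv(T) @ F @ T            # similarity transformation, Eq. (C.48)

print(np.sort(np.linalg.eigvals(F)))
print(np.sort(np.linalg.eigvals(Fbar)))    # same eigenvalues, per Eq. (C.55)
```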
C.14 Matrix Exponential

Let A be a square matrix. The matrix exponential of A is defined as the series

e^{At} = I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots.   (C.56)

It can be shown that the series converges. If A is an n × n matrix, then e^{At} is also an n × n matrix and can be differentiated:

\frac{d}{dt} e^{At} = A e^{At}.   (C.57)

Other properties of the matrix exponential are

e^{At_1} e^{At_2} = e^{A(t_1 + t_2)}   (C.58)

and, in general,

e^A e^B \neq e^B e^A.   (C.59)

(In the exceptional case where A and B commute, that is, AB = BA, then e^A e^B = e^B e^A.)
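The series (C.56) can be sketched directly (a NumPy illustration; `expm_series` is a hypothetical helper, and in practice one would use a library routine such as `scipy.linalg.expm`):

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Truncated power series for e^{At}, Eq. (C.56)."""
    n = A.shape[0]
    result = np.zeros((n, n))
    term = np.eye(n)                       # k = 0 term: I
    for k in range(terms):
        result = result + term
        term = term @ (A * t) / (k + 1)    # next term: (At)^{k+1} / (k+1)!
    return result

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])                 # nilpotent, so the series terminates exactly
print(expm_series(A, 2.0))                 # [[1, 2], [0, 1]]

B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
# Semigroup property, Eq. (C.58): e^{B t1} e^{B t2} = e^{B(t1 + t2)}
print(np.allclose(expm_series(B, 0.3) @ expm_series(B, 0.4), expm_series(B, 0.7)))
```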
C.15 Fundamental Subspaces

The range space of A, denoted by R(A) and also called the column space of A, is defined by the set of vectors x such that

x = Ay   (C.60)

for some vector y. The null space of A, denoted by N(A), is defined by the set of vectors x such that

Ax = 0.   (C.61)

If x ∈ N(A) and y ∈ R(A^T), then y^T x = 0; that is, every vector in the null space of A is orthogonal to every vector in the range space of A^T.

C.16 Singular-Value Decomposition

The singular-value decomposition (SVD) is one of the most useful tools in
linear algebra and has been widely used in control theory during the last three decades. Let A be an m × n matrix. Then there always exist matrices U, S, and V such that

A = U S V^T.   (C.62)

Here U and V are orthogonal matrices; that is,

U U^T = I, \quad V V^T = I.   (C.63)

S is a quasidiagonal matrix with singular values as its diagonal elements; that is,

S = \begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix},   (C.64)

where Σ is a diagonal matrix of nonzero singular values in descending order:

\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0.   (C.65)

The unique diagonal elements of S are called the singular values. The maximum singular value is denoted by \bar{\sigma}(A), and the minimum singular value is denoted by \underline{\sigma}(A). The rank of the matrix is the same as the number of nonzero singular values. The columns of U and V,

U = [u_1 \; u_2 \; \cdots \; u_m], \quad V = [v_1 \; v_2 \; \cdots \; v_n],   (C.66)

are called the left and right singular vectors, respectively. SVD provides complete information about the fundamental subspaces associated with a matrix:

N(A) = \mathrm{span}[v_{r+1} \; v_{r+2} \; \cdots \; v_n],
R(A) = \mathrm{span}[u_1 \; u_2 \; \cdots \; u_r],
R(A^T) = \mathrm{span}[v_1 \; v_2 \; \cdots \; v_r],
N(A^T) = \mathrm{span}[u_{r+1} \; u_{r+2} \; \cdots \; u_m],   (C.67)

where N denotes the null space and R denotes the range space, respectively.
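The rank property and the subspace relations in Eq. (C.67) can be illustrated with `np.linalg.svd` (a sketch; the matrix is an arbitrary rank-1 example):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1: the second row is twice the first
U, s, Vt = np.linalg.svd(A)

print(s)                             # singular values in descending order, Eq. (C.65)
r = np.sum(s > 1e-10)
print(r)                             # rank = number of nonzero singular values
print(np.allclose(A @ Vt[r:].T, 0))  # remaining right singular vectors span N(A)
```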
The norm of the matrix A, denoted by \|A\|_2, is given by

\|A\|_2 = \bar{\sigma}(A).   (C.68)

If A is a function of ω, then the infinity norm of A, \|A\|_\infty, is given by

\|A(j\omega)\|_\infty = \max_\omega \bar{\sigma}(A).   (C.69)

C.17 Positive Definite Matrices
A matrix A is said to be positive semidefinite if

x^T A x \geq 0 \quad \text{for all } x.   (C.70)

The matrix is said to be positive definite if equality holds in Eq. (C.70) only for x = 0. A symmetric matrix is positive definite if and only if all of its eigenvalues are positive. It is positive semidefinite if and only if all of its eigenvalues are nonnegative.

An alternate method for determining positive definiteness is to test the minors of the matrix. A symmetric matrix is positive definite if all the leading principal minors are positive (Sylvester's criterion), and it is positive semidefinite if all of its principal minors, not just the leading ones, are nonnegative.
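Both tests can be illustrated numerically (a NumPy sketch with an arbitrary symmetric matrix):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Eigenvalue test: symmetric A is positive definite iff all eigenvalues are positive
print(np.linalg.eigvalsh(A))        # [1. 3.]

# Leading-principal-minor test (Sylvester's criterion)
print(A[0, 0], np.linalg.det(A))    # 2.0 3.0, both positive
```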
C.18 Matrix Identity

If A is an n × m matrix and B is an m × n matrix, then

\det[I_n - AB] = \det[I_m - BA],   (C.71)

where I_n and I_m are identity matrices of size n and m, respectively.
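Equation (C.71) is easy to check numerically (a NumPy sketch with arbitrary random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))   # n = 2, m = 3
B = rng.standard_normal((3, 2))

lhs = np.linalg.det(np.eye(2) - A @ B)   # det[I_n - AB]
rhs = np.linalg.det(np.eye(3) - B @ A)   # det[I_m - BA]
print(lhs, rhs)   # the two determinants agree
```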