Linear Algebra Review

Dr. Kenneth R. Muske
Department of Chemical Engineering
Villanova University
Villanova, PA 19085

ChE 8579 Class Handout
January 13, 2010

1 Introduction

This document is intended to serve as an overview of the linear algebra concepts that you will need
in this course. More detailed discussion of these topics may be found in the book by Strang, Linear
Algebra and its Applications, Harcourt Brace Jovanovich, 1988, or other advanced linear algebra
texts.

2 Matrices, Vectors, and Scalars

Consider the following set of linear algebraic equations

    a_{11} x_1 + a_{12} x_2 + ... + a_{1m} x_m = b_1
    a_{21} x_1 + a_{22} x_2 + ... + a_{2m} x_m = b_2
         .                                                    (1)
         .
    a_{n1} x_1 + a_{n2} x_2 + ... + a_{nm} x_m = b_n

in which there are n equations in the m unknowns {x_1, x_2, ..., x_m}. This linear set of equations
can be expressed in matrix form as Ax = b, in which A is the matrix of coefficients and x is the
vector of unknowns.

        [ a_{11} a_{12} ... a_{1m} ]        [ x_1 ]        [ b_1 ]
        [ a_{21} a_{22} ... a_{2m} ]        [ x_2 ]        [ b_2 ]
    A = [   .      .          .    ] ,  x = [  .  ] ,  b = [  .  ]
        [ a_{n1} a_{n2} ... a_{nm} ]        [ x_m ]        [ b_n ]

Matrices. Matrices are denoted by capital letters. The coefficients in a matrix are denoted by
lowercase letters with a double subscript that indicates the location of the coefficient in the matrix.
For example, a_{12} indicates the coefficient in the first row and second column of the matrix A. A
matrix can also be designated by a collection of its elements.

    A ≡ {a_{ij}},  i = 1, 2, ..., n;  j = 1, 2, ..., m

Vectors. A vector is a matrix with a single column. Vectors are denoted by lowercase letters with
no subscript. In the preceding example, x and b are both vectors. The coefficients in a vector are
denoted by lowercase letters with a single subscript that indicates the location of the coefficient in
the column. For example, b_2 indicates the coefficient in the second row of the vector b. A matrix
can be considered a collection of vectors in which the superscript i indicates the vector corresponding
to the ith column of the matrix. Row vectors, which are matrices with a single row, are denoted by
lowercase letters with a superscript T indicating the transpose of the column vector.

    b^T = [ b_1  b_2  ...  b_n ]

Scalars. Each of the coefficients in a matrix or vector is a scalar. A scalar can be viewed as a
matrix with a single row and column or a vector with a single row. Scalars that are not coefficients
of a matrix or vector are typically denoted by Greek letters.

2.1 Mathematical Definition of Matrices, Vectors, and Scalars

Matrices, vectors, and scalars are defined by the following mathematical expressions that indicate
the size and type of coefficients.

    A ∈ ℜ^{n×m};   A is a matrix with n rows and m columns of real numbers
    b ∈ ℜ^{n×1} ≡ b ∈ ℜ^n;   b is a vector with n rows of real numbers
    α ∈ ℜ^{1×1} ≡ α ∈ ℜ;   α is a real number

In this class we will consider real and complex numbers. Complex numbers are denoted by C. A
complex scalar is defined by β ∈ C. A vector z of m complex numbers is defined by z ∈ C^m.

3 Matrix Operations

The following operations are defined for matrices. Note that these operations also apply to vectors
which can be viewed as matrices with a single column.

Matrix Equality. Two matrices of the same size, A ∈ ℜ^{n×m} and B ∈ ℜ^{n×m}, are equal if and
only if their corresponding coefficients are equal.

    A = B  if and only if  a_{ij} = b_{ij}  for all  i = 1, ..., n  and  j = 1, ..., m

Matrix Addition. The operation of matrix addition is a matrix valued function denoted by
f(A, B) = A + B in which the addition function is defined as f : (ℜ^{n×m} × ℜ^{n×m}) → ℜ^{n×m}. This
definition is the mathematical way to state that matrix addition operates on two matrices that
must be the same size and the result is a matrix of the same size as the two matrices.

Matrix addition is a component-wise addition between two matrices with the same number of
rows and columns.

    C = A + B,   c_{ij} = a_{ij} + b_{ij}   for all  i = 1, ..., n  and  j = 1, ..., m

For example,

    [ 1 2 3 ]   [  1 0 1 ]   [ 2 2 4 ]
    [ 4 5 6 ] + [ -1 0 1 ] = [ 3 5 7 ]

The commutative and associative laws of addition apply to matrix addition.

    A + B = B + A                    Commutative
    (A + B) + C = A + (B + C)        Associative

Matrix Multiplication. The operation of matrix multiplication is a matrix valued function
denoted by f(A, B) = AB in which the multiplication function is defined as f : (ℜ^{n×m} × ℜ^{m×p}) →
ℜ^{n×p}. This definition is the mathematical way to state that matrix multiplication operates on two
matrices in which the number of columns of the first matrix must be equal to the number of rows of
the second matrix and the result is a matrix with the same number of rows as the first matrix and
the same number of columns as the second matrix. If the number of columns of the first matrix
is equal to the number of rows of the second matrix, the matrices conform. Multiplication is only
defined for matrices that conform or are conformal.

Matrix multiplication is the summation of the component-wise multiplication of the coefficients
in a row of the first matrix by the coefficients in a column of the second matrix.

    C = AB,   c_{ij} = Σ_{k=1}^{m} a_{ik} b_{kj}   for all  i = 1, ..., n  and  j = 1, ..., p

For example,

                [ 1 0 ]
    [ 1 2 3 ]   [ 0 1 ]   [ 1 2 ]
    [ 4 5 6 ]   [ 0 0 ] = [ 4 5 ]

The associative and distributive laws of multiplication apply to matrix multiplication.

    (AB)C = A(BC)            Associative
    A(B + C) = AB + AC       Distributive (premultiplication by A)
    (B + C)A = BA + CA       Distributive (postmultiplication by A)

Matrix multiplication is not commutative even if the matrices conform in both directions.

    AB = C  does not imply  BA = C
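These multiplication rules are easy to check numerically. The following is a minimal sketch using NumPy (assumed available); it reuses the (2×3)(3×2) example above, and the matrices P and Q are an arbitrary illustration of non-commutativity.

```python
import numpy as np

# The conformability example from the text: (2x3)(3x2) -> (2x2)
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 0],
              [0, 1],
              [0, 0]])
C = A @ B                   # c_ij = sum_k a_ik * b_kj
print(C)                    # [[1 2], [4 5]]

# Even square matrices that conform in both directions need not commute.
P = np.array([[0, 1],
              [0, 0]])
Q = np.array([[1, 0],
              [0, 2]])
print(np.array_equal(P @ Q, Q @ P))   # False
```

Note that `@` requires the inner dimensions to agree, exactly as the conformability condition states.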
    AB = 0  does not imply that  A = 0  or  B = 0

Multiplication by a Scalar. The operation of matrix multiplication by a scalar is a matrix
valued function denoted by f(α, A) = αA in which the multiplication function is defined as
f : (ℜ × ℜ^{n×m}) → ℜ^{n×m}. Multiplication of a matrix by a scalar is a component-wise multiplication
of each coefficient in the matrix by the scalar.

         [ αa_{11} αa_{12} ... αa_{1m} ]
         [ αa_{21} αa_{22} ... αa_{2m} ]
    αA = [    .       .           .    ]
         [ αa_{n1} αa_{n2} ... αa_{nm} ]

Scalar multiplication is defined for every matrix and has the following properties.

    α(AB) = (αA)B = A(αB) = (AB)α
    α(A + B) = αA + αB = (A + B)α

The negative of a matrix is defined as the multiplication of the matrix by −1.
    −A = (−1)A = A(−1)

Matrix Transpose. The transpose of a matrix is a matrix valued operation denoted by f(A) = A^T
in which the transpose operation is defined as f : (ℜ^{n×m}) → ℜ^{m×n}. The transpose of a matrix is
formed by interchanging the rows and the columns of the matrix.

    [ a_{11} a_{12} ... a_{1m} ]T   [ a_{11} a_{21} ... a_{n1} ]
    [ a_{21} a_{22} ... a_{2m} ]    [ a_{12} a_{22} ... a_{n2} ]
    [   .      .          .    ]  = [   .      .          .    ]
    [ a_{n1} a_{n2} ... a_{nm} ]    [ a_{1m} a_{2m} ... a_{nm} ]

For example,

    [ 1 2 ]T   [ 1 3 ]
    [ 3 4 ]  = [ 2 4 ]
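The transpose example can be verified with NumPy's `.T` attribute (a sketch; NumPy assumed, and A and B below reuse the earlier multiplication example):

```python
import numpy as np

M = np.array([[1, 2],
              [3, 4]])
print(M.T)      # [[1 3], [2 4]] -- rows and columns interchanged

# Check the transpose-of-a-product rule on the earlier example matrices
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 0],
              [0, 1],
              [0, 0]])
print(np.array_equal((A @ B).T, B.T @ A.T))   # True
```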
The matrix transpose has the following properties.

    (A + B)^T = A^T + B^T        (AB)^T = B^T A^T

4 Special Matrix Multiplication Operations

Special cases for matrix multiplication are illustrated in this section.

Vector-Vector Multiplication. The special case of vector-vector multiplication is a scalar valued
function denoted by f(x, y) = x^T y defined when the vectors x and y have the same number of
rows. Vector-vector multiplication is the sum of the component-wise multiplication of each of the
corresponding coefficients in the vectors.

    x^T y = Σ_{i=1}^{n} x_i y_i

The result is called the dot or vector product. If x^T y = 0, the two vectors x and y are defined to
be orthogonal.

Matrix-Vector Multiplication. The special case of matrix-vector multiplication is a vector valued
function denoted by f(A, x) = Ax defined when the number of rows of x is equal to the number of
columns of A. The result is a vector with the same number of rows as A. Each row of the resulting
vector is the vector product of the corresponding row of A with the vector x.

    Ax = Σ_{i=1}^{m} x_i a^i,   where  A = [a^1, a^2, ..., a^m],  a^i = [ a_{1i} ; ... ; a_{ni} ],  x = [ x_1 ; ... ; x_m ]

Vector-Matrix Multiplication. The special case of vector-matrix multiplication is a vector valued
function denoted by f(x, A) = x^T A defined when the number of rows of x is equal to the number of
rows of A. The result is a row vector with the same number of columns as A where each component
is formed by the vector product of the corresponding column of A with the vector x.

    x^T A = [ x^T a^1, x^T a^2, ..., x^T a^m ],   where  A = [a^1, a^2, ..., a^m]

5 Special Classes of Matrices

Square Matrix. A square matrix is a matrix with the same number of rows and columns.

Symmetric Matrix. A symmetric matrix is a square matrix that is equal to its transpose, A = A^T.
A skew-symmetric matrix is a square matrix that is the negative of its transpose, A = −A^T.

Diagonal Matrix. A diagonal matrix is a square matrix in which all of the components off the
main diagonal are zero. A diagonal matrix is symmetric. The diagonal matrix D is as follows.

        [ d_{11}   0    ...   0    ]
        [   0    d_{22} ...   0    ]
    D = [   .      .          .    ]
        [   0      0    ... d_{nn} ]

Tridiagonal Matrix. A tridiagonal matrix is a square matrix in which all of the components off
the main diagonal, the diagonal above the main diagonal, and the diagonal below the main diagonal
are zero. The tridiagonal matrix T is as follows.

        [ t_{11} t_{12}   0      0    ...    0     ]
        [ t_{21} t_{22} t_{23}   0    ...    0     ]
    T = [   0    t_{32} t_{33} t_{34} ...    0     ]
        [   0     ...            0  t_{n,n-1} t_{nn} ]

Triangular Matrix. A triangular matrix is a square matrix in which all of the elements below
the main diagonal or all of the elements above the main diagonal are zero. If all of the elements
below the main diagonal are zero, the matrix is upper triangular. If all of the elements above the
main diagonal are zero, the matrix is lower triangular. The upper triangular matrix U and lower
triangular matrix L are as follows.

        [ u_{11} u_{12} ... u_{1n} ]        [ l_{11}   0    ...   0    ]
        [   0    u_{22} ... u_{2n} ]        [ l_{21} l_{22} ...   0    ]
    U = [   .      .          .    ] ,  L = [   .      .          .    ]
        [   0      0    ... u_{nn} ]        [ l_{n1} l_{n2} ... l_{nn} ]

Banded Matrix. A banded matrix is a matrix containing one or more diagonal bands of nonzero
elements.

Sparse Matrix. A sparse matrix is a matrix in which the majority of matrix elements are zero.

6 Special Matrices

Identity Matrix. The identity matrix, denoted by I, is a diagonal matrix that has ones on the
diagonal with the following property.

        [ 1 0 ... 0 ]
    I = [ 0 1 ... 0 ] ,   AI = IA = A
        [ 0 0 ... 1 ]

Null Matrix. The null matrix, denoted by 0, is comprised of all zeros with the following properties,
where A − A is defined to be A + (−A).

    A − A = −A + A = 0
    A + 0 = 0 + A = A
    0A = 0,   A0 = 0

7 Matrix Rank

The rank of a matrix is the number of linearly independent row vectors in the matrix or the number
of linearly independent column vectors in the matrix. For every matrix, the number of linearly
independent rows and the number of linearly independent columns are the same.

Linear Independence. A set of vectors {v^i}, i = 1, ..., n is linearly independent if and only if
the only solution to

    α_1 v^1 + α_2 v^2 + ... + α_n v^n = 0                    (2)

is α_1 = α_2 = ... = α_n = 0. The expression in Eq. 2 is referred to as a linear combination of the
set of vectors {v^i}. If the set of vectors are linearly independent, one vector in the set cannot be
expressed as a linear combination of the others. To demonstrate this statement, assume that the
vector v^1 can be expressed as a linear combination of the other vectors in the set as follows.

    v^1 = Σ_{i=2}^{n} β_i v^i

Let α_1 = −1 and α_i = β_i, i = 2, ..., n. The expression in Eq. 2 is then zero with α_i ≠ 0. Therefore,
this set of vectors cannot be linearly independent. A set of vectors that are not linearly independent
are referred to as linearly dependent.

Rank Condition for a Solution to a Linear System of Equations. An alternate way of
viewing the linear system of equations Ax = b in Eq. 1 is as a linear combination of the vectors
comprising the columns of A.

    Ax = Σ_{i=1}^{m} x_i a^i = x_1 [ a_{11} ; a_{21} ; ... ; a_{n1} ] + x_2 [ a_{12} ; a_{22} ; ... ; a_{n2} ] + ... + x_m [ a_{1m} ; a_{2m} ; ... ; a_{nm} ] = b

The linear set of equations Ax = b has a solution if and only if

    rank(A) = rank([A|b])                                    (3)

in which [A|b] is the matrix constructed by appending the vector b to the column vectors of the
matrix A. If the linear system of equations has a solution, then there must be some linear
combination of the column vectors of A that is equal to b. Therefore, the matrix [A|b] must have
the same rank as A since it must have the same number of linearly independent columns as A. If
the ranks are not equal, then b cannot be expressed as a linear combination of the column vectors
of A. Therefore, there is no vector x (or values of x_i) that satisfies the equation Ax = b.

Assume A has m columns and that rank(A) = rank([A|b]). In this case, there are m unknowns
and rank(A) independent equations. Therefore, if rank(A) = m, then the solution is unique. If
rank(A) < m, then there are infinitely many solutions.
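The rank condition in Eq. 3 can be applied directly with NumPy's `matrix_rank` (a sketch, NumPy assumed; the matrices are the first example below):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))  # rank of [A|b]

# Ax = b is solvable if and only if rank(A) == rank([A|b])
print(rank_A, rank_Ab)       # 2 3 -> no solution exists
print(rank_A == rank_Ab)     # False
```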
Consider the following examples.

        [ 1 0 ]        [ 0 ]
    A = [ 0 1 ] ,  b = [ 0 ] ,   rank(A) = 2,  rank([A|b]) = 3
        [ 0 0 ]        [ 1 ]

Since the bottom row of A is zero, there is no linear combination of the columns of A that can
result in the non-zero value in the bottom row of b. Therefore, there is no solution to Ax = b.

        [ 1 0 ]        [ 1 ]
    A = [ 0 1 ] ,  b = [ 0 ] ,   rank(A) = 2,  rank([A|b]) = 2
        [ 0 0 ]        [ 0 ]

In this case, a solution exists (x = [1 0]^T) and it is unique since rank(A) = 2 = m.

        [ 1 1 ]        [ 1 ]
    A = [ 0 0 ] ,  b = [ 0 ] ,   rank(A) = 1,  rank([A|b]) = 1
        [ 0 0 ]        [ 0 ]

Since rank(A) = 1 < m, there are infinitely many solutions. Each of these solutions can be
expressed as x = [α  1 − α]^T in which α ∈ ℜ.

8 Linear Spaces

Another way to state the rank condition in Eq. 3 is that a solution to Ax = b exists if and only if
b is contained in the column space of A. The column space of A is the vector space formed by the
column vectors of A.

Vector Space. A vector space, V, of the set of vectors {v^i}, i = 1, ..., n contains the vectors
from all possible linear combinations of the set {v^i}.

    α_1 v^1 + α_2 v^2 + ... + α_n v^n,   for all possible values of α_i, i = 1, ..., n

The vectors from all possible linear combinations of the set is referred to as the linear subspace
that is spanned by the vectors {v^i}. Therefore, another alternative statement of the condition in
Eq. 3 is that a solution to Ax = b exists if and only if b is contained in the linear subspace spanned
by the columns of A.

Dimension of a Vector Space. The dimension of a vector space V is the minimum number of
linearly independent vectors required to span the vector space. If the set of vectors {v^i}, i = 1, ..., n
are linearly independent, then the dimension of the vector space spanned by {v^i} is n. We can
make the following statements about the matrix A. If rank(A) = m, then the column vectors are
linearly independent and the column space has dimension m. If rank(A) = p < m, then the column
vectors are linearly dependent and the column space has dimension p. If rank(A) = n, then the
row vectors are linearly independent and the row space has dimension n. If rank(A) = p < n, then
the row vectors are linearly dependent and the row space has dimension p.

Range. The range of a matrix is the vector space spanned by the column vectors of A. It is defined
by the following mathematical expression.

    range(A) ≡ {a : a = Az, for all z ∈ ℜ^m}                 (4)

The expression in Eq. 4 means the set of all vectors a formed by a = Az for all z. The range of
a matrix is equivalent to the column space of that matrix. The dimension of the range of A is the
rank of the matrix A.

Null Space. The null space of a matrix is the vector space defined by the following mathematical
expression.

    null(A) ≡ {x : Ax = 0}                                   (5)

The expression in Eq. 5 means the set of all vectors x such that Ax = 0. The dimension of the null
space of A is m − rank(A). If the null space of A has dimension zero, then the column vectors of A
are linearly independent, the column space of A is dimension m, rank(A) = m, and Ax = 0 if and
only if x = 0. If the dimension of the null space of A is greater than zero, then the column vectors
are linearly dependent.
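The dimension counts for the range and null space can be checked numerically (a sketch, NumPy assumed; the matrix is the rank-1 example from Section 7, and x below is one particular null-space vector):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])
m = A.shape[1]                    # number of columns

r = np.linalg.matrix_rank(A)      # dim range(A) = rank(A)
null_dim = m - r                  # dim null(A) = m - rank(A)
print(r, null_dim)                # 1 1

# x = [1, -1] lies in the null space: Ax = 0
x = np.array([1.0, -1.0])
print(np.allclose(A @ x, 0))      # True
```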
9 Eigenvalues and Eigenvectors

The eigenvalues and eigenvectors of the square n × n matrix A satisfy the equation

    Ax^i = λ_i x^i   for  x^i ≠ 0

There are n eigenvalues for an n × n matrix, although they may not all be distinct. The eigenvalues
of A are the set of scalars {λ | rank(A − λI) < n}.

Nonsingular Matrix. The square matrix A is a nonsingular matrix if and only if λ = 0 is not
an eigenvalue of A. If λ = 0 is an eigenvalue of A, the matrix A is a singular matrix.

Matrix Inverse. If A is a nonsingular matrix, the inverse of A, denoted by A^{−1}, exists and is
defined such that AA^{−1} = A^{−1}A = I. The eigenvalues of the matrix A^{−1} are 1/λ_i where λ_i
are the eigenvalues of the matrix A.

Orthogonal Matrix. An orthogonal matrix is a nonsingular matrix in which the transpose of the
matrix is its inverse, A^T = A^{−1} or A^T A = AA^T = I. Note that the columns of an orthogonal
matrix must be orthogonal vectors. Two vectors x and y are orthogonal if and only if x^T y = y^T x = 0.

10 Existence of a Unique Linear System Solution

For the n × n linear system Ax = b, the following statements are equivalent.
- rank(A) = n
- The columns of A are linearly independent
- The rows of A are linearly independent
- The dimension of range(A) is n
- The dimension of null(A) is 0
- λ = 0 is not an eigenvalue of A
- A is nonsingular
- A^{−1} exists
- Ax = b has the unique solution x = A^{−1}b for all b
- Ax = 0 has only the zero solution x = 0

11 Vector and Matrix Norms

The norm of a vector or a matrix is a scalar measure of the size of the vector or matrix.

Vector Norms. All vector norms are defined by the following relationship
    ||x||_p = ( Σ_{i=1}^{n} |x_i|^p )^{1/p}

where p is an integer indicating the type of vector norm specified. There are three commonly
applied vector norms.

    1-norm:  ||x||_1 = Σ_{i=1}^{n} |x_i|,               sum of the component magnitudes
    2-norm:  ||x||_2 = ( Σ_{i=1}^{n} x_i^2 )^{1/2},     Euclidean distance
    ∞-norm:  ||x||_∞ = max_i |x_i|,                     magnitude of the largest component

The 2-norm is assumed to be the default vector norm when no value of p is specified: ||x|| ≡ ||x||_2.

Matrix Norms. There are a number of ways to define the matrix norm ||A||_p for a given integer
p. The most common way is the induced matrix norm in which the matrix norm is defined by the
corresponding vector p-norm on the matrix-vector product Ax. The induced matrix norms for the
three commonly applied vector norms are as follows.

    1-norm:  ||A||_1 = max_j Σ_{i=1}^{n} |a_{ij}|,       maximum column sum
    2-norm:  ||A||_2 = max_{x≠0} ||Ax||_2 / ||x||_2,     spectral norm
    ∞-norm:  ||A||_∞ = max_i Σ_{j=1}^{m} |a_{ij}|,       maximum row sum
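These norms match NumPy's `np.linalg.norm` with `ord` set to 1, 2, or `np.inf` (a sketch; NumPy assumed, and the vector and matrix below are arbitrary examples):

```python
import numpy as np

# Vector norms: x = [3, -4] has 1-norm 7, 2-norm 5, inf-norm 4
x = np.array([3.0, -4.0])
print(np.linalg.norm(x, 1), np.linalg.norm(x), np.linalg.norm(x, np.inf))

# Induced matrix norms
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
print(np.linalg.norm(A, 1))        # maximum column sum: |-2| + |4| = 6
print(np.linalg.norm(A, np.inf))   # maximum row sum: |3| + |4| = 7

# The 2-norm (spectral norm) equals the largest singular value of A
print(np.isclose(np.linalg.norm(A, 2),
                 np.linalg.svd(A, compute_uv=False)[0]))   # True
```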
The 2-norm is assumed to be the default matrix norm when no value of p is specified: ||A|| ≡ ||A||_2.
The spectral or 2-norm of a square matrix A has the following minimum bound

    ||A||_2 ≥ |λ_max|,    ||A^{−1}||_2 ≥ 1/|λ_min|

where λ_max is the eigenvalue of A with the maximum magnitude and λ_min is the eigenvalue of A
with the minimum magnitude.

12 Condition Number

The condition number of a square matrix is defined as the ratio

    cond(A) = ||A|| ||A^{−1}|| ≥ |λ_max| / |λ_min|

which has a minimum bound of the ratio of the largest to the smallest eigenvalue of A. Note that
the condition number has a minimum value of 1, which occurs when all of the eigenvalues of A have
the same magnitude, and a maximum value of ∞, which occurs when the matrix A is singular (when
zero is an eigenvalue of A).

13 Positive Definite Matrices

The square matrix A is positive definite if x^T Ax > 0 for all x ≠ 0. A positive definite matrix
is denoted as A > 0. The matrix A is positive semidefinite, A ≥ 0, if x^T Ax ≥ 0 for all x ≠ 0.
The matrix A is negative definite, A < 0, if x^T Ax < 0 for all x ≠ 0. The matrix A is negative
semidefinite, A ≤ 0, if x^T Ax ≤ 0 for all x ≠ 0. Each eigenvalue of a real positive definite matrix is
a strictly positive real number (a real number greater than zero). Each eigenvalue of a real positive
semidefinite matrix is a nonnegative real number. A real positive definite matrix has an inverse. If
matrix A is positive definite and matrix B is positive semidefinite, then the matrix A + B is positive
definite. If matrix A is positive definite and matrix C ∈ ℜ^{n×m} has rank m, then the matrix C^T AC
is positive definite. If matrix A is positive definite and matrix C ∈ ℜ^{n×m} has rank p < m, then the
matrix C^T AC is positive semidefinite.

14 Similar Matrices

Two square matrices A and B are similar if there exists a nonsingular matrix S such that B =
S^{−1}AS. This expression is referred to as a similarity transform. Similar matrices have the same
set of eigenvalues.

Diagonalizable Matrices. The square matrix A is diagonalizable when it is similar to the diagonal
matrix Λ which contains the eigenvalues of A on its diagonal. The square matrix A is diagonalizable
if and only if the eigenvectors of A are linearly independent. If the matrix A is diagonalizable, the
similarity transform using S = [x^1 x^2 ... x^n], in which {x^i} are the set of eigenvectors of A, results
in a diagonal matrix of its eigenvalues denoted by the matrix Λ.

    S^{−1}AS = S^{−1}[Ax^1 Ax^2 ... Ax^n] = S^{−1}[λ_1 x^1  λ_2 x^2 ... λ_n x^n] = S^{−1}[x^1 x^2 ... x^n]Λ = S^{−1}SΛ = Λ

15 Vector Differentiation

The following differentiation formulas apply when taking the derivative with respect to the vector
x ∈ ℜ^n.

Vector Differentiation of Functions.

    ∂α(x)/∂x = ∇_x α(x) = [ ∂α/∂x_1 ; ∂α/∂x_2 ; ... ; ∂α/∂x_n ],
        gradient vector of the scalar function α(x)

    ∂²α(x)/∂x² = ∇²_x α(x) = [ ∂²α/(∂x_i ∂x_j) ],  i, j = 1, ..., n,
        Hessian matrix of the scalar function α(x)

    ∂f(x)/∂x = ∇_x f(x) = [ ∂f_i/∂x_j ],  i, j = 1, ..., n,
        Jacobian matrix of the vector function f(x) = [ f_1(x) ; ... ; f_n(x) ]

    ∂(a^T x)/∂x = ∂(x^T a)/∂x = a

    ∂(x^T x)/∂x = 2x
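The identities ∂(a^T x)/∂x = a and ∂(x^T x)/∂x = 2x can be checked against a central-difference gradient (a sketch; NumPy assumed, and the vectors a and x0 are arbitrary test values chosen for illustration):

```python
import numpy as np

def num_grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient of the scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

a  = np.array([1.0, 2.0, 3.0])
x0 = np.array([0.5, -1.0, 2.0])   # arbitrary test point

print(np.allclose(num_grad(lambda x: a @ x, x0), a))        # d(a^T x)/dx = a
print(np.allclose(num_grad(lambda x: x @ x, x0), 2 * x0))   # d(x^T x)/dx = 2x
```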
    ∂(x^T Ax)/∂x = (A + A^T)x

Additional notes on positive definite matrices:

- The inverse of a positive definite matrix is also positive definite.
- A real symmetric matrix A is positive definite if and only if there exists a real nonsingular
  matrix M such that M^T M = A.
- The eigenvalues of a symmetric matrix are all real.
- The eigenvalues of a symmetric matrix are all greater than zero if and only if the determinants
  of all of its leading principal submatrices, including the determinant of the full matrix, are
  greater than zero.
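Several of the facts from Sections 9, 12, 13, and 14 can be spot-checked numerically. The following is a sketch assuming NumPy, using an arbitrary symmetric positive definite example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # symmetric; eigenvalues are 1 and 3

lam, S = np.linalg.eig(A)        # columns of S are eigenvectors
print(np.sort(lam))              # [1. 3.] -> all > 0, so A is positive definite

# Diagonalization: S^{-1} A S = diag(eigenvalues)
Lam = np.linalg.inv(S) @ A @ S
print(np.allclose(Lam, np.diag(lam)))     # True

# cond(A) = ||A|| ||A^{-1}|| >= |lam_max| / |lam_min|
print(np.linalg.cond(A) >= max(abs(lam)) / min(abs(lam)) - 1e-12)   # True

# x^T A x > 0 for a sample of nonzero vectors x
rng = np.random.default_rng(0)
xs = rng.standard_normal((100, 2))
print(all(x @ A @ x > 0 for x in xs))     # True
```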
This note was uploaded on 09/30/2010 for the course ME 7000 taught by Professor Dr. Sullivan during the Spring '10 term at USC.