MATRIX ALGEBRA REVIEW

For more thorough treatments of matrix algebra, see Lunneborg and Abbott (1983), Searle (1982), Graybill (1983), or Hadley (1961).

SCALARS, VECTORS, AND MATRICES

A distinction exists between scalars, vectors, and matrices. A single element (or number) is referred to as a scalar. For instance, the covariance between x1 and x2, COV(x1, x2), is a scalar, as is the number 5 and the regression coefficient β21. When two or more scalars are written in a row or a column, they form a row or column vector. Three examples of vectors are

    a′ = [a1  a2  a3  a4]      b′ = [b1  b2]      c = | c1 |
                                                      | c2 |

The order of a row vector is 1 × c, where the 1 is the number of rows and c is the number of columns (elements); the order of a column vector is r × 1. Here a′ and b′ are row vectors of orders 1 × 4 and 1 × 2, respectively. The prime superscript distinguishes row vectors (e.g., a′) from column vectors (e.g., c). The prime stands for the transpose operator, which I will define later.
A matrix is a group of elements arranged into rows and columns. Matrices also are represented by boldface symbols. The order of a matrix is indicated by r × c, where r is the number of rows and c is the number of columns. The order r × c also is the dimension of the matrix. As examples, consider

    S = | s11  s12 |        Γ = | γ11   0  |
        | s21  s22 |            |  0   γ12 |
                                | γ21   0  |

The dimension of S is 2 × 2, whereas Γ is a 3 × 2 matrix. Vectors and scalars are special cases of matrices. A row vector is a 1 × c matrix, a column vector is an r × 1 matrix, and a scalar is a 1 × 1 matrix.

Two matrices, say S and T, are equal if they are of the same order and if every element sij of S equals the corresponding element tij of T. For instance, here S and T are equal, but S* and T* are not:

    S = | s11   0  |     T = | s11   0  |     S = T
        | s21  s22 |         | s21  s22 |

    S* = | s11   0  |    T* = | s11  s21 |    S* ≠ T*
         | s21  s22 |         |  0   s22 |
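For readers who want to experiment, the distinctions above map directly onto numpy arrays. This is a sketch, not part of the original appendix; numpy and the particular element values are assumptions for illustration only.

```python
import numpy as np

# A scalar, a 1 x 4 row vector, a 3 x 1 column vector, and a 2 x 2 matrix
scalar = 5.0
a_row = np.array([[1, 2, 3, 4]])      # order 1 x 4
c_col = np.array([[1], [2], [3]])     # order 3 x 1
S = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# Two matrices are equal when they have the same order and
# every corresponding element matches.
T = S.copy()
print(a_row.shape)            # (1, 4)
print(c_col.shape)            # (3, 1)
print(np.array_equal(S, T))   # True
```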
MATRIX OPERATIONS

To add two or more matrices, they must be of the same dimension or order. The resulting matrix is of the same order, with each element equal to the sum of the corresponding elements in each of the matrices added together. For instance,

    B = | β11  β12 |      I = | 1  0 |
        | β21  β22 |          | 0  1 |

    B + I = | β11 + 1    β12   |
            |   β21    β22 + 1 |

Two useful properties of matrix addition, for any matrices S, T, and U of the same order, are as follows:

1. S + T = T + S.
2. (S + T) + U = S + (T + U).

Two matrices can be multiplied only when the number of columns in the first matrix equals the number of rows in the second matrix. If this condition is met, the matrices are said to be conformable with respect to multiplication. If the first matrix has order a × b and the second matrix has order b × c, then the resulting product of these two matrices is a matrix of order a × c. To form the elements of the product U = ST, start with the first row of S. Each element in this row is multiplied by the corresponding element in the first column of T. The sum of these b products equals u11 of U. In other words,

    u11 = s11 t11 + s12 t21 + ··· + s1b tb1

This may be stated more generally as

    uij = Σ (k = 1 to b) sik tkj

for each element of U.
For example,

    B = |  0   β12   0  |      η = | η1 |
        | β21   0   β23 |          | η2 |
        |  0    0    0  |          | η3 |
         (3 × 3)                  (3 × 1)

    Bη = | β12 η2           |
         | β21 η1 + β23 η3  |
         | 0                |
          (3 × 1)

(The term Bη appears in the latent variable model.)

Some properties of matrix multiplication for any matrices S, T, and U that are conformable are as follows:

1. ST ≠ TS (except in special cases).
2. (ST)U = S(TU).
3. S(T + U) = ST + SU.
4. c(S + T) = cS + cT (where c is a scalar).

These properties come into play at many points throughout the book. The order in which matrices are multiplied is important. For this reason premultiplication and postmultiplication by a matrix are distinguished. To illustrate this concept: if U = ST, we can say that U results from the premultiplication of T by S, or from the postmultiplication of S by T.
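A quick numerical sketch shows why property 1 warns that ST ≠ TS in general. The matrices here are chosen arbitrarily for illustration.

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [0.0, 1.0]])
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])

ST = S @ T   # premultiplication of T by S
TS = T @ S   # postmultiplication of T by S

print(ST)
print(TS)
print(np.array_equal(ST, TS))   # False: the order of multiplication matters
```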
The transpose of a matrix interchanges its rows and columns. The transpose of a matrix is indicated by a prime (′) symbol following the matrix. As an example, consider Γ and Γ′:

    Γ = | γ11   0  |      Γ′ = | γ11   0   γ21 |
        |  0   γ12 |           |  0   γ12   0  |
        | γ21   0  |

The first row of Γ′ is the first column of Γ, and the second row is the second column. The order of Γ is 3 × 2, whereas the order of Γ′ is 2 × 3. The transpose of an a × b order matrix leads to a b × a matrix.

Some useful properties of the transpose operator are listed below:

1. (S′)′ = S.
2. (S + T)′ = S′ + T′ (where S and T have the same order).
3. (ST)′ = T′S′ (where the matrices are conformable for multiplication).
4. (STU)′ = U′T′S′ (where the matrices are conformable for multiplication).

Some additional matrix types and matrix operations are important for square matrices. A square matrix is a matrix that has the same number of rows and columns. An example of a square matrix is

    Γ = | γ11   0   γ13 |
        |  0   γ22   0  |
        |  0   γ32  γ33 |

The dimension of the Γ matrix is 3 × 3.

The trace is defined for a square matrix. It is the sum of the elements on the main diagonal. For an n × n matrix S,

    tr(S) = Σ (i = 1 to n) sii

Properties of the trace include:

1. tr(S) = tr(S′).
2. tr(ST) = tr(TS) (if T and S conform for multiplication).
3. tr(S + T) = tr(S) + tr(T) (if S and T conform for addition).

The trace appears in the fitting functions and indices of goodness of fit for many structural equation techniques.

If all the elements above (or below) the main diagonal of a square matrix are zero, the matrix is triangular. For instance, the B matrix for recursive models (which I discuss in Chapter 4) may be written as a triangular matrix. The B matrix contains the coefficients of the effects of the endogenous latent variables on one another. To illustrate, one such B matrix is

    B = |  0    0   0 |
        | β21   0   0 |
        | β31  β32  0 |

Note that in this case the main diagonal elements are zero. However, triangular matrices may have nonzero entries in the main diagonal.

A diagonal matrix is a square matrix that has some nonzero elements along the main diagonal and zeros elsewhere. For instance, Θδ, the covariance matrix (see Chapter 2) of the errors of measurement for the x variables, commonly is assumed to be diagonal. For δ1, δ2, and δ3, the population covariance matrix Θδ might look as follows:

    Θδ = | VAR(δ1)    0        0     |
         |   0      VAR(δ2)    0     |
         |   0        0      VAR(δ3) |

The zeros above and below the main diagonal represent the assumption that the errors of measurement for different variables are uncorrelated.

A symmetric matrix is a square matrix that equals its transpose (e.g., S = S′). The typical correlation and covariance matrices are symmetric since the ij element equals the ji element. For instance,

    Σ = | VAR(x1)     COV(x1,x2)  COV(x1,x3) |
        | COV(x2,x1)  VAR(x2)     COV(x2,x3) |
        | COV(x3,x1)  COV(x3,x2)  VAR(x3)    |

For all the variables, the covariance of xi and xj equals the covariance of xj and xi. Sometimes symmetric matrices, such as Σ, are written with blanks above the main diagonal because these terms are redundant.

An identity matrix, I, is a square matrix with ones down the main diagonal and zeros elsewhere. The 3 × 3 identity matrix is

    I = | 1  0  0 |
        | 0  1  0 |
        | 0  0  1 |

Properties of the identity matrix, I, include the following:

1. IS = SI = S (for any I and S conformable for multiplication).
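The transpose, trace, and identity properties are easy to verify numerically. The matrices below are arbitrary stand-ins for this sketch.

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # symmetric: S equals its transpose
T = np.array([[1.0, 4.0],
              [0.0, 2.0]])

# (ST)' = T'S'
print(np.allclose((S @ T).T, T.T @ S.T))                       # True

# tr(S) = tr(S'); tr(ST) = tr(TS); tr(S + T) = tr(S) + tr(T)
print(np.trace(S) == np.trace(S.T))                            # True
print(np.isclose(np.trace(S @ T), np.trace(T @ S)))            # True
print(np.isclose(np.trace(S + T), np.trace(S) + np.trace(T)))  # True

# IS = SI = S for a conformable identity matrix
I2 = np.eye(2)
print(np.allclose(I2 @ S, S) and np.allclose(S @ I2, S))       # True

# A symmetric matrix equals its transpose
print(np.array_equal(S, S.T))                                  # True
```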
A vector that consists of all ones is a unit vector, 1. Unit vector products have some interesting properties. If you premultiply a matrix by a conformable unit vector, the result is a row vector whose elements are the column sums of the matrix. For example,

    1′T = [1  1  1] | 4  1 | = [6  2]
                    | 0  1 |
                    | 2  0 |

Postmultiplying a matrix by a conforming unit vector leads to a column vector of row sums:

    T1 = | 4  1 | | 1 | = | 5 |
         | 0  1 | | 1 |   | 1 |
         | 2  0 |         | 2 |

Finally, if we both premultiply and postmultiply a matrix by conforming unit vectors, a scalar that equals the sum of all the matrix elements results:

    1′T1 = [1  1  1] | 4  1 | | 1 | = [8]
                     | 0  1 | | 1 |
                     | 2  0 |

Using unit vectors and some of the other matrix properties, we can compute the covariance matrix. Consider X, an N × p matrix of N observations for p variables. The 1 × p row vector of means for X is formed as

    (1/N) 1′X
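The mean formula (1/N)1′X can be sketched with the 4 × 3 data matrix used in this appendix's numerical example; any X of order N × p would work the same way.

```python
import numpy as np

# The 4 x 3 data matrix from the appendix's numerical example
X = np.array([[ 2.0, 3.0, 1.0],
              [-1.0, 1.0, 1.0],
              [ 0.0, 4.0, 2.0],
              [-1.0, 0.0, 0.0]])
N = X.shape[0]
ones = np.ones((N, 1))             # the N x 1 unit vector

means = (1.0 / N) * (ones.T @ X)   # 1 x p row vector of means
print(means)                       # [[0. 2. 1.]]
```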
COlundeViatton fort? of X requtres subtracting from X a matrix whose
is consist o N X l vectors of th
. ‘ e means for the corres d'
v. , ‘ ‘ ‘ pon in
f1:Sables :11 X. So every element in the first column equals the mean of mi
' v; ‘ ‘
ma e in X, every element in the second column equals the mean of 456 MATRIX ALGEBRA REVIhW MATRIX ()PERA'l‘lONS 457
‘ , i, .This matrix Of means is The last line is the unbiased sample covariance matrix and contains the
the second column Of X, ‘md so on variances of the variables in the main diagonal and the covariances of all
I i I’X pairs of variables in the offdiagonal elements. All covariance matrices are
N square and symmetric.
h' l hen subtracted from X forms deviation from the mean scores: Suppose that 1 form a diagonal matrix from the main diagonal of S:
w 1c 1, w ’
X— 1(L)I’X var(xl) 0 0
N D = 0 var(x2) 0
If the preceding deviation score matrix is represented by Z, then the P X P 0 0 V3r(X3)
unbiased sample covariance matrix estimator 5 1s
1 where var(x,) represents the sample variance of x" The square root of D,
S : (mil/Z represented as [)1/2, has standard deviations in its diagonal:
N _
. .. 4 ’ 1/2
A numerical example illustrates these calculations. [var(xl)] 0 0
2 3 1 D“ = o [more 0
—l 1 1 0 0 [var()c3)]l/2
x z 0 4 2
_1 0 0 and [)"1/1 has the inverse of the standard deviations in its main diagonal.
2 3 1 If S is postmultiplied by 04/2, the ﬁrst column is divided by the standard
1 1 deviation of xl, the second column is divided by the standard deviation of
(LP/x : (1)“ 1 1 1 i _(l) 4 2 = [0 2 1] x2, and the third column is divided by the standard deviation of x}:
N 4
$1 0 0 [var(x ”1/2 cov(xl, x2) cov(xl, X3)
1 1 2 1 2
l 0 2 i [var(x2)] / [var(x3)] /
l 1 _ 0 2 cov(x,,x) cov x ,x
.(‘)l’x= 1 to 2 11 0 2 1 so \ want“ #342
N ] 0 2 l [var(x1)] [var(x3)] /
0 cov(,\’3,xl) c0v(x3,x2) 1/2
2 1 g 1/5 1/2 [mid/‘3”
l *1 ‘1 0 [Var(x,)] [var(x2)]
Z:X’l(*)llx: 0 2 1
_1 v2 *1 Premultiplying SD 1/2 by I) 1/2 leads to
V'dri X1) COWX" X1) COW/‘1’“) 1 “0‘4le ‘2) COV(XIv 11)
s _ ( ——1——)Z'Z : (Kn/(X2, XI) var(,\‘2) COViXZv X3) [var(xl)var(,\2)]l/2 [var( xl)var(/\,)]l/2
( N — 1) C()V(.\'3, X1) cowl3» X2) Vaf(X3) l) 1/25” 1/2 = m, :(Wiﬁl;:ﬂcﬁ 1 ¥_COV(X2‘ A797,:
' [var(.x2)var(.il)] /’ [var(x2)var(.\3)]l/‘
l 6 5 1 cov(.r),x,) cov(x3,,xz)
: — 5 10 4 <  1/72, 1/2 1
3 ] 4 2 ““00”“ ‘1” [var(x‘)var(xl)] MA'I'RlX ALGEBRA RtiVllLW
458 . . . . ‘. ,' onal
The resulting matrix is the sample correlation matrix Wnathc'r‘iﬂ diaguus
' ' ‘ ' ' d x varia es. iese res
elements equal to the correlations.ofltfliifIZICanariu/mc matrix S is p“:— and
4 " ' dimension matrix. 1 . _ .
generdhli": liedngy 1) V2 where D ”2 is the diagonal matrix With the
postmu ip 7 ' ‘ ' ' ' is the
t dard deviations of x in its diagonal, then the resulting matrix
s an ' .
.. 1 le correlation matrix. . .~ ‘ .. .~ dwom.
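The whole computation, from raw scores to S to the correlation matrix D^{−1/2}SD^{−1/2}, can be replayed in a few lines. This sketch uses the appendix's own 4 × 3 example.

```python
import numpy as np

X = np.array([[ 2.0, 3.0, 1.0],
              [-1.0, 1.0, 1.0],
              [ 0.0, 4.0, 2.0],
              [-1.0, 0.0, 0.0]])
N = X.shape[0]
ones = np.ones((N, 1))

Z = X - ones @ (ones.T @ X) / N    # deviation-from-mean scores
S = (Z.T @ Z) / (N - 1)            # unbiased sample covariance matrix
print(S * 3)                       # 3S = [[6, 5, 1], [5, 10, 4], [1, 4, 2]]

# Standardize: D^(-1/2) S D^(-1/2) is the sample correlation matrix.
d_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(S)))
R = d_inv_sqrt @ S @ d_inv_sqrt
print(np.diag(R))                  # ones on the main diagonal
```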
The nonnegative integer power of square matrices occurs in the decomposition of effects in path analysis. It is defined as the number of times a matrix is multiplied by itself. For instance,

    B² = | 0    β12 | | 0    β12 | = | β12 β21     0     |
         | β21  0   | | β21  0   |   |    0     β21 β12  |
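A numeric stand-in for B confirms the pattern of B²; the coefficient values are arbitrary, chosen only for this sketch.

```python
import numpy as np

b12, b21 = 0.3, 0.5   # hypothetical coefficients
B = np.array([[0.0, b12],
              [b21, 0.0]])

B2 = B @ B            # equivalently, np.linalg.matrix_power(B, 2)
print(B2)             # diagonal elements b12*b21, off-diagonals zero
```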
A square matrix S has a scalar quantity called the determinant that is represented as |S| or det S. In the case of a 2 × 2 matrix the determinant is

    |S| = s11 s22 − s12 s21

If S is a 3 × 3 matrix, the determinant is

    | s11  s12  s13 |
    | s21  s22  s23 | = s11 s22 s33 − s11 s23 s32 − s12 s21 s33
    | s31  s32  s33 |   + s12 s23 s31 + s13 s21 s32 − s13 s22 s31

As the order of S increases, the formula for the determinant becomes more complicated. There is a general rule for calculating determinants for square matrices of any order. To explain this rule, the concepts of a minor and a cofactor must be defined. The minor of an element sij is the determinant of the matrix obtained when the ith row and jth column of a matrix are removed. Consider the preceding 3 × 3 matrix S. The minor with respect to s11, represented as |S11|, is

    |S11| = | s22  s23 | = s22 s33 − s23 s32
            | s32  s33 |

The minor of s22 is

    |S22| = | s11  s13 | = s11 s33 − s13 s31
            | s31  s33 |

The cofactor of the element sij is defined as (−1)^(i+j) times the minor of sij:

    Cij = (−1)^(i+j) |Sij|

The cofactors of each element of matrix S placed in the appropriate ijth location create a new matrix:

    | +|S11|  −|S12|  +|S13| |
    | −|S21|  +|S22|  −|S23| |
    | +|S31|  −|S32|  +|S33| |

The determinant of a matrix can be found by multiplying each element in any given row (or column) of the S matrix by the corresponding cofactor in the preceding matrix and then summing over all elements in the row (or column). For example, if we do this for the first row of S, we obtain

    s11|S11| − s12|S12| + s13|S13|
        = s11(s22 s33 − s23 s32) − s12(s21 s33 − s23 s31) + s13(s21 s32 − s22 s31)
        = s11 s22 s33 − s11 s23 s32
          − s12 s21 s33 + s12 s23 s31
          + s13 s21 s32 − s13 s22 s31

Note that this formula is identical to the earlier formula for the determinant of a 3 × 3 matrix. Slightly different arrangements of terms occur depending on the row or column selected for expansion. However, regardless of which formula is chosen, the determinant will be the same.

Useful properties of the determinant for square and conformable S and T and a scalar c include:

1. |S′| = |S|.
2. If S = cT, then |S| = c^q |T| (where q is the order of S).
3. If S has two identical rows (or columns), |S| = 0.
4. |ST| = |S||T|.

The determinant appears in the fitting functions for the estimators of structural equations. It also is useful in finding the rank and inverse of matrices.

The inverse of a square matrix S is that matrix S⁻¹ that, when S is pre- or postmultiplied by S⁻¹, produces the identity matrix, I:

    SS⁻¹ = S⁻¹S = I

The inverse of a matrix is calculated from the adjoint and the determinant of a matrix. The adjoint of a matrix is the transpose of the matrix of cofactors defined earlier. Using the 3 × 3 S matrix, the adjoint of S is

    adj S = | +|S11|  −|S21|  +|S31| |
            | −|S12|  +|S22|  −|S32| |
            | +|S13|  −|S23|  +|S33| |

The inverse matrix is

    S⁻¹ = (1/|S|) (adj S)

To illustrate the calculation of the inverse, consider the simple case of a two-variable covariance matrix:

    S = | 20  10 |
        | 10  20 |

    |S| = (20)(20) − (10)(10) = 400 − 100 = 300

    Matrix of cofactors of S = |  20  −10 |
                               | −10   20 |

    adj S = |  20  −10 |
            | −10   20 |

    S⁻¹ = (1/300) |  20  −10 |
                  | −10   20 |

Multiplying S⁻¹ by S yields a 2 × 2 identity matrix. Note that the inverse exists only if the determinant of S is nonzero; if a matrix has a zero determinant, it is called a singular matrix. Properties of the inverse for nonsingular and conformable matrices are the following:

1. (S′)⁻¹ = (S⁻¹)′.
2. (ST)⁻¹ = T⁻¹S⁻¹; (STU)⁻¹ = U⁻¹T⁻¹S⁻¹.

In manipulating the latent variable equations, we sometimes need to take inverses. In addition, the inverse appears in explanations of the fitting functions and in several other topics.

Another important property of a matrix is its rank. Properties of the rank include:

1. rank(S) ≤ min(a, b), where a is the number of rows of S and b is the number of columns.
2. rank(ST) ≤ min[rank(S), rank(T)].

A square matrix S has associated with it scalars e and vectors u that satisfy Su = eu. The scalars are called eigenvalues, or characteristic roots. (Often such an equation is represented as Ax = λx. I depart from this practice so that λ is not confused with the factor loadings, which use the same symbol.)
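The cofactor expansion, the adjoint-based inverse of the two-variable covariance matrix above, and the rank bound can all be checked numerically. The helper `minor` and the matrix T are assumptions introduced for this sketch.

```python
import numpy as np

def minor(M, i, j):
    # Determinant of M with row i and column j removed
    return np.linalg.det(np.delete(np.delete(M, i, axis=0), j, axis=1))

# Cofactor expansion along the first row: |S| = s11|S11| - s12|S12| + s13|S13|
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
det_expansion = (A[0, 0] * minor(A, 0, 0)
                 - A[0, 1] * minor(A, 0, 1)
                 + A[0, 2] * minor(A, 0, 2))
print(np.isclose(det_expansion, np.linalg.det(A)))   # True

# The appendix's 2 x 2 covariance matrix: S^-1 = (1/300) * adjoint
S = np.array([[20.0, 10.0],
              [10.0, 20.0]])
S_inv = np.linalg.inv(S)
print(S_inv * 300)                                   # [[20, -10], [-10, 20]]
print(np.allclose(S @ S_inv, np.eye(2)))             # True: S S^-1 = I

# (ST)^-1 = T^-1 S^-1, and rank(ST) <= min(rank(S), rank(T))
T = np.array([[1.0, 1.0],
              [0.0, 2.0]])
print(np.allclose(np.linalg.inv(S @ T),
                  np.linalg.inv(T) @ np.linalg.inv(S)))   # True
r = np.linalg.matrix_rank
print(r(S @ T) <= min(r(S), r(T)))                        # True
```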
The preceding equation may be rewritten as

    Su − eu = 0
    (S − eI)u = 0

Only if (S − eI) is singular does a nontrivial solution¹ for this equation exist. If (S − eI) is singular, then

    |S − eI| = 0

Solving this equation for e provides the eigenvalues.

To illustrate, suppose that S is a 2 × 2 correlation matrix:

    S = | 1.00  0.50 |
        | 0.50  1.00 |

The (S − eI) matrix is

    S − eI = | 1.00 − e    0.50    |
             |   0.50    1.00 − e  |

The determinant is

    |S − eI| = (1.00 − e)² − 0.25 = e² − 2e + 0.75

The two solutions, 1.5 and 0.5, are the eigenvalues of this 2 × 2 correlation matrix. Each eigenvalue, e, has a set of eigenvectors, u, associated with it. For example, the e of 1.5 leads to the following:

    (S − eI)u = 0

    | 1.00 − e    0.50    | | u1 | = | 0 |
    |   0.50    1.00 − e  | | u2 |   | 0 |

    | −0.50   0.50 | | u1 | = | 0 |
    |  0.50  −0.50 | | u2 |   | 0 |

    −0.50 u1 + 0.50 u2 = 0
     0.50 u1 − 0.50 u2 = 0

¹A trivial solution for e would exist if u = 0. As specified here, I assume that u is a nonzero vector.

From this you can see that u1 = u2 and that an infinite set of values would work as the eigenvector for the eigenvalue of 1.5.

Though the eigenvalues for the preceding example and for all real symmetric matrices are real numbers, this need not be true for nonsymmetric matrices. When an eigenvalue is a complex number, say z = a + ib, where a and b are real constants and i = √−1, we commonly refer to the modulus or norm of z, which is (a² + b²)^{1/2}.

Some useful properties of the eigenvalues for a symmetric or nonsymmetric square matrix S are:

1. A b × b matrix S has b eigenvalues (some may take the same value).
2. The product of all eigenvalues for S equals |S|.
3. The number of nonzero eigenvalues of S equals the rank of S.
4. The sum of the eigenvalues of S equals the tr(S).

Eigenvalues and eigenvectors play a large role in traditional factor analyses. In this book they are useful in the decomposition of effects in path analysis discussed in Chapter 8.

Quadratic forms are represented by x′Sx, with orders (1 × b)(b × b)(b × 1), so the result is a scalar. A matrix S is positive-definite if x′Sx is greater than zero for every x ≠ 0.

Occasionally, structural equation models analyzed with the LISREL program (Jöreskog and Sörbom 1984) may report that a matrix is not positive-definite. For instance, suppose that we analyze the following sample covariance
matrix S:

    S = | 7  3  4 |
        | 3  2  1 |
        | 4  1  3 |

S is not positive-definite, since x′Sx is zero for some x ≠ 0 (e.g., x′ = [1  −1  −1]). Indeed, S is singular (|S| = 0), and singular matrices are not positive-definite.

Consider the following three matrices:

    | 2  3 |      | −2  0 |      | 0  0 |
    | 3  1 |      |  0  1 |      | 0  3 |

Assume that these are estimates of the covariance matrix of the disturbances from two equations. None is positive-definite. The failure of the first two to be positive-definite indicates a problem. In the first case the covariance (= 3) and variances (2 and 1) imply an impossible correlation value (= 3/√2). The middle matrix has an impossible negative disturbance variance (= −2). Whether the nonpositive-definite nature of the last matrix is troublesome depends on whether the variance of the first disturbance should be zero. Identity relations (e.g., η1 = η2 + η3) or measurement without error (e.g., x1 = ξ1) are two situations where zero disturbance variances make sense. However, when zero is not a plausible value, the analyst must determine the source of this improbable value.

The vec operator is the operation of forming a vector from a matrix by stacking each column of a matrix one under the other. For instance:

    vec B = vec | 0    β12 | = |  0  |
                | β21  0   |   | β21 |
                               | β12 |
                               |  0  |

The vec operator appears in Chapter 8.
A Kronecker product (or direct product) of two matrices, S (p × q) and T (m × n), is defined as

    S ⊗ T = | s11 T  ···  s1q T |
            |  ⋮            ⋮   |
            | sp1 T  ···  spq T |

Each element of the left matrix, S, is multiplied by T to form a submatrix sij T. All of these submatrices together form the pm × qn matrix S ⊗ T. An example is:

    [γ11  γ12] ⊗ | β12 | = | γ11 β12   γ12 β12 |
                 | β21 |   | γ11 β21   γ12 β21 |

Kronecker products appear in the formulas for the asymptotic standard errors of indirect effects in Chapter 8.
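The example above corresponds directly to numpy's kron; the coefficient values are arbitrary stand-ins for γ11, γ12, β12, and β21 in this sketch.

```python
import numpy as np

g11, g12 = 0.7, 0.2      # stand-ins for gamma_11, gamma_12
b12, b21 = 0.4, 0.9      # stand-ins for beta_12, beta_21

G = np.array([[g11, g12]])       # 1 x 2
B = np.array([[b12],
              [b21]])            # 2 x 1

K = np.kron(G, B)                # (1*2) x (2*1) = 2 x 2
print(K)

expected = np.array([[g11 * b12, g12 * b12],
                     [g11 * b21, g12 * b21]])
print(np.allclose(K, expected))  # True
```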