Chapter 1

Vectors, Tensors and Linear Transformations

Consider a vector x in the plane R^2 written in terms of its components x^1 and x^2:

    x = x^1 e_1 + x^2 e_2 = x^i e_i .    (1.1)

The vectors e_1 and e_2 in (1.1) form what is called a basis of the linear vector space R^2, and the components of a vector x are determined by the choice of the basis. In Fig. 1.1, we have chosen (e_1, e_2) to be an orthonormal basis. This term simply means that e_1 and e_2 are each of unit length and are orthogonal (perpendicular) to each other.
Fig. 1.1

Notice that in (1.1) we have used a superscript (upper index) for components and a subscript (lower index) for basis vectors, and repeated indices (one upper and one lower) are understood to be summed over (from 1 to the dimension of the vector space under consideration, which is 2 in the present case). This is called the Einstein summation convention. The orthonormal basis (e_1, e_2) is called a reference frame in physics. Note also that we have not used an arrow (or a boldface type) to represent a vector, in order to avoid excessive notation. Now consider the same vector x with components x'^i with respect to a rotated orthonormal frame (e'_1, e'_2):

    x = x'^i e'_i .    (1.2)
It is simple to show that

    (x')^1 = (cos θ) x^1 + (sin θ) x^2 ,    (x')^2 = (−sin θ) x^1 + (cos θ) x^2 .    (1.3)

Fig. 1.2

Exercise 1.1: Verify (1.3) by considering the geometry of Fig. 1.2.

Eq. (1.3) can also be obtained by working with the basis vectors, instead of the components, directly. It is evident that

    e_1 = (cos θ) e'_1 − (sin θ) e'_2 ,    e_2 = (sin θ) e'_1 + (cos θ) e'_2 .    (1.4)

Thus (1.1) and (1.2) together imply that

    x^1 cos θ e'_1 − x^1 sin θ e'_2 + x^2 sin θ e'_1 + x^2 cos θ e'_2 = x'^1 e'_1 + x'^2 e'_2 .    (1.5)

Comparison of the coefficients of e'_1 and e'_2 on both sides of (1.5) immediately yields (1.3). Let us now write (1.3) and (1.4) in matrix notation. Equations (1.3) can be
written as the single matrix equation

    (x'^1, x'^2) = (x^1, x^2) | cos θ   −sin θ |
                              | sin θ    cos θ | ,    (1.6)

while Eqs. (1.4) can be written as

    | e_1 |   | cos θ   −sin θ | | e'_1 |
    | e_2 | = | sin θ    cos θ | | e'_2 | .    (1.7)
Denote the 2 × 2 matrix in both (1.6) and (1.7) by

    (a_i^j) = | a_1^1   a_1^2 |  =  | cos θ   −sin θ |
              | a_2^1   a_2^2 |     | sin θ    cos θ | ,    (1.8)

where the lower index is the row index and the upper index is the column index. Eqs. (1.6) and (1.7) can then be compactly written using the Einstein summation convention as

    x'^i = a_j^i x^j ,    (1.9)

    e_i = a_i^j e'_j .    (1.10)

Note again that repeated pairs of indices, one upper and one lower, are summed
over. Eqs. (1.9) and (1.10) are equivalent, in the sense that either one is a consequence of the other. By way of illustrating the usefulness of the index notation, we derive (1.9) from (1.10) as follows:

    x = x^j e_j = x^j a_j^i e'_i = x'^i e'_i .    (1.11)

The last equality implies (1.9).
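As a quick numerical check (not part of the original text), the index form (1.9) can be compared against the explicit formulas (1.3); the angle and components below are arbitrary choices:

```python
import math

theta = math.pi / 6  # rotation angle of the frame (30 degrees)

# Matrix (1.8): lower index = row, upper index = column.
a = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

x = [2.0, 3.0]  # components x^1, x^2 in the original frame

# Index form (1.9): x'^i = a_j^i x^j (sum over the lower, row index j).
x_new = [sum(a[j][i] * x[j] for j in range(2)) for i in range(2)]

# Explicit form (1.3), for comparison.
x1_new = math.cos(theta) * x[0] + math.sin(theta) * x[1]
x2_new = -math.sin(theta) * x[0] + math.cos(theta) * x[1]

assert abs(x_new[0] - x1_new) < 1e-12
assert abs(x_new[1] - x2_new) < 1e-12

# The length of the vector is unchanged by the rotation of the frame.
assert abs(math.hypot(*x_new) - math.hypot(*x)) < 1e-12
```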
In matrix notation, (1.9) and (1.10) can be written respectively as (note the order in which the matrices occur)

    x' = x A ,    (1.12)
    e = A e' ,    (1.13)

where A is the matrix in (1.8), e and e' are the column matrices in (1.7), and x and x' are the row matrices in (1.6). The above equations imply that

    x = x' A^{-1} ,    (1.14)
    e' = A^{-1} e ,    (1.15)

where A^{-1} is the inverse matrix of A. The above transformation properties for basis vectors and the components of vectors are satisfied for any invertible matrix. These are summarized in the following table for a general vector v = v^i e_i = v'^i e'_i.

    Table 1.1
    components:      v'^i = a_j^i v^j ,    v^i = (a^{-1})_j^i v'^j
    basis vectors:   e_i = a_i^j e'_j ,    e'_i = (a^{-1})_i^j e_j

In all the indexed quantities introduced above, upper indices are called contravariant indices and lower indices are called covariant indices. These are distinguished because tensorial objects involving upper and lower indices transform differently under a change of basis, as is evident from Table 1.1. Note that the matrix A [given by (1.8)] satisfies the following conditions:

    A^{-1} = A^T ,    (1.16)
    det(A) = 1 ,    (1.17)

where A^T denotes the transpose of A. In general, an n × n matrix that satisfies (1.16) is called an orthogonal matrix, while one that satisfies both (1.16) and (1.17) is called a special orthogonal matrix. Both of these types of
matrices are very important in physics applications. Special orthogonal matrices represent pure rotations, while orthogonal matrices represent pure rotations, inversions, or rotations plus inversions (in n-dimensional space if the matrix is n × n). The set of all n × n orthogonal matrices forms a group, called the orthogonal group O(n). The set of all n × n special orthogonal matrices forms a subgroup of O(n), called the special orthogonal group SO(n). Rotations and inversions are specific examples of linear transformations on
vector spaces. A matrix A represents a linear map (transformation) (also denoted by A) on a vector space V. Mathematically we write A : V → V. The property of linearity is specified by the following condition:

    A(a x + b y) = a A x + b A y ,  for all a, b ∈ R, x, y ∈ V,    (1.18)

where R denotes the field of real numbers. Property (1.16) in fact follows from the following equivalent definition of an
orthogonal transformation: An orthogonal transformation is one which leaves the length of a vector invariant. This can be seen as follows. Suppose a linear transformation A sends a vector v ∈ V to another vector v' ∈ V, that is, v' = Av, or, in terms of components, v'^i = a_j^i v^j. On the other hand, the square of the length of v is given in terms of its components by

    |v|^2 = Σ_j v^j v^j .    (1.19)

Then orthogonality of A implies that

    Σ_j v^j v^j = Σ_i v'^i v'^i = Σ_{i,j,k} a_j^i a_k^i v^j v^k
                = Σ_{i,j,k} a_j^i (A^T)_i^k v^j v^k = Σ_{j,k} (A A^T)_j^k v^j v^k .    (1.20)

Comparing the leftmost expression with the rightmost we see that (A A^T)_j^k = δ_j^k, or A A^T = 1. Property (1.16) then follows.
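The chain of equalities in (1.20) can be spot-checked numerically. A minimal Python sketch (our own illustration, not from the text) verifying both A A^T = 1 and the invariance of the length under v'^i = a_j^i v^j:

```python
import math

def mat_mul(p, q):
    """Multiply two 2x2 matrices."""
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(p):
    return [[p[j][i] for j in range(2)] for i in range(2)]

def length(w):
    return math.sqrt(sum(c * c for c in w))

theta = 0.7  # arbitrary rotation angle
rot = [[math.cos(theta), -math.sin(theta)],
       [math.sin(theta),  math.cos(theta)]]

# Property (1.16): A A^T = 1.
prod = mat_mul(rot, transpose(rot))
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))

# Length invariance: v'^i = a_j^i v^j has the same length as v.
v = [1.5, -2.5]
v_new = [sum(rot[j][i] * v[j] for j in range(2)) for i in range(2)]
assert abs(length(v_new) - length(v)) < 1e-12
```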
We recall the following properties of determinants:

    det(AB) = det A · det B ,    (1.21)
    det(A^T) = det(A) .    (1.22)
If A is orthogonal, we see that (det(A))^2 = 1, which implies that det(A) = ±1. Orthogonal matrices with determinant equal to −1 represent inversions or inversions plus rotations. An inversion changes the orientation (handedness) of a coordinate system, which a pure rotation never does (see Fig. 1.3).

Fig. 1.3

Exercise 1.2: Write down explicitly the 3 × 3 orthogonal matrices representing rotations in 3-dimensional Euclidean space by 1) 45° about the z-axis, 2) 45° about the x-axis, and 3) 45° about the y-axis.
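A candidate answer to part 1) of Exercise 1.2 can be verified numerically against properties (1.16) and (1.17). A minimal Python sketch (the helper functions are our own; the x- and y-axis cases are analogous):

```python
import math

def rot_z(theta):
    """3x3 matrix for a rotation by theta about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def mat_mul(p, q):
    n = len(p)
    return [[sum(p[i][k] * q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(p):
    n = len(p)
    return [[p[j][i] for j in range(n)] for i in range(n)]

def det3(m):
    # Cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

R = rot_z(math.pi / 4)  # 45 degrees

# R satisfies (1.16) and (1.17), so it lies in SO(3).
RRt = mat_mul(R, transpose(R))
assert all(abs(RRt[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(3) for j in range(3))
assert abs(det3(R) - 1.0) < 1e-12
```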
Figs. 1.4a, 1.4b, 1.4c

Exercise 1.3: Show that SO(2) is a commutative group, that is, any two 2 × 2 matrices in SO(2) commute with each other. (Note that O(2) itself is not commutative: a reflection and a rotation do not in general commute.)

How does one obtain a concrete matrix representation of an abstract
2 x 2 matrices in 0(2) commute with each other. How does one obtain a concrete matrix representation of an abstract
linear transformation A : V ——> V ? The answer is that a particular matrix
representation arises from a particular choice of basis {6.} for V. (Now we can
be more general and consider an n—dimensional vector space V). For any a: E V
given by a; = miei, the linearity condition (1.18) implies that A(:r) = miA(e1) . (1.23) Thus the action of A on any a: E V is completely speciﬁed by A(ei),i = l, . . . ,n.
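Eq. (1.23) says that a linear map is completely determined by its images of the basis vectors. A small Python sketch (the particular images chosen below are hypothetical) confirming that the reconstruction respects linearity (1.18):

```python
# Images of the basis vectors under a (hypothetical) linear map A on R^2,
# expressed in components: A(e_1) and A(e_2).
A_e1 = [2.0, 1.0]
A_e2 = [-1.0, 3.0]

def apply_A(x):
    """A(x) = x^i A(e_i), Eq. (1.23): linearity reduces everything to
    the images of the basis vectors."""
    return [x[0] * A_e1[k] + x[1] * A_e2[k] for k in range(2)]

x = [4.0, -2.0]
y = [1.0, 5.0]
a, b = 3.0, -0.5

# Linearity check, Eq. (1.18): A(a x + b y) = a A(x) + b A(y).
lhs = apply_A([a * x[k] + b * y[k] for k in range(2)])
rhs = [a * apply_A(x)[k] + b * apply_A(y)[k] for k in range(2)]
assert all(abs(lhs[k] - rhs[k]) < 1e-12 for k in range(2))
```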
Since A(e_i) is a vector in V, it can be expressed as a linear combination of the e_i, that is,

    A(e_i) = a_i^j e_j ,    (1.24)

where the a_i^j are scalars in the field R. Similar to (1.8), the quantities a_i^j, i = 1, ..., n, j = 1, ..., n, can be displayed as an n × n matrix

    (a_i^j) = | a_1^1  a_1^2  ...  a_1^n |
              | a_2^1  a_2^2  ...  a_2^n |
              |  ...    ...   ...   ...  |
              | a_n^1  a_n^2  ...  a_n^n | .    (1.25)

Now suppose that, under the action of A, x ∈ V is transformed into x' ∈ V. So

    x' = A(x) = A(x^i e_i) = x^i A(e_i) = x^i a_i^j e_j = x^j a_j^i e_i ,    (1.26)

where in the last equality we have performed the interchange (i ↔ j) since both
i and j are dummy indices that are summed over. Eq. (1.26) then implies

    x'^i = a_j^i x^j ,    (1.27)

which is formally the same as (1.9). Thus we have shown explicitly how the matrix representation of a linear transformation depends on the choice of a basis set.
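The passage from a map to its matrix, Eqs. (1.24) and (1.27), can be made concrete: read the entries a_i^j off from A(e_i) and check that index contraction reproduces the map. A Python illustration with an arbitrarily chosen map (our own example, not from the text):

```python
# A (hypothetical) linear map on R^2 given directly as a function.
def A(x):
    return [2.0 * x[0] - 1.0 * x[1],
            1.0 * x[0] + 3.0 * x[1]]

# Read off the matrix (a_i^j) from A(e_i) = a_i^j e_j, Eq. (1.24):
# row i holds the components of A(e_i), so a[i][j] = j-th component of A(e_i).
e = [[1.0, 0.0], [0.0, 1.0]]
a = [A(e[i]) for i in range(2)]

# Eq. (1.27): x'^i = a_j^i x^j (sum over the lower, row index j).
x = [4.0, -2.0]
x_from_matrix = [sum(a[j][i] * x[j] for j in range(2)) for i in range(2)]

assert all(abs(x_from_matrix[i] - A(x)[i]) < 1e-12 for i in range(2))
```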
Let us now investigate the transformation properties of the matrix representation of a linear transformation A under a change of basis: e_i = s_i^j e'_j. The required matrix a' is given by [cf. (1.24)]

    A(e'_i) = a'_i^j e'_j .    (1.28)

Using the equation for e'_i in terms of e_j in Table 1.1, we have

    A(e'_i) = A((s^{-1})_i^l e_l) = (s^{-1})_i^l A(e_l) = (s^{-1})_i^l a_l^k e_k = (s^{-1})_i^l a_l^k s_k^j e'_j ,    (1.29)

where in the second equality we have used the linearity property of A, in the third equality, Eq. (1.24), and in the fourth equality, Eq. (1.10) (with a replaced by s). Comparison with (1.28) gives the desired result:

    a'_i^j = (s^{-1})_i^l a_l^k s_k^j .    (1.30)

In matrix notation (1.30) can be written

    a' = s^{-1} a s .    (1.31)

The transformation a → a' given by (1.31) is called a similarity transformation. These transformations are of great importance in physics. Two matrices
related by an invertible matrix s as in (1.31) are said to be similar.

According to (1.30) the upper index of a matrix (a_i^j) transforms like the upper index of a contravariant vector v^i (cf. Table 1.1), and the lower index of (a_i^j) transforms like the lower index of a basis vector e_i (cf. Table 1.1 also). In general a multi-indexed object T^{i_1 ... i_r}_{j_1 ... j_s} (with r upper indices and s lower indices) which transforms under a change of basis as

    T'^{i_1 ... i_r}_{j_1 ... j_s} = s_{k_1}^{i_1} ... s_{k_r}^{i_r} (s^{-1})_{j_1}^{l_1} ... (s^{-1})_{j_s}^{l_s} T^{k_1 ... k_r}_{l_1 ... l_s}    (1.32)

is called an (r, s)-type tensor, where r is called the contravariant order and s the covariant order of the tensor. Thus a matrix (a_i^j) which transforms as (1.30) is a (1,1)-type tensor, and a vector v^i is a (1,0)-type tensor. (r, s)-type tensors with r ≠ 0 and s ≠ 0 are called tensors of mixed type. A (0,0)-type tensor is a scalar. The term "covariant" means that the transformation is the same as that of the basis vectors: e'_i = (s^{-1})_i^j e_j, while "contravariant" means that the indexed quantity transforms according to
the inverse of the transformation of the basis vectors.

Let us now review the notions of scalar products and cross products. The scalar product of two vectors u and v (in the same vector space) expressed in terms of components (with respect to a certain choice of basis) is defined by

    (u, v) ≡ u · v ≡ δ_ij u^i v^j = Σ_i u^i v^i = u^i v_i ,    (1.33)

where the Kronecker delta symbol δ_ij is a (0,2)-type tensor, and can be regarded as a metric in Euclidean space R^3. Loosely speaking, a metric in a space is a (0,2)-type symmetric tensor field which gives a prescription for measuring lengths and angles in that space. For example, the norm (length) of a vector v is defined to be

    ||v|| ≡ √(v, v) = (δ_ij v^i v^j)^{1/2} .    (1.34)

Referring to Fig. 1.5, in which the basis vector e_1 is chosen to lie along the direction of v, the angle between two vectors u and v is given in terms of the scalar product by

    u · v = (u cos θ, u sin θ) · (v, 0) = uv cos θ ,    (1.35)

where u and v represent the magnitudes of the vectors u and v, respectively.
The cross product (also called the vector product) can be defined in terms of the so-called Levi-Civita tensor, which is a (1,2)-type tensor given as follows:

    ε^i_{jk} =   0 ,  if i, j, k are not all distinct ;
                +1 ,  if (ijk) is an even permutation of (123) ;    (1.36)
                −1 ,  if (ijk) is an odd permutation of (123) .

The cross product of two vectors A and B is then defined by

    A × B = (ε^i_{jk} A^j B^k) e_i .    (1.37)

The cross product gives a Lie algebra structure to the vector space R^3, since this product satisfies the so-called Jacobi identity:

    A × (B × C) + B × (C × A) + C × (A × B) = 0 .    (1.38)

The metric δ_ij and its inverse δ^ij can be used to raise and lower indices of tensors in R^3. For example,

    ε_{klm} = δ_{kn} ε^n_{lm} .    (1.39)

In fact, because of the special properties of the Kronecker delta, one can use it
to raise and lower indices "with impunity":

    ε_{ijk} = ε^i_{jk} = ε^{ij}_k = ε^{ijk} = ... .    (1.40)

Exercise 1.4: Use the definition of the cross product [(1.37)] to show that

    A × B = −B × A .    (1.41)

Exercise 1.5: Use (1.37) to show that

    e_1 × e_2 = e_3 ,    e_2 × e_3 = e_1 ,    e_3 × e_1 = e_2 .    (1.42)

The Levi-Civita tensor satisfies the following very useful contraction property:

    ε_{kij} ε_{klm} = δ_{il} δ_{jm} − δ_{im} δ_{jl} .    (1.43)

Exercise 1.6: Verify the above equation by using the defining properties of the Levi-Civita tensor given by (1.36) and the properties of the Kronecker delta.
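Exercise 1.6 can also be done by brute force, summing over all index values; the following Python sketch (not in the original text) checks the contraction identity (1.43) exhaustively:

```python
from itertools import product

def eps(i, j, k):
    """Levi-Civita symbol, Eq. (1.36), with indices 1..3."""
    if len({i, j, k}) < 3:
        return 0
    return 1 if (i, j, k) in {(1, 2, 3), (2, 3, 1), (3, 1, 2)} else -1

def delta(i, j):
    return 1 if i == j else 0

# Contraction property (1.43):
#   sum_k eps_{kij} eps_{klm} = delta_{il} delta_{jm} - delta_{im} delta_{jl}
for i, j, l, m in product(range(1, 4), repeat=4):
    lhs = sum(eps(k, i, j) * eps(k, l, m) for k in range(1, 4))
    rhs = delta(i, l) * delta(j, m) - delta(i, m) * delta(j, l)
    assert lhs == rhs
```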
One can easily see that 1 2 3
a1 a1 a1
det(aj)— 1 2 3 —£ aiaj 1“ (144)
1““CL2 £12 a2~uk123 
1 2 3
as as ‘13 Generalizing to an n x n matrix one has det(a{) : Z (sgna)a‘17(l)...af1(”) = Z (sgn U)a}7(1) moan)
(resn aESn (1.45) : Eilmin a? . . . air ,
where Sn is the permutaton group of n integers, sgna = +1(—1) if a is an even(odd) permutation of (123.. .n), and shutn is the generalized Levi
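Eq. (1.44) can be checked against a cofactor expansion. A short Python sketch (zero-based indices, for convenience with lists; the test matrix is arbitrary):

```python
def eps(i, j, k):
    """Levi-Civita symbol with indices 0..2 (shifted from (1.36) to
    zero-based Python indexing)."""
    if len({i, j, k}) < 3:
        return 0
    return 1 if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)} else -1

def det_levi_civita(a):
    """Eq. (1.44): det(a) = eps_{ijk} a_1^i a_2^j a_3^k."""
    return sum(eps(i, j, k) * a[0][i] * a[1][j] * a[2][k]
               for i in range(3) for j in range(3) for k in range(3))

def det_cofactor(m):
    # Standard cofactor expansion along the first row, for comparison.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

a = [[2.0, -1.0, 0.0],
     [1.0,  3.0, 2.0],
     [0.5,  0.0, 1.0]]

assert abs(det_levi_civita(a) - det_cofactor(a)) < 1e-12
```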
Civita tensor and is deﬁned similarly to (1.36). Equations(1.44) and (1.37) imply that A x B = 53k eiAjB'“ = A1 A2 A3 . (1.46) B1 B2 B3 We can use the contraction property of the Levi—Civita tensor (1.43) and the
definition of the scalar product (1.33) to calculate the magnitude of the cross product as follows. (AxB)(A><B) :Z(AXB)i(AxB)i ’L = Z eijkAin simnAmBn = eijkeimnAjBkAmB”
i . (1.47)
: (aims...) * (SJmam) AJBkAmB"
: AjAjBnB” — (Aij)(AkBk) = A‘~’B2 — (A ~ B)2
= 1412B2 — 1412B2 c052 (9 2 Alng sin2 6 .
HenceA x B] = AB sin6’ , (1.48) where 6 is the angle between the vectors A and B. Furthermore, using the
properties of the Levi-Civita tensor [(1.36)], we see that

    (A × B) · A = Σ_i ε_{ijk} A^j B^k A^i = ε_{ijk} A^i A^j B^k = 0 ,    (1.49)

since ε_{ijk} is antisymmetric in i and j while A^i A^j is symmetric. Similarly,

    (A × B) · B = 0 .    (1.50)

Thus the cross product of two vectors is always perpendicular to the plane defined by the two vectors. The right-hand rule for the direction of the cross product follows from (1.42). As a further example in the use of the contraction property of the Levi-Civita
tensor, we will prove the following very useful identity, known as the "bac minus cab" identity in vector arithmetic:

    A × (B × C) = B (A · C) − C (A · B) .    (1.51)

We have

    (A × (B × C))^i = ε_{ijk} A^j (B × C)^k
                    = ε_{ijk} A^j ε_{klm} B^l C^m = ε_{ijk} ε_{klm} A^j B^l C^m = ε_{kij} ε_{klm} A^j B^l C^m
                    = (δ_{il} δ_{jm} − δ_{im} δ_{jl}) A^j B^l C^m    (1.52)
                    = (δ_{il} B^l)(δ_{jm} A^j C^m) − (δ_{im} C^m)(δ_{jl} A^j B^l)
                    = B^i A^m C^m − C^i A^l B^l = B^i (A · C) − C^i (A · B) .

Exercise 1.7: Use the properties of the Levi-Civita tensor to show that

    A · (B × C) = B · (C × A) = C · (A × B) .    (1.53)

The quantity in (1.53) can be seen to be equal to the oriented volume of the parallelepiped formed by the vectors A, B and C (in the order given) (see Fig. 1.6). Indeed, referring to Fig. 1.6,

    oriented volume = y × (area of parallelogram formed by A and B)
                    = y AB sin θ = C cos φ · AB sin θ = C · (A × B) ,    (1.54)

where y = C cos φ is the height of the parallelepiped.

Exercise 1.8: Use the "bac minus cab" identity (1.51) to show that

    (A × B) × (C × D) = C (D · (A × B)) − D (C · (A × B))
                      = B (A · (C × D)) − A (B · (C × D)) .    (1.55)

Exercise 1.9: Use the "bac minus cab" identity and (1.53) to show that

    (A × B) · (C × D) = (A · C)(B · D) − (A · D)(B · C) .    (1.56)
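These product identities are easy to sanity-check numerically; a Python sketch with arbitrarily chosen vectors (not part of the original text):

```python
def cross(u, v):
    """Cross product of Eq. (1.37), written out in components."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(u[k] * v[k] for k in range(3))

A = [1.0, 2.0, -1.0]
B = [0.5, -3.0, 2.0]
C = [2.0, 1.0, 4.0]
D = [-1.0, 0.0, 3.0]

# Cyclic symmetry of the triple product, Eq. (1.53).
t1 = dot(A, cross(B, C))
assert abs(t1 - dot(B, cross(C, A))) < 1e-12
assert abs(t1 - dot(C, cross(A, B))) < 1e-12

# "bac minus cab" identity, Eq. (1.51).
lhs = cross(A, cross(B, C))
rhs = [B[k] * dot(A, C) - C[k] * dot(A, B) for k in range(3)]
assert all(abs(lhs[k] - rhs[k]) < 1e-12 for k in range(3))

# Scalar quadruple product, Eq. (1.56).
assert abs(dot(cross(A, B), cross(C, D))
           - (dot(A, C) * dot(B, D) - dot(A, D) * dot(B, C))) < 1e-12
```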
This note was uploaded on 06/10/2008 for the course PHY 322 taught by Professor Lam during the Spring '08 term at Cal Poly Pomona.