Likewise

|W⟩ ↔ [w₁, w₂, ..., wₙ]ᵀ  in this basis   (1.2.8)

The inner product ⟨V|W⟩ is given by the matrix product of the transpose conjugate of the column vector representing |V⟩ with the column vector representing |W⟩:

⟨V|W⟩ = [v₁*, v₂*, ..., vₙ*][w₁, w₂, ..., wₙ]ᵀ = Σᵢ vᵢ*wᵢ   (1.2.9)

1.3. Dual Spaces and the Dirac Notation

There is a technical point here. The inner product is a number we are trying to generate from two kets |V⟩ and |W⟩, which are both represented by column vectors in some basis. Now there is no way to make a number out of two columns by direct matrix multiplication, but there is a way to make a number by matrix multiplication of a row times a column. Our trick for producing a number out of two columns has been to associate a unique row vector with one column (its transpose conjugate) and form its matrix product with the column representing the other. This has the feature that the answer depends on which of the two vectors we convert to the row, the two choices ⟨V|W⟩ and ⟨W|V⟩ leading to answers related by complex conjugation.

But one can also take the following alternate view. Column vectors are concrete
manifestations of an abstract vector |V⟩, or ket, in a basis. We can also work backward and go from the column vectors to the abstract kets. But then it is similarly possible to work backward and associate with each row vector an abstract object ⟨W|, called bra W. Now we can name the bras as we want, but let us do the following. Associated with every ket |V⟩ is a column vector. Let us take its adjoint, or transpose conjugate, and form a row vector. The abstract bra associated with this will bear the same label, i.e., it will be called ⟨V|. Thus there are two vector spaces, the space of kets and a dual space of bras, with a ket for every bra and vice versa (the components being related by the adjoint operation). Inner products are really defined only between bras and kets and hence between elements of two distinct but related vector spaces. There is a basis of vectors |i⟩ for expanding kets and a similar basis ⟨i| for expanding bras. The basis ket |i⟩ is represented in the basis we are using by a column vector with all zeros except for a 1 in the ith row, while the basis bra ⟨i| is a row vector with all zeros except for a 1 in the ith column.
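In a concrete basis these rules are easy to verify numerically. The following sketch (using numpy, with arbitrarily chosen components) represents kets as column vectors and bras as their transpose conjugates:

```python
import numpy as np

# Kets as column vectors in a chosen orthonormal basis (components may be complex).
V = np.array([[1 + 2j], [3 - 1j], [0 + 1j]])   # |V>
W = np.array([[2 + 0j], [1 + 1j], [4 - 2j]])   # |W>

# The bra <V| is the transpose conjugate (adjoint) of the column for |V>.
bra_V = V.conj().T

# Inner product <V|W>: a row times a column gives a single number.
inner_VW = (bra_V @ W).item()
inner_WV = (W.conj().T @ V).item()

# The two choices are related by complex conjugation: <W|V> = <V|W>*.
assert np.isclose(inner_WV, np.conj(inner_VW))

# <V|V> is real and non-negative (it is the squared norm).
assert np.isclose((V.conj().T @ V).item().imag, 0.0)
```

Note that the asymmetry discussed above is visible in the code: a number emerges only after one of the two columns has been converted to a row.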
All this may be summarized as follows:

|V⟩ ↔ [v₁, v₂, ..., vₙ]ᵀ,    [v₁*, v₂*, ..., vₙ*] ↔ ⟨V|   (1.3.1)

where ↔ means "within a basis."

There is, however, nothing wrong with the first viewpoint of associating a scalar product with a pair of columns or kets (making no reference to another dual space) and living with the asymmetry between the first and second vector in the inner product (which one to transpose conjugate). If you found the above discussion heavy going, you can temporarily ignore it. The only thing you must remember is that in the case of a general nonarrow vector space:

• Vectors can still be assigned components in some orthonormal basis, just as with arrows, but these may be complex.
• The inner product of any two vectors is given in terms of those components by Eq. (1.2.5). This product obeys all the axioms.

Expansion of Vectors in an Orthonormal Basis

Suppose we wish to expand a vector |V⟩ in an orthonormal basis. To find the components that go into the expansion we proceed as follows. We take the dot product of both sides of the assumed expansion with |j⟩ (or ⟨j| if you are a purist):

|V⟩ = Σᵢ vᵢ|i⟩   (1.3.2)

⟨j|V⟩ = Σᵢ vᵢ⟨j|i⟩   (1.3.3)

      = vⱼ   (1.3.4)

i.e., to find the jth component of a vector we take the dot product with the jth unit vector, exactly as with arrows. Using this result we may write

|V⟩ = Σᵢ |i⟩⟨i|V⟩   (1.3.5)

Let us make sure the basis vectors look as they should. If we set |V⟩ = |j⟩ in Eq. (1.3.5), we find the correct answer: the ith component of the jth basis vector is δᵢⱼ. Thus for example the column representing basis vector number 4 will have a 1 in the 4th row and zero everywhere else. The abstract relation

|V⟩ = Σᵢ vᵢ|i⟩   (1.3.6)

becomes in this basis

[v₁]      [1]      [0]
[v₂] = v₁ [0] + v₂ [1] + ···   (1.3.7)
[ ⋮]      [⋮]      [⋮]
[vₙ]      [0]      [0]
1.3.1. Adjoint Operation

We have seen that we may pass from the column representing a ket to the row representing the corresponding bra by the adjoint operation, i.e., transpose conjugation. Let us now ask: if ⟨V| is the bra corresponding to the ket |V⟩, what bra corresponds to a|V⟩, where a is some scalar? By going to any basis it is readily found that

a|V⟩ → [av₁, av₂, ..., avₙ]ᵀ → [a*v₁*, a*v₂*, ..., a*vₙ*] → ⟨V|a*   (1.3.8)

It is customary to write a|V⟩ as |aV⟩ and the corresponding bra as ⟨aV|. What we have found is that

⟨aV| = ⟨V|a*   (1.3.9)

Since the relation between bras and kets is linear, we can say that if we have an equation among kets such as

a|V⟩ = b|W⟩ + c|Z⟩ + ···   (1.3.10)

this implies another one among the corresponding bras:

⟨V|a* = ⟨W|b* + ⟨Z|c* + ···   (1.3.11)

The two equations above are said to be adjoints of each other. Just as any equation involving complex numbers implies another obtained by taking the complex conjugates of both sides, an equation between (bras) kets implies another one between (kets) bras. If you think in a basis, you will see that this follows simply from the fact that if two columns are equal, so are their transpose conjugates.

Here is the rule for taking the adjoint:

To take the adjoint of a linear equation relating kets (bras), replace every ket (bra) by its bra (ket) and complex conjugate all coefficients.

We can extend this rule as follows. Suppose we have an expansion for a vector:

|V⟩ = Σᵢ vᵢ|i⟩   (1.3.12)

in terms of basis vectors. The adjoint is

⟨V| = Σᵢ ⟨i|vᵢ*

Recalling that vᵢ = ⟨i|V⟩ and vᵢ* = ⟨V|i⟩, it follows that the adjoint of

|V⟩ = Σᵢ |i⟩⟨i|V⟩   (1.3.13)

is

⟨V| = Σᵢ ⟨V|i⟩⟨i|   (1.3.14)

from which comes the rule:

To take the adjoint of an equation involving bras and kets and coefficients, reverse the order of all factors, exchanging bras and kets and complex conjugating all coefficients.

Gram–Schmidt Theorem

Let us now take up the Gram–Schmidt procedure for converting a linearly
independent basis into an orthonormal one. The basic idea can be seen by a simple example. Imagine the two-dimensional space of arrows in a plane. Let us take two nonparallel vectors, which qualify as a basis. To get an orthonormal basis out of these, we do the following:

• Rescale the first by its own length, so it becomes a unit vector. This will be the first basis vector.
• Subtract from the second vector its projection along the first, leaving behind only the part perpendicular to the first. (Such a part will remain since by assumption the vectors are nonparallel.)
• Rescale the leftover piece by its own length. We now have the second basis vector; it is orthogonal to the first and of unit length.

This simple example tells the whole story behind this procedure, which will now be discussed in general terms in the Dirac notation.

Let |I⟩, |II⟩, ... be a linearly independent basis. The first vector of the orthonormal basis will be

|1⟩ = |I⟩/|I|,  where |I| = ⟨I|I⟩¹ᐟ²

Clearly

⟨1|1⟩ = ⟨I|I⟩/|I|² = 1

As for the second vector in the basis, consider

|2'⟩ = |II⟩ − |1⟩⟨1|II⟩

which is |II⟩ minus the part pointing along the first unit vector. (Think of the arrow example as you read on.) Not surprisingly it is orthogonal to the latter:

⟨1|2'⟩ = ⟨1|II⟩ − ⟨1|1⟩⟨1|II⟩ = 0

We now divide |2'⟩ by its norm to get |2⟩, which will be orthogonal to the first and normalized to unity. Finally, consider

|3'⟩ = |III⟩ − |1⟩⟨1|III⟩ − |2⟩⟨2|III⟩

which is orthogonal to both |1⟩ and |2⟩. Dividing by its norm we get |3⟩, the third member of the orthonormal basis. There is nothing new with the generation of the rest of the basis.

Where did we use the linear independence of the original basis? What if we had started with a linearly dependent basis? Then at some point a vector like |2'⟩ or |3'⟩ would have vanished, putting a stop to the whole procedure. On the other hand, linear independence will assure us that such a thing will never happen, since its happening amounts to having a nontrivial linear combination of linearly independent vectors that adds up to the null vector. (Go back to the equations for |2'⟩ or |3'⟩ and satisfy yourself that these are linear combinations of the old basis vectors.)

Exercise 1.3.1. Form an orthonormal basis in two dimensions starting with A = 3i + 4j and B = 2i − 6j. Can you generate another orthonormal basis starting with these two vectors? If so, produce another.
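The three bullet steps generalize directly to any number of vectors. Here is a minimal numpy sketch (the function name and error handling are my own choices, not the book's):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors.

    A direct transcription of the procedure above: subtract from each
    vector its projections on the basis vectors already found, then
    rescale the remainder to unit length.
    """
    basis = []
    for v in vectors:
        v = v.astype(complex)
        for e in basis:
            v = v - e * (e.conj() @ v)   # remove the part along |e>
        norm = np.sqrt((v.conj() @ v).real)
        if np.isclose(norm, 0.0):
            # This is where linear dependence would stop the procedure.
            raise ValueError("vectors are linearly dependent")
        basis.append(v / norm)
    return basis

# Two nonparallel arrows in the plane as a quick check.
e1, e2 = gram_schmidt([np.array([3.0, 4.0]), np.array([2.0, -6.0])])
assert np.isclose(e1.conj() @ e2, 0.0)   # orthogonal
assert np.isclose(e1.conj() @ e1, 1.0)   # unit norm
```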
Exercise 1.3.2. Show how to go from the basis

|I⟩ = [3, 0, 0]ᵀ,  |II⟩ = [0, 1, 2]ᵀ,  |III⟩ = [0, 2, 5]ᵀ

to the orthonormal basis

|1⟩ = [1, 0, 0]ᵀ,  |2⟩ = [0, 1/√5, 2/√5]ᵀ,  |3⟩ = [0, −2/√5, 1/√5]ᵀ

When we first learn about dimensionality, we associate it with the number of perpendicular directions. In this chapter we defined it in terms of the maximum number of linearly independent vectors. The following theorem connects the two definitions.

Theorem 4. The dimensionality of a space equals n⊥, the maximum number of mutually orthogonal vectors in it.

To show this, first note that any mutually orthogonal set is also linearly independent. Suppose we had a linear combination of orthogonal vectors adding up to zero. By taking the dot product of both sides with any one member and using the orthogonality we can show that the coefficient multiplying that vector had to vanish. This can clearly be done for all the coefficients, showing the linear combination is trivial. Now n⊥ can only be equal to, greater than, or lesser than n, the dimensionality of the space. The Gram–Schmidt procedure eliminates the last case by explicit construction, while the linear independence of the perpendicular vectors rules out the penultimate option.

Schwarz and Triangle Inequalities
Two powerful theorems apply to any inner product space obeying our axioms:

Theorem 5. The Schwarz Inequality

|⟨V|W⟩| ≤ |V||W|   (1.3.15)

Theorem 6. The Triangle Inequality

|V + W| ≤ |V| + |W|   (1.3.16)

The proof of the first will be provided so you can get used to working with bras and kets. The second will be left as an exercise.

Before proving anything, note that the results are obviously true for arrows: the Schwarz inequality says that the dot product of two vectors cannot exceed the product of their lengths, and the triangle inequality says that the length of a sum cannot exceed the sum of the lengths. This is an example which illustrates the merits of thinking of abstract vectors as arrows and guessing what properties they might share with arrows. The proof will of course have to rely on just the axioms.

To prove the Schwarz inequality, consider the axiom ⟨Z|Z⟩ ≥ 0 applied to

|Z⟩ = |V⟩ − (⟨W|V⟩/|W|²)|W⟩   (1.3.17)

We get

⟨Z|Z⟩ = {⟨V| − ⟨W|V⟩*⟨W|/|W|²}{|V⟩ − ⟨W|V⟩|W⟩/|W|²}
      = ⟨V|V⟩ − ⟨W|V⟩⟨V|W⟩/|W|² − ⟨W|V⟩*⟨W|V⟩/|W|² + ⟨W|V⟩*⟨W|V⟩⟨W|W⟩/|W|⁴ ≥ 0   (1.3.18)

where we have used the antilinearity of the inner product with respect to the bra. Using

⟨W|V⟩* = ⟨V|W⟩

we find

⟨V|V⟩ ≥ ⟨W|V⟩⟨V|W⟩/|W|²   (1.3.19)

Cross-multiplying by |W|² and taking square roots, the result follows.

Exercise 1.3.3. When will this inequality become an equality? Does this agree with your experience with arrows?

Exercise 1.3.4. Prove the triangle inequality starting with |V + W|². You must use Re⟨V|W⟩ ≤ |⟨V|W⟩| and the Schwarz inequality. Show that the final inequality becomes an equality only if |V⟩ = a|W⟩, where a is a real positive scalar.

1.4. Subspaces

Definition 11. Given a vector space V, a subset of its elements that form a vector space among themselves* is called a subspace. We will denote a particular subspace i of dimensionality nᵢ by Vᵢⁿⁱ.

* Vector addition and scalar multiplication are defined the same way in the subspace as in V.
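As a brief numerical aside, the Schwarz and triangle inequalities above hold for any complex components, which is easy to spot-check on randomly chosen vectors (a check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=4) + 1j * rng.normal(size=4)
W = rng.normal(size=4) + 1j * rng.normal(size=4)

def norm(X):
    # |X| = <X|X>^(1/2); the inner product of a vector with itself is real.
    return np.sqrt((X.conj() @ X).real)

# Schwarz inequality (1.3.15): |<V|W>| <= |V||W|.
assert abs(V.conj() @ W) <= norm(V) * norm(W)

# Triangle inequality (1.3.16): |V + W| <= |V| + |W|.
assert norm(V + W) <= norm(V) + norm(W)
```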
Example 1.4.1. In the space V³(R), the following are some examples of subspaces: (a) all vectors along the x axis, the space V¹ₓ; (b) all vectors along the y axis, the space V¹ᵧ; (c) all vectors in the x-y plane, the space V²ₓᵧ. Notice that all subspaces contain the null vector and that each vector is accompanied by its inverse, to fulfill the axioms for a vector space. Thus the set of all vectors along the positive x axis alone do not form a vector space. □

Definition 12. Given two subspaces Vᵢⁿⁱ and Vⱼᵐʲ, we define their sum Vᵢⁿⁱ ⊕ Vⱼᵐʲ = Vₖᵐᵏ as the set containing (1) all elements of Vᵢⁿⁱ, (2) all elements of Vⱼᵐʲ, (3) all possible linear combinations of the above. But for the elements (3), closure would be lost.

Example 1.4.2. If, for example, V¹ₓ ⊕ V¹ᵧ contained only vectors along the x and y axes, we could, by adding two elements, one from each direction, generate one along neither. On the other hand, if we also included all linear combinations, we would get the correct answer, V¹ₓ ⊕ V¹ᵧ = V²ₓᵧ. □

Exercise 1.4.1. In a space Vⁿ, prove that the set of all vectors {|V⊥⟩, |V⊥'⟩, ...}, orthogonal to any |V⟩ ≠ |0⟩, form a subspace Vⁿ⁻¹.

Exercise 1.4.2. Suppose V₁ⁿ¹ and V₂ⁿ² are two subspaces such that any element of V₁ is orthogonal to any element of V₂. Show that the dimensionality of V₁ ⊕ V₂ is n₁ + n₂. (Hint: Theorem 4.)

1.5. Linear Operators

An operator Ω is an instruction for transforming any given vector |V⟩ into
another, |V'⟩. The action of the operator is represented as follows:

Ω|V⟩ = |V'⟩   (1.5.1)

One says that the operator Ω has transformed the ket |V⟩ into the ket |V'⟩. We will restrict our attention throughout to operators Ω that do not take us out of the vector space, i.e., if |V⟩ is an element of a space V, so is |V'⟩ = Ω|V⟩.

Operators can also act on bras:

⟨V'|Ω = ⟨V''|   (1.5.2)

We will only be concerned with linear operators, i.e., ones that obey the following rules:

Ωα|Vᵢ⟩ = αΩ|Vᵢ⟩   (1.5.3a)

Ω{α|Vᵢ⟩ + β|Vⱼ⟩} = αΩ|Vᵢ⟩ + βΩ|Vⱼ⟩   (1.5.3b)

⟨Vᵢ|αΩ = ⟨Vᵢ|Ωα   (1.5.4a)

{⟨Vᵢ|α + ⟨Vⱼ|β}Ω = α⟨Vᵢ|Ω + β⟨Vⱼ|Ω   (1.5.4b)
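In a basis a linear operator is just a matrix (a point developed in Section 1.6), and the linearity rules reduce to matrix algebra. A sketch with an arbitrarily chosen operator and vectors:

```python
import numpy as np

# A linear operator on C^2, represented in a basis by a (made-up) matrix.
Omega = np.array([[0, 1], [1, 0]], dtype=complex)   # swaps the two components

V1 = np.array([1 + 1j, 2 - 1j])
V2 = np.array([0 + 2j, 3 + 0j])
a, b = 2 - 1j, 1 + 3j

# Linearity, Eq. (1.5.3b): Omega(a|V1> + b|V2>) = a Omega|V1> + b Omega|V2>.
lhs = Omega @ (a * V1 + b * V2)
rhs = a * (Omega @ V1) + b * (Omega @ V2)
assert np.allclose(lhs, rhs)
```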
Fig. 1.3. Action of the operator R(½πi). Note that R[|2⟩ + |3⟩] = R|2⟩ + R|3⟩, as expected of a linear operator. (We often refer to R(½πi) as R if no confusion is likely.)

Example 1.5.1. The simplest operator is the identity operator, I, which carries the instruction:

I → Leave the vector alone!

Thus,

I|V⟩ = |V⟩  for all kets |V⟩   (1.5.5)

⟨V|I = ⟨V|  for all bras ⟨V|   (1.5.6)

We next pass on to a more interesting operator on V³(R):

R(½πi) → Rotate vector by ½π about the unit vector i

[More generally, R(θ) stands for a rotation by an angle θ = |θ| about the axis parallel to the unit vector θ̂ = θ/θ.] Let us consider the action of this operator on the three unit vectors i, j, and k, which in our notation will be denoted by |1⟩, |2⟩, and |3⟩ (see Fig. 1.3). From the figure it is clear that

R(½πi)|1⟩ = |1⟩   (1.5.7a)
R(½πi)|2⟩ = |3⟩   (1.5.7b)
R(½πi)|3⟩ = −|2⟩   (1.5.7c)

Clearly R(½πi) is linear. For instance, it is clear from the same figure that

R[|2⟩ + |3⟩] = R|2⟩ + R|3⟩  □

The nice feature of linear operators is that once their action on the basis vectors is known, their action on any vector in the space is determined. If

Ω|i⟩ = |i'⟩

for a basis |1⟩, |2⟩, ..., |n⟩ in Vⁿ, then for any |V⟩ = Σᵢ vᵢ|i⟩,

Ω|V⟩ = Ω Σᵢ vᵢ|i⟩ = Σᵢ vᵢΩ|i⟩ = Σᵢ vᵢ|i'⟩   (1.5.8)

This is the case in the example Ω = R(½πi). If

|V⟩ = v₁|1⟩ + v₂|2⟩ + v₃|3⟩

is any vector, then

R|V⟩ = v₁R|1⟩ + v₂R|2⟩ + v₃R|3⟩ = v₁|1⟩ + v₂|3⟩ − v₃|2⟩

The product of two operators stands for the instruction that the instructions corresponding to the two operators be carried out in sequence:

ΛΩ|V⟩ = Λ{Ω|V⟩} = Λ|ΩV⟩   (1.5.9)

where |ΩV⟩ is the ket obtained by the action of Ω on |V⟩. The order of the operators in a product is very important: in general,

ΩΛ − ΛΩ ≡ [Ω, Λ]

called the commutator of Ω and Λ, isn't zero. For example, R(½πi) and R(½πj) do not commute, i.e., their commutator is nonzero.
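This noncommutativity is easy to exhibit with the matrices representing the two rotations (a sketch; the matrix representations anticipate Section 1.6):

```python
import numpy as np

# Rotation by pi/2 about the x axis: j -> k, k -> -j (columns are the images
# of the basis vectors).
Rx = np.array([[1, 0, 0],
               [0, 0, -1],
               [0, 1, 0]], dtype=float)

# Rotation by pi/2 about the y axis: k -> i, i -> -k.
Ry = np.array([[0, 0, 1],
               [0, 1, 0],
               [-1, 0, 0]], dtype=float)

# The commutator [Rx, Ry] = RxRy - RyRx is nonzero: the order of the
# two instructions matters.
commutator = Rx @ Ry - Ry @ Rx
assert not np.allclose(commutator, 0)
```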
Two useful identities involving commutators are

[Ω, ΛΘ] = Λ[Ω, Θ] + [Ω, Λ]Θ   (1.5.10)

[ΛΩ, Θ] = Λ[Ω, Θ] + [Λ, Θ]Ω   (1.5.11)

Notice that, apart from the emphasis on ordering, these rules resemble the chain rule in calculus for the derivative of a product.

The inverse of Ω, denoted by Ω⁻¹, satisfies‡

ΩΩ⁻¹ = Ω⁻¹Ω = I   (1.5.12)

Not every operator has an inverse. The condition for the existence of the inverse is given in Appendix A.1. The operator R(½πi) has an inverse: it is R(−½πi). The inverse of a product of operators is the product of the inverses in reverse:

(ΩΛ)⁻¹ = Λ⁻¹Ω⁻¹   (1.5.13)

for only then do we have

(ΩΛ)(ΩΛ)⁻¹ = (ΩΛ)(Λ⁻¹Ω⁻¹) = Ω(ΛΛ⁻¹)Ω⁻¹ = ΩΩ⁻¹ = I

1.6. Matrix Elements of Linear Operators

We are now accustomed to the idea of an abstract vector being represented in a basis by an n-tuple of numbers, called its components, in terms of which all vector

‡ In Vⁿ(C) with n finite, ΩΩ⁻¹ = I implies Ω⁻¹Ω = I and vice versa. Prove this using the ideas introduced toward the end of Theorem A.1.1, Appendix A.1.
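Equations (1.5.12) and (1.5.13) can be illustrated in matrix form (a numerical sketch; the matrix A below is arbitrary, and the transpose relation holds because rotation matrices are orthogonal):

```python
import numpy as np

# R(pi/2 about x) and its inverse R(-pi/2 about x), as matrices.
Rx = np.array([[1, 0, 0],
               [0, 0, -1],
               [0, 1, 0]], dtype=float)
Rx_inv = np.linalg.inv(Rx)

# Eq. (1.5.12): the inverse undoes the rotation, Rx Rx^{-1} = I.
assert np.allclose(Rx @ Rx_inv, np.eye(3))
# For a real rotation matrix the inverse is simply the transpose.
assert np.allclose(Rx_inv, Rx.T)

# Eq. (1.5.13): the inverse of a product is the product of inverses in reverse.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])   # any invertible matrix will do
assert np.allclose(np.linalg.inv(Rx @ A), np.linalg.inv(A) @ Rx_inv)
```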