Likewise

$$|W\rangle \leftrightarrow \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} \quad \text{in this basis} \tag{1.2.8}$$

The inner product $\langle V|W\rangle$ is given by the matrix product of the transpose conjugate of the column vector representing $|V\rangle$ with the column vector representing $|W\rangle$:

$$\langle V|W\rangle = [v_1^*, v_2^*, \ldots, v_n^*]\begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} \tag{1.2.9}$$

1.3. Dual Spaces and the Dirac Notation

There is a technical point here. The inner product is a number we are trying to generate from two kets $|V\rangle$ and $|W\rangle$, which are both represented by column vectors in some basis. Now there is no way to make a number out of two columns by direct matrix multiplication, but there is a way to make a number by matrix multiplication of a row times a column. Our trick for producing a number out of two columns has been to associate a unique row vector with one column (its transpose conjugate) and form its matrix product with the column representing the other. This has the feature that the answer depends on which of the two vectors we convert to the row, the two choices ($\langle V|W\rangle$ and $\langle W|V\rangle$) leading to answers related by complex conjugation.

But one can also take the following alternate view. Column vectors are concrete manifestations of an abstract vector $|V\rangle$, or ket, in a basis. We can also work backward and go from the column vectors to the abstract kets. But then it is similarly possible to work backward and associate with each row vector an abstract object $\langle W|$, called bra-$W$. Now we can name the bras as we want, but let us do the following. Associated with every ket $|V\rangle$ is a column vector. Let us take its adjoint, or transpose conjugate, and form a row vector. The abstract bra associated with this will bear the same label, i.e., it will be called $\langle V|$. Thus there are two vector spaces, the space of kets and a dual space of bras, with a ket for every bra and vice versa (the components being related by the adjoint operation).
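As a purely illustrative check of Eq. (1.2.9) and of the conjugation relation between the two orderings, here is a small pure-Python sketch; the helper name `inner` is ours, not the text's:

```python
# Illustrative sketch (our own helper, not from the text): the inner
# product <V|W> as a conjugated row times a column, Eq. (1.2.9).

def inner(v, w):
    """<V|W> = sum_i v_i* w_i: conjugate the first (bra) vector's components."""
    return sum(vi.conjugate() * wi for vi, wi in zip(v, w))

v = [1 + 2j, 3 - 1j]
w = [2 + 0j, 1 + 1j]

vw = inner(v, w)
wv = inner(w, v)
# The two orderings are related by complex conjugation: <W|V> = <V|W>*.
assert wv == vw.conjugate()
```

Which vector gets conjugated is exactly the asymmetry discussed above: the bra's row vector is the transpose conjugate of the ket's column.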
Inner products are really defined only between bras and kets and hence between elements of two distinct but related vector spaces. There is a basis of vectors $|i\rangle$ for expanding kets and a similar basis $\langle i|$ for expanding bras. The basis ket $|i\rangle$ is represented, in the basis we are using, by a column vector with all zeros except for a 1 in the $i$th row, while the basis bra $\langle i|$ is a row vector with all zeros except for a 1 in the $i$th column.

All this may be summarized as follows:

$$|V\rangle \leftrightarrow \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} \leftrightarrow [v_1^*, v_2^*, \ldots, v_n^*] \leftrightarrow \langle V| \tag{1.3.1}$$

where $\leftrightarrow$ means "within a basis."

There is, however, nothing wrong with the first viewpoint of associating a scalar product with a pair of columns or kets (making no reference to another dual space) and living with the asymmetry between the first and second vector in the inner product (which one to transpose conjugate).

If you found the above discussion heavy going, you can temporarily ignore it. The only thing you must remember is that in the case of a general nonarrow vector space:

• Vectors can still be assigned components in some orthonormal basis, just as with arrows, but these may be complex.
• The inner product of any two vectors is given in terms of these components by Eq. (1.2.5). This product obeys all the axioms.

Expansion of Vectors in an Orthonormal Basis

Suppose we wish to expand a vector $|V\rangle$ in an orthonormal basis. To find the components that go into the expansion we proceed as follows. We take the dot product of both sides of the assumed expansion with $|j\rangle$ (or $\langle j|$ if you are a purist):

$$|V\rangle = \sum_i v_i |i\rangle \tag{1.3.2}$$

$$\langle j|V\rangle = \sum_i v_i \langle j|i\rangle \tag{1.3.3}$$

$$= v_j \tag{1.3.4}$$

i.e., to find the $j$th component of a vector we take the dot product with the $j$th unit vector, exactly as with arrows. Using this result we may write

$$|V\rangle = \sum_i |i\rangle\langle i|V\rangle \tag{1.3.5}$$

Let us make sure the basis vectors look as they should. If we set $|V\rangle = |j\rangle$ in Eq. (1.3.5), we find the correct answer: the $i$th component of the $j$th basis vector is $\delta_{ij}$.
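Eqs. (1.3.3)–(1.3.5) can be verified numerically. The sketch below is our own encoding (the helpers `inner` and `basis_ket` are hypothetical, not from the text): it extracts each component as $\langle j|V\rangle$ and rebuilds $|V\rangle$ from $\sum_i |i\rangle\langle i|V\rangle$.

```python
# Sketch of Eqs. (1.3.3)-(1.3.5) in pure Python; helper names are ours.

def inner(v, w):
    """<V|W> = sum_i v_i* w_i."""
    return sum(vi.conjugate() * wi for vi, wi in zip(v, w))

def basis_ket(j, n):
    """|j>: all zeros except a 1 in the j-th slot (0-indexed here)."""
    return [1 + 0j if i == j else 0j for i in range(n)]

V = [2 - 1j, 0 + 3j, 5 + 0j]
n = len(V)

# v_j = <j|V> recovers each component ...
components = [inner(basis_ket(j, n), V) for j in range(n)]
assert components == V

# ... and summing |i><i|V> rebuilds |V>.
rebuilt = [sum(inner(basis_ket(i, n), V) * basis_ket(i, n)[k] for i in range(n))
           for k in range(n)]
assert rebuilt == V
```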
Thus, for example, the column representing basis vector number 4 will have a 1 in the 4th row and zeros everywhere else. The abstract relation

$$|V\rangle = \sum_i v_i |i\rangle \tag{1.3.6}$$

becomes in this basis

$$\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = v_1 \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} + \cdots + v_n \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} \tag{1.3.7}$$

Adjoint Operation

We have seen that we may pass from the column representing a ket to the row representing the corresponding bra by the adjoint operation, i.e., transpose conjugation. Let us now ask: if $\langle V|$ is the bra corresponding to the ket $|V\rangle$, what bra corresponds to $a|V\rangle$, where $a$ is some scalar? By going to any basis it is readily found that

$$a|V\rangle \rightarrow \begin{bmatrix} a v_1 \\ a v_2 \\ \vdots \\ a v_n \end{bmatrix} \rightarrow [a^* v_1^*, a^* v_2^*, \ldots, a^* v_n^*] \rightarrow \langle V|a^* \tag{1.3.8}$$

It is customary to write $a|V\rangle$ as $|aV\rangle$ and the corresponding bra as $\langle aV|$. What we have found is that

$$\langle aV| = \langle V|a^* \tag{1.3.9}$$

Since the relation between bras and kets is linear, we can say that if we have an equation among kets such as

$$a|V\rangle = b|W\rangle + c|Z\rangle + \cdots \tag{1.3.10}$$

this implies another one among the corresponding bras:

$$\langle V|a^* = \langle W|b^* + \langle Z|c^* + \cdots \tag{1.3.11}$$

The two equations above are said to be adjoints of each other. Just as any equation involving complex numbers implies another obtained by taking the complex conjugates of both sides, an equation between (bras) kets implies another one between (kets) bras. If you think in a basis, you will see that this follows simply from the fact that if two columns are equal, so are their transpose conjugates.

Here is the rule for taking the adjoint:

To take the adjoint of a linear equation relating kets (bras), replace every ket (bra) by its bra (ket) and complex conjugate all coefficients.

We can extend this rule as follows. Suppose we have an expansion for a vector:

$$|V\rangle = \sum_i v_i |i\rangle \tag{1.3.12}$$

in terms of basis vectors.
The adjoint is

$$\langle V| = \sum_i \langle i| v_i^*$$

Recalling that $v_i = \langle i|V\rangle$ and $v_i^* = \langle V|i\rangle$, it follows that the adjoint of

$$|V\rangle = \sum_{i=1}^{n} |i\rangle\langle i|V\rangle \tag{1.3.13}$$

is

$$\langle V| = \sum_{i=1}^{n} \langle V|i\rangle\langle i| \tag{1.3.14}$$

from which comes the rule:

To take the adjoint of an equation involving bras and kets and coefficients, reverse the order of all factors, exchanging bras and kets and complex conjugating all coefficients.

Gram–Schmidt Theorem

Let us now take up the Gram–Schmidt procedure for converting a linearly independent basis into an orthonormal one. The basic idea can be seen by a simple example. Imagine the two-dimensional space of arrows in a plane. Let us take two nonparallel vectors, which qualify as a basis. To get an orthonormal basis out of these, we do the following:

• Rescale the first by its own length, so it becomes a unit vector. This will be the first basis vector.
• Subtract from the second vector its projection along the first, leaving behind only the part perpendicular to the first. (Such a part will remain since by assumption the vectors are nonparallel.)
• Rescale the leftover piece by its own length. We now have the second basis vector; it is orthogonal to the first and of unit length.

This simple example tells the whole story behind this procedure, which will now be discussed in general terms in the Dirac notation.

Let $|I\rangle, |II\rangle, \ldots$ be a linearly independent basis. The first vector of the orthonormal basis will be

$$|1\rangle = \frac{|I\rangle}{|I|} \quad \text{where } |I| = \sqrt{\langle I|I\rangle}$$

Clearly

$$\langle 1|1\rangle = \frac{\langle I|I\rangle}{|I|^2} = 1$$

As for the second vector in the basis, consider

$$|2'\rangle = |II\rangle - |1\rangle\langle 1|II\rangle$$

which is $|II\rangle$ minus the part pointing along the first unit vector. (Think of the arrow example as you read on.) Not surprisingly it is orthogonal to the latter:

$$\langle 1|2'\rangle = \langle 1|II\rangle - \langle 1|1\rangle\langle 1|II\rangle = 0$$

We now divide $|2'\rangle$ by its norm to get $|2\rangle$, which will be orthogonal to the first and normalized to unity. Finally, consider

$$|3'\rangle = |III\rangle - |1\rangle\langle 1|III\rangle - |2\rangle\langle 2|III\rangle$$

which is orthogonal to both $|1\rangle$ and $|2\rangle$.
Dividing by its norm we get $|3\rangle$, the third member of the orthonormal basis. There is nothing new in the generation of the rest of the basis.

Where did we use the linear independence of the original basis? What if we had started with a linearly dependent basis? Then at some point a vector like $|2'\rangle$ or $|3'\rangle$ would have vanished, putting a stop to the whole procedure. On the other hand, linear independence will assure us that such a thing will never happen, since it would amount to having a nontrivial linear combination of linearly independent vectors that adds up to the null vector. (Go back to the equations for $|2'\rangle$ or $|3'\rangle$ and satisfy yourself that these are linear combinations of the old basis vectors.)

Exercise 1.3.1. Form an orthonormal basis in two dimensions starting with $\vec{A} = 3\vec{i} + 4\vec{j}$ and $\vec{B} = 2\vec{i} - 6\vec{j}$. Can you generate another orthonormal basis starting with these two vectors? If so, produce another.

Exercise 1.3.2. Show how to go from the basis

$$|I\rangle = \begin{bmatrix} 3 \\ 0 \\ 0 \end{bmatrix} \quad |II\rangle = \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix} \quad |III\rangle = \begin{bmatrix} 0 \\ 2 \\ 5 \end{bmatrix}$$

to the orthonormal basis

$$|1\rangle = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \quad |2\rangle = \begin{bmatrix} 0 \\ 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix} \quad |3\rangle = \begin{bmatrix} 0 \\ -2/\sqrt{5} \\ 1/\sqrt{5} \end{bmatrix}$$

When we first learn about dimensionality, we associate it with the number of perpendicular directions. In this chapter we defined it in terms of the maximum number of linearly independent vectors. The following theorem connects the two definitions.

Theorem 4. The dimensionality of a space equals $n_\perp$, the maximum number of mutually orthogonal vectors in it.

To show this, first note that any mutually orthogonal set is also linearly independent. Suppose we had a linear combination of orthogonal vectors adding up to zero. By taking the dot product of both sides with any one member and using the orthogonality, we can show that the coefficient multiplying that vector had to vanish. This can clearly be done for all the coefficients, showing the linear combination is trivial. Now $n_\perp$ can only be equal to, greater than, or lesser than $n$, the dimensionality of the space.
The Gram–Schmidt procedure eliminates the last case by explicit construction, while the linear independence of the perpendicular vectors rules out the penultimate option.

Schwarz and Triangle Inequalities

Two powerful theorems apply to any inner product space obeying our axioms:

Theorem 5. The Schwarz Inequality

$$|\langle V|W\rangle| \le |V|\,|W| \tag{1.3.15}$$

Theorem 6. The Triangle Inequality

$$|V + W| \le |V| + |W| \tag{1.3.16}$$

The proof of the first will be provided so you can get used to working with bras and kets. The second will be left as an exercise.

Before proving anything, note that the results are obviously true for arrows: the Schwarz inequality says that the dot product of two vectors cannot exceed the product of their lengths, and the triangle inequality says that the length of a sum cannot exceed the sum of the lengths. This is an example which illustrates the merits of thinking of abstract vectors as arrows and guessing what properties they might share with arrows. The proof will of course have to rely on just the axioms.

To prove the Schwarz inequality, consider the axiom $\langle Z|Z\rangle \ge 0$ applied to

$$|Z\rangle = |V\rangle - \frac{\langle W|V\rangle}{|W|^2}|W\rangle \tag{1.3.17}$$

We get

$$\langle Z|Z\rangle = \left\langle V - \frac{\langle W|V\rangle}{|W|^2}W \,\middle|\, V - \frac{\langle W|V\rangle}{|W|^2}W \right\rangle$$

$$= \langle V|V\rangle - \frac{\langle W|V\rangle\langle V|W\rangle}{|W|^2} - \frac{\langle W|V\rangle^*\langle W|V\rangle}{|W|^2} + \frac{\langle W|V\rangle^*\langle W|V\rangle\langle W|W\rangle}{|W|^4} \ge 0 \tag{1.3.18}$$

where we have used the antilinearity of the inner product with respect to the bra. Using $\langle W|V\rangle^* = \langle V|W\rangle$, we find

$$\langle V|V\rangle \ge \frac{\langle W|V\rangle\langle V|W\rangle}{|W|^2} \tag{1.3.19}$$

Cross-multiplying by $|W|^2$ and taking square roots, the result follows.

Exercise 1.3.3. When will this inequality become an equality? Does this agree with your experience with arrows?

Exercise 1.3.4. Prove the triangle inequality starting with $|V + W|^2$. You must use $\mathrm{Re}\,\langle V|W\rangle \le |\langle V|W\rangle|$ and the Schwarz inequality. Show that the final inequality becomes an equality only if $|V\rangle = a|W\rangle$, where $a$ is a real positive scalar.

1.4. Subspaces

Definition 15. Given a vector space $\mathbb{V}$, a subset of its elements that form a vector space among themselves is called a subspace.
We will denote a particular subspace $i$ of dimensionality $n_i$ by $\mathbb{V}_i^{n_i}$. Vector addition and scalar multiplication are defined the same way in the subspace as in $\mathbb{V}$.

Example 1.4.1. In the space $\mathbb{V}^3(R)$, the following are some examples of subspaces: (a) all vectors along the $x$ axis, the space $\mathbb{V}_x^1$; (b) all vectors along the $y$ axis, the space $\mathbb{V}_y^1$; (c) all vectors in the $x$–$y$ plane, the space $\mathbb{V}_{xy}^2$. Notice that all subspaces contain the null vector and that each vector is accompanied by its inverse, to fulfill the axioms for a vector space. Thus the set of all vectors along the positive $x$ axis alone does not form a vector space. □

Definition 16. Given two subspaces $\mathbb{V}_i^{n_i}$ and $\mathbb{V}_j^{m_j}$, we define their sum $\mathbb{V}_i^{n_i} \oplus \mathbb{V}_j^{m_j} = \mathbb{V}_k^{m_k}$ as the set containing (1) all elements of $\mathbb{V}_i^{n_i}$, (2) all elements of $\mathbb{V}_j^{m_j}$, (3) all possible linear combinations of the above. But for the elements (3), closure would be lost.

Example 1.4.2. If, for example, $\mathbb{V}_x^1 \oplus \mathbb{V}_y^1$ contained only vectors along the $x$ and $y$ axes, we could, by adding two elements, one from each direction, generate one along neither. On the other hand, if we also included all linear combinations, we would get the correct answer, $\mathbb{V}_x^1 \oplus \mathbb{V}_y^1 = \mathbb{V}_{xy}^2$. □

Exercise 1.4.1.* In a space $\mathbb{V}^n$, prove that the set of all vectors $\{|V_\perp^1\rangle, |V_\perp^2\rangle, \ldots\}$, orthogonal to any $|V\rangle \ne |0\rangle$, form a subspace $\mathbb{V}^{n-1}$.

Exercise 1.4.2. Suppose $\mathbb{V}_1^{n_1}$ and $\mathbb{V}_2^{n_2}$ are two subspaces such that any element of $\mathbb{V}_1$ is orthogonal to any element of $\mathbb{V}_2$. Show that the dimensionality of $\mathbb{V}_1 \oplus \mathbb{V}_2$ is $n_1 + n_2$. (Hint: Theorem 4.)

1.5. Linear Operators

An operator $\Omega$ is an instruction for transforming any given vector $|V\rangle$ into another, $|V'\rangle$. The action of the operator is represented as follows:

$$\Omega|V\rangle = |V'\rangle \tag{1.5.1}$$

One says that the operator $\Omega$ has transformed the ket $|V\rangle$ into the ket $|V'\rangle$. We will restrict our attention throughout to operators $\Omega$ that do not take us out of the vector space, i.e., if $|V\rangle$ is an element of a space $\mathbb{V}$, so is $|V'\rangle = \Omega|V\rangle$.
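As a minimal sketch of Eq. (1.5.1), in our own encoding (not the text's): kets are component lists, and an operator is a function that returns another list of the same length, so it does not take us out of the space. The operator `omega` below is an arbitrary example of ours.

```python
# Minimal encoding of Omega|V> = |V'> (Eq. 1.5.1); representation is ours:
# kets are component lists, an operator is a function list -> list.

def omega(v):
    """A sample operator on V^3: doubles the first component, swaps the rest."""
    return [2 * v[0], v[2], v[1]]

V = [1.0, 2.0, 3.0]
V_prime = omega(V)
assert len(V_prime) == len(V)   # Omega does not take us out of the space
```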
Operators can also act on bras:

$$\langle V'|\Omega = \langle V''| \tag{1.5.2}$$

We will only be concerned with linear operators, i.e., ones that obey the following rules:

$$\Omega\{\alpha|V_i\rangle\} = \alpha\Omega|V_i\rangle \tag{1.5.3a}$$

$$\Omega\{\alpha|V_i\rangle + \beta|V_j\rangle\} = \alpha\Omega|V_i\rangle + \beta\Omega|V_j\rangle \tag{1.5.3b}$$

$$\{\langle V_i|\alpha\}\Omega = \langle V_i|\Omega\,\alpha \tag{1.5.4a}$$

$$\{\langle V_i|\alpha + \langle V_j|\beta\}\Omega = \alpha\langle V_i|\Omega + \beta\langle V_j|\Omega \tag{1.5.4b}$$

[Figure 1.3: Action of the operator $R(\tfrac{1}{2}\pi\mathbf{i})$ on the unit vectors. Note that $R[|2\rangle + |3\rangle] = R|2\rangle + R|3\rangle$, as expected of a linear operator.]

Example 1.5.1. The simplest operator is the identity operator, $I$, which carries the instruction:

$I \rightarrow$ Leave the vector alone!

Thus,

$$I|V\rangle = |V\rangle \quad \text{for all kets } |V\rangle \tag{1.5.5}$$

$$\langle V|I = \langle V| \quad \text{for all bras } \langle V| \tag{1.5.6}$$

We next pass on to a more interesting operator on $\mathbb{V}^3(R)$:

$R(\tfrac{1}{2}\pi\mathbf{i}) \rightarrow$ Rotate vector by $\tfrac{1}{2}\pi$ about the unit vector $\mathbf{i}$

[More generally, $R(\boldsymbol{\theta})$ stands for a rotation by an angle $\theta = |\boldsymbol{\theta}|$ about the axis parallel to the unit vector $\hat{\boldsymbol{\theta}} = \boldsymbol{\theta}/\theta$.]

Let us consider the action of this operator on the three unit vectors $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$, which in our notation will be denoted by $|1\rangle$, $|2\rangle$, and $|3\rangle$ (see Fig. 1.3). From the figure it is clear that

$$R(\tfrac{1}{2}\pi\mathbf{i})|1\rangle = |1\rangle \tag{1.5.7a}$$

$$R(\tfrac{1}{2}\pi\mathbf{i})|2\rangle = |3\rangle \tag{1.5.7b}$$

$$R(\tfrac{1}{2}\pi\mathbf{i})|3\rangle = -|2\rangle \tag{1.5.7c}$$

Clearly $R(\tfrac{1}{2}\pi\mathbf{i})$ is linear. For instance, it is clear from the same figure that $R[|1\rangle + |3\rangle] = R|1\rangle + R|3\rangle$. (We will often refer to $R(\tfrac{1}{2}\pi\mathbf{i})$ as $R$ if no confusion is likely.) □

The nice feature of linear operators is that once their action on the basis vectors is known, their action on any vector in the space is determined. If $\Omega|i\rangle = |i'\rangle$ for a basis $|1\rangle, |2\rangle, \ldots, |n\rangle$ in $\mathbb{V}^n$, then for any $|V\rangle = \sum_i v_i|i\rangle$

$$\Omega|V\rangle = \Omega\sum_i v_i|i\rangle = \sum_i v_i\Omega|i\rangle = \sum_i v_i|i'\rangle \tag{1.5.8}$$

This is the case in the example $\Omega = R(\tfrac{1}{2}\pi\mathbf{i})$. If $|V\rangle = v_1|1\rangle + v_2|2\rangle + v_3|3\rangle$ is any vector, then

$$R|V\rangle = v_1 R|1\rangle + v_2 R|2\rangle + v_3 R|3\rangle = v_1|1\rangle + v_2|3\rangle - v_3|2\rangle$$

The product of two operators stands for the instruction that the instructions corresponding to the two operators be carried out in sequence:

$$\Lambda\Omega|V\rangle = \Lambda\{\Omega|V\rangle\} = \Lambda|\Omega V\rangle \tag{1.5.9}$$

where $|\Omega V\rangle$ is the ket obtained by the action of $\Omega$ on $|V\rangle$.
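The rotation example of Eqs. (1.5.7)–(1.5.8) can be encoded directly. In this sketch (ours, not the text's), `R` acts on component triples $(v_1, v_2, v_3)$ and, by linearity, sends them to $(v_1, -v_3, v_2)$:

```python
# Sketch of Eqs. (1.5.7)-(1.5.8): R(pi/2 i) defined by its action on the
# basis kets |1>,|2>,|3> (i.e., i, j, k) and extended linearly. Ours.

def R(v):
    """R(pi/2 i): |1> -> |1>, |2> -> |3>, |3> -> -|2>."""
    v1, v2, v3 = v
    # new |2>-component is -v3, new |3>-component is v2
    return [v1, -v3, v2]

assert R([1, 0, 0]) == [1, 0, 0]    # R|1> = |1>
assert R([0, 1, 0]) == [0, 0, 1]    # R|2> = |3>
assert R([0, 0, 1]) == [0, -1, 0]   # R|3> = -|2>

# Linearity: R(v1|1> + v2|2> + v3|3>) = v1|1> + v2|3> - v3|2>
v = [2, 5, -7]
assert R(v) == [2, 7, 5]
```

Once the images of the three basis kets are fixed, the action on every vector follows, which is exactly the point of Eq. (1.5.8).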
The order of the operators in a product is very important: in general,

$$\Omega\Lambda - \Lambda\Omega \equiv [\Omega, \Lambda]$$

called the commutator of $\Omega$ and $\Lambda$, isn't zero. For example, $R(\tfrac{1}{2}\pi\mathbf{i})$ and $R(\tfrac{1}{2}\pi\mathbf{j})$ do not commute, i.e., their commutator is nonzero.

Two useful identities involving commutators are

$$[\Omega, \Lambda\theta] = \Lambda[\Omega, \theta] + [\Omega, \Lambda]\theta \tag{1.5.10}$$

$$[\Lambda\Omega, \theta] = \Lambda[\Omega, \theta] + [\Lambda, \theta]\Omega \tag{1.5.11}$$

Notice that apart from the emphasis on ordering, these rules resemble the chain rule in calculus for the derivative of a product.

The inverse of $\Omega$, denoted by $\Omega^{-1}$, satisfies‡

$$\Omega\Omega^{-1} = \Omega^{-1}\Omega = I \tag{1.5.12}$$

Not every operator has an inverse. The condition for the existence of the inverse is given in Appendix A.1. The operator $R(\tfrac{1}{2}\pi\mathbf{i})$ has an inverse: it is $R(-\tfrac{1}{2}\pi\mathbf{i})$. The inverse of a product of operators is the product of the inverses in reverse:

$$(\Omega\Lambda)^{-1} = \Lambda^{-1}\Omega^{-1} \tag{1.5.13}$$

for only then do we have

$$(\Omega\Lambda)(\Omega\Lambda)^{-1} = (\Omega\Lambda)(\Lambda^{-1}\Omega^{-1}) = \Omega\Lambda\Lambda^{-1}\Omega^{-1} = \Omega\Omega^{-1} = I$$

‡ In $\mathbb{V}^n(C)$ with $n$ finite, $\Omega\Omega^{-1} = I \Leftrightarrow \Omega^{-1}\Omega = I$. Prove this using the ideas introduced toward the end of Theorem A.1.1, Appendix A.1.

1.6. Matrix Elements of Linear Operators

We are now accustomed to the idea of an abstract vector being represented in a basis by an $n$-tuple of numbers, called its components, in terms of which all vector …
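The commutator identity (1.5.10) and the product-inverse rule (1.5.13) can be spot-checked numerically with matrices standing in for operators. This is a sanity check under our own choice of 2×2 examples, not a proof:

```python
# Numeric sketch of Eq. (1.5.10) and Eq. (1.5.13) with 2x2 matrices as
# stand-in operators; all names and matrices are our own examples.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def comm(A, B):
    """[A, B] = AB - BA."""
    return sub(matmul(A, B), matmul(B, A))

Om = [[1, 2], [3, 4]]   # Omega
La = [[0, 1], [1, 1]]   # Lambda
Th = [[2, 0], [1, 3]]   # theta

# [Omega, Lambda*theta] = Lambda[Omega, theta] + [Omega, Lambda]theta
lhs = comm(Om, matmul(La, Th))
rhs = add(matmul(La, comm(Om, Th)), matmul(comm(Om, La), Th))
assert lhs == rhs

# (Omega*Lambda)^-1 = Lambda^-1 * Omega^-1, via explicit 2x2 inverses
def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

left = inv2(matmul(Om, La))
right = matmul(inv2(La), inv2(Om))
assert all(abs(left[i][j] - right[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```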