IV. Matrix Mechanics

We now turn to a pragmatic aspect of QM: given a particular problem, how can we translate the Dirac notation into a form that might be interpretable by a computer? As hinted at previously, we do this by mapping Dirac notation onto a complex vector space. The operations in Hilbert space then reduce to linear algebra that can easily be done on a computer. This formalism is completely equivalent to the Dirac notation we've already covered; in different contexts, one will prove more useful than the other.

1. States can be represented by vectors

First, we begin with an arbitrary complete orthonormal basis of states \{ |i\rangle \}. Then, we know that we can write any other state as

\[ |\psi\rangle = c_1 |1\rangle + c_2 |2\rangle + c_3 |3\rangle + \dots = \sum_i c_i |i\rangle . \]

How are these coefficients determined? Here, we follow a common trick and take the inner product with the j-th state:

\[ \langle j | \psi \rangle = \langle j | \left( \sum_i c_i |i\rangle \right) = \sum_i c_i \langle j | i \rangle = \sum_i c_i \, \delta_{ij} . \]

Since the Kronecker delta is only non-zero when i = j, the sum collapses to one term:

\[ \langle j | \psi \rangle = c_j . \]

The simple conclusion of these equations is that knowing the coefficients is equivalent to knowing the wavefunction |\psi\rangle. If we know |\psi\rangle, we can determine the coefficients through the second relation. Vice versa, if we know the coefficients, we can reconstruct |\psi\rangle by performing the sum \sum_i c_i |i\rangle. Thus, if we fix this arbitrary basis, we can throw away all the basis states and just keep track of the coefficients of the ket state:

\[ |\psi\rangle \;\rightarrow\; \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \end{pmatrix}_{\{|i\rangle\}} \]

In harmony with the intuitive arguments made previously, here we associate the ket states with column vectors. Notice the small subscript \{|i\rangle\}, which reminds us that this vector of coefficients represents |\psi\rangle in the \{|i\rangle\} basis. If we were really careful, we would keep this subscript at all times; however, in practice we will typically know what basis we are working in, and the subscript will be dropped.

How do we represent the corresponding bra state \langle\psi| as a vector? Well, we know that

\[ \langle \psi | = \big( |\psi\rangle \big)^\dagger = \Big( \sum_i c_i |i\rangle \Big)^\dagger = \sum_i c_i^* \langle i | . \]

Now, as noted before, we expect to associate bra states with row vectors, and the above relation shows us that the elements of this row vector should be the complex conjugates of the column vector:

\[ \langle \psi | \;\rightarrow\; \begin{pmatrix} c_1^* & c_2^* & c_3^* & \dots \end{pmatrix}_{\{|i\rangle\}} \]

Noting that bra states and ket states were defined to be Hermitian conjugates of one another, we see that Hermitian conjugation in state space corresponds to taking the complex conjugate transpose of the coefficient vector.

Now, the vector notation is totally equivalent to Dirac notation; thus, anything we compute in one representation should be exactly the same if computed in the other. As one illustration of this point, it is useful to check that this association of states with vectors preserves the inner product:

\[ \langle \psi' | \psi \rangle = \sum_{i,j} c_j'^* c_i \, \langle j | i \rangle = \sum_{i,j} c_j'^* c_i \, \delta_{ij} = \sum_i c_i'^* c_i , \]

which is exactly the row vector for \langle\psi'| multiplied into the column vector for |\psi\rangle.
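As a concrete illustration of this bookkeeping (not part of the original notes), here is a minimal NumPy sketch: it extracts the coefficients c_j = \langle j|\psi\rangle by projecting onto an orthonormal basis, rebuilds the ket from them, forms the bra as the complex conjugate of the coefficient vector, and checks that the "row times column" product reproduces \sum_i c_i'^* c_i. The particular basis, state vectors, and variable names are illustrative assumptions.

```python
import numpy as np

dim = 3

# Orthonormal basis {|i>}: here simply the standard basis e_1, e_2, e_3,
# stored as the columns of the identity matrix.
basis = np.eye(dim, dtype=complex)

# An arbitrary ket |psi>, written as a vector of coefficients in this basis.
psi = np.array([1.0 + 2.0j, 0.5 - 1.0j, 3.0 + 0.0j])

# Coefficients via the projection trick c_j = <j|psi>:
# <j| is the complex conjugate of the basis column |j>.
coeffs = np.array([basis[:, j].conj() @ psi for j in range(dim)])

# Reconstruct |psi> from the coefficients: |psi> = sum_i c_i |i>.
psi_rebuilt = sum(coeffs[i] * basis[:, i] for i in range(dim))
assert np.allclose(psi_rebuilt, psi)

# The bra <psi| corresponds to the complex-conjugate transpose of the column
# vector; with 1-D NumPy arrays the conjugate alone plays that role.
bra_psi = psi.conj()

# Inner product with a second state |psi'>: both routes agree.
psi_prime = np.array([0.0 + 1.0j, 2.0 + 0.0j, -1.0 + 0.5j])
inner_sum = np.sum(psi_prime.conj() * psi)   # sum_i c'_i* c_i
inner_vector = psi_prime.conj() @ psi        # "row vector times column vector"
assert np.isclose(inner_sum, inner_vector)
print("<psi'|psi> =", inner_vector)
```

Because the standard basis is used here, the projection step returns the coefficients unchanged; with any other orthonormal basis (e.g. a set of orthonormal columns from a unitary matrix) the same lines would perform a genuine change of representation.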