3 MODEL PROBLEMS
3.1 Introduction:
The number of problems that can be solved exactly in quantum mechanics is very
small. All of the techniques that are used to realistically describe the properties of
molecules (the second part of the book) are approximations, for example. Why bother,
then, with the problems that can be solved exactly at all? There are two main reasons to work
through this chapter despite this discouraging initial information. The first is that the
simpler problems which can be exactly solved help bring to life the somewhat abstract
general concepts introduced in the last chapter. Second, the exactly solvable problems
sometimes serve as useful simplified idealizations of crucial aspects of complicated
realistic problems. We have already seen, for example, how our first model problem, the particle in a
box, can model the delocalized π electrons in conjugated molecules.
The first model problem treated here is the two-level system. This is by far the
easiest, and is a useful idealization of many realistic problems in which two energy levels
lie close together and can interact, while all other energy levels are much lower or
higher. The two-level system also serves to illustrate most basic quantum concepts. The
second model problem of this chapter is the harmonic oscillator, which is a tremendously
important model in quantum mechanics. It provides the basis for describing the
vibrations of the atoms in molecules and thus connects to infrared spectroscopy. It also
allows us to introduce creation and destruction operators. The third model problem is the
rigid rotor, which is relevant to modeling rotations of molecules and also connects to
microwave spectroscopy, in which rotational transitions are measured. The quantum
treatment of angular momentum is introduced to describe the rigid rotor. Finally, the last
model problem examined is the hydrogen atom, which is the most difficult, but whose
solutions provide many of the foundations for describing electrons in molecules in the
second part of the book.
3.2 The two-level system.
There are innumerable examples of two-level systems in quantum mechanics.
One goal to bear in mind for the future is to be able to recognize problem types that can
be modeled correctly as two-level systems. To get you started, here are a few examples:
(a) Any spin-½ system in a magnetic field. This is the physical system we shall
mainly focus on, because electrons are particles with spin ½ and therefore this case is very
appropriate for us. (Insert picture here)
(b) A simple model for vibrational tunneling, involving pairs of energy levels which are close
to each other compared to all other levels. An example is the vibrational motion of the
ammonia molecule, NH3, which looks a bit like an umbrella. Just as umbrellas can invert
in the wind, it is possible for NH3 to tunnel from one well to another, as depicted in the
diagram below. (Insert picture here)
(c) Frontier orbital interactions between molecules. If a molecule with some
propensity to donate electrons (Lewis base character), such as ammonia, approaches a molecule with some propensity to accept electrons (Lewis acid character), such as borane,
BH3, then this interaction can be described as a two-level system in which the highest
occupied orbital (HOMO) of the Lewis base interacts with the lowest unoccupied orbital
(LUMO) of the Lewis acid. (Insert picture here)
(d) The chemical bond between a proton (H+) and a hydrogen atom can be
viewed as a two-level system in which the 1s energy levels of the atom and the ion
interact. This can be a very strong interaction, which gives rise to the chemical bond.
(e) The fundamental unit of quantum computing. While classical computers are
based on the "bit", which is either "on" (1) or "off" (0), intense research is underway to
explore the implications of a computing architecture in which the classical "bit" is
replaced by a quantum "qubit", which is nothing other than a two-level system. Since a
quantum two-level system can encode superposition states rather than just "on" or "off",
it is potentially much more powerful. Practical realizations still belong to the future,
however.
3.2.1 Energy levels and time evolution
Let us denote the two eigenvalues and eigenstates of a two-level system described by the
Hamiltonian \hat{H}_0 as:

\hat{H}_0 |\psi_1\rangle = E_1 |\psi_1\rangle, \qquad \hat{H}_0 |\psi_2\rangle = E_2 |\psi_2\rangle     (3.1)
Since the eigenstates are those of the Hamiltonian, which is Hermitian, we know from
the previous chapter that they are orthonormal:

\langle\psi_1|\psi_1\rangle = \langle\psi_2|\psi_2\rangle = 1, \qquad \langle\psi_1|\psi_2\rangle = 0     (3.2)
Alternatively, if we rewrite the Hamiltonian in the basis of its eigenstates, it is a 2-by-2
matrix which is diagonal:

\mathbf{H}_0 = \begin{pmatrix} E_1 & 0 \\ 0 & E_2 \end{pmatrix}     (3.3)
A general state vector for a two-level system would be a superposition of the two levels:

|\Psi\rangle = c_1 |\psi_1\rangle + c_2 |\psi_2\rangle     (3.4)

In the two-dimensional basis, this is simply a column vector of length 2. Of course, a
measurement of the energy for the system in the general state (3.4) must yield one of the
two possible values of the energy, E1 or E2.
The time evolution of the system can be evaluated quite straightforwardly from
the general result given in the previous chapter, Eq. (XXX). For instance, if the system is
described by Eq. (3.4) at time t = 0, then at a later time t, it is given by:

|\Psi(t)\rangle = c_1(t)|\psi_1\rangle + c_2(t)|\psi_2\rangle = c_1(0)\, e^{-iE_1 t/\hbar} |\psi_1\rangle + c_2(0)\, e^{-iE_2 t/\hbar} |\psi_2\rangle     (3.5)
It should be fairly evident that the probability of being in either of the two states does not
depend on time. For instance:

\mathrm{Pr}_1(t) = |\langle\psi_1|\Psi(t)\rangle|^2 = |c_1(t)|^2 = c_1^*(0)\, e^{+iE_1 t/\hbar}\; c_1(0)\, e^{-iE_1 t/\hbar} = |c_1(0)|^2     (3.6)

which is time-independent. Interestingly, however, if we measure some other observable,
such as position, which does not commute with the Hamiltonian, then the probabilities do
depend on time.
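As a quick numerical illustration of Eq. (3.6), the following sketch (in units where ħ = 1, and with made-up illustrative values for E1 and the initial amplitude; these numbers are assumptions, not values from the text) confirms that the time-dependent phase factor cancels out of the probability:

```python
import cmath

# Illustrative values (arbitrary units, hbar = 1): an energy and an
# initial amplitude for state 1 of the two-level system.
E1 = 2.0
c1_0 = 0.6 + 0.3j

def c1(t):
    """Amplitude of state 1 at time t, following Eq. (3.5) with hbar = 1."""
    return c1_0 * cmath.exp(-1j * E1 * t)

# The probability |c1(t)|^2 stays equal to |c1(0)|^2 at all times,
# because the phase factor has unit modulus.
probs = [abs(c1(t)) ** 2 for t in (0.0, 0.7, 3.1, 42.0)]
```

All entries of `probs` agree with |c1(0)|² to machine precision, exactly as Eq. (3.6) predicts.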
As a simple example, consider the particle in a box of length L in the
superposition state:

|\Psi(t)\rangle = \tfrac{1}{\sqrt{2}}\, e^{-iE_{n=1}t/\hbar} |\phi_{n=1}\rangle + \tfrac{1}{\sqrt{2}}\, e^{-iE_{n=2}t/\hbar} |\phi_{n=2}\rangle = \tfrac{1}{\sqrt{2}}\, e^{-iE_{n=1}t/\hbar} \left\{ |\phi_{n=1}\rangle + e^{-i(E_{n=2}-E_{n=1})t/\hbar} |\phi_{n=2}\rangle \right\}     (3.7)

Inspecting the term inside the curly brackets, we can see that the phase factor on the
second term depends on time: at t = 0 it is equal to 1, but at t = \pi\hbar/(E_2 - E_1) it is equal
to −1. This does not affect the probability of being in either state but, as shown in the
Figure below, this change in the constructive/destructive interference of the two
components of the superposition does affect the time-dependent probability distribution,
which is given by the square of the wavefunction amplitude at time t, \mathrm{Pr}(x,t) = |\Psi(x,t)|^2.

Figure 3-1: (a) Plot of the average value of position as a function of time for a
particle in a box initially in the superposition state given above. (b) Probability
distribution for the system at time t = 0. (c) Probability distribution for the system at
the time t = \pi\hbar/(E_2 - E_1).

3.2.2 Energy levels and eigenstates of the perturbed two-level system
Up until now we have been considering the two-level system in the basis of its
eigenstates, which, as discussed above, made the time evolution of the state vector very
easy to write down, and meant that the probabilities of observing a given energy did not
change with time. The key next step is to consider the effect of changing the
Hamiltonian so that the system is altered (or, as we commonly say, perturbed) from its
original form \hat{H}_0 to a new form:

\hat{H} = \hat{H}_0 + \hat{W}     (3.8)
The additional term \hat{W} is called the perturbation. Physically, the most important
example to think of in this and the following section is that the unperturbed system, \hat{H}_0,
corresponds to the energy levels of an idealized molecule (say its ground and its excited state), and the perturbation corresponds to an electric or magnetic field applied to probe
the system in a laboratory experiment, such as shining light on a sample. The
conclusions we draw in this section will give you some basic insights into the way in
which the properties of quantum systems can be affected by radiation. As you may
already know from earlier classes, the result will be that the perturbation can induce
transitions between the energy levels of the system (in other words, it changes the
probabilities of measuring E1 or E2 from their initial values).
The first order of business is to be specific about the form of the perturbation, \hat{W},
which in matrix form must have an off-diagonal term that couples the unperturbed
eigenstates. Diagonal terms would only shift the energy levels from their unperturbed
values without altering the eigenstates. Therefore it is perfectly sufficient for our present
purposes to choose:

\mathbf{W} = \begin{pmatrix} 0 & W \\ W^* & 0 \end{pmatrix}     (3.9)
Can you see why the (1,2) and (2,1) elements are chosen to be complex conjugates of
each other? It has to do with the fact that the energy must always be represented by a
Hermitian operator; see Eq. XXX of the previous chapter. The total Hamiltonian is
now:

\mathbf{H} = \mathbf{H}_0 + \mathbf{W} = \begin{pmatrix} E_1 & W \\ W^* & E_2 \end{pmatrix}     (3.10)
Since this matrix is not diagonal, it is clear that the eigenstates can no longer be |\psi_1\rangle and |\psi_2\rangle, just as E1 and E2 can no longer be the eigenvalues. Therefore an initial state |\psi_1\rangle
will evolve with time, and the probability of measuring the system in the other state |\psi_2\rangle
at later times may now be nonzero. This is what is meant by the perturbation inducing
transitions between the energy levels of the unperturbed system.
Let us turn now to the problem of calculating the perturbed energy levels and
eigenstates. These can be solved for exactly with only moderate algebraic effort in the
case of the eigenvalues, which we'll do explicitly. The problem is to set the secular
determinant equal to zero:

\det(\mathbf{H} - \lambda \mathbf{1}) = \begin{vmatrix} E_1 - \lambda & W \\ W^* & E_2 - \lambda \end{vmatrix} = (E_1 - \lambda)(E_2 - \lambda) - |W|^2 = 0     (3.11)
This is a quadratic equation in the unknown eigenvalues, λ, which can either be solved by
brute force using the quadratic formula, or by expansion and rearrangement as follows:

\lambda^2 - (E_1 + E_2)\lambda + \left[ \tfrac{1}{4}(E_1 + E_2)^2 - \tfrac{1}{4}(E_1 + E_2)^2 \right] + E_1 E_2 - |W|^2 = 0     (3.12)

where we have added and subtracted \tfrac{1}{4}(E_1 + E_2)^2 in order to complete the square, giving:

\left[ \lambda - \tfrac{1}{2}(E_1 + E_2) \right]^2 - \tfrac{1}{4}(E_1 - E_2)^2 - |W|^2 = 0     (3.13)
Let us now define the average unperturbed energy, \bar{E}, and the offset, Δ, of the two
individual unperturbed energies from that average (taking E1 as the upper level),
as:

\bar{E} = \tfrac{1}{2}(E_1 + E_2), \qquad \Delta = \tfrac{1}{2}(E_1 - E_2)     (3.14)

With these definitions, the two perturbed eigenvalues, E_+ and E_-, can be written as:

E_\pm = \bar{E} \pm \sqrt{\Delta^2 + |W|^2}     (3.15)

Figure 3-2: Plot of the eigenvalues of the perturbed two-level system, E+ and E−, as
a function of the separation Δ that characterizes the unperturbed levels, for a fixed
value of the perturbation strength, W. The dashed diagonal lines show the
unperturbed energies, E1 and E2. It is evident that the energy levels change
substantially due to W in the center of the plot, while they change only slightly due
to W at the edges. Evidently the edges correspond to weak perturbations while the
center corresponds to a strong perturbation.
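The closed form (3.15) is easy to sanity-check numerically. The short sketch below (plain Python, with made-up values for E1, E2, and a real coupling W; these numbers are assumptions for illustration only) verifies that both E+ and E− are roots of the secular equation (3.11):

```python
import math

# Illustrative numbers (arbitrary units): unperturbed levels, with E1 the
# upper level, and a real coupling W. Assumed values, not from the text.
E1, E2, W = 3.0, 1.0, 0.8

Ebar = 0.5 * (E1 + E2)                        # average unperturbed energy, Eq. (3.14)
Delta = 0.5 * (E1 - E2)                       # offset from the average, Eq. (3.14)
root = math.sqrt(Delta**2 + W**2)
E_plus = Ebar + root                          # Eq. (3.15)
E_minus = Ebar - root

def secular(lam):
    """Left-hand side of the secular equation (3.11); zero at an eigenvalue."""
    return (E1 - lam) * (E2 - lam) - W**2
```

Both `secular(E_plus)` and `secular(E_minus)` vanish to machine precision, and the two roots sum to E1 + E2, as the trace of the matrix (3.10) requires.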
The next step, now that the perturbed eigenvalues are available, is to solve explicitly for the corresponding perturbed eigenvectors,
|\psi_+\rangle and |\psi_-\rangle, as mixtures of the two unperturbed eigenvectors, |\psi_1\rangle and |\psi_2\rangle.
While this is relatively straightforward algebra,
it is a little tedious, so here we'll just present the working form that is finally obtained for
|\psi_+\rangle and |\psi_-\rangle:

|\psi_+\rangle = \cos\tfrac{\theta}{2}\, e^{-i\varphi/2}\, |\psi_1\rangle + \sin\tfrac{\theta}{2}\, e^{+i\varphi/2}\, |\psi_2\rangle     (3.16)

|\psi_-\rangle = -\sin\tfrac{\theta}{2}\, e^{-i\varphi/2}\, |\psi_1\rangle + \cos\tfrac{\theta}{2}\, e^{+i\varphi/2}\, |\psi_2\rangle     (3.17)

The definition of the two angles, θ and φ, used in the equations above is:

\tan\theta = |W| / \Delta     (3.18)

W = |W|\, e^{i\varphi}     (3.19)
It is evident that all terms involving φ can be conveniently dropped if the matrix element
describing the perturbation is real, while the terms involving θ are critically dependent on
the relative sizes of the perturbation and the initial splitting of the energy levels, Δ.
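The mixing-angle form of the eigenvectors can likewise be checked numerically. In this sketch (assumed illustrative numbers, a real W so that φ = 0, and Δ taken as half the splitting with E1 the upper level), we confirm that the vectors of Eqs. (3.16) and (3.17) really are eigenvectors of the matrix (3.10) with the eigenvalues (3.15):

```python
import math

# Assumed illustrative values (arbitrary units), real W so phi = 0.
E1, E2, W = 3.0, 1.0, 0.8
Ebar, Delta = 0.5 * (E1 + E2), 0.5 * (E1 - E2)
theta = math.atan2(W, Delta)                  # tan(theta) = W / Delta, Eq. (3.18)
root = math.sqrt(Delta**2 + W**2)
E_plus, E_minus = Ebar + root, Ebar - root    # Eq. (3.15)

H = [[E1, W], [W, E2]]                        # Eq. (3.10) with real W

psi_plus = [math.cos(theta / 2), math.sin(theta / 2)]     # Eq. (3.16), phi = 0
psi_minus = [-math.sin(theta / 2), math.cos(theta / 2)]   # Eq. (3.17), phi = 0

def residual(vec, energy):
    """Largest component of (H - energy*1) applied to vec; ~0 for a true eigenpair."""
    return max(abs(H[i][0] * vec[0] + H[i][1] * vec[1] - energy * vec[i])
               for i in range(2))
```

Both residuals vanish to machine precision, confirming that the half-angle construction diagonalizes the perturbed Hamiltonian.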
Given the perturbed energies and the perturbed eigenfunctions, there are two
limiting regimes of interest, as seen from the behavior of the plot shown in the Figure
above. We shall discuss these in turn.
i) Well-separated unperturbed levels relative to perturbation strength: \Delta \gg |W|.
In this case, we can consider W to be a small perturbation relative to the original energy
splitting Δ, since |W|/\Delta \ll 1, and simplify our expressions for the perturbed eigenvalues
and eigenvectors accordingly. Physically we expect this to be the regime in which the
unperturbed levels are only weakly mixed together by the perturbation. The discriminant
in Eq. (3.15) above can be expanded in a Maclaurin series as follows:

\sqrt{\Delta^2 + |W|^2} = \Delta \sqrt{1 + |W|^2/\Delta^2} \approx \Delta \left[ 1 + \tfrac{1}{2}\, |W|^2/\Delta^2 + \ldots \right]     (3.20)
Therefore the perturbed energy levels in this limit become:

E_\pm = \bar{E} \pm \Delta \left[ 1 + \frac{|W|^2}{2\Delta^2} + \ldots \right]     (3.21)
which shows that while |W|/\Delta \ll 1 is already small, the effect on the energies is even
smaller, going as |W|^2/\Delta^2. The unperturbed levels essentially keep their original
identity, as can also be seen by examining the eigenvectors in this limit, which
corresponds to \tan\theta = |W|/\Delta \to 0, and therefore \tan\theta \approx \theta. Taking the perturbation
matrix element as real for simplicity (φ = 0), the perturbed level |\psi_+\rangle now becomes:

|\psi_+\rangle \approx |\psi_1\rangle + \frac{W}{2\Delta}\, |\psi_2\rangle     (3.22)

with a similar expression for |\psi_-\rangle. The key point is that this perturbed state originates
from the unperturbed level |\psi_1\rangle, and essentially keeps that character with a small amount
of mixing in of the other (for instance excited) state. This is basically the limit that is
relevant to spectroscopy involving weak fields (the usual laboratory case). By contrast:
ii) Unperturbed levels close together relative to perturbation strength: \Delta \ll |W|.
In this case we must consider W to be a large perturbation relative to the original energy
splitting Δ, so that the discriminant in Eq. (3.15) should now be expanded as:

\sqrt{\Delta^2 + |W|^2} = |W| \sqrt{1 + \Delta^2/|W|^2} \approx |W| \left[ 1 + \tfrac{1}{2}\, \Delta^2/|W|^2 + \ldots \right]     (3.23)

The perturbed energies in this limit become:

E_\pm = \bar{E} \pm |W| \left[ 1 + \frac{\Delta^2}{2|W|^2} + \ldots \right]     (3.24)
This is qualitatively different from the weak limit in Eq. (3.21) above, because the changes
in the energy levels are now proportional to the first power of |W|, rather than the second
power. Likewise the expression for the perturbed levels is greatly different, because we
are now in the limit where \tan\theta = |W|/\Delta \to \infty and thus \theta \to \pi/2. Therefore, again
considering φ = 0 for simplicity, the perturbed level |\psi_+\rangle now becomes:

|\psi_+\rangle \approx \tfrac{1}{\sqrt{2}}\, |\psi_1\rangle + \tfrac{1}{\sqrt{2}}\, |\psi_2\rangle     (3.25)

This perturbed level corresponds to equal mixing of the two unperturbed states; in other
words it no longer bears any preferential similarity to either of the unperturbed levels.
This is surely the signature of a strong perturbation! Physically, cases of this kind are
found in resonance interactions in valence bond theory (for instance the equal coupling of
the two Kekulé structures of benzene) or in effects like the Jahn-Teller distortion.

3.2.3 Time evolution of the perturbed two-level system
Having studied the perturbed energy levels and eigenvectors in detail in the previous
section, the remaining issue that we shall look at is how the state vector evolves as a
function of time after the perturbation is switched on at time t = 0, when the system is in
the first unperturbed eigenstate:

|\Psi(t=0)\rangle = |\psi_1\rangle     (3.26)

In other words, Eq. (3.26) is our initial condition, and then the Hamiltonian is modified to
be Eq. (3.8), so that |\psi_1\rangle is no longer an eigenstate. We want to see how the probability
of finding the system in the second unperturbed level, |\psi_2\rangle, which is initially zero,
develops as a function of time. This will enable us to see concretely how transitions may
be induced between the unperturbed levels by the perturbation.
To propagate the system in time, it is necessary to express the initial state vector,
Eq. (3.26), in terms of the eigenstates of the system in the presence of the perturbation.
Those eigenstates are of course |\psi_+\rangle and |\psi_-\rangle as given by Eqs. (3.16) and (3.17) of the
previous section. For simplicity we shall assume that the matrix elements of the
perturbation are real, so that φ = 0. It is then straightforward to show that:

|\psi_1\rangle = \cos\tfrac{\theta}{2}\, |\psi_+\rangle - \sin\tfrac{\theta}{2}\, |\psi_-\rangle     (3.27)
This is our re-expressed |\Psi(t=0)\rangle, which can now be directly propagated in time:

|\Psi(t)\rangle = \cos\tfrac{\theta}{2}\, \exp\!\left(-\frac{iE_+ t}{\hbar}\right) |\psi_+\rangle - \sin\tfrac{\theta}{2}\, \exp\!\left(-\frac{iE_- t}{\hbar}\right) |\psi_-\rangle     (3.28)

In turn, we can substitute the definitions of |\psi_+\rangle and |\psi_-\rangle from Eqs. (3.16) and (3.17)
(with our simplifying assumption that φ = 0) back into Eq. (3.28), so that finally the time-evolved state vector can be given directly in terms of the original unperturbed states:

|\Psi(t)\rangle = \left[ \cos^2\tfrac{\theta}{2}\, e^{-iE_+ t/\hbar} + \sin^2\tfrac{\theta}{2}\, e^{-iE_- t/\hbar} \right] |\psi_1\rangle + \left[ \tfrac{1}{2}\sin\theta\, e^{-iE_+ t/\hbar} - \tfrac{1}{2}\sin\theta\, e^{-iE_- t/\hbar} \right] |\psi_2\rangle     (3.29)

The second term in this equation represents the amplitude for making a transition from the
unperturbed level |\psi_1\rangle to |\psi_2\rangle under the influence of the perturbation W.
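As a concrete check, one can also integrate the time-dependent Schrödinger equation for the amplitudes, iħ dc/dt = Hc, step by step, and compare the occupation of ψ2 with the closed form of Eq. (3.30) below. The following sketch (ħ = 1, made-up illustrative numbers, real W, and a fourth-order Runge-Kutta time stepper; all numerical choices here are assumptions) does exactly that:

```python
import math

# Assumed illustrative values (arbitrary units, hbar = 1), real coupling W.
E1, E2, W = 3.0, 1.0, 0.8
H = [[E1, W], [W, E2]]                       # perturbed Hamiltonian, Eq. (3.10)
Delta = 0.5 * (E1 - E2)
Omega = math.sqrt(Delta**2 + W**2)           # half the perturbed splitting, (E+ - E-)/2
sin2theta = (W / Omega) ** 2                 # sin^2(theta), since tan(theta) = W/Delta

def deriv(c):
    """Right-hand side of i dc/dt = H c, i.e. dc/dt = -i H c (hbar = 1)."""
    return [-1j * (H[0][0] * c[0] + H[0][1] * c[1]),
            -1j * (H[1][0] * c[0] + H[1][1] * c[1])]

def rk4_step(c, dt):
    k1 = deriv(c)
    k2 = deriv([c[i] + 0.5 * dt * k1[i] for i in range(2)])
    k3 = deriv([c[i] + 0.5 * dt * k2[i] for i in range(2)])
    k4 = deriv([c[i] + dt * k3[i] for i in range(2)])
    return [c[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

c = [1.0 + 0j, 0j]                           # initial condition: psi_1, Eq. (3.26)
t, dt = 0.0, 0.001
max_err, max_p2 = 0.0, 0.0
while t < 3.0:
    c = rk4_step(c, dt)
    t += dt
    p2 = abs(c[1]) ** 2                      # occupation of the unperturbed level psi_2
    max_p2 = max(max_p2, p2)
    exact = sin2theta * math.sin(Omega * t) ** 2
    max_err = max(max_err, abs(p2 - exact))
```

The numerically propagated occupation tracks the closed form to well below 1e-6 over several Rabi periods, and the largest occupation reached agrees with the predicted maximum sin²θ.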
As usual, the probability of finding the system in the excited level, |\psi_2\rangle, is given
by projection with the bra corresponding to eigenstate |\psi_2\rangle:

\mathrm{Pr}_2(t) = |\langle\psi_2|\Psi(t)\rangle|^2 = \tfrac{1}{4}\sin^2\theta \left| \exp\!\left(-\frac{iE_+ t}{\hbar}\right) - \exp\!\left(-\frac{iE_- t}{\hbar}\right) \right|^2 = \sin^2\theta\, \sin^2\!\frac{\Omega t}{\hbar}     (3.30)

where \Omega = \tfrac{1}{2}(E_+ - E_-) = \sqrt{\Delta^2 + |W|^2}. This result quantifies the transition probability associated with the perturbation as a
function of time, and it is plotted in the figure below. The maximum transition
probability is given by \sin^2\theta, and we have already seen that the largest value of θ is
π/2, in the limit of the infinitely strong perturbation. Thus in this limiting case of
extremely strong perturbations, complete transfer of population from the initial state |\psi_1\rangle
to |\psi_2\rangle is possible. These are sometimes called Rabi oscillations. In the case of a
weak perturbation, by contrast, there is only a relatively small probability of transition.

Figure 3-3: Plot of the transition probability between the two unperturbed levels of
the two-level system as a function of time, due to the perturbation W. It is evident
that the transition probability oscillates in time, with a maximum value that is
controlled by the strength of the perturbation, through \sin^2\theta. The transition probability oscillates with a period set by the splitting E_+ - E_- of the perturbed levels.

3.3 The harmonic oscillator.
The harmonic oscillator is simply a particle in a well formed by a quadratic potential.
If the particle has mass µ, and the quadratic potential has force constant k, then the
classical total energy is:

E = \tfrac{1}{2}\mu \dot{x}^2 + \tfrac{1}{2} k x^2     (3.31)

It is convenient to switch to a mass-weighted coordinate, defined as:

q = \mu^{1/2} x     (3.32)
This allows us to write the total energy as:

E = \tfrac{1}{2}\dot{q}^2 + \tfrac{1}{2}\frac{k}{\mu} q^2 = \tfrac{1}{2}\dot{q}^2 + \tfrac{1}{2}\omega^2 q^2     (3.33)

where we have introduced an angular frequency, \omega^2 = k/\mu, for the reason that the
classical equation of motion (Newton's equation) now reads \ddot{q} = -\omega^2 q (feel free to verify this for
yourself!), which is a very simple second-order differential equation with very simple
solutions (the functions whose second derivatives look like the negatives of
themselves):

q = A\cos\omega t + B\sin\omega t     (3.34)

These are simply small periodic oscillations with angular frequency ω, backwards and
forwards in the potential well, in terms of the mass-weighted coordinate. The significance
of ω should now be clear!

Figure 3-4: (a) Plot of the quadratic potential well corresponding to the harmonic
oscillator, and (b) the classical oscillations of the position of a particle in this
potential well with time.
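Eq. (3.34) is easy to verify numerically: integrate Newton's equation \ddot{q} = -\omega^2 q for the mass-weighted coordinate step by step and compare with A cos ωt + B sin ωt. A minimal sketch (velocity-Verlet integrator, with made-up values of ω, A, and B; these numbers are assumptions for illustration):

```python
import math

# Assumed illustrative values (arbitrary units).
omega, A, B = 2.0, 1.0, 0.5

# Initial conditions matching Eq. (3.34): q(0) = A, qdot(0) = B*omega.
q, v = A, B * omega
t, dt = 0.0, 1e-3
max_err = 0.0
while t < 5.0:
    a = -omega**2 * q                 # acceleration from the quadratic potential
    q += v * dt + 0.5 * a * dt**2     # velocity-Verlet position update
    a_new = -omega**2 * q
    v += 0.5 * (a + a_new) * dt       # velocity-Verlet velocity update
    t += dt
    exact = A * math.cos(omega * t) + B * math.sin(omega * t)
    max_err = max(max_err, abs(q - exact))
```

Over several oscillation periods the integrated trajectory stays within about 1e-5 of the analytic solution, confirming that Eq. (3.34) solves the equation of motion.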
We turn now to the problem of solving for the allowed energy levels of the quantum
harmonic oscillator. This problem is a distinct step up in trickiness from the other energy-level problems we have solved so far (the particle in a box, and the two-level system). The
most elegant approach is to use what are called "creation" and "destruction" operators,
which literally create and destroy quanta of energy, as we shall see. These operators will
be useful in other problems as well, including the treatment of angular momentum in the
rigid rotor and the hydrogen atom, and even the creation and destruction of photons (not
explicitly treated in this book) or electrons. It is therefore well worth the initial effort to
understand the remainder of this section, not just for the sake of the harmonic oscillator,
but also for getting "under the hood" of the properties of these intriguing creation and
destruction operators.
The first step is to set up the Hamiltonian in dimensionless coordinates so that
things are arranged as simply as possible before we begin the work of finding the
eigenvalues. The quantum Hamiltonian is initially obtained by the usual replacement of
momentum by its operator equivalent in Eq. (3.31):

\hat{H} = -\frac{\hbar^2}{2\mu}\frac{d^2}{dx^2} + \tfrac{1}{2}kx^2 = -\frac{\hbar^2}{2}\frac{d^2}{dq^2} + \tfrac{1}{2}\omega^2 q^2     (3.35)
Recalling the Planck relation, E = \hbar\omega, and observing that the harmonic oscillator has its
own frequency, it seems natural to define a dimensionless Hamiltonian,
\hat{h} = \hat{H}/(\hbar\omega), so that:

\hat{h} = \frac{\hat{H}}{\hbar\omega} = -\frac{\hbar}{2\omega}\frac{d^2}{dq^2} + \frac{\omega}{2\hbar}\, q^2     (3.36)
From the last term of the above equation, we see that there is a natural dimensionless
scale for position:

\xi = \sqrt{\frac{\omega}{\hbar}}\; q     (3.37)

The corresponding conjugate momentum operator (also dimensionless) is then:

\hat{\pi} = \frac{1}{i}\frac{d}{d\xi} = \frac{1}{i}\sqrt{\frac{\hbar}{\omega}}\frac{d}{dq}     (3.38)
The overall result is that the dimensionless Hamiltonian now takes the simplest possible
form:

\hat{h} = -\tfrac{1}{2}\frac{d^2}{d\xi^2} + \tfrac{1}{2}\hat{\xi}^2 = \tfrac{1}{2}\hat{\pi}^2 + \tfrac{1}{2}\hat{\xi}^2     (3.39)
To find the eigenvalues and eigenvectors of \hat{h}, we now define operators that will
turn out to behave as creation (\hat{a}^\dagger) and destruction (\hat{a}) operators; they are linear
combinations of position and momentum:

\hat{a} = \frac{1}{\sqrt{2}}\left( \hat{\xi} + i\hat{\pi} \right)     (3.40)

\hat{a}^\dagger = \frac{1}{\sqrt{2}}\left( \hat{\xi} - i\hat{\pi} \right)     (3.41)
The first important property of \hat{a} and \hat{a}^\dagger is that they obey a very simple commutation
relation:

\left[ \hat{a}, \hat{a}^\dagger \right] = \tfrac{1}{2}\left[ \left( \hat{\xi} + i\hat{\pi} \right)\left( \hat{\xi} - i\hat{\pi} \right) - \left( \hat{\xi} - i\hat{\pi} \right)\left( \hat{\xi} + i\hat{\pi} \right) \right] = i\left( \hat{\pi}\hat{\xi} - \hat{\xi}\hat{\pi} \right) = \left[ \frac{d}{d\xi}, \hat{\xi} \right] = 1     (3.42)
The next important property is that it is possible to rewrite the Hamiltonian, \hat{h}, quite
simply in terms of \hat{a} and \hat{a}^\dagger: we leave it as an exercise (see Problem XX) to invert
Eqs. (3.40) and (3.41) to express position and momentum in terms of \hat{a} and \hat{a}^\dagger,
substitute, and rearrange to show that:

\hat{h} = \hat{a}\hat{a}^\dagger - \tfrac{1}{2} = \hat{a}^\dagger\hat{a} + \tfrac{1}{2}     (3.43)

The stage is now set to solve for the eigenvalues of \hat{h} by solving the closely related
problem of finding the eigenvalues of \hat{a}^\dagger\hat{a}, which proceeds in four steps.
1) The eigenvalues of \hat{a}^\dagger\hat{a} cannot be negative. This fact can be readily established by
assuming that we have one of the as-yet-unknown eigenvectors, |\psi_v\rangle, with eigenvalue
w_v:

\hat{a}^\dagger\hat{a}\, |\psi_v\rangle = w_v\, |\psi_v\rangle     (3.44)

Project on the left with the bra corresponding to |\psi_v\rangle to obtain an expression for the
eigenvalue, and rearrange:

w_v = \langle \psi_v | \hat{a}^\dagger\hat{a} | \psi_v \rangle = \langle \hat{a}\psi_v | \hat{a}\psi_v \rangle \geq 0     (3.45)

Since the last expression is the scalar product of a ket with itself, the result cannot be
negative. The case of a zero eigenvalue of \hat{a}^\dagger\hat{a} is particularly interesting. In that case we
conclude from (3.45) that:

\hat{a}\, |\psi_{v=0}\rangle = 0     (3.46)

In this case we see that the operator \hat{a} completely destroys the eigenstate of \hat{a}^\dagger\hat{a},
which starts to justify why we call it a destruction operator.
2) If |\psi_v\rangle is an eigenket of \hat{a}^\dagger\hat{a}, then so too is the ket \hat{a}|\psi_v\rangle. This result
follows from application of the commutation relation (3.42), as follows:

\hat{a}^\dagger\hat{a} \left( \hat{a}|\psi_v\rangle \right) = \left( \hat{a}\hat{a}^\dagger - 1 \right) \hat{a}|\psi_v\rangle = \hat{a}\left( \hat{a}^\dagger\hat{a}\, |\psi_v\rangle \right) - \hat{a}|\psi_v\rangle = (w_v - 1)\, \hat{a}|\psi_v\rangle     (3.47)

This shows that application of the destruction operator, \hat{a}, to an eigenket of \hat{a}^\dagger\hat{a} leads to
a new eigenket in which one quantum has been destroyed. It is similarly straightforward
to show that \hat{a}^\dagger|\psi_v\rangle is also an eigenstate of \hat{a}^\dagger\hat{a}, but with its eigenvalue incremented:

\left( \hat{a}^\dagger\hat{a} \right) \hat{a}^\dagger |\psi_v\rangle = \hat{a}^\dagger \left( \hat{a}\hat{a}^\dagger \right) |\psi_v\rangle = \hat{a}^\dagger \left( \hat{a}^\dagger\hat{a} + 1 \right) |\psi_v\rangle = (w_v + 1)\, \hat{a}^\dagger |\psi_v\rangle     (3.48)

This is the sense in which \hat{a}^\dagger is a creation operator.
3) The eigenvalues of the operator \hat{a}^\dagger\hat{a} are the non-negative integers, including 0. We
know from Eq. (3.47) that acting with the destruction operator creates a new eigenstate
with an eigenvalue that is smaller by one. We also know from Eq. (3.45) that the
eigenvalues cannot be negative. To satisfy both of these criteria together requires us to
invoke the killer condition, (3.46), to end the series of eigenstates at exactly eigenvalue
zero. This means that every eigenvalue must be an integer, since otherwise repeated
lowering would never hit the killer condition. For example, consider an eigenvalue of 0.75:
action of the destruction operator on its eigenstate would create a new one with eigenvalue −0.25,
which contradicts our non-negativity condition. We therefore conclude that the allowed
eigenvalues must obey the following relation:

\hat{a}^\dagger\hat{a}\, |\psi_v\rangle = v\, |\psi_v\rangle, \qquad v = 0, 1, 2, \ldots     (3.49)

Since the operator \hat{a}^\dagger\hat{a} apparently counts how many quanta there are in one of its
eigenstates, it is sometimes called the number operator.
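A small matrix sketch makes the number operator concrete. The block below builds \hat{a} and \hat{a}^\dagger in a truncated number basis, using the standard normalized matrix elements a|v⟩ = √v |v−1⟩ (that normalization is not derived in the text, so treat it as an imported fact), and checks that \hat{a}^\dagger\hat{a} comes out diagonal with eigenvalues 0, 1, 2, ...:

```python
import math

N = 6  # truncation dimension of the number basis; an arbitrary choice for this sketch

# Destruction operator in the number basis: a|v> = sqrt(v)|v-1>,
# i.e. a[i][j] = sqrt(j) when i = j - 1 and zero otherwise.
a = [[math.sqrt(j) if i == j - 1 else 0.0 for j in range(N)] for i in range(N)]
adag = [[a[j][i] for j in range(N)] for i in range(N)]  # real matrix: adjoint = transpose

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

number = matmul(adag, a)     # a-dagger a: should be diag(0, 1, 2, ...), Eq. (3.49)
aad = matmul(a, adag)
# [a, a-dagger] should be the identity, Eq. (3.42); in a truncated basis the
# identity fails only in the very last diagonal entry (a truncation artifact).
comm = [[aad[i][j] - number[i][j] for j in range(N)] for i in range(N)]
```

The diagonal of `number` reads 0, 1, 2, ..., exactly the counting behavior that earns \hat{a}^\dagger\hat{a} its name.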
4) Given the eigenvalues of \hat{a}^\dagger\hat{a} in Eq. (3.49), we can obtain the eigenvalues of \hat{h}
and \hat{H}, since they are directly related:

\hat{H} = \hbar\omega\, \hat{h} = \hbar\omega \left( \hat{a}^\dagger\hat{a} + \tfrac{1}{2} \right)     (3.50)

Combining Eqs. (3.49) and (3.50) gives us the allowed energy eigenvalues of the
harmonic oscillator:

\hat{H}\, |\psi_v\rangle = \hbar\omega \left( \hat{a}^\dagger\hat{a} + \tfrac{1}{2} \right) |\psi_v\rangle = \hbar\omega \left( v + \tfrac{1}{2} \right) |\psi_v\rangle, \qquad v = 0, 1, 2, \ldots     (3.51)

These eigenvalues form an equally spaced ladder of levels ascending the potential well
from bottom to top. The lowest energy level is not zero, as demanded by Heisenberg's
uncertainty principle. These energy levels are plotted in the Figure below.

Figure 3-5: Plot of the quadratic potential well corresponding to the harmonic
oscillator, and within it the first three evenly spaced energy levels and the
corresponding eigenfunctions in the position representation.
Our next task is to find the eigenfunctions corresponding to these eigenvalues. It is
definitely easiest to begin with the lowest energy level, and solve for the function that
obeys the condition that it and it alone must satisfy, namely Eq. (3.46). Inserting the
definition of the destruction operator, (3.40), this condition becomes:

\hat{a}\, \psi_{v=0} = \left( \frac{d}{d\xi} + \xi \right) \psi_{v=0} = 0     (3.52)

The function whose first derivative satisfies the above equation can easily be shown to be
a so-called Gaussian function:

\psi_{v=0}(\xi) = N_{v=0} \exp\!\left( -\xi^2/2 \right)     (3.53)

where the normalization constant can be found from a table of integrals.
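Both properties of the ground state can be checked by finite differences. The sketch below (plain Python, with an unnormalized Gaussian; the step size and sample points are arbitrary choices) verifies that (d/dξ + ξ) annihilates exp(−ξ²/2), per Eq. (3.52), and that the dimensionless Hamiltonian of Eq. (3.39) returns the eigenvalue ½, consistent with v = 0 in Eq. (3.51):

```python
import math

def psi0(xi):
    """Unnormalized ground state of Eq. (3.53)."""
    return math.exp(-xi * xi / 2)

h_step = 1e-4

def d1(f, x):
    """Central-difference first derivative."""
    return (f(x + h_step) - f(x - h_step)) / (2 * h_step)

def d2(f, x):
    """Central-difference second derivative."""
    return (f(x + h_step) - 2 * f(x) + f(x - h_step)) / h_step**2

pts = [-1.5, -0.3, 0.0, 0.8, 2.0]   # arbitrary sample points on the xi axis

# Residual of the killer condition (d/dxi + xi) psi0 = 0, Eq. (3.52).
destroy_resid = max(abs(d1(psi0, x) + x * psi0(x)) for x in pts)

# Residual of h psi0 = (1/2) psi0 with h = -1/2 d^2/dxi^2 + 1/2 xi^2, Eq. (3.39).
h_resid = max(abs(-0.5 * d2(psi0, x) + 0.5 * x * x * psi0(x) - 0.5 * psi0(x))
              for x in pts)
```

Both residuals are at the level of the finite-difference error, confirming that the Gaussian is simultaneously killed by the destruction operator and is an eigenfunction of \hat{h} with eigenvalue ½.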
Having the lowest eigenvector available to us is a suitable basis for finding all the
others, because we have the handy creation operator available with which to generate
them, via \psi_{v+1} \propto \hat{a}^\dagger \psi_v. The next one will be given by:

\psi_{v=1}(\xi) \propto \hat{a}^\dagger \psi_{v=0}(\xi) = \left( -\frac{d}{d\xi} + \xi \right) \psi_{v=0}(\xi)     (3.54)

Subsequent ones can be generated in exactly the same way, and give rise to the forms
shown in the figure above for the lowest few levels. These eigenfunctions have the form
of a v-dependent polynomial multiplying a Gaussian function. These polynomials are
called the Hermite polynomials, and they are well known in mathematics.

3.4 The rigid rotor, angular momentum and rotational spectroscopy.
We leave the harmonic oscillator, the simplest model for vibrations in a molecule, and
turn now to considering the simplest model of the rotations of a molecule, the rigid rotor.
The rigid rotor consists of a mass, µ, rotating about a fixed origin to which it is tethered by
a massless stick (idealization is the name of the game here!).

Figure here: diagram of the rigid rotor, and relative orientations of vectors for position,
velocity and angular momentum.

Let us first briefly consider the classical mechanics of the rigid rotor. After all,
we're going to need the classical total energy in order to form the quantum Hamiltonian
by replacing position and momentum by their operator equivalents. There is no potential
energy, since the rotor is freely rotating, so the total energy is:

E = \tfrac{1}{2}\mu\, |\dot{\mathbf{r}}|^2 = \frac{p^2}{2\mu}     (3.55)
Since the distance from the center of mass is fixed, it is useful to introduce the angular
velocity, ω, and then re-express the energy in terms of it:

|\dot{\mathbf{r}}| = r\omega     (3.56)

E = \tfrac{1}{2}\mu r^2 \omega^2 = \tfrac{1}{2} I \omega^2     (3.57)

The quantity I = \mu r^2 is called the moment of inertia of the rigid rotor. The larger the
moment of inertia, the more torque it takes to get the rotor rotating.
Indeed, the product of the moment of inertia and the angular velocity gives the
magnitude of the angular momentum (the rotational analog of the linear momentum we are
familiar with). Angular momentum itself is a vector quantity, defined as:

\mathbf{L} = \mathbf{r} \times \mathbf{p} = \mu\, \mathbf{r} \times \dot{\mathbf{r}}     (3.58)

The large multiplication sign signifies the cross product, in case you are rusty on this
point. Geometrically, this means that the angular momentum is a vector that lies perpendicular to the plane
defined by the position and velocity relative to the origin. In our case, position and
momentum are perpendicular, as is geometrically obvious from the diagram shown in
Fig. XXX above, and thus if they are in the plane of the page, the angular momentum is
oriented perpendicular to the page. The magnitude of the angular momentum is given by:

L \equiv |\mathbf{L}| = \mu r^2 \omega = I\omega     (3.59)

Therefore we have yet a third way of writing the total energy (well, actually the kinetic
energy) of the rigid rotor, which is in terms of the square of the total angular momentum:

E = \frac{L^2}{2I}     (3.60)
Let's now make the transition to quantum mechanics. The moment of inertia is just a
constant for a given rigid rotor, but the angular momentum, as a dynamical observable, will be
represented by an operator, so that the total energy operator is:

\hat{H} = \frac{\hat{L}^2}{2I}     (3.61)
Clearly, if we can find the eigenvalues and eigenvectors of the total angular momentum (the
square of the magnitude of the vector), then we'll directly have the solutions for the energy
levels and eigenfunctions of the rigid rotor. The first obvious problem is that we do not
know the operator for angular momentum yet, let alone its eigenvalues or eigenvectors.
But it is (hopefully) obvious how to get the operator from Eq. (3.58), since we know the
operators for momentum and of course position:

\hat{\mathbf{L}} = \hat{\mathbf{r}} \times \hat{\mathbf{p}} = \frac{\hbar}{i} \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ x & y & z \\ \partial/\partial x & \partial/\partial y & \partial/\partial z \end{vmatrix} = \frac{\hbar}{i} \left( y\frac{\partial}{\partial z} - z\frac{\partial}{\partial y},\; z\frac{\partial}{\partial x} - x\frac{\partial}{\partial z},\; x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x} \right)     (3.62)

Angular momentum is a vector operator, \hat{\mathbf{L}} = (\hat{L}_x, \hat{L}_y, \hat{L}_z), with the three individual
components also being operators, as defined in Eq. (3.62) above. The total angular
momentum squared is also an operator, defined as:

\hat{L}^2 = \hat{\mathbf{L}} \cdot \hat{\mathbf{L}} = \hat{L}_x^2 + \hat{L}_y^2 + \hat{L}_z^2     (3.63)
This means that in fact there are four possible sets of eigenvalues and eigenvectors of
angular momentum that we may have to deal with: values for the three individual components,
and a total squared value. To get going on understanding them, we must first see whether
any of them share anything in common. Thus the first step is to explicitly examine the
commutation relationships of the components of angular momentum, amongst themselves
and with the total angular momentum squared. It can be shown (see exercise YYYY) that:
\left[ \hat{L}_x, \hat{L}_y \right] = i\hbar \hat{L}_z; \qquad \left[ \hat{L}_x, \hat{L}_z \right] = -i\hbar \hat{L}_y; \qquad \left[ \hat{L}_y, \hat{L}_z \right] = i\hbar \hat{L}_x     (3.64)

\left[ \hat{L}^2, \hat{L}_x \right] = \left[ \hat{L}^2, \hat{L}_y \right] = \left[ \hat{L}^2, \hat{L}_z \right] = 0     (3.65)
The conclusion from these equations is that none of the three components of angular
momentum commute with each other, but all of them commute with the total angular momentum
squared. Therefore we can have simultaneous knowledge of only two pieces of
information: the total angular momentum and one component, which we conventionally
choose to be the z component. We shall now seek their common eigenvalues and
eigenvectors.
There are various ways of finding the eigenvalues and eigenvectors. We’ll follow
an approach that is similar to what we’ve already used for the harmonic oscillator,
namely, to employ creation and destruction operators. First, to get to the essence of
things, let’s define dimensionless angular momentum operators, denoted by lower case
characters: $\hat{\mathbf{L}} = \hbar\hat{\mathbf{l}} = \hbar(\hat{l}_x, \hat{l}_y, \hat{l}_z)$, with (slightly simplified) commutation relationships:

[\hat{l}_x, \hat{l}_y] = i\hat{l}_z;\qquad [\hat{l}_x, \hat{l}_z] = -i\hat{l}_y;\qquad [\hat{l}_y, \hat{l}_z] = i\hat{l}_x    (3.66)

This reminds us that the Planck constant will be the fundamental unit of angular momentum. We now define creation and destruction operators, whose function will later justify their choice of names, just as it did for the harmonic oscillator case:
\hat{l}_+ = \hat{l}_x + i\hat{l}_y,\qquad \hat{l}_- = \hat{l}_x - i\hat{l}_y    (3.67)
And we can quickly establish their commutation relationships from the defining ones for
dimensionless angular momentum, Eqs. (3.66):
[\hat{l}^2, \hat{l}_+] = [\hat{l}^2, \hat{l}_-] = 0

[\hat{l}_z, \hat{l}_+] = [\hat{l}_z, \hat{l}_x] + i[\hat{l}_z, \hat{l}_y] = i\hat{l}_y + i(-i\hat{l}_x) = \hat{l}_+    (3.68)

[\hat{l}_z, \hat{l}_-] = [\hat{l}_z, \hat{l}_x] - i[\hat{l}_z, \hat{l}_y] = i\hat{l}_y - i(-i\hat{l}_x) = -\hat{l}_-

[\hat{l}_+, \hat{l}_-] = [\hat{l}_x + i\hat{l}_y,\; \hat{l}_x - i\hat{l}_y] = -2i[\hat{l}_x, \hat{l}_y] = -2i(i\hat{l}_z) = 2\hat{l}_z
The creation and destruction character of the operators can be seen from the commutation
relationships. Let us suppose that we have an eigenfunction, $|lm\rangle$, of $\hat{l}^2$ and $\hat{l}_z$ (with eigenvalue m of $\hat{l}_z$). Then consider the action of $\hat{l}_z$ on the ket $\hat{l}_+|lm\rangle$, using the commutator $[\hat{l}_z, \hat{l}_+] = \hat{l}_+$ from Eqs. (3.68):

\hat{l}_z\left(\hat{l}_+|lm\rangle\right) = \left(\hat{l}_+\hat{l}_z + \hat{l}_+\right)|lm\rangle = \hat{l}_+\left(m+1\right)|lm\rangle = (m+1)\left(\hat{l}_+|lm\rangle\right)    (3.69)

Our result shows that the operator $\hat{l}_+$ creates an additional quantum of the z-component of angular momentum when it acts on an eigenstate of $\hat{l}_z$. It is an exercise of exactly the same complexity to show similarly that:
\hat{l}_z\left(\hat{l}_-|lm\rangle\right) = (m-1)\left(\hat{l}_-|lm\rangle\right)    (3.70)

The defining relations for the creation and destruction operators can also be inverted:
\hat{l}_x = \frac{1}{2}\left(\hat{l}_+ + \hat{l}_-\right),\qquad \hat{l}_y = \frac{1}{2i}\left(\hat{l}_+ - \hat{l}_-\right)    (3.71)
Therefore the sum of the squares of the two components of angular momentum that we
cannot know can be written in terms of the creation and destruction operators, and of
course that is also equal to the difference between total angular momentum squared, $\hat{l}^2$, and $\hat{l}_z^2$:

\hat{l}^2 - \hat{l}_z^2 = \hat{l}_x^2 + \hat{l}_y^2 = \frac{1}{4}\left(\hat{l}_+ + \hat{l}_-\right)^2 - \frac{1}{4}\left(\hat{l}_+ - \hat{l}_-\right)^2 = \frac{1}{2}\left(\hat{l}_+\hat{l}_- + \hat{l}_-\hat{l}_+\right)    (3.72)
Two other expressions for this difference can be obtained by using the last commutation
relation in Eqs. (3.68):

\hat{l}^2 - \hat{l}_z^2 = \hat{l}_-\hat{l}_+ + \hat{l}_z = \hat{l}_+\hat{l}_- - \hat{l}_z    (3.73)
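The operator identities (3.72) and (3.73) can likewise be spot-checked numerically. A sketch, again using the standard l = 1 matrix representation (an assumption of this example, quoted rather than derived here):

```python
import numpy as np

# Standard l = 1 matrices (hbar = 1, basis m = +1, 0, -1).
s = 1 / np.sqrt(2)
lx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
ly = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

lp = lx + 1j * ly   # creation (raising) operator, Eq. (3.67)
lm = lx - 1j * ly   # destruction (lowering) operator

l2 = lx @ lx + ly @ ly + lz @ lz
lz2 = lz @ lz

# Eq. (3.72): l^2 - lz^2 = (l+ l- + l- l+)/2
assert np.allclose(l2 - lz2, 0.5 * (lp @ lm + lm @ lp))
# Eq. (3.73): the same difference, two more ways
assert np.allclose(l2 - lz2, lm @ lp + lz)
assert np.allclose(l2 - lz2, lp @ lm - lz)
# The last commutator of Eqs. (3.68): [l+, l-] = 2 lz
assert np.allclose(lp @ lm - lm @ lp, 2 * lz)
print("Eqs. (3.72)-(3.73) verified for l = 1")
```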
To lay further groundwork for finding the eigenvalues, let us assume that we have
an eigenfunction $|lm\rangle$ of $\hat{l}^2$ (with eigenvalue w, which presumably relates to the square of l) and of $\hat{l}_z$ (with eigenvalue m). We can then make a modified ket using the creation operator, $|\psi_+\rangle = \hat{l}_+|lm\rangle$, and take its norm (which must be zero or larger), using Eq. (3.73) to simplify:

\langle\psi_+|\psi_+\rangle = \langle lm|\,\hat{l}_-\hat{l}_+\,|lm\rangle = \langle lm|\,\hat{l}^2 - \hat{l}_z^2 - \hat{l}_z\,|lm\rangle = w - m^2 - m \ge 0    (3.74)
Proceeding likewise with the destruction operator, we can define $|\psi_-\rangle = \hat{l}_-|lm\rangle$ and take its norm, $\langle lm|\,\hat{l}_+\hat{l}_-\,|lm\rangle$, which must also be zero or larger, to show that:

w - m^2 + m \ge 0    (3.75)
We can now deduce the eigenvalues based on the following logic:
1) Denoting the smallest possible eigenvalue of $\hat{l}_z$ as $m_{min}$, we know that the action of the destruction operator cannot give a valid eigenfunction with a smaller eigenvalue. Therefore it must be true that $\hat{l}_-|l\,m_{min}\rangle = 0$, and thus Eq. (3.75) becomes an equality in this case: $w - m_{min}^2 + m_{min} = 0$.

2) Conversely, if we denote the largest possible eigenvalue of $\hat{l}_z$ as $m_{max}$, then we know that the action of the creation operator cannot give a valid eigenfunction with a larger eigenvalue. Therefore it must be true that $\hat{l}_+|l\,m_{max}\rangle = 0$, and thus Eq. (3.74) becomes an equality in this case: $w - m_{max}^2 - m_{max} = 0$.

3) Combining the two results established above gives a connection between the largest possible and smallest possible eigenvalues: $m_{max}^2 + m_{max} = m_{min}^2 - m_{min}$. There is only one consistent way to satisfy this relationship, namely that:

m_{min} = -m_{max}    (3.76)

4) In order to interconvert $m_{min}$ into $m_{max}$ in integer steps through the action of the creation operator, it must be true that $2m_{max} = 0, 1, 2, 3...$, so that the values of $m_{max}$ are either integer or half-integer.

5) Given the largest possible eigenvalue, $m_{max}$, of $\hat{l}_z$, we can also, from point (2) above, find the eigenvalue, w, of total angular momentum squared, $\hat{l}^2$. It is $w = m_{max}(m_{max}+1)$, and it now seems sensible to identify the hitherto unspecified label l in the eigenstate $|lm\rangle$ as simply $l \equiv m_{max}$.
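The termination logic above can be made concrete with the same standard l = 1 matrices used earlier (an assumption of this sketch, not a derivation): the norm of Eq. (3.74) equals w - m^2 - m for every m, and vanishes exactly at m = mmax = l:

```python
import numpy as np

s = 1 / np.sqrt(2)
lx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
ly = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
lp = lx + 1j * ly                      # raising operator

l = 1
w = l * (l + 1)                        # point (5): w = mmax(mmax + 1) = 2
kets = {1: np.array([1, 0, 0], dtype=complex),   # |l=1, m=+1>
        0: np.array([0, 1, 0], dtype=complex),   # |l=1, m=0>
       -1: np.array([0, 0, 1], dtype=complex)}   # |l=1, m=-1>

for m, ket in kets.items():
    psi_plus = lp @ ket
    norm2 = np.vdot(psi_plus, psi_plus).real
    # Eq. (3.74): <psi+|psi+> = w - m^2 - m, zero exactly when m = mmax
    assert np.isclose(norm2, w - m**2 - m)

print(np.allclose(lp @ kets[1], 0))  # True: the ladder terminates at m = l
```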
Let us summarize and put it all together in terms of the original, rather than
reduced, angular momentum operators. We have deduced that:
\hat{L}^2|lm\rangle = \hbar^2\, l(l+1)\,|lm\rangle,\qquad l = 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, 2, ...    (3.77)

\hat{L}_z|lm\rangle = \hbar m\,|lm\rangle,\qquad m = -l, -l+1, ..., l-1, l    (3.78)

There are, therefore, (2l+1) distinct values of the quantum number m for a given l value.
The physical picture is as follows:
• With respect to total angular momentum, the value of l, by Eq. (3.77), sets the
length (magnitude) of the angular momentum vector, and only certain values are allowed.
Geometrically, as drawn below, the value of l defines a sphere, and any given point on
the sphere is a possible angular momentum vector.
• With respect to the specific direction of the angular momentum vector, it is only the z-component that is specified, with (2l+1) possible values given by Eq. (3.78). Each possible m value defines an allowed circle on the sphere specified by l. This corresponds physically to a cone of permissible angular momentum vectors, each with a common z-component but unknown x and y components. This is the partial “space quantization” of angular momentum.
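The cone picture can be quantified: each allowed m fixes the angle between the angular momentum vector and the z axis through cos(theta) = m / sqrt(l(l+1)). A small illustrative script (l = 2 chosen arbitrarily for this sketch):

```python
import numpy as np

# Vector-model picture of space quantization: for a given l, the vector
# length is sqrt(l(l+1)) in units of hbar, and each of the 2l+1 allowed
# projections m defines a cone at angle cos(theta) = m / sqrt(l(l+1)).
l = 2
length = np.sqrt(l * (l + 1))
for m in range(l, -l - 1, -1):
    theta = np.degrees(np.arccos(m / length))
    print(f"m = {m:+d}: cone half-angle = {theta:6.2f} degrees")
```

Note that m = l never reaches theta = 0: the vector can never point exactly along z, since the x and y components cannot both vanish while the commutation relations hold.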
Let us turn now to the question of the eigenvectors of angular momentum. There
are various ways of solving for the eigenvectors. We shall follow a strategy similar to
what we used for the harmonic oscillator, beginning with the termination condition for
the highest (m=l) value associated with a given l: $\hat{l}_+|ll\rangle = 0$. We’ll seek a solution by
separation of variables, using the natural coordinates, which are spherical polar
coordinates:
x = r\sin\theta\cos\phi,\qquad y = r\sin\theta\sin\phi,\qquad z = r\cos\theta    (3.79)

Rigid rotation, of course, corresponds to fixed r, so we now seek to separate the angular variables:

Y_l^{m=l}(\theta,\phi) = N\,\Theta(\theta)\,\Phi(\phi)    (3.80)
Putting the components of angular momentum into spherical polar coordinates begins
with the one that is simplest – namely rotation about z, described by the azimuthal angle
φ. It can be shown that:
\hat{L}_z = \frac{\hbar}{i}\frac{\partial}{\partial\phi}    (3.81)

Considering the condition $\hat{L}_z\Phi(\phi) = l\hbar\,\Phi(\phi)$ shows that the φ dependence is given by:

\frac{\partial}{\partial\phi}\Phi(\phi) = il\,\Phi(\phi) \;\Rightarrow\; \Phi(\phi) = \exp(il\phi)    (3.82)

The θ dependence follows from the termination condition already mentioned, $\hat{l}_+|ll\rangle = 0$,
where the creation operator in spherical polar coordinates is given by:
\hat{L}_+ = \hat{L}_x + i\hat{L}_y = \hbar\, e^{i\phi}\left(\frac{\partial}{\partial\theta} + i\cot\theta\,\frac{\partial}{\partial\phi}\right)    (3.83)
Using the separated variable form, (3.80), together with the known φ dependence, (3.82)
gives (after a few steps of omitted algebra), an equation for the θ dependence:
\frac{\partial\Theta(\theta)}{\partial\theta} = \left(l\cot\theta\right)\Theta(\theta) \;\Rightarrow\; \Theta(\theta) = \sin^l\theta    (3.84)

We finally have our target eigenfunction, after substituting (3.84) and (3.82) into Eq.
(3.80):
|l, m=l\rangle \;\rightarrow\; Y_l^{m=l}(\theta,\phi) = N\,\sin^l\theta\; e^{il\phi}    (3.85)
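It is easy to confirm symbolically that this function satisfies the termination condition: applying the raising operator of Eq. (3.83) (with hbar = 1) to sin^l(theta) e^{il phi} gives zero. A sketch using SymPy, with l = 2 chosen for concreteness (the normalization N is omitted, since it cannot affect the result):

```python
import sympy as sp

theta, phi = sp.symbols("theta phi")
l = sp.Integer(2)  # any non-negative integer l works the same way
Y = sp.sin(theta)**l * sp.exp(sp.I * l * phi)  # Eq. (3.85), N omitted

# Raising operator of Eq. (3.83), hbar = 1:
# L+ = e^{i phi} (d/d theta + i cot(theta) d/d phi)
LpY = sp.exp(sp.I * phi) * (sp.diff(Y, theta)
                            + sp.I * sp.cot(theta) * sp.diff(Y, phi))
print(sp.simplify(LpY))  # 0: the ladder terminates at m = l
```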
From the first eigenfunction, for m=l, it is now in principle straightforward, if potentially a little tedious, to step down the ladder of possible m values one at a time, using the destruction operator, $\hat{l}_-$, whose explicit form in polar coordinates follows as the adjoint of Eq. (3.83). Thus we can obtain the lower eigenfunctions as:

|l, m = l-a\rangle \;\propto\; \left(\hat{l}_-\right)^a |l, m=l\rangle    (3.86)
It can be explicitly verified that the first several spherical harmonics take the following form. Firstly, the l=0 case:

Y_0^0 = \frac{1}{\sqrt{4\pi}}    (3.87)

This corresponds to a constant amplitude everywhere on a sphere. It gives the angular dependence (or, more accurately, the lack of it) associated with s functions. The l=1 case of course has 3 possible values of m (1, 0, -1):
Y_1^1 = \sqrt{\frac{3}{8\pi}}\,\sin\theta\, e^{i\phi},\qquad Y_1^0 = \sqrt{\frac{3}{4\pi}}\,\cos\theta,\qquad Y_1^{-1} = \sqrt{\frac{3}{8\pi}}\,\sin\theta\, e^{-i\phi}    (3.88)

These functions have a single angular node and, when mixed together to make real functions, may be familiar to you as providing the angular part of p functions. With a little additional work, the l=2 functions can also be obtained as:
little additional work, the l=2 functions can also be obtained as:
Y_2^2 = \sqrt{\frac{15}{32\pi}}\,\sin^2\theta\, e^{2i\phi},\qquad Y_2^1 = \sqrt{\frac{15}{8\pi}}\,\cos\theta\sin\theta\, e^{i\phi},\qquad Y_2^0 = \sqrt{\frac{5}{16\pi}}\left(3\cos^2\theta - 1\right),
Y_2^{-1} = \sqrt{\frac{15}{8\pi}}\,\cos\theta\sin\theta\, e^{-i\phi},\qquad Y_2^{-2} = \sqrt{\frac{15}{32\pi}}\,\sin^2\theta\, e^{-2i\phi}    (3.89)

These functions have 2 angular nodal surfaces and provide the angular dependence of d functions, when we consider atomic orbitals.
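As a spot-check of these explicit forms, the l = 1 functions of Eq. (3.88) can be integrated numerically over the sphere to confirm that they are normalized and mutually orthogonal (a property stated formally in Eq. (3.91) below). A sketch using simple midpoint quadrature, coded directly from the formulas above (with the sign convention used in this text, i.e. no Condon-Shortley phase):

```python
import numpy as np

def Y1(m, theta, phi):
    """l = 1 spherical harmonics of Eq. (3.88)."""
    if m == 1:
        return np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(1j * phi)
    if m == 0:
        return np.sqrt(3 / (4 * np.pi)) * np.cos(theta) + 0j
    return np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(-1j * phi)

# Midpoint quadrature grid over the sphere, with area element sin(theta)
n = 400
theta = (np.arange(n) + 0.5) * np.pi / n
phi = (np.arange(2 * n) + 0.5) * 2 * np.pi / (2 * n)
T, P = np.meshgrid(theta, phi, indexing="ij")
dA = (np.pi / n) * (2 * np.pi / (2 * n)) * np.sin(T)

for m1 in (-1, 0, 1):
    for m2 in (-1, 0, 1):
        overlap = np.sum(np.conj(Y1(m1, T, P)) * Y1(m2, T, P) * dA)
        expected = 1.0 if m1 == m2 else 0.0
        assert abs(overlap - expected) < 1e-3
print("l = 1 spherical harmonics are orthonormal")
```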
These eigenfunctions of angular momentum are called the spherical harmonics and, as eigenfunctions of Hermitian operators, have all three of the nice properties that we established generally in the previous chapter. First, they form a basis in which any angular function can be expanded:

f(\theta,\phi) = \sum_{l=0}^{\infty}\;\sum_{m=-l}^{l} c_{lm}\, Y_l^m(\theta,\phi)    (3.90)

Second, they are orthonormal, which we can write either abstractly in terms of the bra-ket
notation, or concretely in terms of integrals over the angular variables:
\langle l'm'|lm\rangle = \int_0^{2\pi}\! d\phi \int_0^{\pi}\! d\theta\,\sin\theta\,\left[Y_{l'}^{m'}(\theta,\phi)\right]^* Y_l^m(\theta,\phi) = \delta_{ll'}\,\delta_{mm'}    (3.91)

Third, they obey the completeness or closure relation.

3.5 The hydrogen atom.
While you are almost certainly familiar with the eigenvalues (energy levels) and
eigenfunctions (atomic orbitals) of the hydrogen atom from earlier classes, we are now in
a position to see the origin of both from a more fundamental perspective. We take the
hydrogenlike atom as having a nucleus of charge Z (the proton for H itself) clamped at
the origin, and want to solve the quantum mechanical problem of the bound states of the
electron, in the Coulomb potential:
V(r) = -\frac{Ze^2}{r} = -\frac{Ze^2}{|\mathbf{r}|} = -\frac{Ze^2}{\sqrt{x^2 + y^2 + z^2}}    (3.92)

The point of writing the potential in 3 different ways above is to emphasize that in terms of Cartesian variables, x, y, z, the potential does not separate, but in terms of polar coordinates, (r, θ, φ), it does. Therefore we are strongly motivated to write the Hamiltonian
(kinetic energy plus potential energy) in terms of polar coordinates and try to exploit this
spherical symmetry by separating variables. This requires us to express the kinetic
energy operator for the electron moving in 3 dimensions:
\hat{T} = -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)    (3.93)
in terms of polar coordinates. After some straightforward algebra, one can show that:
\nabla^2 = \frac{1}{r}\frac{\partial^2}{\partial r^2}r + \frac{1}{r^2}\left(\frac{\partial^2}{\partial\theta^2} + \frac{1}{\tan\theta}\frac{\partial}{\partial\theta} + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right) = \frac{1}{r}\frac{\partial^2}{\partial r^2}r - \frac{\hat{L}^2}{\hbar^2 r^2}    (3.94)
Putting it all together gives us the hydrogenic atom Hamiltonian as:
\hat{H} = -\frac{\hbar^2}{2mr}\frac{\partial^2}{\partial r^2}r + \frac{\hat{L}^2}{2mr^2} - \frac{Ze^2}{r}    (3.95)

All the angular dependence of this Hamiltonian is contained in the total angular momentum operator, and thus it is immediately true that $[\hat{H}, \hat{L}^2] = [\hat{H}, \hat{L}_z] = 0$. In other words, the eigenfunctions of angular momentum yield the solution to the angular part of this problem, and separating the radial and angular variables will work beautifully:

\psi(r,\theta,\phi) = R(r)\,Y_l^m(\theta,\phi)    (3.96)
The angular parts of the eigenfunctions are the spherical harmonics, Eq. (3.85) and subsequent equations, whose properties we deduced in the previous section. Since the total angular momentum eigenvalues are $\hbar^2 l(l+1)$ from Eq. (3.77), the equation obeyed by the radial component of the eigenfunction is:
\left[-\frac{\hbar^2}{2mr}\frac{\partial^2}{\partial r^2}r + \frac{\hbar^2 l(l+1)}{2mr^2} - \frac{Ze^2}{r}\right] R(r) = E\,R(r)    (3.97)
While there is no direct angular dependence, there is an indirect coupling between the
radial equation and the angular motion via the second term of Eq. (3.97), which is a
repulsive interaction that increases with the square of total angular momentum. It is a
centrifugal term, which prevents the electron from approaching the nucleus too closely
when it has high angular momentum. We can immediately infer that higher values of the
angular momentum quantum number will, all else being equal, lead to larger average
values of the radial coordinate, <r>.
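The effect of the centrifugal term can be visualized through the effective radial potential, V_eff(r) = hbar^2 l(l+1)/(2mr^2) - Ze^2/r. A sketch in atomic units (hbar = m = e = 1 and Z = 1, an assumption of this example), locating the minimum, which sits at r = l(l+1) bohr and moves outward as l grows:

```python
import numpy as np

# Effective radial potential from Eq. (3.97), atomic units, Z = 1:
# V_eff(r) = l(l+1)/(2 r^2) - 1/r
def V_eff(r, l):
    return l * (l + 1) / (2 * r**2) - 1 / r

r = np.linspace(0.05, 40, 4000)
for l in (1, 2, 3):
    r_min = r[np.argmin(V_eff(r, l))]
    # Analytically the minimum is at r = l(l+1) bohr
    print(f"l = {l}: V_eff minimum near r = {r_min:.2f} bohr")
```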
The radial eigenvalue/eigenvector problem, Eq. (3.97), can be solved exactly, but
it is not a trivial exercise. As a student of quantum mechanics, for whom time is short, and for whom the main goal is to proceed to study modern methods that can be used to explain the behavior of electrons in molecules, the details of this solution, while interesting, are not essential (for full details, you may consult other textbooks, such as the one by McQuarrie and Simons, or, perhaps later, an appendix to this project). The most important outcome is the fact that the energy levels are given in terms of a principal quantum number, n, which is allowed to be any positive integer:

E_n = -\frac{E_I}{n^2};\qquad n = 1, 2, 3, ...    (3.98)
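For orientation, the level formula is easy to evaluate numerically. A sketch taking E_I of approximately 13.606 eV for hydrogen (a standard value, not derived in the text):

```python
# Hydrogenic energy levels, Eq. (3.98), with the ionization energy
# E_I = 13.606 eV for hydrogen (standard value, assumed here).
E_I_eV = 13.606

def E_n(n, Z=1):
    """Bound-state energy in eV; E_I scales as Z**2 via Eq. (3.99)."""
    return -(Z**2) * E_I_eV / n**2

for n in (1, 2, 3):
    print(f"n = {n}: E = {E_n(n):8.3f} eV")

# The n = 2 -> n = 1 gap is the Lyman-alpha transition energy:
print(f"Lyman-alpha: {E_n(2) - E_n(1):.3f} eV")  # about 10.2 eV
```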
where the reference energy is that required to completely ionize the electron from the
lowest energy level:
E_I = \frac{mZ^2e^4}{2\hbar^2} = \frac{Ze^2}{2a_0}    (3.99)
Notice how the ionization energy can be nicely written in terms of the Bohr radius, so
that the energy looks like a modified Coulomb energy, but with the sign changed. That is
an example of the quantum mechanical virial theorem (need to add to earlier chapter).
The Bohr radius itself is:
a_0 = \frac{\hbar^2}{me^2 Z}    (3.100)
The Bohr radius sets the length scale for the radial wavefunctions. Again, omitting the
details, a few examples of the radial wavefunctions are shown below:
R_{n=1,l=0} = 2\, a_0^{-3/2}\, e^{-r/a_0}

R_{n=2,l=0} = 2^{-1/2}\, a_0^{-3/2}\left(1 - \frac{r}{2a_0}\right) e^{-r/2a_0}    (3.101)

R_{n=2,l=1} = \frac{1}{2\sqrt{6}}\, a_0^{-3/2}\,\frac{r}{a_0}\, e^{-r/2a_0}
3 (3.101) You should be familiar with them from introductory chemistry classes – they are the
radial wavefunctions for the 1s, 2s and 2p states, respectively. Notice how the total number of nodes (radial + angular) controls the total energy, so that the 2p radial wavefunction is nodeless (its single node is angular), while the 2s radial wavefunction has a single radial node.
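The stated normalization and node structure can be checked numerically. A sketch with a0 set to 1 (hydrogen, Z = 1), coding the functions of Eq. (3.101) directly: each should satisfy the radial normalization integral of R^2 r^2 dr = 1, and only the 2s should change sign:

```python
import numpy as np

# Radial wavefunctions of Eq. (3.101), in units where a0 = 1.
def R10(r):
    return 2.0 * np.exp(-r)

def R20(r):
    return (1 / np.sqrt(2)) * (1 - r / 2) * np.exp(-r / 2)

def R21(r):
    return (1 / (2 * np.sqrt(6))) * r * np.exp(-r / 2)

dr = 1e-3
r = np.arange(dr / 2, 60.0, dr)  # midpoint grid, avoids r = 0 exactly
for R, name in ((R10, "1s"), (R20, "2s"), (R21, "2p")):
    norm = np.sum(R(r)**2 * r**2) * dr                 # radial normalization
    vals = R(r)
    nodes = int(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))
    print(f"{name}: norm = {norm:.4f}, radial nodes = {nodes}")
```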
Spring '09, HEADGORDON