4 APPROXIMATION METHODS
4.1 Introduction:
The number of problems that can be solved exactly in quantum mechanics is
indeed very small, and essentially all of the realistic ones require
approximations. The purpose of this chapter is to introduce you to some of the most
important methods available for getting reasonable, though not exact, answers to
quantum mechanical problems. The methods introduced here will be used repeatedly
in later chapters in the context of treating many-electron problems. However, it is
simplest to learn them first in the context of simpler problems where only a single
particle is involved, and that is our present purpose.
There are two major categories of methods for time-independent problems. The
first is perturbation theory, which aims to slightly correct the known solution of a
solvable problem that closely resembles the unsolvable problem of actual interest.
The details of this approach differ somewhat depending on whether the state we
are trying to correct is non-degenerate (the most common case) or not. The second
technique is the variational method, which involves guessing a trial wavefunction
containing adjustable undetermined parameters, which are then adjusted to minimize the
energy. It can be shown that the parameters which minimize the energy give the best
approximation.

4.2 Time-independent perturbation theory: non-degenerate states
The basis of time-independent perturbation theory is to find a problem with a
known solution that is close to the target unknown problem. Mathematically, this means
that the Hamiltonian of the known problem, $\hat H^{(0)}$, is close to that of the unknown
problem, $\hat H$, and therefore the difference between the two, which we will call the
perturbation, $\lambda \hat V^{(1)} = \hat H - \hat H^{(0)}$, is small. Therefore we are going to attempt a power
series expansion in terms of the difference, where the parameter $\lambda$ will be used to group
terms in the expansion. Such an expansion should be rapidly convergent if the
perturbation is small. If not, another method must be used.
Sometimes $\lambda$ will have a direct physical interpretation, such as the magnitude of
an applied field, where the unperturbed problem could be a molecule in the absence of
the field, while the perturbed problem is the molecule in the presence of an applied
electric field, for example. Other times $\lambda$ is simply a formal ordering parameter which
will be set to 1 when the power series expansions for the eigenvalue and eigenvector of
the target problem are truncated at some power, $\lambda^n$. For instance this is the case if one is
considering a particle in a box where the shape of the box is changed from being flat
(known solution) to sloping (see problem XXX), where the solution is not known to you.
The eigenvalues and eigenfunctions of the $\hat H^{(0)}$ problem are assumed to be
known, and will be denoted as:

$\hat H^{(0)} \psi_j^{(0)} = E_j^{(0)} \psi_j^{(0)}$    (4.1)

The superscript, (0), indicates the problem is unperturbed (zero order) and the index $j$
indicates which unperturbed energy level we are referring to. Most commonly we will be
considering corrections to the lowest energy level (the ground state). The strategy we
follow will now be to write power series expansions, first for the Hamiltonian:
$\hat H = \hat H^{(0)} + \lambda \hat V^{(1)}$    (4.2)
All quantities in Eq. (4.2) must be known, otherwise we cannot even get started with the
agenda of perturbation theory, which is to solve order-by-order for the terms in power
series expansions of the unknown perturbed eigenvalues and eigenvectors. Specifically,
these expansions take the same form as Maclaurin or Taylor series:
$E_j = E_j^{(0)} + \lambda E_j^{(1)} + \lambda^2 E_j^{(2)} + \ldots$    (4.3)

$\psi_j = \psi_j^{(0)} + \lambda \psi_j^{(1)} + \lambda^2 \psi_j^{(2)} + \ldots$    (4.4)

Order-by-order solution for the unknowns starts by substituting the expansions,
Eqs. (4.2), (4.3) and (4.4), into the time-independent Schrödinger equation,
$\hat H \psi_j = E_j \psi_j$, which initially generates a large and cumbersome expression:

$\left[ \hat H^{(0)} + \lambda \hat V^{(1)} \right] \left[ \psi_j^{(0)} + \lambda \psi_j^{(1)} + \lambda^2 \psi_j^{(2)} + \ldots \right] = \left[ E_j^{(0)} + \lambda E_j^{(1)} + \lambda^2 E_j^{(2)} + \ldots \right] \left[ \psi_j^{(0)} + \lambda \psi_j^{(1)} + \lambda^2 \psi_j^{(2)} + \ldots \right]$    (4.5)
If the perturbation is small, we should be able to begin by collecting the terms from Eq.
(4.5) that are proportional to $\lambda^0$ (that simply yields the solved equation for the
unperturbed problem, (4.1)). The next step is to then collect all terms proportional to $\lambda^1$,
which will represent the largest correction to the unperturbed problem, and solve for the
unknowns. Gathering all terms proportional to $\lambda^1$ from Eq. (4.5) yields:
$\hat H^{(0)} \psi_j^{(1)} + \hat V^{(1)} \psi_j^{(0)} = E_j^{(0)} \psi_j^{(1)} + E_j^{(1)} \psi_j^{(0)}$    (4.6)

The first unknown that we're after is the leading effect of the perturbation on the
eigenvalue of interest, $E_j^{(1)}$. To solve for this perturbed energy level, we project Eq. (4.6)
with the known bra corresponding to the unperturbed energy level, $\langle \psi_j^{(0)} |$. The result is:
$\langle \psi_j^{(0)} | \hat H^{(0)} | \psi_j^{(1)} \rangle + \langle \psi_j^{(0)} | \hat V^{(1)} | \psi_j^{(0)} \rangle = E_j^{(0)} \langle \psi_j^{(0)} | \psi_j^{(1)} \rangle + E_j^{(1)} \langle \psi_j^{(0)} | \psi_j^{(0)} \rangle$    (4.7)

After staring at this equation for a short time, it should be clear that,
because the unperturbed state can be chosen as normalized, the final scalar product is 1.
After staring at it for a somewhat longer time, it should gradually become
evident that the first and third terms cancel. One uses the adjoint of the unperturbed
problem, Eq. (4.1), which is:
$\langle \psi_j^{(0)} | \hat H^{(0)} = \langle \psi_j^{(0)} | E_j^{(0)}$    (4.8)

to show the cancellation explicitly. We are then left with exactly what we want – an
expression for the leading correction to the eigenvalue entirely in terms of known
quantities:

$E_j^{(1)} = \langle \psi_j^{(0)} | \hat V^{(1)} | \psi_j^{(0)} \rangle$    (4.9)

Given the known unperturbed eigenvector as a function, and the known operator
describing the perturbation, Eq. (4.9) represents an integral (or matrix element) to be
evaluated. The interpretation of Eq. (4.9) in terms of matrix elements is particularly
straightforward. If we are given the matrix of the perturbation, V, in the basis of the
unperturbed eigenvectors, the first order correction to the jth level is simply the (j,j)
element of the matrix.
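The matrix-element reading of Eq. (4.9) is easy to verify numerically. Below is a minimal sketch in Python; the matrices are made up purely for illustration. The first order correction, the diagonal element $V_{jj}$, is compared against exact diagonalization for a small ordering parameter.

```python
import numpy as np

# Made-up 3x3 example: a diagonal, non-degenerate unperturbed Hamiltonian
H0 = np.diag([1.0, 2.0, 4.0])
# A small symmetric perturbation (values chosen arbitrarily for illustration)
V = np.array([[0.10, 0.05, 0.02],
              [0.05, -0.10, 0.03],
              [0.02, 0.03, 0.20]])

lam = 1e-4  # small ordering parameter, so first order dominates
E_exact = np.linalg.eigvalsh(H0 + lam * V)   # exact perturbed eigenvalues

for j in range(3):
    E1 = V[j, j]                  # Eq. (4.9): first-order shift is the (j,j) element
    E_pt = H0[j, j] + lam * E1    # perturbative eigenvalue through first order
    print(j, E_exact[j], E_pt)    # these agree up to terms of order lambda^2
```

Shrinking `lam` by a factor of 10 shrinks the residual disagreement by roughly a factor of 100, confirming that the error left over is second order in the perturbation.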
The other remaining unknown in the first order perturbed equation, Eq. (4.6), is
the first order correction to the eigenfunction due to the perturbation. To get an
expression for $\psi_j^{(1)}$, we should expand it in a basis and then solve for the expansion
coefficients, as we learned in Chapter 2. There is one basis that we already know is
available to us for this purpose, and that is the basis of the unperturbed eigenfunctions,
defined by solutions to the unperturbed problem, Eq. (4.1). Therefore, we put this basis
to work, and write:

$\psi_j^{(1)} = \sum_{k \neq j} \psi_k^{(0)} c_k^{(1)}$    (4.10)

The astute reader will notice that we have defined the sum over unperturbed states, $k$, to
exclude the state $j$ we are correcting. The physical reason for this is that changing the
amount of the state we had to begin with has no effect apart from normalization.
Mathematically this is expressed by the condition that the perturbed wavefunction should
have unit scalar product with the unperturbed one: $\langle \psi_j^{(0)} | \psi_j \rangle = 1$.
To solve for the coefficients, we just proceed in the usual fashion for solving for
coefficients (see Sec. 2.XX) by projecting Eq. (4.6) with each unperturbed level,
$\langle \psi_k^{(0)} |$, one at a time (except for $\langle \psi_j^{(0)} |$, which we already used to resolve the energy). This gives:

$\langle \psi_k^{(0)} | \hat H^{(0)} | \psi_j^{(1)} \rangle + \langle \psi_k^{(0)} | \hat V^{(1)} | \psi_j^{(0)} \rangle = E_j^{(0)} \langle \psi_k^{(0)} | \psi_j^{(1)} \rangle + E_j^{(1)} \langle \psi_k^{(0)} | \psi_j^{(0)} \rangle$    (4.11)

Inserting the adjoint of the condition for the $k$th unperturbed state,
$\langle \psi_k^{(0)} | \hat H^{(0)} = \langle \psi_k^{(0)} | E_k^{(0)}$, into the first term of Eq. (4.11), and noting that orthogonality of
the unperturbed states causes the last term to be zero, leads us to:
$\left( E_j^{(0)} - E_k^{(0)} \right) \langle \psi_k^{(0)} | \psi_j^{(1)} \rangle = \langle \psi_k^{(0)} | \hat V^{(1)} | \psi_j^{(0)} \rangle$    (4.12)

We can now insert the expansion for the perturbed eigenstate, Eq. (4.10), to observe that
$c_k^{(1)} = \langle \psi_k^{(0)} | \psi_j^{(1)} \rangle$, and therefore Eq. (4.12) yields an expression for the expansion coefficients, and thus, via Eq. (4.10), the perturbed wavefunction:

$c_k^{(1)} = \frac{\langle \psi_k^{(0)} | \hat V^{(1)} | \psi_j^{(0)} \rangle}{E_j^{(0)} - E_k^{(0)}} \quad \Rightarrow \quad \psi_j^{(1)} = \sum_{k \neq j} \psi_k^{(0)} c_k^{(1)} = \sum_{k \neq j} \psi_k^{(0)} \, \frac{\langle \psi_k^{(0)} | \hat V^{(1)} | \psi_j^{(0)} \rangle}{E_j^{(0)} - E_k^{(0)}}$    (4.13)
E (j0 ) " Ek0 ) ) (4.13) This equation shows that all unperturbed states, apart from the original one, contribute to
the first order correction to the eigenstate. Not such a simple situation as the first order
energy correction, unfortunately!
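Equation (4.13) can be checked the same way as the first order energy: the sketch below (matrices made up for illustration) compares the perturbative coefficients $c_k^{(1)}$ against an exactly diagonalized eigenvector, rescaled so that its component along the unperturbed state is 1, matching the intermediate normalization condition above.

```python
import numpy as np

H0 = np.diag([1.0, 2.0, 4.0])                # made-up non-degenerate levels
V = np.array([[0.0, 0.3, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])              # made-up symmetric perturbation
lam, j = 0.01, 0

# Eq. (4.13): c_k^(1) = V_kj / (E_j^(0) - E_k^(0)) for k != j
E0 = np.diag(H0)
c1 = np.array([V[k, j] / (E0[j] - E0[k]) if k != j else 0.0
               for k in range(3)])

# Exact eigenvector of the perturbed problem, rescaled so component j equals 1
w, U = np.linalg.eigh(H0 + lam * V)
v = U[:, j] / U[j, j]

print(lam * c1)             # first-order estimate of the k != j components
print(v - np.eye(3)[:, j])  # exact deviation from the unperturbed state
```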
Now that you've got the feel for it, it's not that difficult to go on and do the
second order correction to the energy. First we collect all terms in the expansion, Eq.
(4.5), that are proportional to $\lambda^2$:

$\hat H^{(0)} \psi_j^{(2)} + \hat V^{(1)} \psi_j^{(1)} = E_j^{(0)} \psi_j^{(2)} + E_j^{(1)} \psi_j^{(1)} + E_j^{(2)} \psi_j^{(0)}$    (4.14)

We project with the unperturbed eigenstate, $\langle \psi_j^{(0)} |$, to get an expression for the second order
energy correction on its own. This is almost identical to the procedure we used to isolate
the first order energy correction, and yields:
$E_j^{(2)} = \langle \psi_j^{(0)} | \hat V^{(1)} | \psi_j^{(1)} \rangle$    (4.15)

You should be comfortable with the 3 simplifications that were made in order to obtain
this expression. Finally, the last step is to insert the hard-won expression for the first
order perturbed wavefunction, Eq. (4.13), to obtain the explicit form of the second order
energy correction:

$E_j^{(2)} = \langle \psi_j^{(0)} | \hat V^{(1)} | \psi_j^{(1)} \rangle = \sum_{k \neq j} \langle \psi_j^{(0)} | \hat V^{(1)} | \psi_k^{(0)} \rangle c_k^{(1)} = \sum_{k \neq j} \frac{\langle \psi_j^{(0)} | \hat V^{(1)} | \psi_k^{(0)} \rangle \langle \psi_k^{(0)} | \hat V^{(1)} | \psi_j^{(0)} \rangle}{E_j^{(0)} - E_k^{(0)}}$    (4.16)
k As follows directly from the form of the firstorder correction to the unperturbed
eigenstate, Eq. (4.13), all unperturbed states contribute to the second order energy
correction! This expression is accordingly often called a “sum over states” result. One
can, of course, go further with perturbation theory, but usually if second order is not good
enough, then third order won’t be either, and so on. One should instead resort to a nonperturbative approach such as the variational principles discussed later on in Sec. XXX.
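The whole machinery through second order can be sanity-checked on a small matrix model (made-up numbers, Python): Eq. (4.9) gives $E^{(1)}$, Eq. (4.16) gives $E^{(2)}$, and their sum is compared with exact diagonalization.

```python
import numpy as np

H0 = np.diag([1.0, 2.0, 4.0])                # made-up non-degenerate levels
V = np.array([[0.0, 0.3, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])              # made-up symmetric perturbation
lam, j = 0.1, 0
E0 = np.diag(H0)

E1 = V[j, j]                                 # Eq. (4.9): zero here, by construction
E2 = sum(V[j, k] ** 2 / (E0[j] - E0[k])      # Eq. (4.16): sum over states k != j
         for k in range(3) if k != j)

E_pt = E0[j] + lam * E1 + lam ** 2 * E2
E_exact = np.linalg.eigvalsh(H0 + lam * V)[j]
print(E_exact, E_pt)                         # differ only at third order in lambda
```

Note that `E2` comes out negative, consistent with the general statement below that the second order correction to a ground state is never positive.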
The next step is to look at applications, and we'll do a simple one with important
physical implications in the following subsection.

4.3 Application of perturbation theory to dispersion interactions
In introductory chemistry, one is usually introduced to the main intermolecular
interactions, including hydrogen bonding, dipole-dipole interactions, and the somewhat
more mysterious dispersion forces (van der Waals interactions). The dispersion
interaction is usually claimed to be the coupling of instantaneous dipole moments, which
seems surprising when one considers that no dipole moment is actually induced by the
presence of a second spherical atom near a first one. What, then, is exactly the nature of
the interaction?
This is an ideal problem to treat by perturbation theory because the intermolecular
interaction is much, much weaker than the intramolecular interactions. While your
interests may involve large molecules and the role of dispersion interactions in the
condensed phase, we are in a position to explore their treatment by perturbation theory
only if we have an exactly solved unperturbed problem available to start with. The best
we've got is a pair of non-interacting hydrogen atoms, and the perturbation will then be
their intermolecular interaction. In raw form, this is the internuclear and interelectron
repulsion, plus the intermolecular electron-nuclear attractions. With nucleus A holding
electron a, nucleus B holding electron b, and internuclear separation R, we have:

$V = e^2 \left[ R^{-1} - R_{aB}^{-1} - R_{Ab}^{-1} + R_{ab}^{-1} \right]$    (4.17)
As it stands, this expression is not useful because we'd like to put everything in
terms of the internuclear distance, $R$, and the local electron coordinates on each atom, $\mathbf{r}_a$
and $\mathbf{r}_b$. This can be done to an excellent approximation because the internuclear distance
can be assumed to be much bigger than the typical intramolecular electron-nucleus
distance. We will make use of the Maclaurin series expansion for the inverse square
root of a small correction to 1, namely:

$(1 + x)^{-1/2} = 1 - \frac{1}{2} x + \frac{3}{8} x^2 - \ldots$    (4.18)
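A quick numeric check of Eq. (4.18), with a hypothetical value x = 0.05 chosen just to illustrate the size of the truncation error:

```python
# Eq. (4.18): inverse square root of (1 + x), truncated at second order
exact = (1 + 0.05) ** -0.5
approx = 1 - 0.05 / 2 + 3 * 0.05 ** 2 / 8
print(exact, approx)   # the remaining discrepancy is of order x^3
```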
Let's first consider the distance between electron a and nucleus B:

$\mathbf{R}_{aB} = \mathbf{R}_{aA} + \mathbf{R}_{AB} = -\mathbf{r}_a + \mathbf{R}$    (4.19)

The dot product of this vector with itself is:

$R_{aB}^2 = R^2 \left( 1 - \frac{2\, \hat{\mathbf{n}} \cdot \mathbf{r}_a}{R} + \frac{r_a^2}{R^2} \right)$    (4.20)

where $\hat{\mathbf{n}} = \mathbf{R}/R$ is the unit vector along the internuclear axis. Applying the Maclaurin series to take the inverse square root of the right-hand side (at
least to the first two leading orders…):

$-R_{aB}^{-1} = -R^{-1} \left( 1 + \frac{\hat{\mathbf{n}} \cdot \mathbf{r}_a}{R} + \frac{3 (\hat{\mathbf{n}} \cdot \mathbf{r}_a)^2 - r_a^2}{2R^2} + \ldots \right)$    (4.21)
Repeating the exercise for the distance between electron b and nucleus A:

$\mathbf{R}_{Ab} = \mathbf{R}_{AB} + \mathbf{R}_{Bb} = \mathbf{R} + \mathbf{r}_b$    (4.22)

$-R_{Ab}^{-1} = -R^{-1} \left( 1 - \frac{\hat{\mathbf{n}} \cdot \mathbf{r}_b}{R} + \frac{3 (\hat{\mathbf{n}} \cdot \mathbf{r}_b)^2 - r_b^2}{2R^2} + \ldots \right)$    (4.23)

Finally, once more for the interelectronic distance between electrons a and b:

$\mathbf{R}_{ab} = \mathbf{R}_{aA} + \mathbf{R}_{AB} + \mathbf{R}_{Bb} = -\mathbf{r}_a + \mathbf{R} + \mathbf{r}_b = \mathbf{R} + \mathbf{r}_{ab}$    (4.24)

$R_{ab}^{-1} = R^{-1} \left( 1 - \frac{\hat{\mathbf{n}} \cdot \mathbf{r}_{ab}}{R} + \frac{3 (\hat{\mathbf{n}} \cdot \mathbf{r}_{ab})^2 - r_{ab}^2}{2R^2} + \ldots \right)$    (4.25)
We can now substitute our expressions for the inverse distances into the
expression for the perturbation, Eq. (4.17), and collect terms. The leading ($R^{-1}$) term,
which physically corresponds to interaction of the net charge of each hydrogen atom
with the other, cancels immediately, as expected for neutral atoms. The terms which
decay next most slowly ($R^{-2}$) correspond physically to the coupling between the
permanent charge of one atom and an instantaneous dipole moment of the other atom.
They also cancel, as is expected physically, and can be seen from:

$R^{-2} \left[ -\hat{\mathbf{n}} \cdot \mathbf{r}_a + \hat{\mathbf{n}} \cdot \mathbf{r}_b - \hat{\mathbf{n}} \cdot \mathbf{r}_{ab} \right] = R^{-2} \left[ -\hat{\mathbf{n}} \cdot \mathbf{r}_a + \hat{\mathbf{n}} \cdot \mathbf{r}_b - \hat{\mathbf{n}} \cdot (\mathbf{r}_b - \mathbf{r}_a) \right] = 0$    (4.26)
The third most slowly decaying terms are those that go as $R^{-3}$ with internuclear distance,
and correspond physically to the interaction between an instantaneous dipole moment on
one atom and another on the other atom. The expression for this interaction is:

$R^{-3} \left[ \tfrac{1}{2} r_a^2 - \tfrac{3}{2} (\hat{\mathbf{n}} \cdot \mathbf{r}_a)^2 + \tfrac{1}{2} r_b^2 - \tfrac{3}{2} (\hat{\mathbf{n}} \cdot \mathbf{r}_b)^2 - \tfrac{1}{2} r_{ab}^2 + \tfrac{3}{2} (\hat{\mathbf{n}} \cdot \mathbf{r}_{ab})^2 \right] = R^{-3} \left[ \mathbf{r}_a \cdot \mathbf{r}_b - 3 (\hat{\mathbf{n}} \cdot \mathbf{r}_a)(\hat{\mathbf{n}} \cdot \mathbf{r}_b) \right]$    (4.27)
This instantaneous dipole-dipole interaction is not zero, and therefore becomes our
perturbation operator. We assume that the unit vector connecting the two nuclei, $\hat{\mathbf{n}}$, lies along
the z axis, giving us the working expression:

$\hat V^{(1)} = \frac{e^2}{R^3} \left[ x_a x_b + y_a y_b - 2 z_a z_b \right]$    (4.28)

In staring at the above equation, remember that the internuclear distance is a parameter
(not an operator that acts on the wavefunctions of the electrons).
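The multipole reduction can be verified numerically: evaluate the raw perturbation, Eq. (4.17), for small made-up electron displacements and a large internuclear distance, and compare with the dipole-dipole form of Eq. (4.28). Everything below is in units of $e^2$, and the coordinate values are arbitrary.

```python
import numpy as np

R = 50.0
Rvec = np.array([0.0, 0.0, R])          # internuclear axis along z, so n = z-hat
ra = np.array([0.3, -0.2, 0.4])         # electron a, measured from nucleus A
rb = np.array([-0.1, 0.5, 0.2])         # electron b, measured from nucleus B

# Raw perturbation, Eq. (4.17), in units of e^2
V_exact = (1.0 / R
           - 1.0 / np.linalg.norm(Rvec - ra)        # electron a <-> nucleus B
           - 1.0 / np.linalg.norm(Rvec + rb)        # nucleus A <-> electron b
           + 1.0 / np.linalg.norm(Rvec + rb - ra))  # electron a <-> electron b

# Leading surviving multipole term, Eq. (4.28), also in units of e^2
V_dd = (ra[0] * rb[0] + ra[1] * rb[1] - 2.0 * ra[2] * rb[2]) / R ** 3

print(V_exact, V_dd)   # agree; the residual falls off as R^-4
```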
We are now ready to apply perturbation theory. Of course the place to start is
first order, with the unperturbed wavefunction being the 1s ground state of each atom:

$\Psi^{(0)} = 1s_A \otimes 1s_B$    (4.29)
atom involves electron a, while the second involves electron b (the atoms are assumed
not to overlap). Therefore when we do integrals that involve both sets of coordinates,
those integrals will separate into one part for each atom, as we shall now below.
Substituting Eq. (4.29) and (4.28) back into Eq. (4.9) for the first order energy correction
yields:
R3
E (1) 2 = 1sA xa 1sA 1sB xb 1sB + 1sA ya 1sA 1sB yb 1sB
(4.30)
e
!2 1sA za 1sA 1sB zb 1sB
Recalling that we set the electronic origins at the protons of each atom, we see that all 6
of these expectation values (the average values of the electron coordinate on each atom)
are zero, and therefore so is the first order energy correction.
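The vanishing can be confirmed without doing any radial work: in spherical coordinates about its own nucleus, each expectation value in Eq. (4.30) factorizes into a radial integral times an angular integral, and the angular factor is zero. For $\langle 1s | z | 1s \rangle$, with $z = r\cos\theta$, the angular factor is $\int_0^\pi \cos\theta \sin\theta \, d\theta$, which a quick midpoint-rule evaluation confirms vanishes:

```python
import numpy as np

# Angular factor of <1s| z |1s>: integral of cos(theta)*sin(theta) over [0, pi].
# It vanishes by antisymmetry about theta = pi/2 (reflection in the xy plane).
n = 100000
theta = (np.arange(n) + 0.5) * np.pi / n          # midpoint grid on [0, pi]
angular = np.sum(np.cos(theta) * np.sin(theta)) * (np.pi / n)
print(angular)                                     # zero to numerical precision
```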
Accordingly, we must look at second order perturbation theory to seek the leading
order interaction between the two atoms. Our unperturbed eigenstates include all direct
products of hydrogen functions on the two atoms apart from the ground state, Eq.
(4.29):

$\Psi_k^{(0)} \equiv (nlm)_A \otimes (n'l'm')_B$    (4.31)

Substituting into the second order energy expression, Eq. (4.16), gives the following
somewhat messy expression:

$E^{(2)} = \frac{e^4}{R^6} \sum_{nlm} \sum_{n'l'm'} \frac{ \left[ \langle nlm | x_a | 100 \rangle \langle n'l'm' | x_b | 100 \rangle + \langle nlm | y_a | 100 \rangle \langle n'l'm' | y_b | 100 \rangle - 2 \langle nlm | z_a | 100 \rangle \langle n'l'm' | z_b | 100 \rangle \right]^2 }{ 2E_{n=1} - E_{nlm} - E_{n'l'm'} }$    (4.32)

(excluding the term with $n = n' = 1$, the unperturbed ground state)
A few comments can be made without struggling through all the algebra that potentially
remains to explicitly evaluate the terms of Eq. (4.32). First, some terms are nonzero.
The $n = n' = 2$; $l = l' = 1$ terms (i.e. 2p excited states) are examples. For instance the
$n = 2$; $l = 1$; $m = 0$ function is the $2p_z$ orbital. It is antisymmetric with respect to
reflection in the xy plane, just like z itself. Therefore the third term of the numerator in
Eq. (4.32) is overall symmetric with respect to reflection in the xy plane and does not
vanish. The second key point is that the second order correction to the ground state
energy is negative definite: the numerator is a perfect square, and the denominator is
always less than zero when we are correcting the ground state. Therefore all dispersion
interactions are attractive (they lower the energy).

4.4 Perturbation theory for degenerate states
So far, even if you were not explicitly aware of it, we have been doing perturbation
theory for non-degenerate states. The important distinction is that for a non-degenerate state,
if we know the energy of the unperturbed state, say $E_j^{(0)}$, then we also know the
eigenstate uniquely. On the other hand, if the unperturbed state is g-fold degenerate, then
simply knowing the unperturbed energy is not enough to specify the state – it could be
any linear combination of the g functions that span the g-fold degenerate subspace.
Therefore much of our previous analysis is invalid, and we shall briefly consider how to
redo it through first order when the unperturbed reference is degenerate. Let us denote
the unknown zero order eigenfunction as:

$\Psi_j^{(0)} = \sum_{m=1}^{g} \psi_{jm}^{(0)} c_{jm}^{(0)}$    (4.33)

The first order perturbed equation looks the same as before at first glance, apart from the
unknown zero order expansion coefficients that enter from (4.33):

$\left[ \hat V^{(1)} - E_j^{(1)} \right] \Psi_j^{(0)} + \left[ \hat H^{(0)} - E_j^{(0)} \right] \Psi_j^{(1)} = 0$    (4.34)
Previously we projected the first order perturbed equation with the unperturbed bra state.
Now, however, we do not actually know this state. All we know is that it lies in the space
of the g functions used in the expansion, (4.33). Therefore we shall project with each of
those in turn, noting that the last two terms cancel (make sure you understand why!), to
yield:

$\langle \psi_{jm}^{(0)} | \hat V^{(1)} - E_j^{(1)} | \Psi_j^{(0)} \rangle = \sum_{m'=1}^{g} \langle \psi_{jm}^{(0)} | \hat V^{(1)} - E_j^{(1)} | \psi_{jm'}^{(0)} \rangle c_{jm'}^{(0)} = 0$    (4.35)
& (4.35) m' This is a matrix eigenvalue – eigenvector problem:
$\mathbf{V}^{(1)} \mathbf{c}_j^{(0)} = E_j^{(1)} \mathbf{c}_j^{(0)}$    (4.36)
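A minimal numeric illustration of Eq. (4.36) in Python, with a made-up degenerate block of the perturbation: for a doubly degenerate level, the g × g eigenvalue problem is just a 2 × 2 diagonalization. Because this toy model has no other states to mix in, the first order result here happens to be exact.

```python
import numpy as np

# Made-up 2x2 block of the perturbation within a doubly degenerate subspace
Vblock = np.array([[0.0, 0.2],
                   [0.2, 0.0]])

# Eq. (4.36): first-order shifts and zero-order mixing coefficients
E1, c0 = np.linalg.eigh(Vblock)
print(E1)          # the degeneracy is split symmetrically, by -0.2 and +0.2

# Cross-check against exact diagonalization of the full 2-state problem
E0, lam = 2.0, 0.01
H = E0 * np.eye(2) + lam * Vblock
print(np.linalg.eigvalsh(H))   # equals E0 + lam * E1 for this small model
```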
The matrix of the perturbation is $g \times g$, and its eigenvalues give the shifts in each of the
g unperturbed degenerate levels; corresponding to each is a particular zero order
eigenvector, a linear combination of the original vectors, with coefficients
contained in the vector $\mathbf{c}_j^{(0)}$. So we are diagonalizing matrices even to get the first order
perturbed energies – what have we gained by doing perturbation theory? The answer is
that the dimension of the matrices is tiny (just the size of the unperturbed degenerate
subspace), rather than enormous, as would be the case if all basis vectors were allowed to
mix together. For example, the degenerate n=2 levels of the H atom define a
4-dimensional degenerate subspace which might be broken by a perturbation, and then all
that is required by this first order degenerate perturbation theory is to diagonalize the 4 by
4 matrix of the operator describing the perturbation.

4.5 The variational method
The variational method is an approach that first provides a measure of the relative quality
of various guessed wave functions for a problem where the exact answers are not
available. Let us say that a particular guess wave function for the ground state is the ket
$|\tilde\phi\rangle$. The energy associated with this guess is given by the expectation value equation
that should be used for any state that is not an exact eigenstate (and we are pretty sure
that our guessed ket will not be exact!):

$\tilde E = \frac{\langle \tilde\phi | \hat H | \tilde\phi \rangle}{\langle \tilde\phi | \tilde\phi \rangle}$    (4.37)
Now different $|\tilde\phi\rangle$ will lead to different approximate energies, $\tilde E$. The variational
theorem says that the best guess is the one that gives the lowest energy. This is because
the variational theorem states that all approximate energies are equal to or greater than
the true ground state energy. Accordingly, the lowest possible energy is the exact one,
and therefore the best approximation, from a number of possibilities, will be the one with
the lowest energy.
Mathematically, the variational theorem is simply that:

$\tilde E \geq E_0$    (4.38)
where the true ground state energy is $E_0$. The proof is quite easy – we use the fact that
the (unknown) true eigenstates, $\{ | \psi_j \rangle \}$, form a basis, and therefore any guess ket can be
expanded in that basis:

$|\tilde\phi\rangle = \sum_j |\psi_j\rangle c_j$    (4.39)

The trial ket $|\tilde\phi\rangle$ can always be normalized, and therefore:

$\langle \tilde\phi | \tilde\phi \rangle = \sum_j |c_j|^2 = 1$    (4.40)

Setting the denominator of Eq. (4.37) to 1, because of normalization, and inserting the
expansion (4.39) into the bra and ket of the numerator yields the following expression for
the energy corresponding to the guessed ket:

$\tilde E = \langle \tilde\phi | \hat H | \tilde\phi \rangle = \sum_{jk} c_j^* c_k \langle \psi_j | \hat H | \psi_k \rangle = \sum_j |c_j|^2 E_j$    (4.41)

All exact eigenvalues are, by definition, greater than or equal to the exact ground state
energy, so we can rewrite:

$\tilde E = \sum_j |c_j|^2 E_j \geq \left( \sum_j |c_j|^2 \right) E_0 = E_0$    (4.42)

which finally establishes that $\tilde E \geq E_0$.
As was mentioned at the outset, this result tells us that the better of two guesses
for the wavefunction corresponding to an unknown ground state is the one which yields
the lower energy. This also suggests that an even better guessing procedure is to choose
trial functions that include adjustable parameters. For instance, one might guess that the
ground state wavefunction of the H atom is described by a Gaussian function,
$N \exp(-\alpha r^2)$. The best version of the trial function will then be, by the variational
theorem, the one which yields the lowest energy. In other words, the parameter (in this
case, $\alpha$) should be varied to minimize the energy:

$\frac{\partial \tilde E}{\partial \alpha} = 0$    (4.43)

If there are multiple parameters, then they should be varied so that all partial derivatives
are zero at the optimal parameter values.
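For the Gaussian trial function just mentioned, the energy expectation value can be evaluated analytically in atomic units: the kinetic term gives $3\alpha/2$ and the electron-nucleus attraction gives $-2\sqrt{2\alpha/\pi}$ (standard Gaussian integrals). A sketch of the minimization called for by Eq. (4.43):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Variational energy of the trial function N*exp(-alpha r^2) for the H atom,
# in atomic units: <T> = 3*alpha/2 and <-1/r> = -2*sqrt(2*alpha/pi).
def E_trial(alpha):
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

res = minimize_scalar(E_trial, bounds=(0.01, 5.0), method="bounded")
print(res.x, res.fun)   # alpha = 8/(9*pi) ~ 0.283, E = -4/(3*pi) ~ -0.4244
```

The variational bound holds: the optimal energy of about -0.4244 hartree lies above the exact ground state energy of -0.5 hartree, and the gap reflects how poorly a single Gaussian mimics the cusp of the true 1s exponential.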
Finally, we note that while we've discussed the variational method as an
approximation to the ground state, it is also possible to extend the results to approximate
higher eigenstates. The trial energy, $\tilde E$, is stationary to first order changes in the trial ket
near any of the exact eigenvalues. However, the easiest way to use this result is in the
context of the special case where the variational parameters are all linear, which we
discuss next for both the ground and excited states.
The linear variational method takes a series of guess functions, $\{ \chi_\mu \}$, and
employs the variational principle to mix them together with linear variational parameters,
$c_\mu$:

$|\tilde\phi\rangle = \sum_\mu |\chi_\mu\rangle c_\mu$    (4.44)

This expression looks very much as though the guess functions are being used as a basis,
but in contrast to a real basis expansion, which is usually infinite in length and
guaranteed to be able to exactly represent the desired eigenstate, the set of guess
functions is finite in number and generally will not be able to exactly represent the
desired eigenstate. The best we can do is to apply the variational principle to minimize
the energy, substituting the linear expansion, Eq. (4.44), into the energy expectation value
equation, Eq. (4.37):

$\tilde E = \frac{\sum_{\mu\nu} c_\mu^* H_{\mu\nu} c_\nu}{\sum_{\mu\nu} c_\mu^* S_{\mu\nu} c_\nu}$    (4.45)

where the matrix elements of the trial functions are:

$H_{\mu\nu} = \langle \chi_\mu | \hat H | \chi_\nu \rangle \qquad S_{\mu\nu} = \langle \chi_\mu | \chi_\nu \rangle$    (4.46)

The task now is to vary the parameters to minimize the trial energy. Considering
variations in both the parameters and their complex conjugates separately, we have:
$\delta \tilde E = \frac{\sum_{\mu\nu} \left( \delta c_\mu^* H_{\mu\nu} c_\nu + c_\mu^* H_{\mu\nu} \delta c_\nu \right)}{\sum_{\mu\nu} c_\mu^* S_{\mu\nu} c_\nu} - \frac{\sum_{\mu\nu} c_\mu^* H_{\mu\nu} c_\nu}{\left( \sum_{\mu\nu} c_\mu^* S_{\mu\nu} c_\nu \right)^2} \sum_{\mu\nu} \left( \delta c_\mu^* S_{\mu\nu} c_\nu + c_\mu^* S_{\mu\nu} \delta c_\nu \right) = 0$    (4.47)

Multiplying through by the original denominator allows us to simplify to:

$\sum_{\mu\nu} \left( \delta c_\mu^* H_{\mu\nu} c_\nu + c_\mu^* H_{\mu\nu} \delta c_\nu \right) - \tilde E \sum_{\mu\nu} \left( \delta c_\mu^* S_{\mu\nu} c_\nu + c_\mu^* S_{\mu\nu} \delta c_\nu \right) = 0$    (4.48)
The variations are just complex conjugates of each other, so we can select one of them
and conclude that the optimal variational coefficients satisfy:

$\sum_\nu H_{\mu\nu} c_\nu = \tilde E \sum_\nu S_{\mu\nu} c_\nu$    (4.49)

This is a "generalized eigenvalue problem", which may be rewritten in matrix form as
$\mathbf{H} \mathbf{c} = \tilde E \mathbf{S} \mathbf{c}$, and clearly reduces to a conventional eigenvalue problem if the overlap matrix is the
unit matrix.
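In practice Eq. (4.49) is handed straight to a library routine. A minimal sketch with scipy; the 2 × 2 H and S matrices are made up, as if computed from two non-orthogonal trial functions:

```python
import numpy as np
from scipy.linalg import eigh

# Made-up matrix elements for two non-orthogonal trial functions
H = np.array([[-1.0, -0.5],
              [-0.5, -0.6]])
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])

E, C = eigh(H, S)    # solves the generalized problem H c = E S c
print(E[0])          # lowest root: the variational ground-state estimate
print(C[:, 0])       # corresponding optimal linear coefficients
```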
The key results from Eq. (4.49) are as follows. First, the approximate ground
state energy is the lowest eigenvalue of the Hamiltonian in the trial function basis. Second,
the optimum parameters are the corresponding eigenvector. Third, higher eigenvalues are
variational approximations to excited states, and their eigenvectors are orthogonal to each
other, just like exact solutions.
The presence of the overlap matrix on the right-hand side of Eq. (4.49) is the
difference between a generalized eigenvalue problem and a conventional one. It serves to
correct for the fact that the trial functions are not necessarily orthonormal. On the other
hand, we could make them orthonormal, and then the problem in the transformed
representation will look like a conventional eigenvalue problem. This is quite
straightforward to do, if the inverse square root of the overlap matrix is first evaluated.
You can do this by first diagonalizing the overlap matrix:

$\mathbf{s} = \mathbf{U}^\dagger \mathbf{S} \mathbf{U}$    (4.50)

Then functions of a matrix are evaluated as functions of the eigenvalues, transformed by
the eigenvectors:

$\mathbf{S} = \mathbf{U} \mathbf{s} \mathbf{U}^\dagger, \qquad \mathbf{S}^{-1/2} = \mathbf{U} \mathbf{s}^{-1/2} \mathbf{U}^\dagger$    (4.51)
With this transformation available, let us use it to transform Eq. (4.49) into an
orthonormal basis:

$\mathbf{H} \left( \mathbf{S}^{-1/2} \mathbf{S}^{1/2} \right) \mathbf{c} = \tilde E\, \mathbf{S}^{1/2} \mathbf{S}^{1/2} \mathbf{c} \;\Rightarrow\; \left( \mathbf{S}^{-1/2} \mathbf{H} \mathbf{S}^{-1/2} \right) \left( \mathbf{S}^{1/2} \mathbf{c} \right) = \tilde E \left( \mathbf{S}^{1/2} \mathbf{c} \right) \;\Rightarrow\; \bar{\mathbf{H}} \mathbf{u} = \tilde E \mathbf{u}$    (4.52)

Thus the transformation $\mathbf{S}^{-1/2}$ orthonormalizes the basis, and $\mathbf{u}$ is the eigenvector in the
orthonormalized basis, connected to the original eigenvectors $\mathbf{c}$ by:

$\mathbf{c} = \mathbf{S}^{-1/2} \mathbf{u}$    (4.53)
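Eqs. (4.50)-(4.53) translate directly into a few lines of numpy. A sketch (again with made-up 2 × 2 H and S matrices, repeated so the snippet is self-contained), verifying that symmetric orthogonalization reproduces the generalized-eigenvalue results:

```python
import numpy as np
from scipy.linalg import eigh

H = np.array([[-1.0, -0.5],
              [-0.5, -0.6]])
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])

# Eqs. (4.50)-(4.51): build S^(-1/2) from the eigendecomposition of S
s, U = np.linalg.eigh(S)
S_inv_half = U @ np.diag(s ** -0.5) @ U.T

# Eq. (4.52): reduce the generalized problem to a conventional one
Hbar = S_inv_half @ H @ S_inv_half
E, u = np.linalg.eigh(Hbar)
c = S_inv_half @ u             # Eq. (4.53): coefficients in the original basis

print(E)                        # matches the eigenvalues of eigh(H, S)
```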
This note was uploaded on 02/22/2010 for the course CHEM N/A taught by Professor Headgordon during the Spring '09 term at University of California, Berkeley.