The Metropolis method is simply the power method for computing the right eigenvector of $M$ with the largest magnitude eigenvalue. By construction, the correct probability distribution is a right eigenvector with eigenvalue 1. Therefore, for the Metropolis method to converge to this result, one has to show that $M$ has only one eigenvalue with this magnitude, and that all other eigenvalues are smaller in magnitude.
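The connection to the power method can be illustrated numerically. The sketch below (our own toy example, not from the text) applies a small column-stochastic transition matrix $M$ repeatedly to an arbitrary starting distribution; the iteration converges to the eigenvector with eigenvalue 1, i.e. the stationary distribution.

```python
# Illustration: power iteration on a small column-stochastic matrix M.
# Because each column sums to one, the largest-magnitude eigenvalue is 1,
# and repeated application of M drives any starting distribution towards
# the corresponding right eigenvector (the stationary distribution).

def apply(M, w):
    """Return the matrix-vector product M w, with M given as a list of rows."""
    return [sum(M[i][j] * w[j] for j in range(len(w))) for i in range(len(M))]

# A 3-state transition matrix; entry M[i][j] is the probability of moving
# from state j to state i, so each column sums to 1.
M = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.3],
     [0.2, 0.2, 0.4]]

w = [1.0, 0.0, 0.0]      # arbitrary initial distribution
for _ in range(200):     # power iteration: w <- M w
    w = apply(M, w)

# w is now (numerically) a right eigenvector with eigenvalue 1,
# and its entries still sum to one since M conserves probability.
residual = max(abs(a - b) for a, b in zip(apply(M, w), w))
print(residual, sum(w))
```

Note that the normalization $\sum_i w_i = 1$ is preserved automatically at every step, precisely because the columns of $M$ sum to one.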
12.6 Langevin and Fokker-Planck Equations
We end this chapter with a discussion and derivation of the Fokker-Planck and Langevin equations. These equations will in turn be used in our discussion of advanced Monte Carlo methods for quantum mechanical systems, see for example chapter 16.
12.6.1 Fokker-Planck Equation
For many physical systems initial distributions of a stochastic variable $y$ tend to an equilibrium distribution $w_{\mathrm{equilibrium}}(y)$, that is $w(y,t) \rightarrow w_{\mathrm{equilibrium}}(y)$ as $t \rightarrow \infty$. In equilibrium, detailed balance constrains the transition rates,
$$
W(y \rightarrow y')\, w_{\mathrm{equilibrium}}(y) = W(y' \rightarrow y)\, w_{\mathrm{equilibrium}}(y'),
$$
where $W(y' \rightarrow y)$ is the probability per unit time that the system changes from a state $|y'\rangle$, characterized by the value $y'$ for the stochastic variable $Y$, to a state $|y\rangle$. Note that for a system in equilibrium the transition rate $W(y' \rightarrow y)$ and the reverse $W(y \rightarrow y')$ may be very different.
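This last point is worth checking concretely. In the hypothetical two-state sketch below (names and numbers are ours, not the text's), the equilibrium weights are unequal and the transition rates are taken to be of Metropolis form, $W(y \rightarrow y') = \min(1, w_{\mathrm{eq}}(y')/w_{\mathrm{eq}}(y))$: the forward and reverse rates differ by a factor of four, yet the probability fluxes balance exactly.

```python
# Hypothetical two-state example: equilibrium weights w_eq and
# Metropolis-style transition rates W(y -> y') = min(1, w_eq(y')/w_eq(y)).
# Detailed balance, W(y -> y') w_eq(y) = W(y' -> y) w_eq(y'), holds even
# though the two rates themselves are very different.

w_eq = {"a": 0.8, "b": 0.2}   # unequal equilibrium probabilities

def W(y, yp):
    """Metropolis transition rate from state y to state yp."""
    return min(1.0, w_eq[yp] / w_eq[y])

flux_ab = W("a", "b") * w_eq["a"]   # probability flux a -> b
flux_ba = W("b", "a") * w_eq["b"]   # probability flux b -> a

print(W("a", "b"), W("b", "a"))     # the two rates differ (0.25 vs 1.0)
print(abs(flux_ab - flux_ba) < 1e-15)  # but the fluxes balance
```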
Let us now assume that we have three probability distribution functions for times $t_0 < t' < t$, that is $w(x_0,t_0)$, $w(x',t')$ and $w(x,t)$. We have then
[Fig. 12.7: Flow chart for the Metropolis algorithm. Initialize: establish an initial state, for example a position $x^{(i)}$. Suggest a move $y_t$ and compute the acceptance ratio $A(x^{(i)} \rightarrow y_t)$. Generate a uniformly distributed variable $r$. If $A(x^{(i)} \rightarrow y_t) \geq r$, accept the move, $x^{(i+1)} = y_t$; otherwise reject it, $x^{(i+1)} = x^{(i)}$. After the last move, get local expectation values; after the last Monte Carlo step, collect samples and end.]
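The steps of Fig. 12.7 can be sketched in a few lines of code. The example below is our own minimal illustration (the toy target density and step size are assumptions, not from the text): it samples an unnormalized one-dimensional density $p(x) \propto \exp(-x^2/2)$ with uniform trial moves, following the flow chart step by step.

```python
# Minimal sketch of the Metropolis loop of Fig. 12.7, applied to a toy
# 1D target density p(x) proportional to exp(-x^2/2) (a standard Gaussian).
import math
import random

random.seed(42)

def p(x):
    """Unnormalized target density (standard Gaussian here)."""
    return math.exp(-0.5 * x * x)

x = 0.0                      # initialize: establish an initial state x^(i)
samples = []
for step in range(20000):    # loop over Monte Carlo steps
    y = x + random.uniform(-1.0, 1.0)   # suggest a move y_t
    A = min(1.0, p(y) / p(x))           # compute acceptance ratio A(x^(i) -> y_t)
    r = random.random()                 # generate a uniformly distributed variable r
    if A >= r:
        x = y                # accept move: x^(i+1) = y_t
    # else reject move: x^(i+1) = x^(i), i.e. keep x unchanged
    samples.append(x)        # get local expectation values

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
print(mean, var)             # should be close to 0 and 1 for this target
```

Note that on rejection the current value is counted again; this repetition of rejected configurations is essential for the samples to follow the target distribution.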
$$
w(x,t) = \int_{-\infty}^{\infty} W(x,t|x',t')\, w(x',t')\, dx',
$$
and
$$
w(x,t) = \int_{-\infty}^{\infty} W(x,t|x_0,t_0)\, w(x_0,t_0)\, dx_0,
$$
and
$$
w(x',t') = \int_{-\infty}^{\infty} W(x',t'|x_0,t_0)\, w(x_0,t_0)\, dx_0.
$$
We can combine these equations and arrive at the famous Einstein-Smoluchowski-Kolmogorov-Chapman (ESKC) relation
$$
W(x,t|x_0,t_0) = \int_{-\infty}^{\infty} W(x,t|x',t')\, W(x',t'|x_0,t_0)\, dx'.
$$
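The ESKC relation can be verified numerically for a concrete kernel. In the sketch below (our own example; the free-diffusion kernel and the value of the diffusion constant $D$ are assumptions, not from the text), $W(x,t|x',t')$ is a Gaussian of variance $2D(t-t')$, and the relation reduces to the fact that convolving two Gaussians adds their variances.

```python
# Numerical check of the ESKC relation for free diffusion, where the
# transition kernel W(x,t|x',t') is a Gaussian of variance 2 D (t - t').
import math

D = 0.5   # diffusion constant (assumed value)

def W(x, t, x0, t0):
    """Free-diffusion transition kernel from (x0, t0) to (x, t)."""
    var = 2.0 * D * (t - t0)
    return math.exp(-(x - x0) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

x, x0 = 0.7, 0.0
t0, tp, t = 0.0, 0.4, 1.0   # t0 < t' < t

# Left-hand side: direct propagation from (x0, t0) to (x, t).
lhs = W(x, t, x0, t0)

# Right-hand side: integrate over the intermediate point x' at time t'
# with a simple trapezoidal rule on a wide grid.
n, a, b = 4001, -10.0, 10.0
h = (b - a) / (n - 1)
rhs = 0.0
for i in range(n):
    xp = a + i * h
    weight = 0.5 if i in (0, n - 1) else 1.0
    rhs += weight * W(x, t, xp, tp) * W(xp, tp, x0, t0)
rhs *= h

print(abs(lhs - rhs))   # the two sides agree to high accuracy
```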
We can replace the spatial dependence with a dependence upon, say, the velocity (or momentum), that is we have
$$
W(v,t|v_0,t_0) = \int_{-\infty}^{\infty} W(v,t|v',t')\, W(v',t'|v_0,t_0)\, dv'.
$$
We will now derive the Fokker-Planck equation. We start from the ESKC equation
$$
W(x,t|x_0,t_0) = \int_{-\infty}^{\infty} W(x,t|x',t')\, W(x',t'|x_0,t_0)\, dx'.
$$
We define $s = t' - t_0$, $\tau = t - t'$ and $t - t_0 = s + \tau$. We have then
$$
W(x, s+\tau|x_0) = \int_{-\infty}^{\infty} W(x, \tau|x')\, W(x', s|x_0)\, dx'.
$$