Proposal Q:
‣ pick a block i uniformly (or round robin, or
any other schedule)
‣ sample XB(i) ~ P(XB(i) | X¬B(i))

Gibbs example
[figure omitted]

Gibbs example
[figure omitted]

Why is Gibbs useful?
For Gibbs, the Metropolis-Hastings acceptance probability is

  p = [P(x'i, x¬i) P(xi | x¬i)] / [P(xi, x¬i) P(x'i | x¬i)]

Gibbs derivation
  p = [P(x'i, x¬i) P(xi | x¬i)] / [P(xi, x¬i) P(x'i | x¬i)]
    = [P(x'i, x¬i) P(xi, x¬i)/P(x¬i)] / [P(xi, x¬i) P(x'i, x¬i)/P(x¬i)]
    = 1

Gibbs in practice
Since p = 1, every proposal is accepted, so
Gibbs is often easy to implement
Often works well
‣ if we choose good blocks (but there may be
no good blocking!)
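To make the resample-one-block step concrete, here is a minimal single-site Gibbs sketch for a hypothetical 2-D standard Gaussian target with correlation rho, where each conditional happens to be available in closed form (the function and variable names are illustrative, not from the slides):

```python
import math
import random

def gibbs_2d_gaussian(rho, n_steps, seed=0):
    """Single-site Gibbs for a 2-D standard Gaussian with correlation rho.

    Each conditional P(xi | x_not_i) is itself Gaussian:
      X1 | X2 = x2  ~  N(rho * x2, 1 - rho**2)
    so the 'sample the block from its conditional' step is exact,
    and by the derivation above every move is accepted.
    """
    rng = random.Random(seed)
    x1, x2 = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)
    samples = []
    for _ in range(n_steps):
        x1 = rng.gauss(rho * x2, sd)   # resample block 1 from P(x1 | x2)
        x2 = rng.gauss(rho * x1, sd)   # resample block 2 from P(x2 | x1)
        samples.append((x1, x2))
    return samples
```

With a large rho the chain still converges, but moves in small steps along the diagonal, which is exactly the kind of behavior the failure example below illustrates.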
Fancier version: adaptive blocks, based on
current x

Gibbs failure example
[figure omitted]

Sequential sampling
In an HMM or DBN, to sample P(XT), start from
X1 and sample forward step by step
‣ Xt+1 ~ P(Xt+1 | Xt)
P(X1:T) = P(X1) P(X2 | X1) P(X3 | X2) …

Particle filter
Can sample Xt+1 ~ P(Xt+1 | Xt) using any
algorithm from above
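As a sketch of this forward step, assuming a toy 2-state Markov chain (the chain and its transition matrix are invented for illustration):

```python
import random

# Hypothetical 2-state Markov chain (states 0 and 1), used only to
# illustrate forward sampling: X1 ~ P(X1), then Xt+1 ~ P(Xt+1 | Xt).
P_INIT = [0.5, 0.5]
P_TRANS = [[0.9, 0.1],   # P(Xt+1 | Xt = 0)
           [0.2, 0.8]]   # P(Xt+1 | Xt = 1)

def sample_trajectory(T, rng):
    """Draw one sample of X1:T by sampling forward step by step."""
    x = rng.choices([0, 1], weights=P_INIT)[0]
    traj = [x]
    for _ in range(T - 1):
        x = rng.choices([0, 1], weights=P_TRANS[x])[0]
        traj.append(x)
    return traj
```

The final element of the trajectory is then an exact sample of XT, since the product of the sampled conditionals is exactly the factorization P(X1:T) above.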
If we use parallel importance sampling to get
N samples at once from each P(Xt), we get a
particle ﬁlter
‣ also need one more trick: resampling
Write xt,i (i = 1…N) for sample at time t

Particle filter
Want one sample from each of P(Xt+1 | xt,i)
Have only Z P(Xt+1 | xt,i)
For each i, pick xt+1,i from proposal Q(x)
Compute unnormalized importance weight
  ŵi = Z P(xt+1,i | xt,i) / Q(xt+1,i)

Particle filter
Normalize weights:
  w̄ = (1/N) Σi ŵi
  wi = ŵi / w̄

Now, (wi, xt+1,i) is an approximate weighted
sample from P(Xt+1)
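The normalization step above can be sketched in a few lines (the function name is mine); note that with this convention the normalized weights average to 1, so they sum to N:

```python
def normalize_weights(w_hat):
    """Normalize unnormalized importance weights:
      w_bar = (1/N) * sum_i w_hat_i
      w_i   = w_hat_i / w_bar
    The unknown constant Z cancels, since it scales every w_hat_i equally.
    """
    n = len(w_hat)
    w_bar = sum(w_hat) / n
    return [w / w_bar for w in w_hat]
```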
What will happen if we do this for T = 1, 2, …?

Resampling
To get an unweighted sample, resample
Sample N times (with replacement) from xt+1,i
with probabilities wi/N
‣ alternatively: deterministically take floor(wi)
copies of xt+1,i and sample only from the
fractional part [wi – floor(wi)]
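Both resampling schemes above can be sketched as follows, assuming the weights have already been normalized to sum to N (function names are illustrative):

```python
import math
import random

def resample(particles, weights, rng):
    """Multinomial resampling: draw N particles with replacement,
    each picked with probability proportional to its weight (wi / N
    when the weights sum to N)."""
    n = len(particles)
    return rng.choices(particles, weights=weights, k=n)

def resample_residual(particles, weights, rng):
    """Deterministic variant from the slide: keep floor(wi) copies of
    particle i, then fill the remaining slots by sampling with
    probabilities proportional to the fractional parts wi - floor(wi)."""
    n = len(particles)
    out = []
    residual = []
    for p, w in zip(particles, weights):
        k = math.floor(w)
        out.extend([p] * k)
        residual.append(w - k)
    while len(out) < n:
        out.append(rng.choices(particles, weights=residual)[0])
    return out
```

The residual variant reduces the variance introduced by resampling: the integer parts of the weights are honored exactly, and only the fractional remainders are randomized.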
Each xt+1,i appears wi times on average, so
we're still a sample from P(Xt+1)

Particle filter example
[figure omitted]

Learning
Basic learning problem: given some
experience, ﬁnd a new or improved model
Experience: a sample x1, …, xN
Model: want to predict xN+1, …

Example
Experience = range sensor readings & odometry
from ro...
Fall '09, Bryant