Prof. Jeff Bilmes, EE596A/Winter 2013/DGMs – Lecture 4, Jan 23rd, 2013

HMM parameter names, homogeneous case

Recall the parameter names in the time-homogeneous case.
1. P(Q_t = j | Q_{t-1} = i) = a_ij, or [A]_ij, is a first-order
   time-homogeneous transition matrix.
2. P(Q_1 = i) = π_i is the initial state distribution.
3. P(X_t = x | Q_t = i) = b_i(x) is the observation distribution for the
   current state being in configuration i.
Notice that there are a ﬁxed number of parameters regardless of the
length T . In other words, parameters are shared across all time. This is
a property of all dynamic graphical models.
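To make the parameter sharing concrete, here is a minimal NumPy sketch (not from the lecture; the names A, pi, B, and sample_hmm are illustrative) of a small discrete-observation HMM. The same three parameter arrays are reused at every time step, so the parameter count is independent of the sequence length T.

```python
import numpy as np

# Hypothetical small HMM: N hidden states, M discrete observation symbols.
N, M = 3, 4
rng = np.random.default_rng(0)

A = np.full((N, N), 1.0 / N)   # a_ij = P(Q_t = j | Q_{t-1} = i)
pi = np.full(N, 1.0 / N)       # pi_i = P(Q_1 = i)
B = np.full((N, M), 1.0 / M)   # b_i(x) = P(X_t = x | Q_t = i)

def sample_hmm(T):
    """Sample a length-T state/observation sequence. Note that the same
    parameter arrays A, pi, B are reused at every t (shared across time)."""
    q = rng.choice(N, p=pi)
    states, obs = [], []
    for _ in range(T):
        states.append(q)
        obs.append(rng.choice(M, p=B[q]))
        q = rng.choice(N, p=A[q])
    return states, obs

# The parameter count N*N + N + N*M is fixed, whatever T we sample.
states, obs = sample_hmm(T=50)
```

Whether T = 50 or T = 50,000, the model above still has exactly N² + N + N·M free entries, which is the point of time-homogeneity.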
What probabilistic queries would we need to learn these parameters?

HMM – learning with EM

To decide which queries to compute, we should know which ones we
want. If learning HMM parameters with EM, what queries do we need?
X_{1:T} = x̄_{1:T} are observed; Q_{1:T} are hidden variables.

For convenience, define λ as all parameters to be learnt, and λ^p as
the previous iteration's parameters.
HMM – learning with EM

The EM algorithm then repeatedly optimizes the following objective:

    f(λ) = Q(λ, λ^p) = E_{p(q_{1:T} | x̄_{1:T}, λ^p)} [ log p(x̄_{1:T}, q_{1:T} | λ) ]        (4.12)
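Computing the expectation in (4.12) requires the posterior queries p(Q_t = i | x̄_{1:T}, λ^p) and p(Q_t = i, Q_{t+1} = j | x̄_{1:T}, λ^p), which the forward-backward algorithm supplies. Below is a minimal NumPy sketch of that E-step (not the lecture's code; the names e_step, gamma, and xi are illustrative), using the standard per-step scaling to avoid underflow.

```python
import numpy as np

def e_step(x, A, pi, B):
    """Scaled forward-backward pass.
    Returns gamma[t, i] = p(Q_t = i | x_{1:T}, lambda^p) and
    xi[t, i, j] = p(Q_t = i, Q_{t+1} = j | x_{1:T}, lambda^p) --
    the two posterior queries the EM updates need."""
    T, N = len(x), len(pi)
    alpha = np.zeros((T, N))   # scaled forward messages
    beta = np.zeros((T, N))    # scaled backward messages
    c = np.zeros(T)            # scaling factors, c[t] = p(x_t | x_{1:t-1})

    alpha[0] = pi * B[:, x[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, x[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, x[t + 1]] * beta[t + 1])) / c[t + 1]

    gamma = alpha * beta  # already normalized per t under this scaling
    # xi[t,i,j] = alpha[t,i] * A[i,j] * B[j, x[t+1]] * beta[t+1,j] / c[t+1]
    xi = (alpha[:-1, :, None] * A[None]
          * (B[:, x[1:]].T * beta[1:])[:, None, :]) / c[1:, None, None]
    return gamma, xi

# Tiny usage example with arbitrary (uniform-ish) parameters.
rng = np.random.default_rng(1)
N, M, T = 3, 4, 20
A = rng.random((N, N)); A /= A.sum(axis=1, keepdims=True)
B = rng.random((N, M)); B /= B.sum(axis=1, keepdims=True)
pi = np.full(N, 1.0 / N)
x = rng.integers(0, M, size=T)
gamma, xi = e_step(x, A, pi, B)
```

Given gamma and xi, the M-step re-estimates π_i from gamma[0], a_ij from sums of xi over t, and b_i(x) from gamma restricted to time steps where x was observed, which is exactly why these two queries are the ones EM needs.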