Prof. Jeff Bilmes, EE596A/Winter 2013/DGMs – Lecture 4, Jan 23rd, 2013

HMM - parameter names, homogeneous case

Recall the parameter names in the time-homogeneous case:

1. P(Q_t = j | Q_{t-1} = i) = a_{ij}, or [A]_{ij}, is the first-order time-homogeneous transition matrix.
2. P(Q_1 = i) = π_i is the initial state distribution.
3. P(X_t = x | Q_t = i) = b_i(x) is the observation distribution when the current state is in configuration i.

Notice that there is a fixed number of parameters regardless of the length T. In other words, the parameters are shared across all time. This is a property of all dynamic graphical models.

What probabilistic queries would we need to learn these parameters?

HMM - learning with EM

To decide which queries to compute, we should know which ones we want. If we learn the HMM parameters with EM, what queries do we need?

X_{1:T} = x̄_{1:T} is observed, and Q_{1:T} are the hidden variables. For convenience, define λ as all the parameters to be learnt, and λ^p as the previous iteration's parameters.

The EM algorithm then repeatedly optimizes the following objective:

    f(λ) = Q(λ, λ^p) = E_{p(q_{1:T} | x̄_{1:T}, λ^p)} [ log p(x̄_{1:T}, q_{1:T} | λ) ]        (4.12)
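To see which queries (4.12) actually requires, it helps to expand the complete-data log likelihood inside the expectation. The following is a worked expansion (not from the slides, but using only the parameter names defined above), starting from

    log p(x̄_{1:T}, q_{1:T} | λ) = log π_{q_1} + Σ_{t=2..T} log a_{q_{t-1} q_t} + Σ_{t=1..T} log b_{q_t}(x̄_t),

so that taking the expectation under p(q_{1:T} | x̄_{1:T}, λ^p) gives

    Q(λ, λ^p) = Σ_i p(Q_1 = i | x̄_{1:T}, λ^p) log π_i
              + Σ_{t=2..T} Σ_{i,j} p(Q_{t-1} = i, Q_t = j | x̄_{1:T}, λ^p) log a_{ij}
              + Σ_{t=1..T} Σ_i p(Q_t = i | x̄_{1:T}, λ^p) log b_i(x̄_t).

This suggests that the queries needed per EM iteration are the smoothed singleton posteriors p(Q_t = i | x̄_{1:T}, λ^p) and the pairwise posteriors p(Q_{t-1} = i, Q_t = j | x̄_{1:T}, λ^p), i.e., exactly the kinds of quantities produced by standard HMM inference recursions.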
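As a complement, here is a minimal numerical sketch (not from the lecture; the function name, array names, and shapes are assumptions, with discrete observations for simplicity) of the complete-data log likelihood log p(x̄_{1:T}, q_{1:T} | λ) whose expectation appears in (4.12). It also makes the parameter-sharing point concrete: π, A, and B have fixed sizes, and only the loop over t grows with T.

```python
import numpy as np

# Sketch (assumed, not from the lecture): complete-data log likelihood
# log p(x_{1:T}, q_{1:T} | lambda) for a discrete-observation HMM, using the
# lecture's parameter names: pi (initial distribution), A (transition matrix
# with A[i, j] = a_ij), and B with B[i, k] = b_i(k).

def hmm_log_joint(x, q, pi, A, B):
    """Return log p(x_{1:T}, q_{1:T} | lambda).

    x  : (T,) int array of observation symbols
    q  : (T,) int array of hidden-state indices
    pi : (N,) initial state distribution, pi[i] = P(Q_1 = i)
    A  : (N, N) transition matrix, A[i, j] = P(Q_t = j | Q_{t-1} = i)
    B  : (N, M) observation matrix, B[i, k] = P(X_t = k | Q_t = i) = b_i(k)
    """
    logp = np.log(pi[q[0]]) + np.log(B[q[0], x[0]])
    for t in range(1, len(x)):
        logp += np.log(A[q[t - 1], q[t]]) + np.log(B[q[t], x[t]])
    return logp

# The parameter count (pi, A, B) is fixed no matter how long the sequence is;
# only the loop over t depends on T -- the parameter sharing noted above.

# Tiny usage example with made-up numbers:
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])
x = np.array([0, 1, 1])   # observed symbols x_1..x_3
q = np.array([0, 1, 1])   # one hidden state path q_1..q_3
print(hmm_log_joint(x, q, pi, A, B))
```

Averaging this quantity over hidden state paths q_{1:T}, weighted by their posterior probability under λ^p, is precisely the expectation in (4.12).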