Prof. Jeff Bilmes, EE596A/Winter 2013/DGMs – Lecture 5, Jan 25th, 2013

HMMs stationarity depends on MC
Therefore, the HMM's stationarity condition is entirely determined by the stationarity condition of the underlying hidden Markov chain.

Consider the ways in which HMMs are often used:
Long chains
Chains with cyclic transition matrices
Chains with upper-triangular transition matrices
Chains with strictly left-to-right transitions. Ex: speech recognition

Hence, in only rare cases, when HMMs are used, are they stationary stochastic processes.
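The dependence on the underlying chain can be checked numerically. A minimal sketch (both transition matrices are made up for illustration): a left-to-right chain's state marginals drift toward the absorbing final state, so the process is not stationary, whereas an ergodic chain started at its stationary distribution keeps the marginals fixed for all time.

```python
import numpy as np

def marginals(pi0, A, T):
    """State marginals p(Q_t) for t = 0..T-1 under row-stochastic A."""
    out = [pi0]
    for _ in range(T - 1):
        out.append(out[-1] @ A)
    return np.array(out)

# Left-to-right (upper-triangular) chain, as in speech recognition.
A_ltr = np.array([[0.7, 0.3, 0.0],
                  [0.0, 0.8, 0.2],
                  [0.0, 0.0, 1.0]])
m = marginals(np.array([1.0, 0.0, 0.0]), A_ltr, 50)
# The marginals drift toward the absorbing final state: not stationary.
assert not np.allclose(m[0], m[-1])

# Ergodic chain: its stationary distribution pi satisfies pi = pi @ A.
A_erg = np.array([[0.5, 0.5, 0.0],
                  [0.2, 0.5, 0.3],
                  [0.3, 0.3, 0.4]])
evals, evecs = np.linalg.eig(A_erg.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
# Started at pi, the marginals never change, so the chain
# (and hence the HMM built on it) is a stationary process.
m2 = marginals(pi, A_erg, 50)
assert np.allclose(m2[0], m2[-1])
```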
Gaussian Mixture HMM
One of the most widely used HMMs in practice is one where the observation distributions are Gaussian mixtures, where

    p(x | q) = \sum_c p(x | q, c) p(c | q)                  (5.39)
             = \sum_m N(x | \mu_{qm}, \Sigma_{qm}) c_{mq}   (5.40)

and where

    N(x | \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^\top \Sigma^{-1} (x - \mu) \right)   (5.41)
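Equations (5.39)–(5.41) can be sketched numerically as follows; the mixture weights, means, and covariances for the single hidden state shown here are made up for illustration.

```python
import numpy as np

def gauss_pdf(x, mu, Sigma):
    """Multivariate normal density N(x | mu, Sigma), as in Eq. (5.41)."""
    d = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)          # (x-mu)^T Sigma^{-1} (x-mu)
    norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm

def gmm_emission(x, weights, means, covs):
    """p(x | q) = sum_m N(x | mu_qm, Sigma_qm) c_mq, Eqs. (5.39)-(5.40)."""
    return sum(w * gauss_pdf(x, mu, S)
               for w, mu, S in zip(weights, means, covs))

# Made-up 2-component mixture for one hidden state q.
weights = np.array([0.6, 0.4])                 # c_1q, c_2q (sum to 1)
means = [np.zeros(2), np.array([3.0, 3.0])]
covs = [np.eye(2), 2.0 * np.eye(2)]

p = gmm_emission(np.array([0.0, 0.0]), weights, means, covs)
assert p > 0.0
```

In practice one evaluates this in the log domain for numerical stability, but the direct form above matches the equations as written.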
The HMM BN becomes:

[Figure: dynamic Bayesian network over hidden states Q_{t-1}, Q_t, Q_{t+1}, Q_{t+2}, mixture variables C_{t-1}, C_t, C_{t+1}, C_{t+2}, and observations X_{t-1}, X_t, X_{t+1}, X_{t+2}]

Correlation & Covariance
Correlation between two real random vectors X and Y:

    cor(X, Y) = E[X Y^\top]   (5.42)
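For random vectors this expectation is a matrix with entries E[X_i Y_j]; a quick Monte-Carlo sketch of the sample estimate (the data-generating model here is made up for illustration):

```python
import numpy as np

def cross_correlation(X, Y):
    """Estimate cor(X, Y) = E[X Y^T] from paired samples.
    X: (n, dx), Y: (n, dy)  ->  (dx, dy) matrix of E[X_i Y_j]."""
    return X.T @ Y / len(X)

rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=(n, 2))
# Y's first coordinate copies X's first coordinate plus noise;
# the second coordinate is pure noise, independent of X.
Y = X @ np.array([[1.0, 0.0], [0.0, 0.0]]) + rng.normal(size=(n, 2))
C = cross_correlation(X, Y)
# E[X_1 Y_1] ~= 1 (since Y_1 = X_1 + noise); all other entries ~= 0.
assert abs(C[0, 0] - 1.0) < 0.05
assert abs(C[0, 1]) < 0.05
assert abs(C[1, 0]) < 0.05
```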