Arthur Kunkle ECE 5526 HW #4
Problem 1

Each HMM was used to generate and visualize a sample sequence, X. These are the outputs from each HMM.

[Sample-sequence plots for HMM1, HMM2, HMM3, HMM4]
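Generating a sample sequence like the ones plotted above amounts to walking the transition matrix and drawing one emission per visited state. A minimal sketch, assuming a left-to-right chain with 1-D Gaussian emissions, an entry state 0, and an absorbing final state (the matrix and emission parameters below are hypothetical, not the assignment's HMMs):

```python
import numpy as np

def sample_hmm(A, means, stds, rng=None, max_len=100):
    """Sample a (state, observation) sequence from an HMM whose
    transition matrix follows the conventions described below:
    state 0 is the non-emitting entry state and the last state is
    a non-emitting absorbing final state."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    states, obs = [], []
    s = 0                                   # start in the entry state
    for _ in range(max_len):
        s = rng.choice(n, p=A[s])           # transition
        if s == n - 1:                      # reached the final state: stop
            break
        states.append(s)
        obs.append(rng.normal(means[s], stds[s]))   # Gaussian emission
    return states, obs

# Hypothetical 4-state chain: entry, two emitting states, final state.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.6, 0.3, 0.1],
              [0.0, 0.0, 0.7, 0.3],
              [0.0, 0.0, 0.0, 1.0]])
means = {1: 0.0, 2: 5.0}
stds  = {1: 1.0, 2: 1.0}
states, obs = sample_hmm(A, means, stds)
```

The large self-transition probabilities (0.6 and 0.7 here) are what produce the "class clumps" visible in the plots: the chain tends to stay in one state for several consecutive emissions.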
[Sample-sequence plots for HMM5, HMM6]

Questions:

1. The following characterize a correct transition matrix:
   a. It is square, with one row and one column per state.
   b. The first column is all 0 (the initial state cannot be transitioned to).
   c. The last row is all 0 except the final entry (the final state is absorbing: once entered, it is kept with probability 1).
   d. The entries in each row sum to 1.
2. The transition matrix affects the "duration" of emissions within particular classes or groups of classes. In the above output visualizations, especially in HMMs 4-6, the sample chains tend to occur in class clumps: large self-transition probabilities keep the chain in the same state for several steps in a row.
3. Without a final state, the observation sequence length would be unbounded.
4. A single HMM with N states and D-dimensional Gaussian emissions is specified by:
   a. a D-dimensional mean vector for each state
   b. a DxD covariance matrix for each state
   c. an NxN transition matrix for all transitions
   d. an N-dimensional initial state probability vector
   Total parameters: N*D + N*D^2 + N^2 + N
5. A word would use a left-to-right model. The sequence of phones is fixed, while self-transitions allow the same phone to repeat for longer utterances, which this model type also supports.

More Questions:

1. log(a + b) = log(a) + log(1 + e^(log(b) - log(a)))
             = log(a * (1 + e^(log(b) - log(a))))
             = log(a + a * e^(log(b/a)))
             = log(a + a * (b/a))
             = log(a + b)

If log(a) > log(b), this implementation is the better suited of the two: the exponent log(b) - log(a) is negative, so e^(log(b) - log(a)) lies in (0, 1] and cannot overflow, and the logarithm is evaluated away from zero, where ln(x) is asymptotic to negative infinity and its derivative becomes very sharp.

2. log(alpha_t(j)) = log(b_j(x_t)) + log(sum_i(alpha_t-1(i) * a_ij))

The logarithm cannot be distributed over the sum directly; instead the sum is accumulated with repeated applications of the log-add identity from question 1, applied to the terms log(alpha_t-1(i)) + log(a_ij). The biggest gain from this conversion is the ability to perform repeated additions instead of multiplications, while avoiding underflow of the very small probabilities alpha_t(i). Because the number of state transitions can be very large for some HMMs, this is a critical gain.
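The identity derived in question 1 and the log-domain forward step in question 2 can be sketched together in a few lines of Python (the function names are my own; swapping the arguments so the larger logarithm is factored out implements the better-suited choice discussed above):

```python
import math
from functools import reduce

def log_add(log_a, log_b):
    """Compute log(a + b) from log(a) and log(b) without leaving the
    log domain. Factoring out the larger argument keeps the exponent
    non-positive, so exp() cannot overflow."""
    if log_a < log_b:
        log_a, log_b = log_b, log_a          # ensure log_a >= log_b
    return log_a + math.log1p(math.exp(log_b - log_a))

def log_forward_step(log_alpha, log_A, log_b_j, j):
    """One state update of the forward recursion in the log domain:
    log(alpha_t(j)) = log(b_j(x_t)) + log(sum_i alpha_t-1(i) * a_ij),
    with the sum inside the log accumulated via repeated log_add."""
    terms = [log_alpha[i] + log_A[i][j] for i in range(len(log_alpha))]
    return log_b_j + reduce(log_add, terms)

print(log_add(math.log(3.0), math.log(5.0)))   # 2.0794... = log(8)
```

Using math.log1p rather than log(1 + x) keeps the result accurate when the exponential term is tiny, i.e. exactly the regime where the two summands differ by many orders of magnitude.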
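The four transition-matrix properties listed in question 1 can also be checked mechanically. A sketch (the function name and tolerance are my own choices, not from the assignment):

```python
import numpy as np

def is_valid_transition_matrix(A, tol=1e-9):
    """Check the transition-matrix properties from question 1: square
    with one row/column per state, no transitions into the initial
    state, an absorbing final state, and rows that sum to 1."""
    A = np.asarray(A, dtype=float)
    if A.ndim != 2 or A.shape[0] != A.shape[1]:       # (a) NxN
        return False
    if np.any(np.abs(A[:, 0]) > tol):                 # (b) first column all 0
        return False
    expected_last = np.zeros(A.shape[0])
    expected_last[-1] = 1.0
    if np.any(np.abs(A[-1] - expected_last) > tol):   # (c) final state absorbing
        return False
    if np.any(A < -tol):                              # probabilities non-negative
        return False
    return bool(np.all(np.abs(A.sum(axis=1) - 1.0) <= tol))  # (d) rows sum to 1

good = [[0.0, 1.0, 0.0],
        [0.0, 0.5, 0.5],
        [0.0, 0.0, 1.0]]
bad  = [[0.5, 0.5, 0.0],        # leaks probability back into the initial state
        [0.0, 0.5, 0.5],
        [0.0, 0.0, 1.0]]
print(is_valid_transition_matrix(good), is_valid_transition_matrix(bad))  # True False
```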