
Opening: Oct 26 HMM Wrapup and MDP

Opening Example: Sketch the Forward algorithm to find $P(X_3 \mid E_{1:3})$ on the given graph.

We use Bayes' rule, and sum over the prior $X$ value (iteratively):
\[
P(X \mid E) = \frac{P(E \mid X)\, P(X)}{P(E)} \propto P(E \mid X) \sum_{X_{\text{prior}}} P(X \mid X_{\text{prior}})\, P(X_{\text{prior}})
\]

What about $P(X_5 \mid E_{1:3})$?

Mullen: HMM Smoothing MDP Fall 2020 1 / 33
Recap: Announcements and To-Dos

Announcements:
1. Skip 1a for now, but it's worth a bit of extra credit if you get A* working. I'll add a few edges to hard-code in an addendum.

Last time we learned:
1. Stationary distributions of Markov Models.
Recap: Hidden Markov Models

Example: Suppose you are a graduate student in a basement office. You are writing your dissertation, so you don't get to leave very often. You are curious whether it is raining, and the only contact you have with the outside world is through your advisor. If it is raining, she brings her umbrella 90% of the time, and has it just in case on 20% of sunny days. You know that historically, 40% of rainy days were followed by another rainy day, and 30% of sunny days were followed by a rainy day.
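Last lecture's stationary distribution idea applies directly to the transition model above. As a sketch (assuming state index 0 = rain, 1 = sun, which is not specified in the slides), power iteration on the transition matrix recovers the long-run fraction of rainy days:

```python
import numpy as np

# Transition model from the example: P(rain tomorrow | rain today) = 0.4,
# P(rain tomorrow | sun today) = 0.3. Rows are today's state, columns tomorrow's.
T = np.array([[0.4, 0.6],
              [0.3, 0.7]])

# The stationary distribution pi satisfies pi = pi @ T; repeated
# multiplication by T converges to it from any starting distribution.
pi = np.array([0.5, 0.5])
for _ in range(100):
    pi = pi @ T

print(pi)  # -> approximately [1/3, 2/3]: about a third of days are rainy
```

Solving $\pi = \pi T$ by hand gives the same answer: $\pi_{\text{rain}} = 0.4\,\pi_{\text{rain}} + 0.3\,(1 - \pi_{\text{rain}})$, so $\pi_{\text{rain}} = 1/3$.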
Recap: HMM Filtering

Filtering: the goal is to estimate $X_{t+1}$ given all the evidence available, $E_{1:t+1}$.

At $t = 0$:
\[
P(X_1 \mid E_1) = \alpha\, P(E_1 \mid X_1) \sum_{X_0} P(X_1 \mid X_0)\, P(X_0)
\]

At $t = 1$:
\[
P(X_2 \mid E_{1:2}) = \alpha\, P(E_2 \mid X_2) \sum_{X_1} P(X_2 \mid X_1)\, P(X_1 \mid E_1)
\]

We continue forward through the graph.
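The recursion above can be sketched on the umbrella example. This is an assumed encoding (index 0 = rain, 1 = sun, uniform prior over $X_0$, evidence `True` = umbrella observed); none of those choices are fixed by the slides:

```python
import numpy as np

# Umbrella HMM from the example.
T = np.array([[0.4, 0.6],        # T[i, j] = P(X_{t+1} = j | X_t = i)
              [0.3, 0.7]])
sensor = np.array([0.9, 0.2])    # P(umbrella | rain), P(umbrella | sun)

def forward(evidence, prior=np.array([0.5, 0.5])):
    """Filtering pass: returns P(X_t | E_{1:t}) for the final time step."""
    belief = prior
    for e in evidence:
        belief = belief @ T                        # sum over the prior X value
        likelihood = sensor if e else 1 - sensor   # P(E | X)
        belief = likelihood * belief
        belief /= belief.sum()                     # alpha normalizes out P(E)
    return belief

print(forward([True, True, False]))  # P(X_3 | E_{1:3}) for that evidence sequence
```

Each loop iteration is exactly one application of the update on this slide: evolve the belief through the transition model, weight by the sensor model, normalize.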
Recap: HMM Prediction

Prediction: the goal is to predict $X_{t+k+1}$ given all the evidence available, $E_{1:t}$.

The prediction of $X$ two time steps ($k = 1$) beyond where our evidence ended was:
\[
P(X_{t+2} \mid E_{1:t}) = \sum_{X_{t+1}} P(X_{t+2} \mid X_{t+1}) \sum_{X_t} P(X_{t+1} \mid X_t)\, P(X_t \mid E_{1:t})
\]

Making a $k$-step prediction just means doing forward steps until we're out of evidence, and then following the Markov process to evolve $X_{t+1} \mid X_t$ until we reach the desired future time.
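In code, "following the Markov process" is just repeated multiplication by the transition matrix, with no more evidence updates. A minimal sketch, again assuming the umbrella model's encoding (index 0 = rain, 1 = sun):

```python
import numpy as np

# Assumed umbrella transition model: index 0 = rain, 1 = sun.
T = np.array([[0.4, 0.6],
              [0.3, 0.7]])

def predict(filtered_belief, k):
    """P(X_{t+k+1} | E_{1:t}) from the filtered belief P(X_t | E_{1:t}).

    One transition step reaches X_{t+1}; k further steps reach X_{t+k+1}.
    No sensor updates occur, since no evidence exists past time t."""
    belief = filtered_belief
    for _ in range(k + 1):
        belief = belief @ T
    return belief

# If we are certain it rained today, tomorrow's prediction (k = 0) is just
# the rain row of T:
print(predict(np.array([1.0, 0.0]), 0))  # -> [0.4 0.6]
```

As $k$ grows, the prediction forgets the evidence entirely and converges to the stationary distribution of the chain.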
Recap: HMM Smoothing

Our final task is smoothing, where we try to update probabilities of prior states $X$ based on current evidence. So we want a description of $P(X_k \mid E_{1:t})$, where $t > k$.
\begin{align*}
P(X_k \mid E_{1:t}) &= P(X_k \mid E_{1:k}, E_{k+1:t}) \\
&= \alpha\, P(E_{k+1:t} \mid X_k, E_{1:k})\, P(X_k \mid E_{1:k}) \\
&= \alpha\, P(E_{k+1:t} \mid X_k)\, P(X_k \mid E_{1:k})
\end{align*}
We can find the last term by the Forward algorithm for filtering.
Recap: HMM Smoothing

This leaves the $P(E_{k+1:t} \mid X_k)$ term, which we denote by $b_{k+1:t}$: the probability of future measurements given the current state of our system, which is just a combination of our transition and sensor models! Imagine taking one time step and asking about the new evidence: we need to describe $X_{k+1}$.
\begin{align*}
b_{k+1:t} = P(E_{k+1:t} \mid X_k)
&= \sum_{X_{k+1}} \underbrace{P(E_{k+1:t} \mid X_{k+1})}_{\text{indep. of } X_k \text{ given } X_{k+1}}\, P(X_{k+1} \mid X_k) \\
&= \sum_{X_{k+1}} \underbrace{P(E_{k+1}, E_{k+2:t} \mid X_{k+1})}_{\text{split up the evidence}}\, P(X_{k+1} \mid X_k) \\
&= \sum_{X_{k+1}} \underbrace{P(E_{k+1} \mid X_{k+1})}_{\text{sensor model}}\, \underbrace{P(E_{k+2:t} \mid X_{k+1})}_{\text{similar to LHS}}\, \underbrace{P(X_{k+1} \mid X_k)}_{\text{Markov model}}
\end{align*}
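Putting the two messages together gives the forward-backward smoother. The sketch below again assumes the umbrella encoding (index 0 = rain, 1 = sun, uniform prior); `backward` implements exactly the recursion above, run from the last evidence back to step $k + 1$:

```python
import numpy as np

# Assumed umbrella model: index 0 = rain, 1 = sun.
T = np.array([[0.4, 0.6],        # P(X_{k+1} = col | X_k = row)
              [0.3, 0.7]])
sensor = np.array([0.9, 0.2])    # P(umbrella | state)

def lik(e):
    """Sensor likelihood vector P(E = e | X) for umbrella seen / not seen."""
    return sensor if e else 1 - sensor

def forward(evidence, prior=np.array([0.5, 0.5])):
    """Filtering: P(X_k | E_{1:k})."""
    b = prior
    for e in evidence:
        b = lik(e) * (b @ T)
        b /= b.sum()
    return b

def backward(evidence):
    """Backward message b_{k+1:t} = P(E_{k+1:t} | X_k), via the recursion
    sum over X_{k+1} of sensor * (message similar to LHS) * Markov model."""
    b = np.ones(2)               # base case: no remaining evidence
    for e in reversed(evidence):
        b = T @ (lik(e) * b)
    return b

def smooth(evidence, k):
    """P(X_k | E_{1:t}) = alpha * P(E_{k+1:t} | X_k) * P(X_k | E_{1:k})."""
    s = forward(evidence[:k]) * backward(evidence[k:])
    return s / s.sum()
```

As a sanity check, smoothing at the final time step ($k = t$) has an empty backward message of all ones, so it reduces to plain filtering.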