
Probabilistic graphical models
CPSC 532c (Topics in AI) / Stat 521a (Topics in multivariate analysis)
Lecture 6
Kevin Murphy
Wednesday 29 September, 2004

Administrivia

Discussion section on Thursday, 3:30-4:00, in room 304 (this week only).

Types of probabilistic inference

There are several kinds of queries we can make. Suppose the joint factors as $P(Y, E, W) = P(Y, W) \times P(E \mid Y, W)$.

- Conditional probability queries (sum-product):
  $P(Y \mid E = e) \propto \sum_w P(Y, w) \times P(e \mid Y, w)$
- Most probable explanation (MPE) queries (max-product; sometimes also called MAP):
  $(y, w)^* = \arg\max_y \max_w P(y, w) \times P(e \mid y, w)$
- Maximum A Posteriori (MAP) queries (max-sum-product, marginal MAP):
  $y^* = \arg\max_y \sum_w P(y, w) \times P(e \mid y, w)$

(All three query types are worked through by brute-force enumeration in the first sketch below.)

Inference in Hidden Markov Models (HMMs)

[Figure: HMM with hidden states $X_1, X_2, X_3$ and evidence nodes $E_1, E_2, E_3$.]

- Conditional probability queries, e.g. estimating the current state given past evidence (online filtering):
  $P(X_t \mid e_{1:t}) = \sum_{x_{1:t-1}} P(x_{1:t-1}, X_t \mid e_{1:t})$
- Most probable explanation (MPE) queries, e.g. the most probable sequence of states (Viterbi decoding):
  $x^*_{1:t} = \arg\max_{x_{1:t}} P(x_{1:t} \mid e_{1:t})$
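To make the three query types above concrete, here is a minimal brute-force enumeration sketch. Only the factorization $P(Y, E, W) = P(Y, W) \times P(E \mid Y, W)$ comes from the slide; the binary state spaces and all the numbers below are invented for illustration.

```python
import itertools

# Hypothetical toy joint P(Y, E, W) = P(Y, W) * P(E | Y, W), all variables
# binary. Every number below is made up purely for illustration.
P_YW = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
P_E_given_YW = {(y, w): ({0: 0.8, 1: 0.2} if y == w else {0: 0.3, 1: 0.7})
                for (y, w) in P_YW}

def joint(y, w, e):
    return P_YW[(y, w)] * P_E_given_YW[(y, w)][e]

e = 1  # observed evidence

# Conditional probability query (sum-product): P(Y | E = e).
unnorm = {y: sum(joint(y, w, e) for w in (0, 1)) for y in (0, 1)}
Z = sum(unnorm.values())
p_y_given_e = {y: p / Z for y, p in unnorm.items()}

# MPE query (max-product): (y, w)* = argmax_{y,w} P(y, w) * P(e | y, w).
mpe = max(itertools.product((0, 1), repeat=2), key=lambda yw: joint(*yw, e))

# Marginal MAP query (max-sum-product): y* = argmax_y sum_w P(y, w) * P(e | y, w).
y_map = max((0, 1), key=lambda y: sum(joint(y, w, e) for w in (0, 1)))

print(p_y_given_e, mpe, y_map)
```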
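The online filtering query $P(X_t \mid e_{1:t})$ is computed by the standard forward algorithm with per-step normalization. A minimal sketch, assuming a discrete HMM; the transition matrix A, emission matrix B, and initial distribution pi are hypothetical, since the slide does not fix any parameters.

```python
import numpy as np

def forward_filter(pi, A, B, obs):
    """Return the filtered beliefs P(X_t | e_{1:t}) for t = 1, ..., T.

    pi  : (K,)   initial distribution P(X_1)
    A   : (K, K) transitions, A[i, j] = P(X_{t+1} = j | X_t = i)
    B   : (K, M) emissions,   B[j, m] = P(E_t = m | X_t = j)
    obs : observed symbols e_1, ..., e_T
    """
    alpha = pi * B[:, obs[0]]
    alpha = alpha / alpha.sum()        # normalize: alpha = P(X_1 | e_1)
    beliefs = [alpha]
    for e in obs[1:]:
        alpha = (alpha @ A) * B[:, e]  # predict with A, then condition on e
        alpha = alpha / alpha.sum()    # normalize: alpha = P(X_t | e_{1:t})
        beliefs.append(alpha)
    return beliefs

# Hypothetical 2-state, 2-symbol HMM.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])
print(forward_filter(pi, A, B, [0, 1, 1]))
```

Normalizing at every step keeps the recursion numerically stable for long sequences and makes each alpha directly interpretable as the filtered posterior.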

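Viterbi decoding is the same recursion with the sum replaced by a max, plus back-pointers to recover the arg max sequence. A minimal log-space sketch, reusing the same hypothetical parameterization as the filtering example:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Return x*_{1:T} = argmax_{x_{1:T}} P(x_{1:T} | e_{1:T})."""
    T, K = len(obs), len(pi)
    # delta[t, j] = max over x_{1:t-1} of log P(x_{1:t-1}, X_t = j, e_{1:t})
    delta = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)  # scores[i, j]
        back[t] = scores.argmax(axis=0)             # best predecessor of j
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]                # trace back from the end
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])
print(viterbi(pi, A, B, [0, 1, 1]))
```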

Word error rate vs bit error rate

Note: the most probable sequence of states is not necessarily the sequence of most probable states. For example, with two binary variables (this example is checked numerically in a sketch below):

P(X1):
             X1 = 0   X1 = 1
               0.4      0.6

P(X2 | X1):
             X2 = 0   X2 = 1
    X1 = 0     0.1      0.9
    X1 = 1     0.5      0.5

P(X1, X2):
             X2 = 0   X2 = 1
    X1 = 0     0.04     0.36
    X1 = 1     0.30     0.30

Here $\arg\max_{x_1} P(X_1) = 1$, but $\arg\max_{x_1, x_2} P(X_1, X_2) = (0, 1)$.

- Viterbi decoding minimizes the word error rate:
  $x^*_{1:t} = \arg\max_{x_{1:t}} P(x_{1:t} \mid e_{1:t})$
- To minimize the bit error rate, use the marginally most likely state at each step:
  $P(X_t \mid e_{1:t}) = \sum_{x_{1:t-1}} P(x_{1:t-1}, X_t \mid e_{1:t})$, so
  $x^*_t = \arg\max_x P(X_t = x \mid e_{1:t})$

MAP vs Marginal MAP

[Figure: DBN for speech recognition with word nodes $W_1, W_2, W_3$, phoneme nodes $Q_1, Q_2, Q_3$, and evidence nodes $E_1, E_2, E_3$.]

Consider a Dynamic Bayes Net (DBN) for speech recognition, where $W$ = word and $Q$ = phoneme.

- Most likely sequence of states (Viterbi/MPE, max-product):
  $\arg\max_{q_{1:t}, w_{1:t}} P(q_{1:t}, w_{1:t} \mid e_{1:t})$
- Most likely sequence of words (marginal MAP, max-sum-product):
  $\arg\max_{w_{1:t}} \sum_{q_{1:t}} P(w_{1:t}, q_{1:t} \mid e_{1:t})$

Max-product is often used as a computationally simpler approximation to max-sum-product (or one can use A* decoding); the last sketch below shows the two giving different answers.
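The two-variable example above is small enough to verify by direct enumeration. A short sketch using exactly the numbers from the slide's tables: the most probable sequence comes out as (0, 1) with probability 0.36, while the sequence of most probable (marginal) states is (1, 1).

```python
import itertools

# Tables from the slide: binary X1, X2.
p_x1 = {0: 0.4, 1: 0.6}
p_x2_given_x1 = {0: {0: 0.1, 1: 0.9},
                 1: {0: 0.5, 1: 0.5}}

joint = {(x1, x2): p_x1[x1] * p_x2_given_x1[x1][x2]
         for x1, x2 in itertools.product((0, 1), repeat=2)}

# Most probable sequence: argmax of the joint.
best_seq = max(joint, key=joint.get)                      # (0, 1), p = 0.36

# Sequence of most probable states: argmax of each marginal.
p_x2 = {x2: sum(joint[x1, x2] for x1 in (0, 1)) for x2 in (0, 1)}
best_marginals = (max(p_x1, key=p_x1.get), max(p_x2, key=p_x2.get))  # (1, 1)

print(best_seq, best_marginals)
```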
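Why is max-product only an approximation to max-sum-product? The following sketch uses an invented one-step score table standing in for $P(w, q \mid e)$, with two words and three phonemes per word (all numbers hypothetical): summing out the phonemes picks one word, while jointly maximizing over (word, phoneme) picks the other.

```python
# Hypothetical scores standing in for P(w, q | e): a single time step, two
# words (w0, w1) and three phonemes (q0, q1, q2). Numbers are invented.
score = {
    ('w0', 'q0'): 0.20, ('w0', 'q1'): 0.18, ('w0', 'q2'): 0.17,
    ('w1', 'q0'): 0.25, ('w1', 'q1'): 0.10, ('w1', 'q2'): 0.10,
}
words = ('w0', 'w1')

# Exact marginal MAP (max-sum-product): sum out q, then maximize over w.
marginal = {w: sum(p for (wi, q), p in score.items() if wi == w) for w in words}
w_map = max(marginal, key=marginal.get)   # 'w0': 0.55 beats 'w1': 0.45

# Max-product approximation: maximize over (w, q) jointly, keep the word.
w_mpe = max(score, key=score.get)[0]      # ('w1', 'q0') wins with 0.25

print(w_map, w_mpe)   # 'w0' vs 'w1': the approximation picks a different word
```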