CS 221 Section 8: The Filtering Algorithm

1. The Maximum a Posteriori Estimate Chocolate Bar

We return to the example of the Bayesian chocolate company. The bosses have heard of your success in calculating the probability of the lever positions given the observations of the chocolate colors. As it stands, the company brands its chocolate as the "Completely Random Chocolate Bar," since the faulty levers mean it cannot know what each bar contains. But now the bosses want to re-brand it as the "Most Likely Chocolate Bar." The idea is that they will label the chocolate "Most Likely Almond," "Most Likely Coconut," etc., depending on what you calculate to be the maximum a posteriori (MAP) label of the chocolate coming off the assembly line at that time. They hope this will attract more customers than their old "Completely Random Chocolate" did.

You must automate the process of calculating the assignment probabilities. The factory will be making tens of thousands of chocolate bars a day, so you do not have time to calculate probabilities for each one by hand. You must also be able to compute a chocolate bar's label immediately after you observe whether it is light or dark chocolate, as the labeling machine comes next on the line and you must tell it what to print.

But you have forgotten the filtering algorithm that you learned about in CS221, so you decide to start re-deriving probabilities and see if you spot a pattern. You switch notation from the variables X and Y to s and o for your states (lever positions) and observations (chocolate shades), respectively. To review, you only know the transition model P(s_{t+1} | s_t) (how the levers change position from one step to the next), the observation probabilities P(o_t | s_t) (what shade of chocolate you observe given what is inside it), and a uniform prior over the first state, P(s_1). Computing the MAP label for a bar of chocolate at time t is equivalent to computing P(s_t | o_{1:t}), where o_{1:t} represents all of your observations from the time the factory started making chocolate that day.

(a) Derive the formula for the MAP label for the first chocolate bar off the line, given its observed shade. That is, derive P(s_1 | o_1), where s_1 is the state of the levers and o_1 is the observation of whether the chocolate is light or dark. Also show how you would efficiently compute this with matrix and vector operations. Assume you have a uniform prior over the lever positions once the factory starts up. Use the matrix O, which contains all of the observation probabilities P(o_t | s_t). In particular, each column corresponds to a different observation, and each row corresponds to a different state:

        | 0.1  0.9 |
    O = | 0.3  0.7 |
        | 0.8  0.2 |
        | 0.9  0.1 |

This follows directly from the original problem statement:
    Inside                 P(Color = Light (L) | Inside)   P(Color = Dark (D) | Inside)
    Plain (N)                          0.1                             0.9
    Almond (A)                         0.3                             0.7
    Coconut (C)                        0.8                             0.2
    Almond + Coconut (B)               0.9                             0.1

Answer: We begin by applying Bayes' Rule:

    P(s_1 | o_1) = P(o_1 | s_1) P(s_1) / P(o_1)
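To make the matrix-vector computation concrete, here is a minimal NumPy sketch of this first filtering step. The function name, the column convention (0 = Light, 1 = Dark), and the example observation are illustrative assumptions, not part of the handout:

    import numpy as np

    # Observation matrix O: one row per state (N, A, C, B), one column per
    # observation (Light, Dark), with the values from the table above.
    O = np.array([
        [0.1, 0.9],   # Plain (N)
        [0.3, 0.7],   # Almond (A)
        [0.8, 0.2],   # Coconut (C)
        [0.9, 0.1],   # Almond + Coconut (B)
    ])

    def filter_first_bar(O, o1):
        """Return P(s_1 | o_1) as a vector, assuming a uniform prior P(s_1)."""
        n_states = O.shape[0]
        prior = np.full(n_states, 1.0 / n_states)  # uniform P(s_1)
        unnormalized = O[:, o1] * prior            # P(o_1 | s_1) P(s_1), elementwise
        return unnormalized / unnormalized.sum()   # the sum is exactly P(o_1)

    # Example: the first bar off the line is light chocolate (o1 = 0).
    belief = filter_first_bar(O, 0)
    print(belief)           # approximately [0.048, 0.143, 0.381, 0.429]
    print(belief.argmax())  # 3, i.e. "Most Likely Almond + Coconut"

The same shape of update is the pattern the handout asks you to spot: at each later step, push the current belief through the transition model, reweight it elementwise by the column of O for the new observation, and renormalize. That recursion is the filtering (forward) algorithm named in the section title.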