CS 228, Winter 2008 Final

You have 24 hours to complete this exam. You must return the completed exam to Gates 120 (the Fishbowl) at either 12:00 pm or 6:00 pm the day after you receive the exam, depending on what time you chose to receive it.

This exam is long and difficult, and we do not expect everyone to finish all of the questions. Be sure to use good test-taking skills: attack the easier problems first, spend more time on questions worth more points, and generally pay attention to how you spend your time. Also, you are welcome (and we expect you) to use or refer to algorithms from the reader when appropriate, without having to rederive or explain them. Furthermore, please use standard notation from the reader (when possible) and clearly define any terms you introduce. Algorithm answers should be provided in the form of pseudocode (with explanations where necessary), and your answers will be easier to grade (read: you will get higher grades) if you use proper spacing and layout on the page. We have provided approximate times and lengths for some of the problems to give you a rough estimate of how long we think they might take (in time and in pages, not including diagrams).

Short Questions

1. [12 points] Clique Tree Calibration

Suppose that we have a clique tree over a set of factors F with cliques C_1, ..., C_N, which we have calibrated using sum-product message propagation, so that we have all messages δ_{i→j}.

(a) [6 points] If we modify a factor in some clique C_i, which message updates do we have to perform to recalibrate the tree?

(b) [6 points] If we modify a factor in some clique C_i, but we just want the marginal over a single pre-specified variable X_k, which message updates do we have to perform?

2. [7 points] Learning 2-TBNs

In this question we will analyze the problem of learning a 2-TBN model from data. Assume that our state is represented by a set of variables X_1, ..., X_n, and that our goal is to learn the 2-TBN structure. Assume also that we are considering only models where there are at most 3 parents per variable. Explain briefly why the problem of learning a 2-TBN structure is considerably easier (that is, in terms of the asymptotic running time) when we assume that there are no intra-time-slice edges in the 2-TBN. Estimated length: 2–3 sentences.

3. [12 points] Multi-conditional Parameter Learning, Markov Networks

In this problem, we will consider the problem of learning parameters for a Markov network using a specific objective function. In particular, assume that we have two sets of variables Y and X...
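Question 1 turns on how sum-product messages δ_{i→j} are computed. As a refresher (not an answer to the question), here is a minimal Python sketch of one message update and the resulting clique belief in a hypothetical two-clique tree C_1 = {A, B}, C_2 = {B, C} with sepset {B}; the variables are binary and the potential values are made up for illustration.

```python
from itertools import product

# Hypothetical toy clique tree: C1 = {A, B}, C2 = {B, C}, sepset S12 = {B}.
# Factor tables map assignments to (made-up) potential values.
phi1 = {(a, b): [[2.0, 1.0], [1.0, 3.0]][a][b] for a in (0, 1) for b in (0, 1)}
phi2 = {(b, c): [[1.0, 2.0], [4.0, 1.0]][b][c] for b in (0, 1) for c in (0, 1)}

def message_1_to_2(phi1):
    """delta_{1->2}(B): sum out A, the variable in C1 not in the sepset."""
    return {b: sum(phi1[(a, b)] for a in (0, 1)) for b in (0, 1)}

def belief_2(phi2, delta):
    """Calibrated belief at C2: local potential times the incoming message."""
    return {(b, c): phi2[(b, c)] * delta[b] for b, c in product((0, 1), repeat=2)}

delta12 = message_1_to_2(phi1)
beta2 = belief_2(phi2, delta12)
# Unnormalized marginal over C: sum out B from the belief at C2.
marg_c = {c: sum(beta2[(b, c)] for b in (0, 1)) for c in (0, 1)}
print(delta12, marg_c)
```

In a larger tree, δ_{i→j} would multiply in all incoming messages except the one from C_j before summing out C_i \ S_{ij}; that dependence structure is what determines which messages must be recomputed after a factor changes.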