
# Lecture 14 - CS 188 Artificial Intelligence, Spring 2010: Bayes' Nets


CS 188: Artificial Intelligence, Spring 2010
Lecture 14: Bayes' Nets, 3/4/2010
Pieter Abbeel - UC Berkeley
Many slides throughout the course adapted from Dan Klein, Stuart Russell, Andrew Moore

## Announcements

- Assignments
  - P3 due tonight
  - W4 going out tonight
- Midterm
  - 3/18, 6-9pm, 0010 Evans
  - No lecture on 3/18

## Probabilistic Inference

- Probabilistic inference: compute a desired probability from other known probabilities (e.g., a conditional from the joint)
- We generally compute conditional probabilities
  - P(on time | no reported accidents) = 0.90
  - These represent the agent's beliefs given the evidence
- Probabilities change with new evidence:
  - P(on time | no accidents, 5 a.m.) = 0.95
  - P(on time | no accidents, 5 a.m., raining) = 0.80
  - Observing new evidence causes beliefs to be updated

## Inference by Enumeration

- P(sun)?
- P(sun | winter)?
- P(sun | winter, warm)?

| S      | T    | W    | P    |
|--------|------|------|------|
| summer | hot  | sun  | 0.30 |
| summer | hot  | rain | 0.05 |
| summer | cold | sun  | 0.10 |
| summer | cold | rain | 0.05 |
| winter | hot  | sun  | 0.10 |
| winter | hot  | rain | 0.05 |
| winter | cold | sun  | 0.15 |
| winter | cold | rain | 0.20 |
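The three queries above can be answered directly from the joint table: sum the consistent entries, then normalize by the evidence. A minimal sketch in Python (the table and variable names S, T, W are from the slides; the helper `prob` is ours for illustration):

```python
# Joint distribution P(S, T, W) from the table above.
joint = {
    ("summer", "hot",  "sun"):  0.30,
    ("summer", "hot",  "rain"): 0.05,
    ("summer", "cold", "sun"):  0.10,
    ("summer", "cold", "rain"): 0.05,
    ("winter", "hot",  "sun"):  0.10,
    ("winter", "hot",  "rain"): 0.05,
    ("winter", "cold", "sun"):  0.15,
    ("winter", "cold", "rain"): 0.20,
}

def prob(predicate):
    """Sum the joint entries consistent with the predicate."""
    return sum(p for (s, t, w), p in joint.items() if predicate(s, t, w))

# P(sun): sum out S and T.
p_sun = prob(lambda s, t, w: w == "sun")  # 0.30 + 0.10 + 0.10 + 0.15 = 0.65

# P(sun | winter): restrict to winter entries, then normalize.
p_sun_given_winter = (prob(lambda s, t, w: s == "winter" and w == "sun")
                      / prob(lambda s, t, w: s == "winter"))  # 0.25 / 0.50 = 0.5
```

Note that marginalizing (P(sun)) needs no normalization, while conditioning (P(sun | winter)) divides by the probability of the evidence.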
## Inference by Enumeration: General Case

- Evidence variables: E_1, ..., E_k = e_1, ..., e_k
- Query* variable: Q
- Hidden variables: H_1, ..., H_r
- Together these are all the variables
- We want: P(Q | e_1, ..., e_k)
- First, select the entries consistent with the evidence
- Second, sum out the hidden variables H to get the joint of the query and the evidence
- Finally, normalize the remaining entries to conditionalize
- Obvious problems:
  - Worst-case time complexity O(d^n)
  - Space complexity O(d^n) to store the joint distribution

\* Works fine with multiple query variables, too

## The Product Rule

- Sometimes we have conditional distributions but want the joint: P(d, w) = P(w) P(d | w)
- Example: combining P(W) and P(D | W) gives the joint P(D, W)

P(W):

| W    | P   |
|------|-----|
| sun  | 0.8 |
| rain | 0.2 |

P(D | W):

| D   | W    | P   |
|-----|------|-----|
| wet | sun  | 0.1 |
| dry | sun  | 0.9 |
| wet | rain | 0.7 |
| dry | rain | 0.3 |

P(D, W):

| D   | W    | P    |
|-----|------|------|
| wet | sun  | 0.08 |
| dry | sun  | 0.72 |
| wet | rain | 0.14 |
| dry | rain | 0.06 |
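The product-rule example above is a pointwise multiplication: each entry of P(D, W) is the matching entry of P(D | W) times the matching entry of P(W). A minimal sketch (dictionary layout is ours for illustration):

```python
# Product rule: P(d, w) = P(w) * P(d | w), using the tables above.
p_w = {"sun": 0.8, "rain": 0.2}                      # P(W)
p_d_given_w = {                                      # P(D | W)
    ("wet", "sun"): 0.1, ("dry", "sun"): 0.9,
    ("wet", "rain"): 0.7, ("dry", "rain"): 0.3,
}

# Joint distribution P(D, W): multiply each conditional entry
# by the marginal probability of its W value.
p_dw = {(d, w): p_d_given_w[(d, w)] * p_w[w]
        for (d, w) in p_d_given_w}

# e.g. p_dw[("wet", "sun")] = 0.1 * 0.8 = 0.08, matching the table.
```

As a sanity check, the resulting joint sums to 1, since each conditional distribution P(D | w) sums to 1 and P(W) does too.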

## The Chain Rule

- More generally, we can always write any joint distribution as an incremental product of conditional distributions:
  P(x_1, x_2, ..., x_n) = ∏_i P(x_i | x_1, ..., x_{i-1})
- Why is this always true?

## Bayes' Rule

- Two ways to factor a joint distribution over two variables:
  P(x, y) = P(x | y) P(y) = P(y | x) P(x)
- Dividing, we get:
  P(x | y) = P(y | x) P(x) / P(y)
- Why is this at all helpful?
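Bayes' rule can be checked numerically on the earlier rain/sun example: to invert P(wet | sun) into P(sun | wet), multiply by the prior P(sun) and divide by P(wet), where P(wet) comes from summing over both weather values (the numbers are from the product-rule tables; the variable names are ours):

```python
# Numeric check of Bayes' rule: P(sun | wet) = P(wet | sun) P(sun) / P(wet).
p_sun, p_rain = 0.8, 0.2                 # prior P(W)
p_wet_given_sun, p_wet_given_rain = 0.1, 0.7

# P(wet) by total probability: sum over the two weather values.
p_wet = p_wet_given_sun * p_sun + p_wet_given_rain * p_rain   # 0.08 + 0.14 = 0.22

p_sun_given_wet = p_wet_given_sun * p_sun / p_wet
# 0.08 / 0.22 ≈ 0.364: observing "wet" drops the belief in "sun" from 0.8.
```

This is the usual reason Bayes' rule is helpful: the causal direction P(effect | cause) is often easy to specify, and the rule lets us compute the diagnostic direction P(cause | effect) from it.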

