Lecture 16 - CS440/ECE448: Intro to Artificial Intelligence

CS440/ECE448: Intro to Artificial Intelligence
Lecture 16: Exact inference in Bayes Nets
Prof. Julia Hockenmaier
[email protected]
http://cs.illinois.edu/fa11/cs440

Grades...
[Charts: your midterm percentages, your MP percentages, your quiz totals, and your current and predicted final grades.]

Probability review

Atomic events
[Diagram of the sample space Ω, partitioned by the Boolean random variable Square (Square / ¬Square) and the categorical random variable Color (blue / yellow / red).]

Complex events
[Diagram: complex events are regions of Ω combining values of Square and Color.]

Joint probability P(A,B)
P(A ∩ B) = P(A, B)
If A and B are Boolean variables: P(A,B) = P(A ∧ B)
[Venn diagram: A, B, and A ∩ B.]

Conditional probability P(A|B)
Definition: P(A | B) = P(A,B) / P(B)
Product rule: P(A,B) = P(A | B) P(B)

The full joint distribution

  Fun?    Weather:  Sunny   Cloudy   Rainy   Snowy
  Yes               0.25    0.15     0.05    0.13
  No                0.05    0.1      0.25    0.02

From the full joint distribution, we can obtain:
–  Conditional distributions P(Fun? | Weather)
–  Marginal distributions P(Weather)
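To make these two operations concrete, here is a minimal Python sketch (not part of the original slides; the names joint, p_weather, and p_fun_given_weather are illustrative only) that recovers the marginal P(Weather) and the conditional P(Fun? | Weather) from the full joint table above.

```python
# Full joint distribution P(Fun?, Weather) from the table above.
joint = {
    ("yes", "sunny"): 0.25, ("yes", "cloudy"): 0.15,
    ("yes", "rainy"): 0.05, ("yes", "snowy"): 0.13,
    ("no",  "sunny"): 0.05, ("no",  "cloudy"): 0.10,
    ("no",  "rainy"): 0.25, ("no",  "snowy"): 0.02,
}

def p_weather(w):
    """Marginal P(Weather=w): sum the joint over all values of Fun?."""
    return sum(p for (fun, weather), p in joint.items() if weather == w)

def p_fun_given_weather(fun, w):
    """Conditional P(Fun?=fun | Weather=w) = P(fun, w) / P(w)."""
    return joint[(fun, w)] / p_weather(w)

print(p_weather("sunny"))                   # 0.30
print(p_fun_given_weather("yes", "sunny"))  # 0.25 / 0.30 ≈ 0.833
```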
Independence

Random variables X and Y are independent (X ⊥ Y) if P(X,Y) = P(X) × P(Y).
NB: Since X and Y are random variables (not individual events), P(X,Y) = P(X) × P(Y) is an abbreviation for ∀x ∀y: P(X=x, Y=y) = P(X=x) × P(Y=y).
X and Y are conditionally independent given Z (X ⊥ Y | Z) if P(X,Y | Z) = P(X | Z) × P(Y | Z).
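As a sanity check on the definition, this short continuation of the sketch above (it assumes the joint dictionary and p_weather from that sketch are in scope) tests whether Fun? and Weather in the earlier table are independent by comparing one joint entry against the product of its marginals.

```python
# Continues the previous sketch: assumes `joint` and `p_weather` are in scope.
p_fun_yes = sum(p for (fun, w), p in joint.items() if fun == "yes")  # 0.58

# If Fun? and Weather were independent, every joint entry would equal the
# product of the corresponding marginals. Check one entry:
print(joint[("yes", "sunny")])         # 0.25
print(p_fun_yes * p_weather("sunny"))  # 0.58 * 0.30 = 0.174 -> not independent
```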
Conditional independence

X and Y are conditionally independent given Z (X ⊥ Y | Z) if P(X,Y | Z) = P(X | Z) × P(Y | Z).
[Diagram: X and Y each depend on Z.] The value of X depends on the value of Z, and the value of Y depends on the value of Z, so X and Y are not independent.

Bayesian networks

Insight: (conditional) independence assumptions are essential for probabilistic modeling.
A Bayes Net is a directed graph that represents the joint distribution of a number of random variables:
–  Nodes = random variables
–  Directed edges = dependencies
The Student scenario (Koller & Friedman '09)

A company wants to hire intelligent CS grads.
Each student has an SAT score and a recommendation letter from a professor whose class they took.
The SAT score depends on the student's intelligence.
The professor's recommendation depends purely on the student's grade.
The student's grade in the class depends on their intelligence as well as the difficulty of the class.

[Network structure: Intelligence and Difficulty are parents of Grade; Intelligence is the parent of SAT; Grade is the parent of Letter.]
Each student has an SAT score and a recommendation letter.
The SAT score depends on their intelligence.
The recommendation depends on their grade.
The grade depends on the student's intelligence as well as the difficulty of the class.

Some terminology
Difficulty and Intelligence are parents of Grade.
Letter is a (direct) descendant of Grade.
SAT is a non-descendant of Grade.

The conditional probability tables:

  P(Difficulty):    D=lo 0.6    D=hi 0.4
  P(Intelligence):  I=lo 0.7    I=hi 0.3

  P(Grade | I, D):   G=A    G=B    G=C
    I=lo, D=lo       0.3    0.4    0.3
    I=lo, D=hi       0.05   0.25   0.7
    I=hi, D=lo       0.9    0.08   0.02
    I=hi, D=hi       0.5    0.3    0.2

  P(SAT | I):        SAT=lo   SAT=hi
    I=lo             0.95     0.05
    I=hi             0.2      0.8

  P(Letter | G):     L=w      L=s
    G=A              0.1      0.9
    G=B              0.4      0.6
    G=C              0.99     0.01

Difficulty is a binary R.V. (easy/hard).
Intelligence is a binary R.V. (low/high).
SAT is a binary R.V. (low/high).
There are three grades (A, B, C).
Letter is a binary R.V. (weak rec./strong rec.).

Q: What is the probability of the following situation: an intelligent student gets a B in an easy class, a high SAT score, and a weak letter?

Answer:
P(I=hi) P(D=lo) P(G=B | I=hi, D=lo) P(S=hi | I=hi) P(L=w | G=B)
  = 0.3 × 0.6 × 0.08 × 0.8 × 0.4 = 0.004608
The chain rule for BNs

To compute the joint probability of the random variables X1, ..., Xn in a Bayes Net, we multiply the conditional probabilities of each R.V. Xi given its parents Pa(Xi):

  P(X1, ..., Xn) = ∏_{i=1..n} P(Xi | Pa(Xi))
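The following Python sketch (not from the slides; names such as p_d, p_i, and joint_prob are illustrative) encodes the CPTs above and applies the chain rule to the worked example, reproducing the 0.004608 result.

```python
# CPTs of the student network (values from the slides).
p_d = {"lo": 0.6, "hi": 0.4}                       # P(Difficulty)
p_i = {"lo": 0.7, "hi": 0.3}                       # P(Intelligence)
p_g = {                                            # P(Grade | I, D)
    ("lo", "lo"): {"A": 0.3,  "B": 0.4,  "C": 0.3},
    ("lo", "hi"): {"A": 0.05, "B": 0.25, "C": 0.7},
    ("hi", "lo"): {"A": 0.9,  "B": 0.08, "C": 0.02},
    ("hi", "hi"): {"A": 0.5,  "B": 0.3,  "C": 0.2},
}
p_s = {"lo": {"lo": 0.95, "hi": 0.05},             # P(SAT | I)
       "hi": {"lo": 0.2,  "hi": 0.8}}
p_l = {"A": {"w": 0.1,  "s": 0.9},                 # P(Letter | Grade)
       "B": {"w": 0.4,  "s": 0.6},
       "C": {"w": 0.99, "s": 0.01}}

def joint_prob(i, d, g, s, l):
    """Chain rule for this network:
    P(I,D,G,S,L) = P(I) P(D) P(G|I,D) P(S|I) P(L|G)."""
    return p_i[i] * p_d[d] * p_g[(i, d)][g] * p_s[i][s] * p_l[g][l]

# Intelligent student, easy class, grade B, high SAT, weak letter:
print(joint_prob("hi", "lo", "B", "hi", "w"))  # ≈ 0.004608
```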
Conditional independences

[Network: Intelligence, Difficulty, Grade, SAT, Letter as above.]
Each node depends directly only on its parents.
Letter is conditionally independent of all other nodes given its parent:
  (Letter ⊥ Intelligence, Difficulty, SAT | Grade)

Conditional independences

What about Grade?
Grade is conditionally independent of SAT given Intelligence, Letter (and Difficulty):
  (Grade ⊥ SAT | Letter, Intelligence, Difficulty)

More terminology

Difficulty and Intelligence are parents of Grade.
Letter is a (direct) descendant of Grade.
The parents and direct descendants of a node (together with, in general, the other parents of those descendants) form its Markov blanket.
Conditional independences in Bayes Nets

[Network: Intelligence, Difficulty, Grade, SAT, Letter as above.]
Each node is conditionally independent of its non-descendants given its parents, and of all other nodes given its Markov blanket.
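As an illustration of this property, the short continuation below (it assumes the CPT dictionaries and joint_prob from the earlier sketch are in scope; p_grade_given is a hypothetical helper name) checks numerically that conditioning on SAT does not change the distribution of Grade once its parents Intelligence and Difficulty are given.

```python
from itertools import product

def p_grade_given(i, d, s=None):
    """P(Grade | I=i, D=d), or P(Grade | I=i, D=d, SAT=s) if s is given,
    computed by summing the joint over the remaining variables and renormalizing."""
    weights = {}
    for g in ("A", "B", "C"):
        weights[g] = sum(
            joint_prob(i, d, g, s2, l)
            for s2, l in product(("lo", "hi"), ("w", "s"))
            if s is None or s2 == s
        )
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

# Conditioning on SAT leaves P(Grade | I, D) unchanged:
print(p_grade_given("hi", "lo"))          # same distribution ...
print(p_grade_given("hi", "lo", s="hi"))  # ... with or without SAT evidence
```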
Inference in Bayes Nets

More generally, we want to know the distribution of a set of query variables given some observed event.
What is the probability of getting a strong letter if you are an intelligent student?
An event is an assignment of values to a set of evidence variables (here: intelligence).

Computing inferences in Bayes Nets

From the joint to the conditional: P(X | Y) = P(X,Y) / P(Y)
How do we compute P(Y)? Answer: marginalization.
Do we care about P(Y)? Answer: not necessarily, if we just want to compare P(X | Y=y) for the same set of y's.
Computing inferences in Bayes Nets

What is the probability of getting a strong letter if you are an intelligent student?
What about the other, hidden, variables? Answer: we have to marginalize them out:

  P(X, E) = Σ_H P(X, H, E)

[Same student network and CPTs as above.]
What is the probability of getting a strong letter if you are an intelligent student?
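To close the loop, here is a sketch of inference by enumeration for this query (again a continuation of the earlier sketch: it assumes joint_prob and the CPT dictionaries are in scope, and p_letter_given_intelligence is a hypothetical helper name). It sums the joint over the hidden variables Difficulty, Grade, and SAT and then normalizes over Letter.

```python
from itertools import product

def p_letter_given_intelligence(i):
    """P(Letter | Intelligence=i) by enumeration: marginalize out the
    hidden variables D, G, S, then normalize over Letter."""
    unnormalized = {}
    for l in ("w", "s"):
        unnormalized[l] = sum(
            joint_prob(i, d, g, s, l)
            for d, g, s in product(("lo", "hi"), ("A", "B", "C"), ("lo", "hi"))
        )
    total = sum(unnormalized.values())   # this equals P(Intelligence=i)
    return {l: p / total for l, p in unnormalized.items()}

# Probability of a strong letter for an intelligent student:
print(p_letter_given_intelligence("hi")["s"])  # ≈ 0.77
```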