# Ch8.3.1-Introduction: Inference in Graphical Models (Machine Learning, Sargur Srihari)


## Inference in Graphical Models

Sargur Srihari

## Topics

1. Objective of inference algorithms
2. Cases
   1. Bayes' theorem inference
   2. Inference on a chain
   3. Inference on trees
3. Factor graphs
4. Exact inference algorithms for tree graphs
   1. The sum-product algorithm
   2. The max-sum algorithm
5. Exact inference in general graphs
6. Approximate inference: loopy belief propagation
7. Learning the graph structure
## What Is Inference?

Graphical models represent a joint probability distribution. There are several types of inference task:

1. Compute marginal probabilities. Conditional probabilities can then be easily computed from the joint and the marginals.
2. Evaluate posterior distributions. Some of the nodes in the graph are clamped to observed values (X); we compute the posterior distribution p(Z|X) over one or more subsets of latent-variable nodes Z.
3. Compute maximum a posteriori (MAP) probabilities.
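These three task types can be illustrated on a tiny discrete model. The sketch below uses a hypothetical 2x2 joint table over two binary variables (the numbers are illustrative, not from the slides) and computes a marginal, a posterior over the latent variable given an observation, and a MAP estimate.

```python
import numpy as np

# Hypothetical joint distribution p(x, y) over two binary variables,
# stored as a 2x2 table: rows index x, columns index y.
joint = np.array([[0.3, 0.1],
                  [0.2, 0.4]])

# 1. Marginal probabilities: sum out the other variable.
p_x = joint.sum(axis=1)          # p(x)
p_y = joint.sum(axis=0)          # p(y)

# Conditionals follow easily from the joint and the marginals:
# p(y|x) = p(x, y) / p(x).
p_y_given_x = joint / p_x[:, None]

# 2. Posterior over the latent variable given an observation:
# clamp y = 1 and compute p(x | y=1) = p(x, y=1) / p(y=1).
posterior = joint[:, 1] / p_y[1]

# 3. MAP estimate: the most probable x under that posterior.
x_map = posterior.argmax()
```

Each row of `p_y_given_x` sums to one, and `posterior` is a proper distribution over x, as expected.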

## Inference Algorithms

Inference algorithms exploit the graphical structure to achieve efficiency, and are expressed as messages passed around the graph. There are two types:

- Exact methods (the only kind we discuss here)
- Approximate methods, which are useful when there are a large number of latent variables, e.g., variational Bayes
## Three Simple Inference Cases

1. Bayes' theorem as inference
2. Inference on a chain
3. Inference on a tree

## 1. Bayes' Theorem as Inference

Consider a joint distribution p(x, y) over two variables x and y, with factorization p(x, y) = p(x) p(y|x), represented as the directed graph in (a). We are given the CPDs p(x) and p(y|x). If we observe the value of y, as in (b), we can view the marginal p(x) as a prior over the latent variable x. By analogy with a two-class classifier, x takes values 0 or 1 and y is continuous; we wish to infer the posterior distribution p(x|y).
## Inferring the Posterior Using Bayes

Using the sum and product rules, we can evaluate the marginal

p(y) = Σ_{x'} p(x') p(y|x'),

which requires a summation over x'. This marginal is then used in Bayes' rule to calculate the posterior

p(x|y) = p(y|x) p(x) / p(y).

The joint is now expressed as p(x, y) = p(y) p(x|y), which is shown in (c). Thus, knowing the value of y, we know the distribution of x.
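The two-class analogy can be sketched concretely. Assuming (hypothetically, since the slides give no parameters) a prior p(x) over x in {0, 1} and Gaussian class conditionals p(y|x), the posterior p(x|y) follows from the marginal summation and Bayes' rule exactly as above.

```python
import math

# Hypothetical parameters for the two-class example: a prior p(x) and
# Gaussian class conditionals p(y|x) = N(y; mu[x], sigma[x]^2).
prior = [0.6, 0.4]
mu = [0.0, 2.0]
sigma = [1.0, 1.0]

def gaussian_pdf(y, m, s):
    """Density of N(m, s^2) evaluated at y."""
    return math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def posterior(y):
    """Infer p(x|y) = p(y|x) p(x) / p(y) for an observed y."""
    likelihood = [gaussian_pdf(y, mu[x], sigma[x]) for x in (0, 1)]
    # The required summation: p(y) = sum over x' of p(x') p(y|x').
    p_y = sum(p * l for p, l in zip(prior, likelihood))
    return [p * l / p_y for p, l in zip(prior, likelihood)]

post = posterior(1.0)   # observe y = 1.0, midway between the two means
```

At y = 1.0 the two likelihoods are equal (y is equidistant from both means with equal variances), so the posterior reduces to the prior; for other observations the likelihood shifts it toward one class.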

This preview has intentionally blurred sections. Sign up to view the full version.

View Full Document
## 2. Inference on a Chain