Markov Chains
Tutorial #5
Ydo Wexler & Dan Geiger
Statistical Parameter Estimation
Reminder
The basic paradigm: data set, model, parameters
MLE / Bayesian approach
Input data: a series of observations X1, X2, ..., Xt
We assumed observations were i.i.d. (in
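The estimator itself is cut off above. As an illustration of the count-and-normalize MLE that a Markov chains tutorial typically builds toward, here is a minimal sketch for first-order transition parameters (the function name and sequence are ours, not the tutorial's):

```python
from collections import Counter

def mle_transition_probs(seq):
    """MLE of first-order Markov transition probabilities from one
    observed state sequence: count transitions, then normalize rows."""
    pair_counts = Counter(zip(seq, seq[1:]))   # counts of s_t -> s_{t+1}
    row_totals = Counter(seq[:-1])             # occurrences as a source state
    return {(a, b): c / row_totals[a] for (a, b), c in pair_counts.items()}

probs = mle_transition_probs("AABABBBA")
# from 'A' we see A->A once and A->B twice, so P(B|A) = 2/3
```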
CSCI 5512: Homework 2 Solutions
Eric Theriault
Spring 2010
Programming Notes
All programming solutions are written in MATLAB. To be more useful to those students who wrote in C and Java, the solutions use loops instead of matrix operations. Note that thes
CSCI 5512: Artificial Intelligence II (Spring '10)
Homework 2 (Due Mar 17 by 11:59 PM)
1. (20 points) Consider the Rain network in Figure 1. Assume that WetGrass = true. For simplicity, we denote the events by c, s, r and w for Cloudy=true, Sprinkler=true
CSCI 5512: Artificial Intelligence II
(Spring '10)
Homework 2 (Due Mar 08 at 4pm)
1. (25 points) [Programming Assignment] Consider the rain network in Figure 1. Assume that
Sprinkler = true and WetGrass = true. For simplicity, we denote these two events by s
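The assignment statement is truncated, but inference in the Rain network given fixed evidence is commonly approached by sampling. A minimal rejection-sampling sketch, assuming the standard textbook CPT values (the actual Figure 1 may use different numbers):

```python
import random

# CPTs of the textbook Rain network (assumed values; check against Figure 1)
P_C = 0.5
P_S = {True: 0.10, False: 0.50}             # P(Sprinkler=true | Cloudy)
P_R = {True: 0.80, False: 0.20}             # P(Rain=true | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.00}  # P(WetGrass=true | S, R)

def rejection_sample_c_given_s_w(n, seed=0):
    """Estimate P(Cloudy=true | Sprinkler=true, WetGrass=true) by
    sampling from the prior and keeping samples matching the evidence."""
    rng = random.Random(seed)
    kept = cloudy_true = 0
    for _ in range(n):
        c = rng.random() < P_C
        s = rng.random() < P_S[c]
        r = rng.random() < P_R[c]
        w = rng.random() < P_W[(s, r)]
        if s and w:                          # evidence check: reject otherwise
            kept += 1
            cloudy_true += c
    return cloudy_true / kept

estimate = rejection_sample_c_given_s_w(100_000)
```

With these assumed CPTs the exact posterior is about 0.175, so the estimate should land close to that; rejection sampling is only one of the algorithms such an assignment might ask for.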
CSCI 5512W: Artificial Intelligence II
(Spring '08)
Mid-Term Exam
1. (25 points) In your local nuclear power station, there is an alarm that senses when a temperature gauge exceeds a given threshold. The gauge measures the temperature of the core.
Consider th
CSCI 5512: Artificial Intelligence II
(Spring '10)
Homework 1
(Due Mon, Feb 15, 4pm)
1. (20 points) Consider the Burglary network in Figure 1.

Burglary:    P(B) = .001
Earthquake:  P(E) = .002

Alarm:
    B   E  | P(A|B,E)
    T   T  |  .95
    T   F  |  .94
    F   T  |  .29
    F   F  |  .001

JohnCalls: P(J|A) for A ∈ {T, F}
CSCI 5512: Homework 1 Solutions
Professor: Arindam Banerjee
TA: Eric Theriault
Spring 2010
Problem 1
Part a
Explicitly summing out the hidden variables e and a:

P(b | j, m) = α P(b) Σ_e P(e) Σ_a P(a | b, e) P(j | a) P(m | a) = 0.0051
Part b
P(B, E | j, m) = P(B, E, j, m) / P(j, m) = α P(B) P(E) Σ_a P(a | B, E) P(j | a) P(m | a)
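The enumeration in Part a can be checked mechanically. A sketch using the standard AIMA CPT values (the homework's network, and hence the 0.0051 above, may use different numbers; with the textbook values the posterior comes out near 0.284):

```python
# CPTs of the textbook Burglary network (assumed values, AIMA Fig. 14.2)
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}   # P(JohnCalls=true | Alarm)
P_M = {True: 0.70, False: 0.01}   # P(MaryCalls=true | Alarm)

def joint_b(b):
    """Unnormalized P(B=b, j, m): sum out Earthquake and Alarm."""
    pb = P_B if b else 1 - P_B
    total = 0.0
    for e in (True, False):
        pe = P_E if e else 1 - P_E
        for a in (True, False):
            pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
            total += pe * pa * P_J[a] * P_M[a]
    return pb * total

num = joint_b(True)
posterior = num / (num + joint_b(False))   # P(b | j, m), normalized
```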
Support Vector Machines
CSci 5512: Artificial Intelligence II
Instructor: Arindam Banerjee
April 8, 2012
Linear Separators
Linear SVMs: Separable Case
Convex Functions
A function f is convex if dom(f) is a convex set and, for all x1, x2 ∈ dom(f) and all λ ∈ [0, 1],

    f(λ x1 + (1 − λ) x2) ≤ λ f(x1) + (1 − λ) f(x2)

A function f is concave if −f is convex.
Convex Analysis and Optimization
First Order Conditions
f is convex iff f(y) ≥ f(x) + ∇f(x)ᵀ (y − x) for all x, y ∈ dom(f)
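Both conditions can be spot-checked numerically. A small sanity check for the convex function f(x) = x² (the sample points are arbitrary, chosen only for illustration):

```python
# Numerically spot-check the chord (Jensen-style) inequality and the
# first-order condition for f(x) = x**2, which is convex.
f = lambda x: x * x
df = lambda x: 2 * x   # derivative (gradient in 1-D)

x1, x2 = -1.0, 3.0
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    lhs = f(lam * x1 + (1 - lam) * x2)
    rhs = lam * f(x1) + (1 - lam) * f(x2)
    assert lhs <= rhs + 1e-12          # chord lies above the function

# first-order condition: tangent lines lie below the function
for x in (-2.0, 0.0, 1.5):
    for y in (-1.0, 0.5, 4.0):
        assert f(y) >= f(x) + df(x) * (y - x) - 1e-12

checked = True
```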
Linear Models
CSci 5512: Artificial Intelligence II
Instructor: Arindam Banerjee
April 2, 2012
Univariate Linear Regression
h_w(x) = w1 x + w0

Loss(h_w) = Σ_{i=1}^n L2(yi, h_w(xi)) = Σ_{i=1}^n (yi − h_w(xi))²
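This squared loss is minimized in closed form by the usual least-squares solution, w1 = cov(x, y)/var(x) and w0 = mean(y) − w1·mean(x). A minimal sketch (the helper name is ours):

```python
def fit_univariate(xs, ys):
    """Closed-form least-squares fit of h_w(x) = w1*x + w0:
    w1 = cov(x, y) / var(x), w0 = mean(y) - w1 * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    w0 = my - w1 * mx
    return w1, w0

w1, w0 = fit_univariate([0, 1, 2, 3], [1, 3, 5, 7])   # data lie on y = 2x + 1
```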
Boosting
CSci 5512: Artificial Intelligence II
Instructor: Arindam Banerjee
April 11, 2012
Ensemble Learning
Use a collection of hypotheses from the hypothesis space
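One simple way to combine such a collection is unweighted majority voting. An illustrative sketch (the toy threshold hypotheses are ours, not from the slides):

```python
from collections import Counter

def majority_vote(hypotheses, x):
    """Combine a collection of hypotheses by unweighted majority vote."""
    votes = Counter(h(x) for h in hypotheses)
    return votes.most_common(1)[0][0]

# three weak threshold 'hypotheses'; at x=2, two of the three vote True
hs = [lambda x: x > 0, lambda x: x > 1, lambda x: x > 5]
pred = majority_vote(hs, 2)   # True wins 2-1
```

Boosting refines this idea by weighting both the training examples and the hypotheses' votes.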
Learning with Hidden Variables
CSci 5512: Artificial Intelligence II
Instructor: Arindam Banerjee
April 18, 2012
Hidden Variables
Real-world problems have hidden variables
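The standard tool for such models is EM, which this lecture develops. As an illustrative sketch (this toy model is ours, not the lecture's): a mixture of two biased coins, where the identity of the coin behind each trial is the hidden variable:

```python
def em_two_coins(heads, flips, theta=(0.6, 0.5), iters=20):
    """EM for a uniform mixture of two biased coins with unknown
    biases; which coin produced each trial is hidden."""
    tA, tB = theta
    for _ in range(iters):
        # E-step: posterior responsibility of coin A for each trial
        # (uniform prior over coins; binomial coefficients cancel)
        wsA, wsB = [], []
        for h, n in zip(heads, flips):
            la = tA ** h * (1 - tA) ** (n - h)
            lb = tB ** h * (1 - tB) ** (n - h)
            r = la / (la + lb)
            wsA.append(r)
            wsB.append(1 - r)
        # M-step: responsibility-weighted MLE of each coin's bias
        tA = sum(w * h for w, h in zip(wsA, heads)) / \
             sum(w * n for w, n in zip(wsA, flips))
        tB = sum(w * h for w, h in zip(wsB, heads)) / \
             sum(w * n for w, n in zip(wsB, flips))
    return tA, tB

# two heads-heavy trials and two tails-heavy trials of 10 flips each
tA, tB = em_two_coins(heads=[9, 8, 2, 1], flips=[10, 10, 10, 10])
```

On this cleanly bimodal data EM separates the two biases, converging near 0.85 and 0.15.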
Probabilistic Reasoning over Time: Part I
CSci 5512: Artificial Intelligence II
Instructor: Arindam Banerjee
February 15, 2012
Outline
Time and uncertainty
Probabilistic Reasoning over Time: Part II
CSci 5512: Artificial Intelligence II
Instructor: Arindam Banerjee
February 22, 2012
Hidden Markov Models
Xt is a single, discrete variable
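With a single discrete state variable, filtering is the forward recursion: predict one step with the transition model, then reweight by the observation likelihood. A sketch for the umbrella HMM, assuming the textbook parameter values:

```python
# Umbrella HMM (assumed textbook values):
# P(Rain_t=true | Rain_{t-1}) and P(Umbrella=true | Rain_t)
T = {True: 0.7, False: 0.3}        # transition model
O = {True: 0.9, False: 0.2}        # observation model

def forward(evidence, prior=0.5):
    """Return P(Rain_t = true | u_1..u_t) via the forward recursion."""
    p = prior
    for u in evidence:
        # predict one step ahead
        pr = T[True] * p + T[False] * (1 - p)
        # weight by the likelihood of the observed umbrella value
        lik_r = O[True] if u else 1 - O[True]
        lik_n = O[False] if u else 1 - O[False]
        num = lik_r * pr
        p = num / (num + lik_n * (1 - pr))   # normalize
    return p

p2 = forward([True, True])   # two umbrella days in a row
```

With these values the filtered probability of rain is about 0.818 after one umbrella observation and about 0.883 after two.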
Making Simple Decisions
CSci 5512: Artificial Intelligence II
Instructor: Arindam Banerjee
February 27, 2012
Preferences
A lottery is a situation with uncertain prizes
Lottery L = [p, A; (1 − p), B]
Learning Theory
CSci 5512: Artificial Intelligence II
Instructor: Arindam Banerjee
March 21, 2012
PAC Learning
Learning from a Hypothesis Space H
ArtificialIntelligenceNotes3
1. The Imitation Game
I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words
TuringPart2notes
1. The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two.
Problems
Primarily of two types: Integration and Optimization
Bayesian inference and learning
Computing normalization in Bayesian methods:

    p(y | x) = p(y) p(x | y) / ∫ p(y') p(x | y') dy'

Marginalization:

    p(y | x) = ∫ p(y, z | x) dz

Expectation:

    E_{y|x}[f(y)] = ∫ f(y) p(y | x) dy
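When these integrals have no closed form, Monte Carlo methods approximate them by sampling. A minimal sketch of the expectation integral, using a toy uniform posterior as a stand-in for p(y | x):

```python
import random

def mc_expectation(sample_posterior, f, n=200_000, seed=0):
    """Approximate E_{y|x}[f(y)] by averaging f over posterior samples."""
    rng = random.Random(seed)
    return sum(f(sample_posterior(rng)) for _ in range(n)) / n

# toy posterior p(y|x) = Uniform(0, 1); the exact value of E[y^2] is 1/3
est = mc_expectation(lambda rng: rng.random(), lambda y: y * y)
```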
Probabilistic Reasoning over Time: Part I
CSci 5512: Artificial Intelligence II
Instructor: Arindam Banerjee
February 17, 2010
Outline
Time and uncertainty
Inference: filtering, prediction
A Supplemental Tutorial on Sum-Product Algorithm and Its
Implementation
Kittipat Bot Kampa
July 25, 2011
Abstract
This tutorial demonstrates how to use the sum-product algorithm to efficiently calculate the marginal posterior distribution at a variable node. At
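On a chain-structured factor graph the sum-product messages take only a few lines: each factor sums out its far variable and passes the result toward the query node. A toy sketch with made-up pairwise potentials, checked against brute-force enumeration of the joint:

```python
from itertools import product

# Chain x1 - x2 - x3 with pairwise factors (illustrative potentials)
psi12 = {(a, b): [[1.0, 0.5], [0.5, 2.0]][a][b]
         for a in (0, 1) for b in (0, 1)}
psi23 = {(b, c): [[1.5, 1.0], [1.0, 1.0]][b][c]
         for b in (0, 1) for c in (0, 1)}

# messages into x2 from each neighboring factor: sum out the far variable
m12 = [sum(psi12[(a, b)] for a in (0, 1)) for b in (0, 1)]
m32 = [sum(psi23[(b, c)] for c in (0, 1)) for b in (0, 1)]
belief = [m12[b] * m32[b] for b in (0, 1)]   # product of incoming messages
z = sum(belief)
marginal = [v / z for v in belief]           # posterior marginal of x2

# brute-force check: enumerate the full joint and sum over x1, x3
brute = [0.0, 0.0]
for a, b, c in product((0, 1), repeat=3):
    brute[b] += psi12[(a, b)] * psi23[(b, c)]
bz = sum(brute)
brute = [v / bz for v in brute]
```

The message-passing marginal matches the enumeration exactly; on trees the same scheme avoids the exponential cost of enumerating the joint.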
Probabilistic Reasoning over Time: Part II
CSci 5512: Artificial Intelligence II
Paul Schrater (Fall 2014)
Instructor: Arindam Banerjee
February 24, 2010
Hidden Markov Models
Xt is a single, discrete variable
Preferences

A lottery is a situation with uncertain prizes
Lottery L = [p, A; (1 − p), B]

Notation:
A ≻ B    A preferred to B
A ∼ B    indifference between A and B
A ≿ B    B not preferred to A
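Under the expected-utility framework these slides build toward, a lottery is valued by the probability-weighted utility of its prizes. A minimal sketch (the utility values are illustrative, not from the slides):

```python
def expected_utility(lottery, U):
    """EU of a lottery L = [p1, A1; ...; pk, Ak] for utility table U."""
    return sum(p * U[prize] for p, prize in lottery)

U = {"A": 10.0, "B": 4.0}          # illustrative utilities
L = [(0.3, "A"), (0.7, "B")]       # L = [0.3, A; 0.7, B]
eu = expected_utility(L, U)        # 0.3*10 + 0.7*4 = 5.8
```

An agent with these utilities prefers L to a sure prize worth less than 5.8 utiles.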
Rational preferences