CSE 555 Spring 2010 Homework 1: Bayesian Decision Theory
Jason J. Corso
Computer Science and Engineering, SUNY at Buffalo
jcorso@buffalo.edu
Date Assigned: 13 Jan 2010
Date Due: 1 Feb 2010
Homework must be submitted in class. No late work will be accepted.

Problem 1: Bayesian Decision Rule (30%)

Suppose the task is to classify the input signal x into one of K classes \{\omega_1, \omega_2, \ldots, \omega_K\}, such that the action \alpha(x) = \alpha_i means classifying x into class \omega_i. The Bayesian decision rule maximizes the posterior probability:

    \alpha_{Bayes}(x) = \alpha^* = \arg\max_i \, p(\omega_i \mid x).

Suppose we replace it by a randomized decision rule, which classifies x into class \omega_i with probability p(\omega_i \mid x), i.e., \alpha_{rand}(x) = \alpha_i with probability p(\omega_i \mid x).

Solution: Maximizing the posterior probability is equivalent to minimizing the overall risk. Using the zero-one loss function, the overall risk for the Bayes decision rule is

    R_{Bayes} = \int R(\alpha_{Bayes}(x) \mid x) \, p(x) \, dx
              = \int \Big[ 1 - \max_{j = 1, \ldots, K} P(\omega_j \mid x) \Big] \, p(x) \, dx.

For simplicity, abbreviating the class with the maximum posterior probability as \omega_{max}, we get:

    R_{Bayes} = \int \big( 1 - P(\omega_{max} \mid x) \big) \, p(x) \, dx.

1. What is the overall risk R_{rand} for this decision rule? Derive it in terms of the posterior probability using the zero-one loss function.

Solution: For any given x, the probability that class \omega_j (j = 1, \ldots, K) is the correct class is P(\omega_j \mid x). The randomized rule selects class \omega_j with probability P(\omega_j \mid x), and conditioned on that choice it is wrong with probability 1 - P(\omega_j \mid x). Thus the zero-one conditional risk becomes \sum_j P(\omega_j \mid x) \big( 1 - P(\omega_j \mid x) \big) on average, and

    R_{rand} = \int \Big[ \sum_j P(\omega_j \mid x) \big( 1 - P(\omega_j \mid x) \big) \Big] \, p(x) \, dx
             = \int \Big[ \sum_j P(\omega_j \mid x) - \sum_j P(\omega_j \mid x)^2 \Big] \, p(x) \, dx
             = \int \Big[ 1 - \sum_j P(\omega_j \mid x)^2 \Big] \, p(x) \, dx.

2. Show that this risk R_{rand} is always no smaller than the Bayes risk R_{Bayes}. Thus, we cannot benefit from the randomized decision.
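The derivation of R_rand can be spot-checked numerically. A minimal sketch: the posterior vector, class count, and sample size below are illustrative assumptions, not part of the assignment.

```python
import numpy as np

# Hypothetical posterior over K = 3 classes at a fixed input x (an assumption
# chosen for illustration; any probability vector works).
posterior = np.array([0.5, 0.3, 0.2])

# Bayes conditional risk under zero-one loss: 1 - max_j P(w_j | x).
bayes_risk = 1.0 - posterior.max()

# Randomized-rule conditional risk from the derivation:
# sum_j P(w_j|x) (1 - P(w_j|x)) = 1 - sum_j P(w_j|x)^2.
rand_risk = 1.0 - np.sum(posterior ** 2)

# Monte Carlo check: the true class and the randomized prediction are drawn
# independently from the posterior; the risk is the disagreement rate.
rng = np.random.default_rng(0)
n = 200_000
true_class = rng.choice(3, size=n, p=posterior)
predicted = rng.choice(3, size=n, p=posterior)
mc_risk = np.mean(true_class != predicted)

print(bayes_risk)  # 0.5
print(rand_risk)   # 1 - (0.25 + 0.09 + 0.04) = 0.62
```

The Monte Carlo estimate mc_risk should fall close to the closed-form rand_risk, and both exceed the Bayes risk, as part 2 proves.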
Solution: Proving R_{rand} \geq R_{Bayes} is equivalent to proving \sum_j P(\omega_j \mid x)^2 \leq P(\omega_{max} \mid x). Indeed,

    \sum_j P(\omega_j \mid x)^2 \leq \sum_j P(\omega_j \mid x) \, P(\omega_{max} \mid x) = P(\omega_{max} \mid x),

which completes the proof: R_{rand} is always no smaller than R_{Bayes}.
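The key inequality sum_j P(w_j|x)^2 <= P(w_max|x) can likewise be spot-checked on many randomly drawn posterior vectors. A minimal sketch: the Dirichlet sampling, class count, and sample size are illustrative assumptions.

```python
import numpy as np

# Draw 10,000 random posterior vectors over K = 5 classes (uniform Dirichlet,
# an assumption chosen so every probability vector is reachable).
rng = np.random.default_rng(42)
posteriors = rng.dirichlet(np.ones(5), size=10_000)

sum_sq = np.sum(posteriors ** 2, axis=1)  # sum_j P(w_j|x)^2 per vector
p_max = posteriors.max(axis=1)            # P(w_max|x) per vector

# Every sampled posterior should satisfy the inequality (up to float tolerance).
print(bool(np.all(sum_sq <= p_max + 1e-12)))  # True
```

This is of course no substitute for the one-line proof above, only a sanity check of it.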