Along with many other rv's, there is a sequence Y^{(m)} = (Y_1, Y_2, \dots, Y_m) of rv's called the observation. We usually assume that Y_1, Y_2, \dots are IID conditional on H = 0 and IID conditional on H = 1. Thus, if the Y_n are continuous,

\[
f_{Y^{(m)}|H}(\mathbf{y} \mid \ell) \;=\; \prod_{n=1}^{m} f_{Y|H}(y_n \mid \ell).
\]

Assume that, on the basis of observing a sample value
of Y^{(m)}, we must make a decision about H, i.e., choose \hat{H} = 0 or \hat{H} = 1, i.e., detect whether or not H is 1.

Decisions in probability theory, as in real life, are not necessarily correct, so we need a criterion for making a choice.
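As a minimal numerical sketch of the IID factorization above, suppose (hypothetically; this model is not specified in the notes) that under H = \ell each sample is Gaussian with mean \mu_\ell and unit variance. Then the conditional density of the whole observation is a product over samples, which is most safely computed as a sum in the log domain:

```python
import math

def log_normal_pdf(y, mean, var=1.0):
    """Log of the N(mean, var) density evaluated at y."""
    return -0.5 * math.log(2 * math.pi * var) - (y - mean) ** 2 / (2 * var)

def log_likelihood(y_samples, mean):
    """log f_{Y^(m)|H}(y | ell): by conditional IID-ness, the joint log
    density is the sum of the per-sample log densities."""
    return sum(log_normal_pdf(y, mean) for y in y_samples)

# Hypothetical parameters: mean 0.0 under H = 0, mean 1.0 under H = 1.
y = [0.9, 1.2, 0.7]
ll0 = log_likelihood(y, mean=0.0)
ll1 = log_likelihood(y, mean=1.0)
```

Working in the log domain avoids underflow when m is large, since a product of m small densities can fall below floating-point range.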
We might maximize the probability of choosing correctly,
for example, or, given a cost for the wrong choice, might
minimize the expected cost.
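The two criteria just mentioned can be sketched as decision rules operating on the posterior probabilities of the hypotheses. The function names and the cost parameters c01, c10 below are illustrative choices, not notation from the notes:

```python
def map_decision(post0, post1):
    """Maximize the probability of choosing correctly:
    pick the hypothesis with the larger posterior probability."""
    return 0 if post0 >= post1 else 1

def min_cost_decision(post0, post1, c01, c10):
    """Minimize expected cost, where c01 is the cost of choosing 1 when
    H = 0, and c10 the cost of choosing 0 when H = 1 (correct choices
    cost nothing).  Expected cost of deciding 0 is c10*post1; of
    deciding 1 it is c01*post0.  Pick the cheaper decision."""
    return 0 if c10 * post1 <= c01 * post0 else 1
```

With equal costs the two rules coincide; a large c10 (missing H = 1 is expensive) biases the decision toward choosing 1.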
Note that the probability experiment here includes not only the experiment of gathering data (i.e., measuring the sample value of Y^{(m)}) but also the sample value of the hypothesis.

From Bayes' rule, recognizing that f(\mathbf{y}) = p_0 f(\mathbf{y} \mid 0) + p_1 f(\mathbf{y} \mid 1),

\[
\Pr\{H = \ell \mid Y^{(m)} = \mathbf{y}\} \;=\; \frac{p_\ell\, f_{Y^{(m)}|H}(\mathbf{y} \mid \ell)}{p_0\, f_{Y^{(m)}|H}(\mathbf{y} \mid 0) + p_1\, f_{Y^{(m)}|H}(\mathbf{y} \mid 1)}.
\]
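A minimal sketch of this Bayes computation, using hypothetical priors and likelihood values chosen purely for illustration:

```python
def posterior(p0, p1, lik0, lik1):
    """Pr{H = ell | Y^(m) = y} by Bayes' rule:
    p_ell * f(y|ell) divided by the total density
    f(y) = p0*f(y|0) + p1*f(y|1)."""
    denom = p0 * lik0 + p1 * lik1
    return p0 * lik0 / denom, p1 * lik1 / denom

# Hypothetical values: equal priors, observation 4x more likely under H=1.
post0, post1 = posterior(0.5, 0.5, 0.2, 0.8)
```

By construction the two posteriors share the same denominator f(y), so they always sum to 1.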
Fall '11, staff