March 18, 2003

1.2 Decision Theory. Usually in statistics, instead of just two possible probability distributions $P$, $Q$, as in the last section, there is an infinite family $\mathcal{P}$ of such distributions, defined on a sample space, which is a measurable space $(X, \mathcal{B})$, in other words a set $X$ together with a $\sigma$-algebra $\mathcal{B}$ of subsets of $X$. As noted previously, if $X$ is a subset of a Euclidean space, then $\mathcal{B}$ will usually be the $\sigma$-algebra of Borel subsets of $X$. If $X$ is a countable set, then $\mathcal{B}$ will usually be the $\sigma$-algebra of all subsets of $X$ (if also $X \subset \mathbb{R}^k$, then all its subsets are in fact Borel sets). A probability measure on $\mathcal{B}$ will be called a law.

The family $\mathcal{P}$ of laws on $(X, \mathcal{B})$ is usually written as $\{P_\theta,\ \theta \in \Theta\}$, where $\Theta$ is called a parameter space. For example, if $\mathcal{P}$ is the set of all normal measures $N(\mu, \sigma^2)$ for $\mu \in \mathbb{R}$ and $\sigma > 0$, we can take $\theta = (\mu, \sigma)$ or $(\mu, \sigma^2)$, where in either case $\Theta$ is the open upper half-plane, that is, the set of all $(t, u) \in \mathbb{R}^2$ such that $u > 0$. We assume that the function $\theta \mapsto P_\theta$ from $\Theta$ to laws on $\mathcal{B}$ is one-to-one, in other words $P_\theta \neq P_\phi$ whenever $\theta \neq \phi$ in $\Theta$. So the sets $\mathcal{P}$ and $\Theta$ are in 1-1 correspondence and any structure on one can be taken over to the other. We also assume given a $\sigma$-algebra $\mathcal{T}$ of subsets of $\Theta$. Most often $\Theta$ will be a subset of some Euclidean space and $\mathcal{T}$ the family of Borel subsets of $\Theta$. The family $\{P_\theta,\ \theta \in \Theta\}$ will be called measurable on $(\Theta, \mathcal{T})$ if and only if for each $B \in \mathcal{B}$, the function $\theta \mapsto P_\theta(B)$ is measurable on $\Theta$. If $\Theta$ is finite or countable, then (as with sample spaces) $\mathcal{T}$ will usually be taken to be the collection of all its subsets. In that case the family $\{P_\theta,\ \theta \in \Theta\}$ is always measurable.

An observation will be a point $x$ of $X$. Given $x$, the statistician tries to make inferences about $\theta$, such as estimating $\theta$ by a function $\hat{\theta}(x)$. For example, if $X = \mathbb{R}^n$ and $P_\theta = N(\theta, 1)^n$, so $x = (X_1, \ldots, X_n)$ where the $X_i$ are i.i.d. with distribution $N(\theta, 1)$, then $\hat{\theta}(x) = \overline{X} := (X_1 + \cdots + X_n)/n$ is the classical estimator of $\theta$.

In decision theory, there is also a measurable space $(D, \mathcal{S})$, called the decision space. A measurable function $d(\cdot)$ from $X$ into $D$ is called a decision rule. Such a rule says that if $x$ is observed, then action $d(x)$ should be taken. One possible decision space $D$ would be the set of all $d_\theta$ for $\theta \in \Theta$, where $d_\theta$ is the decision (estimate) that $\theta$ is the true value of the parameter. Or, if we just have a set $\mathcal{P}$ of laws, then $d_P$ would be the decision that $P$ is the true law. Thus in the last section we had $\mathcal{P} = \{P, Q\}$ and for non-randomized tests, $D = \{d_P, d_Q\}$. There, a decision rule is equivalent to a measurable subset of $X$, which was taken to be the set where the decision will be $d_Q$. For randomized rules, still for $\mathcal{P} = \{P, Q\}$, the decision space $D$ can be taken as the interval $0 \le d \le 1$, where $d(x)$ is the probability that $Q$ will be chosen if...
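To make the estimation and testing examples above concrete, here is a minimal numerical sketch in Python (not part of the original notes). It draws one observation $x$ from $N(\theta, 1)^n$, computes the sample-mean estimator $\hat{\theta}(x) = \overline{X}$, and evaluates one non-randomized decision rule for the two-law case. The specific values theta_true = 2.0, n = 50, the laws $P = N(0,1)^n$ and $Q = N(1,1)^n$, and the cutoff 1/2 are illustrative assumptions, not choices made in the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimation example: sample space X = R^n, P_theta = N(theta, 1)^n.
theta_true = 2.0   # the "unknown" parameter theta (illustrative value)
n = 50             # sample size (illustrative value)

# One observation x = (X_1, ..., X_n), the X_i i.i.d. with law N(theta, 1).
x = rng.normal(loc=theta_true, scale=1.0, size=n)

# The classical estimator theta_hat(x) = X_bar = (X_1 + ... + X_n) / n.
theta_hat = x.mean()
print(f"sample-mean estimate of theta: {theta_hat:.3f}")

# Two-law example: a non-randomized decision rule for P = N(0,1)^n versus
# Q = N(1,1)^n is determined by a measurable subset A of X on which the
# decision is d_Q.  Taking A = {x : X_bar > 1/2} is one illustrative choice.
def decision_rule(x):
    return "d_Q" if x.mean() > 0.5 else "d_P"

print("decision for the simulated sample:", decision_rule(x))
```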