ISyE8843A, Brani Vidakovic. Handout 4.

1 Decision Theoretic Setup: Loss, Posterior Risk, Bayes Action

Let $\mathcal{A}$ be the action space and $a \in \mathcal{A}$ be an action. For example, in estimation problems $\mathcal{A}$ is the set of real numbers and $a$ is a number; say $a = 2$ is adopted as an estimator of $\theta$. In other words, the inference maker took the action $a = 2$ in estimating $\theta$. In testing problems, the action space is $\mathcal{A} = \{\text{accept}, \text{reject}\}$. The action, as a function of the observations, is called a decision rule, or simply a rule. An example of a rule is $a(X_1, \dots, X_n) = \bar{X}$. Often, rules are denoted by $\delta(X)$.

No action can be taken without potential losses. Statisticians are pessimistic creatures who replaced the nicely coined term utility with the more somber term loss, although, for all practical purposes, the loss is a negative utility. The loss function is denoted by $L(\theta, a)$ and represents the payoff to the decision maker (statistician) if he takes the action $a \in \mathcal{A}$ and the real state of nature is $\theta$. The loss function usually satisfies the following properties: $L(a, a) = 0$, and $L(\theta, a)$ is a nondecreasing function of $|\theta - a|$. Examples are the squared-error loss (SEL) $L(\theta, a) = (\theta - a)^2$, the absolute loss $L(\theta, a) = |\theta - a|$, the 0-1 loss $L(\theta, a) = \mathbf{1}(|\theta - a| > m)$, etc. The most common for estimation problems, and mathematically the easiest to work with, is the SEL. The expected SEL (frequentist risk) is linked with the variance and bias of the estimator,
$$E^{X|\theta} (\theta - \delta(X))^2 = \mathrm{Var}(\delta(X)) + [\mathrm{bias}(\delta(X))]^2,$$
where $\mathrm{bias}(\delta(X)) = E^{X|\theta} \delta(X) - \theta$. One criticism of the SEL is that it grows fast (quadratically) as the error increases, thus severely punishing large errors.

Example 1. The LINEX loss is defined as
$$L(\theta, a) = \exp\{c(a - \theta)\} - c(a - \theta) - 1, \quad c \in \mathbb{R}.$$
For $c > 0$, the loss function $L(\theta, a)$ is quite asymmetric about $0$, with overestimation being more costly than underestimation. As $|a - \theta| \to \infty$, the loss $L(\theta, a)$ increases almost exponentially when $a - \theta > 0$ and almost linearly when $a - \theta < 0$. For $c < 0$, the linearity-exponentiality phenomenon is reversed. Also, when $|a - \theta|$ is very small, $L(\theta, a)$ is near $c^2 (a - \theta)^2 / 2$.
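The risk decomposition for the SEL can be checked numerically. The sketch below is a hypothetical Monte Carlo setup (a shrinkage rule $\delta(X) = 0.9\,\bar{X}$, chosen only so that the bias term is nonzero); note that for sample moments the identity holds exactly, not just in the limit.

```python
import numpy as np

# Check the frequentist-risk identity for squared-error loss:
#   E[(theta - delta(X))^2] = Var(delta(X)) + [bias(delta(X))]^2.
# Hypothetical setup: X_1, ..., X_n ~ N(theta, 1) and the biased
# shrinkage rule delta(X) = 0.9 * Xbar (an illustrative choice).

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 100_000

X = rng.normal(theta, 1.0, size=(reps, n))
delta = 0.9 * X.mean(axis=1)            # the decision rule delta(X), one value per replicate

mse = np.mean((theta - delta) ** 2)     # Monte Carlo estimate of the expected SEL
var = np.var(delta)                     # Var(delta(X))
bias = np.mean(delta) - theta           # bias(delta(X)) = E delta(X) - theta

# The decomposition is an algebraic identity for the sample moments,
# so the two sides agree up to floating-point error.
assert abs(mse - (var + bias ** 2)) < 1e-8
print(mse, var + bias ** 2)
```

Because $\delta$ shrinks toward zero, both the variance and the bias terms contribute visibly to the risk here; an unbiased rule such as $\bar{X}$ would make the bias term vanish.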
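The two properties of the LINEX loss noted in Example 1 (asymmetry for $c > 0$, and the quadratic behavior $c^2(a-\theta)^2/2$ near $a = \theta$) can be verified directly; the sketch below uses illustrative values $c = 1$, $\theta = 0$.

```python
import numpy as np

def linex(theta, a, c):
    """LINEX loss: L(theta, a) = exp{c(a - theta)} - c(a - theta) - 1."""
    d = c * (a - theta)
    return np.exp(d) - d - 1.0

c, theta = 1.0, 0.0  # illustrative choices

# Asymmetry for c > 0: overestimation by 3 units costs far more
# than underestimation by 3 units.
assert linex(theta, 3.0, c) > linex(theta, -3.0, c)

# Near a = theta the loss is approximately c^2 (a - theta)^2 / 2,
# the second-order Taylor expansion of exp(d) - d - 1 at d = 0.
a = 0.01
approx = c**2 * (a - theta) ** 2 / 2
assert abs(linex(theta, a, c) - approx) < 1e-6
```

For $c < 0$ the first assertion flips, matching the reversed linearity-exponentiality behavior described above.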
Spring '11, Vidakovic.