ISyE8843A, Brani Vidakovic    Handout 4

1 Decision Theoretic Setup: Loss, Posterior Risk, Bayes Action

Let $\mathcal{A}$ be the action space and $a \in \mathcal{A}$ an action. For example, in estimation problems $\mathcal{A}$ is the set of real numbers and $a$ is a number; say $a = 2$ is adopted as an estimator of $\theta \in \Theta$. In other words, the inference maker "took" the action $a = 2$ in estimating $\theta$. In testing problems, the action space is $\mathcal{A} = \{\mathrm{accept}, \mathrm{reject}\}$. The action, as a function of the observations, is called a decision rule, or simply a rule. An example of a rule is $a(X_1, \ldots, X_n) = \bar{X}$. Rules are often denoted by $\delta(X)$.

No action can be taken without potential losses. Statisticians are pessimistic creatures who replaced the nicely coined term "utility" with the more somber term "loss," although, for all practical purposes, the loss is a negative utility. The loss function is denoted by $L(\theta, a)$ and represents the payoff to the decision maker (statistician) if he takes the action $a \in \mathcal{A}$ and the real state of nature is $\theta \in \Theta$. The loss function usually satisfies the following properties: $L(a, a) = 0$, and $L(\theta, a)$ is a nondecreasing function of $|a - \theta|$. Examples are the squared error loss (SEL) $L(\theta, a) = (\theta - a)^2$, the absolute loss $L(\theta, a) = |\theta - a|$, and the 0-1 loss $L(\theta, a) = \mathbf{1}(|a - \theta| > m)$. The SEL is the most common loss in estimation problems and the mathematically easiest to work with. The expected SEL (frequentist risk) is linked to the variance and the bias of an estimator,

$$ E^{X \mid \theta} (\theta - \delta(X))^2 = \mathrm{Var}(\delta(X)) + [\mathrm{bias}(\delta(X))]^2, $$

where $\mathrm{bias}(\delta(X)) = E^{X \mid \theta} \delta(X) - \theta$. One criticism of the SEL is that it grows fast (quadratically) as the error increases, thus punishing large errors severely.

Example 1. The LINEX loss is defined as

$$ L(\theta, a) = \exp\{ c(a - \theta) \} - c(a - \theta) - 1, \quad c \in \mathbb{R}. $$

For $c > 0$, the loss function $L(\theta, a)$ is quite asymmetric about $0$, with overestimation being more costly than underestimation. As $|a - \theta| \to \infty$, the loss $L(\theta, a)$ increases almost exponentially when $a - \theta > 0$ and almost linearly when $a - \theta < 0$. For $c < 0$, the linearity-exponentiality phenomenon is reversed. Also, when $|a - \theta|$ is very small, $L(\theta, a)$ is near $c^2 (a - \theta)^2 / 2$.
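The variance-plus-squared-bias decomposition of the frequentist risk under SEL can be checked numerically. Below is a minimal Monte Carlo sketch; the shrunk estimator $\delta(X) = 0.9\bar{X}$, the normal model, and all constants are illustrative choices, not from the handout:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000

# Draw `reps` samples of size n from N(theta, 1) and apply a
# deliberately biased estimator: the shrunk sample mean 0.9 * Xbar.
samples = rng.normal(theta, 1.0, size=(reps, n))
delta = 0.9 * samples.mean(axis=1)

mse = np.mean((delta - theta) ** 2)     # frequentist risk under SEL
var = np.var(delta)                     # Var(delta(X))
bias = np.mean(delta) - theta           # bias(delta(X)), here about -0.2

print(mse, var + bias**2)               # the two numbers coincide
```

For this toy estimator, $\mathrm{bias} = 0.9\theta - \theta = -0.2$ and $\mathrm{Var} = 0.81/n = 0.081$, so the risk is about $0.081 + 0.04 = 0.121$. The two printed values agree to floating-point precision because the decomposition is also an exact algebraic identity for the sample moments.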
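The asymmetry of the LINEX loss and its quadratic behavior near $a = \theta$ are easy to verify directly. A small sketch (the particular values of $c$, $\theta$, and $a$ are arbitrary illustrations):

```python
import math

def linex(theta, a, c):
    """LINEX loss L(theta, a) = exp{c(a - theta)} - c(a - theta) - 1."""
    d = a - theta
    return math.exp(c * d) - c * d - 1.0

c = 1.0  # c > 0: overestimation is more costly than underestimation

# Overestimating by 2 costs exp(2) - 3, roughly 4.39 (near-exponential);
# underestimating by 2 costs exp(-2) + 1, roughly 1.14 (near-linear).
print(linex(0.0, 2.0, c))
print(linex(0.0, -2.0, c))

# Near a = theta the loss is approximately quadratic: c^2 (a - theta)^2 / 2.
d = 1e-3
print(linex(0.0, d, c), c**2 * d**2 / 2)
```

Taking $c < 0$ in the same function reverses the roles of over- and underestimation, matching the linearity-exponentiality reversal noted above.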
 Spring '11