# 2 The Bayes Premium


In this chapter we will study the best experience premium or Bayes premium, which we defined in (1.3),

$$P^{\text{Bayes}} := \widehat{\mu(\Theta)} = E\left[\mu(\Theta) \mid \mathbf{X}\right].$$

To do this, we will use concepts from statistical decision theory. In particular, we will see in exactly which sense $P^{\text{Bayes}}$ is "best" and why it is called the Bayes premium.

## 2.1 Basic Elements of Statistical Decision Theory

Here we give an overview of the elements of statistical decision theory which will be necessary for our exposition. For a comprehensive study of statistical decision theory, see for example Lehmann [Leh86].

The raw material for a statistical decision is the observation vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)'$. The distribution function $F_\vartheta(\mathbf{x}) = P_\vartheta[\mathbf{X} \le \mathbf{x}]$ is completely or partly unknown. (Equivalently: the parameter $\vartheta$ is completely or partly unknown.) We are interested in the value of a specific functional $g(\vartheta)$ of the parameter $\vartheta$. We seek a function $T(\mathbf{X})$, which depends only on the observation vector $\mathbf{X}$ and which estimates $g(\vartheta)$ "as well as possible". The function $T(\mathbf{X})$ is called an estimator for $g(\vartheta)$.

We formulate this problem in the following way:

- $\vartheta \in \Theta$: the set of parameters, which contains the true value of $\vartheta$;
- $T \in \mathcal{D}$: the set of functions to which the estimator must belong. $T$ is a map from the observation space $\mathbb{R}^n$ into the set of all possible values of the functional $g$, that is, the set $\{g(\vartheta) : \vartheta \in \Theta\}$.
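To make the Bayes premium $E[\mu(\Theta) \mid \mathbf{X}]$ concrete, it can be computed explicitly when the risk parameter takes only finitely many values. The following is a minimal numerical sketch under an assumed two-type Poisson portfolio; the risk types, prior weights, and claim counts are illustrative inventions, not values from the text.

```python
import numpy as np
from math import exp, factorial

# Hypothetical portfolio with two risk types; mu(theta) = E[X_j | Theta = theta]
# is the true individual premium (here: the Poisson claim-frequency per type).
prior = np.array([0.7, 0.3])   # a priori distribution U of Theta
mu = np.array([1.0, 3.0])      # mu(theta) for the 'good' and 'bad' type

# Observed claim counts X = (X_1, ..., X_n) for one risk
X = [2, 4, 3]

def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

# Likelihood of the observations under each risk type
likelihood = np.array([np.prod([poisson_pmf(k, lam) for k in X]) for lam in mu])

# Posterior distribution of Theta given X (Bayes' theorem)
posterior = prior * likelihood
posterior /= posterior.sum()

# Bayes premium: P^Bayes = E[mu(Theta) | X]
p_bayes = float(posterior @ mu)
print(p_bayes)  # lies between 1.0 and 3.0, pulled toward the type the data favors
```

With the observed mean of 3 claims per period, the posterior puts most weight on the 'bad' type, so the Bayes premium lands close to 3.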


The idea of "as well as possible" is made precise by the introduction of a loss function:

$L(\vartheta, T(\mathbf{x}))$: the loss incurred if $\vartheta$ is the "true" parameter and $T(\mathbf{x})$ is the value taken by the estimator when the value $\mathbf{x}$ is observed.

From this we derive the risk function of the estimator $T$,

$$R_T(\vartheta) := E_\vartheta\left[L(\vartheta, T(\mathbf{X}))\right] = \int_{\mathbb{R}^n} L(\vartheta, T(\mathbf{x}))\, dF_\vartheta(\mathbf{x}). \tag{2.1}$$

(Only such functions $T$ and $L$ are allowed for which the right-hand side of (2.1) exists.)

The goal then is to find an estimator $T \in \mathcal{D}$ for which the risk $R_T(\vartheta)$ is as small as possible. In general, it is not possible to do this simultaneously for all values of $\vartheta$; in other words, in general there is no $T$ which minimizes $R_T(\vartheta)$ uniformly over $\vartheta$. In Figure 2.1 we see an example where, depending on the value of $\vartheta$, either $T_1$ or $T_2$ has the smaller value of the risk function.

Fig. 2.1. Risk functions $R_{T_1}$ and $R_{T_2}$ for $T_1$ and $T_2$

## 2.2 Bayes Risk and Bayes Estimator

In Bayesian statistics, a smoothed average of the curve $R_T(\vartheta)$ is considered, where the average is weighted by means of a probability distribution $U(\vartheta)$ (called an a priori distribution for $\Theta$). In other words, we consider the expected value of $R_T(\Theta)$, regarding $\vartheta$ as the realization of a random variable $\Theta$ with probability distribution $U$.
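The crossing risk curves of Figure 2.1 are easy to reproduce numerically. The following sketch assumes squared-error loss and a normal model with two illustrative estimators (the sample mean and a constant estimator); this setup is an invented example, not taken from the text.

```python
import numpy as np

# L(theta, t) = (theta - t)^2, with X_1, ..., X_n iid N(theta, 1), n = 10.
n = 10
thetas = np.linspace(-2.0, 2.0, 401)   # grid of "true" parameter values

# T1 = sample mean: unbiased, so R_{T1}(theta) = Var(mean) = 1/n for every theta
risk_T1 = np.full_like(thetas, 1.0 / n)

# T2 = constant estimator T2(X) = 0: R_{T2}(theta) = (theta - 0)^2
risk_T2 = thetas**2

# Neither risk curve lies below the other everywhere, as in Fig. 2.1:
T2_better = risk_T2 < risk_T1   # True only for theta near 0
print(T2_better.any(), (~T2_better).any())
```

The constant estimator beats the sample mean only when the true $\vartheta$ happens to lie near its guess, so no uniform ranking is possible; this is exactly what motivates averaging the risk over a prior in the next definition.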
**Definition 2.1.** We define the Bayes risk of the estimator $T$ with respect to the a priori distribution $U(\vartheta)$ as

$$R(T) := \int_\Theta R_T(\vartheta)\, dU(\vartheta).$$

Assuming that the defined integral makes sense, with this criterion we can always rank estimators by increasing risk. In other words, there is a complete ordering on the set of estimators.
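Continuing the normal example above, the Bayes risk turns the two incomparable risk curves into a single number each, so the estimators can be ranked. This is a minimal numerical sketch assuming a normal prior $U = N(0, \tau^2)$ with $\tau^2 = 0.5$, an invented choice for illustration.

```python
import numpy as np

# X_1, ..., X_n iid N(theta, 1) with n = 10, squared-error loss,
# prior U = N(0, tau2) on Theta, discretized on a fine grid.
n, tau2 = 10, 0.5
thetas = np.linspace(-6.0, 6.0, 20001)
dU = np.exp(-thetas**2 / (2 * tau2))
dU /= dU.sum()                  # normalized prior weights on the grid

# R(T) = integral of R_T(theta) dU(theta), computed as a weighted sum:
bayes_risk_T1 = np.sum((1.0 / n) * dU)   # sample mean: R_T1(theta) = 1/n
bayes_risk_T2 = np.sum(thetas**2 * dU)   # constant 0:  R_T2(theta) = theta^2

print(bayes_risk_T1, bayes_risk_T2)
```

Here the sample mean has Bayes risk $1/n = 0.1$, while the constant estimator has Bayes risk $\approx \tau^2 = 0.5$, so under this prior the sample mean is ranked strictly better, even though neither dominated the other pointwise.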


## This note was uploaded on 12/02/2011 for the course ACTSC 432 taught by Professor David Landriault during the Spring '09 term at Waterloo.



