Bayes and EB benchmarking for SAE. Dissertation 2012.

The posterior risk of $\hat{\theta}^{BM1}$ under the given loss in Theorem 1 simplifies to

$$\sum_{i=1}^{m} \phi_i \left[ V(\theta_i \mid \hat{\theta}) + s^{-2} r_i^2 (t - \bar{\hat{\theta}}^{B}_{w})^2 \right],$$

where, as in Theorem 1, $r_i = w_i/\phi_i$, $s = \sum_{j=1}^{m} w_j^2/\phi_j$, and $\bar{\hat{\theta}}^{B}_{w} = \sum_{j=1}^{m} w_j \hat{\theta}^{B}_j$. Hence the excess posterior risk due to adjustment of the Bayes estimator, if the assumed prior were "true," is given by

$$\sum_{i=1}^{m} \phi_i s^{-2} r_i^2 (t - \bar{\hat{\theta}}^{B}_{w})^2 = s^{-1} (t - \bar{\hat{\theta}}^{B}_{w})^2.$$

When $\phi_i = w_i$ and $\sum_{i=1}^{m} w_i = 1$, the expression further simplifies to $(t - \bar{\hat{\theta}}^{B}_{w})^2$.

However, with a prior different from the assumed one, the adjusted Bayes estimator can have lower posterior risk than the Bayes estimator. To see this in a very simple setting, consider the case where $\hat{\theta}_i \mid \theta_i \overset{ind}{\sim} N(\theta_i, 1)$ and $\theta_i \overset{iid}{\sim} N(0, \sigma_u^2)$, $i = 1, \ldots, m$. Then the Bayes estimator of $\theta$ is $\hat{\theta}^{B} = (1 - B)\hat{\theta}$, where $B = (1 + \sigma_u^2)^{-1}$. Also, if $\phi_i = w_i$ and $\sum_{i=1}^{m} w_i = 1$, then $r_i = 1$ for all $i = 1, \ldots, m$, and $s = 1$. Further, if $t = \bar{\hat{\theta}}_w = \sum_{i=1}^{m} w_i \hat{\theta}_i$, as is often the case with internal benchmarking, then

$$\hat{\theta}^{BM1} = (1 - B)\hat{\theta} + B \bar{\hat{\theta}}_w \mathbf{1}_m,$$

where $\mathbf{1}_m$ denotes an $m$-component vector with each element equal to one. If instead we have the prior $\theta_i \overset{iid}{\sim} N(0, \sigma_v^2)$, and $B_0 = (1 + \sigma_v^2)^{-1}$, then after some simplification the posterior risk of $\hat{\theta}^{B}$ is

$$(1 - B_0) + (B - B_0)^2 \sum_{i=1}^{m} w_i (\hat{\theta}_i - \bar{\hat{\theta}}_w)^2 + (B - B_0)^2 \bar{\hat{\theta}}_w^2,$$

while that of $\hat{\theta}^{BM1}$ is

$$(1 - B_0) + (B - B_0)^2 \sum_{i=1}^{m} w_i (\hat{\theta}_i - \bar{\hat{\theta}}_w)^2 + B_0^2 \bar{\hat{\theta}}_w^2.$$

Now $\hat{\theta}^{BM1}$ has smaller posterior risk than $\hat{\theta}^{B}$ if and only if $|B - B_0|/B_0 > 1$, which is quite possible if $B_0$ is very small compared to $B$, i.e., if $\sigma_v^2$ is much larger than $\sigma_u^2$.

We now provide a generalization of Theorem 1 in which we consider multiple constraints instead of a single constraint. As an example, for the SAIPE county-level analysis, one may need to control the county estimates in each state so that their weighted total agrees with the corresponding state estimate. We now consider the more general quadratic loss

$$L(\theta, e) = (e - \theta)^T \Omega (e - \theta), \qquad (3.1.4)$$

where $\Omega$ is a positive definite matrix.
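The toy Gaussian example above can be checked numerically. The sketch below is illustrative only: the values of $m$, $\sigma_u^2$, $\sigma_v^2$, the equal weights $w_i = 1/m$, and the random seed are assumptions chosen for demonstration, not values from the dissertation. It verifies the two posterior-risk formulas by direct computation and confirms that the benchmarked estimator wins exactly when $|B - B_0|/B_0 > 1$.

```python
import numpy as np

# Toy setting: theta_hat_i | theta_i ~ N(theta_i, 1); assumed prior N(0, sigma_u^2),
# but the "true" prior is N(0, sigma_v^2) with sigma_v^2 >> sigma_u^2.
rng = np.random.default_rng(0)
m = 10
sigma_u2, sigma_v2 = 1.0, 25.0
B = 1.0 / (1.0 + sigma_u2)    # shrinkage under the assumed prior
B0 = 1.0 / (1.0 + sigma_v2)   # shrinkage under the true prior
w = np.full(m, 1.0 / m)       # illustrative weights summing to 1

# Draw theta_hat from its marginal under the true prior: N(0, 1 + sigma_v^2)
theta_hat = rng.normal(0.0, np.sqrt(1.0 + sigma_v2), size=m)
bar_w = w @ theta_hat         # weighted mean of direct estimates (internal target t)

# Under the true prior, theta_i | theta_hat ~ N((1 - B0) theta_hat_i, 1 - B0), so
# E[(theta_i - a_i)^2 | theta_hat] = (1 - B0) + ((1 - B0) theta_hat_i - a_i)^2.
def post_risk(a):
    return np.sum(w * ((1.0 - B0) + ((1.0 - B0) * theta_hat - a) ** 2))

est_B = (1.0 - B) * theta_hat                # Bayes estimator under assumed prior
est_BM1 = (1.0 - B) * theta_hat + B * bar_w  # benchmarked estimator with t = bar_w

# Closed-form posterior risks from the text
S = np.sum(w * (theta_hat - bar_w) ** 2)
risk_B_formula = (1 - B0) + (B - B0) ** 2 * S + (B - B0) ** 2 * bar_w ** 2
risk_BM1_formula = (1 - B0) + (B - B0) ** 2 * S + B0 ** 2 * bar_w ** 2

assert np.isclose(post_risk(est_B), risk_B_formula)
assert np.isclose(post_risk(est_BM1), risk_BM1_formula)

# The benchmarked estimator has smaller risk exactly when |B - B0| / B0 > 1
print(abs(B - B0) / B0 > 1, post_risk(est_BM1) < post_risk(est_B))
```

Here $|B - B_0|/B_0 = 12 > 1$, so the benchmarked estimator dominates despite the "extra" adjustment term: the assumed prior over-shrinks, and pulling estimates back toward the weighted mean of the direct estimates partially undoes that over-shrinkage.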
The following theorem provides a Bayesian solution for the minimization of $E[L(\theta, e) \mid \hat{\theta}]$ subject to the constraint $W^T e = t$, where $t$ is a $q$-component vector and $W$ is an $m \times q$ matrix of rank $q < m$.

**Theorem 2.** The constrained Bayesian solution under the loss (3.1.4) is given by

$$\hat{\theta}^{MBM} = \hat{\theta}^{B} + \Omega^{-1} W (W^T \Omega^{-1} W)^{-1} (t - \bar{\hat{\theta}}^{B}_{w}),$$

where $\bar{\hat{\theta}}^{B}_{w} = W^T \hat{\theta}^{B}$.

*Proof.* First write

$$E[(e - \theta)^T \Omega (e - \theta) \mid \hat{\theta}] = E[(\theta - \hat{\theta}^{B})^T \Omega (\theta - \hat{\theta}^{B}) \mid \hat{\theta}] + (e - \hat{\theta}^{B})^T \Omega (e - \hat{\theta}^{B}).$$

Hence, the problem reduces to minimization of $(e - \hat{\theta}^{B})^T \Omega (e - \hat{\theta}^{B})$ with respect to $e$ subject to $W^T e = t$. The result follows from the identity

$$\begin{aligned}
(e - \hat{\theta}^{B})^T \Omega (e - \hat{\theta}^{B})
&= [e - \hat{\theta}^{B} - \Omega^{-1} W (W^T \Omega^{-1} W)^{-1} (t - \bar{\hat{\theta}}^{B}_{w})]^T \, \Omega \, [e - \hat{\theta}^{B} - \Omega^{-1} W (W^T \Omega^{-1} W)^{-1} (t - \bar{\hat{\theta}}^{B}_{w})] \\
&\quad + (t - \bar{\hat{\theta}}^{B}_{w})^T (W^T \Omega^{-1} W)^{-1} (t - \bar{\hat{\theta}}^{B}_{w}),
\end{aligned}$$

which holds for every $e$ satisfying $W^T e = t$: the first term on the right is nonnegative and vanishes at $e = \hat{\theta}^{MBM}$, while the second does not involve $e$.
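Theorem 2's adjustment reduces to two linear solves. The NumPy sketch below uses randomly generated stand-ins for $\hat{\theta}^{B}$, $\Omega$, $W$, and $t$ (all illustrative assumptions, not values from the dissertation) and checks two consequences of the theorem: the adjusted estimator satisfies the constraints $W^T \hat{\theta}^{MBM} = t$ exactly, and any other feasible point is at least as far from $\hat{\theta}^{B}$ in the $\Omega$-weighted metric.

```python
import numpy as np

# Illustrative dimensions and inputs (stand-ins, not dissertation values)
rng = np.random.default_rng(1)
m, q = 6, 2
theta_B = rng.normal(size=m)            # stand-in Bayes estimates
A = rng.normal(size=(m, m))
Omega = A @ A.T + m * np.eye(m)         # positive definite weight matrix
W = rng.normal(size=(m, q))             # m x q constraint matrix, rank q a.s.
t = rng.normal(size=q)                  # benchmark targets

# theta_MBM = theta_B + Omega^{-1} W (W^T Omega^{-1} W)^{-1} (t - W^T theta_B)
Omega_inv_W = np.linalg.solve(Omega, W)          # Omega^{-1} W
G = W.T @ Omega_inv_W                            # W^T Omega^{-1} W (q x q)
theta_MBM = theta_B + Omega_inv_W @ np.linalg.solve(G, t - W.T @ theta_B)

print(np.allclose(W.T @ theta_MBM, t))           # constraints hold: True

# Minimality check: perturb within the feasible set (directions z with W^T z = 0)
v = rng.normal(size=m)
z = v - W @ np.linalg.solve(W.T @ W, W.T @ v)    # project v onto null(W^T)
e_alt = theta_MBM + z                            # another point with W^T e = t

def qloss(e):
    return (e - theta_B) @ Omega @ (e - theta_B)

assert qloss(e_alt) >= qloss(theta_MBM)          # theta_MBM minimizes the loss
```

Using `np.linalg.solve` rather than forming $\Omega^{-1}$ explicitly is the standard numerically stable choice; the proof's identity guarantees the final assertion, since the cross term between a feasible perturbation $z$ and the adjustment direction vanishes.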