The posterior risk of $\hat{\theta}^{BM1}$ under the given loss in Theorem 1 simplifies to
$$\sum_{i=1}^{m}\phi_i\left[V(\theta_i \mid \hat{\theta}) + s^{-1}(t - \bar{\hat{\theta}}_w)^2 r_i^2\right].$$
Hence the excess posterior risk due to adjustment of the Bayes estimator, if the assumed prior were "true," is given by $\sum_i \phi_i s^{-1}(t - \bar{\hat{\theta}}_w)^2 r_i^2$. When $\phi_i = w_i$ and $\sum_i w_i = 1$, the expression further simplifies to $(t - \bar{\hat{\theta}}_w)^2$. However, with a prior different from the assumed one, the adjusted Bayes estimator can have a lower posterior risk than the Bayes estimator.
To see this in a very simple setting, consider the case where $\hat{\theta}_i \mid \theta_i \stackrel{\text{ind}}{\sim} N(\theta_i, 1)$ and $\theta_i \stackrel{\text{iid}}{\sim} N(0, \sigma_u^2)$, $i = 1, \ldots, m$. Then the Bayes estimator of $\theta$ is $\hat{\theta}^B = (1 - B)\hat{\theta}$, where $B = (1 + \sigma_u^2)^{-1}$
. Also, if $\phi_i = w_i$ and $\sum_{i=1}^{m} w_i = 1$, then $r_i = 1$ for all $i = 1, \ldots, m$, and $s = 1$. Further, if $t = \bar{\hat{\theta}}_w$, as often is the case with internal benchmarking, then $\hat{\theta}^{BM1} = (1 - B)\hat{\theta} + B\bar{\hat{\theta}}_w \mathbf{1}_m$, where $\mathbf{1}_m$ denotes an $m$-component vector with each element equal to one. If instead we have the prior
$\theta_i \stackrel{\text{iid}}{\sim} N(0, \sigma_v^2)$, and $B_0 = (1 + \sigma_v^2)^{-1}$, then after some simplification, the posterior risk of $\hat{\theta}^B$ is
$$(1 - B_0) + (B - B_0)^2 \sum_{i=1}^{m} w_i (\hat{\theta}_i - \bar{\hat{\theta}}_w)^2 + (B - B_0)^2\, \bar{\hat{\theta}}_w^2,$$
while that of $\hat{\theta}^{BM1}$ is
$$(1 - B_0) + (B - B_0)^2 \sum_{i=1}^{m} w_i (\hat{\theta}_i - \bar{\hat{\theta}}_w)^2 + B_0^2\, \bar{\hat{\theta}}_w^2.$$
Now $\hat{\theta}^{BM1}$ has smaller posterior risk than $\hat{\theta}^B$ if and only if $|B - B_0|/B_0 > 1$, which is quite possible if $B_0$ is very small compared to $B$, i.e., if $\sigma_v^2$ is much larger than $\sigma_u^2$.
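Since both risk expressions above are easy to evaluate, the comparison can be checked numerically. The following Python sketch (all numerical values are hypothetical, not from the text) computes the posterior risk of each estimator directly from the posterior of $\theta_i$ under the $N(0, \sigma_v^2)$ prior, confirms that it matches the closed forms, and verifies the criterion $|B - B_0|/B_0 > 1$ in a case with $\sigma_v^2$ much larger than $\sigma_u^2$:

```python
import numpy as np

# Hypothetical values (not from the text): m = 5 areas, equal weights.
theta_hat = np.array([1.0, 2.0, 0.5, -1.0, 3.0])  # direct estimates
m = theta_hat.size
w = np.full(m, 1.0 / m)                 # weights summing to one
sigma2_u, sigma2_v = 1.0, 9.0           # assumed vs. alternative prior variances
B = 1.0 / (1.0 + sigma2_u)              # shrinkage under the assumed prior
B0 = 1.0 / (1.0 + sigma2_v)             # shrinkage under the alternative prior

tbar = w @ theta_hat                    # weighted mean; internal benchmark t
e_B = (1 - B) * theta_hat               # Bayes estimator
e_BM1 = (1 - B) * theta_hat + B * tbar  # benchmarked estimator (t = tbar)

def posterior_risk(e):
    # Under the N(0, sigma2_v) prior, theta_i given theta_hat_i has mean
    # (1 - B0) * theta_hat_i and variance (1 - B0); average the
    # componentwise risks with weights w.
    return float(w @ ((1 - B0) + (e - (1 - B0) * theta_hat) ** 2))

S = w @ (theta_hat - tbar) ** 2         # weighted spread of direct estimates
risk_B = (1 - B0) + (B - B0) ** 2 * S + (B - B0) ** 2 * tbar ** 2
risk_BM1 = (1 - B0) + (B - B0) ** 2 * S + B0 ** 2 * tbar ** 2

# The closed-form risks match those computed from the definition.
assert np.isclose(posterior_risk(e_B), risk_B)
assert np.isclose(posterior_risk(e_BM1), risk_BM1)
# Here |B - B0|/B0 = 4 > 1, so the benchmarked estimator wins.
assert abs(B - B0) / B0 > 1
assert posterior_risk(e_BM1) < posterior_risk(e_B)
```

With these values $|B - B_0|/B_0 = 4 > 1$, so the benchmarked estimator attains the smaller posterior risk, exactly as the criterion predicts.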
We now provide a generalization of Theorem 1 where we consider multiple constraints instead of one single constraint. As an example, for the SAIPE county-level analysis, one may need to control the county estimates in each state so that their weighted total agrees with the corresponding state estimates. We now consider a more general quadratic loss given by
$$L(\theta, e) = (e - \theta)^T \Omega (e - \theta), \qquad (3.1.4)$$
where $\Omega$ is a positive definite matrix. The following theorem provides a Bayesian solution for the minimization of $E[L(\theta, e) \mid \hat{\theta}]$ subject to the constraint $W^T e = t$, where $t$ is a $q$-component vector and $W$ is an $m \times q$ matrix of rank $q < m$.
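Before the formal statement, the constrained minimization can be sketched numerically. The closed form used below, $e^* = \hat{\theta}^B + \Omega^{-1}W(W^T\Omega^{-1}W)^{-1}(t - W^T\hat{\theta}^B)$, is the standard Lagrange-multiplier solution of a quadratic objective under linear constraints; it is shown with hypothetical values (two groups of areas whose totals are benchmarked, echoing the SAIPE example) as a sketch of the setup, not as the statement of the theorem:

```python
import numpy as np

# Hypothetical values for illustration: m = 4 areas in q = 2 groups,
# with each group's total benchmarked to a target.
theta_B = np.array([0.8, -0.3, 1.5, 0.2])  # Bayes estimates
Omega = np.diag([1.0, 2.0, 1.0, 3.0])      # positive definite weight matrix
W = np.array([[1.0, 0.0],                  # columns pick out the two groups
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])                 # m x q, rank q < m
t = np.array([1.0, 2.0])                   # benchmark targets, W' e = t

# Lagrange-multiplier solution of
#   min_e (e - theta_B)' Omega (e - theta_B)  subject to  W' e = t:
#   e* = theta_B + Omega^{-1} W (W' Omega^{-1} W)^{-1} (t - W' theta_B)
Oi_W = np.linalg.solve(Omega, W)
e_star = theta_B + Oi_W @ np.linalg.solve(W.T @ Oi_W, t - W.T @ theta_B)

loss = lambda e: float((e - theta_B) @ Omega @ (e - theta_B))

assert np.allclose(W.T @ e_star, t)        # constraint is met
# Any other feasible point (e* plus a null(W') perturbation) costs more,
# since the cross term vanishes and Omega is positive definite.
z = np.array([0.2, -0.2, 0.1, -0.1])       # satisfies W' z = 0
assert np.allclose(W.T @ (e_star + z), t)
assert loss(e_star) < loss(e_star + z)
```

The check on the perturbed point illustrates why the solution is the unique minimizer: any feasible deviation $z$ with $W^T z = 0$ adds the strictly positive amount $z^T \Omega z$ to the loss.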
Theorem 2. The constrained Bayesian solution under the loss (3.1.4) is given by