Since T = B − 5, we have pT(t) = pB(t + 5), and we obtain

    pT(t) = (t + 4 choose 4) (2/5)^5 (1 − 2/5)^t,    t = 0, 1, . . .
Using the formulas for the mean and the variance of the Pascal random variable B, we
obtain
    E[T] = E[B] − 5 = 25/2 − 5 = 7.5,

and

    var(T) = var(B) = 5(1 − 2/5)/(2/5)² = 75/4 = 18.75.
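The figures above are easy to check numerically from the PMF; a minimal Python sketch, assuming as above a Pascal random variable B of order 5 with p = 2/5 and T = B − 5:

```python
from math import comb

# PMF of T = B - 5: pT(t) = C(t+4, 4) * (2/5)**5 * (3/5)**t, t = 0, 1, ...
p = 2 / 5
pmf = [comb(t + 4, 4) * p**5 * (1 - p)**t for t in range(400)]

total = sum(pmf)                                            # ~1
mean = sum(t * q for t, q in enumerate(pmf))                # ~7.5
var = sum(t * t * q for t, q in enumerate(pmf)) - mean**2   # ~75/4 = 18.75
```

Truncating the sum at t = 400 loses only a negligible tail, since the terms decay geometrically.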
By setting the derivative to zero, we find θ = k/(n + 1), provided that k/(n + 1) > 1/2.
If neither of the conditions (k + 1)/(n + 1) < 1/2 and k/(n + 1) > 1/2 holds, we must have
the first possibility, with the maximum attained at θ = 1/2. To summarize, the MAP
estimate
more data, the probability of error cannot increase, regardless of the observed value of
X (see Problem 8.9).
Solution to Problem 8.8. (a) Let K be the number of heads observed before the
first tail, and let pK|Hi (k) be the PMF of K when hypothesis Hi is
any given x, the value of Θ is constrained to lie on a particular interval, the posterior
PDF of Θ is uniform over that interval, and the conditional mean is the midpoint of
that interval. In particular,

    E[Θ | X = x] =
        x/2 + 27.5,   if 55 ≤ x ≤ 60,
        x − 2.5,      if 60 ≤ x ≤ 75,
        x/2 + 35,
and the conditional mean squared error for the LMS estimator is

    E[(Θ − Θ̂)² | X1 = x1, . . . , Xn = xn]
      = E[ (Θ − ((n − 1)(x̄^(2−n) − 1))/((n − 2)(x̄^(1−n) − 1)))² | X1 = x1, . . . , Xn = xn ]
      = ((n − 1)(x̄^(3−n) − 1))/((n − 3)(x̄^(1−n) − 1)) − ( ((n − 1)(x̄^(2−n) − 1))/((n − 2)(x̄^(1−n) − 1)) )².
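These closed forms can be checked against direct numerical integration. The sketch below assumes the posterior of Θ is proportional to 1/θ^n on [x̄, 1], with x̄ the largest observation; the values of n and x̄ are illustrative, not taken from the problem:

```python
# Midpoint-rule integration of the (unnormalized) posterior theta**(-n)
# on [x_bar, 1], compared with the closed-form conditional moments.
n = 6
x_bar = 0.5
N = 200_000
h = (1 - x_bar) / N
thetas = [x_bar + (i + 0.5) * h for i in range(N)]

z = sum(t**-n for t in thetas) * h             # normalizing constant
m1 = sum(t**(1 - n) for t in thetas) * h / z   # E[Theta | X], numerically
m2 = sum(t**(2 - n) for t in thetas) * h / z   # E[Theta^2 | X], numerically

# Closed forms from the solution text.
cm1 = (n - 1) * (x_bar**(2 - n) - 1) / ((n - 2) * (x_bar**(1 - n) - 1))
cm2 = (n - 1) * (x_bar**(3 - n) - 1) / ((n - 3) * (x_bar**(1 - n) - 1))
mse = cm2 - cm1**2   # conditional mean squared error of the LMS estimator
```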
We plot in Fig. 8.3 the estimators and the corresponding conditio
Hence the mean squared error is

    (1 − ρ²)σ_Θ² = (1 − 9/10) · 3 = 3/10.
Solution to Problem 8.15. The conditional mean squared error of the MAP esti-
mator Θ̂ = X is

    E[(Θ̂ − Θ)² | X = x] = E[Θ̂² − 2Θ̂Θ + Θ² | X = x]
      = x² − 2x E[Θ | X = x] + E[Θ² | X = x]
      = x² − (2x/(101 − x)) Σ_{i=x}^{100} i + (1/(101 − x)) Σ_{i=x}^{100} i².
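The last expression is just the expansion of E[(x − Θ)² | X = x] under a posterior that is uniform over {x, . . . , 100}; a quick consistency check in Python (the value of x is arbitrary):

```python
# Posterior of Theta given X = x assumed uniform over {x, x+1, ..., 100}.
x = 40
support = range(x, 101)          # 101 - x equally likely values
p = 1 / (101 - x)

expanded = x**2 - 2 * x * p * sum(support) + p * sum(i * i for i in support)
direct = p * sum((x - i) ** 2 for i in support)   # E[(x - Theta)^2 | X = x]
```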
Solution to Problem 8.16.
(a) The LMS estimator is

    g(X) = E[Θ | X] =
        (1/2)X,      if 0 ≤ X < 1,
        X − 1/2,     if 1 ≤ X ≤ 2.
(b) We first derive the conditional variance E[(Θ − g(X))² | X = x]. If x ∈ [0, 1], the
conditional PDF of Θ is uniform over the interval [0, x], and

    E[(Θ − g(X))² | X = x] = x²/12.
The covariance of Θ and X is

    cov(Θ, X) = E[(X − E[X])(Θ − E[Θ])]
      = (1/100) Σ_{θ=1}^{100} Σ_x p_{X|Θ}(x | θ) (x − 25.75)(θ − 50)
      = 416.63.
Applying the linear LMS formula yields

    Θ̂ = E[Θ] + (cov(Θ, X)/var(X))(X − E[X]) = 50 + (416.63/490.19)(X − 25.75) = 0.85X + 28.11.
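Plugging the quoted moments into the linear LMS formula reproduces the coefficients; a quick Python check:

```python
# Moments quoted in the solution.
e_theta, e_x = 50, 25.75
cov_tx, var_x = 416.63, 490.19

slope = cov_tx / var_x                 # ~0.85
intercept = e_theta - slope * e_x      # ~28.11
print(round(slope, 2), round(intercept, 2))   # prints: 0.85 28.11
```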
The mean squared error of the linear
(c) The expectations E[(Θ − g(X))²] and E[var(Θ | X)] are equal because, by the law
of iterated expectations,

    E[(Θ − g(X))²] = E[ E[(Θ − g(X))² | X] ] = E[var(Θ | X)].
Recall from part (b) that

    var(Θ | X = x) =
        x²/12,   if 0 ≤ x < 1,
        1/12,    if 1 ≤ x ≤ 2.
It follows that E[var(Θ | X)] =
    E[ΘX] = ∫_0^1 ∫_0^x θx f_{Θ,X}(θ, x) dθ dx + ∫_1^2 ∫_{x−1}^x θx f_{Θ,X}(θ, x) dθ dx = 1/12 + 17/18 = 37/36,
    cov(X, Θ) = E[ΘX] − E[X]E[Θ] = 37/36 − (11/9)(7/9) = 25/324.
Thus, the linear LMS estimator is

    Θ̂ = 7/9 + ((37/36 − (11/9)(7/9)) / (71/162)) (X − 11/9) = 0.5626 + 0.1761X.
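The decimal coefficients follow from the quoted moments by exact fraction arithmetic; a quick check with Python's fractions module:

```python
from fractions import Fraction as F

# Moments quoted in the solution: E[Theta] = 7/9, E[X] = 11/9,
# E[Theta*X] = 37/36, var(X) = 71/162.
e_theta, e_x = F(7, 9), F(11, 9)
cov = F(37, 36) - e_x * e_theta        # cov(Theta, X) = 25/324
var_x = F(71, 162)

slope = cov / var_x                    # 25/142, about 0.1761
intercept = e_theta - slope * e_x      # 719/1278, about 0.5626
```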
Its mean squared error is E[(Θ̂ − Θ)²] =
For k = 5, the posterior PMF can be explicitly calculated for m = 0, . . . , 5:

    pM|X(0 | X = 5) ≈ 0.0145,
    pM|X(1 | X = 5) ≈ 0.0929,
    pM|X(2 | X = 5) ≈ 0.2402,
    pM|X(3 | X = 5) ≈ 0.3173,
    pM|X(4 | X = 5) ≈ 0.2335,
    pM|X(5 | X = 5) ≈ 0.1015.

It follows that the
We are interested in the event T < Y. We have

    P(T < Y | Y = y) = 1 − e^{−y},    y ≥ 0.

Thus,

    P(T < Y) = ∫_0^∞ fY(y) P(T < Y | Y = y) dy = ∫_0^∞ 9y e^{−3y} (1 − e^{−y}) dy = 7/16,

as can be verified by carrying out the integration.
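The integral can also be checked numerically; a short sketch using composite Simpson's rule on a truncated interval:

```python
import math

# P(T < Y) = integral over [0, inf) of 9*y*exp(-3*y)*(1 - exp(-y)) dy,
# approximated on [0, 30]; the tail beyond 30 is negligible.
def f(y):
    return 9 * y * math.exp(-3 * y) * (1 - math.exp(-y))

n, b = 100_000, 30.0        # n even
h = b / n
s = f(0) + f(b)
s += 4 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
s += 2 * sum(f(2 * i * h) for i in range(1, n // 2))
integral = s * h / 3        # close to 7/16 = 0.4375
```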
We now describe an alternative method for obtain
(c) We have

    P(A | D) = P(A ∩ D)/P(D) = ((1/2) e^{−t}) / ((1/2) e^{−t} + (1/2) e^{−3t}) = 1/(1 + e^{−2t}).
(d) We first find E[X²]. We use the fact that the second moment of an exponential
random variable T with parameter λ is equal to E[T²] = E[T]² + var(T) = 1/λ² + 1/λ² =
2/λ². Con
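The identity E[T²] = 2/λ² is easy to confirm by numerical integration; a sketch for one arbitrary value of λ:

```python
import math

# E[T^2] for an exponential with rate lam, by midpoint-rule integration of
# t^2 * lam * exp(-lam*t) on [0, 50/lam]; the remaining tail is negligible.
lam = 1.5
n = 200_000
b = 50 / lam
h = b / n
second_moment = sum(
    ((i + 0.5) * h) ** 2 * lam * math.exp(-lam * (i + 0.5) * h)
    for i in range(n)
) * h
```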
good on the way back, so that p_{i,i+1} = p(1 − p). For i = 0, the transition probability
p_{i,i+1} = p_{0,1} is just the probability that the weather is good on the way back, so that
p_{0,1} = p. The transition probabilities p_{ii} are then easily determined because the
Thus, P(Y2 = 2 | Y0 = 1, Y1 = 2) ≠ P(Y2 = 2 | Y0 = Y1 = 2), which implies that Yn
does not have the Markov property.
Solution to Problem 7.4. (a) We introduce a Markov chain with state equal to
the distance between spider and fly. Let n be the initial dista
(b) Using Bayes' rule, we have

    P(X1000 = i | X1001 = j) = P(X1000 = i, X1001 = j) / P(X1001 = j) = π_i p_ij / π_j.
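The formula can be verified on a small example; the two-state chain below is hypothetical, and it is assumed to start in its steady state, so that the marginals of X1000 and X1001 are both π:

```python
# Hypothetical two-state chain in steady state: P(X_n = i) = pi_i for all n.
p = [[0.9, 0.1],
     [0.4, 0.6]]
pi = [0.8, 0.2]     # solves pi = pi * P

i, j = 0, 1
joint = pi[i] * p[i][j]                             # P(X_n = i, X_{n+1} = j)
marginal = sum(pi[k] * p[k][j] for k in range(2))   # P(X_{n+1} = j)
bayes = joint / marginal
formula = pi[i] * p[i][j] / pi[j]
```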
Solution to Problem 7.15. Let i = 0, 1, . . . , n be the states, with state i indicating
that there are exactly i white balls. The nonzero transition
Solution to Problem 7.17. (a) The states form a recurrent class, which is aperiodic
since all possible transitions have positive probability.
(b) The Chapman-Kolmogorov equations are

    r_ij(n) = Σ_{k=1}^{2} r_ik(n − 1) p_kj,    for n > 1, and i, j = 1, 2,

starting with r_ij(1) = p_ij.
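The recursion can be run directly; a minimal sketch with a hypothetical two-state chain, whose 3-step probabilities can be checked by hand against the matrix power P³:

```python
# Chapman-Kolmogorov recursion for a hypothetical two-state chain.
p = [[0.7, 0.3],
     [0.2, 0.8]]

def n_step(n):
    r = [row[:] for row in p]            # r(1) = p
    for _ in range(n - 1):               # r(m) = r(m-1) * p
        r = [[sum(r[i][k] * p[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return r

r3 = n_step(3)
```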
Using also the normalization equation, we obtain

    π1 = π2 = 6/13,    π3 = 1/13.
(c) Because the class {6, 7} is periodic, there are no steady-state probabilities. In
particular, the sequence r66(n) alternates between 0 and 1, and does not converge.
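The alternation is easy to reproduce with a deterministic two-state chain standing in for the class {6, 7}, assuming p_67 = p_76 = 1:

```python
# Deterministic chain on states 6 and 7: r_66(n) alternates between 0 and 1.
p = {(6, 7): 1.0, (7, 6): 1.0, (6, 6): 0.0, (7, 7): 0.0}

def r66(n):
    probs = {6: 1.0, 7: 0.0}             # start in state 6
    for _ in range(n):
        probs = {s: sum(probs[t] * p[(t, s)] for t in (6, 7)) for s in (6, 7)}
    return probs[6]

values = [r66(n) for n in range(1, 7)]   # [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
```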
(d) (i)
Solution to Problem 7.36. Define the state to be the number of operational
machines. The corresponding continuous-time Markov chain is the same as a queue with
arrival rate λ and service rate μ (the one of Example 7.15). The required probability
is equal to the
We also have f_{X|Θ}(30 | θ) = θ e^{−30θ}, so the posterior is

    f_{Θ|X}(θ | 30) =
        θ² e^{−30θ} / ∫_0^{1/√5} (θ′)² e^{−30θ′} dθ′,    if θ ∈ [0, 1/√5],
        0,    otherwise.

The MAP rule selects the θ that maximizes the posterior (or equivalently its numerator, since the denominator is a positive constant). By se
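Setting the derivative of θ² e^{−30θ} to zero gives θ = 2/30 = 1/15, which lies inside [0, 1/√5]; a numerical grid search confirms this:

```python
import math

# Grid search for the maximizer of theta**2 * exp(-30*theta) on [0, 1/sqrt(5)].
b = 1 / math.sqrt(5)
n = 200_000
theta_map = max((k * b / n for k in range(n + 1)),
                key=lambda t: t * t * math.exp(-30 * t))
```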
CHAPTER 8
Solution to Problem 8.1. There are two hypotheses:
H0 : the phone number is 2537267,
H1 : the phone number is not 2537267,
and their prior probabilities are
P(H0 ) = P(H1 ) = 0.5.
Let B be the event that Artemisia obtains a busy signal when dial
Therefore,

    θ̂ = 6/130.
The conditional expectation estimator is

    E[Θ | X = (30, 25, 15, 40, 20)] = ∫_0^{1/√5} θ⁷ e^{−130θ} dθ / ∫_0^{1/√5} θ⁶ e^{−130θ} dθ.
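The ratio can be evaluated numerically; because e^{−130θ} is negligible well before the endpoint 1/√5, the value comes out very close to the untruncated gamma-integral ratio 7/130 ≈ 0.0538:

```python
import math

# Midpoint-rule evaluation of the two integrals in the ratio above.
b = 1 / math.sqrt(5)
n = 200_000
h = b / n
pts = [(i + 0.5) * h for i in range(n)]
num = sum(t**7 * math.exp(-130 * t) for t in pts) * h
den = sum(t**6 * math.exp(-130 * t) for t in pts) * h
estimate = num / den      # close to 7/130
```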
Solution to Problem 8.4. (a) Let X denote the random variable representing the
number of questions answered correctly. For ea
Figure 8.2: MAP and LMS (conditional expectation) estimates of Θ as a function of X in Problem 8.11.
Using the definition of conditional expectation we obtain

    E[Θ | X1 = x1, . . . , Xn = xn] = ∫_0^1 θ f_{Θ|X1,...,Xn}(θ | x1,
Solution to Problem 8.18. (a) The conditional CDF of X is given by

    F_{X|Θ}(x | θ) = P(X ≤ x | Θ = θ) = P(θ cos W ≤ x | Θ = θ) = P(cos W ≤ x/θ).

We note that the cosine function is one-to-one and decreasing over the interval [0, π/2],
so for 0 ≤ x ≤ θ,

    F_{X|Θ}(x | θ) = P(W ≥ cos⁻¹(x/θ)) = 1 −
It is worth noting that lim_{x→0} E[Θ | X = x] = 0 and that lim_{x→l} E[Θ | X = x] = l, as
one would expect.

(b) The linear LMS estimator is

    Θ̂ = E[Θ] + (cov(Θ, X)/σ²_X)(X − E[X]).

Since Θ is uniformly distributed between 0 and l, it follows that E[Θ] = l/2. We
obtain E[X] an
Similarly,

    E[ max_{i=1,...,n} Xi ] = θ + n/(n + 1),

and it follows that

    E[θ̂n] = (1/2) E[ max_{i=1,...,n} Xi + min_{i=1,...,n} Xi − 1 ] = θ.

Solution to Problem 9.10.
(a) To compute c(λ), we write

    1 = Σ_{k=0}^∞ pK(k; λ) = Σ_{k=0}^∞ c(λ) e^{−λk} = c(λ)/(1 − e^{−λ}),

which yields c(λ) = 1 − e^{−λ}.
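The normalization can be confirmed by summing the geometric series numerically for a sample value of λ:

```python
import math

# Sum of pK(k; lam) = (1 - exp(-lam)) * exp(-lam*k) over k; lam is arbitrary.
lam = 0.7
c = 1 - math.exp(-lam)
total = sum(c * math.exp(-lam * k) for k in range(2000))   # ~1
```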
(b) The PMF of K is a s