Chapter 5: Sums of Random Variables and Long-Term Averages

5.1 Sums of Random Variables
5.1 E[X + Y + Z] = E[X] + E[Y] + E[Z] = 0

a) From Eqn. 5.3 we have

VAR(X + Y + Z) = VAR(X) + VAR(Y) + VAR(Z) + 2COV(X,Y) + 2COV(X,Z) + 2COV(Y,Z)
               = 1 + 1 + 1 + 2(1/4) + 2(0) + 2(-1/4) = 3

b) From Eqn. 5.3 we have
VAR(X + Y + Z) = VAR(X) + VAR(Y) + VAR(Z) = 3

5.4 a) By Eqn. 5.7 we have

Φ_Z(ω) = Φ_X(ω) Φ_Y(ω) = e^{-α|ω|} e^{-β|ω|} = e^{-(α+β)|ω|}

b) Taking the inverse transform:

f_Z(z) = (α+β) / ( π[ (α+β)^2 + z^2 ] )   ⇒   Z is also Cauchy

5.10

G_{S_k}(z) = E[z^{X_1+...+X_k}] = E[z^{X_1}] ... E[z^{X_k}] = G_{X_1}(z) ... G_{X_k}(z)
           = [pz + q]^{n_1} [pz + q]^{n_2} ... [pz + q]^{n_k}
           = [pz + q]^{n_1+...+n_k} ,

where the second equality follows from the independence of the X_i's. The result states
that S_k is Binomial with parameters n_1 + ... + n_k and p. This is obvious since S_k is the
number of heads in n_1 + ... + n_k tosses.

5.12 a) Note first that
E[S | N = n] = E[ Σ_{k=1}^{n} X_k ] = nE[X] ,

thus

E[S] = E[ E[S|N] ] = E[ N E[X] ] = E[N] E[X] ,

since E[X] is a constant. Next consider E[S^2] = E[E[S^2 | N]], which requires that we find

E[S^2 | N = n] = E[ Σ_{i=1}^{n} Σ_{j=1}^{n} X_i X_j ] = Σ_{i=1}^{n} Σ_{j=1}^{n} E[X_i X_j]
             = nE[X^2] + n(n-1)E[X]^2 ,

since E[X_i X_j] = E[X^2] if i = j and E[X_i X_j] = E[X]^2 if i ≠ j. Thus

E[S^2] = E[ N E[X^2] + N(N-1)E[X]^2 ]
       = E[N]E[X^2] + E[N^2]E[X]^2 - E[N]E[X]^2 .

Then

VAR(S) = E[S^2] - E[S]^2
       = E[N]E[X^2] + E[N^2]E[X]^2 - E[N]E[X]^2 - E[N]^2 E[X]^2
       = E[N] VAR[X] + VAR[N] E[X]^2 .

b) First note that
E[z^S | N = n] = E[ z^{X_1+...+X_n} ] = E[z^{X_1}] ... E[z^{X_n}] = G_X(z)^n .

Then

E[z^S] = E[ E[z^S | N] ]
       = E[ G_X(z)^N ]
       = G_N(z') |_{z' = G_X(z)}
       = G_N(G_X(z)) .

5.2 The Sample Mean and the Laws of Large Numbers

5.15

P[ |N(t)/t - λ| ≥ ε ] = P[ |N(t) - λt| ≥ εt ]
                      ≤ VAR[N(t)] / (εt)^2     by the Chebyshev inequality
                      = λt / (ε^2 t^2) = λ / (ε^2 t)
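The looseness of the Chebyshev bound can be checked against the exact Poisson tail probability; the following sketch is illustrative only, with λ, t, and ε chosen arbitrarily (they are not values from the text):

```python
import math

def poisson_pmf(k, mu):
    # P[N = k] for N ~ Poisson(mu)
    return math.exp(-mu) * mu**k / math.factorial(k)

def exact_deviation_prob(lam, t, eps):
    """Exact P[|N(t)/t - lam| >= eps] when N(t) ~ Poisson(lam*t)."""
    mu = lam * t
    lo = mu - eps * t            # the event is N <= lo or N >= hi
    hi = mu + eps * t
    inside = sum(poisson_pmf(k, mu)
                 for k in range(math.floor(lo) + 1, math.ceil(hi)))
    return 1.0 - inside

lam, t, eps = 1.0, 100.0, 0.2    # illustrative parameters
bound = lam / (eps**2 * t)       # Chebyshev bound lam/(eps^2 t) = 0.25
exact = exact_deviation_prob(lam, t, eps)
print(bound, exact)              # the exact probability falls well below the bound
```

For these parameters the bound is 0.25 while the exact deviation probability is roughly an order of magnitude smaller, which is the typical behavior of Chebyshev-type bounds.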
5.18 For n = 10, Eqn. 5.20 gives

P[ |M_10 - 0| < ε ] ≥ 1 - 1/(10 ε^2) .

Since M_10 is Gaussian with mean 0 and variance 1/10,

P[ |M_10 - 0| < ε ] = P[ -ε < M_10 < ε ] = 1 - 2Q(√10 ε) = 1 - 2Q(3.16 ε) .

Similarly, for n = 100 we obtain

P[ |M_100 - 0| < ε ] ≥ 1 - 1/(100 ε^2)   and   P[ |M_100 - 0| < ε ] = 1 - 2Q(10 ε) .

For example, if ε = 1/2:

P[ |M_10| < 1/2 ]  ≥ 1 - 4/10 = 0.6 ;    exactly, 1 - 2Q(1.58) = 1 - 2(5.71×10^-2) = 0.89
P[ |M_100| < 1/2 ] ≥ 1 - 4/100 = 0.96 ;  exactly, 1 - 2Q(5) = 1 - 2(2.87×10^-7) ≈ 1

Note the significant discrepancies between the bounds and the exact values.

5.3 The Central Limit Theorem

5.22 The relevant parameters are n = 1000, m = np = 500, σ^2 = npq = 250. The
Central Limit Theorem then gives:

P[400 ≤ N ≤ 600] ≈ P[ (400 - 500)/√250 ≤ (N - m)/σ ≤ (600 - 500)/√250 ]
                 = Q(-6.324) - Q(6.324) = 1 - 2Q(6.324)
                 ≈ 1 - 2.54×10^-10

5.29 The total number of errors S_100 is the sum of iid Bernoulli random variables:

S_100 = X_1 + ... + X_100
E[S_100] = 100p = 15
VAR[S_100] = 100pq = 12.75

The Central Limit Theorem gives:

P[S_100 ≤ 20] = 1 - P[S_100 > 20]
              = 1 - P[ (S_100 - 15)/√12.75 > (20 - 15)/√12.75 ]
              ≈ 1 - Q(1.4) = 0.92

5.4 Confidence Intervals

5.31 The i-th measurement is X_i = m + N_i, where E[N_i] = 0 and VAR[N_i] = 10. The
sample mean of the 32 measurements is M_32 = 100. Eqn. 5.37 with z_{α/2} = 1.96 gives

( 100 - 1.96 √10/√32 , 100 + 1.96 √10/√32 ) = (98.9, 101.1)

5.37 The sample mean and variance of the batch sample means are M_10 = 24.9 and
σ̂^2_10 = 3.42. The mean number of heads in a batch is μ = E[M_10] = E[X] = 50p.
From Table 5.2, with 1 - α = 95% and n - 1 = 9, we have z_{α/2,9} = 2.262. The confidence interval for μ is

( M_10 - z_{α/2,9} σ̂_10/√10 , M_10 + z_{α/2,9} σ̂_10/√10 ) = (23.58, 26.22) .

The confidence interval for p = μ/50 is then

( 23.58/50 , 26.22/50 ) = (0.4716, 0.5244) .

5.5 Convergence of Sequences of Random Variables

5.40 We'll consider U_n(ε):

5.45 We are given that X_n → X m.s. and Y_n → Y m.s. Consider

E[((X_n + Y_n) - (X + Y))^2] = E[((X_n - X) + (Y_n - Y))^2]
                            = E[(X_n - X)^2] + E[(Y_n - Y)^2] + 2E[(X_n - X)(Y_n - Y)] .

The first two terms approach zero since X_n → X and Y_n → Y in the mean-square sense. We
need to show that the last term also goes to zero. This requires the Schwarz inequality:

E[ZW] ≤ √E[Z^2] √E[W^2] .

When the inequality is applied to the third term we have:

E[((X_n + Y_n) - (X + Y))^2] ≤ E[(X_n - X)^2] + E[(Y_n - Y)^2] + 2 √E[(X_n - X)^2] √E[(Y_n - Y)^2]
                             = ( √E[(X_n - X)^2] + √E[(Y_n - Y)^2] )^2 → 0   as n → ∞ .
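The bound above can be illustrated numerically. In this sketch the sequences X_n = X + Z/n and Y_n = Y + W/n are hypothetical examples of mean-square convergent sequences (not from the text), with expectations replaced by sample averages:

```python
import numpy as np

# Illustrative check: the m.s. error of the sum is bounded by the
# Schwarz expression, and both shrink as n grows.
rng = np.random.default_rng(0)
N = 200_000
X = rng.normal(size=N)
Y = rng.normal(size=N)

results = []
for n in (1, 10, 100):
    Xn = X + rng.normal(scale=1.0 / n, size=N)   # X_n -> X in mean square
    Yn = Y + rng.normal(scale=1.0 / n, size=N)   # Y_n -> Y in mean square
    lhs = np.mean(((Xn + Yn) - (X + Y)) ** 2)            # m.s. error of the sum
    bound = (np.sqrt(np.mean((Xn - X) ** 2))
             + np.sqrt(np.mean((Yn - Y) ** 2))) ** 2     # Schwarz-based bound
    results.append((n, lhs, bound))

for n, lhs, bound in results:
    print(n, lhs, bound)    # lhs <= bound, and both decrease toward zero
```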
To prove the Schwarz inequality we take

0 ≤ E[(Z + aW)^2]

and minimize with respect to a:

0 = d/da ( E[Z^2] + 2aE[ZW] + a^2 E[W^2] ) = 2E[ZW] + 2aE[W^2]

⇒ the minimum is attained at a* = -E[ZW]/E[W^2]. Thus

0 ≤ E[(Z + a*W)^2] = E[Z^2] - 2 E[ZW]^2/E[W^2] + E[ZW]^2/E[W^2] = E[Z^2] - E[ZW]^2/E[W^2]

⇒ E[ZW]^2 / E[W^2] ≤ E[Z^2]
⇒ E[ZW] ≤ √E[Z^2] √E[W^2]   as required.

5.6 Long-Term Arrival Rates and Associated
Averages

5.51 Let Y be the bus interdeparture time; then Y = X_1 + X_2 + ... + X_m and E[Y] = mE[X_i] = mT.

∴ long-term bus departure rate = 1/E[Y] = 1/(mT)

5.53 Show {N(t) ≥ n} ⇔ {S_n ≤ t}.

a) We first show that {N(t) ≥ n} ⇒ {S_n ≤ t}.
If N(t) ≥ n, then at least n events have occurred by time t, so

t ≥ X_1 + ... + X_n = S_n   ⇒   {S_n ≤ t} .

Next we show that {S_n ≤ t} ⇒ {N(t) ≥ n}. If {S_n ≤ t}, then the nth event occurs
before time t ⇒ N(t) is at least n ⇒ {N(t) ≥ n}. ✓
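The equivalence in part a) can be spot-checked by simulation; the sketch below assumes a Poisson process with illustrative rate and horizon (the parameters are not from the text):

```python
import numpy as np

# Check {N(t) >= n} <=> {S_n <= t} on simulated sample paths.
rng = np.random.default_rng(1)
lam, t, n, trials = 2.0, 5.0, 8, 10_000

agree = 0
for _ in range(trials):
    # generate enough interarrival times to be safely past time t
    inter = rng.exponential(1.0 / lam, size=50)
    arrivals = np.cumsum(inter)                        # S_1, S_2, ...
    N_t = np.searchsorted(arrivals, t, side="right")   # N(t) = #{k : S_k <= t}
    agree += (N_t >= n) == (arrivals[n - 1] <= t)      # S_n = arrivals[n-1]
print(agree == trials)   # prints True: the events coincide on every path
```

Since N(t) ≥ n holds exactly when the nth arrival time is at most t, the two indicator events match path by path, not just in probability.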
b)

P[N(t) ≤ n] = 1 - P[N(t) ≥ n+1]
            = 1 - P[S_{n+1} ≤ t]
            = 1 - ( 1 - Σ_{k=0}^{n} ((αt)^k / k!) e^{-αt} )
            = Σ_{k=0}^{n} ((αt)^k / k!) e^{-αt} ,

since S_{n+1} is an Erlang random variable. Thus N(t) is a Poisson random variable.

5.59 The interreplacement time is

X̃_i = X_i   if X_i < 3T   (the item breaks down before 3T)
X̃_i = 3T    if X_i ≥ 3T   (the item is replaced at time 3T) ,

where the X_i are iid exponential random variables with mean E[X_i] = T.
The mean of X̃_i is:

E[X̃_i] = ∫_0^{3T} x (1/T) e^{-x/T} dx + 3T P[X > 3T] = T(1 - e^{-3})

a) Therefore the long-term replacement rate = 1/E[X̃] = 1/( T(1 - e^{-3}) ).
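A quick Monte Carlo sanity check of E[X̃] = T(1 - e^{-3}) and the resulting rate; T and the sample size below are arbitrary illustrative choices, not values from the text:

```python
import numpy as np

# Simulate the truncated-exponential interreplacement time X~ = min(X, 3T).
rng = np.random.default_rng(2)
T = 2.0
X = rng.exponential(T, size=1_000_000)
Xtilde = np.minimum(X, 3 * T)               # interreplacement time

mean_sim = Xtilde.mean()
mean_theory = T * (1 - np.exp(-3))          # E[X~] from the solution above
rate_theory = 1 / mean_theory               # long-term replacement rate
print(mean_sim, mean_theory, rate_theory)   # simulated mean matches theory
```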
b) Let

I_i = 1   if X_i ≥ 3T   (i.e., a good item is replaced)
I_i = 0   if X_i < 3T .

Then E[I] = P[X_i ≥ 3T] = e^{-3}, and the long-term rate at which working components are
replaced is E[I]/E[X̃] = e^{-3} / ( T(1 - e^{-3}) ).

5.63 a) Since the age a(t) is the time that has elapsed from the last arrival up to time t,

C_j = ∫_0^{X_j} a(t') dt' = ∫_0^{X_j} t' dt' = X_j^2 / 2 .

The figure below shows the relation between a(t) and the C_j's.

[Figure: sawtooth plot of a(t) versus t over the interarrival intervals x_1, x_2, x_3]

b) E[C_j] = E[X_j^2]/2 = E[X^2]/2 .

c) From the above figure:
lim_{t→∞} (1/t) ∫_0^t a(t') dt' = lim_{t→∞} (1/t) Σ_{j=1}^{N(t)} C_j
                               = lim_{t→∞} ( N(t)/t ) (1/N(t)) Σ_{j=1}^{N(t)} C_j
                               = (1/E[X]) E[C] = E[X^2] / (2E[X])   from part b)

d) For the residual life in a cycle,
C'_j = ∫_0^{X_j} r(t') dt' = ∫_0^{X_j} (X_j - t') dt' = X_j^2 / 2 = C_j

⇒ same cost as for age of a cycle.

5.7 A Computer Method for Evaluating the Distribution of a Random Variable Using the Discrete Fourier Transform
5.66 a)

c_0 = 1/3 + 1/3 + 1/3 = 1

c_1 = 1/3 + (1/3)e^{j2π/3} + (1/3)e^{j4π/3} = 0

c_2 = 1/3 + (1/3)e^{j4π/3} + (1/3)e^{j8π/3} = 0

5.8 Problems Requiring Cumulative Knowledge

5.78

X_n = (1/2)U_n + (1/2)^2 U_{n-1} + ... + (1/2)^n U_1 ,   n ≥ 1 .
0 n—2n 2 71—1 2 In— This “lowpass ﬁlter” weighs recent samples more heavily than older samples. Note
that we can also write Xn as follows: 1 l
=— _ — = >
Xn 2 nl+2Un X0 0, n_1 We will see in Chapter 6 that Xn is an autoregressive random process. a> mm = EEEerw—Jsﬂam i=0 i=0 = ;E[v11<%>”=Em (14%)") 2
0 since E[U] = 0. 5.8. Problems Requiring Cumulative Knowledge n—l 1 23'
E[X_n^2] = Σ_{j=0}^{n-1} (1/2)^{2(j+1)} E[U^2] ,   since the U_j are iid, where E[U^2] = σ^2 ,
        = (σ^2/3) (1 - (1/4)^n) .

VAR(X_n) = E[X_n^2] - E[X_n]^2 = (σ^2/3) (1 - (1/4)^n) .

We could also obtain these results as follows:

E[X_n] = (1/2)E[X_{n-1}] + (1/2)E[U_n] ,   E[X_0] = 0 ,   n ≥ 1 .
This first-order difference equation has solution E[X_n] = E[U] (1 - (1/2)^n). For the second moment, we have:

E[X_n^2] = E[ ( (1/2)X_{n-1} + (1/2)U_n )^2 ]
        = (1/4)E[X_{n-1}^2] + (1/2)E[X_{n-1} U_n] + (1/4)E[U_n^2]
        = (1/4)E[X_{n-1}^2] + (1/4)E[U^2] ,

since E[X_{n-1} U_n] = 0. This first-order difference equation has solution:

E[X_n^2] = (σ^2/3) (1 - (1/4)^n) .

b) The characteristic function of X_n is:
Φ_{X_n}(ω) = E[e^{jωX_n}] = E[ e^{jω Σ_{j=0}^{n-1} (1/2)^{j+1} U_{n-j}} ]
           = E[e^{jω(1/2)U_n}] E[e^{jω(1/4)U_{n-1}}] ... E[e^{jω(1/2)^n U_1}] .

The U_n are iid Gaussian RV's with characteristic function

Φ_{U_n}(ω) = E[e^{jωU_n}] = e^{-σ^2 ω^2 / 2} .

Therefore

Φ_{X_n}(ω) = Π_{j=0}^{n-1} e^{-σ^2 (1/4)^{j+1} ω^2 / 2} = e^{-(σ^2/3)(1 - (1/4)^n) ω^2 / 2} .

Thus X_n is a zero-mean Gaussian random variable with the variance found in part a).
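The collapse of the product into the closed-form exponent can be cross-checked numerically; σ, ω, and n below are arbitrary illustrative values:

```python
import math

# Compare the product of scaled Gaussian characteristic functions with the
# closed form exp(-(sigma^2/3)(1 - (1/4)^n) w^2 / 2).
sigma, omega, n = 1.3, 0.7, 12   # illustrative values
prod = 1.0
for j in range(n):
    prod *= math.exp(-sigma**2 * 0.25**(j + 1) * omega**2 / 2)
closed = math.exp(-(sigma**2 / 3) * (1 - 0.25**n) * omega**2 / 2)
print(prod, closed)   # the two agree to machine precision
```

This is just the geometric-series identity Σ_{j=0}^{n-1} (1/4)^{j+1} = (1/3)(1 - (1/4)^n) applied inside the exponent.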
As n → ∞, X_n approaches a zero-mean Gaussian random variable with variance σ^2/3.

c) The result in part b) shows that X_n converges in distribution to a Gaussian
random variable X with zero mean and variance σ^2/3.

To determine whether X_n converges in mean-square sense, consider the Cauchy
criterion in Eq. 5.50. Consider X_n and X_{n+m}:

E[(X_{n+m} - X_n)^2] = E[ ( Σ_{j=n}^{n+m-1} (1/2)^{j+1} U_{n+m-j} )^2 ]
                    = Σ_{j=n}^{n+m-1} Σ_{j'=n}^{n+m-1} (1/2)^{j+j'+2} E[U_{n+m-j} U_{n+m-j'}]
                    = σ^2 Σ_{j=n}^{n+m-1} (1/4)^{j+1} ,   since the U_j are iid with zero mean
                    = (σ^2/3) ( (1/4)^n - (1/4)^{n+m} ) → 0   as n, m → ∞ .

Therefore X_n converges in mean-square sense.

To determine almost-sure convergence of X_n would take us beyond the scope of the
text. See Gray and Davisson, page 183, for a discussion of how this is done.
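The limiting variance σ^2/3 can be checked by simulating the recursion directly; a sketch with illustrative parameter choices (σ, step count, and number of paths are arbitrary):

```python
import numpy as np

# Simulate X_n = (1/2) X_{n-1} + (1/2) U_n, X_0 = 0, with iid Gaussian U_n,
# and compare the empirical variance with sigma^2/3 * (1 - (1/4)^n).
rng = np.random.default_rng(4)
sigma, n_steps, n_paths = 1.0, 50, 200_000

X = np.zeros(n_paths)
for _ in range(n_steps):
    X = 0.5 * X + 0.5 * rng.normal(scale=sigma, size=n_paths)

var_sim = X.var()
var_theory = sigma**2 / 3 * (1 - 0.25**n_steps)   # from part a), ~ sigma^2/3
print(var_sim, var_theory)
```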