stat210a_2007_hw11_solutions

UC Berkeley Department of Statistics
STAT 210A: Introduction to Mathematical Statistics
Problem Set 11 - Solutions
Fall 2007
Issued: Thursday, December 19
Due: Thursday, December 29

Problem 11.1

(a) For any t > 0, by Markov's inequality,

P_0( (1/n) \sum_{i=1}^n X_i >= \mu_0 + \epsilon ) = P_0( e^{t \sum_i X_i} >= e^{t n (\mu_0 + \epsilon)} ) <= \inf_{t>0} E_0[ e^{t \sum_i X_i} ] / e^{t n (\mu_0 + \epsilon)} = e^{-n \epsilon^2 / (2 \sigma_0^2)},

the infimum being attained at t = \epsilon / \sigma_0^2.

(b) Similarly, for any t > 0, by Markov's inequality,

P_1( (1/n) \sum_{i=1}^n X_i <= \gamma ) = P_1( e^{-t \sum_i X_i} >= e^{-t n \gamma} ) <= \inf_{t>0} E_1[ e^{-t \sum_i X_i} ] e^{t n \gamma} = e^{-n (\gamma - \mu_1)^2 / (2 \sigma_1^2)},

the infimum being attained at t = (\mu_1 - \gamma) / \sigma_1^2 > 0. Thus

lim_{n -> \infty} (1/n) log P_0( (1/n) \sum_{i=1}^n X_i >= \gamma ) = -(\gamma - \mu_0)^2 / (2 \sigma_0^2)    (1)
lim_{n -> \infty} (1/n) log P_1( (1/n) \sum_{i=1}^n X_i <= \gamma ) = -(\mu_1 - \gamma)^2 / (2 \sigma_1^2)    (2)

If (1) > (2), then (1) dominates the limit in the problem, and vice versa. Thus, to minimize the limit we set (1) = (2), i.e. (\gamma - \mu_0)/\sigma_0 = (\mu_1 - \gamma)/\sigma_1, which gives the threshold

\gamma = (\sigma_0 \mu_1 + \sigma_1 \mu_0) / (\sigma_0 + \sigma_1).

Problem 11.2

a) We prove the result in two steps.

i) First, for k = 1 we have X_{(1)} = min_{1<=i<=n} X_i, so

P(X_{(1)} > x) = P(X_i > x for all i = 1, ..., n) = exp(-n \lambda x),

which proves that X_{(1)} ~ Exp(n\lambda), and as a result n X_{(1)} ~ Exp(\lambda).

ii) Now we prove the result for general k. First notice that, for x >= t,

P(X_k > x | X_k > t) = P(X_k > x) / P(X_k > t) = exp(-\lambda (x - t)),

so, given that X_k > t, X_k has the same distribution as t + Y_k, where Y_k ~ Exp(\lambda). Now suppose we sequentially add the observed values to the ordered sample X_{(1)}, X_{(2)}, .... When the time comes to place X_{(k)}, there are n - k + 1 independent exponential variables remaining, each known to be at least as large as X_{(k-1)}. Let Z_j, j = 1, ..., n - k + 1, denote these remaining variables. By independence and the result above, Z_j = X_{(k-1)} + Y_j with Y_j iid Exp(\lambda), so X_{(k)} = X_{(k-1)} + min_{1<=j<=n-k+1} Y_j; that is, X_{(k)} - X_{(k-1)} is distributed as min_{1<=j<=n-k+1} Y_j. Repeating the argument used for X_{(1)} then yields X_{(k)} - X_{(k-1)} ~ Exp((n - k + 1)\lambda), and the result follows by rescaling: (n - k + 1)(X_{(k)} - X_{(k-1)}) ~ Exp(\lambda).
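The rescaling in part (a) can be checked numerically. The sketch below is my own illustration (not part of the original solution; the constants are arbitrary): it simulates exponential samples and verifies that each rescaled spacing (n - k + 1)(X_{(k)} - X_{(k-1)}) has mean close to 1/\lambda.

```python
import random

# Numeric check of Problem 11.2(a): for X_1,...,X_n iid Exp(lambda), the
# rescaled spacings (n - k + 1)*(X_(k) - X_(k-1)) should be iid Exp(lambda),
# hence each should have mean 1/lambda.
random.seed(0)
lam, n, reps = 2.0, 10, 20000

# spacing_sums[k] accumulates the (k+1)-th rescaled spacing over replications
spacing_sums = [0.0] * n
for _ in range(reps):
    xs = sorted(random.expovariate(lam) for _ in range(n))
    prev = 0.0
    for k, x in enumerate(xs):          # k = 0 corresponds to X_(1)
        spacing_sums[k] += (n - k) * (x - prev)
        prev = x

means = [s / reps for s in spacing_sums]
# Every rescaled spacing should have mean close to 1/lambda = 0.5.
print([round(m, 2) for m in means])
```

All n entries fluctuate around 1/\lambda, consistent with the claim that the rescaled spacings are identically distributed.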
b) Since the moment generating function of the exponential distribution is M(t) = (1 - t/\lambda)^{-1}, the rescaled spacings D_i = (n - i + 1)(X_{(i)} - X_{(i-1)}) ~ Exp(\lambda) from part (a) satisfy

E(D_i^2) = 2/\lambda^2,    Var(D_i^2) = 4!/\lambda^4 - (2/\lambda^2)^2 = 20/\lambda^4.

From the weak law of large numbers,

(1/n) \sum_{i=1}^n D_i^2 ->_p 2/\lambda^2    and    \bar{X}_n ->_p 1/\lambda.

It follows from Slutsky's theorem that \bar{X}_n^2 ->_p 1/\lambda^2. Now, by the central limit theorem,

(1/\sqrt{n}) ( \sum_{i=1}^n D_i^2 - 2n/\lambda^2 ) ->_d N(0, 20/\lambda^4),

and as a result

(\lambda^2/\sqrt{n}) ( \sum_{i=1}^n D_i^2 - 2n/\lambda^2 ) ->_d N(0, 20).

Slutsky's theorem then yields

(1/(\sqrt{n} \bar{X}_n^2)) ( \sum_{i=1}^n D_i^2 - 2n \bar{X}_n^2 ) ->_d N(0, 20).

(c) Under the null hypothesis that the X_i are exponentially distributed, T_n has the same asymptotic distribution regardless of the value of the parameter of the distribution; that is, T_n is asymptotically pivotal. A deviation from the null in this case corresponds to a deviation from the hypothesis that X_i ~ Exp(\lambda) for any \lambda.

Problem 11.3

The prior is strictly positive on (0, \infty), and the regularity conditions for the MLE are satisfied for the Poisson distribution. Thus, by the Bernstein-von Mises theorem,

p(v | X_1, ..., X_n) ->_d N(0, 1/I(\theta^*)),

where v = \sqrt{n}(\theta - \hat{\theta}_{MLE}) and \hat{\theta}_{MLE} = (1/n) \sum_{i=1}^n X_i. For the Poisson distribution, I(\theta^*) = 1/\theta^*. Thus, for large n,

p(\theta | X_1, ..., X_n) ≈ N( (1/n) \sum_{i=1}^n X_i, \theta^*/n ).

This makes sense: as n increases, \theta concentrates near \theta^* and the posterior variance shrinks.
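The normal approximation of Problem 11.3 can be illustrated with a conjugate prior, where the exact posterior is available in closed form. The sketch below is my own (not part of the original solution); the Gamma(1, 1) prior and the inversion sampler for the Poisson draws are illustrative assumptions. With X_i ~ Poisson(\theta^*) and a Gamma(a, b) prior, the exact posterior is Gamma(a + \sum X_i, b + n), which for large n should nearly match N(\bar{X}_n, \bar{X}_n/n).

```python
import math
import random

# Compare the exact Gamma posterior under a conjugate Gamma(a, b) prior with
# the Bernstein-von Mises normal approximation N(Xbar, Xbar/n) for Poisson data.
random.seed(1)
theta_star, n = 3.0, 2000
a, b = 1.0, 1.0                       # hypothetical prior hyperparameters

def poisson(mu):
    # Simple inversion sampler for Poisson(mu); adequate for moderate mu.
    u, k, p = random.random(), 0, math.exp(-mu)
    c = p
    while u > c:
        k += 1
        p *= mu / k
        c += p
    return k

xs = [poisson(theta_star) for _ in range(n)]
s = sum(xs)
xbar = s / n

def gamma_logpdf(t, shape, rate):
    return (shape * math.log(rate) - math.lgamma(shape)
            + (shape - 1) * math.log(t) - rate * t)

def normal_logpdf(t, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - (t - mean) ** 2 / (2 * var)

# The two log densities should nearly agree in a neighborhood of the MLE.
for t in (xbar - 0.05, xbar, xbar + 0.05):
    exact = gamma_logpdf(t, a + s, b + n)
    approx = normal_logpdf(t, xbar, xbar / n)
    print(round(t, 3), round(exact, 2), round(approx, 2))
```

The agreement near \hat{\theta}_{MLE} reflects the theorem: the influence of the prior washes out at rate 1/n.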
Problem 11.4

Writing the conditional density on R_+^n, we know that

p(X | \theta) = I(M_n <= \theta) / \theta^n,

where M_n = max_{1<=i<=n} X_i, and so the posterior of \theta given X is

\pi(\theta | X) = I(\theta - M_n >= 0) ( \lambda(\theta)/\theta^n ) / \int_{M_n}^\infty \lambda(t)/t^n dt.

Now define Y_n = n(\theta - M_n), whose density at y can be computed from \pi(\theta | X) by a linear change of variables:

\pi_Y(y | X) = I(y >= 0) (1/n) \lambda(M_n + y/n) / (M_n + y/n)^n / \int_{M_n}^\infty \lambda(t)/t^n dt
             = I(y >= 0) \lambda(M_n + y/n) (1 + y/(n M_n))^{-n} / ( n \int_{M_n}^\infty (M_n/t)^n \lambda(t) dt ).

For n -> \infty we know:

M_n ->_p \theta^*,
y/(n M_n) ->_p 0,
(1 + y/(n M_n))^{-n} ->_p exp(-y/\theta^*),
\lambda(M_n + y/n) ->_p \lambda(\theta^*), by continuity of \lambda.

Now it is enough to study the denominator. Note first that

\int_0^\infty I(t >= M_n) (M_n/t)^n \lambda(t) dt <= \int_0^\infty min{1, (M_n/t)^n} \lambda(t) dt <= \int_0^\infty \lambda(t) dt = 1,

and, after the change of variables t = M_n + s/n,

n \int_{M_n}^\infty (M_n/t)^n \lambda(t) dt = \int_0^\infty (1 + s/(n M_n))^{-n} \lambda(M_n + s/n) ds.

The integrand converges pointwise to exp(-s/\theta^*) \lambda(\theta^*), so by the dominated convergence theorem

n \int_{M_n}^\infty (M_n/t)^n \lambda(t) dt ->_p \int_0^\infty exp(-s/\theta^*) \lambda(\theta^*) ds = \theta^* \lambda(\theta^*).

Combining the limits,

\pi_Y(y | X) ->_p (1/\theta^*) exp(-y/\theta^*) I(y >= 0),

i.e. the posterior distribution of n(\theta - M_n) converges to the Exp(1/\theta^*) distribution, and the result follows.

Problem 11.5

(a) We have \mu(F) = E_F(X), where X ~ F; hence \mu(G) = E_G(Y) for Y ~ G. Since Y = aX, it follows that \mu(G) = E_G(Y) = a E_F(X) = a \mu(F). In addition,

G(y) = P(Y <= y) = P(X <= y/a) = F(y/a),

so dG(y) = dF(y/a). Now, substituting x = y/a,

\theta(G) = \int (y/\mu(G)) log( y/\mu(G) ) dG(y)
          = \int (a x / (a \mu(F))) log( a x / (a \mu(F)) ) dF(x)
          = \int (x/\mu(F)) log( x/\mu(F) ) dF(x) = \theta(F),

so \theta is invariant under rescaling.

(b) Letting \delta_{x_i} denote a Dirac delta at x_i, we can write d\hat{F}(x) = (1/n) \sum_{i=1}^n \delta_{x_i}(x) dx. It follows that

\theta(\hat{F}) = \int (x/\mu(\hat{F})) log( x/\mu(\hat{F}) ) d\hat{F}(x) = (1/n) \sum_{i=1}^n (x_i/\mu(\hat{F})) log( x_i/\mu(\hat{F}) ),

where \mu(\hat{F}) = (1/n) \sum_{i=1}^n x_i = \bar{x}_n.

(c) First, a caution: \sqrt{n}(X_n - Y_n) ->_d N(0, 1) and Y_n - Z_n ->_p 0 do NOT imply \sqrt{n}(X_n - Z_n) ->_d N(0, 1); but if \sqrt{n}(Y_n - Z_n) ->_p 0, the conclusion does hold. Define

\tilde{\theta}_n = (1/n) \sum_{i=1}^n (X_i/\mu(F)) log( X_i/\mu(F) ).

Expanding the difference \hat{\theta}_n - \tilde{\theta}_n into terms each carrying a factor \bar{X}_n - \mu(F), Slutsky's theorem gives

\sqrt{n} ( \hat{\theta}_n - \tilde{\theta}_n ) ->_p 0,

since \bar{X}_n - \mu(F) ->_p 0 and both 1/x and (log x)/x are continuous at \mu(F) > 0.
Now, by the central limit theorem,

\sqrt{n} ( \tilde{\theta}_n - \theta(F) ) ->_d N( 0, Var( (X_i/\mu(F)) log( X_i/\mu(F) ) ) ),

and hence, by the argument above,

\sqrt{n} ( \hat{\theta}_n - \theta(F) ) ->_d N( 0, Var( (X_i/\mu(F)) log( X_i/\mu(F) ) ) ).

Alternatively, letting B(t) denote a Brownian bridge, we know that \sqrt{n} ( \hat{F}(F^{-1}(t)) - F(F^{-1}(t)) ) converges weakly to B(t); hence, by the same argument,

\sqrt{n} ( \hat{\theta}_n - \theta(F) ) ->_d \int_0^1 (F^{-1}(t)/\mu(F)) log( F^{-1}(t)/\mu(F) ) dB(t).

Problem 11.6

Note that when E(X_i) = \mu, Var(X_i) = \sigma^2, and m_4 = E(X_i - E(X_i))^4,

E[ (1/n) \sum_{i=1}^n (X_i - \bar{X})^2 ] = ((n-1)/n) \sigma^2,
Var[ (1/n) \sum_{i=1}^n (X_i - \bar{X})^2 ] = ((n-1)/n)^2 ( m_4/n - (n-3) \sigma^4 / (n(n-1)) ).

Indeed, for the mean, writing X_i - \bar{X} = (X_i - E(X)) + (E(X) - \bar{X}) and expanding (the cross term contributes -(2/n)\sigma^2),

E[ (1/n) \sum_{i=1}^n (X_i - \bar{X})^2 ] = \sigma^2 + (1/n)\sigma^2 - (2/n)\sigma^2 = ((n-1)/n) \sigma^2.

For the variance, write

(1/n) \sum_{i=1}^n (X_i - \bar{X})^2 = (1 - 1/n)(1/n) \sum_{i=1}^n X_i^2 - (2/n^2) \sum_{i<j} X_i X_j,

so

Var[ (1/n) \sum_{i=1}^n (X_i - \bar{X})^2 ] = (1 - 1/n)^2 (1/n^2) Var( \sum_i X_i^2 ) + (4/n^4) Var( \sum_{i<j} X_i X_j ) - (4(n-1)/n^4) Cov( \sum_i X_i^2, \sum_{i<j} X_i X_j ).

Here

Var( \sum_i X_i^2 ) = n Var(X_1^2),
Var( \sum_{i<j} X_i X_j ) = C(n,2) Var(X_1 X_2) + 2n C(n-1,2) Cov(X_1 X_2, X_1 X_3),
Cov( \sum_i X_i^2, \sum_{i<j} X_i X_j ) = n(n-1) Cov(X_1^2, X_1 X_2),

with

Var(X_1 X_2) = (E(X^2))^2 - (E(X))^4,
Cov(X_1 X_2, X_1 X_3) = E(X^2)(E(X))^2 - (E(X))^4,
Cov(X_1^2, X_1 X_2) = E(X^3) E(X) - E(X^2)(E(X))^2.

Collecting terms,

∴ Var[ (1/n) \sum_{i=1}^n (X_i - \bar{X})^2 ] = ((n-1)/n)^2 ( E(X - E(X))^4 / n - (n-3) (E(X^2) - (E(X))^2)^2 / (n(n-1)) ).

(a) Conditionally on the data, each bootstrap draw Y_i^* is uniform on {Y_1, ..., Y_n}, so

E(Y_i^*) = (1/n) \sum_{i=1}^n Y_i = \bar{Y},    Var(Y_i^*) = (1/n) \sum_{i=1}^n (Y_i - \bar{Y})^2.

E(T_n^*) and Var(T_n^*) then follow from the arguments above, with the population moments replaced by these empirical moments; in particular E(T_n^* | Y) = ((n-1)/n) (1/n) \sum_{i=1}^n (Y_i - \bar{Y})^2.

(b) By the arguments in (a) and above, the computation is straightforward.

If you have any questions about the grading or the solutions, please come see me (GSI, Choongsoon Bae). I can't be perfect. :-)
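The closed-form variance in Problem 11.6 can be sanity-checked by Monte Carlo. This sketch is my own (not part of the original solution); the Exp(1) sampling distribution, for which \sigma^2 = 1 and m_4 = 9, is an arbitrary illustrative choice.

```python
import random
import statistics

# Monte Carlo check of the Problem 11.6 formula
#   Var( (1/n) * sum (X_i - Xbar)^2 )
#     = ((n-1)/n)^2 * ( m4/n - (n-3)*sigma^4 / (n*(n-1)) ),
# here for X_i ~ Exp(1), where sigma^2 = 1 and m4 = E(X - 1)^4 = 9.
random.seed(2)
n, reps = 20, 100000

vals = []
for _ in range(reps):
    xs = [random.expovariate(1.0) for _ in range(n)]
    xbar = sum(xs) / n
    vals.append(sum((x - xbar) ** 2 for x in xs) / n)

sigma2, m4 = 1.0, 9.0                 # Var(X) and E(X - EX)^4 for Exp(1)
theory = ((n - 1) / n) ** 2 * (m4 / n - (n - 3) * sigma2 ** 2 / (n * (n - 1)))
empirical = statistics.pvariance(vals)
print(round(theory, 4), round(empirical, 4))
```

The empirical variance of the replicated statistic matches the formula to within simulation error, and the empirical mean of `vals` likewise matches ((n-1)/n) \sigma^2.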

## This note was uploaded on 10/17/2009 for the course STAT 210a taught by Professor Staff during the Fall '08 term at Berkeley.
