or one that relies on the sample mean being an unbiased estimate of $\theta/2$:
$$\hat{\Theta} = \frac{2}{n}\sum_{i=1}^{n} X_i.$$

Solution to Problem 9.9. The PDF of $X_i$ is
$$f_{X_i}(x_i) = \begin{cases} 1, & \text{if } \theta \le x_i \le \theta+1, \\ 0, & \text{otherwise.} \end{cases}$$
The likelihood function is
$$f_X(x_1,\dots,x_n;\theta) = f_{X_1}(x_1;\theta)\cdots f_{X_n}(x_n;\theta) = \begin{cases} 1, & \text{if } \theta \le \min_{i=1,\dots,n} x_i \le \max_{i=1,\dots,n} x_i \le \theta+1, \\ 0, & \text{otherwise.} \end{cases}$$
Any value in the feasible interval
$$\Bigl[\max_{i=1,\dots,n} X_i - 1,\; \min_{i=1,\dots,n} X_i\Bigr]$$
maximizes the likelihood function and is therefore a ML estimator.

Any choice of estimator within the above interval is consistent. The reason is that $\min_{i=1,\dots,n} X_i$ converges in probability to $\theta$, while $\max_{i=1,\dots,n} X_i$ converges in probability to $\theta+1$ (cf. Example 5.6). Thus, both endpoints of the above interval converge to $\theta$.

Let us consider the estimator that chooses the midpoint
$$\hat{\Theta}_n = \frac{1}{2}\Bigl(\max_{i=1,\dots,n} X_i + \min_{i=1,\dots,n} X_i - 1\Bigr)$$
of the interval of ML estimates. We claim that it is unbiased. This claim can be verified purely on the basis of symmetry considerations, but nevertheless we provide a detailed calculation. We first find the CDFs of $\max_{i=1,\dots,n} X_i$ and $\min_{i=1,\dots,n} X_i$, then their PDFs (by differentiation), and then $E[\hat{\Theta}_n]$. The details are very similar to the ones for the preceding problem. We have, by straightforward calculation,
$$f_{\min_i X_i}(x) = \begin{cases} n(\theta+1-x)^{n-1}, & \text{if } \theta \le x \le \theta+1, \\ 0, & \text{otherwise,} \end{cases} \qquad f_{\max_i X_i}(x) = \begin{cases} n(x-\theta)^{n-1}, & \text{if } \theta \le x \le \theta+1, \\ 0, & \text{otherwise.} \end{cases}$$
Hence
$$E\Bigl[\min_{i=1,\dots,n} X_i\Bigr] = n\int_{\theta}^{\theta+1} x(\theta+1-x)^{n-1}\,dx = -n\int_{\theta}^{\theta+1} (\theta+1-x)^{n}\,dx + (\theta+1)\,n\int_{\theta}^{\theta+1} (\theta+1-x)^{n-1}\,dx = -n\int_{0}^{1} x^{n}\,dx + (\theta+1)\,n\int_{0}^{1} x^{n-1}\,dx = -\frac{n}{n+1} + \theta + 1 = \theta + \frac{1}{n+1}.$$
Similarly,
$$E\Bigl[\max_{i=1,\dots,n} X_i\Bigr] = \theta + \frac{n}{n+1},$$
and it follows that
$$E[\hat{\Theta}_n] = \frac{1}{2}\Bigl(E\Bigl[\max_{i=1,\dots,n} X_i\Bigr] + E\Bigl[\min_{i=1,\dots,n} X_i\Bigr] - 1\Bigr) = \theta.$$

Solution to Problem 9.10.
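As a sanity check on the unbiasedness claim (not part of the original solution), the midpoint estimator can be simulated with a short Monte Carlo sketch; the function names here are illustrative, and the assumed setup is $X_i \sim \text{Uniform}[\theta, \theta+1]$:

```python
import random

def midpoint_estimator(samples):
    """The ML midpoint estimate (max + min - 1) / 2 for theta
    when each X_i is uniform on [theta, theta + 1]."""
    return 0.5 * (max(samples) + min(samples) - 1.0)

def average_estimate(theta, n, trials, seed=0):
    """Monte Carlo average of the midpoint estimator over many
    independent samples of size n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Draw n observations uniform on [theta, theta + 1].
        samples = [theta + rng.random() for _ in range(n)]
        total += midpoint_estimator(samples)
    return total / trials

# With theta = 3, the empirical mean of the estimator should be
# close to 3, consistent with E[Theta_hat_n] = theta.
print(average_estimate(theta=3.0, n=10, trials=20000))
```

Increasing `n` should also shrink the spread of the estimates, reflecting the consistency argument above.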
(a) To compute $c(\theta)$, we write
$$1 = \sum_{k=0}^{\infty} p_K(k;\theta) = \sum_{k=0}^{\infty} c(\theta)\,e^{-\theta k} = \frac{c(\theta)}{1 - e^{-\theta}},$$
which yields $c(\theta) = 1 - e^{-\theta}$.

(b) The PMF of $K$ is a shifted geometric distribution with parameter $p = 1 - e^{-\theta}$ (shifted by 1 to the left, so that it starts at $k = 0$). Therefore,
$$E[K] = \frac{1}{p} - 1 = \frac{1}{1 - e^{-\theta}} - 1 = \frac{e^{-\theta}}{1 - e^{-\theta}} = \frac{1}{e^{\theta} - 1},$$
and the variance is the same as for the geometric with parameter $p$:
$$\mathrm{var}(K) = \frac{1-p}{p^2} = \frac{e^{-\theta}}{(1 - e^{-\theta})^2}.$$

(c) Let $K_i$ be the number of photons emitted the $i$th time that the source is triggered. The joint PMF of $K = (K_1, \dots, K_n)$ is
$$p_K(k_1, \dots, k_n; \theta) = c(\theta)^n \prod_{i=1}^{n} e^{-\theta k_i} = c(\theta)^n e^{-\theta s_n}, \qquad \text{where } s_n = \sum_{i=1}^{n} k_i. \dots$$
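The closed forms in parts (a) and (b) can be checked numerically (this check is not part of the original solution): summing the PMF out to a large cutoff should recover the normalization, the mean, and the variance. The function name and cutoff `kmax` are illustrative choices:

```python
import math

def check_photon_pmf(theta, kmax=200):
    """Numerically check c(theta), E[K], and var(K) for the PMF
    p_K(k; theta) = c(theta) * exp(-theta * k), k = 0, 1, 2, ...
    by truncated summation up to kmax."""
    c = 1.0 - math.exp(-theta)                      # part (a)
    pmf = [c * math.exp(-theta * k) for k in range(kmax + 1)]
    total = sum(pmf)                                # should be ~1
    mean = sum(k * p for k, p in enumerate(pmf))
    second = sum(k * k * p for k, p in enumerate(pmf))
    var = second - mean ** 2
    # Closed forms from part (b):
    mean_formula = 1.0 / (math.exp(theta) - 1.0)
    var_formula = math.exp(-theta) / (1.0 - math.exp(-theta)) ** 2
    return total, mean, mean_formula, var, var_formula

print(check_photon_pmf(theta=1.0))
```

For $\theta = 1$ the truncation error at `kmax = 200` is negligible, so the numeric sums agree with the closed forms to near machine precision.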
This note was uploaded on 01/11/2011 for the course MATH 170 taught by Professor Staff during the Spring '08 term at UCLA.