CME308: Assignment 2

Due Date: This assignment is due on Thursday, 22 April, 2010, by 5pm under the door of 380-383V. See the course website for the policy on incentives for LaTeX solutions.

Topics:

- Convergence in distribution and in probability, and important variations on these ideas, including Slutsky's Lemma, the Continuous Mapping Theorem, etc. The notes from Professor Glynn cover these, and you can also find extensive coverage on Wikipedia.
- MLE and its properties. See Notes 1.
- The Delta Method. See Chapter 3 of Professor Glynn's notes (or Appendix A2), Casella & Berger, or Wikipedia.

Problem 1 (10 pts): This problem will cover some useful convergence results.

1. Suppose $X_n \overset{D}{\to} c$, where $c$ is a non-random constant. Show that this implies $X_n \overset{p}{\to} c$. (Hint: Express $P(|X_n - c| > \delta)$ in terms of the distribution function of $X_n$.)
2. Suppose $X_n \overset{D}{\to} X$ and $Y_n \overset{p}{\to} a$, where $a$ is a constant. Show that $X_n Y_n \overset{D}{\to} aX$.
3. Under the assumptions of part 2, show that $X_n + Y_n \overset{D}{\to} X + a$.
4. Extend parts 2 and 3 to the vector case, so that $X_n, Y_n, a \in \mathbb{R}^p$ and the product in part 2 becomes the standard inner product.

Solution:

1. Using the hint, and writing $F_{X_n}$ for the distribution function of $X_n$,
$$P(|X_n - c| > \delta) \le F_{X_n}(c - \delta) + 1 - F_{X_n}(c + \delta) \to F_X(c - \delta) + 1 - F_X(c + \delta) = 0 + 1 - 1 = 0,$$
where $F_X$ is the distribution function of the constant $c$ (a step function jumping from 0 to 1 at $c$); since $c - \delta$ and $c + \delta$ are continuity points of $F_X$, convergence in distribution applies at both points.

2. Remark: As an important aside, if we have $U_n \overset{p}{\to} U$ and $V_n \overset{p}{\to} V$, then we can also say that $U_n$ and $V_n$ converge jointly, i.e. $(U_n, V_n) \overset{p}{\to} (U, V)$. This comes from the general result that a random vector $Z_n$ converges in probability to $Z$ if and only if each component $Z_n^{(i)}$ of the vector converges in probability to $Z^{(i)}$. Here, however, we are interested in convergence in distribution, so suppose that $U_n \overset{D}{\to} U$ and $V_n \overset{D}{\to} V$. In this case, we cannot necessarily conclude the joint convergence $(U_n, V_n) \overset{D}{\to} (U, V)$. This is because information about the marginal distributions of the vector $(U_n, V_n)$ (i.e.
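As a quick sanity check (not part of the assignment), part 1 can be illustrated numerically. The construction below is an assumed toy example: take $X_n = c + Z/\sqrt{n}$ with $Z \sim N(0,1)$, so $X_n \overset{D}{\to} c$, and estimate $P(|X_n - c| > \delta)$ by Monte Carlo to watch it shrink toward 0.

```python
import numpy as np

# Toy illustration of part 1: convergence in distribution to a constant
# forces convergence in probability.  Here X_n = c + Z/sqrt(n), Z ~ N(0,1).
rng = np.random.default_rng(0)
c, delta, trials = 2.0, 0.1, 100_000

def prob_deviation(n):
    """Empirical estimate of P(|X_n - c| > delta) over many trials."""
    x_n = c + rng.standard_normal(trials) / np.sqrt(n)
    return np.mean(np.abs(x_n - c) > delta)

for n in (1, 10, 100, 1000):
    # The estimated deviation probability decreases as n grows.
    print(n, prob_deviation(n))
```

The specific sequence $X_n = c + Z/\sqrt{n}$ is only one convenient choice; the proof in part 1 applies to any sequence converging in distribution to a constant.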
information about $U_n$ and $V_n$) is generally insufficient to determine the joint distribution, and hence joint convergence. There are two situations (probably more, but for our purposes just these two) in which we can in fact conclude joint convergence of $(U_n, V_n)$. The first involves the Cramér–Wold Theorem, which states that for $Z_n, Z \in \mathbb{R}^d$,
$$Z_n \overset{D}{\to} Z \iff c^T Z_n \overset{D}{\to} c^T Z \quad \text{for all } c \in \mathbb{R}^d.$$
The right-to-left direction of this equivalence shows that we need more than convergence in distribution of each component of the vector: we need every linear combination to converge in distribution. The other situation in which we can conclude joint convergence of $(U_n, V_n)$ is when $V_n$ converges in distribution to a non-random constant, and is therefore (by part 1) converging in probability to that non-random constant....
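The failure of marginal convergence to pin down the joint limit can be seen concretely. The example below is an assumed illustration: with $Z \sim N(0,1)$, the pairs $(Z, Z)$ and $(Z, -Z)$ have identical $N(0,1)$ marginals, yet the Cramér–Wold direction $c = (1, 1)^T$ gives $2Z$ in one case and the constant $0$ in the other, so the joint limits differ.

```python
import numpy as np

# Identical marginals, different joint distributions.
# Case A: (U, V) = (Z, Z); Case B: (U, V) = (Z, -Z).  Both marginals
# are N(0,1), but the linear combination U + V distinguishes them.
rng = np.random.default_rng(1)
z = rng.standard_normal(100_000)

sum_a = z + z      # case A: U + V = 2Z, variance 4
sum_b = z + (-z)   # case B: U + V = 0 identically, variance 0

print(np.var(sum_a), np.var(sum_b))
```

This is exactly why the Cramér–Wold Theorem demands convergence of all linear combinations, not just of the components.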
This note was uploaded on 06/17/2010 for the course CME 308, taught by Professor Peter Glynn during the Spring '08 term at Stanford.