UC Berkeley Department of Statistics
STAT 210A: Introduction to Mathematical Statistics
Problem Set 1 Solutions, Fall 2006
Issued: Thursday, August 31, 2006
Due: Thursday, September 7, 2006

Problem 1.1

Solution.

1. Let
$$Y_n = \begin{cases} 0 & \text{with probability } 1 - \tfrac{1}{n} \\ n & \text{with probability } \tfrac{1}{n} \end{cases}$$
Clearly $E(Y_n) = 1$ for all $n$ and hence $\lim_{n\to\infty} E(Y_n) = 1$. However, for all $\epsilon > 0$,
$$P(|Y_n| \geq \epsilon) \leq \frac{1}{n},$$
and hence, for all $\epsilon > 0$, $\lim_{n\to\infty} P(|Y_n| \geq \epsilon) = 0$, so $Y_n \stackrel{p}{\to} 0$.

2. Let
$$Y_n = \begin{cases} -n & \text{with probability } \tfrac{1}{2n^2} \\ 0 & \text{with probability } 1 - \tfrac{1}{n^2} \\ n & \text{with probability } \tfrac{1}{2n^2} \end{cases}$$
Then $E(Y_n) = 0$ and $\mathrm{var}(Y_n) = 2 n^2 \cdot \frac{1}{2n^2} = 1$ for all $n$, and therefore $\lim_{n\to\infty} \mathrm{var}(Y_n) = 1$. As in part 1, for all $\epsilon > 0$,
$$0 \leq P(|Y_n| \geq \epsilon) \leq \frac{1}{n^2},$$
and hence, for all $\epsilon > 0$, $\lim_{n\to\infty} P(|Y_n| \geq \epsilon) = 0$, so $Y_n \stackrel{p}{\to} 0$.

Problem 1.2

See the examples in Section 2.2 of Large Sample Theory, by Erich Lehmann.

1. We have
$$E(\bar{X} - \mu)^2 = E\left[ \frac{\left( \sum_{i=1}^n (X_i - \mu) \right)^2}{n^2} \right].$$
Given independence,
$$E(\bar{X} - \mu)^2 = \frac{\sum_{i=1}^n \sigma_i^2}{n^2} \to 0,$$
establishing convergence in quadratic mean ($L_2$ convergence). Convergence in probability follows from convergence in quadratic mean.

2. Once it is proved that $\mathrm{var}(\tilde{X}_n) \leq \mathrm{var}(\bar{X}_n)$, $L_2$ convergence of $\bar{X}_n$ implies $L_2$ convergence of $\tilde{X}_n$. Convergence in probability follows. The statement is true both in the original form
$$\tilde{X} = \frac{\sum_i X_i / \sigma_i}{\sum_i 1/\sigma_i}$$
and in the corrected form
$$\tilde{X} = \frac{\sum_i X_i / \sigma_i^2}{\sum_i 1/\sigma_i^2}.$$
For the corrected form: write the weighted mean as the estimate for $\theta$ in the regression model $X_i = \theta + \ldots$
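Not part of the original solutions, but a quick Monte Carlo sketch of Problem 1.1(1) may help: the function names and the choice $\epsilon = 0.5$ below are illustrative assumptions. The simulation estimates $E(Y_n)$ (which stays near 1 for every $n$) and $P(|Y_n| \geq \epsilon)$ (which shrinks like $1/n$), showing how the mean can fail to track the limit in probability.

```python
import random

# Illustrative sketch (not from the solutions): Y_n = n with probability
# 1/n and Y_n = 0 otherwise, as in Problem 1.1(1).

def sample_y(n, rng):
    """Draw one realization of Y_n."""
    return float(n) if rng.random() < 1.0 / n else 0.0

def mc_mean_and_tail(n, eps=0.5, trials=200_000, seed=0):
    """Monte Carlo estimates of E(Y_n) and P(|Y_n| >= eps)."""
    rng = random.Random(seed)
    draws = [sample_y(n, rng) for _ in range(trials)]
    mean = sum(draws) / trials                         # should be near 1
    tail = sum(abs(y) >= eps for y in draws) / trials  # should be near 1/n
    return mean, tail

for n in (10, 100, 1000):
    mean, tail = mc_mean_and_tail(n)
    print(f"n={n:5d}  mean~{mean:.3f}  P(|Y_n|>=0.5)~{tail:.4f}")
```

The estimated mean hovers near 1 for all $n$, while the tail probability collapses toward 0, which is exactly the gap between $\lim E(Y_n) = 1$ and $Y_n \stackrel{p}{\to} 0$.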
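As a supplement to Problem 1.2(2) (an illustrative sketch, not part of the original solutions; the function names are my own), the inequality $\mathrm{var}(\tilde{X}_n) \leq \mathrm{var}(\bar{X}_n)$ can be checked numerically. For independent $X_i$ with variances $\sigma_i^2$, the plain mean has variance $\sum_i \sigma_i^2 / n^2$, while the inverse-variance weighted mean in the corrected form has variance $1 / \sum_i (1/\sigma_i^2)$.

```python
# Illustrative sketch: exact variances of the plain mean and the
# inverse-variance weighted mean for independent X_i with given variances.

def var_plain_mean(s2):
    """Variance of (1/n) * sum(X_i) when X_i are independent with variances s2."""
    n = len(s2)
    return sum(s2) / n**2

def var_weighted_mean(s2):
    """Variance of (sum X_i/s2_i) / (sum 1/s2_i) under independence."""
    weights = [1.0 / v for v in s2]
    return 1.0 / sum(weights)

variances = [1.0, 4.0, 9.0, 0.25]
print(var_plain_mean(variances))     # 14.25 / 16 = 0.890625
print(var_weighted_mean(variances))  # smaller: weighting downweights noisy X_i
```

When all the $\sigma_i^2$ are equal, the two estimators coincide and the variances match; otherwise the weighted mean is strictly better, which is the Gauss-Markov comparison the solution appeals to.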