We started with the premise that $n^{-\delta} a_n = o_p(1)$.
We have similar definitions for being bounded in probability. If $n^{-\delta} W_n$ is bounded in probability we write $W_n = O_p(n^{\delta})$, which is read "$W_n$ is of order no greater than $n^{\delta}$ in probability." The definition also applies to nonrandom sequences. The notation $O_p(1)$ again comes with $\delta = 0$. The definitions imply that if $W_n = o_p(n^{\delta})$ then $W_n = O_p(n^{\delta})$. In practice we need relatively little of the $o_p/O_p$ apparatus. It is useful for theoretical work, but we will mainly use $o_p(1)$, $O_p(1)$, $O_p(n^{-1/2})$, and $O_p(n^{1/2})$.
EXAMPLE: Earlier we considered a standardized partial sum. We can apply the $O_p$ apparatus to the partial sum and sample average. Let $\{X_i : i = 1, 2, \dots\}$ be random variables with finite second moment that are pairwise uncorrelated, and assume $E(X_i) = 0$ and $\mathrm{Var}(X_i) = E(X_i^2) = \sigma^2$. Let $Y_n = \sum_{i=1}^n X_i$ be the partial sum and $\bar{X}_n = Y_n/n$ be the sample average. Using Chebyshev's inequality we showed that $n^{-1/2} Y_n = O_p(1)$, which means $Y_n = O_p(n^{1/2})$. We can also write this as $n^{1/2} \bar{X}_n = O_p(1)$, which means $\bar{X}_n = O_p(n^{-1/2})$.
Note that when $Y_n$ is the partial sum, we can also write $Y_n = o_p(n^{(1/2)+\delta})$ for any $\delta > 0$, that is, $n^{-(1/2)-\delta} Y_n \overset{p}{\to} 0$. On the other hand, $Y_n \neq O_p(n^{(1/2)-\delta})$ for any $\delta > 0$. In other words, $n^{1/2}$ is the function of $n$ that we must divide $Y_n$ by in order to obtain a sequence that is bounded in probability but does not have a variance shrinking to zero. We will make a stronger statement about such sequences in the next section. If $E(X_i) = \mu \neq 0$, we need to use $n^{1/2}(\bar{X}_n - \mu) = n^{-1/2}(Y_n - n\mu)$ as the sequence that is $O_p(1)$.
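These rate claims can be illustrated by simulation. The sketch below is not from the notes and the value $\delta = 0.25$ is an arbitrary illustrative choice; it exploits the fact that a partial sum of $n$ i.i.d. standard normals is exactly $\mathrm{Normal}(0, n)$, so $Y_n$ can be drawn directly. The three scalings behave as stated: $n^{-1/2} Y_n$ stays bounded, $n^{-(1/2)-\delta} Y_n$ shrinks toward zero, and $n^{-(1/2)+\delta} Y_n$ blows up.

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 0.25  # any delta > 0 makes the same point

for n in [100, 10_000, 1_000_000]:
    # Partial sum of n i.i.d. N(0,1) draws is exactly N(0, n),
    # so draw Y_n directly rather than summing n variables.
    Yn = rng.normal(0.0, np.sqrt(n), size=5000)
    print(n,
          (Yn / n**0.5).std(),            # ~ 1 for all n: Y_n = O_p(n^{1/2})
          (Yn / n**(0.5 + delta)).std(),  # -> 0: Y_n = o_p(n^{(1/2)+delta})
          (Yn / n**(0.5 - delta)).std())  # grows: Y_n is not O_p(n^{(1/2)-delta})
```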
6. Convergence in Distribution

Convergence in distribution is important for approximating probabilities involving averages and functions of them. It is very important for approximating the sampling distributions of estimators in a variety of settings (covered later). Consider the following general problem. We assume $\{X_i : i = 1, 2, \dots\}$ is a sequence of i.i.d. random variables with finite second moment. Let $\mu = E(X_i)$ and $\sigma^2 = \mathrm{Var}(X_i)$. Let $\bar{X}_n$ be the sample average of the first $n$ variables in the sequence.
We immediately know the first two moments of $\bar{X}_n$:

$E(\bar{X}_n) = \mu$, $\mathrm{Var}(\bar{X}_n) = \sigma^2/n$.

If the common population distribution is $\mathrm{Normal}(\mu, \sigma^2)$, then we know $\bar{X}_n \sim \mathrm{Normal}(\mu, \sigma^2/n)$. But what if we do not know the population distribution (the distribution of each $X_i$), or it is too complicated to be able to find $D(\bar{X}_n)$? It is very helpful to be able to approximate the distribution of $\bar{X}_n$ for a broad range of population distributions.
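The two moments above are straightforward to confirm by simulation. In the sketch below, the population values $\mu = 3$, $\sigma^2 = 2$, and the sample size $n = 50$ are arbitrary illustrative choices, not from the notes; many samples are drawn from a normal population and the sample averages are checked against $E(\bar{X}_n) = \mu$ and $\mathrm{Var}(\bar{X}_n) = \sigma^2/n$:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma2, n = 3.0, 2.0, 50  # illustrative population parameters

# Draw 50,000 samples of size n from Normal(mu, sigma2) and compute
# the sample average of each; the averages should have mean mu and
# variance sigma2 / n.
Xbar = rng.normal(mu, np.sqrt(sigma2), size=(50000, n)).mean(axis=1)
print(Xbar.mean())  # close to mu = 3
print(Xbar.var())   # close to sigma2 / n = 0.04
```

With a normal population the averages are exactly $\mathrm{Normal}(\mu, \sigma^2/n)$; the question the notes pose next is how to approximate this distribution when the population is not normal.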