slides_7_converge

# For nonrandom sequences we drop the p subscript: $a_n = o(n^{\delta})$


For nonrandom sequences, we drop the "p" subscript: $a_n = o(n^{\delta})$ if $n^{-\delta} a_n \to 0$, and we say "$a_n$ is of order less than $n^{\delta}$." We get $o_p(1)$ when $\delta = 0$ in the definition: $n^0 = 1$. If $\delta < \gamma$, then $W_n = o_p(n^{\delta})$ implies $W_n = o_p(n^{\gamma})$, as is easily seen from
$$n^{-\gamma} W_n = n^{\delta - \gamma}\left(n^{-\delta} W_n\right) = o(1)\, o_p(1) = o_p(1),$$
because $n^{\delta - \gamma} \to 0$ when $\delta < \gamma$, and we started with the premise $n^{-\delta} W_n = o_p(1)$.
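A quick numeric sketch of this order arithmetic (the exponents $\delta = 0.3$ and $\gamma = 0.5$ are illustrative choices, not from the slides):

```python
# Nonrandom example: a_n = n**0.3 is o(n**0.5), because n**(-0.5) * a_n -> 0.
for n in [10, 10_000, 10_000_000]:
    print(n, n**(-0.5) * n**0.3)   # shrinks toward 0 as n grows

# The same arithmetic drives W_n = o_p(n**delta) => W_n = o_p(n**gamma) for
# delta < gamma: n**(-gamma) * W_n = n**(delta - gamma) * (n**(-delta) * W_n),
# and the deterministic factor n**(delta - gamma) -> 0.
delta, gamma = 0.3, 0.5
print(10_000_000 ** (delta - gamma))   # the o(1) factor at n = 10,000,000
```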


We have similar definitions for being bounded in probability. If $n^{-\delta} W_n$ is bounded in probability, we write $W_n = O_p(n^{\delta})$, which is read "$W_n$ is of order no greater than $n^{\delta}$ in probability." The definition also applies to nonrandom sequences. The notation $O_p(1)$ again comes with $\delta = 0$. The definitions imply that if $W_n = o_p(n^{\delta})$ then $W_n = O_p(n^{\delta})$. In practice we need relatively little of the $o_p/O_p$ apparatus. It is useful for theoretical work, but we will mainly use $o_p(1)$, $O_p(1)$, $O_p(n^{-1/2})$, and $O_p(n^{1/2})$.
EXAMPLE: Earlier we considered a standardized partial sum. We can apply the $O_p$ apparatus to the partial sum and sample average. Let $\{X_i : i = 1, 2, \ldots\}$ be random variables with finite second moment that are pairwise uncorrelated, and assume $E(X_i) = 0$ and $\operatorname{Var}(X_i) = E(X_i^2) = \sigma^2$. Let $Y_n = \sum_{i=1}^n X_i$ be the partial sum and $\bar{X}_n = Y_n/n$ the sample average. Using Chebyshev's inequality we showed that $n^{-1/2} Y_n = O_p(1)$, which means $Y_n = O_p(n^{1/2})$. We can also write this as $n^{1/2} \bar{X}_n = O_p(1)$, which means $\bar{X}_n = O_p(n^{-1/2})$.
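A small Monte Carlo sketch of the example, assuming for concreteness that the $X_i$ are i.i.d. standard normal (which satisfies the mean-zero, pairwise-uncorrelated assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch (assumed distribution: X_i i.i.d. standard normal, so
# E(X_i) = 0 and sigma**2 = 1): the 95th percentile of |n**(-1/2) * Y_n|
# stays bounded as n grows, which is what Y_n = O_p(n**(1/2)) asserts.
for n in [100, 1_000, 10_000]:
    X = rng.standard_normal((1_000, n))   # 1,000 replications of X_1, ..., X_n
    Yn = X.sum(axis=1)                    # partial sums Y_n
    scaled = np.abs(Yn) / np.sqrt(n)      # |n**(-1/2) * Y_n|
    print(n, np.quantile(scaled, 0.95))   # hovers near the N(0,1) value ~1.96
```

The printed quantiles do not drift upward with $n$, in line with boundedness in probability.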


Note that when $Y_n$ is the partial sum, we can also write $Y_n = o_p(n^{1/2+\delta})$ for any $\delta > 0$; that is, $n^{-(1/2+\delta)} Y_n \xrightarrow{p} 0$. On the other hand, $Y_n$ is not $O_p(n^{1/2-\delta})$ for any $\delta > 0$. In other words, $n^{1/2}$ is the function of $n$ that we must divide by in order to obtain a sequence that is bounded but does not have a variance shrinking to zero. We will make a stronger statement about such sequences in the next section. If $E(X_i) = \mu \neq 0$, we need to use $n^{1/2}(\bar{X}_n - \mu) = n^{-1/2}(Y_n - n\mu)$ as the sequence that is $O_p(1)$.
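A sketch of the centering point, under the assumed setup $X_i \sim$ i.i.d. $N(\mu, 1)$ with $\mu = 2$ (the distribution and value of $\mu$ are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed setup for illustration: X_i i.i.d. N(mu, 1) with mu = 2.
# Then n**(1/2) * Xbar_n diverges (it grows like n**(1/2) * mu), while the
# centered sequence n**(1/2) * (Xbar_n - mu) = n**(-1/2) * (Y_n - n*mu)
# stays O_p(1).
mu = 2.0
for n in [100, 10_000]:
    X = mu + rng.standard_normal((1_000, n))     # 1,000 replications
    xbar = X.mean(axis=1)                        # sample averages
    print(n,
          np.median(np.sqrt(n) * np.abs(xbar)),        # grows with n
          np.median(np.sqrt(n) * np.abs(xbar - mu)))   # stable near 0.67
```

Only the centered-and-scaled sequence stays bounded; the uncentered one blows up at the rate $n^{1/2}\mu$.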
# 6. Convergence in Distribution

Convergence in distribution is important for approximating probabilities involving averages and functions of them. It is very important for approximating the sampling distributions of estimators in a variety of settings (later). Consider the following general problem. We assume $\{X_i : i = 1, 2, \ldots\}$ is a sequence of i.i.d. random variables with finite second moment. Let $\mu = E(X_i)$ and $\sigma^2 = \operatorname{Var}(X_i)$. Let $\bar{X}_n$ be the sample average of the first $n$ variables in the sequence.
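As a preview of what convergence in distribution delivers, here is a simulation sketch assuming $X_i \sim$ i.i.d. Exponential(1), so $\mu = \sigma = 1$ (an assumed, deliberately skewed distribution): the empirical distribution of the standardized average is compared with the standard normal CDF.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

# Simulation sketch (assumed distribution: X_i i.i.d. Exponential(1), so
# mu = 1 and sigma = 1): although X_i is skewed, the standardized average
# n**(1/2) * (Xbar_n - mu) / sigma is approximately N(0,1) for large n.
n, reps = 1_000, 10_000
X = rng.exponential(1.0, size=(reps, n))
Z = np.sqrt(n) * (X.mean(axis=1) - 1.0)    # standardized averages (sigma = 1)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

for x in [-1.0, 0.0, 1.0]:
    print(x, np.mean(Z <= x), Phi(x))      # empirical vs. normal probabilities
```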

• Fall '12
• Jeff
• Probability, Probability theory, Convergence, WLLN
