Lecture 16: Large sample results (Consistency)

Economics 326: Methods of Empirical Research in Economics
Lecture 16: Large sample results: Consistency
Vadim Marmer, University of British Columbia
March 29, 2011

Why we need the large sample theory

- We have shown that the OLS estimator $\hat{\beta}$ has some desirable properties:
  - $\hat{\beta}$ is unbiased if the errors are strongly exogenous: $E(U \mid X) = 0$.
  - If in addition the errors are homoskedastic, then $\widehat{\mathrm{Var}}(\hat{\beta}) = s^2 / \sum_{i=1}^{n} (X_i - \bar{X})^2$ is an unbiased estimator of the conditional variance of the OLS estimator $\hat{\beta}$.
  - If in addition the errors are normally distributed (given $X$), then $T = (\hat{\beta} - \beta) / \sqrt{\widehat{\mathrm{Var}}(\hat{\beta})}$ has a $t$ distribution, which can be used for hypothesis testing (a numerical sketch follows below).
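
A minimal Python sketch of these formulas, assuming a simple regression with an intercept and simulated data; the parameter values, the hypothesized $\beta_0 = 2$, and the $n - 2$ degrees of freedom in $s^2$ are illustrative choices:

```python
import numpy as np

# Simulated simple regression Y_i = alpha + beta * X_i + U_i with
# homoskedastic normal errors (parameter values are illustrative).
rng = np.random.default_rng(0)
n, alpha, beta = 200, 1.0, 2.0
X = rng.normal(size=n)
U = rng.normal(scale=1.5, size=n)
Y = alpha + beta * X + U

# OLS estimates of the slope and intercept.
Xbar, Ybar = X.mean(), Y.mean()
Sxx = np.sum((X - Xbar) ** 2)
beta_hat = np.sum((X - Xbar) * (Y - Ybar)) / Sxx
alpha_hat = Ybar - beta_hat * Xbar

# s^2: residual variance estimate (n - 2 degrees of freedom with an intercept).
resid = Y - alpha_hat - beta_hat * X
s2 = np.sum(resid ** 2) / (n - 2)

# Estimated variance of beta_hat and the t statistic for H0: beta = 2.
var_beta_hat = s2 / Sxx
T = (beta_hat - 2.0) / np.sqrt(var_beta_hat)
print(f"beta_hat = {beta_hat:.3f}, Var_hat(beta_hat) = {var_beta_hat:.5f}, T = {T:.3f}")
```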

Why we need the large sample theory

- If the errors are only weakly exogenous, $E(X_i U_i) = 0$, the OLS estimator is in general biased.
- If the errors are heteroskedastic, $E(U_i^2 \mid X_i) = h(X_i)$, the "usual" variance formula is invalid; we also do not have an unbiased estimator for the variance in this case.
- If the errors are not normally distributed conditional on $X$, then the $T$- and $F$-statistics do not have $t$ and $F$ distributions under the null hypothesis.
- The asymptotic or large sample theory allows us to derive approximate properties and distributions of estimators and test statistics by assuming that the sample size $n$ is very large.

Convergence in probability and LLN

- Let $\theta_n$ be a sequence of random variables indexed by the sample size $n$. We say that $\theta_n$ converges in probability to $\theta$ if $\lim_{n \to \infty} P(|\theta_n - \theta| \geq \varepsilon) = 0$ for all $\varepsilon > 0$.
- We denote this as $\theta_n \to_p \theta$ or $\operatorname{plim} \theta_n = \theta$.
- An example of convergence in probability is a Law of Large Numbers (LLN): Let $X_1, X_2, \ldots, X_n$ be a random sample such that $E(X_i) = \mu$ for all $i = 1, \ldots, n$, and define $\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i$. Then, under certain conditions, $\bar{X}_n \to_p \mu$ (a Monte Carlo sketch of the definition follows below).
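
A minimal Monte Carlo sketch of the definition applied to the sample mean, assuming iid Uniform(0, 1) draws (so $\mu = 0.5$); the choices of $\varepsilon$, the sample sizes, and the number of replications are illustrative. The estimated probability $P(|\bar{X}_n - \mu| \geq \varepsilon)$ should shrink toward zero as $n$ grows:

```python
import numpy as np

# Monte Carlo check of convergence in probability for the sample mean:
# X_i iid Uniform(0, 1), so mu = 0.5. eps, the sample sizes, and the
# number of replications are illustrative choices.
rng = np.random.default_rng(42)
mu, eps, reps = 0.5, 0.05, 2_000

for n in (10, 100, 1_000, 5_000):
    xbars = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)
    prob = np.mean(np.abs(xbars - mu) >= eps)
    print(f"n = {n:>5}: estimated P(|Xbar_n - mu| >= {eps}) = {prob:.4f}")
```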

LLN

- Let $X_1, \ldots, X_n$ be a sample of independent identically distributed (iid) random variables with $E(X_i) = \mu$. If $\mathrm{Var}(X_i) = \sigma^2 < \infty$, then $\bar{X}_n \to_p \mu$ (a simulation sketch follows below).
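
A minimal sketch of this statement, assuming iid exponential draws with finite variance; the distribution, its scale, and the sample sizes are illustrative. The running sample mean $\bar{X}_n$ settles near $\mu$ as $n$ grows:

```python
import numpy as np

# Weak LLN illustration: running sample means of iid draws with finite variance
# settle near mu. Exponential(scale=3) has E(X_i) = 3 and Var(X_i) = 9 < infinity;
# the distribution and sample sizes are illustrative choices.
rng = np.random.default_rng(7)
mu = 3.0
draws = rng.exponential(scale=mu, size=100_000)

running_means = np.cumsum(draws) / np.arange(1, draws.size + 1)
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {n:>6}: Xbar_n = {running_means[n - 1]:.4f}   (mu = {mu})")
```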