$\hat{\theta}_n$ is a smooth function of the estimator $\bar{X}_n$, and so we can use a calculus approximation.
Start with the general scalar case, and let $\gamma = g(\theta)$ for $\theta \in \Theta$. Assume that $\Theta$ is an open set. Further, assume that $g(\cdot)$ is continuously differentiable on $\Theta$, and denote its derivative by $g_1(\cdot)$. We approximate the distribution of $\sqrt{n}(\hat{\gamma}_n - \gamma)$ using linearization. Namely, we will approximate the distribution of $\sqrt{n}(\hat{\gamma}_n - \gamma)$ using the asymptotic distribution of $\sqrt{n}(\hat{\theta}_n - \theta)$.
We need to apply the mean value theorem (MVT) along with the consistency and $\sqrt{n}$-asymptotic normality of $\hat{\theta}_n$. Because $\hat{\theta}_n \xrightarrow{p} \theta$ and $\Theta$ is an open set, $\hat{\theta}_n$ is in an open interval around $\theta$ with probability approaching one (wpa1). We will ignore that nicety and just act as if $\hat{\theta}_n$ is in the interval for $n$ sufficiently large.
We can apply the MVT as follows:
$$g(\hat{\theta}_n) = g(\theta) + g_1(\ddot{\theta}_n)\,(\hat{\theta}_n - \theta),$$
where $\ddot{\theta}_n$ is the mean value, which we know is on the line segment connecting $\theta$ and $\hat{\theta}_n$. Because $\hat{\theta}_n \xrightarrow{p} \theta$, we also know $\ddot{\theta}_n \xrightarrow{p} \theta$ (even though we do not generally know $\ddot{\theta}_n$).
By Slutsky's theorem, because $g_1(\cdot)$ is continuous, $g_1(\ddot{\theta}_n) \xrightarrow{p} g_1(\theta)$. Now we can use standard results from asymptotics:
$$\begin{aligned}
\sqrt{n}\,[g(\hat{\theta}_n) - g(\theta)] &= g_1(\ddot{\theta}_n)\,\sqrt{n}(\hat{\theta}_n - \theta) \\
&= g_1(\theta)\,\sqrt{n}(\hat{\theta}_n - \theta) + [g_1(\ddot{\theta}_n) - g_1(\theta)]\,\sqrt{n}(\hat{\theta}_n - \theta) \\
&= g_1(\theta)\,\sqrt{n}(\hat{\theta}_n - \theta) + o_p(1)\,O_p(1) \\
&= g_1(\theta)\,\sqrt{n}(\hat{\theta}_n - \theta) + o_p(1).
\end{aligned}$$
By the asymptotic equivalence lemma, $\sqrt{n}\,[g(\hat{\theta}_n) - g(\theta)]$ has the same asymptotic distribution as $g_1(\theta)\,\sqrt{n}(\hat{\theta}_n - \theta)$. Let $c$ be the asymptotic variance of $\sqrt{n}(\hat{\theta}_n - \theta)$, that is, $\sqrt{n}(\hat{\theta}_n - \theta) \xrightarrow{d} \mathrm{Normal}(0, c)$. It follows immediately that
$$\sqrt{n}(\hat{\gamma}_n - \gamma) = \sqrt{n}\,[g(\hat{\theta}_n) - g(\theta)] \xrightarrow{d} \mathrm{Normal}\!\big(0,\ [g_1(\theta)]^2\, c\big),$$
that is,
$$\mathrm{Avar}\big[\sqrt{n}(\hat{\gamma}_n - \gamma)\big] = \left[\frac{dg}{d\theta}(\theta)\right]^2 c = \left[\frac{dg}{d\theta}(\theta)\right]^2 \mathrm{Avar}\big[\sqrt{n}(\hat{\theta}_n - \theta)\big].$$
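The limiting-variance formula above can be checked numerically. The following is a minimal sketch (not from the original notes) assuming a hypothetical example: $X_i$ exponential with mean $\theta = 2$, $\hat{\theta}_n = \bar{X}_n$, and $g(\theta) = \log\theta$, so $g_1(\theta) = 1/\theta$, $c = \mathrm{Var}(X_i) = \theta^2$, and the predicted asymptotic variance is $(1/\theta)^2\theta^2 = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: X_i ~ Exponential(mean=theta), theta_hat = sample mean,
# g(theta) = log(theta). Delta method predicts
# Avar[sqrt(n)(g(theta_hat) - g(theta))] = (1/theta)^2 * theta^2 = 1.
theta = 2.0
n = 2_000       # sample size per replication
reps = 2_000    # Monte Carlo replications

xbar = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (np.log(xbar) - np.log(theta))

# The sample variance of z should be close to the delta-method prediction of 1.
print(z.var())
```

The simulated variance approaches 1 as $n$ and the number of replications grow, matching $[g_1(\theta)]^2 c$.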
This approach to deriving the asymptotic variance of smooth functions of an estimator is called the delta method. It has widespread use in applied econometrics. The relationship between asymptotic variances is exactly as if we could compute the finite-sample variances where $\hat{\gamma}_n$ is a linear function of $\hat{\theta}_n$:
$$\hat{\gamma}_n = a + b\,\hat{\theta}_n, \qquad b = \frac{dg}{d\theta}(\theta).$$
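The linear-function analogy can be made concrete: near $\theta$, $g(\hat{\theta}_n)$ is approximately $a + b\,\hat{\theta}_n$ with $b = g_1(\theta)$ and $a = g(\theta) - g_1(\theta)\,\theta$. A minimal sketch, assuming the hypothetical choice $g(\theta) = \log\theta$ at $\theta = 2$:

```python
import numpy as np

# Tangent-line approximation underlying the delta method (hypothetical g):
# g(theta_hat) ~ a + b*theta_hat, with b = g'(theta), a = g(theta) - g'(theta)*theta.
g = np.log
g1 = lambda t: 1.0 / t

theta = 2.0
b = g1(theta)
a = g(theta) - b * theta

# Compare g to its linearization at values of theta_hat near theta.
for theta_hat in (1.9, 2.0, 2.1):
    print(theta_hat, g(theta_hat), a + b * theta_hat)
```

For $\hat{\theta}_n$ close to $\theta$ the two columns nearly coincide, which is why the variance of $\hat{\gamma}_n$ behaves like that of the linear function.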
Assuming $c(\theta)$ is also continuous on $\Theta$, we can consistently estimate the asymptotic variance of $\sqrt{n}(\hat{\gamma}_n - \gamma)$ by the plug-in estimator $[g_1(\hat{\theta}_n)]^2\, \hat{c}$, where $\hat{c}$ is a consistent estimator of $c$.
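The plug-in step can be sketched in code. This is a minimal illustration (not from the original notes) assuming the same hypothetical example, $g(\theta) = \log\theta$ with exponential data, where the sample variance is a consistent estimator of $c = \mathrm{Var}(X_i)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Plug-in estimate of Avar[sqrt(n)(gamma_hat - gamma)] for hypothetical
# g(theta) = log(theta): g'(theta) = 1/theta, theta_hat = sample mean,
# c_hat = sample variance (consistent for c = Var(X_i)).
x = rng.exponential(scale=2.0, size=5_000)
theta_hat = x.mean()
c_hat = x.var(ddof=1)
avar_hat = (1.0 / theta_hat) ** 2 * c_hat   # [g'(theta_hat)]^2 * c_hat

# Standard error for gamma_hat = log(theta_hat) follows by dividing by n.
se = np.sqrt(avar_hat / x.size)
print(avar_hat, se)
```

Here the true asymptotic variance is $1$ (since $c = \theta^2$), so `avar_hat` should be near 1 for large samples; `se` is the usual delta-method standard error for $\hat{\gamma}_n$.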