$\hat\gamma_n$ is a smooth function of the estimator $\bar X_n$, and so we can use a calculus approximation.
Start with the general scalar case, and let $\gamma = g(\theta)$ for $\theta \in \Theta \subseteq \mathbb{R}$. Assume that $\Theta$ is an open set. Further, assume that $g(\cdot)$ is continuously differentiable on $\Theta$, and denote its derivative by $g^{(1)}(\cdot)$. We approximate the distribution of $\sqrt{n}\,(\hat\gamma_n - \gamma)$ using linearization. Namely, we will approximate the distribution of $\sqrt{n}\,(g(\hat\theta_n) - g(\theta))$ using the asymptotic distribution of $\sqrt{n}\,(\hat\theta_n - \theta)$.
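To keep the abstraction grounded, here is one concrete instance of this setup (an illustration added here, not taken from the notes), carried through the steps below:

$$\theta = \mathrm{E}(X) > 0, \qquad \hat\theta_n = \bar X_n, \qquad \gamma = g(\theta) = \log\theta, \qquad g^{(1)}(\theta) = \frac{1}{\theta}.$$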
We need to apply the mean value theorem (MVT) along with the consistency and $\sqrt{n}$-asymptotic normality of $\hat\theta_n$. Because $\hat\theta_n \xrightarrow{p} \theta$ and $\Theta$ is an open set, $\hat\theta_n$ is in an open interval around $\theta$ with probability approaching one (wpa1). We will ignore that nicety and just act as if $\hat\theta_n$ is in the interval for $n$ sufficiently large.
We can apply the MVT as follows:

$$g(\hat\theta_n) = g(\theta) + g^{(1)}(\ddot\theta_n)\,(\hat\theta_n - \theta),$$

where $\ddot\theta_n$ is the mean value, which we know is on the line segment connecting $\theta$ and $\hat\theta_n$. Because $\hat\theta_n \xrightarrow{p} \theta$, we also know $\ddot\theta_n \xrightarrow{p} \theta$ (even though we do not generally know $\ddot\theta_n$).
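In the illustrative $g(\theta) = \log\theta$ example, the MVT expansion reads

$$\log \bar X_n = \log\theta + \frac{1}{\ddot\theta_n}\,(\bar X_n - \theta),$$

with $\ddot\theta_n$ some point between $\theta$ and $\bar X_n$.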
By Slutsky’s theorem, because $g^{(1)}(\cdot)$ is continuous, $g^{(1)}(\ddot\theta_n) \xrightarrow{p} g^{(1)}(\theta)$. Now we can use standard results from asymptotics:

$$
\begin{aligned}
\sqrt{n}\,\big(g(\hat\theta_n) - g(\theta)\big)
&= g^{(1)}(\ddot\theta_n)\,\sqrt{n}\,(\hat\theta_n - \theta) \\
&= g^{(1)}(\theta)\,\sqrt{n}\,(\hat\theta_n - \theta)
  + \big[g^{(1)}(\ddot\theta_n) - g^{(1)}(\theta)\big]\,\sqrt{n}\,(\hat\theta_n - \theta) \\
&= g^{(1)}(\theta)\,\sqrt{n}\,(\hat\theta_n - \theta) + o_p(1)\,O_p(1) \\
&= g^{(1)}(\theta)\,\sqrt{n}\,(\hat\theta_n - \theta) + o_p(1).
\end{aligned}
$$

(The bracketed term is $o_p(1)$ because $g^{(1)}(\ddot\theta_n) - g^{(1)}(\theta) \xrightarrow{p} 0$, while $\sqrt{n}\,(\hat\theta_n - \theta)$ is $O_p(1)$ since it converges in distribution; the product of an $o_p(1)$ and an $O_p(1)$ term is $o_p(1)$.)
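Continuing the log example: $g^{(1)}(\ddot\theta_n) = 1/\ddot\theta_n \xrightarrow{p} 1/\theta$, so the leading term of the expansion is $(1/\theta)\,\sqrt{n}\,(\bar X_n - \theta)$.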
By the asymptotic equivalence lemma, $\sqrt{n}\,(g(\hat\theta_n) - g(\theta))$ has the same asymptotic distribution as $g^{(1)}(\theta)\,\sqrt{n}\,(\hat\theta_n - \theta)$. Let $c(\theta)$ be the asymptotic variance of $\sqrt{n}\,(\hat\theta_n - \theta)$, that is, $\sqrt{n}\,(\hat\theta_n - \theta) \xrightarrow{d} \mathrm{Normal}(0, c(\theta))$. It follows immediately that

$$\sqrt{n}\,(\hat\gamma_n - \gamma) = \sqrt{n}\,\big(g(\hat\theta_n) - g(\theta)\big) \xrightarrow{d} \mathrm{Normal}\!\big(0,\, [g^{(1)}(\theta)]^2\, c(\theta)\big),$$

that is,

$$\mathrm{Avar}\big[\sqrt{n}\,(\hat\gamma_n - \gamma)\big] = \left(\frac{dg}{d\theta}\right)^{\!2} c(\theta) = \left(\frac{dg}{d\theta}\right)^{\!2} \mathrm{Avar}\big[\sqrt{n}\,(\hat\theta_n - \theta)\big].$$
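In the log example, $c(\theta) = \mathrm{Var}(X) = \sigma^2$ (by the central limit theorem for $\bar X_n$), so the result specializes to

$$\sqrt{n}\,(\log\bar X_n - \log\theta) \xrightarrow{d} \mathrm{Normal}\!\big(0,\, \sigma^2/\theta^2\big).$$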
This approach to deriving the asymptotic variance of smooth functions of an estimator is called the delta method. It has widespread use in applied econometrics. The relationship between asymptotic variances is exactly as if we could compute the finite-sample variances with $\hat\gamma_n$ a linear function of $\hat\theta_n$:

$$\hat\gamma_n = a\,\hat\theta_n, \qquad a = \frac{dg}{d\theta}(\theta),$$

so that $\mathrm{Var}(\hat\gamma_n) = a^2\,\mathrm{Var}(\hat\theta_n)$.
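A minimal simulation sketch of this result in Python, assuming the illustrative $g(\theta) = \log\theta$ example from above (the parameter values, sample sizes, and variable names are all made up for the illustration, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 10_000
theta, sigma = 2.0, 1.5   # population mean and standard deviation of X

# Each row is one sample of size n; xbar holds the reps sample means.
xbar = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)

# Monte Carlo draws of sqrt(n) * (g(theta_hat) - g(theta)) for g = log.
root_n_err = np.sqrt(n) * (np.log(xbar) - np.log(theta))

mc_var = root_n_err.var()                   # simulated asymptotic variance
delta_var = (1.0 / theta) ** 2 * sigma**2   # [g'(theta)]^2 * c(theta)
print(f"Monte Carlo: {mc_var:.4f}   delta method: {delta_var:.4f}")
```

With these values the delta-method variance is $\sigma^2/\theta^2 = 0.5625$, and the Monte Carlo estimate should land close to it.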
Assuming $c(\cdot)$ is also continuous on $\Theta$, we can consistently estimate the asymptotic variance of $\sqrt{n}\,(\hat\gamma_n - \gamma)$ by the plug-in estimator $\big[g^{(1)}(\hat\theta_n)\big]^2\, c(\hat\theta_n)$, since $g^{(1)}(\hat\theta_n) \xrightarrow{p} g^{(1)}(\theta)$ and $c(\hat\theta_n) \xrightarrow{p} c(\theta)$.
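A sketch of the plug-in calculation, again for the hypothetical log example, with $c(\theta) = \mathrm{Var}(X)$ estimated by the sample variance (`delta_method_se` is an illustrative helper, not from the notes):

```python
import numpy as np

def delta_method_se(x):
    """Delta-method standard error of log(mean(x)), assuming mean(x) > 0.

    Plug-in estimate: Avar[sqrt(n)(log Xbar_n - log theta)] is estimated by
    (1 / Xbar_n)^2 * s^2, so se(log Xbar_n) = (1 / Xbar_n) * s / sqrt(n).
    """
    n = x.size
    theta_hat = x.mean()           # estimate of theta = E(X)
    c_hat = x.var(ddof=1)          # sample variance as estimate of c(theta)
    g1_hat = 1.0 / theta_hat       # g'(theta_hat) for g = log
    return abs(g1_hat) * np.sqrt(c_hat / n)

x = np.random.default_rng(1).normal(2.0, 1.5, size=500)
print(delta_method_se(x))
```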