Efficient Minimum Distance Estimation with Multiple Rates of Convergence*

Bertille Antoine and Eric Renault

February 19, 2008

Abstract: This paper extends the asymptotic theory of GMM inference to allow sample counterparts of the estimating equations to converge at (multiple) rates different from the usual square root of the sample size. In this setting, we provide consistent estimation of the structural parameters. In addition, we define a convenient rotation in the parameter space (or reparametrization) that disentangles the different rates of convergence. More precisely, we identify special linear combinations of the structural parameters associated with a specific rate of convergence. Finally, we demonstrate the validity of the usual inference procedures, such as the overidentification test and the Wald test, with standard formulas. It is important to stress that both estimation and testing work without requiring knowledge of the various rates. However, the assessment of these rates is crucial for (asymptotic) power considerations. Possible applications include econometric problems with two dimensions of asymptotics, due to trimming, tail estimation, infill asymptotics, social interactions, kernel smoothing, or any kind of regularization.

JEL classification: C32; C12; C13; C51.

Keywords: GMM; Mixed-rates asymptotics; Set estimation; Control variables; Rotation in the coordinate system.

*We would like to thank M. Carrasco, A. Guay, J. Jacod, Y. Kitamura, P. Lavergne, L. Magee, A. Shaikh, V. Zinde-Walsh, and seminar participants at the University of British Columbia, the University of Montreal, and Yale University for helpful discussions.

Simon Fraser University. Email: email@example.com.

University of North Carolina at Chapel Hill, CIRANO and CIREQ.
Email: firstname.lastname@example.org

1 Introduction

The cornerstone of GMM asymptotic distribution theory is that when an estimator $\hat{\theta}_T$ of some vector of parameters $\theta$ (with $\theta^0$ as its true unknown value) is defined through a minimum distance problem:
$$\hat{\theta}_T = \arg\min_{\theta} \; m_T'(\theta)\,\Omega_T\, m_T(\theta), \qquad (1.1)$$
$\sqrt{T}\,(\hat{\theta}_T - \theta^0)$ inherits the asymptotic normality of $\sqrt{T}\, m_T(\theta^0)$ by a first-order expansion argument:
$$\sqrt{T}\,(\hat{\theta}_T - \theta^0) = -\left[\frac{\partial m_T'(\theta^0)}{\partial\theta}\,\Omega_T\,\frac{\partial m_T(\theta^0)}{\partial\theta'}\right]^{-1}\frac{\partial m_T'(\theta^0)}{\partial\theta}\,\Omega_T\,\sqrt{T}\, m_T(\theta^0) + o_P(1), \qquad (1.2)$$
while
$$\operatorname{Plim}_T\,[m_T(\theta)] = 0 \iff \theta = \theta^0. \qquad (1.3)$$
It turns out that, for many reasons (see Section 2 for a list of examples and a review of the literature), including local smoothing, trimming, infill asymptotics, or any kind of non-root-$T$ asymptotics, the asymptotic normality of $m_T(\theta^0)$ may come at a non-standard rate of convergence: $T^{\lambda} m_T(\theta^0)$ is asymptotically a non-degenerate Gaussian variable for some $\lambda \neq 1/2$. This does not invalidate the first-order expansion argument (1.2), once one realizes that $T^{\lambda}(\hat{\theta}_T - \theta^0)$ is asymptotically equivalent to $T^{\lambda} m_T(\theta^0)$. For instance, Robert (2006) has....
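To make the minimum distance problem in (1.1) concrete, here is a minimal sketch in Python (not from the paper): it estimates the mean and variance of a Gaussian sample by minimizing a quadratic form in the sample moment conditions, using an identity weighting matrix $\Omega_T = I$. The specific moment conditions, parameter values, and function names are illustrative assumptions, not part of the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data with (hypothetical) true parameters theta0 = (mu0, sigma2_0).
rng = np.random.default_rng(0)
T = 5000
mu0, sigma2_0 = 2.0, 2.25
x = rng.normal(mu0, np.sqrt(sigma2_0), size=T)

def m_T(theta):
    """Sample counterpart of the estimating equations: the mean of g(x_t, theta)."""
    mu, sigma2 = theta
    g = np.column_stack([x - mu, (x - mu) ** 2 - sigma2])
    return g.mean(axis=0)

def objective(theta):
    """Quadratic form m_T(theta)' Omega m_T(theta) with Omega = identity."""
    m = m_T(theta)
    return m @ m

# Minimum distance (GMM) estimator, as in (1.1).
res = minimize(objective, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
theta_hat = res.x
print(theta_hat)  # close to (mu0, sigma2_0), converging at the usual sqrt(T) rate
```

In this standard exactly-identified case both components of $m_T(\theta^0)$ obey a root-$T$ central limit theorem; the paper's contribution concerns settings where different linear combinations of the moments converge at different, possibly slower, rates.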