ECON 5350 Class Notes — Nonlinear Regression Models

1 Introduction

In this chapter, we examine regression models that are nonlinear in the parameters and give a brief overview of methods to estimate such models.

2 Nonlinear Regression Models

The general form of the nonlinear regression model is

    y_i = h(x_i; β; ε_i),    (1)

which is more commonly written in a form with an additive error term,

    y_i = h(x_i; β) + ε_i.    (2)

Below are two examples.

1. h(x_i; β; ε_i) = β_0 x_{1i}^{β_1} x_{2i}^{β_2} exp(ε_i). This is an intrinsically linear model because by taking natural logarithms, we get a model that is linear in the parameters,

    ln(y_i) = ln(β_0) + β_1 ln(x_{1i}) + β_2 ln(x_{2i}) + ε_i.

This can be estimated with standard linear procedures such as OLS.

2. h(x_i; β) = β_0 x_{1i}^{β_1} x_{2i}^{β_2}. Since the error term in (2) is additive, there is no transformation that will produce a linear model. This is an intrinsically nonlinear model (i.e., the relevant first-order conditions are nonlinear in the parameters). Below we consider two methods for estimating such a model: linearizing the underlying regression model, and nonlinear optimization of the objective function.

2.1 Linearized Regression Model and the Gauss-Newton Algorithm

Consider a first-order Taylor series approximation of the regression model around β⁰:

    y_i = h(x_i; β) + ε_i ≈ h(x_i; β⁰) + g(x_i; β⁰)(β − β⁰) + ε_i,

where

    g(x_i; β⁰) = (∂h/∂β_1 |_{β=β⁰}, ..., ∂h/∂β_k |_{β=β⁰}).

Collecting terms and rearranging gives

    Y⁰ = X⁰ β + ε⁰,

where

    Y⁰ ≡ Y − h(X; β⁰) + g(X; β⁰) β⁰    and    X⁰ ≡ g(X; β⁰).

The matrix X⁰ is called the pseudoregressor matrix. Note also that ε⁰ will include higher-order approximation errors.
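Before turning to the iterative procedure itself, Example 1's intrinsically linear case can be checked numerically. The sketch below simulates data from the multiplicative model, takes logs, and runs OLS; the data, parameter values, and variable names are illustrative assumptions, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.uniform(1.0, 5.0, n)
x2 = rng.uniform(1.0, 5.0, n)

# Hypothetical true parameters for the multiplicative model of Example 1:
# y = beta0 * x1^beta1 * x2^beta2 * exp(eps)
beta0, beta1, beta2 = 2.0, 0.7, 0.3
y = beta0 * x1**beta1 * x2**beta2 * np.exp(rng.normal(0.0, 0.1, n))

# Taking logs yields a model linear in the parameters:
# ln y = ln(beta0) + beta1 ln(x1) + beta2 ln(x2) + eps,
# so standard OLS applies.
X = np.column_stack([np.ones(n), np.log(x1), np.log(x2)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)

b0_hat = np.exp(coef[0])  # undo the log to recover beta0 itself
print(b0_hat, coef[1], coef[2])
```

The intercept estimates ln(β_0), so the level parameter is recovered by exponentiating; the slope coefficients estimate β_1 and β_2 directly.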
2.1.1 Gauss-Newton Algorithm

Given an initial value for β, we can estimate β with the following iterative LS procedure:

    b_{t+1} = [X⁰(b_t)′ X⁰(b_t)]⁻¹ [X⁰(b_t)′ Y⁰(b_t)]
            = [X⁰(b_t)′ X⁰(b_t)]⁻¹ [X⁰(b_t)′ (X⁰(b_t) b_t + e_t)]
            = b_t + [X⁰(b_t)′ X⁰(b_t)]⁻¹ X⁰(b_t)′ e_t
            = b_t + W_t λ_t g_t,

where W_t = [2 X⁰(b_t)′ X⁰(b_t)]⁻¹, λ_t = 1, and g_t = 2 X⁰(b_t)′ e_t. The iterations continue until the difference between b_{t+1} and b_t is sufficiently small. This is called the Gauss-Newton algorithm. Interpretations for W_t, λ_t, and g_t will be given below.

A consistent estimator of σ² is

    s² = (1/(n − k)) Σ_{i=1}^{n} (y_i − h(x_i; b))².

2.1.2 Properties of the NLS Estimator

Only asymptotic results are available for this estimator. Assuming that the pseudoregressors are well-behaved (i.e., plim (1/n) X⁰′ X⁰ = Q⁰, a finite positive definite matrix), we can apply the CLT to show that

    b ~asy N[β, (σ²/n)(Q⁰)⁻¹],

where (σ²/n)(Q⁰)⁻¹ is estimated by s²(X⁰′ X⁰)⁻¹.

2.1.3 Notes

1. Depending on the initial value, b_0, the Gauss-Newton algorithm can lead to a local (as opposed to global) minimum or head off on a divergent path.
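The iteration above can be sketched for the intrinsically nonlinear model of Example 2, building the pseudoregressor matrix X⁰(b_t) from the analytic partial derivatives of h. This is a minimal illustration with simulated data; the parameter values, tolerance, and starting point are assumptions, and the starting point is deliberately chosen near the truth since (per note 1 above) a poor initial value can send the algorithm to a local minimum or onto a divergent path.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.uniform(1.0, 5.0, n)
x2 = rng.uniform(1.0, 5.0, n)

# Hypothetical true parameters; additive error makes this intrinsically nonlinear:
# y = beta0 * x1^beta1 * x2^beta2 + eps
beta_true = np.array([2.0, 0.7, 0.3])
y = beta_true[0] * x1**beta_true[1] * x2**beta_true[2] + rng.normal(0.0, 0.1, n)

def h(b):
    return b[0] * x1**b[1] * x2**b[2]

def pseudoregressors(b):
    # g(x_i; b): columns are dh/db0, dh/db1, dh/db2 evaluated at b
    hb = h(b)
    return np.column_stack([hb / b[0], hb * np.log(x1), hb * np.log(x2)])

b = np.array([1.8, 0.6, 0.4])  # assumed starting value near the truth
for _ in range(100):
    e = y - h(b)                 # residuals e_t
    X0 = pseudoregressors(b)     # X^0(b_t)
    # Gauss-Newton step [X0'X0]^{-1} X0'e_t, computed via least squares
    step, *_ = np.linalg.lstsq(X0, e, rcond=None)
    b = b + step
    if np.max(np.abs(step)) < 1e-8:  # stop when b_{t+1} - b_t is small
        break

s2 = np.sum((y - h(b))**2) / (n - 3)  # s^2 with k = 3 parameters
print(b, s2)
```

Solving the least-squares problem X⁰ step ≈ e_t is numerically equivalent to forming [X⁰′X⁰]⁻¹ X⁰′ e_t but avoids explicitly inverting the cross-product matrix; the asymptotic covariance estimate s²(X⁰′X⁰)⁻¹ can then be computed from the final X⁰.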
This note was uploaded on 11/27/2011 for the course ECON 101 taught by Professor Robert during the Fall '08 term at Montgomery College.