ECON 5350 Class Notes: Nonlinear Regression Models

1 Introduction

In this chapter, we examine regression models that are nonlinear in the parameters and give a brief overview of methods to estimate such models.

2 Nonlinear Regression Models

The general form of the nonlinear regression model is
$$y_i = h(x_i, \beta, \epsilon_i), \qquad (1)$$
which is more commonly written in a form with an additive error term
$$y_i = h(x_i, \beta) + \epsilon_i. \qquad (2)$$
Below are two examples.

1. $h(x_i, \beta, \epsilon_i) = \beta_0 x_{1i}^{\beta_1} x_{2i}^{\beta_2} \exp(\epsilon_i)$. This is an intrinsically linear model because taking natural logarithms gives a model that is linear in the parameters, $\ln(y_i) = \ln(\beta_0) + \beta_1 \ln(x_{1i}) + \beta_2 \ln(x_{2i}) + \epsilon_i$. This can be estimated with standard linear procedures such as OLS (see the numerical sketch at the end of this section).

2. $h(x_i, \beta) = \beta_0 x_{1i}^{\beta_1} x_{2i}^{\beta_2}$. Since the error term in (2) is additive, there is no transformation that will produce a linear model. This is an intrinsically nonlinear model (i.e., the relevant first-order conditions are nonlinear in the parameters).

Below we consider two methods for estimating such a model: linearizing the underlying regression model and nonlinear optimization of the objective function.
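Before turning to those methods, here is a minimal numerical sketch of the intrinsically linear case in Example 1. The data-generating process, parameter values, and sample size below are illustrative assumptions, not part of the notes; the point is only that OLS on the logged variables recovers the parameters.

```python
import numpy as np

# Example 1 sketch: y = b0 * x1^b1 * x2^b2 * exp(e) is intrinsically linear,
# so after taking logs it can be estimated by OLS.
# The data below are simulated purely for illustration.
rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(1.0, 5.0, n)
x2 = rng.uniform(1.0, 5.0, n)
b0, b1, b2 = 2.0, 0.7, 0.3                      # illustrative "true" values
y = b0 * x1**b1 * x2**b2 * np.exp(rng.normal(0.0, 0.1, n))

# ln(y) = ln(b0) + b1*ln(x1) + b2*ln(x2) + e  -> linear in the parameters
X = np.column_stack([np.ones(n), np.log(x1), np.log(x2)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
print("estimates of ln(b0), b1, b2:", coef)
print("implied b0:", np.exp(coef[0]))
```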
2.1 Linearized Regression Model and the Gauss-Newton Algorithm

Consider a first-order Taylor series approximation of the regression model around $\beta^0$:
$$y_i = h(x_i, \beta) + \epsilon_i \approx h(x_i, \beta^0) + g(x_i, \beta^0)(\beta - \beta^0) + \epsilon_i,$$
where
$$g(x_i, \beta^0) = \left( \partial h/\partial \beta_1 \big|_{\beta = \beta^0}, \ldots, \partial h/\partial \beta_k \big|_{\beta = \beta^0} \right).$$
Collecting terms and rearranging gives
$$Y^0 = X^0 \beta + \epsilon^0,$$
where
$$Y^0 \equiv Y - h(X, \beta^0) + g(X, \beta^0)\beta^0, \qquad X^0 \equiv g(X, \beta^0).$$
The matrix $X^0$ is called the pseudoregressor matrix. Note also that $\epsilon^0$ will include higher-order approximation errors.

2.1.1 Gauss-Newton Algorithm

Given an initial value $b_0$, we can estimate $\beta$ with the following iterative LS procedure:
$$
\begin{aligned}
b_{t+1} &= [X^0(b_t)' X^0(b_t)]^{-1} [X^0(b_t)' Y^0(b_t)] \\
        &= [X^0(b_t)' X^0(b_t)]^{-1} [X^0(b_t)' (X^0(b_t) b_t + e^0_t)] \\
        &= b_t + [X^0(b_t)' X^0(b_t)]^{-1} X^0(b_t)' e^0_t \\
        &= b_t + W_t \lambda_t g_t,
\end{aligned}
$$
where $W_t = [2 X^0(b_t)' X^0(b_t)]^{-1}$, $\lambda_t = 1$, and $g_t = 2 X^0(b_t)' e^0_t$. The iterations continue until the difference between $b_{t+1}$ and $b_t$ is sufficiently small. This is called the Gauss-Newton algorithm. Interpretations for $W_t$, $\lambda_t$, and $g_t$ will be given below. A consistent estimator of $\sigma^2$ is
$$s^2 = \frac{1}{n-k} \sum_{i=1}^n \left( y_i - h(x_i, b) \right)^2.$$

2.1.2 Properties of the NLS Estimator

Only asymptotic results are available for this estimator. Assuming that the pseudoregressors are well behaved (i.e., $\mathrm{plim}\, \tfrac{1}{n} X^{0\prime} X^0 = Q^0$, a finite positive definite matrix), we can apply the CLT to show that
$$b \overset{asy}{\sim} N\!\left[ \beta, \tfrac{\sigma^2}{n} (Q^0)^{-1} \right],$$
where $\tfrac{\sigma^2}{n}(Q^0)^{-1}$ is estimated by $s^2 (X^{0\prime} X^0)^{-1}$.
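As a concrete illustration of the iteration above, here is a sketch of Gauss-Newton applied to the intrinsically nonlinear model of Example 2, $h(x_i, b) = b_0 x_{1i}^{b_1} x_{2i}^{b_2}$ with an additive error. The simulated data, the starting value (taken from a log-linear OLS fit), and the convergence tolerance are illustrative assumptions; the pseudoregressors are the analytic derivatives of $h$.

```python
import numpy as np

# Gauss-Newton sketch for Example 2: y = b0 * x1^b1 * x2^b2 + e.
# Simulated data, starting values, and tolerances are illustrative only.
rng = np.random.default_rng(1)
n = 500
x1 = rng.uniform(1.0, 5.0, n)
x2 = rng.uniform(1.0, 5.0, n)
beta_true = np.array([2.0, 0.7, 0.3])
y = beta_true[0] * x1**beta_true[1] * x2**beta_true[2] + rng.normal(0.0, 0.3, n)

def h(b):
    return b[0] * x1**b[1] * x2**b[2]

def pseudoregressors(b):
    # X0 = g(x_i, b): columns are dh/db0, dh/db1, dh/db2 at the current b
    hb = h(b)
    return np.column_stack([hb / b[0], hb * np.log(x1), hb * np.log(x2)])

# Starting value b_0 from a log-linearized OLS fit (as in the earlier sketch)
Z = np.column_stack([np.ones(n), np.log(x1), np.log(x2)])
c, *_ = np.linalg.lstsq(Z, np.log(np.maximum(y, 1e-8)), rcond=None)
b = np.array([np.exp(c[0]), c[1], c[2]])

for _ in range(100):
    e0 = y - h(b)                                   # current residuals e0_t
    X0 = pseudoregressors(b)
    step = np.linalg.solve(X0.T @ X0, X0.T @ e0)    # [X0'X0]^{-1} X0'e0
    b = b + step
    if np.max(np.abs(step)) < 1e-8:                 # stop when b_{t+1} - b_t is small
        break

k = b.size
s2 = np.sum((y - h(b))**2) / (n - k)                # s^2, estimator of sigma^2
cov = s2 * np.linalg.inv(X0.T @ X0)                 # estimate of (sigma^2/n)(Q0)^{-1}
print("NLS estimates:", b)
print("asymptotic standard errors:", np.sqrt(np.diag(cov)))
```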
2.1.3 Notes

1. Depending on the initial value, $b_0$, the Gauss-Newton algorithm can lead to a local (as opposed to global) minimum or head off on a divergent path (the sketch below illustrates this sensitivity to starting values).
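To illustrate this point, here is a small self-contained sketch using a different, deliberately awkward model, $y_i = \cos(\beta x_i) + \epsilon_i$, whose sum-of-squares surface has many local minima in $\beta$. The model, data, and starting values are illustrative assumptions, not part of the notes; the estimates Gauss-Newton reports depend on where the iteration starts.

```python
import numpy as np

# Starting-value sensitivity: for y = cos(b*x) + e the sum-of-squares surface
# has many local minima in b, so Gauss-Newton can settle on different answers
# (or wander) depending on b_0.  Model, data, and starts are illustrative only.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)
y = np.cos(2.5 * x) + rng.normal(0.0, 0.1, x.size)  # "true" b = 2.5

def gauss_newton(b, max_iter=200, tol=1e-10):
    for _ in range(max_iter):
        e = y - np.cos(b * x)          # current residuals
        g = -x * np.sin(b * x)         # pseudoregressor dh/db (scalar parameter)
        step = (g @ e) / (g @ g)       # [X0'X0]^{-1} X0'e0 in one dimension
        b += step
        if abs(step) < tol:
            break
    return b, np.sum((y - np.cos(b * x)) ** 2)

for start in (0.5, 2.0, 2.4):
    b_hat, ssr = gauss_newton(start)
    print(f"start {start}: b_hat = {b_hat:.3f}, SSR = {ssr:.2f}")
```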