Econ 513, Fall 2004, USC Department of Economics

Maximum Likelihood Estimation: Computational Issues

How do we compute the MLE? A number of numerical methods exist for this type of problem. (For ease of comparison with later optimization problems and the material in the reader, we reformulate this as minimizing minus the log likelihood function; this obviously does not affect the substance of the problem.)

One leading method is Newton-Raphson. The idea is to approximate the objective function Q(θ) = −L(θ) around some starting value θ by a quadratic function and find the exact minimum of that quadratic approximation. We use

    0 = ∇Q(θ_min) ≈ ∇Q(θ) + ∇²Q(θ)(θ_min − θ).

Then we redo the quadratic approximation around the minimum of the initial quadratic approximation and find the new minimum. Doing this repeatedly, the solution converges to the minimum of the objective function. Formally, given a starting value θ₀, define iteratively

    θ_{k+1} = θ_k − [∇²Q(θ_k)]⁻¹ ∇Q(θ_k).

In the case of an exponential duration model the matrix of second derivatives is

    ∇²Q(θ) = Σ_{i=1}^{N} y_i x_i x_i' exp(x_i'θ),

which is positive definite if Σ_i x_i x_i' is positive definite. Hence the objective function is globally convex, and if there is a solution to the first-order conditions, it is the unique MLE. In this application the Newton-Raphson algorithm works very well.

Another class of algorithms does not require the calculation of the second derivatives. Most of these methods separate out the choice of direction and the choice of steplength. Let A_k be any positive definite matrix and λ_k > 0 a steplength, and consider iterations of the type

    θ_{k+1} = θ_k − λ_k A_k ∇Q(θ_k).

The choice λ_k = 1 and A_k = [∇²Q(θ_k)]⁻¹ corresponds to Newton-Raphson.
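The Newton-Raphson iteration above can be sketched in code for the exponential duration model. This is a minimal illustration, not part of the original notes: the simulated data, the function name, and the convergence tolerances are all assumptions. The gradient and Hessian follow from Q(θ) = Σ_i [y_i exp(x_i'θ) − x_i'θ], the negative log likelihood for durations with hazard exp(x_i'θ).

```python
# Illustrative sketch: Newton-Raphson for the exponential duration model,
# minimizing Q(theta) = sum_i [ y_i exp(x_i'theta) - x_i'theta ].
# Data-generating choices below are hypothetical, for demonstration only.
import numpy as np

def newton_raphson_exponential(y, X, theta0=None, tol=1e-10, max_iter=100):
    """Minimize minus the exponential-duration log likelihood by Newton-Raphson."""
    n, k = X.shape
    theta = np.zeros(k) if theta0 is None else np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        w = y * np.exp(X @ theta)          # w_i = y_i exp(x_i'theta)
        grad = X.T @ (w - 1.0)             # gradient of Q
        hess = X.T @ (w[:, None] * X)      # Hessian of Q: sum_i w_i x_i x_i'
        step = np.linalg.solve(hess, grad) # [Hessian]^{-1} gradient
        theta = theta - step               # Newton-Raphson update
        if np.max(np.abs(step)) < tol:     # stop when the update is negligible
            break
    return theta

# Simulated example (assumed setup): hazard exp(x_i'theta), so the mean
# duration is exp(-x_i'theta).
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
theta_true = np.array([0.5, -1.0])
y = rng.exponential(scale=np.exp(-X @ theta_true))

theta_hat = newton_raphson_exponential(y, X)
```

Because the Hessian here is positive definite whenever Σ_i x_i x_i' is, the iteration converges quickly from essentially any starting value, which is the "works very well" claim in the notes. Replacing `np.linalg.solve(hess, grad)` with `lam * A @ grad` for some fixed positive definite `A` and steplength `lam` gives the more general direction/steplength scheme described at the end of the section.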