Lecture XIV: Practical Optimization Problems and Computer Code

I. Unconstrained Optimization

A. Precision Agriculture: Using Mathematica to solve optimization problems.

B. Estimation of the Inverse Hyperbolic Sine Transformation: Using Gauss to optimize likelihood functions.

1. Most courses in statistical inference present the concept of maximum likelihood. Specifically, given a sample of data points, we can write the likelihood function of a set of parameters as:

   L(\mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ -\frac{(x_i - \mu)^2}{2\sigma^2} \right]

Taking the natural log of this expression yields

   \ln(L) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2

Taking the derivative with respect to \mu yields

   \frac{\partial \ln(L)}{\partial \mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i - \mu) = 0
   \quad\Rightarrow\quad \sum_{i=1}^{n} x_i - n\mu = 0
   \quad\Rightarrow\quad \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}

Thus, we derive the maximum likelihood estimate of the mean. (The analogous derivative with respect to \sigma^2 yields \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \hat{\mu})^2.) In this case the maximum likelihood estimator is "closed form."

2. Another potential likelihood function involves the inverse hyperbolic sine transformation to normality. This distribution allows variables to be both skewed and kurtotic. Ramirez, Moss, and Boggess estimated corn, soybean, and wheat yields over time using this transformation. The log-likelihood function for this distribution with a linear time trend can be written as:

   \ln(L) = -\frac{1}{2}\sum_{i=1}^{n}\ln\left(1 + \theta^2 v_i^2\right) - \frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(z_i - \mu)^2

where

   v_i = y_i - \alpha_0 - \alpha_1 t_i, \qquad z_i = \frac{1}{\theta}\ln\left[\theta v_i + \left(\theta^2 v_i^2 + 1\right)^{1/2}\right]

The parameters to be estimated, or maximized with respect to, are \alpha_0, \alpha_1, \theta, \mu, and \sigma. Guess what: there exists no closed-form solution to these first-order conditions.

3. Several numerical solvers are available for this problem, but I want to outline the solution in Gauss.
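Before turning to the mechanics, here is a minimal sketch (my addition, not part of the original notes) of how the negative of this log-likelihood could be coded as a Gauss procedure suitable for a minimizer such as optmum. It assumes the data vectors y (yields) and t (time trend) are loaded as globals, and that the parameter vector is ordered b = (alpha0, alpha1, theta, mu, sigma); the procedure name ihsll is hypothetical.

   /* Hypothetical sketch: negative log-likelihood for the inverse
      hyperbolic sine model. Assumes globals y and t are loaded and
      b = (alpha0, alpha1, theta, mu, sigma). */
   proc ihsll(b);
       local v, z;
       v = y - b[1] - b[2].*t;                            /* v_i = y_i - a0 - a1*t_i */
       z = ln(b[3].*v + sqrt((b[3]^2).*(v.^2) + 1))./b[3]; /* z_i = asinh(theta*v_i)/theta */
       /* return -ln(L), since the minimizer needs the negative */
       retp( (rows(y)/2)*ln(2*pi*b[5]^2)
             + sumc((z - b[4]).^2)/(2*b[5]^2)
             + 0.5*sumc(ln(1 + (b[3]^2).*(v.^2))) );
   endp;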
4. Gauss's optimization routine (in Gauss 3.x) is a procedure called optmum. To use optmum you must:
   a. Load the optmum library.
   b. Create a procedure that uses the parameters to be estimated to calculate the value of the objective function (the log-likelihood function).
      i. You may also want to specify the gradients and the Hessian matrix of the objective function.
      ii. Note that optmum minimizes the specified objective function. Therefore, you want to specify the negative of the likelihood function.
   c. Initialize the algorithm:
      i. Specify the starting values of the choice variables.
      ii. Choose an optimization algorithm.
   d. Call the optimization function.

5. An example: Instead of focusing on the more complex example, I want to discuss the estimation of the mean and variance.

   library optmum;
   #include optmum.ext;
   opset;
   load x[20,1] = indta.dta;

   proc ml(b);
       local err;
       err = x - b[1];               /* deviations from the candidate mean */
       /* twice the negative log-likelihood, dropping the constant
          n*ln(2*pi); the minimizer is unchanged */
       retp( rows(err)*ln(b[2]) + sumc(err.^2./b[2]) );
   endp;

   b0 = { 10, 1.5 };                 /* starting values: mean, variance */
   /* optmum returns {estimates, function value, gradient, return code} */
   { bhat, fhat, ghat, retcode } = optprt(optmum(&ml,b0));

Some notes:
a. First, note that b is a Gauss vector with b[1] containing the value of the mean and b[2] containing the value of the variance.
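As a quick sanity check (a sketch I am adding; not part of the original notes), the estimates returned by optmum can be compared against the closed-form solutions derived in part 1:

   /* Closed-form MLEs from part 1; bhat from optmum should be close. */
   mu_ml   = meanc(x);                       /* sample mean = MLE of mu */
   sig2_ml = sumc((x - mu_ml).^2)/rows(x);   /* ML variance (divides by n, not n-1) */
   print "closed form:" mu_ml~sig2_ml;
   print "optmum:     " bhat';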