The other parameters epsf and epsx are not used. The termination condition is not based on the gradient, contrary to what the name epsg suggests. The following is a list of the termination conditions which are taken into account in the source code.

• The number of iterations is greater than the maximum.

  if (itr.gt.niter) go to 250

• The number of function evaluations is greater than the maximum.

  if (nfun.ge.nsim) go to 250

• The directional derivative is positive, so that the direction d is not a descent direction for f.

  if (dga.ge.0.0d+0) go to 240

• The cost function sets the indic flag (the ind parameter) to 0, indicating that the optimization must terminate.

  call simul (indic,n,xb,fb,gb,izs,rzs,dzs)
  [...]
  go to 250
• The cost function sets the indic flag to a negative value, indicating that the function cannot be evaluated for the given x. The step is reduced by a factor 10, but gets below a limit, so that the algorithm terminates. (A sketch of this ind protocol, seen from the cost function's side, is given after this list.)

  call simul (indic,n,xb,fb,gb,izs,rzs,dzs)
  [...]
  step=step/10.0d+0
  [...]
  if (stepbd.gt.steplb) goto 170
  [...]
  go to 250

• The Armijo condition is not satisfied and the step size is below a limit during the line search.

  if (fb-fa.le.0.10d+0*c*dga) go to 280
  [...]
  if (step.gt.steplb) go to 270

• During the line search, a cubic interpolation is computed and the computed minimum is associated with a zero step length.

  if(c.eq.0.0d+0) goto 250

• During the line search, the step length is smaller than a computed limit.

  if (stmin+step.le.steplb) go to 240

• The rank of the approximated Hessian matrix is smaller than n after the update of the Cholesky factors.

  if (ir.lt.n) go to 250
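To make the ind protocol more concrete, the following is a minimal sketch of a cost function which triggers the two ind-based conditions above. It is illustrative only and not taken from the toolbox sources: the function name mycostwithflag, the evaluation budget of 50 and the feasibility test x(1) > 0 are assumptions.

// Hypothetical cost function for optim which uses the ind flag.
function [f, g, ind] = mycostwithflag(x, ind)
    global funevals
    funevals = funevals + 1
    f = 0
    g = zeros(x)
    if funevals > 50 then
        // Ask optim to terminate the optimization (ind = 0).
        ind = 0
        return
    end
    if x(1) <= 0 then
        // Report that f cannot be evaluated at this x (ind < 0):
        // optim then divides the step by 10 and tries again.
        ind = -1
        return
    end
    f = x(1) - log(x(1)) + x(2)^2
    g = [1 - 1/x(1); 2 * x(2)]
endfunction

global funevals
funevals = 0
[fopt, xopt] = optim(mycostwithflag, [2; 1])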
1.7.5 An example

The following script illustrates that the gradient may become very small while the algorithm continues. This shows that the termination criterion is not based on the gradient, but on the length of the step. The problem has two parameters, so that n = 2. The cost function is

  f(x) = x_1^p + x_2^p,    (1.7)

where p >= 0 is an even integer. Here we choose p = 10. The gradient of the function is

  g(x) = ∇f(x) = (p x_1^{p-1}, p x_2^{p-1})^T    (1.8)

and the Hessian matrix is

  H(x) = [ p(p-1) x_1^{p-2}          0
                  0           p(p-1) x_2^{p-2} ].    (1.9)

The optimum of this optimization problem is at

  x* = (0, 0)^T.    (1.10)

The following Scilab script defines the cost function, checks that the derivatives are correctly computed and performs an optimization. At each iteration, the norm of the gradient of the cost function is displayed, so that one can see whether the algorithm terminates when the gradient is small.

function [f, g, ind] = myquadratic(x, ind)
    p = 10
    if ind == 1 | ind == 2 | ind == 4 then
        f = x(1)^p + x(2)^p;
    end
    if ind == 1 | ind == 2 | ind == 4 then
        g(1) = p * x(1)^(p-1)
        g(2) = p * x(2)^(p-1)
    end
    if ind == 1 then
        mprintf("|x|=%e, f=%e, |g|=%e\n", norm(x), f, norm(g))
    end
endfunction

function f = quadfornumdiff(x)
    f = myquadratic(x, 2)
endfunction

x0 = [-1.2 1.0];
[f, g] = myquadratic(x0, 4);
mprintf("Computed f(x0) = %f\n", f);
mprintf("Computed g(x0) = \n");
disp(g');
mprintf("Expected g(x0) = \n");
disp(derivative(quadfornumdiff, x0'))
nap = 100
iter = 100
epsg = %eps
[fopt, xopt, gradopt] = optim(myquadratic, x0, ...
    "ar", nap, iter, epsg)
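As a quick hand check of the derivative test in this script, evaluating (1.7) and (1.8) at the initial point x0 = (-1.2, 1.0)^T gives

  f(x0) = (-1.2)^10 + 1.0^10 = 6.1917364224 + 1 = 7.1917364224,

  g(x0) = (10 (-1.2)^9, 10 (1.0)^9)^T = (-51.59780352, 10)^T,

so the "Computed" and "Expected" gradients printed by the script should both agree with these values, up to the accuracy of the finite differences used by derivative.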