Optimization: An Introduction with Examples

One way to look at optimization is simply to find the minimum of a function. For a single-variable function such as y = (x - 1)^2, the minimum can be obtained by inspection, or by using calculus: set the derivative of the function to zero and solve for x. However, there is another method, demonstrated here with this simple function, that provides the basis for complex optimization problems. Consider an iterative method of finding the minimum based on an initial guess:

    x_{k+1} = x_k - α f'(x_k)

which simply states that the current estimate for the x that minimizes the function is the previous estimate minus some factor α times the derivative of the function at the previous estimate. This method is illustrated below (x0 = 3) for different values of α. As can be seen, convergence to the solution x = 1 is highly dependent on the value of α and can become unstable for a poor choice (note the overshoot and oscillation for α = 0.6 and 0.8). The parameter α is sometimes called the step size and is essentially an acceleration factor that affects the rate of convergence. One final note: the astute reader may notice the similarity between this method and the Newton-Raphson method for finding roots of equations.

α = 0.2:

  k    x_k      y'(x_k)    α     x_{k+1}
  1    3        4          0.2   2.2
  2    2.2      2.4        0.2   1.72
  3    1.72     1.44       0.2   1.432
  4    1.432    0.864      0.2   1.259
  5    1.259    0.518      0.2   1.156
  6    1.156    0.311      0.2   1.093
  7    1.093    0.187      0.2   1.056

[Figure: iterates plotted on y = (x - 1)^2 and convergence of x_k to x = 1 over 10 iterations, α = 0.2]

α = 0.4:

  k    x_k      y'(x_k)    α     x_{k+1}
  1    3        4          0.4   1.4
  2    1.4      0.8        0.4   1.08
  3    1.08     0.16       0.4   1.016
  4    1.016    0.032      0.4   1.003
  5    1.003    0.006      0.4   1.001
  6    1.001    0.001      0.4   1.000
  7    1.000    0.000      0.4   1.000

[Figure: iterates plotted on y = (x - 1)^2 and convergence of x_k to x = 1 over 10 iterations, α = 0.4]

α = 0.6:

  k    x_k      y'(x_k)    α     x_{k+1}
  1    3        4          0.6   0.6
  2    0.6     -0.8        0.6   1.08
  3    1.08     0.16       0.6   0.984
  4    0.984   -0.032      0.6   1.003
  5    1.003    0.006      0.6   0.999
  6    0.999   -0.001      0.6   1.000
  7    1.000    0.000      0.6   1.000

[Figure: iterates plotted on y = (x - 1)^2 and convergence of x_k to x = 1 over 10 iterations, α = 0.6; note the overshoot]

α = 0.8:

  k    x_k      y'(x_k)    α     x_{k+1}
  1    3        4          0.8   -0.2
  2   -0.2     -2.4        0.8    1.72
  3    1.72     1.44       0.8    0.568
  4    0.568   -0.864      0.8    1.259
  5    1.259    0.518      0.8    0.844
  6    0.844   -0.311      0.8    1.093
  7    1.093    0.187      0.8    0.944

[Figure: iterates plotted on y = (x - 1)^2 and convergence of x_k to x = 1 over 10 iterations, α = 0.8; note the overshoot and oscillation]
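To make the iteration concrete, here is a minimal Python sketch that reproduces the iterate columns of the tables above for y = (x - 1)^2 with x0 = 3. The tutorial itself contains no code, so the function and variable names below are illustrative choices, not part of the original.

```python
# Iteration x_{k+1} = x_k - alpha * y'(x_k) applied to y = (x - 1)^2,
# whose derivative is y'(x) = 2 * (x - 1).

def minimize_1d(x0, alpha, num_iters=10):
    """Return the list of iterates x_0, x_1, ..., x_{num_iters}."""
    xs = [x0]
    for _ in range(num_iters):
        x = xs[-1]
        dydx = 2.0 * (x - 1.0)       # derivative of (x - 1)^2 at the current estimate
        xs.append(x - alpha * dydx)  # previous estimate minus alpha times the derivative
    return xs

# Reproduce the tables: e.g. alpha = 0.2 gives 3, 2.2, 1.72, 1.432, ...
for alpha in (0.2, 0.4, 0.6, 0.8):
    xs = minimize_1d(x0=3.0, alpha=alpha)
    print(f"alpha = {alpha}: " + ", ".join(f"{x:.3f}" for x in xs))
```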
Multi-Parameter Optimization: A Worked Example

Multi-parameter optimization is a bit more complex, since the values of more than one parameter must be adjusted simultaneously to find the minimum of the desired function (sometimes called an objective function). Since this tutorial is focused on worked examples, we state without theory or derivation the final result, in this case for the Levenberg-Marquardt method, since it is among the more popular methods used:

    a_{k+1} = a_k - α_k [H(a_k) + λ_k I]^-1 J(a_k)

In this equation, [a] is the estimate of the parameter list, α > 0 is the step size as on the previous page, J is the Jacobian matrix of partial derivatives of the objective function (i.e. the function to be minimized), H is the Hessian matrix of second partial derivatives of the objective function, I is the identity matrix, and λ ≥ 0 is a conditioning parameter that influences the 'search' direction. In the example on the previous page, there was only one direction (along the curve) in which the solution could progress, since it is a one-dimensional problem. In the case of a surface z = f(x, y), there are multiple paths that could be taken in x and y to find the function minimum. In practice, λ should be kept as small as possible as long as the objective function is smaller than in the previous iteration. A popular automated method is to divide λ by 4 if it is smaller, otherwise multiply λ by 2. Solution convergence can be highly sensitive to α and λ.

Example: find the minimum of z = f(x, y) = sin(x)cos(y), using x0 = 1, y0 = 0.5.

    [a0] = [x0, y0]^T = [1, 0.5]^T

    J = [∂z/∂x, ∂z/∂y]^T = [cos(x)cos(y), -sin(x)sin(y)]^T

    H = [∂²z/∂x²    ∂²z/∂x∂y]   [-sin(x)cos(y)   -cos(x)sin(y)]
        [∂²z/∂y∂x   ∂²z/∂y² ] = [-cos(x)sin(y)   -sin(x)cos(y)]

First 4 iterations (α_k = 2 throughout; vectors and matrices occupy two rows per iteration):

  k  [a_k]    z(a_k)   J(a_k)    λ_k    H + λ_k I            [H + λ_k I]^-1     -[H + λ_k I]^-1 J   [a_k+1]
  0  1.000    0.7385    0.4742   2000   1999.262   -0.259    0.0005   0.0000    -0.0002371           1.000
     0.500             -0.4034            -0.259 1999.262    0.0000   0.0005     0.0002018           0.500
  1  1.000    0.7381    0.4744    500    499.262   -0.259    0.0020   0.0000    -0.0009498           0.998
     0.500             -0.4036            -0.259  499.262    0.0000   0.0020     0.0008079           0.502
  2  0.998    0.7365    0.4754    125    124.263   -0.261    0.0080   0.0000    -0.0038188           0.990
     0.502             -0.4043            -0.261  124.263    0.0000   0.0080     0.0032455           0.509
  3  0.990    0.7302    0.4793     31     30.520   -0.267    0.0328   0.0003    -0.0155881           0.959
     0.509             -0.4070            -0.267   30.520    0.0003   0.0328     0.0132004           0.535

Each row applies the update a_{k+1} = a_k + α_k (-[H + λ_k I]^-1 J); for example, a_1 = a_0 + 2 × (-0.0002371, 0.0002018) ≈ (1.000, 0.500).

[Figure: surface z = f(x, y) = sin(x)cos(y) showing the first 4 iterations from x0 = 1, y0 = 0.5, and a solution-convergence plot of x and y over 14 iterations]

The solution converged after about 13 iterations but required an adjustment to λ at iteration #10. The final solution obtained: x = -1.571 = -π/2, y = 0, z = -1.
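For completeness, the sketch below implements the update a_{k+1} = a_k - α_k [H + λ_k I]^-1 J for z = sin(x)cos(y), together with the λ adjustment rule described above (divide by 4 when the objective decreases, otherwise multiply by 2). The starting values λ0 = 2000 and α = 2 follow the iteration table; rejecting the step when the objective does not decrease is an added assumption, and the sketch is not guaranteed to reproduce the tutorial's full 14-iteration trace, which also involved a manual λ adjustment at iteration #10.

```python
import numpy as np

# Objective, gradient ("J" in the text), and Hessian ("H") for z = sin(x) * cos(y).
def f(a):
    x, y = a
    return np.sin(x) * np.cos(y)

def grad(a):
    x, y = a
    return np.array([ np.cos(x) * np.cos(y),
                     -np.sin(x) * np.sin(y)])

def hess(a):
    x, y = a
    return np.array([[-np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)],
                     [-np.cos(x) * np.sin(y), -np.sin(x) * np.cos(y)]])

def levenberg_marquardt(a0, alpha=2.0, lam=2000.0, num_iters=14):
    """Iterate a_{k+1} = a_k - alpha * inv(H + lam*I) @ J, adjusting lam each step."""
    a = np.asarray(a0, dtype=float)
    z = f(a)
    for k in range(num_iters):
        step = -np.linalg.solve(hess(a) + lam * np.eye(2), grad(a))
        a_trial = a + alpha * step
        z_trial = f(a_trial)
        if z_trial < z:        # objective decreased: accept the step, relax conditioning
            a, z = a_trial, z_trial
            lam /= 4.0
        else:                  # objective did not decrease: increase conditioning
            lam *= 2.0         # (step rejected here -- an assumption, see note above)
        print(f"k={k:2d}  x={a[0]:7.3f}  y={a[1]:7.3f}  z={z:8.4f}  lam={lam:10.3f}")
    return a

levenberg_marquardt([1.0, 0.5])   # x0 = 1, y0 = 0.5, as in the worked example
```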