10/14/2010

Golden Section Search

Very similar to a bisection search (the key difference is that we are looking for a minimum instead of a zero). A bracketing method (to get started we need a pair of x values that bracket the solution). At each iteration the interval containing the solution gets smaller and smaller. By continuing until the interval becomes appropriately small, any desired degree of accuracy can be achieved.

Step 1: Pick two interior points by applying the rules shown.

[Figure: interval from XL to XU, with interior points X1 = XL + d and X2 = XU - d, each a distance d in from one end]

d = (φ - 1)(XU - XL),   where φ = (1 + √5)/2 = 1.6180339887...

Step 2: Move one of the two "walls" in. If f(X1) > f(X2), XU gets moved to X1. Otherwise XL gets moved to X2.
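The interior-point rule from step 1 can be sketched as follows (a Python illustration rather than the course's MATLAB; the name `interior_points` is mine):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, 1.6180339887...

def interior_points(xL, xU):
    """Step 1: place X1 and X2, each a distance
    d = (phi - 1) * (xU - xL) in from one end of the bracket."""
    d = (PHI - 1) * (xU - xL)
    return xL + d, xU - d  # X1, X2

x1, x2 = interior_points(2, 8)  # the bracket used in the later example
```

Because φ - 1 > 1/2, the two points always land in reversed order (X2 < X1), which is what lets one of them survive into the next, smaller bracket.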
[Figure: the reduced interval after the move, with the surviving interior point marked between the new walls]

Step 3: Renumber the unused interior point (X2 becomes X1 and vice versa) and use the formulae from step 1 to create whichever interior point is missing.
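Steps 2 and 3 together amount to a single update per iteration. A sketch (Python for illustration; `golden_step` and its cached-value arguments are my own naming, not from the text's code):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def golden_step(f, xL, xU, x1, x2, f1, f2):
    """One iteration: move a wall in (step 2), renumber the surviving
    interior point (step 3), and create the single missing point, so
    only one new function evaluation is made."""
    if f1 > f2:                     # minimum cannot lie in (X1, XU]
        xU = x1
        x1, f1 = x2, f2             # old X2 becomes the new X1
        x2 = xU - (PHI - 1) * (xU - xL)
        f2 = f(x2)                  # the one new evaluation
    else:                           # minimum cannot lie in [XL, X2)
        xL = x2
        x2, f2 = x1, f1             # old X1 becomes the new X2
        x1 = xL + (PHI - 1) * (xU - xL)
        f1 = f(x1)                  # the one new evaluation
    return xL, xU, x1, x2, f1, f2
```

Each call shrinks the bracket width by the same factor, φ - 1 ≈ 0.618, regardless of which wall moves.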
[Figure: the bracket after steps 2 and 3, with X2 = XU - d and X1 (was X2) marked]

d = (φ - 1)(XU - XL),   where φ = (1 + √5)/2 = 1.6180339887...

This gets us back to where we were at the end of step 1. Steps 2 and 3 are now repeated until XL and XU become acceptably close to each other (i.e. until the minimum is located to the desired degree of accuracy).

The key feature of the algorithm is that the interior point that does not become a "wall" (step 2) is reused as an interior point on the next iteration. There is only one new point (and hence only one function evaluation) per iteration. This magic works because φ (the Golden Ratio) is used in positioning the interior points.

The Golden Ratio is the ratio of l1 to l2 that makes (l1 + l2)/l1 = l1/l2. Solving this equation gives:

l1/l2 = (1 + √5)/2 = 1.6180339887... = φ

Assume that the original interval is L.

[Figure: XL, X2, X1, XU, with X2 a distance (1 - r)L from XL and X1 a distance (1 - r)L from XU]

Assume that XL is moved to X2 (as the situation is symmetrical the other possible assumption gives exactly the same result). The original X1 becomes the new X2 and must be correctly positioned. From the diagrams:

(1 - r)L = r²L

(The old X1 sits a distance (1 - r)L from XU; as the new X2 it must sit a distance r·(rL) = r²L from XU, since the new interval has width rL.)
Dividing through by L and solving for the positive root gives:

r = (√5 - 1)/2 = 0.6180339887... = φ - 1

Every iteration reduces the interval containing the solution by a factor of (φ - 1) = 0.6180339887... Not as good as a bisection search (which halves the interval at every iteration) but still pretty good.

Let Δx0 be the original interval width.

Interval width after n iterations = Δx0 (φ - 1)^n

If the midpoint of the interval is used as the final answer (a reasonable approach) the maximum error is half of the interval width.

Max error after n iterations: EMAX = Δx0 (φ - 1)^n / 2

Number of iterations required to reduce EMAX to EDESIRED:

n = ln(2 EDESIRED / Δx0) / ln(φ - 1) = 2.078087 ln(Δx0 / (2 EDESIRED))

(since ln(φ - 1) = -0.481212)

Notes on text:

The discussion of error in the text makes pretty heavy weather of what is really a pretty simple concept. Instead of using the interval midpoint as the final answer, the text takes whichever internal point is not about to become one of XL and XU. The error formula assumes that XL and XU have yet to be updated (i.e. start-of-iteration values are assumed), but the Matlab code updates them before applying the formula (i.e. end-of-iteration values are used). This is a mistake. As usual, the text uses relative error (the error is divided by the magnitude of the solution).

The Matlab code in the text completely negates the advantages of the Golden Section search. Both interior points are recalculated on every iteration and the function is evaluated twice per iteration. If one is going to do this it would make more sense to locate the interior points just on either side of the interval midpoint (and so get rid of nearly half of the interval on every iteration).

function x = golden (f, xL, xU, Edes, display)
% GOLDEN Finds a minimum by performing a golden section search.
% Inputs:  f = a function of one variable
%          xL = lower bound of region containing minimum
%          xU = upper bound of region containing minimum
%          Edes = function stops when x within Edes of minimum
%          display = display option (0 = no display (default), 1 = display)
% Outputs: x - estimate of minimum
if nargin < 5; display = 0; end
p1 = ((1 + sqrt(5)) / 2) - 1;   % golden ratio - 1
if display
    fprintf ('    k           xL           x2           x1           xU         Emax\n');
end
% set up for first iteration
x1 = xL + p1 * (xU - xL); fx1 = f(x1);
x2 = xU - p1 * (xU - xL); fx2 = f(x2);
for k = 1 : 1000
    Emax = (xU - xL) / 2;
    if display
        fprintf ('%5d %12.6f %12.6f %12.6f %12.6f %12.6f\n', k, xL, x2, x1, xU, Emax);
    end
    if Emax <= Edes
        x = (xL + xU) / 2;
        return;
    end
    if fx2 < fx1
        xU = x1; x1 = x2; fx1 = fx2;            % old x2 becomes new x1
        x2 = xU - p1 * (xU - xL); fx2 = f(x2);  % brand new x2 required
    else
        xL = x2; x2 = x1; fx2 = fx1;            % old x1 becomes new x2
        x1 = xL + p1 * (xU - xL); fx1 = f(x1);  % brand new x1 required
    end
end
error ('Golden section search has not converged.');
end

Example Golden Section Search:
>> f = @(x) -5 * x^3 + 115.3 * x^2 - 700 * x + 757.5;
>> minx = golden (f, 2, 8, 0.001, 1)
    k           xL           x2           x1           xU         Emax
    1     2.000000     4.291796     5.708204     8.000000     3.000000
    2     2.000000     3.416408     4.291796     5.708204     1.854102
    3     3.416408     4.291796     4.832816     5.708204     1.145898
    4     3.416408     3.957428     4.291796     4.832816     0.708204
    5     3.957428     4.291796     4.498447     4.832816     0.437694
    6     3.957428     4.164079     4.291796     4.498447     0.270510
    7     3.957428     4.085145     4.164079     4.291796     0.167184
    8     4.085145     4.164079     4.212862     4.291796     0.103326
    9     4.085145     4.133929     4.164079     4.212862     0.063859
   10     4.133929     4.164079     4.182712     4.212862     0.039467
   11     4.133929     4.152562     4.164079     4.182712     0.024392
   12     4.152562     4.164079     4.171196     4.182712     0.015075
   13     4.152562     4.159680     4.164079     4.171196     0.009317
   14     4.159680     4.164079     4.166797     4.171196     0.005758
   15     4.159680     4.162398     4.164079     4.166797     0.003559
   16     4.159680     4.161360     4.162398     4.164079     0.002199
   17     4.161360     4.162398     4.163040     4.164079     0.001359
   18     4.162398     4.163040     4.163437     4.164079     0.000840
minx =
    4.1632

fminbnd Details:

Options may be specified (details as for fzero). fminbnd combines a golden section search and parabolic interpolation (see text).

>> opts = optimset('display', 'iter');
>> bestL = fminbnd (Tfunc, L1, L3, opts)   % example from last lecture
 Func-count      x         f(x)      Procedure
     1       502.564     14802.2     initial
     2       503.036     14802.1     golden
     3       503.328     14802.1     golden
     4       503.095     14802.1     parabolic
     5       503.095     14802.1     parabolic
     6       503.095     14802.1     parabolic
     7       503.095     14802.1     parabolic
Optimization terminated:
 the current x satisfies the termination criteria using OPTIONS.TolX of 1.000000e-004
bestL =
  503.0946
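To tie the pieces together, here is a compact Python version of the same search (my own port for illustration, not the text's code), run on the example above; it also checks the iteration-count formula n = 2.078087 ln(Δx0 / (2 EDESIRED)):

```python
import math

PHI = (1 + math.sqrt(5)) / 2
P1 = PHI - 1  # per-iteration interval reduction factor, ~0.618

def golden(f, xL, xU, Edes):
    """Golden section search; returns the bracket midpoint and the
    number of iterations once the maximum error (half the bracket
    width) is no more than Edes."""
    x1 = xL + P1 * (xU - xL); f1 = f(x1)
    x2 = xU - P1 * (xU - xL); f2 = f(x2)
    n = 0
    while (xU - xL) / 2 > Edes:
        if f2 < f1:
            xU, x1, f1 = x1, x2, f2               # old x2 becomes new x1
            x2 = xU - P1 * (xU - xL); f2 = f(x2)
        else:
            xL, x2, f2 = x2, x1, f1               # old x1 becomes new x2
            x1 = xL + P1 * (xU - xL); f1 = f(x1)
        n += 1
    return (xL + xU) / 2, n

f = lambda x: -5 * x**3 + 115.3 * x**2 - 700 * x + 757.5
x, n = golden(f, 2, 8, 0.001)
# predicted iteration count for dx0 = 6, Edes = 0.001
n_pred = math.ceil(2.078087 * math.log(6 / (2 * 0.001)))
```

The midpoint comes out near 4.1632, matching the MATLAB run, and the number of iterations performed agrees with the prediction from the error formula.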
This note was uploaded on 10/18/2010 for the course ECOR 2606 taught by Professor Aaaa during the Spring '10 term at Carleton.