Ma1c Analytic Recitation 4/23/09

1 Stirling's Formula and Computation Time

There is a very nice formula that you should be aware of: Stirling's formula, which says

$$n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n,$$

where the $\sim$ means that the ratio of the two sides tends to 1 as $n$ goes to $\infty$. In practice, the relative error is quite low for pretty much all values of $n$.

One interesting application of this is a simple lower bound on the computation time needed to sort a list. Sorting a list is the same thing as figuring out which permutation of the list puts it in order. Suppose that we consider one comparison to be a single operation: each comparison returns $<$, $>$, or $=$, depending on the inputs. If we look at the computation tree for any algorithm which sorts $n$ items, it is at most 4-valent (each vertex has at most 3 children), and there are at least $n!$ leaves at the bottom (corresponding to the $n!$ different permutations that may be needed to put the items in order). This means that the depth of the tree is at least $\log_3 n!$. Using Stirling's formula, we see that this is

$$\log_3 n! \approx \log_3\left(\sqrt{2\pi n}\left(\frac{n}{e}\right)^n\right) = n\log_3 n - n\log_3 e + \frac{1}{2}\log_3 n + C,$$

where $C = \frac{1}{2}\log_3 2\pi$. Therefore, sorting a list requires $\Omega(n\log n)$ comparisons, so we know that algorithms such as mergesort are optimal, at least as far as time is concerned.

2 Lagrange Multipliers

Last week, you had a few problems about optimizing functions globally by finding stationary points and such. It is common to want to optimize a function subject to some constraint. A nice way to do this is the method of Lagrange multipliers: if you have a function $f : \mathbb{R}^n \to \mathbb{R}$ and constraints $g_1, \dots, g_m : \mathbb{R}^n \to \mathbb{R}$ (i.e. $g_1(x) = 0, \dots, g_m(x) = 0$), then wherever $f$ has a relative extremum subject to the constraints we have

$$\nabla f = \lambda_1 \nabla g_1 + \cdots + \lambda_m \nabla g_m.$$

This may seem arbitrary, but there is good intuition here: suppose that you are at some point in the domain of $f$. To increase the function, you want to follow the gradient.
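As a concrete sanity check of the condition $\nabla f = \lambda \nabla g$, here is a minimal sketch on a made-up example (not from the recitation): maximize $f(x, y) = xy$ subject to $g(x, y) = x + y - 1 = 0$. The Lagrange system gives $y = \lambda$, $x = \lambda$, and the constraint then forces $x = y = \lambda = 1/2$; the code verifies this numerically with finite-difference gradients.

```python
# Lagrange multiplier check on a made-up example:
# maximize f(x, y) = x*y subject to g(x, y) = x + y - 1 = 0.
# Solving grad f = lam * grad g by hand gives x = y = lam = 1/2.

def grad(F, x, y, h=1e-6):
    """Central-difference gradient of F at (x, y)."""
    return ((F(x + h, y) - F(x - h, y)) / (2 * h),
            (F(x, y + h) - F(x, y - h)) / (2 * h))

f = lambda x, y: x * y
g = lambda x, y: x + y - 1

x0, y0, lam = 0.5, 0.5, 0.5
gf = grad(f, x0, y0)   # close to (0.5, 0.5)
gg = grad(g, x0, y0)   # close to (1.0, 1.0)

# At the constrained extremum, grad f equals lam * grad g
# and the constraint is satisfied.
assert all(abs(a - lam * b) < 1e-6 for a, b in zip(gf, gg))
assert abs(g(x0, y0)) < 1e-12
print("Lagrange condition holds at", (x0, y0), "with lambda =", lam)
```

The same pattern works with several constraints: one multiplier per $g_i$, and $n + m$ equations ($n$ gradient components plus $m$ constraints) in the $n + m$ unknowns $x_1, \dots, x_n, \lambda_1, \dots, \lambda_m$.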
If the gradient points outside of the region to which you are constrained (the feasible region), then you can imagine being held in the feasible region with the gradient acting as a force vector pulling you in that direction. Even though the gradient points outside the feasible region, you will be pulled along the component of the gradient that lies inside the feasible region, and you will continue to move until the gradient points in a direction perpendicular to the feasible region, at which point you are at a local maximum. Since the feasible region is cut out by the equations $g_i = 0$, being perpendicular to it means exactly that $\nabla f$ is a combination of the $\nabla g_i$.

This method does not work if the $\nabla g_i$ are not linearly independent. The book has an example in three dimensions on p. 317, but a simple example is to consider finding extrema of $f(x, y) = x^2 + y^2$ subject to the constraint $g(x, y) = x^2 - (y - 1)^3 = 0$. Obviously, there is a local (actually global) minimum at $(0, 1)$, but $\nabla g(0, 1) = (0, 0)$, so $\nabla f(0, 1) = (0, 2)$ can never be written as $\lambda \nabla g(0, 1)$, and the method misses this extremum.
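The failure in this example is easy to verify numerically. A short sketch, using the same finite-difference gradients as above: $\nabla g$ vanishes at $(0, 1)$, so no $\lambda$ can satisfy $\nabla f = \lambda \nabla g$ there, even though points on the constraint curve near $(0, 1)$ all have larger $f$.

```python
# The degenerate example: f(x, y) = x^2 + y^2 constrained to
# g(x, y) = x^2 - (y - 1)^3 = 0.  The constrained minimum is (0, 1),
# but grad g vanishes there, so grad f = lam * grad g has no solution.

def grad(F, x, y, h=1e-5):
    """Central-difference gradient of F at (x, y)."""
    return ((F(x + h, y) - F(x - h, y)) / (2 * h),
            (F(x, y + h) - F(x, y - h)) / (2 * h))

f = lambda x, y: x**2 + y**2
g = lambda x, y: x**2 - (y - 1)**3

gf = grad(f, 0.0, 1.0)   # close to (0, 2): nonzero
gg = grad(g, 0.0, 1.0)   # close to (0, 0): the constraint is not regular here

assert abs(gf[0]) < 1e-6 and abs(gf[1] - 2.0) < 1e-6
assert all(abs(c) < 1e-6 for c in gg)

# Nearby points on the curve (y >= 1, x = (y - 1)^(3/2)) all have
# larger f, confirming that (0, 1) really is the constrained minimum.
for t in (1e-2, 1e-1):
    assert f(t ** 1.5, 1 + t) > f(0.0, 1.0)
print("grad g vanishes at (0, 1); Lagrange's method cannot detect this minimum")
```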