1  Introduction to computational complexity
   Introduction to data structures

2  Why a Special Treatment?
   - Network problems and their LP formulations exploit special structure.
   - They are intuitive: flows on arcs rather than abstract variables and constraints.
   - Integrality property: integer solutions without the inefficiency of integer programming.

3  Next topic: computational complexity
   - What is an efficient algorithm?
   - How do we measure efficiency?
   - Worst-case analysis

4  Measuring Computational Complexity
   Consider the following algorithm for adding two m x n matrices A and B with coefficients a(.,.) and b(.,.):

      begin
        for i = 1 to m do
          for j = 1 to n do
            c(i,j) := a(i,j) + b(i,j)
      end

   What is the running time of this algorithm? Let's measure it as precisely as we can as a function of m and n. Is it 2mn, or 3mn, or what?
   - Worst case versus average case
   - How do we measure the running time?
   - What are the basic steps that we should count?

5  Compute the running time precisely.

      Operation       Number (as a function of m, n)
      Additions       ?
      Assignments     ?
      Comparisons     ?

   ? = the number of each operation performed when we add two matrices.

6  Towards Computational Complexity
   1. We will ignore running-time constants.
   2. Our running times will be stated in terms of relevant problem parameters, e.g., mn.
   3. We will measure everything in terms of the worst case, i.e., the most pessimistic analysis (performance guarantees).
   4. All arithmetic operations are assumed to take one step (or a number of steps bounded by a constant).

7  A Simpler Metric for Running Time

      Operation                               Number (as a function of m, n)
      TOTAL number of arithmetic operations   c*mn for some constant c

   The algorithm takes O(mn) steps.
   Algorithm A runs in O(f(n)) time if, for some numbers c and n0, the time taken by the algorithm is at most c*f(n) for all n >= n0 (asymptotic growth rate).

8  Simplifying Assumptions and Notation
   MACHINE MODEL: Random Access Machine (RAM). This is the computer model that everyone is used to. It allows the use of arrays, and it can select any element of an array or matrix in O(1) steps.
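As a minimal sketch of the matrix-addition loop from slide 4 (written in Python, which the slides do not use; the name add_matrices is illustrative), we can make the operation count of slide 5 concrete by counting additions explicitly:

```python
def add_matrices(a, b):
    """Add two m x n matrices given as lists of lists.

    Returns the sum matrix c and the number of additions performed.
    One addition and one assignment happen per entry, so the total
    work is proportional to m*n, i.e., O(mn) steps.
    """
    m, n = len(a), len(a[0])
    c = [[0] * n for _ in range(m)]
    additions = 0
    for i in range(m):                        # outer loop: m iterations
        for j in range(n):                    # inner loop: n iterations
            c[i][j] = a[i][j] + b[i][j]       # one addition, one assignment
            additions += 1
    return c, additions
```

Running this on a 3 x 2 instance performs exactly mn = 6 additions; the constant in front of mn depends on what we choose to count, which is why slide 6 discards constants.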
   For example, c(i,j) := a(i,j) + b(i,j) takes O(1) steps.

   Integrality Assumption: all numbers are integral (unless stated otherwise).

9  Size of a Problem
   The size of a problem is the number of bits needed to represent it.
   The size of an m x n matrix A is not mn: if each matrix element has K bits, the size is mnK.
   E.g., if 2^107 < a_ij < 2^108 for the largest entry, then K = 108. In general, K = O(log a_max).

10 Polynomial Time Algorithms
   We say that an algorithm runs in polynomial time if the number of steps taken by the algorithm on any instance I is bounded by a polynomial in the size of I.
   We say that an algorithm runs in exponential time if it does not run in polynomial time.
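The bit-size calculation on slide 9 can be sketched as follows (a Python illustration; the name problem_size_bits is not from the slides). Python's built-in int.bit_length gives the K for a single entry, matching the example that an entry strictly between 2^107 and 2^108 needs K = 108 bits:

```python
def problem_size_bits(matrix):
    """Number of bits needed to write down an integer matrix.

    Each entry needs at most K = bit_length(a_max) bits, so an
    m x n matrix has size m*n*K, not m*n. K = O(log a_max).
    """
    m, n = len(matrix), len(matrix[0])
    k = max(abs(x).bit_length() for row in matrix for x in row)
    return m * n * k
```

For instance, a 2 x 2 matrix whose largest entry is 15 (4 bits) has size 2 * 2 * 4 = 16 bits, and (2^107 + 1).bit_length() is indeed 108. This is the size against which "polynomial time" on slide 10 is measured.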
 Spring '10
 zeynephuygur