Reductions and NP-Completeness

A giraffe with its long neck is a very different beast from a mouse, which in turn is different from a snake. However, Darwin and company observed that the first two share some key similarities: both are social, nurse their young, and have hair. The third is completely different in these ways. Studying similarities and differences between things can reveal subtle and deep understandings of their underlying nature that would not be noticed by studying them one at a time. Sometimes things that at first appear to be completely different turn out, when viewed in another way, to be the same except for superficial cosmetic differences. This section teaches how to use reductions to discover these similarities between different optimization problems.

Reduction P1 ≤poly P2: We say that we can reduce problem P1 to problem P2 if we can write a polynomial-time (n^Θ(1)) algorithm for P1 using a supposed algorithm for P2 as a subroutine. (Note that we may or may not actually have an algorithm for P2.) The standard notation for this is P1 ≤poly P2.

Why Reduce? A reduction lets us compare the time complexities and underlying structures of the two problems. Reductions are useful for providing algorithms for new problems (upper bounds), for giving evidence that there are no fast algorithms for certain problems (lower bounds), and for classifying problems according to their difficulty.

Upper Bounds: From the reduction P1 ≤poly P2 alone, we cannot conclude that there is a polynomial-time algorithm for P1. But it does tell us that if there is a polynomial-time algorithm for P2, then there is one for P1. This is useful in two ways. First, it allows us to construct algorithms for new problems from known algorithms for other problems. Second, it tells us that P1 is "at least as easy as" P2.

Hotdogs ≤poly Linear Programming: Section ?? describes how to solve the problem of making a cheap hotdog using an algorithm for solving linear programming.

Bipartite Matching ≤poly Network Flows: We will give an algorithm for Bipartite Matching in Section 0.4 that uses the network flows algorithm.

Lower Bounds: The contrapositive of the last statement is that if there is no polynomial-time algorithm for P1, then there cannot be one for P2 (otherwise there would be one for P1). This tells us that P2 is "at least as hard as" P1.

(Any Optimization Problem) ≤poly CIR-SAT: This small-looking statement, proved by Steve Cook in 1971, has become one of the foundations of theoretical computer science. There are many interesting optimization problems. Some people worked hard on discovering fast algorithms for one, and others did the same for another. Cook's theorem shows that it is sufficient to focus on the optimization problem CIR-SAT, because if you can solve it quickly then you can solve them all quickly. However, after many years of working hard, people have given up and highly suspect that at least one optimization...
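The definition of a reduction above has a concrete shape in code: translate the P1 instance, call the supposed P2 algorithm as a subroutine, and translate its answer back, doing only polynomial extra work. As a toy illustration (my own example, not from the text), finding the maximum of a list reduces to sorting:

```python
def solve_maximum(nums, sorting_alg):
    """Reduce 'find the maximum' (P1) to 'sort a list' (P2).

    The reduction itself does only a constant amount of extra work:
    it hands the instance to the supposed P2 algorithm and reads the
    answer off the sorted output.  If sorting_alg runs in polynomial
    time, then so does this algorithm for P1.
    """
    sorted_nums = sorting_alg(nums)  # call the supposed P2 subroutine
    return sorted_nums[-1]           # translate the P2 answer back to a P1 answer
```

Here Python's built-in `sorted` can play the role of the "supposed" algorithm for P2, e.g. `solve_maximum([3, 1, 4, 1, 5], sorted)` returns 5. The same three-step pattern (translate instance, call subroutine, translate answer) underlies every reduction in this section.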
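The Bipartite Matching ≤poly Network Flows reduction mentioned above can be sketched concretely. This is a hedged illustration of the standard construction, not the book's own code, and the helper names (`max_flow`, `bipartite_matching_size`) are my own: add a source feeding every left vertex and a sink fed by every right vertex, give every edge unit capacity, and read the maximum matching size off the maximum flow value.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.
    Assumes all capacities are 0 or 1, as in the matching reduction,
    so each augmenting path pushes exactly one unit of flow."""
    flow = 0
    while True:
        # BFS for an augmenting path with positive residual capacity.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximum
        # Push one unit along the path, updating residual capacities.
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

def bipartite_matching_size(left, right, edges):
    """Reduce bipartite matching to network flow:
    source -> each left vertex, each right vertex -> sink, and each
    matching edge left -> right, all with capacity 1.  The max-flow
    value equals the size of a maximum matching."""
    cap = defaultdict(lambda: defaultdict(int))
    s, t = "source", "sink"
    for u in left:
        cap[s][u] = 1
    for v in right:
        cap[v][t] = 1
    for u, v in edges:
        cap[u][v] = 1
    return max_flow(cap, s, t)
```

For example, with left vertices {a, b}, right vertices {x, y}, and edges a–x, a–y, b–x, the maximum matching has size 2 (a–y together with b–x), which is exactly the max-flow value the reduction computes. Note the translation work (building the network) is clearly polynomial, which is what the reduction requires.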