A Short History of Computational Complexity

Lance Fortnow*
NEC Research Institute
4 Independence Way
Princeton, NJ 08540

Steve Homer†
Computer Science Department
Boston University
111 Cummington Street
Boston, MA 02215

November 14, 2002

1 Introduction

It all started with a machine. In 1936, Turing developed his theoretical computational model. He based his model on how he perceived mathematicians think. As digital computers were developed in the 40's and 50's, the Turing machine proved itself the right theoretical model for computation. We quickly discovered, though, that the basic Turing machine model fails to account for the amount of time or memory needed by a computer, a critical issue today but even more so in those early days of computing. The key idea, to measure time and space as a function of the length of the input, came in the early 1960's from Hartmanis and Stearns. And thus computational complexity was born.

In the early days of complexity, researchers tried to understand these new measures and how they related to each other. We saw the first notion of efficient computation: time polynomial in the input size. This led to complexity's most important concept, NP-completeness, and its most fundamental question, whether P = NP. The work of Cook and Karp in the early 70's showed that a large number of combinatorial and logical problems are NP-complete, i.e., as hard as any problem computable in nondeterministic polynomial time. The P = NP question is equivalent to whether any of these problems has an efficient solution. In the thirty years since, this problem has become one of the outstanding open questions in computer science and indeed in all of mathematics.

In the 70's we saw the growth of complexity classes as researchers tried to encompass different models of computation. One of those models, probabilistic computation, started with a probabilistic test for primality, led to probabilistic complexity classes, and gave rise to a new kind of interactive proof system that itself led to hardness results for approximating certain NP-complete problems. We have also seen strong evidence that we can remove the randomness from computations and, most recently, a deterministic algorithm for the original primality problem.

In the 80's we saw the rise of finite models like circuits that capture computation in an inherently different way. A new approach to problems like P = NP arose from these circuits, and though it has had limited success in separating complexity classes, this approach brought combinatorial techniques into the area and led to a much better understanding of the limits of these devices.

* URL: http://www.neci.nj.nec.com/homepages/fortnow. Email: [email protected]...
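As a concrete illustration of the probabilistic primality testing mentioned above: the survey gives no code, but the following is a minimal Python sketch of the Miller-Rabin test, one of the early randomized primality tests of that period. The function name and the rounds parameter are chosen here for illustration and do not come from the paper.

    import random

    def is_probably_prime(n, rounds=20):
        # Miller-Rabin randomized primality test (illustrative helper,
        # not from the survey). Returns False if n is certainly
        # composite; returns True if n is probably prime, erring on
        # composite inputs with probability at most 4**-rounds.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 = d * 2**s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)  # random base
            x = pow(a, d, n)                # a**d mod n
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a is a witness that n is composite
        return True

A test of this kind never rejects a prime and mistakes a composite for a prime only with small, one-sided error, which is exactly the behavior captured by the probabilistic complexity classes the survey describes.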