lect27-np-complete - Lecture Notes CMSC 251


Figure 34: Chain Matrix Multiplication. (Tables reconstructed from the figure: the subproblem costs m[i,j] and split points s[i,j] for the example below.)

    m[i,j]     j=2   j=3   j=4        s[i,j]     j=2   j=3   j=4
    i=1        120    88   158        i=1          1     1     3
    i=2               48   104        i=2                2     3
    i=3                     84        i=3                      3

    Final order: ((A1 (A2 A3)) A4)

The figure shows an example. This algorithm is tricky, so it would be a good idea to trace through this example (and the one given in the text). The initial set of dimensions is ⟨5, 4, 6, 2, 7⟩, meaning that we are multiplying A1 (5 × 4) times A2 (4 × 6) times A3 (6 × 2) times A4 (2 × 7). The optimal sequence is ((A1 (A2 A3)) A4).

Lecture 27: NP-Completeness: General Introduction (Tuesday, May 5, 1998)

Read: Chapt 36, up through Section 36.4.

Easy and Hard Problems: At this point in the semester you have hopefully learned what it means for an algorithm to be efficient, and how to design algorithms and determine their efficiency asymptotically. All of this is fine if it helps you discover an acceptably efficient algorithm for your problem. The question that often arises in practice is this: you have tried every trick in the book, and still your best algorithm is not fast enough. Although your algorithm can solve small problems reasonably efficiently (e.g. n ≤ 20), on the really large inputs you want to solve (e.g. n ≥ 100) it does not terminate quickly enough. When you analyze its running time, you realize that it is running in exponential time, perhaps n^√n, or 2^n, or 2^(2^n), or n!, or worse.

Towards the end of the 60's and in the early 70's, great strides were made in finding efficient solutions to many combinatorial problems. But at the same time there was also a growing list of problems for which there seemed to be no efficient algorithmic solution. The best algorithms known for these problems required exponential time.
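The chain-matrix example above can be checked mechanically. Below is a minimal sketch of the standard dynamic-programming recurrence (function and variable names are my own, not from the notes): m[i,j] is the minimum number of scalar multiplications needed to compute A_i · ... · A_j, and s[i,j] records the split index k that achieves it.

```python
# Sketch of the chain matrix multiplication DP (names are illustrative,
# not taken from the lecture notes).

def matrix_chain_order(p):
    """Given dimensions p[0..n], where A_i is p[i-1] x p[i], return the
    tables m (minimum scalar multiplications) and s (optimal split points),
    indexed 1..n as in the notes."""
    n = len(p) - 1  # number of matrices in the chain
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # length of the subchain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):             # split as (A_i..A_k)(A_{k+1}..A_j)
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k
    return m, s

def parenthesize(s, i, j):
    """Recover the optimal parenthesization from the split table s."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return f"({parenthesize(s, i, k)}{parenthesize(s, k + 1, j)})"

m, s = matrix_chain_order([5, 4, 6, 2, 7])
print(m[1][4])               # 158 scalar multiplications, as in Figure 34
print(parenthesize(s, 1, 4)) # ((A1(A2A3))A4)
```

Running this on the dimensions ⟨5, 4, 6, 2, 7⟩ reproduces the m[i,j] values in Figure 34 (120, 48, 84 on the diagonal of length-2 chains, then 88, 104, and finally 158) and the optimal order ((A1 (A2 A3)) A4).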
People began to wonder whether there was some unknown paradigm that would lead to solutions for these problems, or perhaps some proof that these problems are inherently hard and no algorithm exists that solves them in less than exponential time. Around this time a remarkable discovery was made. It turns out that many of these "hard" problems are interrelated, in the sense that if you could solve any one of them in polynomial time, then you could solve all of them in polynomial time. Over the next couple of lectures we will discuss some of these problems and introduce the notions of P, NP, and NP-completeness.

Polynomial Time: We need some way to separate the class of efficiently solvable problems from inefficiently solvable problems. We will do this by considering problems that can be solved in polynomial time....
This note was uploaded on 01/13/2012 for the course CMSC 351 taught by Professor Staff during the Fall '11 term at University of Louisville.

