Chapter 17: Limitations of Computing

What problems cannot be solved on a computer? What problems cannot be solved on a computer in a "reasonable" amount of time? These aren't just philosophical questions; their answers determine how practical it is to pursue computer-based solutions to real-world problems like hurricane prediction, disease control, and economic forecasting.

Chapter 17 Limitations of Computing, Page 1

Computability

To examine the limits of what it is possible to do with a computer, Alan Turing (1912-1954) developed a simplified mathematical model, called a Turing machine. A Turing machine consists of three parts:

1) A tape of cells from which symbols can be read and into which symbols can be written,
2) A read/write head that moves back and forth across the tape, reading the symbol inside the current cell and/or writing a new symbol into the current cell, and
3) A control unit that keeps track of what "state" the machine is in, and uses that state and the current symbol under the read/write head to:
   a) determine which symbol to place in the current cell,
   b) determine whether to move the read/write head one cell to the left or right, and
   c) determine the next state of the machine.

[Figure: a Turing machine, with the control unit driving a read/write head positioned over one cell of the tape.]

State Transition Diagrams

A state transition diagram may be used to define a Turing machine. Each transition labeled x/y/D signifies reading symbol x on the tape, replacing it with symbol y, and then moving the read/write head in direction D (L = left, R = right, - = stay).

[Figure: a state transition diagram with states START, ADD, CARRY, NO CARRY, OVERFLOW, RETURN, and HALT, shown alongside successive tape snapshots as the machine steps from * 1 0 1 * to * 1 1 0 *. Its transitions:
    START:    */*/L to ADD
    ADD:      1/0/L to CARRY; 0/1/L to NO CARRY
    CARRY:    1/0/L to CARRY; 0/1/L to NO CARRY; */1/L to OVERFLOW
    NO CARRY: 0/0/L and 1/1/L to NO CARRY; */*/R to RETURN
    OVERFLOW: */*/R to RETURN
    RETURN:   0/0/R and 1/1/R to RETURN; */*/- to HALT]

The state transition diagram above defines a Turing machine that increments a binary number on the tape by one.
The Church-Turing Thesis

Computer scientists commonly accept the Church-Turing Thesis, which states that the set of functions that can be calculated on a computer is exactly the same as the set of functions for which a Turing machine can be devised.

There are problems that have been proven to be non-computable (i.e., no Turing machine can be devised to calculate their solutions). One classical example is the Halting Problem: given a program with a set of input values, does the program halt on that input, or does it get stuck in an infinite loop?

Complexity

The time complexity of an algorithm is a measure of how many steps are executed when the associated program is run.

    void printA() {
        cout << 0 << endl;
        cout << 0 << endl;
        cout << 0 << endl;
        cout << 0 << endl;
    }
Number of output statements executed: 4. Time complexity: O(1).

    void printB() {
        int i;
        for (i = 1; i <= 100; i++)
            cout << 0 << endl;
    }
Number of output statements executed: 100. Time complexity: O(1).

    void printC(int n) {
        int i;
        for (i = 1; i <= n; i++)
            cout << 0 << endl;
    }
Number of output statements executed: n. Time complexity: O(n).

    void printD(int n) {
        int i, j;
        for (i = 1; i <= n; i++)
            for (j = 1; j <= n; j++)
                cout << 0 << endl;
    }
Number of output statements executed: n^2. Time complexity: O(n^2).

The "big-O" notation provides information regarding a program's "order of complexity". O(1) indicates that the execution time does not depend on the size of n; O(n) indicates that the execution time increases linearly as n increases; O(n^2) indicates that the execution time increases quadratically as n increases.
Logarithmic, Polynomial, and Exponential Complexity

An algorithm is said to have logarithmic time complexity if the number of steps in its execution is bounded by some logarithmic function: k log2(n). Essentially, this means that doubling the size of the problem (n) increases the execution time by only a constant amount (k).

An algorithm is said to have polynomial time complexity if the number of steps in its execution is bounded by some polynomial function: a_k n^k + a_(k-1) n^(k-1) + ... + a_2 n^2 + a_1 n + a_0.

An algorithm is said to have exponential time complexity if the number of steps in its execution is bounded by some exponential function: k(2^n). Essentially, this means that increasing the size of the problem (n) by one doubles the execution time.

       n    log2(n)     n^2      n^3        2^n
       5       2          25      125         32
      10       3         100     1000       1024
      20       4         400     8000    1048576

Big-O

An algorithm's time complexity is dominated by its most significant term. For example, an algorithm that executes in time n^2 + 10n is considered to be O(n^2) because, as n increases, the n^2 term ultimately dominates the 10n term.

Additional examples:
    5n + n^2 + 0.125n^3 is O(n^3)
    log2(n) + n^2 + 2^n is O(2^n)
    100n + max(n^2, 1000 - n^3) is O(n^2)
    n^3 - (n^3 - 80n^2 - 800n) is O(n^2), since it simplifies to 80n^2 + 800n
    1000000 + 0.000001 log2(n) is O(log2(n))

P and NP Problems

A problem is said to be a P problem if it can be solved with a deterministic, polynomial-time algorithm. (Deterministic algorithms have each step clearly specified.)

A problem is said to be an NP problem if it can be solved with a nondeterministic, polynomial-time algorithm. In essence, at a critical point in the NP problem's algorithm, a decision must be made, and it is assumed that some magical "choice" function (also called an oracle) always chooses correctly.

For example, take the Satisfiability Problem: Given a set of n boolean variables b1, b2, ...
bn, and a boolean function f(b1, b2, ..., bn), are there any values that can be assigned to the variables so that the function will evaluate to TRUE?

Trying every combination of boolean values would take exponential time, but the following nondeterministic solution has polynomial time complexity:

    for (i = 1; i <= n; i++)
        bi = choice(true, false);
    if (f(b1, b2, ..., bn) == true)
        satisfiable = true;
    else
        satisfiable = false;

The Knapsack Problem

The Knapsack Problem involves taking n valuable jewels J1, J2, ..., Jn, with respective weights w1, w2, ..., wn and prices p1, p2, ..., pn, and placing some of them in a knapsack that is capable of supporting a combined weight of M. The problem is to pack the maximum worth of gems without exceeding the capacity of the knapsack. (It's not as easy as it sounds; three lightweight $1000 gems might be preferable to one heavy $2500 gem, and one 20-pound gem worth a lot of money might be preferable to twelve 1-pound gems that are practically worthless.)

A nondeterministic polynomial solution:

    TotalWorth = 0;
    TotalWeight = 0;
    for (i = 1; i <= n; i++) {
        bi = choice(true, false);
        if (bi == true) {
            TotalWorth += pi;
            TotalWeight += wi;
        }
    }
    if (TotalWeight <= M)
        cout << "Woo-hoo!" << endl;
    else
        cout << "Doh!" << endl;