Algorithm Analysis

Algorithm

An algorithm is a finite set of well-defined instructions that takes some set of values as input and produces some set of values as output to solve a problem. An algorithm is thus a sequence of computational steps that transform the input into the output.

Computational Complexity

Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms? The answer is yes: if for no other reason, you would still like to demonstrate that your solution method terminates, and does so with the correct answer. If computers were infinitely fast, any correct method for solving a problem would do. You would probably want your implementation to be within the bounds of good software engineering practice, but you would most often use whichever method was easiest to implement.

Of course, computers cannot be infinitely fast, and memory cannot be free. Computing time is therefore a bounded resource, and so is space in memory. These resources should be used wisely, and algorithms that are efficient in terms of time or space will help you do so. Therefore, given several algorithms that solve a problem, we want to determine how much memory each uses and how much time each requires, so that we can choose the most efficient algorithm among them. This is called algorithm analysis.

We introduce the term complexity, or efficiency, to analyze algorithms. There are two major complexities, time complexity and space complexity, which are, respectively, how many steps it takes to perform a computation and how much memory is required to perform that computation.

Analyzing an Algorithm: The Maximum Contiguous Subsequence Sum Problem

Analyzing an algorithm has come to mean predicting the resources (time, memory, etc.) that the algorithm requires. Most often it is computation time that we want to measure.
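The notion of time complexity as a step count can be made concrete with a small sketch (an illustration of our own, not part of the course notes): a function that sums n numbers executes its loop body once per element, so its step count grows linearly with the input size n.

```python
def linear_sum(a):
    """Sum the elements of a, also counting how many loop steps were taken.

    The loop body runs exactly once per element, so the step count
    equals len(a): a linear time complexity.
    """
    total = 0
    steps = 0
    for x in a:
        total += x   # one unit of work per element
        steps += 1   # count that unit of work
    return total, steps

# Doubling the input size doubles the step count.
print(linear_sum(list(range(10))))  # -> (45, 10)
print(linear_sum(list(range(20))))  # -> (190, 20)
```

Counting abstract steps like this, rather than wall-clock seconds, is what lets us compare algorithms independently of any particular machine.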
Here, we give several algorithms that solve the problem called Maximum Contiguous Subsequence Sum and analyze them to determine which algorithm is most efficient, that is, which has the smallest time complexity. The problem is defined as follows:

Maximum Contiguous Subsequence Sum problem: Given an array of integers a_1, a_2, ..., a_n, find the maximum value of Σ_{k=i}^{j} a_k over all 1 ≤ i ≤ j ≤ n.

For instance, if the input is { -2, 11, -4, 13, -5, 2 }, then the answer is 20, which represents the contiguous subsequence encompassing item 2 through item 4 (11 + (-4) + 13).

The time taken by an algorithm solving this problem depends on the size of the input: finding the maximum value in a sequence of a thousand numbers takes longer than finding the answer for five numbers. In general, the time taken by an algorithm grows with the size of the input.
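As a baseline before any analysis, the most direct of the several algorithms is a brute-force scan over every (i, j) pair. This is a sketch of that straightforward approach (the function name is our own); it is the obvious O(n^2) version, not the most efficient one the notes go on to develop.

```python
def max_contiguous_subsequence_sum(a):
    """Brute force: try every contiguous subsequence a[i..j], keep the best sum.

    The inner loop extends the running sum one element at a time, so each
    (i, j) pair costs O(1) extra work, for O(n^2) total.
    """
    best = 0  # by convention, the empty subsequence has sum 0
    for i in range(len(a)):
        running = 0
        for j in range(i, len(a)):
            running += a[j]        # sum of a[i..j], built incrementally
            best = max(best, running)
    return best

print(max_contiguous_subsequence_sum([-2, 11, -4, 13, -5, 2]))  # -> 20
```

On the example above it finds 20 via the subsequence 11 + (-4) + 13; the point of the analysis that follows is that this quadratic step count becomes the bottleneck as the input grows.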
This note was uploaded on 04/20/2010 for the course CS CS206 taught by Professor Lee during the Fall '09 term at KAIST (Korea Advanced Institute of Science and Technology).