# Lecture 12 - Big-O (Computer Science Notes)


## Running Times of Programs

An important aspect of designing a computer program is figuring out how well it runs in a range of likely situations. Designers need to estimate how fast it will run, how much memory it will require, how reliable it will be, and so forth.

More typically, the designer has to analyze the behavior of a large C or Java program. It's not feasible to figure out exactly how long such a program will take: the transformation from standard programming languages to machine code is far too complicated. It's more useful to develop an analysis that abstracts away from unimportant details, so that it will be portable and durable. This abstraction process has two key components:

- Ignore multiplicative constants.
- Ignore behavior on small inputs, concentrating on how well programs handle large inputs (a.k.a. asymptotic analysis).

Multiplicative constants are extremely sensitive to details of the implementation, hardware platform, etc. Hard-to-address problems more often arise when a program's use expands to larger examples. For example, a small database program developed for a community college might have trouble coping if deployed to handle (say) all registration records for U. Illinois.

## Function Growth: The Idea

So, suppose that you model the running time of a program as a function F(n), where n is some measure of the size of the input problem. For a database application, n might be the number of entries; for a numerical program, n might be the magnitude or the number of digits of an input number. Then, to compare the running times of two programs, we need to compare the growth rates of the two running-time functions.

Suppose that f(x) = x and g(x) = x^2. For small positive inputs, x^2 is smaller. At the input 1 they have the same value, and then g gets bigger and rapidly diverges to become much larger than f. We'd like to say that g is "bigger," because it has bigger outputs for large inputs.
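The crossover between f and g can be checked directly. This is a minimal sketch (the function names `f` and `g` just mirror the ones in the text):

```python
def f(x):
    """The linear function f(x) = x."""
    return x

def g(x):
    """The quadratic function g(x) = x^2."""
    return x ** 2

# For small positive inputs, g is below f; they meet at x = 1;
# past that, g pulls away and keeps growing faster.
for x in [0.25, 0.5, 1, 2, 10, 100]:
    print(f"x = {x:6}   f(x) = {f(x):8}   g(x) = {g(x):8}")
```

Running this shows exactly the behavior described above: g(0.5) = 0.25 is below f(0.5) = 0.5, the two agree at x = 1, and by x = 100 the quadratic is a hundred times larger.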
Because we are only interested in the running times of algorithms, we'll only consider behavior on positive inputs. And we'll only worry about functions whose output values are positive, at least for large inputs.
