You have a large text file of people. Each person is represented by one line in the text file. The line starts with their ID number, followed by the person's name. The lines are sorted by ID number in ascending order.
There are n lines in this file. You write a search function that returns the name of a person whose ID number is given.
The simplest way to do that would be to program a loop that goes through each line and compares the ID number in the line against the given ID number. If there is a match, it returns the name in that line. This is very inefficient, because in the worst case the program needs to go through almost every line: the person we are looking for could be last.
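The linear scan described above can be sketched as follows. This is a minimal sketch, assuming each line has the hypothetical format `"<id> <name>"`; the exact file format is not given in the question.

```python
def linear_search(lines, target_id):
    """Scan every line until the ID matches: O(n) comparisons in the worst case."""
    for line in lines:
        # Assumed format: ID number, a space, then the name.
        id_str, name = line.split(" ", 1)
        if int(id_str) == target_id:
            return name
    return None  # no person with that ID
```

For example, `linear_search(["1 Alice", "2 Bob", "3 Carol"], 3)` must examine all three lines before finding `"Carol"`, which illustrates the worst case.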
Using the fact that the file is sorted will greatly speed up the process, by allowing us to use a binary search algorithm:
We go to the middle line of the file first and compare the ID found there (P) to the given ID (Q). (If the number of lines is even, we go above or below the arithmetic middle.)
If P = Q, then our algorithm terminates: we have found the person we are looking for.
If P is less than Q, that means that the person we are looking for is in the second half of the file. We now repeat our algorithm on the second half of the file.
If P is greater than Q, that means that the person we are looking for is in the first half of the file. We now repeat our algorithm on the first half of the file.
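The three cases above can be sketched as an iterative binary search. As before, this is a sketch assuming the hypothetical `"<id> <name>"` line format; the halving is done with index arithmetic rather than by physically splitting the file.

```python
def binary_search(lines, target_id):
    """Repeatedly compare against the middle line and discard half the range."""
    lo, hi = 0, len(lines) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # for an even count this picks below the middle
        id_str, name = lines[mid].split(" ", 1)
        p = int(id_str)               # P: the ID found at the middle line
        if p == target_id:            # P = Q: found the person
            return name
        elif p < target_id:           # P < Q: repeat on the second half
            lo = mid + 1
        else:                         # P > Q: repeat on the first half
            hi = mid - 1
    return None                       # ID not present in the file
```

Each iteration performs a constant number of comparisons and halves the number of candidate lines, which is the key fact for answering the question that follows.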
Of what order is the worst-case number of comparison operations that are needed for this algorithm to terminate?