CHAPTER 10  Recursion

CONTENTS

What Is Recursion?
Tracing a Recursive Method
Recursive Methods That Return a Value
Recursively Processing an Array
Recursively Processing a Linked Chain
The Time Efficiency of Recursive Methods
The Time Efficiency of countDown
The Time Efficiency of Computing x^n
A Simple Solution to a Difficult Problem
A Poor Solution to a Simple Problem
Tail Recursion
Mutual Recursion
PREREQUISITES

Chapter 1    Java Classes
Chapter 5    List Implementations That Use Arrays
Chapter 6    List Implementations That Link Data
Chapter 9    The Efficiency of Algorithms

OBJECTIVES

After studying this chapter, you should be able to
- Determine whether a given recursive method will end successfully in a finite amount of time
- Write a recursive method
- Estimate the time efficiency of a recursive method
- Identify tail recursion and replace it with iteration

Repetition is a major feature of many algorithms. In fact, repeating things rapidly is
a key ability of computers. Two problem-solving processes involve repetition; they
are called iteration and recursion. In fact, most programming languages provide two kinds of
repetitive constructs, iterative and recursive.
You know about iteration because you know how to write a loop. Regardless of the loop construct you use—for, while, or do—your loop contains the statements that you want to repeat and a
mechanism for controlling the number of repetitions. You might have a counted loop that counts
repetitions as 1, 2, 3, 4, 5, or 5, 4, 3, 2, 1. Or the loop might execute repeatedly while a boolean
variable or expression is true. Iteration often provides a straightforward and efﬁcient way to implement a repetitive process.
At times, iterative solutions are elusive or hopelessly complex. Discovering or verifying such
solutions is not a simple task. In these cases, recursion can provide an elegant alternative. Some
recursive solutions can be the best solutions, some provide insight for ﬁnding a better iterative solution, and some should not be used at all because they are grossly inefﬁcient. Recursion, however,
remains an important problem-solving strategy.
This chapter will show you how to think recursively.

What Is Recursion?
10.1 You can build a house by hiring a contractor. The contractor in turn hires several subcontractors to
complete portions of the house. Each subcontractor might hire other subcontractors to help. You use
the same approach when you solve a problem by breaking it into smaller problems. In one special
variation of this problem-solving process, the smaller problems are identical except for their size.
This special process is called recursion.
Suppose that you can solve a problem by solving an identical but smaller problem. How will
you solve the smaller problem? If you use recursion again, you will need to solve an even smaller
problem that is just like the original problem in every other respect. How will replacing a problem
with another one ever lead to a solution? One key to the success of recursion is that eventually you
will reach a smaller problem whose solution you know because either it is obvious or it is given.
The solution to this smallest problem is probably not the solution to your original problem, but it
can help you reach it. Either just before or just after you solve a smaller problem, you usually contribute a portion of the solution. This portion, together with the solutions to the other, smaller problems, provides the solution to the larger problem.
Let’s look at an example.

10.2 Example: The countdown. It’s New Year’s Eve and the giant ball is falling in Times Square. The
crowd counts down the last ten seconds: “10, 9, 8, . . .” Suppose that I ask you to count down to 1
beginning at some positive integer like 10. You could shout “10” and then ask a friend to count
down from 9. Counting down from 9 is a problem that is exactly like counting down from 10,
except that there is less to do. It is a smaller problem.
To count down from 9, your friend shouts “9” and asks a friend to count down from 8. This
sequence of events continues until eventually someone’s friend is asked to count down from 1. That
friend simply shouts “1.” No other friend is needed. You can see these events in Figure 10-1.
In this example, I’ve asked you to complete a task. You saw that you could contribute a part of
the task and then ask a friend to do the rest. You know that your friend’s task is just like the original
task, but it is smaller. You also know that when your friend completes this smaller task, your job
will be done. What is missing from the process just described is the signal that each friend gives to
the previous person at the completion of a task.
When you count down from 10, I need you to tell me when you are done. I don’t care how—or
who—does the job, as long as you tell me when it is done. I can take a nap until I hear from you.
Likewise, when you ask a friend to count down from 9, you do not care how your friend finishes the job. You just want to know when it is done so you can tell me that you are done. You can take a nap
while you are waiting.

Note: Recursion is a problem-solving process that breaks a problem into identical but smaller problems.

[Figure 10-1: Counting down from 10. Each person shouts a number ("10!", "9!", . . ., "1!") and asks a friend to count down from the next smaller integer; several friends later, the person who shouts "1" says "I'm done," and each person in turn reports "I'm done" back to the previous one.]
too. Ultimately, we have a group of napping people waiting for someone to say “I’m done.” The
first person to make that claim is the person who shouts “1,” as Figure 10-1 illustrates, since that
person needs no help in counting down from 1. At this time in this particular example, the problem
is solved, but I don’t know that because I’m still asleep. The person who shouted “1” says “I’m
done” to the person who shouted “2.” The person who shouted “2” says “I’m done” to the person
who shouted “3,” and so on, until you say “I’m done” to me. The job is done; thanks for your help;
I have no idea how you did it, and I don’t need to know!

10.3 What does any of this have to do with Java? In the previous example, you play the role of a Java
method. I, the client, have asked you, the recursive method, to count down from 10. When you ask
a friend for help, you are invoking a method to count down from 9. But you do not invoke another
method; you invoke yourself!

Note: A method that calls itself is a recursive method. The invocation is a recursive call or recursive invocation.

The following Java method counts down from a given positive integer, displaying one integer per line.
/** Task: Counts down from a given positive integer.
 *  @param integer  an integer > 0 */
public static void countDown(int integer)
{
   System.out.println(integer);
   if (integer > 1)
      countDown(integer - 1);
} // end countDown

Since the given integer is positive, the method can display it immediately. This step is analogous to
you shouting “10” in the previous example. Next the method asks whether it is ﬁnished. If the given
integer is 1, there is nothing left to do. But if the given integer is larger than 1, we need to count
down from integer - 1. We’ve already noted that this task is smaller but otherwise identical to the
original problem. How do we solve this new problem? We invoke a method, but countDown is such
a method. It does not matter that we have not finished writing it at this point!

10.4 Will the method countDown actually work? Shortly we will trace the execution of countDown both
to convince you that it works and to show you how it works. But traces of recursive methods are
messy, and you usually do not have to trace them. If you follow certain guidelines when writing a
recursive method, you can be assured that it will work.
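In fact, you can simply run the method from Segment 10.3 and check its output. Here is a minimal test program; the wrapper class CountDownDemo is our own framing, not the book's:

```java
public class CountDownDemo {
    /** Counts down from a given positive integer, as in Segment 10.3. */
    public static void countDown(int integer) {
        System.out.println(integer);
        if (integer > 1)
            countDown(integer - 1);
    }

    public static void main(String[] args) {
        countDown(3); // displays 3, 2, and 1, one integer per line
    }
}
```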
In designing a recursive solution, you need to answer certain questions:

Note: Questions to answer when designing a recursive solution
- What part of the solution can you contribute directly?
- What smaller but identical problem has a solution that, when taken with your contribution, provides the solution to the original problem?
- When does the process end? That is, what smaller but identical problem has a known solution, and have you reached this problem, or base case?

For the method countDown, we have the following answers to these questions:
- The method countDown displays the given integer as the part of the solution that it contributes directly. This happens to occur first here, but it need not always occur first.
- The smaller problem is counting down from integer - 1. The method solves the smaller problem when it calls itself recursively.
- The if statement asks whether the process has reached the base case. Here the base case occurs when integer is 1. Because the method displays integer before checking it, nothing is left to do once the base case is identified.

Note: Design guidelines for successful recursion
To write a recursive method that behaves correctly, you generally should adhere to the following
design guidelines:
- The method definition must contain logic that involves a parameter to the method and leads to different cases. Typically, such logic includes an if statement or a switch statement.
- One or more of these cases should provide a solution that does not require recursion. These are the base cases, or stopping cases.
- One or more cases must include a recursive invocation of the method. These recursive invocations should in some sense take a step toward a base case by using "smaller" arguments or solving "smaller" versions of the task performed by the method.

Programming Tip: Infinite recursion
A recursive method that does not check for a base case, or that misses the base case, will execute
“forever.” This situation is known as inﬁnite recursion.
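A sketch of the situation, using a deliberately faulty countDown whose base-case test has been omitted (the class and method names are ours): the JVM eventually signals the runaway chain of calls by throwing StackOverflowError, which this demonstration catches.

```java
public class InfiniteRecursionDemo {
    /** A faulty countDown whose base-case test has been omitted. */
    public static void badCountDown(int integer) {
        // Without "if (integer > 1)", the recursion never stops on its own.
        badCountDown(integer - 1);
    }

    public static void main(String[] args) {
        try {
            badCountDown(10);
        } catch (StackOverflowError e) {
            System.out.println("infinite recursion detected: stack overflow");
        }
    }
}
```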
10.5 Before we trace the method countDown, we should note that we could have written it in other ways.
For example, a ﬁrst draft of this method might have looked like this:
public static void countDown(int integer)
{
   if (integer == 1)
      System.out.println(integer);
   else
   {
      System.out.println(integer);
      countDown(integer - 1);
   } // end if
} // end countDown

Here, the programmer considered the base case first. The solution is clear and perfectly acceptable,
but you might want to avoid the redundant println statement that occurs in both cases.

10.6 Removing the redundancy just mentioned could result in either the version given earlier in
Segment 10.3 or the following one:
public static void countDown(int integer)
{
   if (integer >= 1)
   {
      System.out.println(integer);
      countDown(integer - 1);
   } // end if
} // end countDown

When integer is 1, this method will produce the recursive call countDown(0). This turns out to be
the base case for this method, and nothing is displayed.
All three versions of countDown produce correct results; there are probably others as well.
Choose the one that is clearest to you.

10.7 The version of countDown just given in Segment 10.6 provides us an opportunity to compare it with
the following iterative version:
// Iterative version.
public static void countDown(int integer)
{
   while (integer >= 1)
   {
      System.out.println(integer);
      integer--;
   } // end while
} // end countDown

The two methods have a similar appearance. Both compare integer with 1, but the recursive version uses an if, and the iterative version uses a while. Both methods display integer. Both compute integer - 1.

Programming Tip: An iterative method contains a loop. A recursive method calls itself.
Although some recursive methods contain a loop and call themselves, if you have written a while
statement within a recursive method, be sure that you did not mean to write an if statement.
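As a quick check, the three recursive versions of countDown from Segments 10.3, 10.5, and 10.6 can be run side by side; renaming them countDownA, countDownB, and countDownC is our convenience, not the book's:

```java
public class CountDownVersions {
    // Segment 10.3: display first, then test whether to recurse.
    public static void countDownA(int integer) {
        System.out.println(integer);
        if (integer > 1)
            countDownA(integer - 1);
    }

    // Segment 10.5: base case considered first; println appears in both cases.
    public static void countDownB(int integer) {
        if (integer == 1)
            System.out.println(integer);
        else {
            System.out.println(integer);
            countDownB(integer - 1);
        }
    }

    // Segment 10.6: the call countDown(0) is the base case and displays nothing.
    public static void countDownC(int integer) {
        if (integer >= 1) {
            System.out.println(integer);
            countDownC(integer - 1);
        }
    }

    public static void main(String[] args) {
        countDownA(3); // 3 2 1
        countDownB(3); // 3 2 1
        countDownC(3); // 3 2 1
    }
}
```

All three calls produce identical output, which is the point of Segment 10.6's remark that you may choose whichever version is clearest to you.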
Question 1 Write a recursive void method that skips n lines of output, where n is a positive integer. Use System.out.println() to skip one line.
Question 2 Describe a recursive algorithm that draws a given number of concentric circles. The innermost circle should have a given diameter. The diameter of each of the other
circles should be 4/3 the diameter of the circle just inside it.

Tracing a Recursive Method
10.8 Now let’s trace the method countDown given in Segment 10.3:
public static void countDown(int integer)
{
   System.out.println(integer);
   if (integer > 1)
      countDown(integer - 1);
} // end countDown

Suppose that the client invokes this method with the statement

countDown(3);

This call is like any other call to a nonrecursive method. The argument 3 is copied into the parameter integer, and the following statements are executed:
   System.out.println(3);
   if (3 > 1)
      countDown(3 - 1); // first recursive call

A line containing 3 is displayed, and the recursive call countDown(2) occurs, as Figure 10-2a
shows.
Execution of the method is suspended until the results of countDown(2) are known. In this particular method deﬁnition, no statements appear after the recursive call. So although it appears that
nothing will happen when execution resumes, it is here that the method returns to the client.
[Figure 10-2: The effect of the method call countDown(3). (a) countDown(3) displays 3 and calls countDown(2); (b) countDown(2) displays 2 and calls countDown(1); (c) countDown(1) displays 1.]

10.9 Continuing our trace, countDown(2) causes the following statements to execute:
   System.out.println(2);
   if (2 > 1)
      countDown(2 - 1); // second recursive call

A line containing 2 is displayed, and the recursive call countDown(1) occurs, as shown in
Figure 10-2b. Execution of the method is suspended until the results of countDown(1) are known.
The call countDown(1) causes the following statements to execute:
   System.out.println(1);
   if (1 > 1)

A line containing 1 is displayed, as Figure 10-2c shows, and no other recursive call occurs.
Figure 10-3 illustrates the sequence of events from the time that countDown is first called.
The numbered arrows indicate the order of the recursive calls and the returns from the method.
After 1 is displayed, the method completes execution and returns to the point (arrow 4) after the
call countDown(1). Execution continues from there and the method returns to the point (arrow 5)
after the call countDown(2). Ultimately, a return to the point after the initial recursive call in the
client occurs.
Although tracking these method returns seems like a formality that has gained us nothing, it is
an important part of any trace because some recursive methods will do more than simply return to
their calling method. You will see an example of such a method shortly.

10.10 Figure 10-3 appears to show multiple copies of the method countDown. In reality, however, multiple
copies do not exist. Instead, for each call to a method—be it recursive or not—Java records the state
of the method, including the values of its parameters and local variables. Each record, called an
activation record, is analogous to a piece of paper. The records are placed into an ADT called a
stack, much as you would stack papers one on top of the other. The record of the currently executing method is on top of the stack. In this way, Java can suspend the execution of a recursive method
and invoke it again with new argument values. The boxes in Figure 10-3 correspond roughly to activation records, although the figure does not show them in the order in which they would appear in a
stack. Figure 10-4 illustrates the activation records that enter and leave the stack as a result of the
call countDown(3).
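The bookkeeping just described can be imitated with an explicit stack. The sketch below is ours; it uses java.util.ArrayDeque rather than the ADT stack discussed later in the book, pushing one record per pending call to countDown(3) and popping the records as the calls return:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ActivationRecordDemo {
    public static void main(String[] args) {
        // Each entry on this stack plays the role of an activation record,
        // holding the parameter value of a suspended call to countDown.
        Deque<Integer> records = new ArrayDeque<>();
        int n = 3;

        // The calls: push a record for countDown(3), countDown(2), countDown(1).
        for (int arg = n; arg >= 1; arg--) {
            records.push(arg);
            System.out.println(arg); // the println in countDown
        }

        // The returns: the records leave the stack in the reverse order.
        while (!records.isEmpty())
            System.out.println("return from countDown(" + records.pop() + ")");
    }
}
```

Popping in the reverse order of pushing mirrors the returns labeled 4, 5, and 6 in Figure 10-3.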
[Figure 10-3: Tracing the recursive call countDown(3). The client's call countDown(3) displays 3 and calls countDown(2); countDown(2) displays 2 and calls countDown(1); countDown(1) displays 1 and returns. Numbered arrows show the order of the calls (1, 2, 3) and of the returns (4, 5, 6) back through each caller to the client.]

Note: The stack of activation records
A recursive method uses more memory than an iterative method, in general, because a stack of activation records is used to implement the recursion.

Programming Tip: Stack overflow
Too many recursive calls can cause the error message “stack overﬂow.” This means that the stack
of activation records has become full. In essence, the method has used too much memory. Infinite recursion or large-size problems are the likely causes of this error.
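You can observe the stack filling up. The following sketch is ours, and the call count it reports depends on the JVM and its configured stack size; it recurses without a base case and counts how many calls fit before the overflow occurs:

```java
public class StackDepthDemo {
    static int depth = 0;

    static void dive() {
        depth++;
        dive(); // no base case: recurse until the stack of activation records is full
    }

    public static void main(String[] args) {
        try {
            dive();
        } catch (StackOverflowError e) {
            System.out.println("stack overflowed after " + depth + " calls");
        }
    }
}
```

On typical JVM settings the count is in the thousands, which is why recursion over very large problem sizes can fail where iteration would not.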
Question 3 Write a recursive void method countUp(n) that counts up from 1 to n, where n
is a positive integer. Hint: A recursive call will occur before you display anything.

[Figure 10-4: The stack of activation records during the execution of a call to countDown(3). (a) The record for countDown(3) is pushed onto the stack; (b) the record for countDown(2) is pushed on top of it; (c) the record for countDown(1) is pushed on top, and 1 is displayed; (d) through (f) as each call completes, its record is popped, until the stack is empty.]

Recursive Methods That Return a Value
10.11 The recursive method countDown in the previous sections is a void method. Valued methods can
also be recursive. The guidelines for successful recursion given in Segment 10.4 apply to valued
methods as well, with an additional note. Recall that a recursive method must contain a statement
such as an if that chooses among several cases. Some of these cases lead to a recursive call, but at
least one case has no recursive call. For a valued method, each of these cases must provide a value
for the method to return.

10.12 Example: Compute the sum 1 + 2 + . . . + n for any integer n > 0. The given value for this problem is the integer n. Beginning with this fact will help us to find the smaller problem because its
input will also be a single integer. The sum always starts at 1, so that can be assumed.
Suppose that I have given you a positive integer n and asked you to compute the sum of the ﬁrst
n integers. You need to ask a friend to compute the sum of the first m integers for some positive integer m. What should m be? Well, if your friend computes 1 + . . . + (n - 1), you can simply add n to
that sum to get your sum. Thus, if sumOf(n) is the method call that returns the sum of the ﬁrst n
integers, adding n to your friend’s sum occurs in the expression sumOf(n - 1) + n.
What small problem can be the base case? That is, what value of n results in a sum that you
know immediately? One possible answer is 1. If n is 1, the desired sum is 1.
With these thoughts in mind, we can write the following method:
/** @param n  an integer > 0
 *  @return the sum 1 + 2 + ... + n */
public static int sumOf(int n)
{
   int sum;
   if (n == 1)
      sum = 1;                // base case
   else
      sum = sumOf(n - 1) + n; // recursive call
   return sum;
} // end sumOf

10.13 The definition of the method sumOf satisfies the design guidelines for successful recursion. You
should be conﬁdent that the method will work correctly without tracing its execution. However, a
trace will be instructive here because it will not only show you how a valued recursive method
works, but also demonstrate actions that occur after a recursive call is complete.
Suppose that the client invokes this method with the statement
System.out.println(sumOf(3));

The computation occurs as follows:
1. sumOf(3) is sumOf(2) + 3; sumOf(3) suspends execution, and sumOf(2) begins.
2. sumOf(2) is sumOf(1) + 2; sumOf(2) suspends execution, and sumOf(1) begins.
3. sumOf(1) returns 1.

Once the base case is reached, the suspended executions resume, beginning with the most recent.
Thus, sumOf(2) returns 1 + 2, or 3; then sumOf(3) returns 3 + 3, or 6. Figure 10-5 illustrates this
computation as a stack of activation records.
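Running sumOf confirms the trace. The wrapper class below is our own; the method body is exactly the one from Segment 10.12:

```java
public class SumOfDemo {
    /** @param n  an integer > 0
     *  @return the sum 1 + 2 + ... + n */
    public static int sumOf(int n) {
        int sum;
        if (n == 1)
            sum = 1;                // base case
        else
            sum = sumOf(n - 1) + n; // recursive call
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOf(3));  // displays 6, as the trace predicts
        System.out.println(sumOf(10)); // displays 55
    }
}
```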
Question 4 Write a recursive valued method that computes the product of the integers
from 1 to n, where n > 0.

Note: Should you trace a recursive method?
We have shown you how to trace the execution of a recursive method primarily to show you
how recursion works and to give you some insight into how a typical compiler implements
recursion. Should you ever trace a recursive method? Usually no. You certainly should not
trace a recursive method while you are writing it. If the method is incomplete, your trace will
be, too, and you are likely to become confused. If a recursive method does not work, follow the
suggestions given in the next programming tip. You should trace a recursive method only as a
last resort.

[Figure 10-5: The stack of activation records during the execution of a call to sumOf(3). (a) through (c) Records for sumOf(3), sumOf(2), and sumOf(1) are pushed onto the stack; (d) sumOf(1) returns 1, and sumOf(2) computes 1 + 2 = 3; (e) sumOf(3) computes 3 + 3 = 6; (f) 6 is displayed.]

Programming Tip: Debugging a recursive method
If a recursive method does not work, answer the following questions. Any “no” answers should
guide you to the error.
- Does the method have at least one parameter?
- Does the method contain a statement that tests a parameter and leads to different cases?
- Did you consider all possible cases?
- Does at least one of these cases cause at least one recursive call?
- Do these recursive calls involve smaller arguments, smaller tasks, or tasks that get closer to the solution?
- If these recursive calls produce or return correct results, will the method produce or return a correct result?
- Is at least one of the cases a base case that has no recursive call?
- Are there enough base cases?
- Does each base case produce a result that is correct for that case?
- If the method returns a value, does each of the cases return a value?

10.14 Our previous examples were simple so that you could study the construction of recursive methods.
Since you could have solved these problems iteratively with ease, should you actually use their
recursive solutions? Nothing is inherently wrong with these recursive methods. However, given the
way that typical present-day systems execute recursive methods, a stack overflow is likely for large
values of n. Iterative solutions to these simple examples would not have this difﬁculty and are easy
to write. Realize, however, that future operating systems might be able to execute these recursive
methods without difficulty.

Recursively Processing an Array
Later in this book we will talk about searching an array for a particular item. We will also look at
algorithms that sort, or arrange, the items in an array into either ascending or descending order.
Some of the more powerful searching and sorting algorithms often are stated recursively. In this
section, we will process arrays recursively in ways that will be useful to us later. We have chosen a
simple task—displaying the integers in an array—for our examples so that you can focus on the
recursion without the distraction of the task. We will consider more complex tasks later in this book
and in the exercises at the end of this chapter.

10.15 Suppose that we have an array of integers and we want a method that displays it. So that we can display all or part of the array, the method will display the array elements whose indices range from
first through last. Thus, we can declare the method as follows:
/** Task: Displays the integers in an array.
 *  @param array  an array of integers
 *  @param first  the index of the first element displayed
 *  @param last   the index of the last element displayed,
 *                first <= last */
public static void displayArray(int[] array, int first, int last)

This task is simple and could readily be implemented using iteration. You might not imagine,
however, that we could also implement it recursively in a variety of ways. But we can and will.

10.16 Starting with array[first]. An iterative solution would certainly start at the first element,
array[first], so it is natural to have our ﬁrst recursive method begin there also. If I ask you to display the array, you could display array[first] and then ask a friend to display the rest of the array. Displaying the rest of the array is a smaller problem than displaying the entire array. You wouldn’t
have to ask a friend for help if you had to display only one element—that is, if first and last were
equal. This is the base case. Thus, we could write the method displayArray as follows:
public static void displayArray(int[] array, int first, int last)
{
   System.out.print(array[first] + " ");
   if (first < last)
      displayArray(array, first + 1, last);
} // end displayArray

For simplicity, we assume that the integers will fit on one line. Notice that the client would follow a
call to displayArray with System.out.println() to get to the next line.

10.17 Starting with array[last]. Strange as it might seem, we can begin with the last element in the
array and still display the array from its beginning. Rather than displaying the last element right
away, you would ask a friend to display the rest of the array. After the elements array[first]
through array[last1] had been displayed, you would display array[last]. The resulting output
would be the same as in the previous segment.
The method that implements this plan follows:
public static void displayArray(int[] array, int first, int last)
{
   if (first <= last)
   {
      displayArray(array, first, last - 1);
      System.out.print(array[last] + " ");
   }
} // end displayArray

10.18 Dividing the array in half. A common way to process an array recursively divides the array into
two pieces. You then process each of the pieces separately. Since each of these pieces is an array
that is smaller than the original array, each deﬁnes the smaller problem necessary for recursion. Our
ﬁrst two examples also divided the array into two pieces, but one of the pieces contained only one
element. Here we divide the array into two approximately equal pieces. To divide the array, we ﬁnd
the element at or near the middle of the array. The index of this element is
int mid = (first + last) / 2;

Figure 10-6 shows two arrays and their middle elements. Suppose that we include array[mid]
in the left “half” of the array, as the ﬁgure shows. In Part b, the two pieces of the array are equal in
length; in Part a they are not. This slight difference in length doesn’t matter.
[Figure 10-6: Two arrays with their middle elements within their left halves. (a) An array of seven elements, indices 0 through 6, whose middle element is at index 3; (b) an array of eight elements, indices 0 through 7, whose middle element is at index 3.]

Once again, the base case is an array of one element. You can display it without help. But if the
array contains more than one element, you divide it into halves. You then ask a friend to display one
half and another friend to display the other half. These two friends, of course, represent two recursive calls in the following method:
public static void displayArray(int[] array, int first, int last)
{
   if (first == last)
      System.out.print(array[first] + " ");
   else
   {
      int mid = (first + last) / 2;
      displayArray(array, first, mid);
      displayArray(array, mid + 1, last);
   }
} // end displayArray

Question 5 In Segment 10.18, suppose that the array’s middle element is not in either half
of the array. Instead you can recursively display the left half, display the middle element,
and then recursively display the right half. What is the implementation of displayArray if
you make these changes?

Note: When you process an array recursively, you can divide it into two pieces. For example,
the first or last element could be one piece, and the rest of the array could be the other piece. Or
you could divide the array into halves.

10.19 Displaying a list. In Chapter 5, we used an array to implement the ADT list. We implemented the
list’s display method iteratively, but here we’ll use recursion instead. Since display has no parameters and our recursive displayArray methods do, we write displayArray as a private method that
display calls. Since the array, entry, of list entries is a data ﬁeld of the class that implements the
list, it need not be a parameter of displayArray. The arguments in the call to displayArray would
be zero for the first index and length - 1 for the last index, where length is a data field of the list’s
class. Finally, display is not a static method, so displayArray cannot be static.
We can use any version of displayArray given previously. Using the version in Segment
10.16, the revised methods appear as follows:
public void display()
{
   displayArray(0, length - 1);
   System.out.println();
} // end display

private void displayArray(int first, int last)
{
   System.out.print(entry[first] + " ");
   if (first < last)
      displayArray(first + 1, last);
} // end displayArray

Note: A recursive method that is part of an implementation of an ADT often is private,
because its necessary parameters make it unsuitable as an ADT operation.

Recursively Processing a Linked Chain
10.20 Again, for simplicity, let’s recursively display the data in a chain of linked nodes. Once again, we’ll
implement the method display for the ADT list, but this time let’s use the linked implementation
from Chapter 6. As it did in Segment 10.19, display will call a private recursive method. We will
name that method displayChain. Since displayChain will be recursive, it needs a parameter. That
parameter should represent the chain, so it will be a reference to the ﬁrst node in the chain.
Dividing a linked chain into pieces is not as easy as dividing an array, since we cannot access any
particular node without traversing the chain from its beginning. Hence, the most practical approach
displays the data in the ﬁrst node and then recursively displays the data in the rest of the chain.
Suppose that we name displayChain’s parameter nodeOne. Then nodeOne.data is the data in
the ﬁrst node, and nodeOne.next is a reference to the rest of the chain. What about the base case?
Although a one-element array was a fine base case for displayArray, using an empty chain as the
base case is easier here because we can simply compare nodeOne to null. Thus, we have the following implementations for the methods display and displayChain:
public void display()
{
   displayChain(firstNode);
   System.out.println();
} // end display

private void displayChain(Node nodeOne)
{
   if (nodeOne != null)
   {
      System.out.print(nodeOne.data + " ");
      displayChain(nodeOne.next);
   }
} // end displayChain

Note: When you write a method that processes a chain of linked nodes recursively, you use a
reference to the chain’s first node as the method’s parameter. You then process the first node followed by the rest of the chain.
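To try displayChain outside the list class, you can build a small chain by hand. The Node class below is a simplified stand-in for the one in Chapter 6; its data and next fields match the text, but the rest of the framing is our own:

```java
public class ChainDemo {
    // A minimal node for a chain of linked integers (a stand-in for Chapter 6's Node).
    static class Node {
        int data;
        Node next;
        Node(int data, Node next) { this.data = data; this.next = next; }
    }

    /** Displays the data in a chain, beginning with its first node. */
    static void displayChain(Node nodeOne) {
        if (nodeOne != null) {
            System.out.print(nodeOne.data + " ");
            displayChain(nodeOne.next);
        }
    }

    public static void main(String[] args) {
        // Build the chain 10 -> 20 -> 30.
        Node firstNode = new Node(10, new Node(20, new Node(30, null)));
        displayChain(firstNode); // prints: 10 20 30
        System.out.println();
    }
}
```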
10.21 Displaying a chain backwards. Suppose that you want to traverse a chain of linked nodes in
reverse order. In particular, suppose that you want to display the object in the last node, then the one
in the next-to-last node, and so on, working your way toward the beginning of the chain. Since each
node references the next node but not the previous one, using iteration for this task would be difﬁcult. You could traverse to the last node, display its contents, go back to the beginning and traverse
to the next-to-last node, and so on. Clearly, however, this is a tedious and time-consuming
approach. Alternatively, you could traverse the chain once and save a reference to each node. You
could then use these references to display the objects in the chain’s nodes in reverse order. A recursive solution would do this for you.
If a friend could display the nodes in reverse order, beginning with the second node, you could
display the ﬁrst node and complete the task. The following recursive solution implements this idea:
public void displayBackward()
{
displayChainBackward(firstNode);
System.out.println();
} // end displayBackward
private void displayChainBackward(Node nodeOne)
{
if (nodeOne != null)
{
displayChainBackward(nodeOne.next);
System.out.print(nodeOne.data + " ");
}
} // end displayChainBackward

Question 6 Trace the previous method displayBackward for a chain of three nodes.

The Time Efficiency of Recursive Methods
Chapter 9 showed you how to measure an algorithm’s time requirement by using Big Oh notation.
We used a count of the algorithm’s major operations as a ﬁrst step in determining an appropriate
growth-rate function. For the iterative examples we examined, that process was straightforward. We
will use a more formal technique here to measure the time requirement of a recursive algorithm and
thereby choose the right growth-rate function.

The Time Efficiency of countDown
10.22 As a ﬁrst example, consider the countDown method given in Segment 10.3. The size of the problem
of counting down to 1 from a given integer is directly related to the size of that integer. Since Chapter 9 used n to represent the size of the problem, we will rename the parameter integer in countDown to n to simplify our discussion. Here is the revised method:
public static void countDown(int n)
{
System.out.println(n);
if (n > 1)
countDown(n - 1);
} // end countDown

When n is 1, countDown displays 1. This is the base case and requires a constant amount of
time. When n > 1, the method requires a constant amount of time for both the println statement
and the comparison. In addition, it needs time to solve the smaller problem represented by the
recursive call. If we let t(n) represent the time requirement of countDown(n), we can express these
observations by writing
t(n) = 1 + t(n - 1) for n > 1
t(1) = 1
The equation for t(n) is called a recurrence relation, since the deﬁnition of the function t contains an occurrence of itself—that is, a recurrence. What we need is an expression for t(n) that is not
given in terms of itself. One way to ﬁnd such an expression is to pick a value for n and to write out
the equations for t(n), t(n - 1), and so on, until we reach t(1). From these equations, we should be
able to guess at an appropriate expression to represent t(n). We then need only to prove that we are
right. This might sound harder than it is.

10.23 Solving a recurrence relation. To solve the previous recurrence relation for t(n), let's begin with
n = 4. We get the following sequence of equations:
t(4) = 1 + t(3)
t(3) = 1 + t(2)
t(2) = 1 + t(1) = 1 + 1 = 2
Substituting 2 for t(2) in the equation for t(3) results in
t(3) = 1 + 2 = 3
Substituting 3 for t(3) in the equation for t(4) results in
t(4) = 1 + 3 = 4
It appears that
t(n) = n for n ≥ 1
We can start with a larger value of n, get the same result, and convince ourselves that it is true.
But we need to prove that this result is true for every n ≥ 1. This is not hard to do.
To prove that t(n) = n for n ≥ 1, we begin with the recurrence relation for t(n), since we know it
is true:
t(n) = 1 + t(n - 1) for n > 1
We need to replace t(n - 1) on the right side of the equation. Now if t(n - 1) = n - 1 when n > 1, the
following would be true for n > 1:
t(n) = 1 + n - 1 = n
Thus, if we can ﬁnd an integer k that satisﬁes the equation t(k) = k, the next higher integer will also
satisfy it. So will the next one and the next one. Since we are given that t(1) = 1, all integers larger
than 1 will satisfy the equation. This proof is an example of a proof by induction.
To conclude, we now know that countDown’s time requirement is given by the function t(n) = n.
Thus, the method is O(n).
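As an informal check of this result, we can instrument countDown to count how many times its body executes. The callCount field and the class scaffolding are our own additions, not part of the book's method:

```java
// Instrumented version of countDown. The callCount field is our own
// addition; it records how many times the method body executes, which
// should equal n, matching t(n) = n.
public class CountDownDemo
{
    public static long callCount = 0;

    public static void countDown(int n)
    {
        callCount++;              // one execution of the body
        System.out.println(n);
        if (n > 1)
            countDown(n - 1);
    } // end countDown

    public static void main(String[] args)
    {
        countDown(5);
        System.out.println("Body executed " + callCount + " times"); // 5 times
    }
}
```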
Question 7 What is the Big Oh of the method sumOf given in Segment 10.12?
Question 8 Computing x^n for some real number x and an integral power n ≥ 0 has a simple
recursive solution:
x^n = x · x^(n-1)
x^0 = 1
a. What recurrence relation describes this algorithm's time requirement?
b. By solving this recurrence relation, determine the Big Oh of this algorithm.

The Time Efficiency of Computing x^n
10.24 We can compute x^n for some real number x and an integral power n ≥ 0 more efficiently than the
approach that Question 8 suggests. To reduce the number of recursive calls and therefore the number of multiplications, we can express x^n as follows:
x^n = (x^(n/2))^2 when n is even and positive
x^n = x · (x^((n-1)/2))^2 when n is odd and positive
x^0 = 1
This computation could be implemented by a method power(x, n) that contains the recursive call
power(x, n/2). Since integer division in Java truncates its result, this call is appropriate regardless
of whether n is even or odd. Thus, power(x, n) would invoke power(x, n/2) once, square the
result, and, if n is odd, also multiply the result by x. These multiplications are O(1) operations.
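The method just described might look as follows in Java. The method name and parameters follow the text; the surrounding class is our own scaffolding:

```java
// A sketch of the O(log n) power method described in the text.
public class PowerDemo
{
    /** Computes x to the power n for an integer n >= 0. */
    public static double power(double x, int n)
    {
        if (n == 0)
            return 1;                  // base case: x^0 = 1

        double half = power(x, n / 2); // integer division truncates

        if (n % 2 == 0)
            return half * half;        // n even: x^n = (x^(n/2))^2
        else
            return x * half * half;    // n odd: x^n = x * (x^((n-1)/2))^2
    } // end power

    public static void main(String[] args)
    {
        System.out.println(power(2.0, 10)); // 1024.0
        System.out.println(power(3.0, 5));  // 243.0
    }
}
```

Because integer division truncates, the single recursive call power(x, n / 2) serves both the even and the odd case, just as the text observes.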
The recurrence relation that represents the method's time requirement to compute x^n is then
t(n) = 1 + t(n/2) when n ≥ 2
t(1) = 1
t(0) = 1

10.25 Since the recurrence relation involves n/2, our discussion will be simpler if n is a power of 2. So
let’s begin at n = 16 and write the following sequence of equations:
t(16) = 1 + t(8)
t(8) = 1 + t(4)
t(4) = 1 + t(2)
t(2) = 1 + t(1)
By substituting repeatedly, we get the following:
t(16) = 1 + t(8) = 1 + (1 + t(4)) = 2 + (1 + t(2)) = 3 + (1 + t(1)) = 4 + t(1)
Since 16 = 2^4, 4 = log2 16. This fact, together with the base case t(1) = 1, leads us to guess that
t(n) = 1 + log2 n
Now we need to prove that this guess is, in fact, true for n ≥ 1. It is true for n = 1, because
t(1) = 1 + log2 1 = 1
For n > 1, we know that the recurrence relation for t(n) is true:
t(n) = 1 + t(n/2)
We need to replace t(n/2). If our guess t(n) = 1 + log2 n were true for all values of n < k, we would
have t(k/2) = 1 + log2 (k/2), since k/2 < k. Thus,
t(k) = 1 + t(k/2)
= 1 + (1 + log2 (k/2))
= 2 + log2 (k/2)
= log2 4 + log2 (k/2)
= log2 (4k/2)
= log2 (2k)
= log2 2 + log2 k
= 1 + log2 k
To summarize, we assumed that t(n) = 1 + log2 n for all values of n < k and showed that t(k) = 1
+ log2 k. Thus, t(n) = 1 + log2 n for all n ≥ 1. Since power’s time requirement is given by t(n), the
method is O(log n).

A Simple Solution to a Difficult Problem
10.26 The Towers of Hanoi is a classic problem in computer science whose solution is not obvious. Imagine three poles and a number of disks of varying diameters. Each disk has a hole in its center so that
it can fit over each of the poles. Suppose that the disks have been placed on the first pole in order
from largest to smallest, with the smallest disk on top. Figure 10-7 illustrates this initial configuration for three disks.

Figure 10-7 The initial configuration of the Towers of Hanoi for three disks

The problem is to move the disks from the first pole to the third pole so that they remain piled
in their original order. But you must adhere to the following rules:
1. Move one disk at a time. Each disk you move must be a topmost disk.
2. No disk may rest on top of a disk smaller than itself.
3. You can store disks on the second pole temporarily, as long as you observe the previous two rules.

10.27 The solution is a sequence of moves. For example, if three disks are on pole 1, the following
sequence of seven moves will move the disks to pole 3, using pole 2 temporarily:
Move a disk from pole 1 to pole 3
Move a disk from pole 1 to pole 2
Move a disk from pole 3 to pole 2
Move a disk from pole 1 to pole 3
Move a disk from pole 2 to pole 1
Move a disk from pole 2 to pole 3
Move a disk from pole 1 to pole 3
Figure 10-8 illustrates these moves.

Figure 10-8 The sequence of moves for solving the Towers of Hanoi problem with three disks
Question 9 We discovered the previous solution for three disks by trial and error. Using
the same approach, what sequence of moves solves the problem for four disks?
With four disks, the problem’s solution requires 15 moves, so it is somewhat difﬁcult to ﬁnd by
trial and error. With more than four disks, the solution is much more difﬁcult to discover. What we
need is an algorithm that produces a solution for any number of disks. Even though discovering a
solution by trial and error is hard, finding a recursive algorithm to produce the solution is fairly easy.

Note: Invented in the late 1800s, the Towers of Hanoi problem was accompanied by this legend. A group of monks was said to have begun moving 64 disks from one tower to another.
When they ﬁnish, the world will end. When you ﬁnish reading this section, you will realize that
the monks—or their successors—could not have ﬁnished yet. By the time they do, it is quite
plausible that the disks, if not the world, will have worn out!

10.28 A recursive algorithm solves a problem by solving one or more smaller problems of the same type.
The problem size here is simply the number of disks. So imagine that the ﬁrst pole has four disks,
as in Figure 10-9a, and that I ask you to solve the problem. Eventually, you will need to move the
bottom disk, but ﬁrst you need to move the three disks on top of it. Ask a friend to move these three
disks—a smaller problem—according to our rules, but make the destination pole 2. Allow your
friend to use pole 3 as a spare. Figure 10-9b shows the final result of your friend's work.
When your friend tells you that the task is complete, you move the one disk left on pole 1 to pole
3. Moving one disk is a simple task. You don’t need help—or recursion—to do it. This disk is the
largest one, so it cannot rest on top of any other disk. Thus, pole 3 must be empty before this move.
After the move, the largest disk will be first on pole 3. Figure 10-9c shows the result of your work.
Now ask a friend to move the three disks on pole 2 to pole 3, adhering to the rules. Allow your
friend to use pole 1 as a spare. When your friend tells you that the task is complete, you can tell me
that your task is complete as well. Figure 10-9d shows the final results.

Figure 10-9 The smaller problems in a recursive solution for four disks

10.29 Before we write some pseudocode to describe the algorithm, we need to identify a base case. If
only one disk is on pole 1, we can move it directly to pole 3 without using recursion. With this as
the base case, the algorithm is as follows:
Algorithm to move numberOfDisks disks from startPole to endPole using tempPole
as a spare according to the rules of the Towers of Hanoi problem

if (numberOfDisks == 1)
Move disk from startPole to endPole
else
{
Move all but the bottom disk from startPole to tempPole
Move disk from startPole to endPole
Move all disks from tempPole to endPole
}

At this point, we can develop the algorithm further by writing
Algorithm solveTowers(numberOfDisks, startPole, tempPole, endPole)
if (numberOfDisks == 1)
Move disk from startPole to endPole
else
{
solveTowers(numberOfDisks - 1, startPole, endPole, tempPole)
Move disk from startPole to endPole
solveTowers(numberOfDisks - 1, tempPole, startPole, endPole)
}

If we choose zero disks as the base case instead of one disk, we can simplify the algorithm a
bit, as follows:
Algorithm solveTowers(numberOfDisks, startPole, tempPole, endPole)
if (numberOfDisks > 0)
{
solveTowers(numberOfDisks - 1, startPole, endPole, tempPole)
Move disk from startPole to endPole
solveTowers(numberOfDisks - 1, tempPole, startPole, endPole)
}

Although somewhat easier to write, the second version of the algorithm executes many more recursive calls.
Question 10 For two disks, how many recursive calls are made by each of the two algorithms just given?
Your knowledge of recursion should convince you that both forms of the algorithm are correct.
Recursion has enabled us to solve a problem that appeared to be difficult. But is this algorithm efficient? Could we do better if we used iteration?

10.30 Efficiency. Let's look at the efficiency of our algorithm. How many moves occur when we begin
with n disks? Let m(n) denote the number of moves that solveTowers needs to solve the problem
for n disks. Clearly,
m(1) = 1
For n > 1, the algorithm uses two recursive calls to solve problems that have n - 1 disks each. The
required number of moves in each case is m(n - 1). Thus, you can see from the algorithm that
m(n) = m(n - 1) + 1 + m(n - 1) = 2 m(n - 1) + 1
From this equation, you can see that m(n) > 2 m(n - 1). That is, solving the problem with n disks
requires more than twice as many moves as solving the problem with n - 1 disks.
It appears that m(n) is related to a power of 2. Let’s evaluate the recurrence for m(n) for a few
values of n:
m(1) = 1, m(2) = 3, m(3) = 7, m(4) = 15, m(5) = 31, m(6) = 63
It seems that
m(n) = 2^n - 1
We can prove this conjecture by using mathematical induction, as follows.

10.31 Proof by induction that m(n) = 2^n - 1. We know that m(1) = 1 and 2^1 - 1 = 1, so the conjecture is
true for n = 1. Now assume that it is true for n = 1, 2, . . . , k, and consider m(k + 1).
m(k + 1) = 2 m(k) + 1      (use the recurrence relation)
= 2 (2^k - 1) + 1          (we assumed that m(k) = 2^k - 1)
= 2^(k+1) - 1
Since the conjecture is true for n = k + 1, it is true for all n ≥ 1.

10.32 Exponential growth. The number of moves required to solve the Towers of Hanoi problem grows
exponentially with the number of disks n. That is, m(n) = O(2^n). This rate of growth is alarming, as
you can see from the following values of 2^n:
2^5 = 32
2^10 = 1024
2^20 = 1,048,576
2^30 = 1,073,741,824
2^40 = 1,099,511,627,776
2^50 = 1,125,899,906,842,624
2^60 = 1,152,921,504,606,846,976
Remember the monks mentioned at the end of Segment 10.27? They are making 2^64 - 1 moves. It
should be clear that you can use this exponential algorithm only for small values of n, if you want to
live to see the results.
Before you condemn recursion and discard our algorithm, you need to know that you cannot
do any better. Not you, not the monks, not anyone. We demonstrate this observation next by using
mathematical induction.

10.33 Proof that the Towers of Hanoi cannot be solved in fewer than 2^n - 1 moves. We have shown that
our algorithm for the Towers of Hanoi problem requires m(n) = 2^n - 1 moves. Since we know that at
least one algorithm exists—we found one—there must be a fastest one. Let M(n) represent the number
of moves that this optimal algorithm requires for n disks. We need to show that M(n) = m(n) for n ≥ 1.
Our algorithm solves the problem with one disk in one move. We cannot do better, so we have
that M(1) = m(1) = 1. If we assume that M(n - 1) = m(n - 1), consider n disks. Looking back at
Figure 10-9b, you can see that at one point in our algorithm the largest disk is isolated on one pole
and n - 1 disks are on another. This configuration would have to be true of an optimal algorithm as
well, for there is no other way to move the largest disk. Thus, the optimal algorithm must have
moved these n - 1 disks from pole 1 to pole 2 in M(n - 1) = m(n - 1) moves.
After moving the largest disk (Figure 10-9c), the optimal algorithm moves n - 1 disks from
pole 2 to pole 3 in another M(n - 1) = m(n - 1) moves. Altogether, the optimal algorithm makes at
least 2 M(n - 1) + 1 moves. Thus,
M(n) ≥ 2 M(n - 1) + 1
Now apply the assumption that M(n - 1) = m(n - 1) and then the recurrence for m(n) given in Segment 10.30 to get
M(n) ≥ 2 m(n - 1) + 1 = m(n)
We have just shown that M(n) ≥ m(n). But since the optimal algorithm cannot require more
moves than our algorithm, the expression M(n) > m(n) cannot be true. Thus, we must have M(n) =
m(n) for all n ≥ 1.

10.34 Finding an iterative algorithm to solve the Towers of Hanoi problem is not as easy as finding a
recursive algorithm. We now know that any iterative algorithm will require at least as many moves
as the recursive algorithm. An iterative algorithm will save the overhead—space and time—of
tracking the recursive calls, but it will not really be more efﬁcient than solveTowers. An algorithm
that uses both iteration and recursion to solve the Towers of Hanoi problem is discussed in the section “Tail Recursion,” and an entirely iterative algorithm is the subject of Project 2 at the end of this
chapter.

A Poor Solution to a Simple Problem
Some recursive solutions are so inefﬁcient that you should avoid them. The problem that we will
look at now is simple, occurs frequently in mathematical computations, and has a recursive solution
that is so natural that you are likely to be tempted to use it. Don't!

10.35 Example: Fibonacci numbers. Early in the 13th century, the mathematician Leonardo Fibonacci
proposed a sequence of integers to model the number of descendants of a pair of rabbits. Later
named the Fibonacci sequence, these numbers occur in surprisingly many applications.
The ﬁrst two terms in the Fibonacci sequence are 1 and 1. Each subsequent term is the sum of
the preceding two terms. Thus, the sequence begins as 1, 1, 2, 3, 5, 8, 13, . . . Typically, the
sequence is deﬁned by the equations
F0 = 1
F1 = 1
Fn = Fn-1 + Fn-2 when n ≥ 2
You can see why the following recursive algorithm would be a tempting way to generate the
sequence:
Algorithm Fibonacci(n)
if (n <= 1)
return 1
else
return Fibonacci(n-1) + Fibonacci(n-2)

10.36 This algorithm makes two recursive calls. That fact in itself is not the difficulty. Earlier, you saw
perfectly good algorithms—displayArray in Segment 10.18 and solveTowers in Segment
10.29—that make several recursive calls. The trouble here is that the same recursive calls are made
repeatedly. A call to Fibonacci(n) invokes Fibonacci(n-1) and then Fibonacci(n-2). But the
call to Fibonacci(n-1) has to compute Fibonacci(n-2), so the same Fibonacci number is computed twice.
Things get worse. The call to Fibonacci(n-1) calls Fibonacci(n-3) as well. The two previous calls to Fibonacci(n-2) each invoke Fibonacci(n-3), so Fibonacci(n-3) is computed three
times. Figure 10-10a illustrates the dependency of F6 on previous Fibonacci numbers and so indicates the number of times a particular number is computed repeatedly by the method Fibonacci.
In contrast, Figure 10-10b shows that an iterative computation of F6 computes each prior term
once. The recursive solution is clearly less efﬁcient. The next segments will show you just how
inefﬁcient it is.
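The iterative computation pictured in part (b) of Figure 10-10 can be written in Java as follows. The class and method names are our own scaffolding; we follow the book's convention that F0 = F1 = 1:

```java
// Iterative computation of the Fibonacci sequence, O(n) time.
// Uses the book's convention F0 = F1 = 1.
public class FibonacciDemo
{
    /** Returns the nth Fibonacci number iteratively. */
    public static long fibonacci(int n)
    {
        long previous = 1; // F0
        long current = 1;  // F1
        for (int i = 2; i <= n; i++)
        {
            long next = previous + current; // Fi = Fi-1 + Fi-2
            previous = current;
            current = next;
        }
        return current;
    } // end fibonacci

    public static void main(String[] args)
    {
        System.out.println(fibonacci(6)); // 13, matching Figure 10-10b
    }
}
```

Each prior term is computed exactly once, in contrast to the recursive algorithm's repeated recomputation.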
Figure 10-10 The computation of the Fibonacci number F6 (a) recursively; (b) iteratively.
In part (a), F6 and F5 are each computed once, F4 twice, F3 three times, and F2 five times.
Part (b) shows the iterative computation:
F0 = 1
F1 = 1
F2 = F1 + F0 = 2
F3 = F2 + F1 = 3
F4 = F3 + F2 = 5
F5 = F4 + F3 = 8
F6 = F5 + F4 = 13

10.37 The time efficiency of the algorithm Fibonacci. We can investigate the efficiency of the
Fibonacci algorithm by using a recurrence relation, as we did in Segments 10.22 through 10.25.
First, notice that Fn requires one add operation plus the operations that Fn-1 and Fn-2 require. So if
t(n) represents the time requirement of the algorithm in computing Fn, we have
t(n) = 1 + t(n - 1) + t(n - 2) for n ≥ 2
t(1) = 1
t(0) = 1
This recurrence relation looks like the recurrence for the Fibonacci numbers themselves. It should not
surprise you then that t(n) is related to the Fibonacci numbers. In fact, if you look at Figure 10-10a
and count the occurrences of the Fibonacci numbers F2 through F6, you will discover a Fibonacci
sequence.
To ﬁnd a relationship between t(n) and Fn, let’s expand t(n) for a few values of n:
t(2) = 1 + t(1) + t(0) = 1 + F1 + F0 = 1 + F2 > F2
t(3) = 1 + t(2) + t(1) > 1 + F2 + F1 = 1 + F3 > F3
t(4) = 1 + t(3) + t(2) > 1 + F3 + F2 = 1 + F4 > F4
We guess that t(n) > Fn for n ≥ 2. Notice that t(0) = 1 = F0 and t(1) = 1 = F1. These do not satisfy the
strict inequality of our guess.
We now prove that our guess is indeed fact. (You can skip the proof on your first reading.)

10.38 Proof by induction that t(n) > Fn for n ≥ 2. Since the recurrence relation for t(n) involves two
recursive terms, we need two base cases. In the previous segment, we already showed that t(2) > F2
and t(3) > F3. Now if t(n) > Fn for n = 2, 3, . . . , k, we need to show that t(k + 1) > Fk+1. We can do
this as follows:
t(k + 1) = 1 + t(k) + t(k - 1) > 1 + Fk + Fk-1 = 1 + Fk+1 > Fk+1
We can conclude that t(n) > Fn for all n ≥ 2.
Since we know that t(n) > Fn for all n ≥ 2, we can say that t(n) = Ω(Fn). Recall that the Big
Omega notation means that t(n) is at least as large as the Fibonacci number Fn. It turns out that we
can compute Fn directly without using the recurrence relation given in Segment 10.35. It can be
shown that
Fn = (a^n - b^n) / √5
where a = (1 + √5) / 2 and b = (1 - √5) / 2. Since |1 - √5| < 2, we have |b| < 1 and |b^n| < 1. Therefore, we have
Fn > (a^n - 1) / √5
Thus, Fn = Ω(a^n), and since we know that t(n) = Ω(Fn), we have t(n) = Ω(a^n). Some arithmetic
shows that the previous expression for a equals approximately 1.6. We conclude that t(n) grows
exponentially with n.

10.39 At the beginning of this section, we observed that each Fibonacci number is the sum of the preceding two Fibonacci numbers in the sequence. This observation should lead us to an iterative solution
that is O(n). Although the clarity and simplicity of the recursive solution makes it a tempting
choice, it is much too inefficient to use.

Programming Tip: Do not use a recursive solution that repeatedly solves the same problem
in its recursive calls.

Question 11 To compute the Fibonacci number F8 in the least time, should you do so
recursively, iteratively, or directly by evaluating the expression
Fn = (a^n - b^n) / √5
as given in Segment 10.38?

Tail Recursion
10.40 Tail recursion occurs when the last action performed by a recursive method is a recursive call. For
example, the following method countDown from Segment 10.6 is tail recursive:
public static void countDown(int integer)
{
if (integer >= 1)
{
System.out.println(integer);
countDown(integer - 1);
} // end if
} // end countDown

A method that implements the algorithm Fibonacci given in Segment 10.35 will not be tail recursive, even though a recursive call appears last in the method. A closer look reveals that the last
action is an addition.
The tail recursion in a method simply repeats the method’s logic with changes to parameters
and variables. Thus, you can perform the same repetition by using iteration. Converting a tail-recursive method to an iterative one is usually a straightforward process. For example, consider the
recursive method countDown just given. First replace the if statement with a while statement.
Then, instead of the recursive call, assign the call's argument integer - 1 to the method's formal
parameter integer. Doing so gives us the following iterative version of the method:
public static void countDown(int integer)
{
while (integer >= 1)
{
System.out.println(integer);
integer = integer - 1;
} // end while
} // end countDown

This method is essentially the same as the iterative method given in Segment 10.7.
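The same transformation works for any tail-recursive method. As a further illustration of our own (not an example from the book), Euclid's greatest-common-divisor algorithm is naturally tail recursive, and its recursive call can be replaced by a loop that reassigns the parameters:

```java
// Euclid's algorithm, first tail recursively and then iteratively.
// This is our own illustration of converting tail recursion to iteration.
public class GcdDemo
{
    public static int gcdRecursive(int a, int b)
    {
        if (b == 0)
            return a;                   // base case
        return gcdRecursive(b, a % b);  // tail call: the last action
    }

    public static int gcdIterative(int a, int b)
    {
        while (b != 0)                  // the if becomes a while
        {
            int remainder = a % b;
            a = b;                      // reassign the parameters
            b = remainder;              // instead of recurring
        }
        return a;
    }

    public static void main(String[] args)
    {
        System.out.println(gcdRecursive(48, 18)); // 6
        System.out.println(gcdIterative(48, 18)); // 6
    }
}
```

As with countDown, the iterative form repeats the method's logic by changing the parameter values rather than by calling the method again.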
Because converting tail recursion to iteration is often uncomplicated, some compilers convert
tail-recursive methods to iterative methods to save the overhead involved with recursion. Most of
this overhead involves memory, not time. If you need to save space, you should consider replacing
tail recursion with iteration.

10.41 Example. Let's replace the tail recursion in the algorithm solveTowers given in Segment 10.29:
Algorithm solveTowers(numberOfDisks, startPole, tempPole, endPole)
if (numberOfDisks > 0)
{
solveTowers(numberOfDisks - 1, startPole, endPole, tempPole)
Move disk from startPole to endPole
solveTowers(numberOfDisks - 1, tempPole, startPole, endPole)
}

This algorithm contains two recursive calls. The second one is tail recursive, since it is the algorithm's last action. Thus, we could try replacing the second recursive call with appropriate assignment statements and use a loop to repeat the method's logic, including the first recursive call, as
follows:
Algorithm solveTowers(numberOfDisks, startPole, tempPole, endPole)
while (numberOfDisks > 0)
{
solveTowers(numberOfDisks - 1, startPole, endPole, tempPole)
Move disk from startPole to endPole
numberOfDisks = numberOfDisks - 1
startPole = tempPole
tempPole = startPole
endPole = endPole
}

This isn't quite right, however. Obviously, assigning endPole to itself is superfluous. Assigning
tempPole to startPole and then assigning startPole to tempPole destroys startPole but leaves
tempPole unchanged. What we need to do is exchange tempPole and startPole. Let’s look at what is really happening here.
The only instruction that actually moves disks is Move disk from startPole to endPole. This
instruction moves the largest disk that is not already on endPole. The disk to be moved is at the bottom of a pole, so any disks that are on top of it need to be moved ﬁrst. Those disks are moved by the
ﬁrst recursive call. If we want to omit the second recursive call, what would we need to do instead
before repeating the ﬁrst recursive call? We must make sure that startPole contains the disks that
have not been moved to endPole. Those disks are on tempPole as a result of the ﬁrst recursive call.
Thus, we need to exchange the contents of tempPole and startPole.
Making these changes results in the following revised algorithm:
Algorithm solveTowers(numberOfDisks, startPole, tempPole, endPole)
while (numberOfDisks > 0)
{
solveTowers(numberOfDisks - 1, startPole, endPole, tempPole)
Move disk from startPole to endPole
numberOfDisks = numberOfDisks - 1
Exchange the contents of tempPole and startPole
}

This revised algorithm is unusual in that its loop contains a recursive call. The base case for
this recursion occurs when numberOfDisks is zero. Even though the method does not contain an if
statement, it does detect the base case, ending the recursive calls.

Note: In a tail-recursive method, the last action is a recursive call. This call performs a repetition that can be done more efficiently by using iteration. Converting a tail-recursive method to an iterative one is usually a straightforward process.

Mutual Recursion
10.42 Some recursive algorithms make their recursive calls indirectly. For example, we might have the
following chain of events: Method A calls Method B, Method B calls Method C, and Method C
calls Method A. Such recursion—called mutual recursion or indirect recursion—is more difﬁcult
to understand and trace, but it does arise naturally in certain applications.
For example, the following rules describe strings that are valid algebraic expressions:
• An algebraic expression is either a term or two terms separated by a + or - operator.
• A term is either a factor or two factors separated by a * or / operator.
• A factor is either a variable or an algebraic expression enclosed in parentheses.
• A variable is a single letter.

Suppose that the methods isExpression, isTerm, isFactor, and isVariable determine whether a
string is, respectively, an expression, a term, a factor, or a variable. The method isExpression calls
isTerm, which in turn calls isFactor, which then calls isVariable and isExpression. Figure 10-11
illustrates these calls.
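A sketch of these four mutually recursive methods appears below. The class name and the cursor-based design are our own assumptions, and our loops accept a chain of operators, a slight relaxation of the "two terms" wording; the book's Chapter 20 treats expression processing in full.

```java
// A sketch of the mutual recursion described above: isExpression calls
// isTerm, isTerm calls isFactor, and isFactor calls isVariable and
// isExpression. The scaffolding is ours, not the book's code.
public class ExpressionChecker
{
    private String input;
    private int cursor; // index of the next unread character

    public boolean isValid(String expression)
    {
        input = expression;
        cursor = 0;
        return isExpression() && cursor == input.length();
    }

    // expression -> term { (+|-) term }
    private boolean isExpression()
    {
        if (!isTerm())
            return false;
        while (peek() == '+' || peek() == '-')
        {
            cursor++;
            if (!isTerm())
                return false;
        }
        return true;
    }

    // term -> factor { (*|/) factor }
    private boolean isTerm()
    {
        if (!isFactor())
            return false;
        while (peek() == '*' || peek() == '/')
        {
            cursor++;
            if (!isFactor())
                return false;
        }
        return true;
    }

    // factor -> variable | ( expression )
    private boolean isFactor()
    {
        if (peek() == '(')
        {
            cursor++;
            if (!isExpression())
                return false;
            if (peek() != ')')
                return false;
            cursor++;
            return true;
        }
        return isVariable();
    }

    // variable -> a single letter
    private boolean isVariable()
    {
        if (cursor < input.length() && Character.isLetter(input.charAt(cursor)))
        {
            cursor++;
            return true;
        }
        return false;
    }

    private char peek()
    {
        return (cursor < input.length()) ? input.charAt(cursor) : '\0';
    }

    public static void main(String[] args)
    {
        ExpressionChecker checker = new ExpressionChecker();
        System.out.println(checker.isValid("a+(b*c)")); // true
        System.out.println(checker.isValid("a+"));      // false
    }
}
```

Notice that a parenthesized factor restarts the whole chain of calls through isExpression, which is exactly the indirect recursion Figure 10-11 depicts.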
Project 5 describes another example of mutual recursion. For a more detailed discussion of
algebraic expressions, see Chapter 20.

Figure 10-11 An example of mutual recursion: isExpression calls isTerm, isTerm calls isFactor, and isFactor calls isVariable and isExpression.

CHAPTER SUMMARY
• Recursion is a problem-solving process that breaks a problem into identical but smaller problems.
• The definition of a recursive method must contain logic that involves a parameter to the method and
leads to different cases. One or more of these cases are base cases, or stopping cases, because they provide a solution that does not require recursion. One or more cases include a recursive invocation of the
method that takes a step toward a base case by solving a “smaller” version of the task performed by the
method.
• For each call to a method, Java records the values of the method's parameters and local variables in an
activation record. The records are placed into an ADT called a stack that organizes them chronologically.
The record most recently added to the stack is of the currently executing method. In this way, Java can
suspend the execution of a recursive method and invoke it again with new argument values.
• A recursive method that processes an array often divides the array into portions. Recursive calls to the
method work on each of these array portions.
• A recursive method that processes a chain of linked nodes needs a reference to the chain's first node as
its parameter.
• A recursive method that is part of an implementation of an ADT often is private, because its necessary
parameters make it unsuitable as an ADT operation.
• A recurrence relation expresses a function in terms of itself. You can use a recurrence relation to express
the work done by a recursive method.
• Any solution to the Towers of Hanoi problem with n disks requires at least 2^n - 1 moves. A recursive
solution to this problem is clear and efficient.
• Each number in the Fibonacci sequence—after the first two—is the sum of the previous two numbers.
Computing a Fibonacci number recursively is quite inefﬁcient, as the required previous numbers are
computed several times each.
• Tail recursion occurs when the last action made by a recursive method is a recursive call. This recursive
call performs a repetition that can be done more efficiently by using iteration. Converting a tail-recursive
method to an iterative one is usually a straightforward process.
• Indirect recursion results when a method calls a method that calls a method, and so on until the first
method is called again.

PROGRAMMING TIPS
• An iterative method contains a loop. A recursive method calls itself. Although some recursive methods
contain a loop and call themselves, if you have written a while statement within a recursive method,
be sure that you did not mean to write an if statement.
• A recursive method that does not check for a base case, or that misses the base case, will not terminate
normally. This situation is known as infinite recursion.
• Too many recursive calls can cause the error message "stack overflow." This means that the stack of
activation records has become full. In essence, the method uses too much memory. Infinite recursion or
large-size problems are the likely causes of this error.
• Do not use a recursive solution that repeatedly solves the same problem in its recursive calls.
• If a recursive method does not work, answer the following questions. Any "no" answers should guide
you to the error.
– Does the method have at least one parameter?
– Does the method contain a statement that tests a parameter and leads to different cases?
– Did you consider all possible cases?
– Does at least one of these cases cause at least one recursive call?
– Do these recursive calls involve smaller arguments, smaller tasks, or tasks that get closer to the
solution?
– If these recursive calls produce or return correct results, will the method produce or return a correct result?
– Is at least one of the cases a base case that has no recursive call?
– Are there enough base cases?
– Does each base case produce a result that is correct for that case?
– If the method returns a value, does each of the cases return a value?

EXERCISES

1. Consider the method displayRowOfCharacters that displays any given character the
speciﬁed number of times on one line. For example, the call
displayRowOfCharacters('*', 5);
produces the line
*****
Implement this method in Java by using recursion.
2. Describe a recursive algorithm that draws concentric circles, given the diameter of the
outermost circle. The diameter of each inner circle should be three-fourths the diameter
of the circle that encloses it. The diameter of the innermost circle should exceed 1 inch.
3. Write a method that asks the user for integer input that is between 1 and 10, inclusive. If
the input is out of range, the method should recursively ask the user to enter a new input
value.
4. The factorial of a positive integer n—which we denote as n!—is the product of n and the
factorial of n - 1. The factorial of 0 is 1. Write two different recursive methods that each
return the factorial of n.
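Two ways of answering Exercise 4 are sketched here; the method names are our own. The first multiplies on the way back from the base case, and the second carries a running product forward in a tail-recursive helper:

```java
public class Factorials {
    /** Computes n! as the product of n and the factorial of n - 1. */
    public static long factorialDown(int n) {
        if (n == 0)
            return 1;                     // base case: 0! = 1
        return n * factorialDown(n - 1);  // recursive case
    }

    /** Computes n! by accumulating a running product as the recursion proceeds. */
    public static long factorialUp(int n) {
        return factorialHelper(1, n);
    }

    private static long factorialHelper(long product, int n) {
        if (n == 0)
            return product;                         // base case: product is complete
        return factorialHelper(product * n, n - 1); // tail-recursive case
    }
}
```

The second form is tail recursive, so it could be replaced with iteration, as this chapter's section on tail recursion discusses.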
5. Write a recursive method that writes a given array backward. Consider the last element
of the array first.
6. Repeat Exercise 5, but instead consider the ﬁrst element of the array ﬁrst.
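Exercises 5 and 6 can be sketched as follows. So that the result is easy to check, these methods build a string instead of displaying the elements directly; the class and method names are our own:

```java
public class BackwardDisplay {
    /** Writes array backward, considering the last element first. */
    public static String backwardFromLast(int[] array, int last) {
        if (last < 0)
            return "";                                        // base case: nothing left
        return array[last] + " " + backwardFromLast(array, last - 1);
    }

    /** Writes array backward, considering the first element first:
     *  the rest of the array is written backward before array[first]. */
    public static String backwardFromFirst(int[] array, int first) {
        if (first >= array.length)
            return "";                                        // base case: past the end
        return backwardFromFirst(array, first + 1) + array[first] + " ";
    }
}
```

The two versions differ only in whether the element is handled before or after the recursive call.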
7. A palindrome is a string that reads the same forward and backward. For example, deed
and level are palindromes. Write an algorithm in pseudocode that determines whether a
string is a palindrome.
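Although Exercise 7 asks only for pseudocode, the algorithm translates directly into Java. This sketch compares the first and last characters and then tests the interior of the string; it does not ignore case or spaces:

```java
public class Palindromes {
    /** Returns true if s reads the same forward and backward. */
    public static boolean isPalindrome(String s) {
        if (s.length() <= 1)
            return true;                                 // base case: "" or one character
        if (s.charAt(0) != s.charAt(s.length() - 1))
            return false;                                // mismatched end characters
        return isPalindrome(s.substring(1, s.length() - 1)); // test the interior
    }
}
```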
8. For three disks, how many recursive calls are made by each of the two solveTowers
algorithms given in Segment 10.29?
9. Write a recursive method that counts the number of nodes in a chain of linked nodes.
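Exercise 9 might look like the following; a minimal node class is included here so the sketch is self-contained, in the spirit of the Node class used in Chapter 6:

```java
public class NodeCount {
    /** A minimal node for a chain of linked nodes. */
    static class Node {
        Object data;
        Node next;
        Node(Object data, Node next) { this.data = data; this.next = next; }
    }

    /** Recursively counts the nodes in the chain that begins at first. */
    public static int countNodes(Node first) {
        if (first == null)
            return 0;                        // base case: the chain is empty
        return 1 + countNodes(first.next);   // one node plus the rest of the chain
    }
}
```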
10. If n is a positive integer in Java, n%10 is its rightmost digit and n/10 is the integer
obtained by dropping the rightmost digit from n. Using these facts, write a recursive
method that displays an integer n in decimal. Now observe that you can display n in any
base between 2 and 9 by replacing 10 with the new base. Revise your method to
accommodate a given base.
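Using the facts stated in Exercise 10, a revised method that accommodates a given base might look like this; it returns the digits as a string rather than displaying them so the result is easy to check, and the names are our own:

```java
public class BaseDisplay {
    /** Returns the digits of a positive integer n in the given base (2 to 10).
     *  n % base is the rightmost digit, and n / base is the integer obtained
     *  by dropping that digit. */
    public static String toBase(int n, int base) {
        if (n < base)
            return Integer.toString(n);              // base case: a single digit
        return toBase(n / base, base) + (n % base);  // leading digits, then the last digit
    }
}
```

The original base-10 version is simply toBase(n, 10).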
11. Consider the method contains of the class AList, as given in Segment 5.10. Write a
private recursive method that contains can call, and revise the deﬁnition of contains
accordingly.
12. Repeat Exercise 11, but instead use the class LList and Segment 6.40.
13. Write four different recursive methods that each compute the sum of integers in an array
of integers. Model your methods after the displayArray methods given in Segments
10.15 through 10.18 and described in Question 5.
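Two of the four variants that Exercise 13 asks for are sketched here: one that peels off the first element, and one that halves the range in the manner of the displayArray method described in the exercise; the class and method names are our own:

```java
public class ArraySum {
    /** Sums array[first..last] by adding the first element to the sum of the rest. */
    public static int sumOf(int[] array, int first, int last) {
        if (first > last)
            return 0;                                  // base case: empty range
        return array[first] + sumOf(array, first + 1, last);
    }

    /** Sums array[first..last] by halving the range and summing each half. */
    public static int sumHalves(int[] array, int first, int last) {
        if (first > last)
            return 0;                                  // base case: empty range
        if (first == last)
            return array[first];                       // base case: one element
        int mid = (first + last) / 2;
        return sumHalves(array, first, mid) + sumHalves(array, mid + 1, last);
    }
}
```

The remaining two variants peel off the last element, or halve the range but process one half iteratively.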
14. Write a recursive method that returns the smallest integer in an array of integers. If you
divide the array into two pieces—halves, for example—and ﬁnd the smallest integer in
each of the two pieces, the smallest integer in the entire array will be the smaller of
these two integers. Since you will be searching a portion of the array—for example, the
elements array[first] through array[last]—it will be convenient for your method to
have three parameters: the array and two indices, first and last. You can refer to the
method displayArray in Segment 10.18 for inspiration.

PROJECTS

1. Implement the two solveTowers algorithms given in Segment 10.29. Represent the
towers by either single characters or strings. Each method should display directions that
indicate the moves that must be made. Insert counters into each method to count the
number of times it is called. These counters can be data ﬁelds of the class that contains
these methods. Compare the number of recursive calls made by each method for various
numbers of disks.
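Project 1 might begin with a sketch like the following for the efficient recursive algorithm, with the call counter kept as a data field as the project suggests; since Segment 10.29 is not reproduced here, the class name and method signature are our own assumptions:

```java
public class TowersOfHanoi {
    private int callCount = 0;  // counts the calls to solveTowers

    /** Moves n disks from startPole to endPole, using tempPole as a spare,
     *  displaying one direction per move. */
    public void solveTowers(int n, char startPole, char tempPole, char endPole) {
        callCount++;
        if (n == 1)
            System.out.println("Move disk from " + startPole + " to " + endPole);
        else {
            solveTowers(n - 1, startPole, endPole, tempPole); // move n - 1 disks aside
            System.out.println("Move disk from " + startPole + " to " + endPole);
            solveTowers(n - 1, tempPole, startPole, endPole); // put them back on top
        }
    }

    public int getCallCount() { return callCount; }

    public static void main(String[] args) {
        TowersOfHanoi towers = new TowersOfHanoi();
        towers.solveTowers(3, 'A', 'B', 'C');
        System.out.println(towers.getCallCount() + " recursive calls"); // 7 calls
    }
}
```

For this version, the number of calls for n disks is 2^n - 1, which matches the number of moves.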
2. You can get a solution to the Towers of Hanoi problem by using the following iterative
algorithm. Beginning with pole 1 and moving from pole to pole in the order pole 1, pole
3, pole 2, pole 1, and so on, make at most one move per pole according to the following
rules:
• Move the topmost disk from a pole to the next possible pole in the specified
  order. Remember that you cannot place a disk on top of a smaller one.

• If the disk that you are about to move is the smallest of all the disks and you just
  moved it to the present pole, do not move it. Instead, consider the next pole.

This algorithm should make the same moves as the recursive algorithms given in
Segment 10.29 and pictured in Figure 10-8. Thus, this iterative algorithm is O(2^n) as
well.
Implement this algorithm.
3. Write an application or applet that animates the solution to the Towers of Hanoi
problem. The problem asks you to move n disks from one pole to another, one at a time.
You move only the top disk on a pole, and you place a disk only on top of larger disks on
a pole. Since each disk has certain characteristics, such as its size, it is natural to deﬁne a
class of disks.
Design and implement an ADT tower that includes the following operations:
• Add a disk to the top of the disks on the pole
• Remove the topmost disk
Also, design and implement a class that includes a recursive method to solve the
problem.
4. Java’s class Graphics has the following method to draw a line between two given points:
   /** Task: Draws a line between the points (x1, y1) and
    *        (x2, y2). */
   public void drawLine(int x1, int y1, int x2, int y2)

The class Graphics uses a coordinate system that measures points from the top-left corner.
Write a recursive method that draws a picture of a 12-inch ruler. Mark inches, half
inches, quarter inches, and eighth inches. Mark the half inches with marks that are
smaller than those that mark the inches. Mark the quarter inches with marks that are
smaller than those that mark the half inches, and so on. Your picture need not be full
size. Hint: Draw a mark in the middle of the ruler and then draw rulers to the left and
right of this mark.
5. Imagine a row of n lights that can be turned on or off only under certain conditions, as
follows. The ﬁrst light can be turned on or off anytime. Each of the other lights can be
turned on or off only when the preceding light is on and all other lights are off. If all the
lights are on initially, how can you turn them off? For three lights numbered 1 to 3, you
can take the following steps, where 1 is a light that is on and 0 is a light that is off:
1 1 1 All on initially
0 1 1 Turn off light 1
0 1 0 Turn off light 3
1 1 0 Turn on light 1
1 0 0 Turn off light 2
0 0 0 Turn off light 1
You can solve this problem in general by using mutual recursion, as follows:
Algorithm turnOff(n)
// Turn off n lights that are initially on.
if (n == 1)
   Turn off light 1
else
{
   if (n > 2)
      turnOff(n - 2)
   Turn off light n
   if (n > 2)
      turnOn(n - 2)
   turnOff(n - 1)
}

Algorithm turnOn(n)
// Turn on n lights that are initially off.
if (n == 1)
   Turn on light 1
else
{
   turnOn(n - 1)
   if (n > 2)
      turnOff(n - 2)
   Turn on light n
   if (n > 2)
      turnOn(n - 2)
}

a. Implement these algorithms in Java so that they produce a list of directions to turn off n
lights that initially are on.
b. What recurrence relation expresses the number of times that lights are switched on or
off during the course of solving this problem for n lights?
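A minimal Java sketch of the mutually recursive algorithms above follows; the class name Lights and the choice to collect the directions in a list (rather than print them) are our own:

```java
import java.util.ArrayList;
import java.util.List;

public class Lights {
    private final List<String> directions = new ArrayList<>();

    /** Turns off n lights that are initially on; mutually recursive with turnOn. */
    public void turnOff(int n) {
        if (n == 1)
            directions.add("Turn off light 1");       // base case
        else {
            if (n > 2)
                turnOff(n - 2);
            directions.add("Turn off light " + n);
            if (n > 2)
                turnOn(n - 2);
            turnOff(n - 1);
        }
    }

    /** Turns on n lights that are initially off; mutually recursive with turnOff. */
    public void turnOn(int n) {
        if (n == 1)
            directions.add("Turn on light 1");        // base case
        else {
            turnOn(n - 1);
            if (n > 2)
                turnOff(n - 2);
            directions.add("Turn on light " + n);
            if (n > 2)
                turnOn(n - 2);
        }
    }

    public List<String> getDirections() { return directions; }
}
```

For n = 3, turnOff produces the five directions shown in the table above: lights 1, 3 off, light 1 on, then lights 2, 1 off.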
This note was uploaded on 04/29/2010 for the course CS 5503 taught by Professor Kaylor during the Spring '10 term at W. Alabama.