
3.4 Doubly Linked Lists


Advantage of Using Sentinels

Although we could implement a doubly linked list without sentinel nodes (as we
did with our singly linked list in Section 3.2), the slight extra memory devoted to the
sentinels greatly simplifies the logic of our operations. Most notably, the header and
trailer nodes never change—only the nodes between them change. Furthermore,
we can treat all insertions in a unified manner, because a new node will always be
placed between a pair of existing nodes. In similar fashion, every element that is to
be deleted is guaranteed to be stored in a node that has neighbors on each side.
For contrast, we look at our SinglyLinkedList implementation from Section 3.2.
Its addLast method required a conditional (lines 39–42 of Code Fragment 3.15) to
manage the special case of inserting into an empty list. In the general case, the new
node was linked after the existing tail. But when adding to an empty list, there is
no existing tail; instead it is necessary to reassign head to reference the new node.
The use of a sentinel node in that implementation would eliminate the special case,
as there would always be an existing node (possibly the header) before a new node.
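
To make the contrast concrete, the following sketch (ours, not one of the book's code fragments) shows addLast in both styles; it assumes the field and method names used by the list classes of this chapter.

// Without sentinels (as in SinglyLinkedList), the empty list is a special case:
public void addLast(E e) {
  Node<E> newest = new Node<>(e, null);       // node will eventually be the tail
  if (isEmpty())
    head = newest;                            // special case: the list was empty
  else
    tail.setNext(newest);                     // general case: link after existing tail
  tail = newest;                              // new node becomes the tail
  size++;
}

// With header/trailer sentinels, every insertion falls between two existing nodes:
public void addLast(E e) {
  addBetween(e, trailer.getPrev(), trailer);  // no conditional needed
}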

Inserting and Deleting with a Doubly Linked List

Every insertion into our doubly linked list representation will take place between
a pair of existing nodes, as diagrammed in Figure 3.20. For example, when a new
element is inserted at the front of the sequence, we will simply add the new node
between the header and the node that is currently after the header. (See Figure 3.21.)


[Figure omitted. Panels (a) through (c) show the list header ↔ BWI ↔ JFK ↔ SFO ↔ trailer, with a new node PVD being linked in between JFK and SFO.]
Figure 3.20: Adding an element to a doubly linked list with header and trailer sentinels: (a) before the operation; (b) after creating the new node; (c) after linking the neighbors to the new node.

www.it-ebooks.info
134 Chapter 3. Fundamental Data Structures

[Figure omitted. Panels (a) through (c) show a new node PVD being linked in between the header and BWI, yielding header ↔ PVD ↔ BWI ↔ JFK ↔ SFO ↔ trailer.]
Figure 3.21: Adding an element to the front of a sequence represented by a doubly linked list with header and trailer sentinels: (a) before the operation; (b) after creating the new node; (c) after linking the neighbors to the new node.

The deletion of a node, portrayed in Figure 3.22, proceeds in the opposite fash-
ion of an insertion. The two neighbors of the node to be deleted are linked directly
to each other, thereby bypassing the original node. As a result, that node will no
longer be considered part of the list and it can be reclaimed by the system. Because
of our use of sentinels, the same implementation can be used when deleting the first
or the last element of a sequence, because even such an element will be stored at a
node that lies between two others.

[Figure omitted. Panels (a) through (c) show the neighbors of PVD (JFK and SFO) being linked directly to each other, so that header ↔ BWI ↔ JFK ↔ PVD ↔ SFO ↔ trailer becomes header ↔ BWI ↔ JFK ↔ SFO ↔ trailer.]
Figure 3.22: Removing the element PVD from a doubly linked list: (a) before the removal; (b) after linking out the old node; (c) after the removal (and garbage collection).


3.4.1 Implementing a Doubly Linked List Class


In this section, we present a complete implementation of a DoublyLinkedList class,
supporting the following public methods:

size(): Returns the number of elements in the list.
isEmpty(): Returns true if the list is empty, and false otherwise.
first(): Returns (but does not remove) the first element in the list.
last(): Returns (but does not remove) the last element in the list.
addFirst(e): Adds a new element to the front of the list.
addLast(e): Adds a new element to the end of the list.
removeFirst(): Removes and returns the first element of the list.
removeLast(): Removes and returns the last element of the list.

If first(), last(), removeFirst(), or removeLast() are called on a list that is empty, we will return a null reference and leave the list unchanged.
Although we have seen that it is possible to add or remove an element at an
internal position of a doubly linked list, doing so requires knowledge of one or
more nodes, to identify the position at which the operation should occur. In this
chapter, we prefer to maintain encapsulation, with a private, nested Node class. In
Chapter 7, we will revisit the use of doubly linked lists, offering a more advanced
interface that supports internal insertions and deletions while maintaining encapsu-
lation.
Code Fragments 3.17 and 3.18 present the DoublyLinkedList class implemen-
tation. As we did with our SinglyLinkedList class, we use the generics framework
to accept any type of element. The nested Node class for the doubly linked list is
similar to that of the singly linked list, except with support for an additional prev
reference to the preceding node.
Our use of sentinel nodes, header and trailer, impacts the implementation in
several ways. We create and link the sentinels when constructing an empty list
(lines 25–29). We also keep in mind that the first element of a nonempty list is
stored in the node just after the header (not in the header itself), and similarly that
the last element is stored in the node just before the trailer.
The sentinels greatly ease our implementation of the various update methods.
We will provide a private method, addBetween, to handle the general case of an
insertion, and then we will rely on that utility as a straightforward method to imple-
ment both addFirst and addLast. In similar fashion, we will define a private remove
method that can be used to easily implement both removeFirst and removeLast.


 1 /** A basic doubly linked list implementation. */
 2 public class DoublyLinkedList<E> {
 3   //---------------- nested Node class ----------------
 4   private static class Node<E> {
 5     private E element;              // reference to the element stored at this node
 6     private Node<E> prev;           // reference to the previous node in the list
 7     private Node<E> next;           // reference to the subsequent node in the list
 8     public Node(E e, Node<E> p, Node<E> n) {
 9       element = e;
10       prev = p;
11       next = n;
12     }
13     public E getElement() { return element; }
14     public Node<E> getPrev() { return prev; }
15     public Node<E> getNext() { return next; }
16     public void setPrev(Node<E> p) { prev = p; }
17     public void setNext(Node<E> n) { next = n; }
18   } //----------- end of nested Node class -----------
19
20   // instance variables of the DoublyLinkedList
21   private Node<E> header;           // header sentinel
22   private Node<E> trailer;          // trailer sentinel
23   private int size = 0;             // number of elements in the list
24   /** Constructs a new empty list. */
25   public DoublyLinkedList() {
26     header = new Node<>(null, null, null);     // create header
27     trailer = new Node<>(null, header, null);  // trailer is preceded by header
28     header.setNext(trailer);                   // header is followed by trailer
29   }
30   /** Returns the number of elements in the linked list. */
31   public int size() { return size; }
32   /** Tests whether the linked list is empty. */
33   public boolean isEmpty() { return size == 0; }
34   /** Returns (but does not remove) the first element of the list. */
35   public E first() {
36     if (isEmpty()) return null;
37     return header.getNext().getElement();      // first element is beyond header
38   }
39   /** Returns (but does not remove) the last element of the list. */
40   public E last() {
41     if (isEmpty()) return null;
42     return trailer.getPrev().getElement();     // last element is before trailer
43   }
Code Fragment 3.17: Implementation of the DoublyLinkedList class. (Continues in
Code Fragment 3.18.)


44   // public update methods
45   /** Adds element e to the front of the list. */
46   public void addFirst(E e) {
47     addBetween(e, header, header.getNext());   // place just after the header
48   }
49   /** Adds element e to the end of the list. */
50   public void addLast(E e) {
51     addBetween(e, trailer.getPrev(), trailer); // place just before the trailer
52   }
53   /** Removes and returns the first element of the list. */
54   public E removeFirst() {
55     if (isEmpty()) return null;                // nothing to remove
56     return remove(header.getNext());           // first element is beyond header
57   }
58   /** Removes and returns the last element of the list. */
59   public E removeLast() {
60     if (isEmpty()) return null;                // nothing to remove
61     return remove(trailer.getPrev());          // last element is before trailer
62   }
63
64   // private update methods
65   /** Adds element e to the linked list in between the given nodes. */
66   private void addBetween(E e, Node<E> predecessor, Node<E> successor) {
67     // create and link a new node
68     Node<E> newest = new Node<>(e, predecessor, successor);
69     predecessor.setNext(newest);
70     successor.setPrev(newest);
71     size++;
72   }
73   /** Removes the given node from the list and returns its element. */
74   private E remove(Node<E> node) {
75     Node<E> predecessor = node.getPrev();
76     Node<E> successor = node.getNext();
77     predecessor.setNext(successor);
78     successor.setPrev(predecessor);
79     size--;
80     return node.getElement();
81   }
82 } //----------- end of DoublyLinkedList class -----------
Code Fragment 3.18: Implementation of the public and private update methods for
the DoublyLinkedList class. (Continued from Code Fragment 3.17.)
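
As a quick demonstration of the public interface (our own sketch, not part of the book's listing), the class can be exercised as follows.

public class DoublyLinkedListDemo {
  public static void main(String[] args) {
    DoublyLinkedList<String> list = new DoublyLinkedList<>();
    list.addFirst("JFK");                     // list: JFK
    list.addLast("SFO");                      // list: JFK, SFO
    list.addFirst("BWI");                     // list: BWI, JFK, SFO
    System.out.println(list.first());         // prints BWI
    System.out.println(list.last());          // prints SFO
    System.out.println(list.removeFirst());   // prints BWI; list: JFK, SFO
    System.out.println(list.removeLast());    // prints SFO; list: JFK
    System.out.println(list.size());          // prints 1
  }
}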


3.5 Equivalence Testing


When working with reference types, there are many different notions of what it
means for one expression to be equal to another. At the lowest level, if a and b are
reference variables, then expression a == b tests whether a and b refer to the same
object (or if both are set to the null value).
However, for many types there is a higher-level notion of two variables being
considered “equivalent” even if they do not actually refer to the same instance of
the class. For example, we typically want to consider two String instances to be
equivalent to each other if they represent the identical sequence of characters.
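
The distinction can be seen in a short sketch; we construct the strings with new so that Java's interning of string literals does not make the two variables refer to the same instance.

String a = new String("JFK");      // two distinct String objects
String b = new String("JFK");      // with identical contents
System.out.println(a == b);        // false: different instances
System.out.println(a.equals(b));   // true: same sequence of characters
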
To support a broader notion of equivalence, all object types support a method
named equals. Users of reference types should rely on the syntax a.equals(b),
unless they have a specific need to test the more narrow notion of identity. The
equals method is formally defined in the Object class, which serves as a superclass
for all reference types, but that implementation reverts to returning the value of
expression a == b. Defining a more meaningful notion of equivalence requires
knowledge about a class and its representation.
The author of each class has a responsibility to provide an implementation of
the equals method, which overrides the one inherited from Object, if there is a more
relevant definition for the equivalence of two instances. For example, Java’s String
class redefines equals to test character-for-character equivalence.
Great care must be taken when overriding the notion of equality, as the consis-
tency of Java’s libraries depends upon the equals method defining what is known
as an equivalence relation in mathematics, satisfying the following properties:
Treatment of null: For any nonnull reference variable x, the call x.equals(null)
should return false (that is, nothing equals null except null).
Reflexivity: For any nonnull reference variable x, the call x.equals(x) should
return true (that is, an object should equal itself).
Symmetry: For any nonnull reference variables x and y, the calls x.equals(y)
and y.equals(x) should return the same value.
Transitivity: For any nonnull reference variables x, y, and z, if both calls
x.equals(y) and y.equals(z) return true, then call x.equals(z)
must return true as well.
While these properties may seem intuitive, it can be challenging to properly
implement equals for some data structures, especially in an object-oriented context,
with inheritance and generics. For most of the data structures in this book, we omit
the implementation of a valid equals method (leaving it as an exercise). However,
in this section, we consider the treatment of equivalence testing for both arrays and
linked lists, including a concrete example of a proper implementation of the equals
method for our SinglyLinkedList class.


3.5.1 Equivalence Testing with Arrays


As we mentioned in Section 1.3, arrays are a reference type in Java, but not tech-
nically a class. However, the java.util.Arrays class, introduced in Section 3.1.3,
provides additional static methods that are useful when processing arrays. The fol-
lowing provides a summary of the treatment of equivalence for arrays, assuming
that variables a and b refer to array objects:

a == b: Tests if a and b refer to the same underlying array instance.
a.equals(b): Interestingly, this is identical to a == b. Arrays are not a true class type and do not override the Object.equals method.
Arrays.equals(a,b): This provides a more intuitive notion of equivalence, returning true if the arrays have the same length and all pairs of corresponding elements are “equal” to each other. More specifically, if the array elements are primitives, then it uses the standard == to compare values. If elements of the arrays are a reference type, then it makes pairwise comparisons a[k].equals(b[k]) in evaluating the equivalence.
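
For example, the following sketch contrasts the three notions for a pair of int arrays with equal contents.

int[] a = {1, 2, 3};
int[] b = {1, 2, 3};
System.out.println(a == b);                         // false: distinct array instances
System.out.println(a.equals(b));                    // false: identical to a == b
System.out.println(java.util.Arrays.equals(a, b));  // true: same length and equal entries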

For most applications, the Arrays.equals behavior captures the appropriate no-
tion of equivalence. However, there is an additional complication when using
multidimensional arrays. The fact that two-dimensional arrays in Java are really
one-dimensional arrays nested inside a common one-dimensional array raises an
interesting issue with respect to how we think about compound objects, which are
objects—like a two-dimensional array—that are made up of other objects. In par-
ticular, it brings up the question of where a compound object begins and ends.
Thus, if we have a two-dimensional array, a, and another two-dimensional ar-
ray, b, that has the same entries as a, we probably want to think that a is equal
to b. But the one-dimensional arrays that make up the rows of a and b (such as
a[0] and b[0]) are stored in different memory locations, even though they have the
same internal content. Therefore, a call to the method java.util.Arrays.equals(a,b)
will return false in this case, because it tests a[k].equals(b[k]), which invokes the
Object class’s definition of equals.
To support the more natural notion of multidimensional arrays being equal if
they have equal contents, the class provides an additional method:

Arrays.deepEquals(a,b): Identical to Arrays.equals(a,b) except when the elements of a and b are themselves arrays, in which case it calls Arrays.deepEquals(a[k],b[k]) for corresponding entries, rather than a[k].equals(b[k]).
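
A brief sketch illustrates the difference for two-dimensional arrays with equal contents.

int[][] a = { {1, 2}, {3, 4} };
int[][] b = { {1, 2}, {3, 4} };
System.out.println(java.util.Arrays.equals(a, b));      // false: rows are distinct objects
System.out.println(java.util.Arrays.deepEquals(a, b));  // true: entries compared recursively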


3.5.2 Equivalence Testing with Linked Lists


In this section, we develop an implementation of the equals method in the context
of the SinglyLinkedList class of Section 3.2.1. Using a definition very similar to the
treatment of arrays by the java.util.Arrays.equals method, we consider two lists to
be equivalent if they have the same length and contents that are element-by-element
equivalent. We can evaluate such equivalence by simultaneously traversing two
lists, verifying that x.equals(y) for each pair of corresponding elements x and y.
The implementation of the SinglyLinkedList.equals method is given in Code
Fragment 3.19. Although we are focused on comparing two singly linked lists, the
equals method must take an arbitrary Object as a parameter. We take a conservative
approach, demanding that two objects be instances of the same class to have any
possibility of equivalence. (For example, we do not consider a singly linked list to
be equivalent to a doubly linked list with the same sequence of elements.) After
ensuring, at line 2, that parameter o is nonnull, line 3 uses the getClass( ) method
supported by all objects to test whether the two instances belong to the same class.
When reaching line 4, we have ensured that the parameter was an instance of
the SinglyLinkedList class (or an appropriate subclass), and so we can safely cast
it to a SinglyLinkedList, so that we may access its instance variables size and head.
There is subtlety involving the treatment of Java’s generics framework. Although
our SinglyLinkedList class has a declared formal type parameter <E>, we cannot
detect at runtime whether the other list has a matching type. (For those interested,
look online for a discussion of erasure in Java.) So we revert to using a more classic
approach with nonparameterized type SinglyLinkedList at line 4, and nonparame-
terized Node declarations at lines 6 and 7. If the two lists have incompatible types,
this will be detected when calling the equals method on corresponding elements.

 1 public boolean equals(Object o) {
 2   if (o == null) return false;
 3   if (getClass() != o.getClass()) return false;
 4   SinglyLinkedList other = (SinglyLinkedList) o;  // use nonparameterized type
 5   if (size != other.size) return false;
 6   Node walkA = head;                              // traverse the primary list
 7   Node walkB = other.head;                        // traverse the secondary list
 8   while (walkA != null) {
 9     if (!walkA.getElement().equals(walkB.getElement())) return false;  // mismatch
10     walkA = walkA.getNext();
11     walkB = walkB.getNext();
12   }
13   return true;                                    // if we reach this, everything matched successfully
14 }
Code Fragment 3.19: Implementation of the SinglyLinkedList.equals method.


3.6 Cloning Data Structures


The beauty of object-oriented programming is that abstraction allows for a data
structure to be treated as a single object, even though the encapsulated implemen-
tation of the structure might rely on a more complex combination of many objects.
In this section, we consider what it means to make a copy of such a structure.
In a programming environment, a common expectation is that a copy of an
object has its own state and that, once made, the copy is independent of the original
(for example, so that changes to one do not directly affect the other). However,
when objects have fields that are reference variables pointing to auxiliary objects, it
is not always obvious whether a copy should have a corresponding field that refers
to the same auxiliary object, or to a new copy of that auxiliary object.
For example, if a hypothetical AddressBook class has instances that represent
an electronic address book—with contact information (such as phone numbers and
email addresses) for a person’s friends and acquaintances—how might we envision
a copy of an address book? Should an entry added to one book appear in the other?
If we change a person’s phone number in one book, would we expect that change
to be synchronized in the other?
There is no one-size-fits-all answer to questions like this. Instead, each class
in Java is responsible for defining whether its instances can be copied, and if
so, precisely how the copy is constructed. The universal Object superclass de-
fines a method named clone, which can be used to produce what is known as a
shallow copy of an object. This uses the standard assignment semantics to as-
sign the value of each field of the new object equal to the corresponding field of
the existing object that is being copied. The reason this is known as a shallow
copy is because if the field is a reference type, then an initialization of the form
duplicate.field = original.field causes the field of the new object to refer to the
same underlying instance as the field of the original object.
A shallow copy is not always appropriate for all classes, and therefore, Java
intentionally disables use of the clone( ) method by declaring it as protected, and
by having it throw a CloneNotSupportedException when called. The author of
a class must explicitly declare support for cloning by formally declaring that the
class implements the Cloneable interface, and by declaring a public version of the
clone( ) method. That public method can simply call the protected one to do the
field-by-field assignment that results in a shallow copy, if appropriate. However,
for many classes, the class may choose to implement a deeper version of cloning,
in which some of the referenced objects are themselves cloned.
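
As a minimal sketch of this convention, a hypothetical Counter class might declare support for cloning as follows; catching the exception inside the public method is one design choice, while the SinglyLinkedList.clone method of Section 3.6.2 instead propagates it.

public class Counter implements Cloneable {      // formally declare support for cloning
  private int count;
  public void increment() { count++; }
  public int getCount() { return count; }
  @Override
  public Counter clone() {                       // public version with a covariant return type
    try {
      return (Counter) super.clone();            // field-by-field shallow copy from Object
    } catch (CloneNotSupportedException e) {
      throw new AssertionError(e);               // unreachable: this class is Cloneable
    }
  }
}
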
For most of the data structures in this book, we omit the implementation of a
valid clone method (leaving it as an exercise). However, in this section, we consider
approaches for cloning both arrays and linked lists, including a concrete implemen-
tation of the clone method for the SinglyLinkedList class.


3.6.1 Cloning Arrays


Although arrays support some special syntaxes such as a[k] and a.length, it is im-
portant to remember that they are objects, and that array variables are reference
variables. This has important consequences. As a first example, consider the fol-
lowing code:
int[] data = {2, 3, 5, 7, 11, 13, 17, 19};
int[] backup;
backup = data;                     // warning: not a copy
The assignment of variable backup to data does not create any new array; it simply
creates a new alias for the same array, as portrayed in Figure 3.23.

[Figure omitted. The variables data and backup both refer to the same array containing 2, 3, 5, 7, 11, 13, 17, 19 at indices 0 through 7.]
Figure 3.23: The result of the command backup = data for int arrays.

Instead, if we want to make a copy of the array, data, and assign a reference to
the new array to variable, backup, we should write:
backup = data.clone();
The clone method, when executed on an array, initializes each cell of the new array
to the value that is stored in the corresponding cell of the original array. This results
in an independent array, as shown in Figure 3.24.

[Figure omitted. data and backup now refer to two separate arrays, each containing 2, 3, 5, 7, 11, 13, 17, 19.]
Figure 3.24: The result of the command backup = data.clone() for int arrays.

If we subsequently make an assignment such as data[4] = 23 in this configuration, the backup array is unaffected.
There are more considerations when copying an array that stores reference
types rather than primitive types. The clone( ) method produces a shallow copy
of the array, producing a new array whose cells refer to the same objects referenced
by the first array.

For example, if the variable contacts refers to an array of hypothetical Person
instances, the result of the command guests = contacts.clone( ) produces a shal-
low copy, as portrayed in Figure 3.25.

[Figure omitted. contacts and guests are two distinct arrays of length 8 whose corresponding cells refer to the same Person objects.]
Figure 3.25: A shallow copy of an array of objects, resulting from the command guests = contacts.clone().

A deep copy of the contact list can be created by iteratively cloning the indi-
vidual elements, as follows, but only if the Person class is declared as Cloneable.
Person[] guests = new Person[contacts.length];
for (int k = 0; k < contacts.length; k++)
  guests[k] = (Person) contacts[k].clone();   // returns Object type

Because a two-dimensional array is really a one-dimensional array storing other one-dimensional arrays, the same distinction between a shallow and deep copy exists. Unfortunately, the java.util.Arrays class does not provide any “deepClone” method. However, we can implement our own method by cloning the individual rows of an array, as shown in Code Fragment 3.20, for a two-dimensional array of integers.

public static int[][] deepClone(int[][] original) {
  int[][] backup = new int[original.length][];   // create top-level array of arrays
  for (int k = 0; k < original.length; k++)
    backup[k] = original[k].clone();             // copy row k
  return backup;
}
Code Fragment 3.20: A method for creating a deep copy of a two-dimensional array
of integers.


3.6.2 Cloning Linked Lists


In this section, we add support for cloning instances of the SinglyLinkedList class
from Section 3.2.1. The first step to making a class cloneable in Java is declaring
that it implements the Cloneable interface. Therefore, we adjust the first line of the
class definition to appear as follows:
public class SinglyLinkedList<E> implements Cloneable {
The remaining task is implementing a public version of the clone( ) method of
the class, which we present in Code Fragment 3.21. By convention, that method
should begin by creating a new instance using a call to super.clone( ), which in our
case invokes the method from the Object class (line 3). Because the inherited ver-
sion returns an Object, we perform a narrowing cast to type SinglyLinkedList<E>.
At this point in the execution, the other list has been created as a shallow copy
of the original. Since our list class has two fields, size and head, the following
assignments have been made:
other.size = this.size;
other.head = this.head;
While the assignment of the size variable is correct, we cannot allow the new list to
share the same head value (unless it is null). For a nonempty list to have an inde-
pendent state, it must have an entirely new chain of nodes, each storing a reference
to the corresponding element from the original list. We therefore create a new head
node at line 5 of the code, and then perform a walk through the remainder of the
original list (lines 8–13) while creating and linking new nodes for the new list.

 1 public SinglyLinkedList<E> clone() throws CloneNotSupportedException {
 2   // always use inherited Object.clone() to create the initial copy
 3   SinglyLinkedList<E> other = (SinglyLinkedList<E>) super.clone();  // safe cast
 4   if (size > 0) {                          // we need independent chain of nodes
 5     other.head = new Node<>(head.getElement(), null);
 6     Node<E> walk = head.getNext();         // walk through remainder of original list
 7     Node<E> otherTail = other.head;        // remember most recently created node
 8     while (walk != null) {                 // make a new node storing same element
 9       Node<E> newest = new Node<>(walk.getElement(), null);
10       otherTail.setNext(newest);           // link previous node to this one
11       otherTail = newest;
12       walk = walk.getNext();
13     }
14   }
15   return other;
16 }
Code Fragment 3.21: Implementation of the SinglyLinkedList.clone method.
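
A short usage sketch (ours) confirms that the copy has independent state; the try-catch is needed because the method is declared to throw CloneNotSupportedException.

SinglyLinkedList<String> original = new SinglyLinkedList<>();
original.addLast("BWI");
original.addLast("JFK");
try {
  SinglyLinkedList<String> copy = original.clone();
  copy.addFirst("PVD");                   // mutate only the copy
  System.out.println(original.size());    // prints 2: the original is unaffected
  System.out.println(copy.size());        // prints 3
} catch (CloneNotSupportedException e) {
  e.printStackTrace();                    // not expected once the class is Cloneable
}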


3.7 Exercises
Reinforcement
R-3.1 Give the next five pseudorandom numbers generated by the process described on
page 113, with a = 12, b = 5, and n = 100, and 92 as the seed for cur.
R-3.2 Write a Java method that repeatedly selects and removes a random entry from an
array until the array holds no more entries.
R-3.3 Explain the changes that would have to be made to the program of Code Frag-
ment 3.8 so that it could perform the Caesar cipher for messages that are written
in an alphabet-based language other than English, such as Greek, Russian, or
Hebrew.
R-3.4 The TicTacToe class of Code Fragments 3.9 and 3.10 has a flaw, in that it allows
a player to place a mark even after the game has already been won by someone.
Modify the class so that the putMark method throws an IllegalStateException in
that case.
R-3.5 The removeFirst method of the SinglyLinkedList class includes a special case to
reset the tail field to null when deleting the last node of a list (see lines 51 and 52
of Code Fragment 3.15). What are the consequences if we were to remove those
two lines from the code? Explain why the class would or would not work with
such a modification.
R-3.6 Give an algorithm for finding the second-to-last node in a singly linked list in
which the last node is indicated by a null next reference.
R-3.7 Consider the implementation of CircularlyLinkedList.addFirst, in Code Frag-
ment 3.16. The else body at lines 39 and 40 of that method relies on a locally
declared variable, newest. Redesign that clause to avoid use of any local vari-
able.
R-3.8 Describe a method for finding the middle node of a doubly linked list with header
and trailer sentinels by “link hopping,” and without relying on explicit knowledge
of the size of the list. In the case of an even number of nodes, report the node
slightly left of center as the “middle.” What is the running time of this method?
R-3.9 Give an implementation of the size() method for the SinglyLinkedList class,
assuming that we did not maintain size as an instance variable.
R-3.10 Give an implementation of the size( ) method for the CircularlyLinkedList class,
assuming that we did not maintain size as an instance variable.
R-3.11 Give an implementation of the size( ) method for the DoublyLinkedList class,
assuming that we did not maintain size as an instance variable.
R-3.12 Implement a rotate( ) method in the SinglyLinkedList class, which has semantics
equal to addLast(removeFirst( )), yet without creating any new node.

R-3.13 What is the difference between a shallow equality test and a deep equality test
between two Java arrays, A and B, if they are one-dimensional arrays of type int?
What if the arrays are two-dimensional arrays of type int?
R-3.14 Give three different examples of a single Java statement that assigns variable,
backup, to a new array with copies of all int entries of an existing array, original.
R-3.15 Implement the equals( ) method for the CircularlyLinkedList class, assuming that
two lists are equal if they have the same sequence of elements, with correspond-
ing elements currently at the front of the list.
R-3.16 Implement the equals( ) method for the DoublyLinkedList class.

Creativity
C-3.17 Let A be an array of size n ≥ 2 containing integers from 1 to n − 1 inclusive, one
of which is repeated. Describe an algorithm for finding the integer in A that is
repeated.
C-3.18 Let B be an array of size n ≥ 6 containing integers from 1 to n − 5 inclusive, five
of which are repeated. Describe an algorithm for finding the five integers in B
that are repeated.
C-3.19 Give Java code for performing add(e) and remove(i) methods for the Scoreboard
class, as in Code Fragments 3.3 and 3.4, except this time, don’t maintain the game
entries in order. Assume that we still need to keep n entries stored in indices 0 to
n − 1. You should be able to implement the methods without using any loops, so
that the number of steps they perform does not depend on n.
C-3.20 Give examples of values for a and b in the pseudorandom generator given on
page 113 of this chapter such that the result is not very random looking, for
n = 1000.
C-3.21 Suppose you are given an array, A, containing 100 integers that were generated
using the method r.nextInt(10), where r is an object of type java.util.Random.
Let x denote the product of the integers in A. There is a single number that x will
equal with probability at least 0.99. What is that number and what is a formula
describing the probability that x is equal to that number?
C-3.22 Write a method, shuffle(A), that rearranges the elements of array A so that every
possible ordering is equally likely. You may rely on the nextInt(n) method of
the java.util.Random class, which returns a random number between 0 and n − 1
inclusive.
C-3.23 Suppose you are designing a multiplayer game that has n ≥ 1,000 players, num-
bered 1 to n, interacting in an enchanted forest. The winner of this game is the
first player who can meet all the other players at least once (ties are allowed).
Assuming that there is a method meet(i, j), which is called each time a player i
meets a player j (with i ≠ j), describe a way to keep track of the pairs of meeting
players and who is the winner.

C-3.24 Write a Java method that takes two three-dimensional integer arrays and adds
them componentwise.
C-3.25 Describe an algorithm for concatenating two singly linked lists L and M, into a
single list L′ that contains all the nodes of L followed by all the nodes of M.
C-3.26 Give an algorithm for concatenating two doubly linked lists L and M, with header
and trailer sentinel nodes, into a single list L′ .
C-3.27 Describe in detail how to swap two nodes x and y (and not just their contents) in
a singly linked list L given references only to x and y. Repeat this exercise for the
case when L is a doubly linked list. Which algorithm takes more time?
C-3.28 Describe in detail an algorithm for reversing a singly linked list L using only a
constant amount of additional space.
C-3.29 Suppose you are given two circularly linked lists, L and M. Describe an algorithm
for telling if L and M store the same sequence of elements (but perhaps with
different starting points).
C-3.30 Given a circularly linked list L containing an even number of nodes, describe
how to split L into two circularly linked lists of half the size.
C-3.31 Our implementation of a doubly linked list relies on two sentinel nodes, header
and trailer, but a single sentinel node that guards both ends of the list should
suffice. Reimplement the DoublyLinkedList class using only one sentinel node.
C-3.32 Implement a circular version of a doubly linked list, without any sentinels, that
supports all the public behaviors of the original as well as two new update meth-
ods, rotate( ) and rotateBackward( ).
C-3.33 Solve the previous problem using inheritance, such that a DoublyLinkedList class
inherits from the existing CircularlyLinkedList, and the DoublyLinkedList.Node
nested class inherits from CircularlyLinkedList.Node.
C-3.34 Implement the clone( ) method for the CircularlyLinkedList class.
C-3.35 Implement the clone( ) method for the DoublyLinkedList class.

Projects
P-3.36 Write a Java program for a matrix class that can add and multiply arbitrary two-
dimensional arrays of integers.
P-3.37 Write a class that maintains the top ten scores for a game application, implement-
ing the add and remove methods of Section 3.1.1, but using a singly linked list
instead of an array.
P-3.38 Perform the previous project, but use a doubly linked list. Moreover, your imple-
mentation of remove(i) should make the fewest number of pointer hops to get to
the game entry at index i.
P-3.39 Write a program that can perform the Caesar cipher for English messages that
include both upper- and lowercase characters.

P-3.40 Implement a class, SubstitutionCipher, with a constructor that takes a string with
the 26 uppercase letters in an arbitrary order and uses that as the encoder for a
cipher (that is, A is mapped to the first character of the parameter, B is mapped
to the second, and so on.) You should derive the decoding map from the forward
version.
P-3.41 Redesign the CaesarCipher class as a subclass of the SubstitutionCipher from
the previous problem.
P-3.42 Design a RandomCipher class as a subclass of the SubstitutionCipher from Ex-
ercise P-3.40, so that each instance of the class relies on a random permutation
of letters for its mapping.
P-3.43 In the children’s game, Duck, Duck, Goose, a group of children sit in a circle.
One of them is elected “it” and that person walks around the outside of the circle.
The person who is “it” pats each child on the head, saying “Duck” each time,
until randomly reaching a child that the “it” person identifies as “Goose.” At this
point there is a mad scramble, as the “Goose” and the “it” person race around the
circle. Whoever returns to the Goose’s former place first gets to remain in the
circle. The loser of this race is the “it” person for the next round of play. The
game continues like this until the children get bored or an adult tells them it’s
snack time. Write software that simulates a game of Duck, Duck, Goose.

Chapter Notes
The fundamental data structures of arrays and linked lists discussed in this chapter belong
to the folklore of computer science. They were first chronicled in the computer science
literature by Knuth in his seminal book on Fundamental Algorithms [60].

Chapter 4: Algorithm Analysis

Contents

4.1 Experimental Studies
    4.1.1 Moving Beyond Experimental Analysis
4.2 The Seven Functions Used in This Book
    4.2.1 Comparing Growth Rates
4.3 Asymptotic Analysis
    4.3.1 The “Big-Oh” Notation
    4.3.2 Comparative Analysis
    4.3.3 Examples of Algorithm Analysis
4.4 Simple Justification Techniques
    4.4.1 By Example
    4.4.2 The “Contra” Attack
    4.4.3 Induction and Loop Invariants
4.5 Exercises

In a classic story, the famous mathematician Archimedes was asked to deter-
mine if a golden crown commissioned by the king was indeed pure gold, and not
part silver, as an informant had claimed. Archimedes discovered a way to perform
this analysis while stepping into a bath. He noted that water spilled out of the bath
in proportion to the amount of him that went in. Realizing the implications of this
fact, he immediately got out of the bath and ran naked through the city shouting,
“Eureka, eureka!” for he had discovered an analysis tool (displacement), which,
when combined with a simple scale, could determine if the king’s new crown was
good or not. That is, Archimedes could dip the crown and an equal-weight amount
of gold into a bowl of water to see if they both displaced the same amount. This
discovery was unfortunate for the goldsmith, however, for when Archimedes did
his analysis, the crown displaced more water than an equal-weight lump of pure
gold, indicating that the crown was not, in fact, pure gold.
In this book, we are interested in the design of “good” data structures and algo-
rithms. Simply put, a data structure is a systematic way of organizing and access-
ing data, and an algorithm is a step-by-step procedure for performing some task in
a finite amount of time. These concepts are central to computing, but to be able to
classify some data structures and algorithms as “good,” we must have precise ways
of analyzing them.
The primary analysis tool we will use in this book involves characterizing the
running times of algorithms and data structure operations, with space usage also
being of interest. Running time is a natural measure of “goodness,” since time is a
precious resource—computer solutions should run as fast as possible. In general,
the running time of an algorithm or data structure operation increases with the input
size, although it may also vary for different inputs of the same size. Also, the run-
ning time is affected by the hardware environment (e.g., the processor, clock rate,
memory, disk) and software environment (e.g., the operating system, programming
language) in which the algorithm is implemented and executed. All other factors
being equal, the running time of the same algorithm on the same input data will be
smaller if the computer has, say, a much faster processor or if the implementation
is done in a program compiled into native machine code instead of an interpreted
implementation run on a virtual machine. We begin this chapter by discussing tools
for performing experimental studies, yet also limitations to the use of experiments
as a primary means for evaluating algorithm efficiency.
Focusing on running time as a primary measure of goodness requires that we be
able to use a few mathematical tools. In spite of the possible variations that come
from different environmental factors, we would like to focus on the relationship
between the running time of an algorithm and the size of its input. We are interested
in characterizing an algorithm’s running time as a function of the input size. But
what is the proper way of measuring it? In this chapter, we “roll up our sleeves”
and develop a mathematical way of analyzing algorithms.


4.1 Experimental Studies


One way to study the efficiency of an algorithm is to implement it and experiment
by running the program on various test inputs while recording the time spent during
each execution. A simple mechanism for collecting such running times in Java is
based on use of the currentTimeMillis method of the System class. That method
reports the number of milliseconds that have passed since a benchmark time known
as the epoch (January 1, 1970 UTC). It is not that we are directly interested in the
time since the epoch; the key is that if we record the time immediately before
executing the algorithm and then immediately after, we can measure the elapsed
time of an algorithm’s execution by computing the difference of those times. A
typical way to automate this process is shown in Code Fragment 4.1.

long startTime = System.currentTimeMillis();   // record the starting time
/* (run the algorithm) */
long endTime = System.currentTimeMillis();     // record the ending time
long elapsed = endTime - startTime;            // compute the elapsed time
Code Fragment 4.1: Typical approach for timing an algorithm in Java.

Measuring elapsed time in this fashion provides a reasonable reflection of an algorithm’s efficiency; for extremely quick operations, Java provides a method, nanoTime, that measures in nanoseconds rather than milliseconds.
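
A nanosecond-resolution variant of Code Fragment 4.1 might look like the following sketch.

long startTime = System.nanoTime();            // record the starting time, in nanoseconds
/* (run the algorithm) */
long endTime = System.nanoTime();              // record the ending time
long elapsedMillis = (endTime - startTime) / 1_000_000;  // convert nanoseconds to milliseconds
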
Because we are interested in the general dependence of running time on the
size and structure of the input, we should perform independent experiments on
many different test inputs of various sizes. We can then visualize the results by
plotting the performance of each run of the algorithm as a point with x-coordinate
equal to the input size, n, and y-coordinate equal to the running time, t. Such a
visualization provides some intuition regarding the relationship between problem
size and execution time for the algorithm. This may be followed by a statistical
analysis that seeks to fit the best function of the input size to the experimental data.
To be meaningful, this analysis requires that we choose good sample inputs and test
enough of them to be able to make sound statistical claims about the algorithm’s
running time.
However, the measured times reported by both methods currentTimeMillis and
nanoTime will vary greatly from machine to machine, and may likely vary from
trial to trial, even on the same machine. This is because many processes share use
of a computer’s central processing unit (or CPU) and memory system; therefore,
the elapsed time will depend on what other processes are running on the computer
when a test is performed. While the precise running time may not be dependable,
experiments are quite useful when comparing the efficiency of two or more algo-
rithms, so long as they are gathered under similar circumstances.

As a tangible example of experimental analysis, we consider two algorithms
for constructing long strings in Java. Our goal will be to have a method, with a
calling signature such as repeat('*', 40), that produces a string composed of 40
asterisks: "****************************************".
The first algorithm we consider performs repeated string concatenation, based
on the + operator. It is implemented as method repeat1 in Code Fragment 4.2.
The second algorithm relies on Java’s StringBuilder class (see Section 1.3), and is
implemented as method repeat2 in Code Fragment 4.2.

/** Uses repeated concatenation to compose a String with n copies of character c. */
public static String repeat1(char c, int n) {
  String answer = "";
  for (int j = 0; j < n; j++)
    answer += c;
  return answer;
}

/** Uses StringBuilder to compose a String with n copies of character c. */
public static String repeat2(char c, int n) {
  StringBuilder sb = new StringBuilder();
  for (int j = 0; j < n; j++)
    sb.append(c);
  return sb.toString();
}
Code Fragment 4.2: Two algorithms for composing a string of repeated characters.

As an experiment, we used System.currentTimeMillis(), in the style of Code Fragment 4.1, to measure the efficiency of both repeat1 and repeat2 for very large strings. We executed trials to compose strings of increasing lengths to explore the relationship between the running time and the string length. The results of our experiments are shown in Table 4.1 and charted on a log-log scale in Figure 4.1.

         n    repeat1 (in ms)    repeat2 (in ms)
    50,000              2,884                  1
   100,000              7,437                  1
   200,000             39,158                  2
   400,000            170,173                  3
   800,000            690,836                  7
 1,600,000          2,874,968                 13
 3,200,000         12,809,631                 28
 6,400,000         59,594,275                 58
12,800,000        265,696,421                135

Table 4.1: Results of timing experiment on the methods from Code Fragment 4.2.

[Figure omitted. A log-log plot of running time in ms (10^0 to 10^9) versus n (10^4 to 10^7) for repeat1 and repeat2.]
Figure 4.1: Chart of the results of the timing experiment from Code Fragment 4.2, displayed on a log-log scale. The divergent slopes demonstrate an order of magnitude difference in the growth of the running times.

The most striking outcome of these experiments is how much faster the repeat2
algorithm is relative to repeat1. While repeat1 is already taking more than 3 days
to compose a string of 12.8 million characters, repeat2 is able to do the same in a
fraction of a second. We also see some interesting trends in how the running times
of the algorithms each depend upon the size of n. As the value of n is doubled, the
running time of repeat1 typically increases more than fourfold, while the running
time of repeat2 approximately doubles.

Challenges of Experimental Analysis


While experimental studies of running times are valuable, especially when fine-
tuning production-quality code, there are three major limitations to their use for
algorithm analysis:
• Experimental running times of two algorithms are difficult to directly com-
pare unless the experiments are performed in the same hardware and software
environments.
• Experiments can be done only on a limited set of test inputs; hence, they
leave out the running times of inputs not included in the experiment (and
these inputs may be important).
• An algorithm must be fully implemented in order to execute it to study its
running time experimentally.
This last requirement is the most serious drawback to the use of experimental stud-
ies. At early stages of design, when considering a choice of data structures or
algorithms, it would be foolish to spend a significant amount of time implementing
an approach that could easily be deemed inferior by a higher-level analysis.


4.1.1 Moving Beyond Experimental Analysis


Our goal is to develop an approach to analyzing the efficiency of algorithms that:
1. Allows us to evaluate the relative efficiency of any two algorithms in a way
that is independent of the hardware and software environment.
2. Is performed by studying a high-level description of the algorithm without
need for implementation.
3. Takes into account all possible inputs.

Counting Primitive Operations


To analyze the running time of an algorithm without performing experiments, we
perform an analysis directly on a high-level description of the algorithm (either in
the form of an actual code fragment, or language-independent pseudocode). We
define a set of primitive operations such as the following:
• Assigning a value to a variable
• Following an object reference
• Performing an arithmetic operation (for example, adding two numbers)
• Comparing two numbers
• Accessing a single element of an array by index
• Calling a method
• Returning from a method
Formally, a primitive operation corresponds to a low-level instruction with an exe-
cution time that is constant. Ideally, this might be the type of basic operation that is
executed by the hardware, although many of our primitive operations may be trans-
lated to a small number of instructions. Instead of trying to determine the specific
execution time of each primitive operation, we will simply count how many prim-
itive operations are executed, and use this number t as a measure of the running
time of the algorithm.
This operation count will correlate to an actual running time in a specific com-
puter, for each primitive operation corresponds to a constant number of instructions,
and there are only a fixed number of primitive operations. The implicit assumption
in this approach is that the running times of different primitive operations will be
fairly similar. Thus, the number, t, of primitive operations an algorithm performs
will be proportional to the actual running time of that algorithm.
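
For instance, consider the following method for computing the maximum of an array, annotated with one plausible way of counting primitive operations (our sketch; the formal analysis style appears in Section 4.3.3). Each iteration performs a bounded number of primitive operations, so the total count is proportional to n.

/** Returns the maximum value of a nonempty array. */
public static int arrayMax(int[] data) {
  int currentMax = data[0];                 // 2 ops: index into the array, assign
  for (int j = 1; j < data.length; j++) {   // roughly 2 ops per iteration: compare, increment
    if (data[j] > currentMax)               // 2 ops: index, compare
      currentMax = data[j];                 // 2 ops, on iterations where a new maximum is found
  }
  return currentMax;                        // 1 op: return
}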

Measuring Operations as a Function of Input Size


To capture the order of growth of an algorithm’s running time, we will associate,
with each algorithm, a function f(n) that characterizes the number of primitive
operations that are performed as a function of the input size n. Section 4.2 will in-
troduce the seven most common functions that arise, and Section 4.3 will introduce
a mathematical framework for comparing functions to each other.

Focusing on the Worst-Case Input
An algorithm may run faster on some inputs than it does on others of the same size.
Thus, we may wish to express the running time of an algorithm as the function of
the input size obtained by taking the average over all possible inputs of the same
size. Unfortunately, such an average-case analysis is typically quite challenging.
It requires us to define a probability distribution on the set of inputs, which is often
a difficult task. Figure 4.2 schematically shows how, depending on the input distri-
bution, the running time of an algorithm can be anywhere between the worst-case
time and the best-case time. For example, what if inputs are really only of types
“A” or “D”?
An average-case analysis usually requires that we calculate expected running
times based on a given input distribution, which usually involves sophisticated
probability theory. Therefore, for the remainder of this book, unless we specify
otherwise, we will characterize running times in terms of the worst case, as a func-
tion of the input size, n, of the algorithm.
Worst-case analysis is much easier than average-case analysis, as it requires
only the ability to identify the worst-case input, which is often simple. Also, this
approach typically leads to better algorithms. Making the standard of success for an
algorithm to perform well in the worst case necessarily requires that it will do well
on every input. That is, designing for the worst case leads to stronger algorithmic
“muscles,” much like a track star who always practices by running up an incline.

[Figure omitted. A bar chart of running times (1 ms to 5 ms) for input instances A through G, with horizontal markers for the worst-case, average-case, and best-case times.]
Figure 4.2: The difference between best-case and worst-case time. Each bar represents the running time of some algorithm on a different possible input.


4.2 The Seven Functions Used in This Book


In this section, we will briefly discuss the seven most important functions used
in the analysis of algorithms. We will use only these seven simple functions for
almost all the analysis we do in this book. In fact, a section that uses a function
other than one of these seven will be marked with a star (⋆) to indicate that it is
optional. In addition to these seven fundamental functions, an appendix (available
on the companion website) contains a list of other useful mathematical facts that
apply in the analysis of data structures and algorithms.

The Constant Function


The simplest function we can think of is the constant function, that is,

    f(n) = c,

for some fixed constant c, such as c = 5, c = 27, or c = 2^10. That is, for any argument n, the constant function f(n) assigns the value c. In other words, it does not matter what the value of n is; f(n) will always be equal to the constant value c.
Because we are most interested in integer functions, the most fundamental constant function is g(n) = 1, and this is the typical constant function we use in this book. Note that any other constant function, f(n) = c, can be written as a constant c times g(n). That is, f(n) = c · g(n) in this case.
As simple as it is, the constant function is useful in algorithm analysis because it
characterizes the number of steps needed to do a basic operation on a computer, like
adding two numbers, assigning a value to a variable, or comparing two numbers.

The Logarithm Function


One of the interesting and sometimes even surprising aspects of the analysis of data structures and algorithms is the ubiquitous presence of the logarithm function, f(n) = log_b n, for some constant b > 1. This function is defined as the inverse of a power, as follows:

    x = log_b n if and only if b^x = n.

The value b is known as the base of the logarithm. Note that by the above definition, for any base b > 1, we have that log_b 1 = 0.
The most common base for the logarithm function in computer science is 2, as computers store integers in binary. In fact, this base is so common that we will typically omit it from the notation when it is 2. That is, for us,

    log n = log_2 n.

We note that most handheld calculators have a button marked LOG, but this is typically for calculating the logarithm base-10, not base-two.

Computing the logarithm function exactly for any integer n involves the use of calculus, but we can use an approximation that is good enough for our purposes without calculus. We recall that the ceiling of a real number, x, is the smallest integer greater than or equal to x, denoted with ⌈x⌉. The ceiling of x can be viewed as an integer approximation of x since we have x ≤ ⌈x⌉ < x + 1. For a positive integer, n, we repeatedly divide n by b and stop when we get a number less than or equal to 1. The number of divisions performed is equal to ⌈log_b n⌉. We give below three examples of the computation of ⌈log_b n⌉ by repeated divisions:
• ⌈log_3 27⌉ = 3, because ((27/3)/3)/3 = 1;
• ⌈log_4 64⌉ = 3, because ((64/4)/4)/4 = 1;
• ⌈log_2 12⌉ = 4, because (((12/2)/2)/2)/2 = 0.75 ≤ 1.
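
This repeated-division computation translates directly into code; the following sketch (ours) divides with floating-point arithmetic so that nonintegral quotients, as in the third example above, are handled correctly.

/** Computes the ceiling of log base b of n by repeated division (assumes n >= 1 and b > 1). */
public static int ceilingLog(int b, int n) {
  int divisions = 0;
  double quotient = n;
  while (quotient > 1) {   // stop once the quotient is at most 1
    quotient /= b;
    divisions++;
  }
  return divisions;        // e.g., ceilingLog(2, 12) returns 4
}
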
The following proposition describes several important identities that involve logarithms for any base greater than 1.
Proposition 4.1 (Logarithm Rules): Given real numbers a > 0, b > 1, c > 0, and d > 1, we have:
1. log_b(ac) = log_b a + log_b c
2. log_b(a/c) = log_b a − log_b c
3. log_b(a^c) = c log_b a
4. log_b a = log_d a / log_d b
5. b^(log_d a) = a^(log_d b)
By convention, the unparenthesized notation log n^c denotes the value log(n^c). We use a notational shorthand, log^c n, to denote the quantity (log n)^c, in which the result of the logarithm is raised to a power.
The above identities can be derived from converse rules for exponentiation that we will present on page 161. We illustrate these identities with a few examples.
Example 4.2: We demonstrate below some interesting applications of the loga-
rithm rules from Proposition 4.1 (using the usual convention that the base of a
logarithm is 2 if it is omitted).
• log(2n) = log 2 + log n = 1 + log n, by rule 1
• log(n/2) = log n − log 2 = log n − 1, by rule 2
• log n^3 = 3 log n, by rule 3
• log 2^n = n log 2 = n · 1 = n, by rule 3
• log_4 n = (log n)/log 4 = (log n)/2, by rule 4
• 2^(log n) = n^(log 2) = n^1 = n, by rule 5.
As a practical matter, we note that rule 4 gives us a way to compute the base-two
logarithm on a calculator that has a base-10 logarithm button, LOG, for

log_2 n = LOG n / LOG 2.


The Linear Function


Another simple yet important function is the linear function,

f (n) = n.

That is, given an input value n, the linear function f assigns the value n itself.
This function arises in algorithm analysis any time we have to do a single basic
operation for each of n elements. For example, comparing a number x to each
element of an array of size n will require n comparisons. The linear function also
represents the best running time we can hope to achieve for any algorithm that
processes each of n objects that are not already in the computer’s memory, because
reading in the n objects already requires n operations.
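As a concrete illustration, the following minimal sketch (the method name contains
is ours) performs one comparison per element, so searching an array of size n uses
n comparisons in the worst case:

/** Sketch: linear-time search; up to n comparisons for an array of length n. */
public static boolean contains(double[] data, double x) {
    for (double value : data)   // examine each of the n elements once
        if (value == x)         // one comparison per element
            return true;
    return false;               // reached only after all n comparisons fail
}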

The N-Log-N Function


The next function we discuss in this section is the n-log-n function,

f (n) = n log n,

that is, the function that assigns to an input n the value of n times the logarithm
base-two of n. This function grows a little more rapidly than the linear function and
a lot less rapidly than the quadratic function; therefore, we would greatly prefer an
algorithm with a running time that is proportional to n log n to one with quadratic
running time. We will see several important algorithms that exhibit a running time
proportional to the n-log-n function. For example, the fastest possible algorithms
for sorting n arbitrary values require time proportional to n log n.

The Quadratic Function


Another function that appears often in algorithm analysis is the quadratic function,

f (n) = n^2.

That is, given an input value n, the function f assigns the product of n with itself
(in other words, “n squared”).
The main reason why the quadratic function appears in the analysis of algo-
rithms is that there are many algorithms that have nested loops, where the inner
loop performs a linear number of operations and the outer loop is performed a
linear number of times. Thus, in such cases, the algorithm performs n · n = n2
operations.
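The following illustrative sketch (not one of the book's code fragments) makes this
pattern concrete: an inner loop that performs n steps is itself executed n times, for
n · n = n^2 basic operations in total.

/** Sketch: a doubly nested loop performing exactly n * n basic operations. */
public static int countOrderedPairs(int n) {
    int count = 0;
    for (int i = 0; i < n; i++)        // outer loop runs n times
        for (int j = 0; j < n; j++)    // inner loop runs n times per outer pass
            count++;                   // one basic operation per (i, j) pair
    return count;                      // returns n * n
}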

Nested Loops and the Quadratic Function
The quadratic function can also arise in the context of nested loops where the first
iteration of a loop uses one operation, the second uses two operations, the third uses
three operations, and so on. That is, the number of operations is

1 + 2 + 3 + · · · + (n − 2) + (n − 1) + n.

In other words, this is the total number of operations that will be performed by the
nested loop if the number of operations performed inside the loop increases by one
with each iteration of the outer loop. This quantity also has an interesting history.
In 1787, a German schoolteacher decided to keep his 9- and 10-year-old pupils
occupied by adding up the integers from 1 to 100. But almost immediately one
of the children claimed to have the answer! The teacher was suspicious, for the
student had only the answer on his slate. But the answer, 5050, was correct and the
student, Carl Gauss, grew up to be one of the greatest mathematicians of his time.
We presume that young Gauss used the following identity.
Proposition 4.3: For any integer n ≥ 1, we have:

1 + 2 + 3 + · · · + (n − 2) + (n − 1) + n = n(n + 1)/2.
We give two “visual” justifications of Proposition 4.3 in Figure 4.3.

Figure 4.3: Visual justifications of Proposition 4.3. Both illustrations visualize the
identity in terms of the total area covered by n unit-width rectangles with heights
1, 2, . . . , n. In (a), the rectangles are shown to cover a big triangle of area n^2/2 (base
n and height n) plus n small triangles of area 1/2 each (base 1 and height 1). In
(b), which applies only when n is even, the rectangles are shown to cover a big
rectangle of base n/2 and height n + 1.

The lesson to be learned from Proposition 4.3 is that if we perform an algorithm
with nested loops such that the operations in the inner loop increase by one each
time, then the total number of operations is quadratic in the number of times, n,
we perform the outer loop. To be fair, the number of operations is n^2/2 + n/2,
and so this is just over half the number of operations performed by an algorithm
that uses n operations each time the inner loop is performed. But the order of growth is still
quadratic in n.
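A minimal sketch of this triangular pattern follows; the inner loop performs i
operations on the i-th pass of the outer loop (the method is illustrative only):

/** Sketch: nested loops whose inner work grows by one with each outer pass. */
public static int countTriangular(int n) {
    int count = 0;
    for (int i = 1; i <= n; i++)       // outer loop: i = 1, 2, ..., n
        for (int j = 1; j <= i; j++)   // inner loop performs i operations
            count++;
    return count;                      // equals 1 + 2 + ... + n = n(n+1)/2
}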

The Cubic Function and Other Polynomials


Continuing our discussion of functions that are powers of the input, we consider
the cubic function,
f (n) = n^3,

which assigns to an input value n the product of n with itself three times.
The cubic function appears less frequently in the context of algorithm analysis
than the constant, linear, and quadratic functions previously mentioned, but it does
appear from time to time.

Polynomials
The linear, quadratic and cubic functions can each be viewed as being part of a
larger class of functions, the polynomials. A polynomial function has the form,

f (n) = a_0 + a_1 n + a_2 n^2 + a_3 n^3 + · · · + a_d n^d,

where a_0, a_1, . . . , a_d are constants, called the coefficients of the polynomial, and
a_d ≠ 0. Integer d, which indicates the highest power in the polynomial, is called
the degree of the polynomial.
For example, the following functions are all polynomials:
• f (n) = 2 + 5n + n^2
• f (n) = 1 + n^3
• f (n) = 1
• f (n) = n
• f (n) = n^2
Therefore, we could argue that this book presents just four important functions used
in algorithm analysis, but we will stick to saying that there are seven, since the con-
stant, linear, and quadratic functions are too important to be lumped in with other
polynomials. Running times that are polynomials with small degree are generally
better than polynomial running times with larger degree.
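Although this section does not prescribe how to evaluate a polynomial, a degree-d
polynomial can be evaluated with only d multiplications and d additions using
Horner's method. A minimal sketch, assuming coefficient a_i is stored in a[i]:

/** Sketch: evaluates a[0] + a[1]*n + ... + a[d]*n^d via Horner's method. */
public static double evaluate(double[] a, double n) {
    double result = 0;
    for (int i = a.length - 1; i >= 0; i--)   // highest-degree coefficient first
        result = result * n + a[i];           // fold in one coefficient per step
    return result;
}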

Summations
A notation that appears again and again in the analysis of data structures and algo-
rithms is the summation, which is defined as follows:
∑_{i=a}^{b} f (i) = f (a) + f (a + 1) + f (a + 2) + · · · + f (b),

where a and b are integers and a ≤ b. Summations arise in data structure and algo-
rithm analysis because the running times of loops naturally give rise to summations.
Using a summation, we can rewrite the formula of Proposition 4.3 as
∑_{i=1}^{n} i = n(n + 1)/2.

Likewise, we can write a polynomial f (n) of degree d with coefficients a_0, . . . , a_d as

f (n) = ∑_{i=0}^{d} a_i n^i.

Thus, the summation notation gives us a shorthand way of expressing sums of in-
creasing terms that have a regular structure.

The Exponential Function


Another function used in the analysis of algorithms is the exponential function,
f (n) = b^n,
where b is a positive constant, called the base, and the argument n is the exponent.
That is, function f (n) assigns to the input argument n the value obtained by mul-
tiplying the base b by itself n times. As was the case with the logarithm function,
the most common base for the exponential function in algorithm analysis is b = 2.
For example, an integer word containing n bits can represent all the nonnegative
integers less than 2^n. If we have a loop that starts by performing one operation
and then doubles the number of operations performed with each iteration, then the
number of operations performed in the n-th iteration is 2^n.
We sometimes have other exponents besides n, however; hence, it is useful
for us to know a few handy rules for working with exponents. In particular, the
following exponent rules are quite helpful.
Proposition 4.4 (Exponent Rules): Given positive integers a, b, and c, we have
1. (b^a)^c = b^(ac)
2. b^a b^c = b^(a+c)
3. b^a / b^c = b^(a−c)

For example, we have the following:
• 256 = 16^2 = (2^4)^2 = 2^(4·2) = 2^8 = 256 (Exponent Rule 1)
• 243 = 3^5 = 3^(2+3) = 3^2 · 3^3 = 9 · 27 = 243 (Exponent Rule 2)
• 16 = 1024/64 = 2^10 / 2^6 = 2^(10−6) = 2^4 = 16 (Exponent Rule 3)
We can extend the exponential function to exponents that are fractions or real
numbers and to negative exponents, as follows. Given a positive integer k, we
define b^(1/k) to be the k-th root of b, that is, the number r such that r^k = b. For
example, 25^(1/2) = 5, since 5^2 = 25. Likewise, 27^(1/3) = 3 and 16^(1/4) = 2. This
approach allows us to define any power whose exponent can be expressed as a
fraction, for b^(a/c) = (b^a)^(1/c), by Exponent Rule 1. For example, 9^(3/2) =
(9^3)^(1/2) = 729^(1/2) = 27. Thus, b^(a/c) is really just the c-th root of the integral
exponent b^a.
We can further extend the exponential function to define b^x for any real number
x, by computing a series of numbers of the form b^(a/c) for fractions a/c that get
progressively closer and closer to x. Any real number x can be approximated
arbitrarily closely by a fraction a/c; hence, we can use the fraction a/c as the
exponent of b to get arbitrarily close to b^x. For example, the number 2^π is well
defined. Finally, given a negative exponent d, we define b^d = 1/b^(−d), which
corresponds to applying Exponent Rule 3 with a = 0 and c = −d. For example,
2^(−3) = 1/2^3 = 1/8.
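Exponent Rule 1 also underlies a practical computation: b^n can be computed with
about log n multiplications by repeated squaring, since b^(2k) = (b^k)^2. A minimal
sketch for nonnegative integer exponents (overflow is ignored here):

/** Sketch: computes b^n for n >= 0 by repeated squaring. */
public static long power(long b, int n) {
    long result = 1;
    long base = b;              // invariant: result * base^n equals the original b^n
    while (n > 0) {
        if ((n & 1) == 1)       // if the lowest bit of n is set...
            result *= base;     // ...multiply the current power into the result
        base *= base;           // square the base: b^(2k) = (b^k)^2
        n >>= 1;                // move on to the next bit of the exponent
    }
    return result;
}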

Geometric Sums
Suppose we have a loop for which each iteration takes a multiplicative factor longer
than the previous one. This loop can be analyzed using the following proposition.
Proposition 4.5: For any integer n ≥ 0 and any real number a such that a > 0 and
a ≠ 1, consider the summation

∑_{i=0}^{n} a^i = 1 + a + a^2 + · · · + a^n

(remembering that a^0 = 1 if a > 0). This summation is equal to

(a^(n+1) − 1)/(a − 1).

Summations as shown in Proposition 4.5 are called geometric summations, be-


cause each term is geometrically larger than the previous one if a > 1. For example,
everyone working in computing should know that
1 + 2 + 4 + 8 + · · · + 2^(n−1) = 2^n − 1,
for this is the largest unsigned integer that can be represented in binary notation
using n bits.
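The identity is easy to check programmatically; the sketch below sums the series
term by term, and the result can be compared with the closed form
(a^(n+1) − 1)/(a − 1). For example, geometricSum(2, 4) returns 31 = (2^5 − 1)/(2 − 1).

/** Sketch: sums the geometric series 1 + a + a^2 + ... + a^n term by term. */
public static double geometricSum(double a, int n) {
    double sum = 0;
    double term = 1;            // term holds a^i, starting at a^0 = 1
    for (int i = 0; i <= n; i++) {
        sum += term;
        term *= a;              // advance to the next power of a
    }
    return sum;                 // equals (a^(n+1) - 1) / (a - 1) when a != 1
}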


4.2.1 Comparing Growth Rates


To sum up, Table 4.2 shows, in order, each of the seven common functions used in
algorithm analysis.

constant   logarithm   linear   n-log-n   quadratic   cubic   exponential
1          log n       n        n log n   n^2         n^3     a^n
Table 4.2: Seven functions commonly used in the analysis of algorithms. We recall
that log n = log_2 n. Also, we denote with a a constant greater than 1.

Ideally, we would like data structure operations to run in times proportional


to the constant or logarithm function, and we would like our algorithms to run in
linear or n-log-n time. Algorithms with quadratic or cubic running times are less
practical, and algorithms with exponential running times are infeasible for all but
the smallest sized inputs. Plots of the seven functions are shown in Figure 4.4.

Figure 4.4: Growth rates for the seven fundamental functions used in algorithm
analysis. We use base a = 2 for the exponential function. The functions are plotted
on a log-log chart to compare the growth rates primarily as slopes. Even so, the
exponential function grows too fast to display all its values on the chart.

The Ceiling and Floor Functions


When discussing logarithms, we noted that the value is generally not an integer,
yet the running time of an algorithm is usually expressed by means of an integer
quantity, such as the number of operations performed. Thus, the analysis of an al-
gorithm may sometimes involve the use of the floor function and ceiling function,
which are defined respectively as follows:
• ⌊x⌋ = the largest integer less than or equal to x. (e.g., ⌊3.7⌋ = 3.)
• ⌈x⌉ = the smallest integer greater than or equal to x. (e.g., ⌈5.2⌉ = 6.)
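In Java, these two functions are available as Math.floor and Math.ceil, each of
which returns a double:

double f = Math.floor(3.7);   // 3.0, the largest integer <= 3.7
double c = Math.ceil(5.2);    // 6.0, the smallest integer >= 5.2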


4.3 Asymptotic Analysis


In algorithm analysis, we focus on the growth rate of the running time as a function
of the input size n, taking a “big-picture” approach. For example, it is often enough
just to know that the running time of an algorithm grows proportionally to n.
We analyze algorithms using a mathematical notation for functions that disre-
gards constant factors. Namely, we characterize the running times of algorithms
by using functions that map the size of the input, n, to values that correspond to
the main factor that determines the growth rate in terms of n. This approach re-
flects that each basic step in a pseudocode description or a high-level language
implementation may correspond to a small number of primitive operations. Thus,
we can perform an analysis of an algorithm by estimating the number of primitive
operations executed up to a constant factor, rather than getting bogged down in
language-specific or hardware-specific analysis of the exact number of operations
that execute on the computer.

4.3.1 The “Big-Oh” Notation


Let f (n) and g(n) be functions mapping positive integers to positive real numbers.
We say that f (n) is O(g(n)) if there is a real constant c > 0 and an integer constant
n0 ≥ 1 such that
f (n) ≤ c · g(n), for n ≥ n0 .
This definition is often referred to as the “big-Oh” notation, for it is sometimes pro-
nounced as “ f (n) is big-Oh of g(n).” Figure 4.5 illustrates the general definition.

Figure 4.5: Illustrating the "big-Oh" notation. The function f (n) is O(g(n)), since
f (n) ≤ c · g(n) when n ≥ n0.

Example 4.6: The function 8n + 5 is O(n).
Justification: By the big-Oh definition, we need to find a real constant c > 0 and
an integer constant n0 ≥ 1 such that 8n + 5 ≤ cn for every integer n ≥ n0 . It is easy
to see that a possible choice is c = 9 and n0 = 5. Indeed, this is one of infinitely
many choices available because there is a trade-off between c and n0 . For example,
we could rely on constants c = 13 and n0 = 1.
The big-Oh notation allows us to say that a function f (n) is “less than or equal
to” another function g(n) up to a constant factor and in the asymptotic sense as n
grows toward infinity. This ability comes from the fact that the definition uses “≤”
to compare f (n) to a g(n) times a constant, c, for the asymptotic cases when n ≥ n0 .
However, it is considered poor taste to say “ f (n) ≤ O(g(n)),” since the big-Oh
already denotes the “less-than-or-equal-to” concept. Likewise, although common,
it is not fully correct to say “ f (n) = O(g(n)),” with the usual understanding of the
“=” relation, because there is no way to make sense of the symmetric statement,
“O(g(n)) = f (n).” It is best to say, “ f (n) is O(g(n)).”
Alternatively, we can say “ f (n) is order of g(n).” For the more mathematically
inclined, it is also correct to say, “ f (n) ∈ O(g(n)),” for the big-Oh notation, techni-
cally speaking, denotes a whole collection of functions. In this book, we will stick
to presenting big-Oh statements as “ f (n) is O(g(n)).” Even with this interpretation,
there is considerable freedom in how we can use arithmetic operations with the big-
Oh notation, and with this freedom comes a certain amount of responsibility.

Some Properties of the Big-Oh Notation


The big-Oh notation allows us to ignore constant factors and lower-order terms and
focus on the main components of a function that affect its growth.
Example 4.7: 5n^4 + 3n^3 + 2n^2 + 4n + 1 is O(n^4).
Justification: Note that 5n^4 + 3n^3 + 2n^2 + 4n + 1 ≤ (5 + 3 + 2 + 4 + 1)n^4 = cn^4,
for c = 15, when n ≥ n0 = 1.
In fact, we can characterize the growth rate of any polynomial function.
Proposition 4.8: If f (n) is a polynomial of degree d, that is,
f (n) = a_0 + a_1 n + · · · + a_d n^d,
and a_d > 0, then f (n) is O(n^d).

Justification: Note that, for n ≥ 1, we have 1 ≤ n ≤ n^2 ≤ · · · ≤ n^d; hence,

a_0 + a_1 n + a_2 n^2 + · · · + a_d n^d ≤ (|a_0| + |a_1| + |a_2| + · · · + |a_d|) n^d.
We show that f (n) is O(n^d) by defining c = |a_0| + |a_1| + · · · + |a_d| and n0 = 1.

Thus, the highest-degree term in a polynomial is the term that determines the
asymptotic growth rate of that polynomial. We consider some additional properties
of the big-Oh notation in the exercises. Let us consider some further examples here,
focusing on combinations of the seven fundamental functions used in algorithm
design. We rely on the mathematical fact that log n ≤ n for n ≥ 1.
Example 4.9: 5n^2 + 3n log n + 2n + 5 is O(n^2).
Justification: 5n^2 + 3n log n + 2n + 5 ≤ (5 + 3 + 2 + 5)n^2 = cn^2, for c = 15, when
n ≥ n0 = 1.

Example 4.10: 20n^3 + 10n log n + 5 is O(n^3).

Justification: 20n^3 + 10n log n + 5 ≤ 35n^3, for n ≥ 1.

Example 4.11: 3 log n + 2 is O(log n).


Justification: 3 log n + 2 ≤ 5 log n, for n ≥ 2. Note that log n is zero for n = 1.
That is why we use n ≥ n0 = 2 in this case.

Example 4.12: 2^(n+2) is O(2^n).

Justification: 2^(n+2) = 2^n · 2^2 = 4 · 2^n; hence, we can take c = 4 and n0 = 1 in this
case.

Example 4.13: 2n + 100 log n is O(n).


Justification: 2n + 100 log n ≤ 102n, for n ≥ n0 = 1; hence, we can take c = 102
in this case.

Characterizing Functions in Simplest Terms


In general, we should use the big-Oh notation to characterize a function as closely
as possible. While it is true that the function f (n) = 4n^3 + 3n^2 is O(n^5) or even
O(n^4), it is more accurate to say that f (n) is O(n^3). Consider, by way of analogy,
a scenario where a hungry traveler driving along a long country road happens upon
a local farmer walking home from a market. If the traveler asks the farmer how
much longer he must drive before he can find some food, it may be truthful for the
farmer to say, “certainly no longer than 12 hours,” but it is much more accurate
(and helpful) for him to say, “you can find a market just a few minutes drive up this
road.” Thus, even with the big-Oh notation, we should strive as much as possible
to tell the whole truth.
It is also considered poor taste to include constant factors and lower-order terms
in the big-Oh notation. For example, it is not fashionable to say that the function
2n^2 is O(4n^2 + 6n log n), although this is completely correct. We should strive
instead to describe the function in the big-Oh in simplest terms.

The seven functions listed in Section 4.2 are the most common functions used
in conjunction with the big-Oh notation to characterize the running times and space
usage of algorithms. Indeed, we typically use the names of these functions to refer
to the running times of the algorithms they characterize. So, for example, we would
say that an algorithm that runs in worst-case time 4n^2 + n log n is a quadratic-time
algorithm, since it runs in O(n^2) time. Likewise, an algorithm running in time at
most 5n + 20 log n + 4 would be called a linear-time algorithm.

Big-Omega
Just as the big-Oh notation provides an asymptotic way of saying that a function is
“less than or equal to” another function, the following notations provide an asymp-
totic way of saying that a function grows at a rate that is “greater than or equal to”
that of another.
Let f (n) and g(n) be functions mapping positive integers to positive real num-
bers. We say that f (n) is Ω(g(n)), pronounced “ f (n) is big-Omega of g(n),” if g(n)
is O( f (n)), that is, there is a real constant c > 0 and an integer constant n0 ≥ 1 such
that
f (n) ≥ cg(n), for n ≥ n0 .

This definition allows us to say asymptotically that one function is greater than or
equal to another, up to a constant factor.

Example 4.14: 3n log n − 2n is Ω(n log n).

Justification: 3n log n − 2n = n log n + 2n(log n − 1) ≥ n log n for n ≥ 2; hence,


we can take c = 1 and n0 = 2 in this case.

Big-Theta
In addition, there is a notation that allows us to say that two functions grow at the
same rate, up to constant factors. We say that f (n) is Θ(g(n)), pronounced “ f (n)
is big-Theta of g(n),” if f (n) is O(g(n)) and f (n) is Ω(g(n)), that is, there are real
constants c′ > 0 and c′′ > 0, and an integer constant n0 ≥ 1 such that

c′ g(n) ≤ f (n) ≤ c′′ g(n), for n ≥ n0 .

Example 4.15: 3n log n + 4n + 5 log n is Θ(n log n).

Justification: 3n log n ≤ 3n log n + 4n + 5 log n ≤ (3 + 4 + 5)n log n for n ≥ 2.


4.3.2 Comparative Analysis


The big-Oh notation is widely used to characterize running times and space bounds
in terms of some parameter n, which is defined as a chosen measure of the “size”
of the problem. Suppose two algorithms solving the same problem are available:
an algorithm A, which has a running time of O(n), and an algorithm B, which has a
running time of O(n2 ). Which algorithm is better? We know that n is O(n2 ), which
implies that algorithm A is asymptotically better than algorithm B, although for a
small value of n, B may have a lower running time than A.
We can use the big-Oh notation to order classes of functions by asymptotic
growth rate. Our seven functions are ordered by increasing growth rate in the fol-
lowing sequence, such that f (n) is O(g(n)) if function f (n) precedes function g(n):

1, log n, n, n log n, n^2, n^3, 2^n.

We illustrate the growth rates of the seven functions in Table 4.3. (See also
Figure 4.4 from Section 4.2.1.)

n     log n   n     n log n   n^2       n^3           2^n
8     3       8     24        64        512           256
16    4       16    64        256       4,096         65,536
32    5       32    160       1,024     32,768        4,294,967,296
64    6       64    384       4,096     262,144       1.84 × 10^19
128   7       128   896       16,384    2,097,152     3.40 × 10^38
256   8       256   2,048     65,536    16,777,216    1.15 × 10^77
512   9       512   4,608     262,144   134,217,728   1.34 × 10^154

Table 4.3: Selected values of fundamental functions in algorithm analysis.

We further illustrate the importance of the asymptotic viewpoint in Table 4.4.


This table explores the maximum size allowed for an input instance that is pro-
cessed by an algorithm in 1 second, 1 minute, and 1 hour. It shows the importance
of good algorithm design, because an asymptotically slow algorithm is beaten in
the long run by an asymptotically faster algorithm, even if the constant factor for
the asymptotically faster algorithm is worse.

Running      Maximum Problem Size (n)
Time (µs)    1 second    1 minute    1 hour
400n         2,500       150,000     9,000,000
2n^2         707         5,477       42,426
2^n          19          25          31

Table 4.4: Maximum size of a problem that can be solved in 1 second, 1 minute,
and 1 hour, for various running times measured in microseconds.

The importance of good algorithm design goes beyond just what can be solved
effectively on a given computer, however. As shown in Table 4.5, even if we
achieve a dramatic speedup in hardware, we still cannot overcome the handicap
of an asymptotically slow algorithm. This table shows the new maximum problem
size achievable for any fixed amount of time, assuming algorithms with the given
running times are now run on a computer 256 times faster than the previous one.

Running Time   New Maximum Problem Size
400n           256m
2n^2           16m
2^n            m + 8

Table 4.5: Increase in the maximum size of a problem that can be solved in a fixed
amount of time, by using a computer that is 256 times faster than the previous one.
Each entry is a function of m, the previous maximum problem size.

Some Words of Caution


A few words of caution about asymptotic notation are in order at this point. First,
note that the use of the big-Oh and related notations can be somewhat misleading
should the constant factors they “hide” be very large. For example, while it is true
that the function 10^100 n is O(n), if this is the running time of an algorithm being
compared to one whose running time is 10n log n, we should prefer the O(n log n)-
time algorithm, even though the linear-time algorithm is asymptotically faster. This
preference is because the constant factor, 10^100, which is called "one googol," is
believed by many astronomers to be an upper bound on the number of atoms in the
observable universe. So we are unlikely to ever have a real-world problem that has
this number as its input size.
The observation above raises the issue of what constitutes a “fast” algorithm.
Generally speaking, any algorithm running in O(n log n) time (with a reasonable
constant factor) should be considered efficient. Even an O(n^2)-time function may
be fast enough in some contexts, that is, when n is small. But an algorithm whose
running time is an exponential function, e.g., O(2^n), should almost never be con-
sidered efficient.

Exponential Running Times


To see how fast the function 2n grows, consider the famous story about the inventor
of the game of chess. He asked only that his king pay him 1 grain of rice for the
first square on the board, 2 grains for the second, 4 grains for the third, 8 for the
fourth, and so on. The number of grains in the 64th square would be
2^63 = 9,223,372,036,854,775,808,
which is about nine billion billions!

If we must draw a line between efficient and inefficient algorithms, therefore,
it is natural to make this distinction be that between those algorithms running in
polynomial time and those running in exponential time. That is, make the distinc-
tion between algorithms with a running time that is O(n^c), for some constant c > 1,
and those with a running time that is O(b^n), for some constant b > 1. Like so many
notions we have discussed in this section, this too should be taken with a "grain of
salt," for an algorithm running in O(n^100) time should probably not be considered
“efficient.” Even so, the distinction between polynomial-time and exponential-time
algorithms is considered a robust measure of tractability.

4.3.3 Examples of Algorithm Analysis


Now that we have the big-Oh notation for doing algorithm analysis, let us give
some examples by characterizing the running time of some simple algorithms using
this notation. Moreover, in keeping with our earlier promise, we will illustrate
below how each of the seven functions given earlier in this chapter can be used to
characterize the running time of an example algorithm.

Constant-Time Operations
All of the primitive operations, originally described on page 154, are assumed to
run in constant time; formally, we say they run in O(1) time. We wish to empha-
size several important constant-time operations that involve arrays. Assume that
variable A is an array of n elements. The expression A.length in Java is evaluated
in constant time, because arrays are represented internally with an explicit variable
that records the length of the array. Another central behavior of arrays is that for
any valid index j, the individual element, A[j], can be accessed in constant time.
This is because an array uses a consecutive block of memory. The j-th element can
be found, not by iterating through the array one element at a time, but by validating
the index, and using it as an offset from the beginning of the array in determin-
ing the appropriate memory address. Therefore, we say that the expression A[j] is
evaluated in O(1) time for an array.

Finding the Maximum of an Array


As a classic example of an algorithm with a running time that grows proportional
to n, we consider the goal of finding the largest element of an array. A typical
strategy is to loop through elements of the array while maintaining as a variable
the largest element seen thus far. Code Fragment 4.3 presents a method named
arrayMax implementing this strategy.

1 /∗∗ Returns the maximum value of a nonempty array of numbers. ∗/
2 public static double arrayMax(double[ ] data) {
3 int n = data.length;
4 double currentMax = data[0]; // assume first entry is biggest (for now)
5 for (int j=1; j < n; j++) // consider all other entries
6 if (data[j] > currentMax) // if data[j] is biggest thus far...
7 currentMax = data[j]; // record it as the current max
8 return currentMax;
9 }
Code Fragment 4.3: A method that returns the maximum value of an array.

Using the big-Oh notation, we can write the following mathematically precise
statement on the running time of algorithm arrayMax for any computer.
Proposition 4.16: The algorithm, arrayMax, for computing the maximum ele-
ment of an array of n numbers, runs in O(n) time.
Justification: The initialization at lines 3 and 4 and the return statement at line 8
require only a constant number of primitive operations. Each iteration of the loop
also requires only a constant number of primitive operations, and the loop executes
n − 1 times. Therefore, we account for the number of primitive operations being
c′ ·(n−1)+c′′ for appropriate constants c′ and c′′ that reflect, respectively, the work
performed inside and outside the loop body. Because each primitive operation runs
in constant time, we have that the running time of algorithm arrayMax on an input
of size n is at most c′ · (n − 1) + c′′ = c′ · n + (c′′ − c′ ) ≤ c′ · n if we assume, without
loss of generality, that c′′ ≤ c′ . We conclude that the running time of algorithm
arrayMax is O(n).

Further Analysis of the Maximum-Finding Algorithm


A more interesting question about arrayMax is how many times we might update
the current “biggest” value. In the worst case, if the data is given to us in increasing
order, the biggest value is reassigned n − 1 times. But what if the input is given
to us in random order, with all orders equally likely; what would be the expected
number of times we update the biggest value in this case? To answer this question,
note that we update the current biggest in an iteration of the loop only if the current
element is bigger than all the elements that precede it. If the sequence is given to
us in random order, the probability that the j-th element is the largest of the first j
elements is 1/j (assuming uniqueness). Hence, the expected number of times we
update the biggest (including initialization) is H_n = ∑_{j=1}^{n} 1/j, which is known as
the n-th Harmonic number. It can be shown that H_n is O(log n). Therefore, the
expected number of times the biggest value is updated by arrayMax on a randomly
ordered sequence is O(log n).
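This expected behavior is easy to observe experimentally. The sketch below is our
own instrumented variant of arrayMax (the name countUpdates is illustrative); it
returns how many times the running maximum is assigned, a count whose average
on randomly shuffled distinct values is the harmonic number H_n, which grows
like ln n.

/** Sketch: counts how many times arrayMax would update its running maximum. */
public static int countUpdates(double[] data) {
    int updates = 1;                      // count the initial assignment at index 0
    double currentMax = data[0];
    for (int j = 1; j < data.length; j++)
        if (data[j] > currentMax) {
            currentMax = data[j];
            updates++;                    // data[j] exceeded everything before it
        }
    return updates;                       // expected O(log n) on random input
}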

Composing Long Strings
As our next example, we revisit the experimental study from Section 4.1, in which
we examined two different implementations for composing a long string (see Code
Fragment 4.2). Our first algorithm was based on repeated use of the string concate-
nation operator; for convenience, that method is also given in Code Fragment 4.4.

1 /∗∗ Uses repeated concatenation to compose a String with n copies of character c. ∗/


2 public static String repeat1(char c, int n) {
3 String answer = "";
4 for (int j=0; j < n; j++)
5 answer += c;
6 return answer;
7 }
Code Fragment 4.4: Composing a string using repeated concatenation.

The most important aspect of this implementation is that strings in Java are
immutable objects. Once created, an instance cannot be modified. The command,
answer += c, is shorthand for answer = (answer + c). This command does not
cause a new character to be added to the existing String instance; instead it produces
a new String with the desired sequence of characters, and then it reassigns the
variable, answer, to refer to that new string.
In terms of efficiency, the problem with this interpretation is that the creation
of a new string as a result of a concatenation requires time that is proportional
to the length of the resulting string. The first time through this loop, the result
has length 1, the second time through the loop the result has length 2, and so on,
until we reach the final string of length n. Therefore, the overall time taken by this
algorithm is proportional to
1 + 2 + · · · + n,

which we recognize as the familiar O(n^2) summation from Proposition 4.3. There-
fore, the total time complexity of the repeat1 algorithm is O(n^2).
We see this theoretical analysis reflected in the experimental results. The run-
ning time of a quadratic algorithm should theoretically quadruple if the size of the
problem doubles, as (2n)^2 = 4 · n^2. (We say "theoretically," because this does not
account for lower-order terms that are hidden by the asymptotic notation.) We see
such an approximate fourfold increase in the running time of repeat1 in Table 4.1
on page 152.
In contrast, the running times in that table for the repeat2 algorithm, which uses
Java’s StringBuilder class, demonstrate a trend of approximately doubling each
time the problem size doubles. The StringBuilder class relies on an advanced tech-
nique with a worst-case running time of O(n) for composing a string of length n;
we will later explore that technique as the focus of Section 7.2.1.
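For reference, a minimal sketch consistent with the description of repeat2 appends
one character per iteration to a single mutable StringBuilder and converts to a
String only once at the end:

/** Sketch of a linear-time version: composes the string with a StringBuilder. */
public static String repeat2(char c, int n) {
    StringBuilder sb = new StringBuilder();
    for (int j = 0; j < n; j++)
        sb.append(c);           // each append runs in amortized O(1) time
    return sb.toString();       // one final O(n) conversion
}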

Three-Way Set Disjointness

Suppose we are given three sets, A, B, and C, stored in three different integer arrays.
We will assume that no individual set contains duplicate values, but that there may
be some numbers that are in two or three of the sets. The three-way set disjointness
problem is to determine if the intersection of the three sets is empty, namely, that
there is no element x such that x ∈ A, x ∈ B, and x ∈ C. A simple Java method to
determine this property is given in Code Fragment 4.5.

1 /∗∗ Returns true if there is no element common to all three arrays. ∗/


2 public static boolean disjoint1(int[ ] groupA, int[ ] groupB, int[ ] groupC) {
3 for (int a : groupA)
4 for (int b : groupB)
5 for (int c : groupC)
6 if ((a == b) && (b == c))
7 return false; // we found a common value
8 return true; // if we reach this, sets are disjoint
9 }
Code Fragment 4.5: Algorithm disjoint1 for testing three-way set disjointness.

This simple algorithm loops through each possible triple of values from the
three sets to see if those values are equivalent. If each of the original sets has size
n, then the worst-case running time of this method is O(n^3).
We can improve upon the asymptotic performance with a simple observation.
Once inside the body of the loop over B, if selected elements a and b do not match
each other, it is a waste of time to iterate through all values of C looking for a
matching triple. An improved solution to this problem, taking advantage of this
observation, is presented in Code Fragment 4.6.

1 /∗∗ Returns true if there is no element common to all three arrays. ∗/


2 public static boolean disjoint2(int[ ] groupA, int[ ] groupB, int[ ] groupC) {
3 for (int a : groupA)
4 for (int b : groupB)
5 if (a == b) // only check C when we find match from A and B
6 for (int c : groupC)
7 if (a == c) // and thus b == c as well
8 return false; // we found a common value
9 return true; // if we reach this, sets are disjoint
10 }
Code Fragment 4.6: Algorithm disjoint2 for testing three-way set disjointness.

In the improved version, it is not simply that we save time if we get lucky. We
claim that the worst-case running time for disjoint2 is O(n^2). There are quadrat-
ically many pairs (a, b) to consider. However, if A and B are each sets of distinct

elements, there can be at most O(n) such pairs with a equal to b. Therefore, the
innermost loop, over C, executes at most n times.
To account for the overall running time, we examine the time spent executing
each line of code. The management of the for loop over A requires O(n) time. The
management of the for loop over B accounts for a total of O(n^2) time, since that
loop is executed n different times. The test a == b is evaluated O(n^2) times. The
rest of the time spent depends upon how many matching (a, b) pairs exist. As we
have noted, there are at most n such pairs; therefore, the management of the loop
over C and the commands within the body of that loop use at most O(n^2) time. By
our standard application of Proposition 4.8, the total time spent is O(n^2).

Element Uniqueness

A problem that is closely related to the three-way set disjointness problem is the
element uniqueness problem. In the former, we are given three sets and we pre-
sumed that there were no duplicates within a single set. In the element uniqueness
problem, we are given an array with n elements and asked whether all elements of
that collection are distinct from each other.
Our first solution to this problem uses a straightforward iterative algorithm.
The unique1 method, given in Code Fragment 4.7, solves the element uniqueness
problem by looping through all distinct pairs of indices j < k, checking if any of
those pairs refer to elements that are equivalent to each other. It does this using two
nested for loops, such that the first iteration of the outer loop causes n − 1 iterations
of the inner loop, the second iteration of the outer loop causes n − 2 iterations of
the inner loop, and so on. Thus, the worst-case running time of this method is
proportional to
(n − 1) + (n − 2) + · · · + 2 + 1,

which we recognize as the familiar O(n^2) summation from Proposition 4.3.

1 /∗∗ Returns true if there are no duplicate elements in the array. ∗/


2 public static boolean unique1(int[ ] data) {
3 int n = data.length;
4 for (int j=0; j < n−1; j++)
5 for (int k=j+1; k < n; k++)
6 if (data[j] == data[k])
7 return false; // found duplicate pair
8 return true; // if we reach this, elements are unique
9 }

Code Fragment 4.7: Algorithm unique1 for testing element uniqueness.

Using Sorting as a Problem-Solving Tool
An even better algorithm for the element uniqueness problem is based on using
sorting as a problem-solving tool. In this case, by sorting the array of elements, we
are guaranteed that any duplicate elements will be placed next to each other. Thus,
to determine if there are any duplicates, all we need to do is perform a single pass
over the sorted array, looking for consecutive duplicates.
A Java implementation of this algorithm is given in Code Fragment 4.8. (See
Section 3.1.3 for discussion of the java.util.Arrays class.)

1 /∗∗ Returns true if there are no duplicate elements in the array. ∗/


2 public static boolean unique2(int[ ] data) {
3 int n = data.length;
4 int[ ] temp = Arrays.copyOf(data, n); // make copy of data
5 Arrays.sort(temp); // and sort the copy
6 for (int j=0; j < n−1; j++)
7 if (temp[j] == temp[j+1]) // check neighboring entries
8 return false; // found duplicate pair
9 return true; // if we reach this, elements are unique
10 }

Code Fragment 4.8: Algorithm unique2 for testing element uniqueness.

Sorting algorithms will be the focus of Chapter 12. The best sorting algorithms
(including those used by Arrays.sort in Java) guarantee a worst-case running time of
O(n log n). Once the data is sorted, the subsequent loop runs in O(n) time, and so
the entire unique2 algorithm runs in O(n log n) time. Exercise C-4.35 explores the
use of sorting to solve the three-way set disjointness problem in O(n log n) time.

Prefix Averages
The next problem we consider is computing what are known as prefix averages of
a sequence of numbers. Namely, given a sequence x consisting of n numbers, we
want to compute a sequence a such that a_j is the average of elements x_0, . . . , x_j, for
j = 0, . . . , n − 1, that is,

a_j = (∑_{i=0}^{j} x_i) / (j + 1).
Prefix averages have many applications in economics and statistics. For example,
given the year-by-year returns of a mutual fund, ordered from recent to past, an
investor will typically want to see the fund’s average annual returns for the most
recent year, the most recent three years, the most recent five years, and so on. Like-
wise, given a stream of daily Web usage logs, a website manager may wish to track
average usage trends over various time periods. We present two implementations
for computing prefix averages, which have significantly different running times.

A Quadratic-Time Algorithm
Our first algorithm for computing prefix averages, denoted as prefixAverage1, is
shown in Code Fragment 4.9. It computes each element a j independently, using an
inner loop to compute that partial sum.

1 /∗∗ Returns an array a such that, for all j, a[j] equals the average of x[0], ..., x[j]. ∗/
2 public static double[ ] prefixAverage1(double[ ] x) {
3 int n = x.length;
4 double[ ] a = new double[n]; // filled with zeros by default
5 for (int j=0; j < n; j++) {
6 double total = 0; // begin computing x[0] + ... + x[j]
7 for (int i=0; i <= j; i++)
8 total += x[i];
9 a[j] = total / (j+1); // record the average
10 }
11 return a;
12 }
Code Fragment 4.9: Algorithm prefixAverage1.

Let us analyze the prefixAverage1 algorithm.


• The initialization of n = x.length at line 3 and the eventual return of a refer-
ence to array a at line 11 both execute in O(1) time.
• Creating and initializing the new array, a, at line 4 can be done in O(n)
time, using a constant number of primitive operations per element.
• There are two nested for loops, which are controlled, respectively, by coun-
ters j and i. The body of the outer loop, controlled by counter j, is ex-
ecuted n times, for j = 0, . . . , n − 1. Therefore, statements total = 0 and
a[j] = total / (j+1) are executed n times each. This implies that these two
statements, plus the management of counter j in the loop, contribute a num-
ber of primitive operations proportional to n, that is, O(n) time.
• The body of the inner loop, which is controlled by counter i, is executed j + 1
times, depending on the current value of the outer loop counter j. Thus, state-
ment total += x[i], in the inner loop, is executed 1 + 2 + 3 + · · · + n times.
By recalling Proposition 4.3, we know that 1 + 2 + 3 + · · · + n = n(n + 1)/2,
which implies that the statement in the inner loop contributes O(n^2) time.
A similar argument can be done for the primitive operations associated with
maintaining counter i, which also take O(n^2) time.
The running time of implementation prefixAverage1 is given by the sum of these
terms. The first term is O(1), the second and third terms are O(n), and the fourth
term is O(n^2). By a simple application of Proposition 4.8, the running time of
prefixAverage1 is O(n^2).

A Linear-Time Algorithm

An intermediate value in the computation of the prefix average is the prefix sum
x0 + x1 + · · · + x j , denoted as total in our first implementation; this allows us to
compute the prefix average a[j] = total / (j + 1). In our first algorithm, the prefix
sum is computed anew for each value of j. That contributed O( j) time for each j,
leading to the quadratic behavior.
For greater efficiency, we can maintain the current prefix sum dynamically,
effectively computing x0 + x1 + · · · + x j as total + x j , where value total is equal to
the sum x0 + x1 + · · ·+ x j−1 , when computed by the previous pass of the loop over j.
Code Fragment 4.10 provides a new implementation, denoted as prefixAverage2,
using this approach.

1 /∗∗ Returns an array a such that, for all j, a[j] equals the average of x[0], ..., x[j]. ∗/
2 public static double[ ] prefixAverage2(double[ ] x) {
3 int n = x.length;
4 double[ ] a = new double[n]; // filled with zeros by default
5 double total = 0; // compute prefix sum as x[0] + x[1] + ...
6 for (int j=0; j < n; j++) {
7 total += x[j]; // update prefix sum to include x[j]
8 a[j] = total / (j+1); // compute average based on current sum
9 }
10 return a;
11 }
Code Fragment 4.10: Algorithm prefixAverage2.

The analysis of the running time of algorithm prefixAverage2 follows:

• Initializing variables n and total uses O(1) time.


• Initializing the array a uses O(n) time.
• There is a single for loop, which is controlled by counter j. The maintenance
of that loop contributes a total of O(n) time.
• The body of the loop is executed n times, for j = 0, . . . , n − 1. Thus, state-
ments total += x[j] and a[j] = total / (j+1) are executed n times each.
Since each of these statements uses O(1) time per iteration, their overall
contribution is O(n) time.
• The eventual return of a reference to array a uses O(1) time.
The running time of algorithm prefixAverage2 is given by the sum of the five terms.
The first and last are O(1) and the remaining three are O(n). By a simple applica-
tion of Proposition 4.8, the running time of prefixAverage2 is O(n), which is much
better than the quadratic time of algorithm prefixAverage1.


4.4 Simple Justification Techniques


Sometimes, we will want to make claims about an algorithm, such as showing that
it is correct or that it runs fast. In order to rigorously make such claims, we must
use mathematical language, and in order to back up such claims, we must justify or
prove our statements. Fortunately, there are several simple ways to do this.

4.4.1 By Example
Some claims are of the generic form, “There is an element x in a set S that has
property P.” To justify such a claim, we only need to produce a particular x in S
that has property P. Likewise, some hard-to-believe claims are of the generic form,
“Every element x in a set S has property P.” To justify that such a claim is false, we
only need to produce a particular x from S that does not have property P. Such an
instance is called a counterexample.
Example 4.17: Professor Amongus claims that every number of the form 2^i − 1
is a prime, when i is an integer greater than 1. Professor Amongus is wrong.

Justification: To prove Professor Amongus is wrong, we find a counterexample.


Fortunately, we need not look too far, for 2^4 − 1 = 15 = 3 · 5.

4.4.2 The “Contra” Attack


Another set of justification techniques involves the use of the negative. The two
primary such methods are the use of the contrapositive and the contradiction. To
justify the statement “if p is true, then q is true,” we establish that “if q is not true,
then p is not true” instead. Logically, these two statements are the same, but the
latter, which is called the contrapositive of the first, may be easier to think about.
Example 4.18: Let a and b be integers. If ab is even, then a is even or b is even.

Justification: To justify this claim, consider the contrapositive, “If a is odd and
b is odd, then ab is odd." So, suppose a = 2j + 1 and b = 2k + 1, for some integers
j and k. Then ab = 4jk + 2j + 2k + 1 = 2(2jk + j + k) + 1; hence, ab is odd.
Besides showing a use of the contrapositive justification technique, the previous
example also contains an application of de Morgan’s law. This law helps us deal
with negations, for it states that the negation of a statement of the form “p or q” is
“not p and not q.” Likewise, it states that the negation of a statement of the form
“p and q” is “not p or not q.”

Contradiction

Another negative justification technique is justification by contradiction, which


also often involves using de Morgan’s law. In applying the justification by con-
tradiction technique, we establish that a statement q is true by first supposing that
q is false and then showing that this assumption leads to a contradiction (such as
2 ≠ 2 or 1 > 3). By reaching such a contradiction, we show that no consistent sit-
uation exists with q being false, so q must be true. Of course, in order to reach this
conclusion, we must be sure our situation is consistent before we assume q is false.

Example 4.19: Let a and b be integers. If ab is odd, then a is odd and b is odd.

Justification: Let ab be odd. We wish to show that a is odd and b is odd. So,
with the hope of leading to a contradiction, let us assume the opposite, namely,
suppose a is even or b is even. In fact, without loss of generality, we can assume
that a is even (since the case for b is symmetric). Then a = 2j for some integer
j. Hence, ab = (2j)b = 2(jb), that is, ab is even. But this is a contradiction: ab
cannot simultaneously be odd and even. Therefore, a is odd and b is odd.

4.4.3 Induction and Loop Invariants


Most of the claims we make about a running time or a space bound involve an inte-
ger parameter n (usually denoting an intuitive notion of the “size” of the problem).
Moreover, most of these claims are equivalent to saying some statement q(n) is true
“for all n ≥ 1.” Since this is making a claim about an infinite set of numbers, we
cannot justify this exhaustively in a direct fashion.

Induction

We can often justify claims such as those above as true, however, by using the
technique of induction. This technique amounts to showing that, for any particular
n ≥ 1, there is a finite sequence of implications that starts with something known
to be true and ultimately leads to showing that q(n) is true. Specifically, we begin a
justification by induction by showing that q(n) is true for n = 1 (and possibly some
other values n = 2, 3, . . . , k, for some constant k). Then we justify that the inductive
“step” is true for n > k, namely, we show “if q(j) is true for all j < n, then q(n) is
true.” The combination of these two pieces completes the justification by induction.

Proposition 4.20: Consider the Fibonacci function F(n), which is defined such
that F(1) = 1, F(2) = 2, and F(n) = F(n − 2) + F(n − 1) for n > 2. (See Sec-
tion 2.2.3.) We claim that F(n) < 2^n.

Justification: We will show our claim is correct by induction.


Base cases: (n ≤ 2). F(1) = 1 < 2 = 2^1 and F(2) = 2 < 4 = 2^2.
Induction step: (n > 2). Suppose our claim is true for all j < n. Since both n − 2
and n − 1 are less than n, we can apply the inductive assumption (sometimes called
the “inductive hypothesis”) to imply that

F(n) = F(n − 2) + F(n − 1) < 2^(n−2) + 2^(n−1).

Since
2^(n−2) + 2^(n−1) < 2^(n−1) + 2^(n−1) = 2 · 2^(n−1) = 2^n,
we have that F(n) < 2^n, thus showing the inductive hypothesis for n.
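As a sanity check (not a substitute for the proof), the bound is easy to verify
computationally for small n; the sketch below uses the same base cases F(1) = 1
and F(2) = 2, and the name checkFibonacciBound is ours:

/** Sketch: checks F(n) < 2^n for 1 <= n <= limit (keep limit below 63 to avoid overflow). */
public static boolean checkFibonacciBound(int limit) {
    long prev = 1, curr = 2;                 // F(1) and F(2)
    for (int n = 3; n <= limit; n++) {
        long next = prev + curr;             // F(n) = F(n-2) + F(n-1)
        if (next >= (1L << n))               // 1L << n equals 2^n for n < 63
            return false;
        prev = curr;
        curr = next;
    }
    return true;                             // the bound held for all tested n
}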
Let us do another inductive argument, this time for a fact we have seen before.
Proposition 4.21: (which is the same as Proposition 4.3)
∑_{i=1}^{n} i = n(n + 1)/2.

Justification: We will justify this equality by induction.


Base case: n = 1. Trivial, for 1 = n(n + 1)/2, if n = 1.
Induction step: n ≥ 2. Assume the inductive hypothesis is true for any j < n.
Therefore, for j = n − 1, we have

∑_{i=1}^{n−1} i = (n − 1)(n − 1 + 1)/2 = (n − 1)n/2.

Hence, we obtain

∑_{i=1}^{n} i = n + ∑_{i=1}^{n−1} i = n + (n − 1)n/2 = (2n + n^2 − n)/2 = (n^2 + n)/2 = n(n + 1)/2,

thereby proving the inductive hypothesis for n.


We may sometimes feel overwhelmed by the task of justifying something true
for all n ≥ 1. We should remember, however, the concreteness of the inductive tech-
nique. It shows that, for any particular n, there is a finite step-by-step sequence of
implications that starts with something true and leads to the truth about n. In short,
the inductive argument is a template for building a sequence of direct justifications.

Loop Invariants
The final justification technique we discuss in this section is the loop invariant. To
prove some statement L about a loop is correct, define L in terms of a series of
smaller statements L_0, L_1, . . . , L_k, where:
1. The initial claim, L_0, is true before the loop begins.
2. If L_{j−1} is true before iteration j, then L_j will be true after iteration j.
3. The final statement, L_k, implies the desired statement L to be true.
Let us give a simple example of using a loop-invariant argument to justify the
correctness of an algorithm. In particular, we use a loop invariant to justify that
the method arrayFind (see Code Fragment 4.11) finds the smallest index at which
element val occurs in array data.
1 /∗∗ Returns index j such that data[j] == val, or −1 if no such element. ∗/
2 public static int arrayFind(int[ ] data, int val) {
3 int n = data.length;
4 int j = 0;
5 while (j < n) { // val is not equal to any of the first j elements of data
6 if (data[j] == val)
7 return j; // a match was found at index j
8 j++; // continue to next index
9 // val is not equal to any of the first j elements of data
10 }
11 return −1; // if we reach this, no match found
12 }
Code Fragment 4.11: Algorithm arrayFind for finding the first index at which a
given element occurs in an array.

To show that arrayFind is correct, we inductively define a series of statements,


L_j, that lead to the correctness of our algorithm. Specifically, we claim the follow-
ing is true at the beginning of iteration j of the while loop:
L_j: val is not equal to any of the first j elements of data.
This claim is true at the beginning of the first iteration of the loop, because j is
0 and there are no elements among the first 0 in data (this kind of a trivially true
claim is said to hold vacuously). In iteration j, we compare element val to element
data[j]; if these two elements are equivalent, we return the index j, which is clearly
correct since no earlier elements equal val. If the two elements val and data[j] are
not equal, then we have found one more element not equal to val and we increment
the index j. Thus, the claim L_j will be true for this new value of j; hence, it is
true at the beginning of the next iteration. If the while loop terminates without ever
returning an index in data, then we have j = n. That is, L_n is true—there are no
elements of data equal to val. Therefore, the algorithm correctly returns −1 to
indicate that val is not in data.


4.5 Exercises
Reinforcement
R-4.1 Graph the functions 8n, 4n log n, 2n^2, n^3, and 2^n using a logarithmic scale for
the x- and y-axes; that is, if the function value f (n) is y, plot this as a point with
x-coordinate at log n and y-coordinate at log y.
R-4.2 The number of operations executed by algorithms A and B is 8n log n and 2n^2,
respectively. Determine n0 such that A is better than B for n ≥ n0 .
R-4.3 The number of operations executed by algorithms A and B is 40n^2 and 2n^3, re-
spectively. Determine n0 such that A is better than B for n ≥ n0 .
R-4.4 Give an example of a function that is plotted the same on a log-log scale as it is
on a standard scale.
R-4.5 Explain why the plot of the function n^c is a straight line with slope c on a log-log
scale.
R-4.6 What is the sum of all the even numbers from 0 to 2n, for any integer n ≥ 1?
R-4.7 Show that the following two statements are equivalent:
(a) The running time of algorithm A is always O( f (n)).
(b) In the worst case, the running time of algorithm A is O( f (n)).
R-4.8 Order the following functions by asymptotic growth rate.
4n log n + 2n    2^10    2^(log n)
3n + 100 log n   4n      2^n
n^2 + 10n        n^3     n log n
R-4.9 Give a big-Oh characterization, in terms of n, of the running time of the example1
method shown in Code Fragment 4.12.
R-4.10 Give a big-Oh characterization, in terms of n, of the running time of the example2
method shown in Code Fragment 4.12.
R-4.11 Give a big-Oh characterization, in terms of n, of the running time of the example3
method shown in Code Fragment 4.12.
R-4.12 Give a big-Oh characterization, in terms of n, of the running time of the example4
method shown in Code Fragment 4.12.
R-4.13 Give a big-Oh characterization, in terms of n, of the running time of the example5
method shown in Code Fragment 4.12.
R-4.14 Show that if d(n) is O( f (n)), then ad(n) is O( f (n)), for any constant a > 0.
R-4.15 Show that if d(n) is O( f (n)) and e(n) is O(g(n)), then the product d(n)e(n) is
O( f (n)g(n)).
R-4.16 Show that if d(n) is O( f (n)) and e(n) is O(g(n)), then d(n) + e(n) is O( f (n) +
g(n)).
