An Introduction to the USA Computing Olympiad
Java Edition
Darren Yao
2020
Contents
I Basic Techniques 1
1 The Beginning 2
1.1 Competitive Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Contests and Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Competitive Programming Practice . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 About This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Elementary Techniques 5
2.1 Input and Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
II Bronze 20
5 Simulation 21
5.1 Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2 Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6 Complete Search 24
6.1 Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.2 Generating Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
III Silver 31
8 Sorting and comparators 32
8.1 Comparators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
8.2 Sorting by Multiple Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
8.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
9 Greedy Algorithms 36
9.1 Introductory Example: Studying Algorithms . . . . . . . . . . . . . . . . . . 36
9.2 Example: The Scheduling Problem . . . . . . . . . . . . . . . . . . . . . . . 37
9.3 Failure Cases of Greedy Algorithms . . . . . . . . . . . . . . . . . . . . . . . 38
9.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
10 Graph Theory 40
10.1 Graph Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
10.2 Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
10.3 Graph Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
10.4 Graph Traversal Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
10.5 Floodfill . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
10.6 Disjoint-Set Data Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
10.7 Bipartite Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
10.8 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
11 Prefix Sums 60
11.1 Prefix Sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
11.2 Two Dimensional Prefix Sums . . . . . . . . . . . . . . . . . . . . . . . . . . 61
11.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
12 Binary Search 64
12.1 Binary Search on the Answer . . . . . . . . . . . . . . . . . . . . . . . . . . 64
12.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
12.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
IV Problem Set 80
15 Parting Shots 81
Part I
Basic Techniques
Chapter 1
The Beginning
problem at the right level of difficulty should be one of two types: either you struggle with
the problem for a while before coming up with a working solution, or you miss it slightly and
need to consult the solution for some small part. If you instantly come up with the solution,
a problem is likely too easy, and if you’re missing multiple steps, it might be too hard.
In general, especially on harder problems, I think it's fine to read the solution relatively
early on, as long as you've made several different attempts at it and you can learn effectively
from the solution.
• On a bronze problem, read the solution after 15-20 minutes of no meaningful progress,
after you’ve exhausted every idea you can think of.
• On a silver problem, read the solution after 30-40 minutes of no meaningful progress.
When you get stuck and consult the solution, you should not read the entire solution at
once, and you certainly shouldn’t look at the solution code. Instead, it’s better to read the
solution step by step until you get unstuck, at which point you should go back and finish the
problem, and implement it yourself. Reading the full solution or its code should be seen as a
last resort.
• Do not use online IDEs that display your code publicly, like the free version of ideone.
This allows other users to copy your code, and you may get flagged for cheating.
Chapter 2
Elementary Techniques
2.1 Input and Output
import java.io.*;
import java.util.*;

public class Main {
    public static void main(String[] args) throws IOException {
        // a hedged reconstruction: USACO problems of this era read from and write to
        // files named after the problem (the file names below are placeholders)
        BufferedReader r = new BufferedReader(new FileReader("problemname.in"));
        PrintWriter pw = new PrintWriter(new BufferedWriter(new FileWriter("problemname.out")));
        // read input and write output here
        r.close();
        pw.close();
    }
}
We have several important functions that are used in reading input and printing output:
Method Description
r.readLine() Reads the next line of the input.
st.nextToken() Reads the next token (up to a whitespace) and returns it as a String.
Integer.parseInt() Converts the String returned by the StringTokenizer to an int.
Double.parseDouble() Converts the String returned by the StringTokenizer to a double.
Long.parseLong() Converts the String returned by the StringTokenizer to a long.
pw.println() Prints the argument to the designated output stream and adds a newline.
pw.print() Prints the argument to the designated output stream.
Suppose we want to read in three integers that are given on a single line, such as:
1 2 3
Our code (inside the main method) will look like this:
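// a hedged reconstruction of the reading code (variable names are arbitrary)
StringTokenizer st = new StringTokenizer(r.readLine());
int a = Integer.parseInt(st.nextToken());
int b = Integer.parseInt(st.nextToken());
int c = Integer.parseInt(st.nextToken());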
r.close();
pw.close();
Now, let's suppose we wanted to read this input, which is presented on different lines, with different data types:
100000000000
SFDFSDFSDFD
3
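// a hedged reconstruction: each line is read separately, re-declaring the StringTokenizer
StringTokenizer st = new StringTokenizer(r.readLine());
long a = Long.parseLong(st.nextToken());
st = new StringTokenizer(r.readLine());
String s = st.nextToken();
st = new StringTokenizer(r.readLine());
int b = Integer.parseInt(st.nextToken());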
r.close();
pw.close();
Note how we have to re-declare the StringTokenizer every time we read in a new line.
For CodeForces, CSES, and other contests that use standard input and output, here is a
nicer template, which essentially functions as a faster Scanner:
import java.io.*;
import java.util.*;
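A minimal sketch of such a template follows; the exact class used in this book may differ in details, but it exposes the same methods described in the table below.

static class InputReader {
    BufferedReader reader;
    StringTokenizer tokenizer;
    public InputReader(InputStream stream) {
        reader = new BufferedReader(new InputStreamReader(stream));
        tokenizer = null;
    }
    String next() { // reads the next token (up to a whitespace)
        while (tokenizer == null || !tokenizer.hasMoreTokens()) {
            try {
                tokenizer = new StringTokenizer(reader.readLine());
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
        return tokenizer.nextToken();
    }
    int nextInt() { return Integer.parseInt(next()); }
    long nextLong() { return Long.parseLong(next()); }
    double nextDouble() { return Double.parseDouble(next()); }
}
// usage, inside main:
// InputReader r = new InputReader(System.in);
// PrintWriter pw = new PrintWriter(System.out);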
Here’s a brief description of the methods in our InputReader class, with an instance r,
and PrintWriter with an instance pw.
Method Description
r.next() Reads the next token (up to a whitespace) and returns a String
r.nextInt() Reads the next token (up to a whitespace) and returns as an int
r.nextLong() Reads the next token (up to a whitespace) and returns as a long
r.nextDouble() Reads the next token (up to a whitespace) and returns as a double
pw.println() Prints the argument to designated output stream and adds newline
pw.print() Prints the argument to designated output stream
Here’s an example to show how input/output works. Let’s say we want to write a program
that takes three numbers as input and prints their sum.
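A hedged sketch of such a program, using the InputReader class and a PrintWriter as described above:

public static void main(String[] args) {
    InputReader r = new InputReader(System.in);
    PrintWriter pw = new PrintWriter(System.out);
    int a = r.nextInt();
    int b = r.nextInt();
    int c = r.nextInt();
    pw.println(a + b + c); // print the sum of the three numbers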
pw.close();
}
Chapter 3
Time/Space Complexity and Algorithm Analysis
In programming contests, there is a strict limit on program runtime. This means that
in order to pass, your program needs to finish running within a certain timeframe. For
USACO, this limit is 4 seconds for Java submissions. A conservative estimate for the number
of operations the grading server can handle per second is 10^8. For example, the following
code runs in constant time, O(1):
int a = 5;
int b = 7;
int c = 4;
int d = a + b + c + 153;
The following loop, on the other hand, is O(n), because it runs n iterations:
int i = 0;
while(i < n){
// constant time code here
i++;
}
Because we ignore constant factors and lower order terms, loops that run 5n + 17 or
n + 457737 iterations are also O(n).
We can find the time complexity of multiple loops by multiplying together the time
complexities of each loop. The following example is O(nm), because the outer loop runs O(n)
iterations and the inner loop O(m).
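A sketch of such nested loops (n and m are assumed to be defined already):

for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        // constant time code here
    }
}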
If an algorithm contains multiple blocks, then its time complexity is the worst time
complexity out of any block. For example, if an algorithm has an O(n) block and an O(n2 )
block, the overall time complexity is O(n2 ).
Functions of different variables generally are not considered lower-order terms with respect
to each other, so we must include both terms. For example, if an algorithm has an O(n2 )
block and an O(nm) block, the overall time complexity would be O(n2 + nm).
• Prime factorization of an integer, or checking primality or compositeness of an integer: O(√n)
• Sorting: usually O(n log n) for default sorting algorithms (mergesort, for example
Collections.sort or Arrays.sort on objects)
• Iterating through all subsets of size k of the input elements: O(n^k). For example,
iterating through all triplets is O(n^3).
Here are conservative upper bounds on the value of n for each time complexity. You can
probably get away with more than this, but this should allow you to quickly check whether
an algorithm is viable.
n              Possible complexities
n ≤ 10         O(n!), O(n^7), O(n^6)
n ≤ 20         O(2^n · n), O(n^5)
n ≤ 80         O(n^4)
n ≤ 400        O(n^3)
n ≤ 7500       O(n^2)
n ≤ 7 · 10^4   O(n√n)
n ≤ 5 · 10^5   O(n log n)
n ≤ 5 · 10^6   O(n)
n ≤ 10^12      O(√n log n), O(√n)
n ≤ 10^18      O(log^2 n), O(log n), O(1)
Chapter 4
Built-in Data Structures
A data structure determines how data is stored (is it sorted? indexed? what operations
does it support?). Each data structure supports some operations efficiently, while other
does it support?) Each data structure supports some operations efficiently, while other
operations are either inefficient or not supported at all. This chapter introduces the data
structures in the Java standard library that are frequently used in competitive programming.
Java default Collections data structures are designed to store any type of object. However,
we usually don’t want this; instead, we want our data structures to only store one type of
data, like integers, or strings. We do this by putting the desired data type within the <>
brackets when declaring the data structure, as follows:
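For example, the declaration might look like this (the variable name is arbitrary):

ArrayList<String> list = new ArrayList<String>();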
This creates an ArrayList structure that only stores objects of type String.
For our examples below, we will primarily use the Integer data type, but note that you
can have Collections of any object type, including Strings, other Collections, or user-defined
objects.
Collections data types always contain an add method for adding an element to the
collection, and a remove method which removes and returns a certain element from the
collection. They also support the size() method, which returns the number of elements in
the data structure, and the isEmpty() method, which returns true if the data structure is
empty, and false otherwise.
To iterate through a static or dynamic array, we can use either the regular for loop or the
for-each loop.
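A short sketch of both styles, printing each element with the PrintWriter pw from Chapter 2:

ArrayList<Integer> list = new ArrayList<Integer>();
list.add(7); list.add(4); list.add(9);
for (int i = 0; i < list.size(); i++) {
    pw.println(list.get(i)); // regular for loop, by index
}
for (int x : list) {
    pw.println(x); // for-each loop
}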
Arrays.sort(arr) is used to sort a static array, and Collections.sort(list) a dynamic
array. The default sort function sorts the array in ascending order.
In array-based contest problems, we’ll use one-, two-, and three-dimensional static arrays
most of the time. However, we can also have static arrays of dynamic arrays, dynamic arrays
of static arrays, and so on. Usually, the choice between a static array and a dynamic array is
just personal preference.
Queues
A queue is a First In First Out (FIFO) data structure that supports three operations, all
in O(1) time: add, which inserts an element at the back of the queue; poll, which removes
and returns the element at the front of the queue; and peek, which retrieves the element at
the front without removing it. Java doesn't actually have a Queue class; Queue is only an
interface. The most commonly used implementation is the LinkedList, declared as follows:
Queue<Integer> q = new LinkedList<Integer>();.
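A short usage sketch:

Queue<Integer> q = new LinkedList<Integer>();
q.add(1); // [1]
q.add(3); // [1, 3]
q.peek(); // returns 1, the element at the front
q.poll(); // returns 1 and removes it; the queue is now [3]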
Deques
A deque (usually pronounced “deck”) stands for double ended queue and is a combination
of a stack and a queue, in that it supports O(1) insertions and deletions from both the
front and the back of the deque. In Java, the deque class is called ArrayDeque. The four
methods for adding and removing are addFirst, removeFirst, addLast, and removeLast.
The methods for retrieving the first and last elements without removing are peekFirst and
peekLast.
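A short usage sketch:

ArrayDeque<Integer> deque = new ArrayDeque<Integer>();
deque.addFirst(1);  // [1]
deque.addLast(2);   // [1, 2]
deque.addFirst(3);  // [3, 1, 2]
deque.peekFirst();  // returns 3 without removing it
deque.removeLast(); // removes 2; the deque is now [3, 1]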
Priority Queues
A priority queue supports the following operations: insertion of elements, deletion of
the element considered highest priority, and retrieval of the highest priority element, all in
O(log n) time according to the number of elements in the priority queue. Priority is based on
a comparator function, but by default the lowest element is at the front of the priority queue.
The priority queue is one of the most important data structures in competitive programming,
so make sure you understand how and when to use it.
PriorityQueue<Integer> pq = new PriorityQueue<Integer>();
pq.add(4); // [4]
pq.add(1); // [4, 1]
pq.add(2); // [4, 2, 1]
pq.add(3); // [4, 3, 2, 1]
System.out.println(pq.peek()); // 1
pq.poll(); // [4, 3, 2]
pq.poll(); // [4, 3]
pq.add(5); // [5, 4, 3]
Unordered Sets
The unordered set works by hashing, which is assigning a usually-unique code to every
variable/object which allows insertions, deletions, and searches in O(1) time, albeit with a
high constant factor, as hashing requires a large constant number of operations. However,
as the name implies, elements are not ordered in any meaningful way, so traversals of an
unordered set will return elements in some arbitrary order. The operations on an unordered
set are add, which adds an element to the set if not already present, remove, which deletes
an element if it exists, and contains, which checks whether the set contains that element.
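A short usage sketch with Java's HashSet:

HashSet<Integer> set = new HashSet<Integer>();
set.add(1); set.add(4); set.add(2); // {1, 2, 4} in some arbitrary order
set.contains(1); // true
set.contains(5); // false
set.remove(4);   // {1, 2}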
Ordered Sets
The second type of set data structure is the ordered or sorted set. Insertions, deletions,
and searches on the ordered set require O(log n) time, based on the number of elements
in the set. As well as those supported by the unordered set, the ordered set also allows
four additional operations: first, which returns the lowest element in the set, last, which
returns the highest element in the set, lower, which returns the greatest element strictly less
than some element, and higher, which returns the least element strictly greater than it.
The primary limitation of the ordered set is that we can’t efficiently access the kth largest
element in the set, or find the number of elements in the set greater than some arbitrary x.
These operations can be handled using a data structure called an order statistic tree, but
that is beyond the scope of this book.
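A short usage sketch with Java's TreeSet:

TreeSet<Integer> set = new TreeSet<Integer>();
set.add(1); set.add(4); set.add(7); // {1, 4, 7}
set.first();  // 1, the lowest element
set.last();   // 7, the highest element
set.lower(7); // 4, the greatest element strictly less than 7
set.higher(4); // 7, the least element strictly greater than 4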
Maps
A map is a set of ordered pairs, each containing a key and a value. In a map, all keys
are required to be unique, but values can be repeated. Maps have three primary methods:
one to add a specified key-value pairing, one to retrieve the value for a given key, and one
to remove a key-value pairing from the map. Like sets, maps can be unordered (HashMap
in Java) or ordered (TreeMap in Java). In an unordered map, hashing is used to support
O(1) operations. In an ordered map, the entries are sorted in order of key. Operations are
O(log n), but accessing or removing the next key higher or lower than some input k is also
supported.
Unordered Maps
In the unordered map, the put(key, value) method assigns a value to a key and places
the key and value pair into the map. The get(key) method returns the value associated with
the key. The containsKey(key) method checks whether a key exists in the map. Lastly,
remove(key) removes the map entry associated with the specified key. All of these operations
are O(1), but again, due to the hashing, this has a high constant factor.
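A short usage sketch with Java's HashMap:

HashMap<String, Integer> map = new HashMap<String, Integer>();
map.put("Bessie", 3);      // {Bessie -> 3}
map.put("Elsie", 7);       // {Bessie -> 3, Elsie -> 7}
map.get("Bessie");         // 3
map.containsKey("Daisy");  // false
map.remove("Elsie");       // {Bessie -> 3}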
Ordered Maps
The ordered map supports all of the operations that an unordered map supports, and
additionally supports firstKey/firstEntry and lastKey/lastEntry, returning the lowest
key/entry and the highest key/entry, as well as higherKey/higherEntry and lowerKey/
lowerEntry, returning the lowest key/entry strictly higher than the specified key, or the
highest key/entry strictly lower than the specified key.
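A short usage sketch with Java's TreeMap:

TreeMap<Integer, String> map = new TreeMap<Integer, String>();
map.put(3, "c"); map.put(1, "a"); map.put(7, "g"); // keys stored in sorted order
map.firstKey();   // 1
map.lastEntry();  // the entry 7 -> "g"
map.higherKey(3); // 7, the lowest key strictly greater than 3
map.lowerKey(3);  // 1, the highest key strictly lower than 3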
A note on unordered sets and maps: In USACO contests, they’re generally fine, but in
CodeForces contests, you should always use sorted sets and maps. This is because the built-in
hashing algorithm is vulnerable to pathological data sets causing abnormally slow runtimes,
in turn causing failures on some test cases.
Multisets
Lastly, there is the multiset, which is essentially a sorted set that allows multiple copies
of the same element. While there is no Multiset in Java, we can implement one using the
TreeMap from values to their respective frequencies. We declare the TreeMap implementation
globally so that we can write functions for adding and removing elements from it.
The first, last, higher, and lower operations still function as intended; just use firstKey,
lastKey, higherKey, and lowerKey respectively.
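A hedged sketch of such a multiset, storing each value's frequency in a global TreeMap:

static TreeMap<Integer, Integer> multiset = new TreeMap<Integer, Integer>();
static void add(int x) {
    multiset.put(x, multiset.getOrDefault(x, 0) + 1);
}
static void remove(int x) { // assumes x is currently in the multiset
    int count = multiset.get(x);
    if (count == 1) multiset.remove(x);
    else multiset.put(x, count - 1);
}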
4.4 Problems
Again, note that CSES’s grader is very slow, so don’t worry if you encounter a Time
Limit Exceeded verdict; as long as you pass the majority of test cases within the time limit,
you can consider the problem solved, and move on.
Part II
Bronze
Chapter 5
Simulation
In many problems, we can simply simulate what we’re told to do by the problem statement.
Since there’s no formal algorithm involved, the intent of the problem is to assess competence
with one’s programming language of choice and knowledge of built-in data structures. At
least in USACO Bronze, when a problem statement says to find the end result of some
process, or to find when something occurs, it’s usually sufficient to simulate the process.
5.1 Example 1
Alice and Bob are standing on a 2D plane. Alice starts at the point (0, 0), and Bob
starts at the point (R, S) (1 ≤ R, S ≤ 1000). Every second, Alice moves M units to the
right and N units up. Every second, Bob moves P units to the left and Q units down
(1 ≤ M, N, P, Q ≤ 10). Determine if Alice and Bob will ever meet (be at the same point at
the same time), and if so, when.
INPUT FORMAT:
The first line of the input contains R and S.
The second line of the input contains M , N , P , and Q.
OUTPUT FORMAT:
Please output a single integer containing the number of seconds after the start at which Alice
and Bob meet. If they never meet, please output −1.
Solution
We can simulate the process. After inputting the values of R, S, M , N , P , and Q, we can
keep track of Alice’s and Bob’s x- and y-coordinates. To start, we initialize variables for their
respective positions. Alice’s coordinates are initially (0, 0), and Bob’s coordinates are (R, S)
respectively. Every second, we increase Alice’s x-coordinate by M and her y-coordinate by
N , and decrease Bob’s x-coordinate by P and his y-coordinate by Q.
Now, when do we stop? First, if Alice and Bob ever have the same coordinates, then we
are done. Also, since Alice strictly moves up and to the right and Bob strictly moves down
and to the left, if Alice’s x- or y-coordinates are ever greater than Bob’s, then it is impossible
for them to meet. Example code will be displayed below (Here, as in other examples, input
processing will be omitted):
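A hedged sketch of the simulation (R, S, M, N, P, and Q are assumed to be read in already, and pw is the output writer from Chapter 2):

int aliceX = 0, aliceY = 0;
int bobX = R, bobY = S;
int answer = -1;
for (int t = 1; ; t++) {
    aliceX += M; aliceY += N; // Alice moves right and up
    bobX -= P; bobY -= Q;     // Bob moves left and down
    if (aliceX == bobX && aliceY == bobY) {
        answer = t;
        break;
    }
    if (aliceX > bobX || aliceY > bobY) {
        break; // Alice has passed Bob, so they can never meet
    }
}
pw.println(answer);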
5.2 Example 2
There are N buckets (5 ≤ N ≤ 10^5), each with a certain capacity Ci (1 ≤ Ci ≤ 100). One
day, after a rainstorm, each bucket is filled with Ai units of water (1 ≤ Ai ≤ Ci). Charlie
then performs the following process: he pours bucket 1 into bucket 2, then bucket 2 into
bucket 3, and so on, up until pouring bucket N − 1 into bucket N. When Charlie pours
bucket B into bucket B + 1, he pours as much as possible until bucket B is empty or bucket
B + 1 is full. Find out how much water is in each bucket once Charlie is done pouring.
INPUT FORMAT:
The first line of the input contains N .
The second line of the input contains the capacities of the buckets, C1 , C2 , . . . , Cn .
The third line of the input contains the amount of water in each bucket A1 , A2 , . . . , An .
OUTPUT FORMAT:
Please print one line of output, containing N space-separated integers: the final amount of
water in each bucket once Charlie is done pouring.
Solution:
Once again, we can simulate the process of pouring one bucket into the next. The amount of
water poured from bucket B to bucket B + 1 is the smaller of the amount of water in bucket
B (after all previous operations have been completed) and the remaining space in bucket
B + 1, which is C(B+1) − A(B+1). We can just handle all of these operations in order, using an
array C to store the maximum capacities of each bucket, and an array A to store the current
water level in each bucket, which we update during the process. Example code is below (note
that arrays are zero-indexed, so the indices of our buckets go from 0 to N − 1 rather than
from 1 to N).
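A hedged sketch of the pouring simulation (arrays C and A of length N are assumed to be read in already):

for (int i = 0; i < N - 1; i++) {
    int amountPoured = Math.min(A[i], C[i + 1] - A[i + 1]); // limited by water in i and space in i+1
    A[i] -= amountPoured;
    A[i + 1] += amountPoured;
}
StringBuilder sb = new StringBuilder();
for (int i = 0; i < N; i++) {
    sb.append(A[i]);
    if (i < N - 1) sb.append(' ');
}
pw.println(sb.toString());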
5.3 Problems
1. USACO December 2018 Bronze Problem 1: Mixing Milk
http://www.usaco.org/index.php?page=viewproblem2&cpid=855
4. USACO February 2017 Bronze Problem 3: Why Did the Cow Cross the Road III
http://www.usaco.org/index.php?page=viewproblem2&cpid=713
Chapter 6
Complete Search
In many problems (especially in Bronze), it's sufficient to check all possible cases in
the solution space, whether that means all elements, all pairs of elements, all subsets, or all
permutations. Unsurprisingly, this is called complete search (or brute force), because it
completely searches the entire solution space.
6.1 Example 1
You are given N (3 ≤ N ≤ 5000) integer points on the coordinate plane. Find the square
of the maximum Euclidean distance (i.e., the length of the straight line) between any two of the
points.
INPUT FORMAT:
The first line contains an integer N.
The second line contains N integers, the x-coordinates of the points: x1, x2, . . . , xn (−1000 ≤ xi ≤ 1000).
The third line contains N integers, the y-coordinates of the points: y1, y2, . . . , yn (−1000 ≤ yi ≤ 1000).
OUTPUT FORMAT:
Print one integer, the square of the maximum Euclidean distance between any two of the
points.
Solution:
We can brute-force every pair of points and find the square of the distance between them,
by squaring the formula for Euclidean distance: distance² = (x2 − x1)² + (y2 − y1)². Thus,
we store the coordinates in arrays X[] and Y[], such that X[i] and Y[i] are the x- and
y-coordinates of the ith point, respectively. Then, we iterate through all possible pairs of
points, using a variable max to store the maximum square of distance between any pair seen
so far, and if the square of the distance between a pair is greater than our current maximum,
we update the maximum.
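A hedged sketch of this brute force (the arrays X and Y of length N are assumed to be filled already):

long max = 0; // maximum squared distance seen so far
for (int i = 0; i < N; i++) {
    for (int j = i + 1; j < N; j++) {
        long dx = X[i] - X[j];
        long dy = Y[i] - Y[j];
        max = Math.max(max, dx * dx + dy * dy);
    }
}
pw.println(max);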
A couple notes: first, since we’re iterating through all pairs of points, we start the j loop
from j = i + 1 so that point i and point j are never the same point. Furthermore, it makes
it so that each pair is only counted once. In this problem, it doesn’t matter whether we
double-count pairs or whether we allow i and j to be the same point, but in other problems
where we’re counting something rather than looking at the maximum, it’s important to be
careful that we don’t overcount. Secondly, the problem asks for the square of the maximum
Euclidean distance between any two points. Some students may be tempted to maintain the
maximum distance in a variable, and then square it at the end when outputting. However,
the problem here is that while the square of the distance between two integer points is always
an integer, the distance itself isn’t guaranteed to be an integer. Thus, we’ll end up shoving a
non-integer value into an integer variable, which truncates the decimal part. Using a floating
point variable isn’t likely to work either, due to precision errors (use of floating point decimals
should generally be avoided when possible).
[1, 2, 3], [2, 1, 3], [3, 1, 2], [1, 3, 2], [2, 3, 1], [3, 2, 1]
Algorithm: Iterate over all permutations of a given input array, performing some
action on each permutation
Function generatePermutations
Input : An array arr, and its length k
if k = 1 then
process the current permutation
else
generatePermutations (arr, k − 1)
for i ← 0 to k − 2 do
if k is even then
swap indices i and k − 1 of arr
else
swap indices 0 and k − 1 of arr
end
generatePermutations (arr, k − 1)
end
end
Code for iterating over all permutations is as follows:
static void swap(int[] arr, int i, int j) { int t = arr[i]; arr[i] = arr[j]; arr[j] = t; }
static void generate(int[] arr, int k) {
    if (k == 1) {
        // process the current permutation here
    } else {
        generate(arr, k-1);
        for (int i = 0; i < k-1; i++) {
            if (k % 2 == 0) {
                swap(arr, i, k-1); // swap indices i and k-1 of arr
            } else {
                swap(arr, 0, k-1); // swap indices 0 and k-1 of arr
            }
            generate(arr, k-1);
        }
    }
}
6.3 Problems
1. USACO February 2020 Bronze Problem 1: Triangles
http://usaco.org/index.php?page=viewproblem2&cpid=1011
2. USACO January 2020 Bronze Problem 2: Photoshoot
http://www.usaco.org/index.php?page=viewproblem2&cpid=988
(Hint: Figure out what exactly you’re complete searching)
3. USACO December 2019 Bronze Problem 1: Cow Gymnastics
http://usaco.org/index.php?page=viewproblem2&cpid=963
(Hint: Brute force over all possible pairs)
4. USACO February 2016 Bronze Problem 1: Milk Pails
http://usaco.org/index.php?page=viewproblem2&cpid=615
5. USACO January 2018 Bronze Problem 2: Lifeguards
http://usaco.org/index.php?page=viewproblem2&cpid=784
(Hint: Try removing each lifeguard one at a time).
6. USACO December 2019 Bronze Problem 2: Where Am I?
http://usaco.org/index.php?page=viewproblem2&cpid=964
(Hint: Brute force over all possible substrings)
7. (Permutations) USACO December 2019 Bronze Problem 3: Livestock Lineup
http://usaco.org/index.php?page=viewproblem2&cpid=965
8. (Permutations) CSES Problem Set Task 1624: Chessboard and Queens
https://cses.fi/problemset/task/1624
9. USACO US Open 2016 Bronze Problem 3: Field Reduction
http://www.usaco.org/index.php?page=viewproblem2&cpid=641
(Hint: For this problem, you can’t do a full complete search; you have to do a reduced
search)
10. USACO December 2018 Bronze Problem 3: Back and Forth
http://www.usaco.org/index.php?page=viewproblem2&cpid=857
(This problem is relatively hard)
Chapter 7
Additional Bronze Topics
7.2 Ad-hoc
Ad-hoc problems are problems that don’t fall into any standard algorithmic category
with well known solutions. They are usually unique problems intended to be solved with
unconventional techniques. In ad-hoc problems, it’s helpful to look at the constraints given in
the problem and devise potential time complexities of solutions; this, combined with details
in the problem statement itself, may give an outline of the solution.
Unfortunately, since ad-hoc problems don’t have solutions consisting of well known
algorithms, we can’t systematically teach you how to do them. The best way of learning how
to do ad-hoc is to practice. Of course, the problem solving intuition from math contests (if
you did them) is quite helpful, but otherwise, you can develop this intuition from practicing
ad-hoc problems.
While solving these problems, make sure to utilize what you’ve learned about the built-in
data structures and algorithmic complexity analysis, from chapters 2, 3, and 4. Since ad-hoc
problems comprise a significant portion of bronze problems, we’ve included a large selection
of them below for your practice.
7.3 Problems
Square and Rectangle Geometry
1. USACO December 2017 Bronze Problem 1: Blocked Billboard
http://usaco.org/index.php?page=viewproblem2&cpid=759
Ad-hoc problems
5. USACO January 2016 Bronze Problem 1: Promotion Counting
http://usaco.org/index.php?page=viewproblem2&cpid=591
Part III
Silver
Chapter 8
Sorting and Comparators
8.1 Comparators
Java has built-in functions for sorting: Arrays.sort(arr) for arrays, and Collections.
sort(list) for ArrayLists. However, if we use custom objects, or if we want to sort elements
in a di↵erent order, then we’ll need to use a Comparator.
Normally, sorting functions rely on moving objects with a lower value ahead of objects
with a higher value if sorting in ascending order, and vice versa if in descending order. This
is done by comparing two objects at a time. A Comparator compares two objects based on
our comparison criteria: it returns a negative number if the first object should come before
the second, zero if they are considered equal, and a positive number if the first object should
come after the second.
In addition to returning the correct number, comparators should also satisfy the following
conditions:
• The function must be consistent with respect to reversing the order of the arguments:
if compare(x, y) is positive, then compare(y, x) should be negative and vice versa
• The function must be transitive. If compare(x, y) > 0 and compare(y, z) > 0, then
compare(x, z) > 0. Same applies if the compare functions return negative numbers.
Java has default functions for comparing ints, longs, and doubles. The Integer.compare(),
Long.compare(), and Double.compare() functions take two arguments x and y and compare
them as described above.
Now, there are two ways of implementing this in Java: Comparable, and Comparator.
They essentially serve the same purpose, but Comparable is generally easier and shorter to
code. Comparable is a function implemented within the class containing the custom object,
while Comparator is its own class. For our example, we’ll use a Person class that contains a
person’s height and weight, and sort in ascending order by height.
If we use Comparable, we’ll need to put implements Comparable<Person> into the
heading of the class. Furthermore, we’ll need to implement the compareTo method. Essentially,
compareTo(x) is the compare function that we described above, with the object itself as the
first argument: compare(self, x).
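A hedged sketch of such a Person class, sorted by height in ascending order:

static class Person implements Comparable<Person> {
    int height, weight;
    public Person(int height, int weight) {
        this.height = height;
        this.weight = weight;
    }
    public int compareTo(Person p) {
        return Integer.compare(height, p.height); // compare(self, p) by height
    }
}
// Arrays.sort(arr) or Collections.sort(list) now sorts Person objects by height.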
When using Comparator, the syntax for using the built-in sorting function requires
a second argument: Arrays.sort(arr, new Comp()), or Collections.sort(list, new
Comp()).
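A hedged sketch of the corresponding Comparator class (named Comp, as in the calls above):

static class Comp implements Comparator<Person> {
    public int compare(Person a, Person b) {
        return Integer.compare(a.height, b.height); // ascending order by height
    }
}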
If we instead wanted to sort in descending order, this is also very simple. Instead of the
comparing function returning Integer.compare(x, y) of the arguments, it should instead
return -Integer.compare(x, y).
I don’t recommend using arrays to represent objects, mostly because it’s confusing, but
it’s worth noting that some competitors do this.
8.3 Problems
1. USACO US Open 2018 Silver Problem 2: Lemonade Line
http://www.usaco.org/index.php?page=viewproblem2&cpid=835
Chapter 9
Greedy Algorithms
Greedy algorithms are algorithms that select the most optimal choice at each step, instead
of looking at the solution space as a whole. This reduces the problem to a smaller problem at
each step. However, as greedy algorithms never recheck previous steps, they sometimes lead
to incorrect answers. Moreover, in a certain problem, there may be more than one possible
greedy algorithm; usually only one of them is correct. This means that we must be extremely
careful when using the greedy method. However, when they are correct, greedy algorithms
are extremely efficient.
Greedy is not a single algorithm, but rather a way of thinking that is applied to problems.
There’s no one way to do greedy algorithms. Hence, we use a selection of well-known examples
to help you understand the greedy paradigm.
Usually, when using a greedy algorithm, there is a heuristic or value function that
determines which choice is considered most optimal.
In this case, the greedy algorithm selects only one event to attend, even though the optimal
solution attends more events.
Coin Change
This problem gives several coin denominations, and asks for the minimum number of
coins needed to make a certain value. The greedy algorithm of taking the largest possible
coin denomination that fits in the remaining capacity can be used to solve this problem only
in very specific cases (it can be proven that it works for the American as well as the Euro
coin systems). However, it doesn’t work in the general case.
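A hedged sketch of the greedy approach, valid only for coin systems where greedy is known to work (such as the US denominations); coins[] is assumed to be sorted in decreasing order:

static int minCoinsGreedy(int[] coins, int value) {
    int count = 0;
    for (int coin : coins) {
        count += value / coin; // take as many of this denomination as fit
        value %= coin;
    }
    return count; // assumes the value is exactly representable, i.e. value is now 0
}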
Knapsack
The knapsack problem gives a number of items, each having a weight and a value, and
we want to choose a subset of these items. We are limited to a certain weight, and we want
to maximize the value of the items that we take.
Let's take an example with three items A, B, and C, where we have a maximum weight capacity of 4:
If we use greedy based on highest value first, we choose item A and then we are done, as
we don’t have remaining weight to fit either of the other two. Using greedy based on value
per weight again selects item A and then quits. However, the optimal solution is to select
items B and C, as they combined have a higher value than item A alone. In fact, there is no
working greedy solution. The solution to this problem uses dynamic programming, which is
beyond the scope of this book.
9.4 Problems
1. USACO December 2015 Silver Problem 2: High Card Wins
http://usaco.org/index.php?page=viewproblem2&cpid=571
3. USACO February 2017 Silver Problem 1: Why Did The Cow Cross The Road
http://www.usaco.org/index.php?page=viewproblem2&cpid=714
Chapter 10
Graph Theory
Graph theory is one of the most important topics at the Silver level and above. Graphs
can be used to represent many things, from images to wireless signals, but one of the simplest
analogies is to a map. Consider a map with several cities and highways connecting the cities.
Some of the problems relating to graphs are:
• If we have a map with some cities and roads, what’s the shortest distance I have to
travel to get from point A to point B?
• Consider a map of cities and roads. Is city A connected to city B? Consider a region to
be a group of cities such that each city in the group can reach any other city in said
group, but no other cities. How many regions are in this map, and which cities are in
which region?
Figure 10.1: An undirected unweighted graph (left) and a directed weighted graph (right)
A connected component is a set of nodes within which any node can reach any other
node. For example, in this graph, nodes 1, 2, and 3 are a connected component, nodes 4 and
5 are a connected component, and node 6 is its own component.
(figure: an example graph on nodes 1 through 6)
10.2 Trees
A tree is a special type of graph satisfying two constraints: it is acyclic, meaning there
are no cycles, and the number of edges is one less than the number of nodes. Trees satisfy
the property that for any two nodes A and B, there is exactly one way to travel between A
and B.
(figure: an example tree, rooted at node 1)
The root of a tree is the one vertex that is placed at the top, and is where we usually
start our tree traversals from. Usually, problems don’t tell us where the tree is rooted at, and
it usually doesn’t matter either; trees can be arbitrarily rooted (here, we’ll use the convention
of rooting at index 1).
Every node except the root node has a parent. The parent of a node s is defined as
follows: On the path from the root to s, the node that is one closer to the root than s is the
parent of s. Each non-root node has a unique parent.
Child nodes are the opposite. They lie one farther away from the root than their parent
node. Unlike parent nodes, these are not unique. Each node can have arbitrarily many child
nodes, and nodes can also have zero children. If a node s is the parent of a node t, then t is
the child node of s.
A leaf node is a node that has no children. Leaf nodes can be identified quite easily
because there is only one edge adjacent to them.
In our example tree above, node 1 is the root, nodes 2 and 3 are children of node 1, nodes
4, 5, and 6 are children of 2, and node 7 is the child of 3. Nodes 4, 5, 6, and 7 are leaf nodes.
For example, a graph with 6 nodes and 7 edges might be given as input in the following format:
6 7 // 6 nodes, 7 edges
// the following lines represent edges.
1 2
1 4
1 5
2 3
2 4
3 5
4 6
10.3 Graph Representations
Graphs can be represented in three ways: Adjacency List, Adjacency Matrix, and Edge
List. Regardless of how the graph is represented, it’s important that it be stored globally
and statically, because we need to be able to access it from outside the main method, and
call the graph searching and traversal methods on it.
Adjacency List
The adjacency list is the most commonly used method of storing graphs. When we use
DFS, BFS, Dijkstra’s, or other single-source graph traversal algorithms, we’ll want to use an
adjacency list. In an adjacency list, we maintain a length N array of lists. Each list stores
the neighbors of one node. In an undirected graph, if there is an edge between node a and
node b, we add a to the list of b’s neighbors, and b to the list of a’s neighbors. In a directed
graph, if there is an edge from node a to node b, we add b to the list of a’s neighbors, but
not vice versa.
Adjacency lists take up O(N + M ) space, because each node corresponds to one list of
neighbors, and each edge corresponds to either one or two endpoints (directed vs undirected).
In an adjacency list, we can find (and iterate through) the neighbors of a node easily. Hence,
the adjacency list is the graph representation we should be using most of the time.
Often, we'll want to maintain an array visited, which is a boolean array representing
whether each node has been visited. When we visit node k (0-indexed), we mark visited[k]
true, so that we know not to return to it.
Code for setting up an adjacency list is as follows:
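A hedged sketch of reading an unweighted graph into an adjacency list (0-indexed nodes; r is the input reader from Chapter 2):

static List<Integer>[] adj;
static boolean[] visited;
// inside main:
int n = r.nextInt(), m = r.nextInt(); // number of nodes and edges
adj = new ArrayList[n];
visited = new boolean[n];
for (int i = 0; i < n; i++) adj[i] = new ArrayList<Integer>();
for (int i = 0; i < m; i++) {
    int a = r.nextInt() - 1, b = r.nextInt() - 1; // convert to 0-indexed
    adj[a].add(b);
    adj[b].add(a); // omit this line for a directed graph
}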
If we’re dealing with a weighted graph, we’ll declare an Edge class or struct that stores
two variables: the second endpoint of the edge, and the weight of the edge, and we store an
array of lists of edges rather than an array of lists of integers.
Adjacency Matrix
Another way of representing graphs is the adjacency matrix, which is an N by N two-
dimensional array that stores, for each pair of indices (a, b), whether there is an
edge between a and b. Start by initializing every entry in the matrix to zero (this is done
automatically in Java), and then for undirected graphs, for each edge between indices a and
b, set adj[a][b] and adj[b][a] to 1 (if unweighted) or the edge weight (if weighted). If
the graph is directed, for an edge from a to b, only set adj[a][b].
     0  1  2  3  4  5
0    0  9  0  4  3  0
1    9  0  5  2  0  0
2    0  5  0  0  4  1
3    4  2  0  0  0  3
4    3  0  4  0  0  0
5    0  0  1  3  0  0
At the Silver level, we generally won’t be using the adjacency matrix much, but it’s helpful
to know if it does come up. The primary use of the adjacency matrix is the Floyd-Warshall
algorithm, which is beyond the scope of this book.
Code for setting up an adjacency matrix is as follows:
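A hedged sketch of reading a weighted, undirected graph into an adjacency matrix:

static int[][] adjMatrix;
// inside main:
int n = r.nextInt(), m = r.nextInt();
adjMatrix = new int[n][n]; // entries default to 0, meaning no edge
for (int i = 0; i < m; i++) {
    int a = r.nextInt(), b = r.nextInt(), w = r.nextInt();
    adjMatrix[a][b] = w;
    adjMatrix[b][a] = w; // omit this line for a directed graph
}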
Edge List
The last graph representation is the edge list. Usually, we use this in weighted undirected
graphs when we want to sort the edges by weight (for DSU, for example; see section 10.6).
In the edge list, we simply store a single list of all the edges, in the form (a, b, w) where a
and b are the nodes that the edge connects, and w is the edge weight. Note that in an edge
list, we do NOT add each edge twice; there is only one place for us to add the edges, so we
only do so once.
(0, 1, 9), (0, 3, 4), (0, 4, 3), (1, 3, 2), (3, 5, 3), (2, 4, 4), (2, 1, 5), (2, 5, 1)
Code for the edge list is as follows, using the above edge class:
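A hedged sketch, including a minimal Edge class of the kind described above:

static class Edge {
    int a, b, w; // the two endpoints and the edge weight
    public Edge(int a, int b, int w) { this.a = a; this.b = b; this.w = w; }
}
static List<Edge> edges = new ArrayList<Edge>();
// inside main:
int n = r.nextInt(), m = r.nextInt();
for (int i = 0; i < m; i++) {
    int a = r.nextInt(), b = r.nextInt(), w = r.nextInt();
    edges.add(new Edge(a, b, w)); // each edge is added only once
}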
10.4 Graph Traversal Algorithms
Depth-first search
Depth-first search continues down a single path to the end, then it backtracks to check
other vertices. Depth-first search will process all nodes that are reachable (connected by
edges) to the starting node. Let's look at an example of how this works. Depth-first search
can start at any node, but by convention we'll start the search at node 1. We'll use the
following color scheme: blue for nodes we have already visited, red for nodes we are currently
processing, and black for nodes that have not been visited yet.
The DFS starts from node 1 and then goes to node 2, as it’s the only neighbor of node 1:
Now, the DFS goes to node 3 and then 4, following a single path to the end until it has no
more nodes to process:
Lastly, the DFS backtracks to visit node 5, which was skipped over previously.
Depth-first search is implemented recursively because it allows for much simpler and shorter
code. The algorithm is as follows:
Algorithm: Recursive implementation for depth-first traversal of a graph
Function DFS
Input : start, the 0-indexed number of the starting vertex
visited(start) ← true
foreach vertex k adjacent to start do
if visited(k) is false then
DFS (k)
end
end
Code:
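A hedged sketch of recursive DFS, using the adjacency list adj and the visited array from earlier in this chapter:

static void dfs(int node) {
    visited[node] = true;
    for (int next : adj[node]) {
        if (!visited[next]) {
            dfs(next);
        }
    }
}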
Breadth-first search
Breadth-first search visits nodes in order of distance away from the starting node; it first
visits all nodes that are one edge away, then all nodes that are two edges away, and so on.
Let’s use the same example graph that we used earlier: The BFS starts from node 1 and
then goes to node 2, as it’s the only neighbor of node 1:
Now, the BFS goes to node 3, and then node 5, because both of them are two edges away
from node 1:
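Breadth-first search is typically implemented with the Queue from Chapter 4; a hedged sketch using the same adjacency list and visited array:

static void bfs(int start) {
    Queue<Integer> q = new LinkedList<Integer>();
    q.add(start);
    visited[start] = true;
    while (!q.isEmpty()) {
        int node = q.poll();
        for (int next : adj[node]) {
            if (!visited[next]) {
                visited[next] = true;
                q.add(next);
            }
        }
    }
}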
Iterative DFS
If you encounter stack overflows while using recursive DFS, you can write an iterative
DFS, which is just BFS but with nodes stored on a stack rather than a queue.
10.5 Floodfill
Floodfill is an algorithm that identifies and labels the connected component that a
particular cell belongs to, in a multi-dimensional array. Essentially, it’s DFS, but on a grid,
and we want to find the connected component of all the connected cells with the same number.
For example, let’s look at the following grid and see how floodfill works, starting from the
top-left cell. The color scheme will be the same: red for the node currently being processed,
blue for nodes already visited, and uncolored for nodes not yet visited.
2 2 1
2 1 1
2 2 1
As opposed to an explicit graph where the edges are given, a grid is an implicit graph.
This means that the neighbors are just the nodes directly adjacent in the four cardinal
directions.
Usually, grids given in problems will be N by M , so the first line of the input contains the
numbers N and M. In this example, we will use a two-dimensional integer array to store the
grid, but depending on the problem, a two-dimensional character array or a two-dimensional
boolean array may be more appropriate. Then, there are N rows, each with M numbers
containing the contents of each square in the grid. Example input might look like the following
(varies between problems):
3 4
1 1 2 1
2 3 2 1
1 3 3 3
The floodfill algorithm is then as follows (using a global two-dimensional visited array):
Algorithm: Floodfill of a graph
Function main
// Input/output, global vars, etc hidden
for i ← 0 to n − 1 do
for j ← 0 to m − 1 do
if the square at (i, j) is not visited then
currentSize ← 0
floodfill(i, j, grid[i][j])
Process the connected component
end
end
end
Function floodfill
Input : r, c, color
// row and column index of starting square, target color
if r or c is out of bounds then
return
end
if the cell at (r, c) is the wrong color then
return
end
if the square at (r, c) has already been visited then
return
end
visited[r][c] ← true
currentSize ← currentSize + 1
floodfill(r, c + 1, color)
floodfill(r, c − 1, color)
floodfill(r − 1, c, color)
floodfill(r + 1, c, color)
The code below shows the global/static variables we need to maintain while doing floodfill,
and the floodfill algorithm itself.
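A hedged sketch matching the pseudocode above:

static int[][] grid;        // the grid itself
static boolean[][] visited; // whether each cell has been visited
static int n, m;            // grid dimensions
static int currentSize;     // size of the current connected component
static void floodfill(int r, int c, int color) {
    if (r < 0 || r >= n || c < 0 || c >= m) return; // out of bounds
    if (grid[r][c] != color) return;                // wrong color
    if (visited[r][c]) return;                      // already visited
    visited[r][c] = true;
    currentSize++;
    floodfill(r, c + 1, color);
    floodfill(r, c - 1, color);
    floodfill(r - 1, c, color);
    floodfill(r + 1, c, color);
}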
10.6 Disjoint-Set Data Structure
The disjoint-set data structure (DSU, also called union-find) maintains a collection of disjoint sets of nodes and supports merging two sets and finding which set a node belongs to. To achieve this, we store sets as trees, with the root of the tree representing the “parent”
of the set. Initially, we store each node as its own set. Then, we combine their sets when we
add an edge between two nodes. The image below illustrates this structure.
However, this naive implementation of a DSU isn't much better than simply running a
floodfill: recursing up the tree of a set to find its root can be time-consuming for
trees with long chains, so the runtime ultimately degrades to O(nm) for n nodes and
m edges.
Now that we understand the general idea of a DSU, we can improve the runtime of this
implementation using an optimization known as path compression. The general idea is to
reassign nodes in the tree as you are recursively calling the find method to prevent long
chains from forming. Here is a rewritten find method representing this idea:
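A hedged sketch of find with path compression (parent[i] stores the parent of node i, and a root is its own parent):

static int[] parent;
static int find(int x) {
    if (parent[x] == x) return x;
    parent[x] = find(parent[x]); // point x directly at the root (path compression)
    return parent[x];
}
// merging the sets containing a and b then amounts to: parent[find(a)] = find(b);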
The following image demonstrates how the tree with parent 1 is compressed after find(6)
is called. All of the bolded nodes in the final tree were visited during the recursive operation,
and now point to the root.
With this new optimization, the runtime reduces to O(n log n), far better than our naive
algorithm. Further optimizations can reduce the runtime of DSU to nearly constant. However,
those techniques and the proof of complexity for these optimizations are both unnecessary for
and out of the scope of the USACO Silver division, so they will not be included in this book.
10.7 Bipartite Graphs
A graph is bipartite if its nodes can be split into two groups such that every edge connects a node in one group to a node in the other; equivalently, the nodes can be colored with two colors so that no edge joins two nodes of the same color.
A graph is bipartite if and only if there are no cycles of odd length. For example, the
following graph is not bipartite, because it contains a cycle of length 3.
In a bipartite graph, such a two-coloring splits the vertices into two “groups” depending on their color.
To check whether a graph is bipartite, we can run a DFS or BFS and give each node the
opposite color of the node we came from. If we ever find an edge whose two endpoints have
the same color, the graph is not bipartite; if the traversal finishes without such a conflict,
we return true (the graph is bipartite).
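A hedged sketch of this check using a BFS two-coloring (adj is the adjacency list from earlier; color[i] is -1 while node i is uncolored, otherwise 0 or 1):

static boolean isBipartite(int n) {
    int[] color = new int[n];
    Arrays.fill(color, -1);
    for (int start = 0; start < n; start++) {
        if (color[start] != -1) continue; // already handled in another component
        color[start] = 0;
        Queue<Integer> q = new LinkedList<Integer>();
        q.add(start);
        while (!q.isEmpty()) {
            int node = q.poll();
            for (int next : adj[node]) {
                if (color[next] == -1) {
                    color[next] = 1 - color[node]; // opposite color of its parent
                    q.add(next);
                } else if (color[next] == color[node]) {
                    return false; // edge between two nodes of the same color
                }
            }
        }
    }
    return true; // bipartite
}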
10.8 Problems
DFS/BFS Problems
1. USACO January 2018 Silver Problem 3: MooTube
http://www.usaco.org/index.php?page=viewproblem2&cpid=788
DSU Problems
Many of these problems do not require DSU. However, they become much easier to do if
you understand it.
Chapter 11
Prefix Sums
11.1 Prefix Sums
Suppose we have a 1-indexed array arr of size N and Q queries, each asking for the sum of
the elements between indices a and b, inclusive. For example, take the following array:
Index i 0 1 2 3 4 5 6
arr[i] 0 1 6 4 2 5 3
Naively, for every query, we can iterate through all entries from index a to index b to add
them up. Since we have Q queries and each query requires a maximum of O(N ) operations
to calculate the sum, our total time complexity is O(N Q). For most problems of this nature,
the constraints will be N, Q ≤ 10^5, so NQ is on the order of 10^10. This is not acceptable; it
will almost always exceed the time limit.
Instead, we can use prefix sums to process these array sum queries. We designate a prefix
sum array prefix[]. First, since we’re 1-indexing the array, set prefix[0] = 0, then for
indices k such that 1 ≤ k ≤ n, define the prefix sum array as follows:

prefix[k] = arr[1] + arr[2] + · · · + arr[k]
Basically, what this means is that the element at index k of the prefix sum array stores the
sum of all the elements in the original array from index 1 up to k. This can be calculated
easily in O(N ) by the following formula:
prefix[k] = prefix[k-1] + arr[k]
For the example case, our prefix sum array looks like this:
Index i 0 1 2 3 4 5 6
prefix[i] 0 1 7 11 13 18 21
Now, when we want to query for the sum of the elements of arr between (1-indexed)
indices a and b inclusive, we can use the following formula:
arr[a] + arr[a+1] + · · · + arr[b] = (arr[1] + · · · + arr[b]) − (arr[1] + · · · + arr[a−1])
Using our definition of the elements in the prefix sum array, we have
arr[a] + arr[a+1] + · · · + arr[b] = prefix[b] − prefix[a-1]
Since we are only querying two elements in the prefix sum array, we can calculate subarray
sums in O(1) per query, which is much better than the O(N ) per query that we had before.
Now, after an O(N ) preprocessing to calculate the prefix sum array, each of the Q queries
takes O(1) time. Thus, our total time complexity is O(N + Q), which should now pass the
time limit.
Let's do an example query and find the subarray sum between indices a = 2 and b = 5,
inclusive, in the 1-indexed arr. From looking at the original array, we see that this is
arr[2] + arr[3] + arr[4] + arr[5] = 6 + 4 + 2 + 5 = 17. Using the prefix sum array, we get
the same answer with prefix[5] − prefix[1] = 18 − 1 = 17.
Index i 0 1 2 3 4 5 6
arr[i] 0 1 6 4 2 5 3
Index i 0 1 2 3 4 5 6
prefix[i] 0 1 7 11 13 18 21
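A hedged sketch of building and querying 1-indexed prefix sums (arr is assumed to be a 1-indexed array of length N + 1):

long[] prefix = new long[N + 1]; // prefix[0] = 0
for (int i = 1; i <= N; i++) {
    prefix[i] = prefix[i - 1] + arr[i];
}
// sum of arr[a..b], inclusive:
long sum = prefix[b] - prefix[a - 1];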
11.2 Two Dimensional Prefix Sums
We can also apply prefix sums in two dimensions: now each query asks for the sum of a
rectangular submatrix of an N by M matrix. Take the following example matrix, padded
with zeroes in row 0 and column 0:
0 0 0 0 0 0
0 1 5 6 11 8
0 1 7 11 9 4
0 4 6 1 3 2
0 7 5 4 2 3
Naively, each sum query would then take O(N M ) time, for a total of O(QN M ). This is
too slow.
Let's take the following example region, which we want to sum: the submatrix covering rows
2 through 3 and columns 2 through 4 of the matrix above, i.e. the cells containing 7, 11, 9
and 6, 1, 3.
Manually summing all the cells, we have a submatrix sum of 7 + 11 + 9 + 6 + 1 + 3 = 37.
The first logical optimization would be to do one-dimensional prefix sums of each row.
Then, we'd have the following row-prefix sum matrix. The desired subarray sum of each row
in our region is the entry in column 4 minus the entry in column 1 of that row. We do
this for each row, to get (28 − 1) + (14 − 4) = 37.
0 0 0 0 0 0
0 1 6 12 23 31
0 1 8 19 28 32
0 4 10 11 14 16
0 7 12 16 18 21
Now, if we wanted to find a submatrix sum, we could break up the submatrix into a
subarray for each row, and then add their sums, which would be calculated using the prefix
sums method described earlier. Since the matrix has N rows, the time complexity of this is
O(QN ). This is better, but still usually not fast enough.
To do better, we can do two-dimensional prefix sums. In our two-dimensional prefix sum
array, prefix[a][b] stores the sum of arr[i][j] over all cells with 1 ≤ i ≤ a and 1 ≤ j ≤ b.
This can be calculated as follows for row index 1 ≤ i ≤ n and column index 1 ≤ j ≤ m:
prefix[i][j] = prefix[i-1][j] + prefix[i][j-1] − prefix[i-1][j-1] + arr[i][j]
The submatrix sum between rows a and A and columns b and B, inclusive, can thus be expressed as follows:

sum of arr[i][j] over a ≤ i ≤ A, b ≤ j ≤ B = prefix[A][B] − prefix[a-1][B] − prefix[A][b-1] + prefix[a-1][b-1]
Summing our example region using the 2D prefix sums method, we add prefix[3][4], subtract
prefix[1][4] and prefix[3][1], and then add back prefix[1][1]. In this example, we have
65 − 23 − 6 + 1 = 37, as expected. The full 2D prefix sum array of the example matrix is:
0 0 0 0 0 0
0 1 6 12 23 31
0 2 14 31 51 63
0 6 24 42 65 79
0 13 36 58 83 100
Since no matter the size of the submatrix we are summing, we only need to access 4 values
of the 2d prefix sum array, this runs in O(1) per query after an O(N M ) preprocessing. This
is fast enough.
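A hedged sketch of 2D prefix sums (arr and prefix are 1-indexed, with row 0 and column 0 left as zeroes):

long[][] prefix = new long[N + 1][M + 1];
for (int i = 1; i <= N; i++) {
    for (int j = 1; j <= M; j++) {
        prefix[i][j] = prefix[i - 1][j] + prefix[i][j - 1] - prefix[i - 1][j - 1] + arr[i][j];
    }
}
// sum of the submatrix with rows a..A and columns b..B, inclusive:
long sum = prefix[A][B] - prefix[a - 1][B] - prefix[A][b - 1] + prefix[a - 1][b - 1];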
11.3 Problems
1. USACO December 2015 Silver Problem 3: Breed Counting
http://usaco.org/index.php?page=viewproblem2&cpid=572
5. (2D Prefix Sums) USACO February 2019 Silver Problem 2: Painting the Barn
http://www.usaco.org/index.php?page=viewproblem2&cpid=919
Chapter 12
Binary Search
12.1 Binary Search on the Answer
In many problems, we can define a boolean check(x) function that is true for every x up to
some threshold and false for everything after it (or vice versa); that is, the check function is
monotonic. Then, we find the point at which true becomes false, using binary search.
Below, we present two common ways to implement this binary search. The first may be
more intuitive, because it's closer to the binary search most students first learn; the second
repeatedly halves a jump size and is often shorter to write.
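Hedged sketches of both forms, assuming a boolean check(x) that is true for every x up to the answer and false afterwards, with the answer known to lie in [lo, hi] and check(lo) true:

// First form: classic lo/hi binary search, returning the largest x with check(x) true.
static long firstSearch(long lo, long hi) {
    while (lo < hi) {
        long mid = lo + (hi - lo + 1) / 2; // round up so the loop always makes progress
        if (check(mid)) lo = mid;
        else hi = mid - 1;
    }
    return lo;
}

// Second form: start at lo and repeatedly halve the jump size.
static long secondSearch(long lo, long hi) {
    long pos = lo;
    for (long step = hi - lo; step >= 1; step /= 2) {
        while (pos + step <= hi && check(pos + step)) pos += step;
    }
    return pos;
}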
12.2 Example
Source: Codeforces Round 577 (Div. 2) Problem C
https://codeforces.com/contest/1201/problem/C
Given an array arr of n integers, where n is odd, we can perform the following operation
on it k times: take any element of the array and increase it by 1. We want to make the
median of the array as large as possible, after k operations.
Constraints: 1 ≤ n ≤ 2 · 10^5, 1 ≤ k ≤ 10^9, and n is odd.
The solution is as follows: we first sort the array in ascending order. Then, we binary
search for the maximum possible median. We know that the number of operations required
to raise the median to x increases monotonically as x increases, so we can use binary search.
For a given median value x, the number of operations required to raise the median to x is

max(0, x − arr[(n+1)/2]) + max(0, x − arr[(n+1)/2 + 1]) + · · · + max(0, x − arr[n])

using 1-indexing on the sorted array.
If this value is less than or equal to k, then x can be the median, so our check function
returns true. Otherwise, x cannot be the median, so our check function returns false.
Solution code (using the second implementation of binary search):
static int n;
static long k;
static long[] arr;
public static void main(String[] args) {
n = r.nextInt(); k = r.nextLong();
arr = new long[n];
for(int i = 0; i < n; i++){
arr[i] = r.nextLong();
}
Arrays.sort(arr);
pw.println(search());
pw.close();
}
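A hedged completion of the solution: check(x) counts the operations needed to raise the median to x, and search() is a jump-style binary search (the second form above) over possible medians. The I/O helpers r and pw are the ones from Chapter 2.

static boolean check(long x) {
    long operations = 0;
    for (int i = (n - 1) / 2; i < n; i++) { // 0-indexed median position onward
        operations += Math.max(0L, x - arr[i]);
    }
    return operations <= k;
}
static long search() {
    long pos = 0;
    for (long step = 2_000_000_000L; step >= 1; step /= 2) {
        while (check(pos + step)) pos += step;
    }
    return pos;
}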
12.3 Problems
1. USACO December 2018 Silver Problem 1: Convention
http://www.usaco.org/index.php?page=viewproblem2&cpid=858
Chapter 13
Elementary Number Theory
Prime factorization by trial division repeatedly divides n by each integer i starting from 2;
every i that divides n is a prime factor, and we only need to test values of i up to √n. For
example, here is how n = 252 is factorized (i is the current divisor, n the remaining value,
and v the list of prime factors found so far):
i n v
2 252 {}
2 126 {2}
2 63 {2, 2}
3 21 {2, 2, 3}
3 7 {2, 2, 3, 3}
At this point, the for loop terminates, because i is already 3, which is greater than ⌊√7⌋. In
the last step, we add 7 to the list of factors v, because it otherwise won't be added, for a
final prime factorization of {2, 2, 3, 3, 7}.
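A hedged sketch of this trial division, running in O(√n):

static List<Long> primeFactorize(long n) {
    List<Long> v = new ArrayList<Long>();
    for (long i = 2; i * i <= n; i++) {
        while (n % i == 0) {
            v.add(i);
            n /= i;
        }
    }
    if (n > 1) v.add(n); // whatever remains is itself prime (7 in the example above)
    return v;
}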
Finding the GCD of two numbers can be done in O(log n) time, where n = min(a, b).
The least common multiple (LCM) of two integers a and b is the smallest integer
divisible by both a and b.
The LCM can easily be calculated from the following property with the GCD:

lcm(a, b) = a · b / gcd(a, b)
If we want to take the GCD or LCM of more than two elements, we can do so two at a time,
in any order; for example, gcd(a, b, c) = gcd(gcd(a, b), c).
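A hedged sketch of the Euclidean algorithm for GCD, with LCM computed via the identity above:

static long gcd(long a, long b) {
    return b == 0 ? a : gcd(b, a % b);
}
static long lcm(long a, long b) {
    return a / gcd(a, b) * b; // divide first to reduce the risk of overflow
}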
Under a prime modulus, division does exist; however, it's rarely used in problems and is
beyond the scope of this book.
13.4 Problems
1. CodeForces VK Cup 2012 Wildcard Round 1
https://codeforces.com/problemset/problem/162/C
Chapter 14
Additional Silver Topics
2SUM Problem
Given an array of N elements (1 ≤ N ≤ 10^5), find two elements that sum to X. We can
solve this problem using two pointers; sort the array, then set one pointer at the beginning
and one pointer at the end of the array. Then, we consider the sum of the numbers at the
indices of the pointers. If the sum is too small, advance the left pointer towards the right,
and if the sum is too large, advance the right pointer towards the left. Repeat until either
the correct sum is found, or the pointers meet (in which case there is no solution).
Let’s take the following example array, where N = 6 and X = 15
1 7 11 10 5 13
First, we sort the array:
1 5 7 10 11 13
We then place the left pointer at the start of the array, and the right pointer at the end
of the array.
1 5 7 10 11 13
Then, run and repeat this process: If the sum of the pointer elements is less than X,
move the left pointer one step to the right. If the sum is greater than X, move the right
pointer one step to the left. The example is as follows. First, the sum 1 + 13 = 14 is too
small, so we move the left pointer one step to the right.
1 5 7 10 11 13
Now, 5 + 13 = 18 overshoots the sum we want, so we move the right pointer one step to
the left.
1 5 7 10 11 13
At this point we have 5 + 11 = 16, still too big. We continue moving the right pointer to
the left.
1 5 7 10 11 13
Now, we have the correct sum, and we are done.
Code is as follows:
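A hedged sketch of the two pointers approach (arr has length N and the target sum is X; one valid pair is printed, or -1 if none exists):

Arrays.sort(arr);
int left = 0, right = N - 1;
boolean found = false;
while (left < right) {
    long sum = (long) arr[left] + arr[right];
    if (sum == X) {
        pw.println(arr[left] + " " + arr[right]);
        found = true;
        break;
    } else if (sum < X) {
        left++;   // sum too small: advance the left pointer
    } else {
        right--;  // sum too large: move the right pointer to the left
    }
}
if (!found) pw.println(-1);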
Subarray Sum
Given an array of N (1 ≤ N ≤ 10^5) positive elements, find a contiguous subarray that
sums to X.
We can do this in a similar manner to how we did the 2SUM problem: except this time we
start both pointers at the left, and the pointers mark the beginning and end of the subarray
we are currently checking. We advance the right pointer one step to the right if the total of
the current subarray is too small, advance the left pointer one step to the right if the current
total is too large, and we are done when we find the correct total.
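A hedged sketch of this sliding window (arr has length N of positive integers, and we look for a contiguous sum equal to X):

long total = 0;
int left = 0;
for (int right = 0; right < N; right++) {
    total += arr[right];                 // extend the window to the right
    while (total > X && left <= right) { // shrink from the left while the total is too large
        total -= arr[left];
        left++;
    }
    if (total == X) {
        pw.println((left + 1) + " " + (right + 1)); // 1-indexed endpoints of one answer
        break;
    }
}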
Then, the key observation is that two segments P Q and XY intersect if the following two
conditions hold:
• the signed areas [P Q X] and [P Q Y ] have opposite signs, and
• the signed areas [X Y P ] and [X Y Q] have opposite signs.
CHAPTER 14. ADDITIONAL SILVER TOPICS 74
For example, in the figure below, [X1 P1 Q1 ] and [Q1 X1 Y1 ] are positive because their vertices
occur in counterclockwise order, and [Y1 P1 Q1 ] and [P1 X1 Y1 ] are negative because their vertices
occur in clockwise order. Therefore, we know that X1 Y1 and P1 Q1 intersect. Similarly, on
the right, we know that [P2 X2 Y2 ] and [Q2 X2 Y2 ] have vertices both going in clockwise order,
so their signed areas are the same, and therefore P2 Q2 and X2 Y2 don’t intersect.
(figure: on the left, segments P1 Q1 and X1 Y1 cross each other; on the right, segments P2 Q2 and X2 Y2 do not intersect)
If the two conditions hold and some of the signs are zero, then this means that the segments
intersect at their endpoints. If the problem does not count these as intersecting, then consider
zero to have the same sign as both positive and negative.
However, there is a special case. If the signs of all four areas are zero, then all four points
lie on a line. To check if they intersect in this case, we just check whether one point is
between the others. In particular, we check if P or Q is on XY or if X is on P Q. We don’t
need to check if Y is on P Q because if the segments do intersect, we will have two instances
of points on the other segments.
Here’s a full implementation:
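A hedged sketch of segment intersection with signed areas (cross products); coordinates are longs, and endpoint touches count as intersections, matching the default described above:

static long cross(long ax, long ay, long bx, long by, long cx, long cy) {
    // twice the signed area of triangle ABC; positive if A -> B -> C is counterclockwise
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}
static boolean onSegment(long px, long py, long qx, long qy, long rx, long ry) {
    // assumes p, q, r are collinear; checks whether r lies on segment pq
    return Math.min(px, qx) <= rx && rx <= Math.max(px, qx)
        && Math.min(py, qy) <= ry && ry <= Math.max(py, qy);
}
static boolean segmentsIntersect(long px, long py, long qx, long qy,
                                 long xx, long xy, long yx, long yy) {
    long d1 = cross(px, py, qx, qy, xx, xy); // [P Q X]
    long d2 = cross(px, py, qx, qy, yx, yy); // [P Q Y]
    long d3 = cross(xx, xy, yx, yy, px, py); // [X Y P]
    long d4 = cross(xx, xy, yx, yy, qx, qy); // [X Y Q]
    if (d1 == 0 && d2 == 0 && d3 == 0 && d4 == 0) {
        // special case: all four points lie on one line, so check for overlap directly
        return onSegment(px, py, qx, qy, xx, xy)
            || onSegment(px, py, qx, qy, yx, yy)
            || onSegment(xx, xy, yx, yy, px, py);
    }
    // X and Y on opposite (or touching) sides of PQ, and P and Q likewise for XY
    return Long.signum(d1) * Long.signum(d2) <= 0 && Long.signum(d3) * Long.signum(d4) <= 0;
}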
Bitwise Operations
There are several operations on binary numbers called bitwise operations, which are applied
separately to each bit position. The most common ones are AND (&), OR (|), XOR (^),
and the shift operators, demonstrated below:
The AND operation (&) returns 1 if and only if both bits are 1.
19 & 27
1 0 0 1 1 = 19
AN D 1 1 0 1 1 = 27
= 1 0 0 1 1 = 19
The OR operation (|) returns 1 if either bit is 1.
19 | 27
1 0 0 1 1 = 19
OR 1 1 0 1 1 = 27
= 1 1 0 1 1 = 27
The XOR operation (^) returns 1 if and only if exactly one of the bits is 1.
19 ^ 27
1 0 0 1 1 = 19
XOR 1 1 0 1 1 = 27
= 0 1 0 0 0 = 8
Finally, the left shift operator x << k multiplies x by 2^k. Watch for overflow and use the
long data type if necessary. For example:
1 << 5 = 1 · 2^5 = 32
7 << 2 = 7 · 2^2 = 28
Exercises
Calculate by converting the numbers to binary, applying the bit operations, and then
converting back to decimal numbers:
(a) 19 & 34 Answer: 2
(b) 14 | 29 Answer: 31
(c) 10 ^ 19 Answer: 25
(d) 3 << 5 Answer: 96
Generating Subsets
Occasionally in a problem we’ll want to iterate through every possible subset of a given
set, either to find a subset that satisfies some condition, or to find the number of subsets that
satisfy some condition. Also, some problems might ask you to find the number of partitions
of a set into 2 groups that satisfy a certain condition. In this case, we will iterate through all
possible subsets, and check each subset for validity (first adding the non-selected elements to
the second subset if necessary).
In a set of N elements, there are 2^N possible subsets, because for each of the N elements,
there are two choices: either in the subset, or not in the subset. Subset problems usually
require a time complexity of O(N · 2^N), because each subset has an average of O(N) elements.
Now, let's look at how we can generate the subsets. We can represent subsets as binary
numbers from 0 to 2^N − 1. Then, each bit represents whether or not a certain element is in
the subset. For example, for the set {a, b, c} (bit 0 for a, bit 1 for b, bit 2 for c), the number
5 = 101 in binary represents the subset {a, c}, 3 = 011 represents {a, b}, and 0 represents
the empty set.
Algorithm: The algorithm for generating all subsets of a given input array
Function generateSubsets
Input : An array arr, and its length n
for i ← 0 to 2^n − 1 do
Declare list
for j ← 0 to n − 1 do
if the bit in the binary representation of i corresponding to 2^j is 1 then
Add arr[j] to the list
end
end
Process the list
end
In the following code, our original set is represented by the array arr[] with length n.
int ans = 0;
for(int i = 0; i < (1 << n); i++){
    // this loop iterates through the 2^n subsets, one by one
    // (1 << n is a shortcut for 2^n)
    ArrayList<Integer> subset = new ArrayList<Integer>();
    for(int j = 0; j < n; j++){
        if((i & (1 << j)) != 0) subset.add(arr[j]); // bit j of i is set: arr[j] is in this subset
    }
    // process the subset here, e.g. increment ans if it satisfies the condition
}
14.5 Problems
Two Pointers
1. CSES Problem Set Task 1640: Sum of Two Values
https://cses.fi/problemset/task/1640
Line Sweep
3. USACO US Open 2019 Silver Problem 2: Cow Steeplechase II
http://usaco.org/index.php?page=viewproblem2&cpid=943
Subsets
4. (Subsets) CSES Problem Set Task 1623: Apple Division
https://cses.fi/problemset/task/1623
Ad hoc problems
5. USACO February 2016 Silver Problem 1: Circular Barn
http://usaco.org/index.php?page=viewproblem2&cpid=618
Part IV
Problem Set
Chapter 15
Parting Shots
Set 1
1. https://codeforces.com/problemset/problem/1227/B
2. https://codeforces.com/problemset/problem/1196/B
3. https://codeforces.com/problemset/problem/1195/B
4. https://codeforces.com/problemset/problem/1294/B
5. https://codeforces.com/problemset/problem/1288/B
6. https://codeforces.com/problemset/problem/1293/A
7. https://codeforces.com/problemset/problem/1213/B
8. https://codeforces.com/problemset/problem/1207/B
9. https://codeforces.com/problemset/problem/1324/B
10. https://codeforces.com/problemset/problem/1327/A
Set 2
1. https://codeforces.com/problemset/problem/1182/B
2. https://codeforces.com/problemset/problem/1183/D
3. https://codeforces.com/problemset/problem/1183/C
4. https://codeforces.com/problemset/problem/1133/C
5. https://codeforces.com/problemset/problem/1249/B2
6. https://codeforces.com/problemset/problem/1194/B
7. https://codeforces.com/problemset/problem/1271/C
8. https://codeforces.com/problemset/problem/1326/C
9. https://codeforces.com/problemset/problem/1294/C
10. https://codeforces.com/problemset/problem/1272/B
Set 3
1. https://codeforces.com/problemset/problem/1169/B
2. https://codeforces.com/problemset/problem/1102/D
3. https://codeforces.com/problemset/problem/978/F
4. https://codeforces.com/problemset/problem/1196/C
5. https://codeforces.com/problemset/problem/1154/D
6. https://codeforces.com/problemset/problem/1272/D
7. https://codeforces.com/problemset/problem/1304/C
8. https://codeforces.com/problemset/problem/1296/C
9. https://codeforces.com/contest/1263/problem/D
10. https://codeforces.com/contest/1339/problem/C
Set 4
1. https://codeforces.com/problemset/problem/1281/B
2. https://codeforces.com/problemset/problem/1196/D2
3. https://codeforces.com/problemset/problem/1165/D
4. https://codeforces.com/problemset/problem/1238/C
5. https://codeforces.com/problemset/problem/1234/D
6. https://codeforces.com/problemset/problem/1198/B
7. https://codeforces.com/problemset/problem/1198/A
8. https://codeforces.com/problemset/problem/1077/D
9. https://codeforces.com/problemset/problem/1303/C
10. https://codeforces.com/problemset/problem/1098/A
Set 5
1. https://codeforces.com/problemset/problem/1185/D
2. https://codeforces.com/problemset/problem/1195/D2
3. https://codeforces.com/problemset/problem/1154/E
4. https://codeforces.com/contest/1195/problem/C
5. https://codeforces.com/problemset/problem/1196/E
6. https://codeforces.com/problemset/problem/1328/D
7. https://codeforces.com/problemset/problem/1253/D
8. https://codeforces.com/problemset/problem/1157/E
9. https://codeforces.com/problemset/problem/1185/C2
10. https://codeforces.com/problemset/problem/1209/D