The Future of Cryptography Under Quantum Computers: Dartmouth College Computer Science Technical Report TR2002 - 425
1 Preliminaries
1.1 Motivation
1.2 Overview
1.3 Introduction to cryptographic primitives
1.3.1 Basics and terminology
1.3.2 Symmetric-key cryptography
1.3.3 One-way hash functions
1.3.4 Trapdoor functions and public-key cryptography
1.3.5 Digital signing
1.3.6 Pseudorandom number generation
1.4 Complexity theory
1.4.1 Overview of complexity classes
1.4.2 Hard problems vs. easy problems
1.5 Quantum computers
2 Complexity-Generalized Cryptography
2.1 Complexity and cryptography
2.2 Definitions of cryptographic primitives
2.2.1 Definition of symmetric-key cryptography
2.2.2 Definition of one-way hash functions
2.2.3 Definition of public-key cryptography
2.2.4 Definition of digital signing
2.2.5 Definition of pseudorandom number generation
2.3 Complexity-generalized requirements for security and feasibility
2.3.1 Complexity requirements of Symmetric
2.3.2 Complexity requirements of OneWayHash
2.3.3 Complexity requirements of PublicKey
2.3.4 Complexity requirements of DigitalSign
2.3.5 Complexity requirements of PseudoRandom
3 Quantum Computers
3.1 Introduction to quantum computation
3.1.1 Qubits and quantum properties
3.1.2 The parallel potential of quantum computers
3.1.3 Decoherence
3.2 Shor’s factoring algorithm
3.3 Consequences
5 Conclusions
5.1 Complexity-generalized cryptography
5.2 Philosophical and practical implications
5.3 Open questions and future work
Abstract
Chapter 1
Preliminaries
1.1 Motivation
Cryptography is an ancient art that has passed through many paradigms,
from simple letter substitutions to polyalphabetic substitutions to rotor ma-
chines to digital encryption to public-key cryptosystems. With the possible
advent of quantum computers and the strange behaviors they exhibit, a new
paradigm shift in cryptography may be on the horizon. Quantum computers
may hold the potential to render most modern encryption useless against a
quantum-enabled adversary. The aim of this thesis is to characterize this
convergence of cryptography and quantum computation. We are not con-
cerned so much with particular algorithms as with cryptography in general.
To this end, we will examine primitives that constitute the core of modern
cryptography and analyze the complexity-theoretical implications for them
of quantum computation.
1.2 Overview
This chapter will be devoted to introducing the subjects to be discussed in
this thesis. In Chapter 2 we define and analyze the basic cryptographic prim-
itives, and in Chapter 3 we give an introduction to quantum computation and
discuss some results that could have implications for cryptography. Chap-
ter 4 is devoted to bringing together the cryptographic and quantum pieces
and characterizing their intersection. Finally, in Chapter 5 we summarize
our conclusions and suggest possible avenues for future work.
1.3 Introduction to cryptographic primitives
1.3.1 Basics and terminology
The term cryptography refers to the art or science of designing cryptosystems
(to be defined shortly), while cryptanalysis refers to the science or art of
breaking them. Although cryptology is the name given to the field that
includes both of these, we will generally follow the common practice (even
among many professionals and researchers in the field) of using the term
“cryptography” interchangeably with “cryptology” to refer to the making
and breaking of cryptosystems.
The main purpose of cryptography is to protect the interests of parties
communicating in the presence of adversaries. A cryptosystem is a mecha-
nism or scheme employed for the purpose of providing such protection. We
examine several cryptosystems in this paper, spanning a wide range of cryp-
tographic uses. We shall now take a moment to introduce the cryptographic
primitives to be discussed. They will be formally defined and analyzed in
Chapter 2. For a more comprehensive review of cryptographic concepts the
reader is directed to Rivest’s chapter in the Handbook of Theoretical Com-
puter Science [20], and for a wide-ranging treatment of the application of
those concepts the reader is referred to Schneier’s book Applied Cryptog-
raphy [22]. The definitions presented in Chapter 2, however, are meant to
construct a more general complexity-theoretical framework for discussing the
primitives than can be found currently in the literature.
The same piece of plaintext will generally encrypt to a different ciphertext
at different times. A stream cipher is conceptually very similar to a pseudo-
random number generator (see below) and, in fact, is often implemented in
the same way.
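This similarity can be made concrete: a stream cipher falls out of any keystream generator by XORing its output with the plaintext. The sketch below uses SHA-256 in counter mode as a stand-in keystream generator; the construction and names are ours for illustration, not a vetted cipher.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a keystream by hashing the key together with a running counter.
    # Illustrative only: this shows the stream-cipher/PRNG resemblance,
    # not an analyzed cryptographic construction.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def stream_xor(key: bytes, message: bytes) -> bytes:
    # Encryption and decryption are the same operation: XOR with the keystream.
    ks = keystream(key, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

ciphertext = stream_xor(b"secret key", b"attack at dawn")
assert stream_xor(b"secret key", ciphertext) == b"attack at dawn"
```

Note that the same plaintext encrypts differently under different keys or counter offsets, matching the behavior described above.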
The main purpose of such a cryptosystem is, of course, to thwart an ad-
versary in his or her attempt to intercept or disrupt communications. The
adversary may have various types of information available with which to
attack the cryptosystem. In a ciphertext-only attack, the adversary knows
nothing but a number of ciphertexts polynomial in the input size (the input
size is the sum of the sizes of the key and message). Note that in this paper
we always assume that the adversary has full knowledge of the algorithm
used. In a known-plaintext attack, the adversary has access to a polynomial
number of plaintext-ciphertext pairs. In a chosen-ciphertext attack, the ad-
versary may select a polynomial number of ciphertexts for which to see the
plaintext. One might also encounter adaptive chosen-plaintext or adaptive
chosen-ciphertext attacks, in which the adversary need not choose all the
plaintexts or ciphertexts at once but may see some results before making
further selections. For simplicity’s sake, we do not address the additional
complexity of these last three attacks, and we concern ourselves here with
ciphertext-only and known-plaintext attacks.
is not one-to-one, and it clearly cannot be honest with a fixed-size output. For
a detailed look at cryptographic one-way hash functions, including myriad
real-world examples, see Schneier’s book [22, Chapter 18]. Although there
are differing opinions on just what should constitute a one-way function, we
will attempt to make some generalizations and draw conclusions relevant to
cryptography.
because the operations are inverses of each other. Only the possessor of the
particular private key can generate a signature that is correctly verified by
the corresponding public key, and the signature for each document is different
(again, with high probability).
share the same asymptotic upper limit on running time for a particular model
of computation. Examples of such limits include:
all if more than two) execution paths and accepts the input if any
execution path enters an accepting configuration.
to the oracle tape and then entering a special oracle state; after a single
time step, the answer will replace the question on the oracle tape. The
oracle can be thought of as a problem the Turing machine gets to solve
for free.
When a proof is given that involves an OTM, it is an example of rela-
tivized complexity, or complexity analysis relative to an oracle. There
are at least two reasons why such proofs can be interesting. First, the
relativized proof becomes a non-relativized proof if ever a tractable al-
gorithm is devised to perform the same function as the oracle. And
second, since many proof techniques relativize, that is, remain valid
when applied in a relativized setting, it can be useful to demonstrate
different oracles relative to which open questions are answered in dif-
ferent ways. If this can be done it means that proving the question one
way or the other will be difficult and will require unusual techniques.
If O is an oracle, we denote by M^O an oracle Turing machine that can
query O in its computations.
There is one more class that we will mention: PSPACE is the class of
problems that take polynomial space to solve (and unspecified time). It is
well known that P ⊆ NP ⊆ PSPACE and BPP ⊆ PSPACE, but it is not
known whether any of those inclusions are proper.
Note that there is a distinction to be drawn among what we have been re-
ferring to generally as “problems.” Turing machines recognize languages, or,
equivalently, solve decision problems. Given a string, a Turing machine will
return accept or reject. Most of the problems we are concerned with, how-
ever, are not decision problems but functions that produce a value other than
merely accept or reject. These functions are described by classes FP analo-
gous to P, FNP analogous to NP, and so on (Grollmann and Selman [11]
refer to FP as PSV, FNP as NPSV, etc.). See Papadimitriou’s book [18]
for further explanation and analysis of this distinction. In this paper, we will
not distinguish between classes of languages and classes of functions until it
becomes crucial to draw the distinction clearly for Theorem 2.
BPP has replaced P as the main polynomial class. Now we want to make
this bound movable. The hard of tomorrow may be more restricting than
the hard of today, but we want these definitions to withstand the shift.
When talking about cryptography and complexity-theoretic security, we
need to define what is considered to be “hard” and what is “easy.” Gen-
erally the line between tractable and intractable has been taken to be the
polynomial/exponential line. That is, a problem is considered tractable if the
running time of an algorithm to solve it is O(n^k) for some constant k, and
a problem is considered intractable if it cannot be bounded by such a limit.
Though there are classes between polynomial and exponential, by far the
most commonly discussed superpolynomial bound is exponential time.
Classifying decision problems as easy and hard by these standards de-
pends on the model of computation used, of course. Exactly which problems
are solvable in polynomial time depends on whether you can make coin flips,
choose nondeterministically, etc.
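The practical force of the polynomial/exponential line is easy to see numerically; even a fairly steep polynomial is quickly dwarfed by an exponential:

```python
# Compare a polynomial bound n^3 with an exponential bound 2^n.
for n in (10, 50, 100):
    print(n, n**3, 2**n)

# At n = 100, n^3 is only a million, while 2^n is on the order of 10^30 --
# infeasible on any classical machine, which is why "hard" is usually
# drawn at this line.
assert 100**3 < 2**100
```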
In this paper we shall use the notation Easy(n) to refer to classes that are
feasible as n grows large and Hard(n) to refer to classes that are infeasible
as n grows large, judging by the most powerful model(s) of computation
available. In Chapter 4 we will examine which problems will be in Easy(n)
and which will be in Hard(n) if a quantum computer is built and quantum
computing becomes available as a model of computation.
Chapter 2
Complexity-Generalized
Cryptography
Before we discuss in detail the effects quantum computers will have on cryp-
tography, it is necessary to define and review some important cryptographic
concepts. In this section we will present some fundamental elements of cryp-
tography as they are relevant to the subject. We assume the reader has a
basic familiarity with cryptography, but we will review the key details.
It is important to keep in mind that the security of the systems we are
concerned with is measured in the sense of computational complexity, not
the information-theoretic sense. A cryptosystem is information-theoretically
secure if the ciphertext (along with knowledge of the algorithm) does not
give the adversary enough information to find the plaintext. The standard
example of such a system is the one-time pad, under which each message
is xor’d with a different random key of the same length as the message.
Since any plaintext of that length could encrypt to the same ciphertext,
given the appropriate key, the adversary cannot determine any information
about the message (other than perhaps the length). A cryptosystem is still
computationally secure, on the other hand, even if an adversary has enough
information to recover the message in theory but the computation requires
too much time to be feasible.
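The one-time pad's mechanics are simple enough to show directly; here is a minimal sketch (function names are ours, chosen for illustration):

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # The key must be truly random, used only once, and as long as the message.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

# Decryption is the same xor, since (m ^ k) ^ k == m.
message = b"meet at noon"
key = secrets.token_bytes(len(message))
ciphertext = otp_encrypt(message, key)
assert otp_encrypt(ciphertext, key) == message

# Any plaintext of the same length is consistent with this ciphertext under
# *some* key -- the "right" key for a different plaintext is just the xor of
# the ciphertext with that plaintext. This is the information-theoretic
# security argument made in the text.
other = b"flee at dusk"
fake_key = bytes(c ^ o for c, o in zip(ciphertext, other))
assert otp_encrypt(ciphertext, fake_key) == other
```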
Primitive                         Definition      Security & feasibility
Symmetric cryptography            Section 2.2.1   Section 2.3.1
One-way hash functions            Section 2.2.2   Section 2.3.2
Public-key cryptography           Section 2.2.3   Section 2.3.3
Digital signing                   Section 2.2.4   Section 2.3.4
Pseudorandom number generation    Section 2.2.5   Section 2.3.5
2.2.1 Definition of symmetric-key cryptography
We begin with our complexity-generalized definition of a symmetric cryp-
tosystem.
This definition is left intentionally general because the particulars do
not concern us at this time. Whether this can easily be implemented as a
real test rather than an oracle query depends on the type of attack. For
a ciphertext-only attack in which the attacker knows something about the
structure of the message and has enough ciphertext, this test returns True
if the decryption for the key in question is intelligible (i.e. fits the known
structure). An obvious case is when the message is known to be, say, English
text in ASCII; I would return True if decryption with the key in question
produced a message recognizable as English. In a known-plaintext attack, the
test returns True if it decrypts the given ciphertext into the corresponding
plaintext.
Whether the attacker has “enough ciphertext” in a ciphertext-only at-
tack is measured by unicity distance, introduced by Shannon in his 1949
paper [23]. The unicity distance for a message with a certain structure is the
message length needed to guarantee with high probability that there is only
one plaintext that could produce the given ciphertext with any key. The
unicity distance for ASCII English text encrypted with various algorithms
ranges from about 8.2 to 37.6 characters for keys of length 56 to 256 bits [22,
p.236], so for this case it may be reasonable to assume that most messages
under consideration will be longer than the unicity distance and the test can
be performed without difficulty.
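Those figures follow from Shannon's formula U = H(K)/D, where H(K) is the key entropy in bits and D is the per-character redundancy of the plaintext (approximately 6.8 bits per 8-bit character for ASCII English, the value underlying the numbers quoted above):

```python
def unicity_distance(key_bits: float, redundancy_per_char: float = 6.8) -> float:
    # U = H(K) / D: the ciphertext length (in characters) beyond which, with
    # high probability, only one plaintext is consistent with any key.
    return key_bits / redundancy_per_char

print(round(unicity_distance(56), 1))   # ~8.2 characters for a 56-bit key
print(round(unicity_distance(256), 1))  # ~37.6 characters for a 256-bit key
```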
Note that the unicity distance test is meaningless for a one-time pad for
the same reasons that any ciphertext-only attack against a one-time pad is
theoretically impossible; namely, a particular ciphertext could be produced
by any message of the correct length given the right key, so there is no way to
distinguish one key from another if each decrypts the ciphertext to a message
that fits the structure.
We now turn to the cracking problem for a symmetric cryptosystem.
crack[Symmetric]I : {C × M∅}^+ −→ G,
such that:
∃(k ∈ K) ∀(c ∈ C) ∃(m ∈ M) :
(I(k) = True) ∧ (m = g(c, k)) ∧ (m = g′(c))
There is an acknowledged drawback to this definition of the cracking
problem: it presents an all-or-nothing approach. A real cryptosystem would
be considered compromised if the adversary could reliably decrypt half of
the messages but not all of them, though such a situation does not count as
cracking under our definition. This is one area in which further work could
extend the results presented here.
Definition 4 The cracking problem
crack[OneWayHash]I : Z^+ × (D × R)^∗ −→ G,
Note that this definition refers to f and g, which are, respectively, the
encryption and decryption functions parameterized by the keys.
• A key pair (ke , kd ) is valid for functions f and g if both correctness and
uniqueness hold.
– (Correctness)
∀(c ∈ C) : (f (g(c, kd ), ke ) = c)
– (Uniqueness)
At the heart of every public-key cryptosystem is a trapdoor function (or
function pair: f and g may or may not be the same function) parameterized
by a key pair in some form. Note that this definition is general enough to
consider function pairs as key pairs: kf and kg can be functions and f and g
can merely apply the given function to the given message or ciphertext.
The correctness criterion for valid key pairs stipulates that encrypting
a message with f, ke and then decrypting it with g, kd will reproduce the
original message, and vice-versa. The uniqueness criterion requires that no
two keys encrypt any one message to the same ciphertext (or decrypt any
one ciphertext to the same message).
crack[PublicKey]I : Kf −→ G,
• M is the message space
• It is also desirable for the key pair to have the uniqueness property:
(Uniqueness)
The correctness criterion requires all verifying keys to correctly verify sig-
natures created with their corresponding signing keys. The uniqueness crite-
rion specifies that no verifying key can produce a false positive, or True result
for a signature generated with either another message or another signing key.
The uniqueness criterion is usually slightly relaxed in practice, though with
the intent that it be infeasible for an adversary to take advantage of this
relaxation.
Many public-key cryptosystems can also function as digital signature
schemes.1 To sign a message, a user “decrypts” it with his or her private
key. Any other user can then verify the signature by “encrypting” it and
comparing the result to the original message. This scheme has some prob-
lems, however, including a signature that is as long as the original message.
A slight modification makes this practice much more useful, though it sacri-
fices perfect uniqueness. By first using a one-way hash function to obtain a
hash value h for the message and then signing h, a user can produce a useful
signature much shorter (in most cases) than the message, while preserving
1 Note: The Diffie-Hellman key exchange protocol [7] cannot be used as a signature
scheme, but it also does not fit under our definition of PublicKey.
the useful properties of the signature. Uniqueness will be compromised to
the extent that collisions of the hash function can be found.
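The hash-then-sign pattern just described can be sketched with textbook RSA on toy numbers. The parameters below are tiny and for illustration only; real systems use large keys and padded signatures, and reducing the digest mod n is a toy-size shortcut.

```python
import hashlib

# Toy textbook-RSA parameters -- illustration only, far too small to be secure.
p, q = 61, 53
n = p * q               # 3233
e, d = 17, 2753         # e*d = 46801 = 15*(p-1)*(q-1) + 1

def sign(message: bytes) -> int:
    # Hash first, then "decrypt" the digest with the private key d.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # "Encrypt" the signature with the public key e and compare digests.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"pay Alice $10")
assert verify(b"pay Alice $10", sig)
```

As the text notes, the signature is as short as the hash value rather than the message, at the cost that any hash collision yields two messages with the same signature.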
crack[DigitalSign]I : Kf −→ G,
∃(kg ∈ Kg) ∀(m ∈ M) :
(I(kg) = True) ∧ (f(m, g(m, kg), kv) = True)
∧ (f(m, g′(m), kv) = True)
• K is the key space, a set of strings over some alphabet
• f (k, p) = x1 x2 x3 . . . xp for k ∈ K, p ∈ N , and xi ∈ {0, 1}. Let f (k, p)i
denote xi .
The key for a pseudorandom number generator is the seed for the gen-
eration process. The output of f is a string of bits. The salient feature of
the string, of course, is that it is hard to predict bit xi given bits x1 . . . xi−1
without knowledge of the seed. In general, the seed should ideally be a
truly random string of bits; the pseudorandom number generator functions
as a randomness expander and increases the length of the sequence without
significantly increasing the feasibility of predicting the next bit.
Cracking a pseudorandom number generator, of course, involves being
able to predict the next bit.
Definition 8 The cracking problem
crack[PseudoRandom]I : {0, 1}^∗ × N −→ G,
given an instance of PseudoRandom, takes a sequence of bits and the number
of bits p in the sequence and produces a function g′ ∈ G : N −→ {0, 1} that
can then be used to predict any bit of a pseudorandom sequence up to the
(p + 1)th bit. The cracking problem is to compute
crack[PseudoRandom]I({0, 1}^p, p) = g′
such that:
∃(k ∈ K) ∀(p, q ∈ N) : (I(k) = True) ∧ ((q ≤ p + 1) ⇒ (f(k, p)q = g′(q)))
with probability greater than 1/2 + ε for some small ε.
The identification oracle here identifies the seed used to generate the
pseudorandom bit sequence.
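A generator fitting this definition can be sketched as a randomness expander: a short, truly random seed k is stretched into a long bit string f(k, p). The construction below, hashing the seed with a counter, is a stand-in chosen for illustration, not an analyzed pseudorandom generator.

```python
import hashlib

def f(seed: bytes, p: int) -> str:
    # Expand the seed into p pseudorandom bits: f(k, p) = x1 x2 ... xp.
    bits = ""
    counter = 0
    while len(bits) < p:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        bits += "".join(format(byte, "08b") for byte in block)
        counter += 1
    return bits[:p]

out = f(b"short truly random seed", 1000)
assert len(out) == 1000 and set(out) <= {"0", "1"}
# The same seed always yields the same sequence; cracking the generator
# would mean predicting bit x_i from x_1 ... x_{i-1} without the seed.
assert f(b"short truly random seed", 1000) == out
```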
The complexity of each primitive has two parts: feasibility and security.
The security requirement for each primitive is that its corresponding cracking
problem be in Hard(n) for its size parameter n; this is what is necessary
for the user’s goals to be protected from an adversary’s intervention. The
feasibility requirements, if met, ensure that the primitive is usable; that is,
the functions that make up the primitive must be in Easy(n). In practice,
it is relatively easy to create systems that meet the feasibility require-
ments, though ensuring that systems meet the security requirements can be
tricky to impossible.
For each primitive we describe the size parameter and the particular fea-
sibility requirements, as well as restate the security requirement.
Feasibility
Symmetric is feasible if both f and g are in Easy(n).
Security
Symmetric is secure if crack[Symmetric]I is in Hard(n).
Feasibility
OneWayHash is feasible if f is in Easy(n).
Security
OneWayHash is secure if crack[OneWayHash]I is in Hard(n).
Feasibility
PublicKey is feasible if f , g, and generating a valid key pair (kf , kg )
are all in Easy(n).
Security
PublicKey is secure if crack[PublicKey]I is in Hard(n).
Feasibility
DigitalSign is feasible if f and g are in Easy(n).
Security
DigitalSign is secure if crack[DigitalSign]I is in Hard(n).
Feasibility
PseudoRandom is feasible if f is in Easy(n).
Security
PseudoRandom is secure if crack[PseudoRandom]I is in Hard(n).
It is less than perfectly intuitive that the size parameter for PseudoRan-
dom be the length of the seeding key, but the key is the source for the
randomness that PseudoRandom expands into the pseudorandom sequence
it produces as output. The connection should be clearer upon consideration
of the fact that a brute-force search through the keyspace would find all
pseudorandom sequences and thus crack PseudoRandom.
Chapter 3
Quantum Computers
The state of a single qubit alone can be thought of as a unit vector in
a two-dimensional vector space with basis {|0⟩, |1⟩}. Here |0⟩ and |1⟩ are
orthogonal vectors representing quantum states such as spin up and spin
down or vertical and horizontal polarization. A qubit can be in state |0⟩ or
in state |1⟩, but it can also be in a superposition x|0⟩ + y|1⟩ of the two states.
The complex amplitudes x and y determine which state we will see if we make
a measurement. When an observer measures a qubit in this superposition,
the probability that the observer will see state |0⟩ is |x|^2 and the probability
of seeing |1⟩ is |y|^2. Note that because x|0⟩ + y|1⟩ is a unit vector, the sum
|x|^2 + |y|^2 must be equal to 1.
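The normalization constraint and the measurement probabilities can be checked numerically; this sketch simply represents a qubit by its amplitude pair (x, y):

```python
import math

# A qubit in the superposition x|0> + y|1>; amplitudes may be complex.
x, y = 1 / math.sqrt(2), 1j / math.sqrt(2)

# Unit-vector constraint: |x|^2 + |y|^2 = 1.
assert math.isclose(abs(x) ** 2 + abs(y) ** 2, 1.0)

# Measurement probabilities for the two outcomes.
p0 = abs(x) ** 2   # probability of observing |0>
p1 = abs(y) ** 2   # probability of observing |1>
print(p0, p1)      # each ~0.5 for this equal superposition
```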
The second quantum-mechanical property that interests us is quantum
entanglement, which ties qubits inextricably to each other over the course of
operations. Because qubits can be entangled and interfere with each other,
the state of a multiple-qubit system cannot be represented generally as a
linear combination of the state vectors of each qubit; the interactions between
each pair of qubits are as relevant as the state of each qubit itself. The state of
the system, then, cannot be described in terms of a simple Cartesian product
of the individual spaces, but rather a tensor product. We will not go into the
mathematics behind tensor products here, but one significant consequence
of this fact is that the number of dimensions of the combined space is the
product rather than the sum of the numbers of dimensions in each of the
component spaces. For example, the Cartesian product of spaces with bases
{x, y, z} and {u, v}, respectively, has basis {u, v, x, y, z} with 2 + 3 = 5
elements. The tensor product of the spaces, however, has basis {u ⊗ x, u ⊗
y, u ⊗ z, v ⊗ x, v ⊗ y, v ⊗ z} with 2 · 3 = 6 elements, where u ⊗ x denotes the
tensor product of vectors u and x. We write the tensor product |0⟩ ⊗ |0⟩ as
|00⟩, so the vector space for a two-qubit system has basis {|00⟩, |01⟩, |10⟩,
|11⟩} and the vector space for a three-qubit system has basis {|000⟩, |001⟩,
. . . , |111⟩}, and so on.
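This dimension-counting can be verified directly with a small tensor (Kronecker) product helper, written here in plain Python for illustration:

```python
def tensor(u, v):
    # Tensor (Kronecker) product of two vectors: every pairwise product
    # of components, so the dimensions multiply rather than add.
    return [a * b for a in u for b in v]

ket0 = [1, 0]
ket1 = [0, 1]

ket00 = tensor(ket0, ket0)       # the state |00>
assert len(ket00) == 4           # 2 * 2 = 4 dimensions, not 2 + 2

ket000 = tensor(tensor(ket0, ket0), ket0)
assert len(ket000) == 8          # three qubits: 2^3 = 8 basis states

# The text's example: spaces of dimension 3 and 2 combine to dimension 6.
assert len(tensor([0, 0, 0], [0, 0])) == 6
print(ket00)  # [1, 0, 0, 0]: all amplitude on the |00> basis vector
```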
One other property of quantum computers that is notable for its difference
from classical computation is that all operations are reversible. On one level,
this is due to the fact that classical computations dissipate heat, and with
it information, whereas quantum operations dissipate no heat and therefore
retain all information across each calculation. Since reversible quantum gates
exist that permit the full complement of familiar logical operations, however,
this point need not concern us.
3.1.2 The parallel potential of quantum computers
It is through entanglement and superposition that quantum computers offer
a potentially exponential speedup over classical computers. The fact that
entanglement implies a tensor product rather than Cartesian product means
that a system of multiple qubits has a state space that grows exponentially
in the number of qubits. Furthermore, because a qubit or system of qubits
can be in a superposition of states, one operator applied to such a system
can operate on all the states simultaneously. This gives quantum computers
enormous computational power: an operator can be applied to a superposi-
tion of all possible inputs, performing an exponential number of operations
simultaneously! The implications will be profound if a working quantum
computer can indeed be built.
There is a catch, however: since the result will be a superposition of the
possible outputs, a measurement of the result will not necessarily reveal the
desired answer. In fact, a simple measurement will find any one of the pos-
sible outputs, taken randomly from the probability distribution of the wave
amplitudes: in the naïve case, we are no better off than with a classical com-
puter, since we can measure only one randomly chosen result. The key to
designing quantum algorithms, then, is finding clever methods for manipulat-
ing probability amplitudes so that the desired answer has a high probability
of being measured at the end of the computation. This is far from easy in
general, though some clever techniques have been explored, such as using a
quantum Fourier transform to amplify answers that are multiples of the pe-
riod of a function (this is the technique Shor used in his factoring algorithm,
discussed in Section 3.2).
3.1.3 Decoherence
The main problem thus far prohibiting actual realization of a quantum com-
puter (unless, of course, the NSA or a similar organization has quietly built
one without public knowledge!) is decoherence, or the interaction of the quan-
tum system with the environment, disturbing the quantum state and leading
to errors in the computation. We will not discuss this problem further in this
paper, except to mention that techniques of quantum error correction have
been used successfully to combat some effects of decoherence, but there is
still a long way to go before building a large-scale quantum computer will be
possible. For a detailed look at quantum error correction and other issues in
quantum information, see part III of Nielsen and Chuang’s text [17].
3.3 Consequences
Shor’s algorithms have obvious and potentially catastrophic implications for
the field of cryptography. Many cryptosystems, including the popular RSA
cryptosystem, depend for their security on the assumption that factoring
large numbers is difficult; others depend on the difficulty of computing dis-
crete logarithms. The discovery of this polynomial-time quantum factoring
algorithm means that anyone with a quantum computer could easily crack
RSA and many other cryptosystems, and possibly much more. The full po-
tential of quantum computers is unknown. Though we will not address them
here, a few other quantum algorithms have been discovered, such as Grover’s
search algorithm [12], and there has been some work on quantum attacks
on cryptographic systems, such as the 1998 paper by Brassard, Høyer, and
Tapp on quantum cryptanalysis of hash functions [4].
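The number-theoretic heart of Shor's algorithm is classical: once the period r of f(x) = a^x mod N is known, N can usually be split via gcd(a^{r/2} ± 1, N). Only the period-finding step needs the quantum Fourier transform; the brute-force loop below stands in for it in this toy classical demonstration:

```python
from math import gcd

def find_period(a: int, N: int) -> int:
    # Classically find the order r of a modulo N; the quantum speedup in
    # Shor's algorithm replaces exactly this exponential-time loop.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = find_period(a, N)      # 7^4 = 2401 = 1 (mod 15), so r = 4
assert r % 2 == 0
factor = gcd(pow(a, r // 2, N) - 1, N)
print(r, factor)           # r = 4; gcd(7^2 - 1, 15) = gcd(48, 15) = 3
assert factor not in (1, N) and N % factor == 0
```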
In the next chapter we will look at the possible strengths of quantum
computers and assess their implications for the cryptographic primitives we
defined in Chapter 2.
Chapter 4
Cryptographic Implications of
Quantum Computers
Quantum computers may have much more in store for cryptography than
merely the demise of RSA; on the other hand, it may turn out that they have
no more power than classical computers after all and it is just coincidence
that the quantum polynomial-time factoring algorithm was discovered before
the classical one. Here we introduce the most relevant complexity class for
quantum computers and investigate how it might fit into classical hierarchies
of complexity classes. The implications for cryptography are then explored.
common. Here we examine some possibilities and their consequences.
In 1977, Gill proposed the class BPP (defined in Section 1.4.1) and
showed that
P ⊆ BPP ⊆ PSPACE,
and while P ⊆ NP ⊆ PSPACE, it is not known whether either BPP ⊆ NP
or its converse is true [10]. Bernstein and Vazirani demonstrated that
BPP ⊆ BQP ⊆ PSPACE.
4.1.2 Possibilities
The reader should keep in mind that none of the inclusions just discussed are
known to be proper. It is theoretically still possible that P = NP, or even
P = PSPACE, though these equalities are widely believed to be false.
Where is BQP?     Symmetric   OneWayHash   PublicKey   DigitalSign   PseudoRandom   Section
BPP = BQP ⊂ NP    √           √            ≈           ≈             √              4.2.1
BPP ⊂ BQP         √           √            ≈           ≈             √              4.2.2
UP ⊆ BQP          √           √            ×           ×             ×              4.2.3
NP ⊆ BQP          ×           ×            ×           ×             ×              4.2.4

Table 4.1: Summary of estimated implications. √ denotes survival of the
primitive under that possibility (meaning that feasible, secure instances of
the primitive may still exist), × means no survival, and ≈ indicates limited
survival
4.2 Implications
4.2.1 BPP = BQP ⊂ NP
The case where BQP = BPP and both are properly included in NP is
simple: BQP introduces no new consequences for cryptography not present
in BPP. Note that for this to be true, however, classical equivalents for
Shor’s algorithms would have to exist. This implies the consequences of the
next section, though coming from BPP rather than BQP.
4.2.3 UP ⊆ BQP
The possibility UP ⊆ BQP may be very closely related to UP ⊆ P. If
UP ⊆ P, one-way functions as defined by Grollmann and Selman [11] cannot
exist.
1. P 6= UP
4.2.4 NP ⊆ BQP
This result is highly unlikely, but it would have profound implications. As
long as we have an oracle to correctly identify a key, any of these primitives
can be cracked with nondeterministic guessing (in the case of OneWayHash,
we would guess and check messages rather than keys).
Proof. For this proof, we do differentiate between NP and FNP and be-
tween BQP and FBQP (which is the obvious analogue to FNP). Easy(n)
is a class of functions, not decision problems, so we must show essentially
that NP ⊆ BQP implies that crack[Symmetric]I ∈ FBQP.
Define an arbitrary ordering over the keys k1 , k2 , . . . , k2n ∈ K, where n is
the length of a key, which is also the size parameter.
Let M be an NTM that takes two integers m and n, such that m < n, as
input and runs the following algorithm:
• guess a key kj
• if j < m or n < j then reject
• else if I(kj) = True then accept
• else reject
C implements crack[Symmetric]I since it returns the key k such that
I(k) = True. The quantum computer will have made O(n) calls to M ,
which is an NTM and so recognizes a language in NP, which is in BQP by
assumption. C makes O(n) calls to polynomial-time M , so it is obviously
in Easy(n). Therefore NP ⊆ BQP implies that crack[Symmetric]I is in
Easy(n). 2
Similar proofs can be constructed for the other primitives using very
similar algorithms, since each primitive has some type of key, or a message,
that can be nondeterministically guessed.
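The binary search performed by C can be sketched classically. Here `range_has_valid_key(lo, hi)` stands in for a query to M, the NTM whose language is, under the hypothesis, decidable in BQP; the identification oracle I is mocked for illustration with a known secret index.

```python
def crack(num_keys: int, range_has_valid_key) -> int:
    # Binary search for the index j of the valid key, using O(log|K|) = O(n)
    # queries to the decision procedure for M's language.
    lo, hi = 1, num_keys
    while lo < hi:
        mid = (lo + hi) // 2
        if range_has_valid_key(lo, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Mock instance: 2^10 keys, with key index 397 the unique valid one, so
# n = 10 bits and the search makes about 10 oracle queries.
SECRET = 397
oracle = lambda lo, hi: lo <= SECRET <= hi   # stands in for queries to M
assert crack(2 ** 10, oracle) == SECRET
```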
Chapter 5
Conclusions
5.3 Open questions and future work
The topics explored and results obtained in this thesis could potentially be
the starting points for research in a number of different directions.
Acknowledgments
I would like to extend a hearty thanks to Tom O’Connell, who gave me very
constructive comments, especially on the theorems of Chapter 4. And I am
greatly indebted to my advisor Sean W. Smith, who guided me through the
whole process with encouragement and helpful criticism alike. He was much
more than reasonably patient as I pushed back draft deadlines again and
again, and he helped me to find a vision for this thesis and make it a reality.
Bibliography
[1] Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazi-
rani. Strengths and weaknesses of quantum computing. SIAM J. Com-
put., 26(5):1510–1523, October 1997. arXiv:quant-ph/9701001.
[3] Gilles Brassard. Quantum information processing: The good, the bad
and the ugly. In Burton S. Kaliski, Jr., editor, CRYPTO ’97, volume
1294, pages 337–341. Springer, 1997.
[4] Gilles Brassard, Peter Høyer, and Alain Tapp. Quantum cryptanalysis
of hash and claw-free functions. In Claudio L. Lucchesi and Arnaldo V.
Moura, editors, LATIN ’98, volume 1380, pages 163–169. Springer, 1998.
[6] David Deutsch. Quantum theory, the Church-Turing principle and the
universal quantum computer. In Proceedings of the Royal Society of
London Ser. A, volume A400, pages 97–117, 1985.
[9] Lance Fortnow and John Rogers. Complexity limitations on quantum
computation. In 13th Annual IEEE Conference on Computational Com-
plexity, pages 202–209. IEEE Computer Society, 1998.
[10] John Gill. Computational complexity of probabilistic Turing machines.
SIAM J. Comput., 6(4):675–695, December 1977.
[11] Joachim Grollmann and Alan L. Selman. Complexity measures for
public-key cryptosystems. SIAM J. Comput., 17(2):309–335, April 1988.
[12] Lov K. Grover. A fast quantum mechanical algorithm for database
search. In ACM Symposium on Theory of Computing, pages 212–219,
1996.
[13] Juris Hartmanis and Richard E. Stearns. On the computational com-
plexity of algorithms. Transactions of the American Mathematical So-
ciety, 117:285–306, May 1965.
[14] John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction
to Automata Theory, Languages, and Computation. Addison-Wesley,
second edition, 2001.
[15] David S. Johnson. A catalog of complexity classes. In J. van Leeuwen,
editor, Handbook of Theoretical Computer Science, chapter 2, pages 68–
161. Elsevier Science Publishers B.V.: Amsterdam, The Netherlands,
1990.
[16] Ralph C. Merkle. Secure communications over insecure channels. Com-
munications of the ACM, 21(4):294–299, 1978.
[17] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and
Quantum Information. Cambridge University Press, 2000.
[18] Christos H. Papadimitriou. Computational Complexity. Addison-Wesley,
1994.
[19] Eleanor Rieffel and Wolfgang Polak. An introduction to quantum com-
puting for non-physicists. arXiv:quant-ph/9809016, 1998.
[20] Ronald L. Rivest. Cryptography. In J. van Leeuwen, editor, Handbook
of Theoretical Computer Science, chapter 13, pages 718–755. Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands, 1990.
[21] Ronald L. Rivest, Adi Shamir, and Leonard Adleman. A method for
obtaining digital signatures and public-key cryptosystems. Communi-
cations of the ACM, 21(2):120–126, 1978.
[22] Bruce Schneier. Applied Cryptography. John Wiley & Sons, Inc., second
edition, 1996.