SoK - Computer-Aided Cryptography - IEEE SP2021
Manuel Barbosa∗, Gilles Barthe†‡, Karthikeyan Bhargavan§, Bruno Blanchet§, Cas Cremers¶, Kevin Liao†‖, Bryan Parno∗∗
∗University of Porto (FCUP) and INESC TEC, †Max Planck Institute for Security & Privacy, ‡IMDEA Software Institute,
§INRIA Paris, ¶CISPA Helmholtz Center for Information Security, ‖MIT, ∗∗Carnegie Mellon University
Abstract—Computer-aided cryptography is an active area of research that develops and applies formal, machine-checkable approaches to the design, analysis, and implementation of cryptography. We present a cross-cutting systematization of the computer-aided cryptography literature, focusing on three main areas: (i) design-level security (both symbolic security and computational security), (ii) functional correctness and efficiency, and (iii) implementation-level security (with a focus on digital side-channel resistance). In each area, we first clarify the role of computer-aided cryptography—how it can help and what the caveats are—in addressing current challenges. We next present a taxonomy of state-of-the-art tools, comparing their accuracy, scope, trustworthiness, and usability. Then, we highlight their main achievements, trade-offs, and research challenges. After covering the three main areas, we present two case studies. First, we study efforts in combining tools focused on different areas to consolidate the guarantees they can provide. Second, we distill the lessons learned from the computer-aided cryptography community's involvement in the TLS 1.3 standardization effort. Finally, we conclude with recommendations to paper authors, tool developers, and standardization bodies moving forward.

I. INTRODUCTION

Designing, implementing, and deploying cryptographic mechanisms is notoriously hard to get right, with high-profile design flaws, devastating implementation bugs, and side-channel vulnerabilities being regularly found even in widely deployed mechanisms. Each step is highly involved and fraught with pitfalls. At the design level, cryptographic mechanisms must achieve specific security goals against some well-defined class of attackers. Typically, this requires composing a series of sophisticated building blocks—abstract constructions make up primitives, primitives make up protocols, and protocols make up systems. At the implementation level, high-level designs are then fleshed out with concrete functional details, such as data formats, session state, and programming interfaces. Moreover, implementations must be optimized for interoperability and performance. At the deployment level, implementations must also account for low-level threats that are absent at the design level, such as side-channel attacks.

Attackers are thus presented with a vast attack surface: They can break high-level designs, exploit implementation bugs, recover secret material via side-channels, or any combination of the above. Preventing such varied attacks on complex cryptographic mechanisms is a challenging task, and existing methods are hard-pressed to do so. Pen-and-paper security proofs often consider pared-down "cores" of cryptographic mechanisms to simplify analysis, yet remain highly complex and error-prone; demands for aggressively optimized implementations greatly increase the risks of introducing bugs, which are difficult to catch by code testing or auditing; ad hoc constant-time coding recipes for mitigating side-channel attacks are tricky to implement, and yet may not cover the whole gamut of leakage channels exposed in deployment. Unfortunately, the current modus operandi—relying on a select few cryptography experts armed with rudimentary tooling to vouch for security and correctness—simply cannot keep pace with the rate of innovation and development in the field.

Computer-aided cryptography, or CAC for short, is an active area of research that aims to address these challenges. It encompasses formal, machine-checkable approaches to designing, analyzing, and implementing cryptography; the variety of tools available address different parts of the problem space. At the design level, tools can help manage the complexity of security proofs, even revealing subtle flaws or as-yet-unknown attacks in the process. At the implementation level, tools can guarantee that highly optimized implementations behave according to their design specifications on all possible inputs. At the deployment level, tools can check that implementations correctly protect against classes of side-channel attacks. Although individual tools may only address part of the problem, when combined, they can provide a high degree of assurance.

Computer-aided cryptography has already fulfilled some of these promises in focused but impactful settings. For instance, computer-aided security analyses were influential in the recent standardization of TLS 1.3 [1]–[4]. Formally verified code is also being deployed at Internet-scale—components of the HACL∗ library [5] are being integrated into Mozilla Firefox's NSS security engine, elliptic curve code generated using the Fiat Cryptography framework [6] has populated Google's BoringSSL library, and EverCrypt [7] routines are used in the Zinc crypto library for the Linux kernel. In light of these successes, there is growing enthusiasm for computer-aided cryptography. This is reflected in the rapid emergence of a dynamic community comprised of theoretical and applied cryptographers, cryptography engineers, and formal methods practitioners. Together, the community aims to achieve broader adoption of computer-aided cryptography, blending ideas from many fields, and more generally, to contribute to the future development of cryptography.

At the same time, computer-aided cryptography risks becoming a victim of its own success. Trust in the field can be undermined by difficulties in understanding the guarantees and fine-print caveats of computer-aided cryptography artifacts. The field is also increasingly broad, complex, and rapidly evolving, so no one has a complete understanding of every facet. This can make it difficult for the field to develop and
address pressing challenges, such as the expected transition to post-quantum cryptography and scaling from lower-level primitives and protocols to whole cryptographic systems.

Given these concerns, the purpose of this SoK is three-fold:
1) We clarify the current capabilities and limitations of computer-aided cryptography.
2) We present a taxonomy of computer-aided cryptography tools, highlighting their main achievements and important trade-offs between them.
3) We outline promising new directions for computer-aided cryptography and related areas.

We hope this will help non-experts better understand the field, point experts to opportunities for improvement, and showcase to stakeholders (e.g., standardization bodies and open source projects) the many benefits of computer-aided cryptography.

A. Structure of the Paper

The subsequent three sections expand on the role of computer-aided cryptography in three main areas: Section II covers how to establish design-level security guarantees, using both symbolic and computational approaches; Section III covers how to develop functionally correct and efficient implementations; Section IV covers how to establish implementation-level security guarantees, with a particular focus on protecting against digital side-channel attacks.

We begin each section with a critical review of the area, explaining why the considered guarantees are important, how current tools and techniques outside CAC may fail to meet these guarantees, how CAC can help, the fine-print caveats of using CAC, and necessary technical background. We then taxonomize state-of-the-art tools based on criteria along four main categories: accuracy (A), scope (S), trust (T), and usability (U). For each criterion, we label it with one or more categories, explain its importance, and provide some light discussion about tool support for it. The ensuing discussion highlights broader points, such as main achievements, important takeaways, and research challenges. Finally, we end each section with references for further reading. Given the amount of material we cover, we are unable to be exhaustive in each area, but we still point to other relevant lines of work.

Sections V and VI describe two case studies. Our first case study (Section V) examines how to combine tools that address different parts of the problem space and consolidate their guarantees. Our second case study (Section VI) distills the lessons learned from the computer-aided cryptography community's involvement in the TLS 1.3 standardization effort.

Finally, in Section VII, we offer recommendations to paper authors, tool developers, and standardization bodies on how to best move the field of computer-aided cryptography forward.

II. DESIGN-LEVEL SECURITY

In this section, we focus on the role of computer-aided cryptography in establishing design-level security guarantees. Over the years, two flavors of design-level security have been developed in two largely separate communities: symbolic security (in the formal methods community) and computational security (in the cryptography community). This has led to two complementary strands of work, so we cover them both.

A. Critical Review

Why is design-level security important? Validating cryptographic designs through mathematical arguments is perhaps the only way to convincingly demonstrate their security against entire classes of attacks. This has become standard practice in cryptography, and security proofs are necessary for any new standard. This holds true at all levels: primitives, protocols, and systems. When using a lower-level component in a larger system, it is crucial to understand what security notion and adversarial model the proof is relative to. Similar considerations apply when evaluating the security of a cryptographic system relative to its intended deployment environment.

How can design-level security fail? The current modus operandi for validating the security of cryptographic designs using pen-and-paper arguments is alarmingly fragile. This is for two main reasons:
• Erroneous arguments. Writing security arguments is tedious and error-prone, even for experts. Because they are primarily done on pen-and-paper, errors are difficult to catch and can go unnoticed for years.
• Inappropriate modeling. Even when security arguments are correct, attacks can lie outside the model in which they are established. This is a known and common pitfall: To make (pen-and-paper) security analysis tractable, models are often heavily simplified into a cryptographic core that elides many details about cryptographic designs and attacker capabilities. Unfortunately, attacks are often found outside of this core.

How are these failures being addressed outside CAC? To minimize erroneous arguments, cryptographers have devised a number of methodological frameworks for security analysis (e.g., the code-based game playing [8] and universal composability [9] frameworks). The high-level goal of these frameworks is to decompose security arguments into simpler arguments that are easier to get right and then smoothly combine the results. Still, pen-and-paper proofs based on these methodologies remain complex and error-prone, which has led to suggestions of using computer-aided tools [10].

To reduce the risks of inappropriate modeling, real-world provable security [11]–[13] advocates making security arguments in more accurate models of cryptographic designs and adversarial capabilities. Unfortunately, the added realism comes with greater complexity, complicating security analysis.

How can computer-aided cryptography help? Computer-aided cryptography tools are effective for detecting flaws in cryptographic designs and for managing the complexity of security proofs. They crystallize the benefits of on-paper methodologies and of real-world provable security. They also deliver trustworthy analyses for complex designs that are beyond reach of pen-and-paper analysis.

What are the fine-print caveats? Computer-aided security proofs are only as good as the statements being proven. However, understanding these statements can be challenging. Most security proofs rely on implicit assumptions; without
proper guidance, reconstructing top-level statements can be challenging, even for experts. (As an analogy, it is hard even for a talented mathematician to track all dependencies in a textbook.) Finally, as with any software, tools may have bugs.

What background do I need to know about symbolic security? The symbolic model is an abstract model for representing and analyzing cryptographic protocols. Messages (e.g., keys, nonces) are represented symbolically as terms (in the parlance of formal logic). Typically, terms are atomic data, meaning that they cannot be split into, say, component bitstrings. Cryptographic primitives are modeled as black-box functions over terms related by a set of mathematical identities called an equational theory. For example, symmetric encryption can be modeled by the black-box functions Enc and Dec related by the following equational theory: Dec(Enc(m, k), k) = m. This says that decrypting the ciphertext Enc(m, k) using the key k recovers the original plaintext m.

An adversary is restricted to compute (i.e., derive new terms contributing to its knowledge set) using only the specified primitives and equational theory. Equational theories are thus important for broadening the scope of analysis—ignoring valid equations implicitly weakens the class of adversaries considered. In the example above, m and k are atomic terms, and so equipped with only the given identity, an adversary can decrypt a ciphertext only if it has knowledge of the entire secret key. Such simplifications enable modeling and verifying protocols using symbolic logic. Symbolic tools are thus well-suited to automatically searching for and unveiling logical flaws in complex cryptographic protocols and systems.

Symbolic security properties come in two main flavors: trace properties and equivalence properties. Trace properties state that a bad event never occurs on any execution trace. For example, a protocol preserves trace-based secrecy if, for any execution trace, secret data is not in the adversarial knowledge set. On the other hand, equivalence properties state that an adversary is unable to distinguish between two protocols, often with one being the security specification. Equivalence properties typically cannot be (naturally or precisely) expressed as trace properties. For example, a protocol preserves indistinguishability-based secrecy if the adversary cannot differentiate between a trace with the real secret and a trace with the real secret replaced by a random value.

What background do I need to know about computational security? In the computational model, messages are bitstrings, cryptographic primitives are probabilistic algorithms on bitstrings, and adversaries are probabilistic Turing machines. For example, symmetric encryption can be modeled by a triple of algorithms (Gen, Enc, Dec). The probabilistic key generation algorithm Gen outputs a bitstring k. The encryption (decryption) algorithm Enc (Dec) takes as input a key k and a plaintext m (ciphertext c), and outputs a ciphertext c (plaintext m). The basic correctness property that must hold for every key k output by Gen and every message m in the message space is Dec(Enc(m, k), k) = m. Because keys are bitstrings in this model, knowing bits of an encryption key reduces the computational resources required to decrypt a ciphertext.

Computational security properties are also probabilistic and can be characterized along two axes: game-based or simulation-based, and concrete or asymptotic.

Game-based properties specify a probabilistic experiment called a "game" between a challenger and an adversary, and an explicit goal condition that the adversary must achieve to break a scheme. Informally, security statements say: For all adversaries, the probability of achieving the goal condition does not exceed some threshold. The specific details, e.g., the adversary's computational resources and the threshold, depend on the choice of concrete or asymptotic security.

A core proof methodology for game-based security is game hopping. In the originally specified game, the adversary's success probability may be unknown. Thus, we proceed by step-wise transforming the game until reaching one in which the success probability can be computed. We also bound the increases in the success probability from the game transformations, often by reducing to an assumed hard problem (e.g., the discrete log or RSA problems). We can then deduce a bound on the adversary's success probability in the original game. The interested reader can see the tutorials on game hopping by Shoup [14] and Bellare and Rogaway [8].

Simulation-based properties specify two probabilistic experiments: The "real" game runs the scheme under analysis. The "ideal" game runs an idealized scheme that does not involve any cryptography, but instead runs a trusted third party called an ideal functionality, which serves as the security specification. Informally, security statements say: For all adversaries in the real game, there exists a simulator in the ideal game that can translate any attack on the real scheme into an attack on the ideal functionality. Because the ideal functionality is secure by definition, the real scheme must also be secure. In general, simulation-based proofs tend to be more complicated than game-based proofs, but importantly they support composition theorems that allow analyzing complex constructions in a modular way from simpler building blocks. The interested reader can see the tutorial on simulation-based proofs by Lindell [15].

Concrete security quantifies the security of a scheme by bounding the maximum success probability of an adversary given some upper bound on running time. A scheme is (t, ε)-secure if every adversary running for time at most t succeeds in breaking the scheme with probability at most ε. In contrast, asymptotic security views the running time of the adversary and its success probability as functions of some security parameter (e.g., key length), rather than as concrete numbers. A scheme is secure if every probabilistic polynomial time adversary succeeds in breaking the scheme with negligible probability (i.e., with probability asymptotically less than all inverse polynomials in the security parameter).

Of these different security properties, we note that computer-aided security proofs have primarily focused on game-based, concrete security. Work on mechanizing simulation-based proofs is relatively nascent; asymptotic security is the prevailing paradigm in cryptography, but by proving concrete security, asymptotic security follows a fortiori.
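To make the earlier symbolic-model background concrete, the adversary's derivation of new terms can be sketched as a small knowledge-set saturation procedure. This is only a toy illustration of the symbolic model, not the algorithm of any particular tool; the names `enc` and `saturate` are invented for this sketch. Terms are Python strings (atoms) or `("enc", m, k)` tuples, and the only derivation rule beyond possession is the equational theory Dec(Enc(m, k), k) = m, so decryption succeeds only when the adversary knows the entire key term.

```python
def enc(m, k):
    """Build the symbolic term Enc(m, k); terms are atoms (strings) or tuples."""
    return ("enc", m, k)

def saturate(knowledge):
    """Close the adversary's knowledge set under the single rule available
    here: Dec(Enc(m, k), k) = m, applicable only when the whole key term
    k is already known (terms are atomic; partial keys do not exist)."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for term in list(known):
            if isinstance(term, tuple) and term[0] == "enc":
                _, m, k = term
                if k in known and m not in known:
                    known.add(m)  # learned a new plaintext term
                    changed = True
    return known

# Without the key, the ciphertext alone reveals nothing.
assert "m" not in saturate({enc("m", "k")})
# With both keys, the plaintext becomes derivable even through nesting.
assert "m" in saturate({enc(enc("m", "k2"), "k1"), "k1", "k2"})
```

Because terms are atomic, there is no notion of learning part of a key; this is exactly the simplification that makes symbolic analysis amenable to automated search.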
Columns: Tool, Unbound, Trace, Equiv, Eq-thy, State, Link. (The filled/empty per-cell support markers were lost in extraction; the surviving annotations are listed below.)

Unbounded trace-based tools: CPSA. [16]; F7 [17]; F5 [18]; Maude-NPA. [19] (Equiv: d); ProVerif?† [20] (Equiv: d), with extensions fs2pv† [21], GSVerif?† [22], ProVerif-ATP?† [23], and StatVerif?† [24] (Equiv: d); Scyther. [25], with extension scyther-proof.‡§ [26]; Tamarin∗‡ [27] (Equiv: d), with extension SAPIC? [28].
Bounded trace-based tools: CL-AtSe. [29]; OFMC.† [30]; SATMC. [31].
Equivalence-based tools: AKISS? [32] (Equiv: t); APTE? [33] (Equiv: t); DEEPSEC? [34] (Equiv: t); SAT-Equiv? [35] (Equiv: t); SPEC?,§ [36] (Equiv: o).

Specification language: . – security protocol notation, ? – process calculus, ∗ – multiset rewriting, general programming language. Equational theories (Eq-thy): with AC axioms, without AC axioms, fixed. Equivalence properties (Equiv): t – trace equivalence, o – open bisimilarity, d – diff-equivalence. Miscellaneous symbols: † – abstractions, ‡ – interactive mode, § – independent verifiability, previous tool extension.

TABLE I
OVERVIEW OF TOOLS FOR SYMBOLIC SECURITY ANALYSIS. SEE SECTION II-B FOR MORE DETAILS ON COMPARISON CRITERIA.

B. Symbolic Tools: State of the Art

Table I presents a taxonomy of modern, general-purpose symbolic tools. Tools are listed in three groups (demarcated by dashed lines): unbounded trace-based tools, bounded trace-based tools, and equivalence-based tools; within each group, top-level tools are listed alphabetically. Tools are categorized as follows, annotated with the relevant criteria (A, S, T, U) described in the introduction. Note that the capabilities of symbolic tools are more nuanced than what is reflected in the table—the set of examples that tools can handle varies even if they support the same features according to the table.

Unbounded number of sessions (A). Can the tool analyze an unbounded number of protocol sessions? There exist protocols that are secure when at most N sessions are considered, but become insecure with more than N sessions [37]. Bounded tools explicitly limit the analyzed number of sessions and do not consider attacks beyond the cut-off. Unbounded tools can prove the absence of attacks within the model, but at the cost of undecidability [38].

In practice, modern unbounded tools typically substantially outperform bounded tools even for a small number of sessions, and therefore enable the analysis of more complex models. This is because bounded tools are a bit naive in their exploration of the state space, basically enumerating options (but exploiting some symmetry). They therefore typically grow exponentially in the number of sessions. The unbounded tools inherently need to be "more clever" to even achieve unbounded analysis. While their algorithms are more complex, when they work (i.e., terminate), the analysis is independent of the number of sessions.

Trace properties (S). Does the tool support verification of trace properties?

Equivalence properties (S). Does the tool support verification of equivalence properties? There are several different equivalence notions used in current tools. Here, we provide some high-level intuition, but for a more formal treatment, see the survey by Delaune and Hirschi [39].

Trace equivalence (t) means that, for each trace of one protocol, there exists a corresponding trace of the other protocol, such that the messages exchanged in these two traces are indistinguishable. This is the weakest equivalence notion, roughly meaning that it can express the most security properties. (The other stronger notions are often intermediate steps towards proving trace equivalence.) It is also arguably the most natural for formalizing privacy properties.

Open bisimilarity (o) is a strictly stronger notion that captures the knowledge of the adversary by pairs of symbolic traces, called bi-traces. A bi-trace is consistent when the messages in the two symbolic traces are indistinguishable by the adversary. Informally, two protocols are open bisimilar when each action in one protocol can be simulated in the other using a consistent bi-trace.

Diff-equivalence (d) is another strictly stronger notion that is defined for protocols that have the same structure and differ only by the messages they exchange. It means that, during execution, all communications and tests, including those that the adversary can make, either succeed for both protocols or fail for both protocols. This property implies that both protocols still have the same structure during execution.

Equational theories (S). What is the support for equational theories? At a high level, extra support for certain axioms enables detecting a larger class of attacks (see, e.g., [40], [41]). We provide a coarse classification: tools that support a fixed set of equational theories or no equational theories at all; tools that support user-defined equational theories, but without associative-commutative (AC) axioms; and tools that support user-defined equational theories with AC axioms. Supporting associative and commutative properties enables detecting a much larger class of attacks, since they allow the most detailed modeling of, e.g., xor operations, abelian groups, and Diffie-Hellman constructions. One caveat is that the finer details between these coarse classifications often make them incomparable, and even where they overlap, they are not all equally effective for analyzing concrete protocols.

Global mutable state (S). Does the tool support verification of protocols with global mutable state? Many real-world protocols involve shared databases (e.g., key servers) or shared memory, so reasoning support for analyzing complex, stateful attack scenarios extends the reach of such tools [28].

Link to implementation (T). Can the tool extract/generate executable code from specifications in order to link symbolic security guarantees to implementations?

† Abstractions (U). Does the tool use abstraction? Algorithms may use abstraction to overestimate attack possibilities,
e.g., by computing a superset of the adversary's knowledge. This can yield more efficient and fully automatic analysis systems and can be a workaround to undecidability, but comes at the cost of incompleteness, i.e., false attacks may be found or the tool may terminate with an indefinite answer.

‡ Interactive mode (U). Does the tool support an interactive analysis mode? Interactive modes generally trade off automation for control. While push-button tools are certainly desirable, they may fail opaquely (perhaps due to undecidability barriers), leaving it unclear or impossible to proceed. Interactive modes can allow users to analyze failed automated analysis attempts, inspect partial proofs, and provide hints to guide analyses past such barriers.

§ Independent verifiability (T). Are the analysis results independently machine-checkable? Symbolic tools implement complex verification algorithms and decision procedures, which may be buggy and return incorrect results. This places them in the trusted computing base. Exceptions include scyther-proof [26], which generates proof scripts that can be machine-checked in the Isabelle theorem prover [42], and SPEC [36], which can produce explicit evidence of security claims that can be checked for correctness.

Specification language (U). How are protocols specified? The categorizations are domain-specific security protocol languages (.), process calculus (?), multiset rewriting (∗), and general programming language. General programming languages are arguably the most familiar to non-experts, while security protocol languages (i.e., notations for describing message flows between parties) are commonplace in cryptography. Process calculi and multiset rewriting may be familiar to formal methods practitioners. Process calculi are formal languages for describing concurrent processes and their interactions (e.g., [43]–[45]). Multiset rewriting is a more general and lower-level formalism that allows for various encodings of processes, but has no built-in notion of a process. It provides a natural formalism for complex state machines.

C. Symbolic Security: Discussion

Achievements: Symbolic proofs for real-world case studies. Of the considered symbolic tools, ProVerif and Tamarin stand out as having been used to analyze large, real-world protocols. They offer unprecedented combinations of scalability and expressivity, which enables them to deal with complex systems and properties. Moreover, they provide extensive documentation, a library of case studies, and practical usability features (e.g., packaging, a graphical user interface for Tamarin, attack reconstruction in HTML for ProVerif).

Next, we provide a rough sense of their scalability on real-world case studies; more precise numbers can be found in the respective papers. It is important to keep in mind that comparisons between tools are difficult (even on similar case studies), so these numbers should be taken with a grain of salt.

ProVerif has been used to analyze TLS 1.0 [46] (seconds to several hours depending on the security property) and 1.3 [3] (around one hour), Signal [47] (a few minutes to more than a day depending on the security property), and Noise protocols [48] (seconds to days depending on the protocol). In general, more Diffie-Hellman key agreements (e.g., in Signal and Noise) increase analysis times.

Tamarin has been used to analyze the 5G authentication key exchange protocol [49] (around five hours), TLS 1.3 [2], [4] (around one week, requiring 100GB RAM), the DNP3 SAv5 power grid protocol [50] (several minutes), and Noise protocols [51] (seconds to hours depending on the protocol).

Challenge: Verifying equivalence properties. Many security properties can be modeled accurately by equivalence properties, but they are inherently more difficult to verify than trace properties. This is because they involve relations between traces instead of single traces. As such, tool support for reasoning about equivalence properties is substantially less mature. For full automation, either one bounds the number of sessions or one has to use the very strong notion of diff-equivalence, which cannot handle many desired properties, e.g., vote privacy in e-voting and unlinkability.

For the bounded setting, recent developments include support for more equational theories (AKISS [32], DEEPSEC [34]), for protocols with else branches (APTE [33], AKISS, DEEPSEC), and for protocols whose actions are not entirely determined by their inputs (APTE, DEEPSEC). There have also been performance improvements based on partial order reduction (APTE, AKISS, DEEPSEC) or graph planning (SAT-Equiv). For the unbounded setting, diff-equivalence, first introduced in ProVerif [52] and later adopted by Maude-NPA [53] and Tamarin [54], remains the only fully automated approach for proving equivalences. Because trace equivalence is the most natural notion for formalizing privacy properties, verifying more general equivalence properties for an unbounded number of sessions remains a challenge.

Columns: Tool, RF, Auto, Comp, CS, Link, TCB. (The per-cell support markers for RF, Auto, Comp, CS, and Link were lost in extraction; the TCB column survives.)
AutoG&P [55] (TCB: self, SMT); CertiCrypt. [56] (TCB: Coq); CryptHOL [57] (TCB: Isabelle); CryptoVerif? [58] (TCB: self); EasyCrypt. [59] (TCB: self, SMT); F7 [17] (TCB: self, SMT); F∗ [60] (TCB: self, SMT); FCF [61] (TCB: Coq); ZooCrypt [62] (TCB: self, SMT).
Reasoning Focus (RF): automation focus, expressiveness focus. Concrete security (CS): security + efficiency, security only, no support. Specification language: ? – process calculus, . – imperative, functional.

TABLE II
OVERVIEW OF TOOLS FOR COMPUTATIONAL SECURITY ANALYSIS. SEE SECTION II-D FOR MORE DETAILS ON COMPARISON CRITERIA.

D. Computational Tools: State of the Art

Table II presents a taxonomy of general-purpose computational tools. Tools are listed alphabetically and are categorized as follows.

Reasoning focus (U). Is the tool's reasoning focus on automation or on expressivity? Automation focus means being able to produce a security proof automatically or with light interaction (at the cost of some expressiveness). Dually,
expressivity focus means being able to express arbitrary arguments (at the cost of some automation).

Automated proof-finding (U). Can the tool automatically find security proofs? A subset of the automation-focused tools can automatically (non-interactively) find security proofs in restricted settings (e.g., proofs of pairing-based schemes for AutoG&P, proofs of key exchange protocols using a catalog of built-in game transformations for CryptoVerif, proofs of padding-based public-key encryption schemes for ZooCrypt).

Composition (U). Does the tool support compositional reasoning? Support for decomposing security arguments of cryptographic systems into security arguments for their core components is essential for scalable analysis.

Concrete security (A). Can the tool be used to prove concrete bounds on the adversary's success probability and execution time? We consider tools with no support, support for success probability only, and support for both.

Link to implementation (T). Can the tool extract/generate executable code from specifications in order to link computational security guarantees to implementations?

Trusted computing base (T). What lies in the trusted computing base (TCB)? An established general-purpose theorem prover such as Coq [63] or Isabelle [64] is usually held as the minimum TCB for proof checking. Most tools, however, rely on an implementation of the tool's logics in a general-purpose language that must be trusted (self). Automation often relies on SMT solvers [65], such as Z3 [66].

Specification language (U). What kind of specification language is used? All tools support some functional language core for expressing the semantics of operations. Some tools support an imperative language in which to write security games, while others rely on a process calculus.

E. Computational Security: Discussion

Achievements: Machine-checked security for real-world cryptographic designs. Computational tools have been used to develop machine-checked security proofs for a range of real-world cryptographic mechanisms. CryptoVerif has been used for a number of protocols, including TLS 1.3 [3], Signal [47], and WireGuard [67]. EasyCrypt has been used for the Amazon Web Services (AWS) key management system [68] and the SHA-3 standard [69]. F7 was used to build miTLS, a reference implementation of TLS 1.2 with verified computational security at the code level [70], [71]. F∗ was used to implement and verify the security of the TLS 1.3 record layer [1].

Takeaway: CryptoVerif is good for highly automated computational analysis of protocols and systems. CryptoVerif is both a proof-finding and proof-checking tool. It works particularly well for protocols (e.g., key exchange), as it can produce, automatically or with light guidance, a sequence of proof steps that establish security. One distinctive strength of CryptoVerif is its input language based on the applied π-calculus [45], which is well-suited to describing protocols that exchange messages in sequence. Another strength of CryptoVerif is a carefully crafted modeling of security assumptions that helps the automated discovery of proof steps. In turn, automation is instrumental for dealing with large cryptographic games and games that contain many different cases, as is often the case in proofs of protocols.

Takeaway: F∗ is good for analysis of full protocols and systems. F∗ is a general-purpose verification-oriented programming language. It works particularly well for analyzing cryptographic protocols and systems beyond their cryptographic core. Computational proofs in F∗ rely on transforming a detailed protocol description into a final (ideal) program by relying on ideal functionalities for cryptographic primitives. Formal validation of this transformation is carried out manually, with some help from the F∗ verification infrastructure. Formal verification of the final program is done within F∗. This approach is driven by the insight that critical security issues, and therefore also potential attacks, often arise only in detailed descriptions of full protocols and systems (compared to when reasoning about cryptographic cores). The depth of this insight is reflected by the success of F∗-based verification both in helping discover new attacks on real-world protocols like TLS [72], [73] and in verifying their concrete design and implementation [1], [70].

Takeaway: EasyCrypt is the closest to pen-and-paper cryptographic proofs. EasyCrypt supports a general-purpose relational program logic (i.e., a formalism for specifying and verifying properties about two programs or two runs of the same program) that captures many of the common game-hopping techniques. This is complemented by libraries that support other common techniques, e.g., the PRF/PRP switching lemma, hybrid arguments, and lazy sampling [8]. In addition, EasyCrypt features a union bound logic for upper-bounding the probability of some event E in an experiment (game) G (e.g., bounding the probability of collisions in experiments that involve hash functions). Overall, EasyCrypt proofs closely follow the structure of pen-and-paper arguments. A consequence is that EasyCrypt is amenable to proving the security of primitives, as well as protocols and systems.

Challenge: Scaling security proofs for cryptographic systems. Analyzing large cryptographic systems is best done in a modular way by composing simpler building blocks. However, cryptographers have long recognized the difficulties of preserving security under composition [74]. Most game-based security definitions do not provide out-of-the-box composition guarantees, so simulation-based definitions are the preferred choice for analyzing large cryptographic systems, with universal composability (UC) being the gold standard—UC definitions guarantee secure composition in arbitrary contexts [9]. Work on developing machine-checked UC proofs is relatively nascent [75]–[77], but is an important and natural next step for computational tools.

F. Further Reading

Another class of tools leverages the benefits of automated verification to support automated synthesis of secure cryptographic designs, mainly in the computational world [62], [78]–[81]. Cryptographic compilers provide high-level abstractions (e.g., a domain-specific language) for describing cryptographic
tasks, which are then compiled into custom protocol implementations. These have been proposed for verifiable computation [82]–[85], zero-knowledge [86]–[89], and secure multiparty computation [90] protocols, which are parameterized by a proof goal or a functionality to compute. Some are supported by proofs that guarantee the output protocols are correct and/or secure for every input specification [91]–[94].

We recommend readers to also consult other related surveys. Blanchet [95] surveys design-level security until 2012 (with a focus on ProVerif). Cortier et al. [96] survey computational soundness results, which transfer security properties from the symbolic world to the computational world.

III. FUNCTIONAL CORRECTNESS AND EFFICIENCY

In this section, we focus on the role of computer-aided cryptography in developing functionally correct and efficient implementations.

A. Critical Review

Why are functional correctness and efficiency important? To reap the benefits of design-level security guarantees, implementations must be an accurate translation of the design proven secure. That is, they must be functionally correct (i.e., have equivalent input/output behavior) with respect to the design specification. Moreover, to meet practical deployment requirements, implementations must be efficient. Cryptographic routines are often on the critical path for security applications (e.g., for reading and writing TLS packets or files in an encrypted file system), and so even a few additional clock cycles can have a detrimental impact on overall performance.

How can functional correctness and efficiency fail? If performance is not an important goal, then achieving functional correctness is relatively easy—just use a reference implementation that does not deviate too far from the specification, so that correctness is straightforward to argue. However, performance demands drive cryptographic code into extreme contortions that make functional correctness difficult to achieve, let alone prove. For example, OpenSSL is one of the fastest open-source cryptographic libraries; it achieves this speed in part through the use of Perl code to generate strings of text that additional Perl scripts interpret to produce input to the C preprocessor, which ultimately produces highly tuned, platform-specific assembly code [103]. Many more examples of high-speed crypto code written at assembly and pre-assembly levels can be found in SUPERCOP [107], a benchmarking framework for cryptography implementations.

More broadly, efficiency considerations typically rule out exclusively using high-level languages. Instead, C and assembly are the de facto tools of the trade, adding memory safety to the list of important requirements. Indeed, memory errors can compromise secrets held in memory, e.g., in the Heartbleed attack [108]. Fortunately, as we discuss below, proving memory safety is table stakes for most of the tools we discuss. Additionally, achieving best-in-class performance demands aggressive, platform-specific optimizations, far beyond what is achievable by modern optimizing compilers (which are problematic in their own ways, as we will see in Section IV). Currently, these painstaking efforts are manually repeated for each target architecture.

How are these failures being addressed outside CAC? Given its difficulty, the task of developing high-speed cryptography is currently entrusted to a handful of experts. Even so, experts make mistakes (e.g., a performance optimization to OpenSSL's AES-GCM implementation nearly reached deployment even though it enabled arbitrary message forgeries [109]; an arithmetic bug in OpenSSL led to a full key recovery attack [110]). Current solutions for preventing more mistakes are (1) auditing, which is costly in both time and expertise, and (2) testing, which cannot be complete for the size of inputs used in cryptographic algorithms. These solutions are also clearly inadequate: Despite widespread usage and scrutiny, OpenSSL's libcrypto library reported 24 vulnerabilities between January 1, 2016 and May 1, 2019 [7].

How can computer-aided cryptography help? Cryptographic code is an ideal target for program verification. Such code is both critically important and difficult to get right. The use of heavyweight formal methods is perhaps the only way to attain the high-assurance guarantees expected of it. At the same time, because the volume of code in cryptographic libraries is relatively small (compared to, say, an operating system), verifying complex, optimized code is well within reach of existing tools and reasonable human effort, without compromising efficiency.

What are the fine-print caveats? Functional correctness makes implicit assumptions, e.g., correct modeling of hardware functional behavior. Another source of implicit assumptions is the gap between code and verified artifacts, e.g., verification may be carried out on a verification-friendly representation of the source code, rather than on the source code itself. Moreover, proofs may presuppose correctness of libraries, e.g., for efficient arithmetic. Finally, as with any software, verification tools may have bugs.

What background do I need to know? Functional correctness is the central focus of program verification. An implementation can be proved functionally correct in two different ways: equivalence to a reference implementation, or satisfying a functional specification, typically expressed as preconditions (what the program requires on inputs) and postconditions (what the program guarantees on outputs). Both forms of verification are supported by a broad range of tools.

A unique aspect of cryptographic implementations is that their correctness proofs often rest on non-trivial mathematics. Mechanizing them thus requires striking a good balance between automation and user control. Nevertheless, SMT-based automation remains instrumental for minimizing verification effort, and almost all tools offer an SMT-based backend.

Typically, functional correctness proofs are carried out at source level. A long-standing challenge is how to carry guarantees to machine code. This can be addressed using verified compilers, which are supported by formal correctness proofs. CompCert [111] is a prime example of a moderately optimizing verified compiler for a large fragment of C. However, the
Tool                 Input language   Target(s)                   TCB
Cryptol + SAW [97]   C, Java          C, Java                     SAT, SMT
CryptoLine [98]      CryptoLine       C                           Boolector, MathSAT, Singular
Dafny [99]           Dafny            C#, Java, JavaScript, Go    Boogie, Z3
F∗ [60]              F∗               OCaml, F#, C, Asm, Wasm     Z3, typechecker
Fiat Crypto [6]      Gallina          C                           Coq, C compiler
Frama-C [100]        C                C                           Coq, Alt-Ergo, Why3
gfverif [101]        C                C                           g++, Sage
Jasmin [102]         Jasmin           Asm                         Coq, Dafny, Z3
Vale [103], [104]    Vale             Asm                         Dafny or F∗, Z3
VST [105]            Gallina          C                           Coq
Why3 [106]           WhyML            OCaml                       SMT, Coq

Each tool is additionally rated on automation (automated; automated + interactive; interactive), memory safety, and parametric verification.

TABLE III
OVERVIEW OF TOOLS FOR FUNCTIONAL CORRECTNESS. SEE SECTION III-B FOR MORE DETAILS ON COMPARISON CRITERIA.
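To make the two proof styles from Section III concrete, the sketch below (ours; the function names and toy routines are hypothetical, not taken from any surveyed tool) shows (1) an optimized routine whose correctness is stated as equivalence to a reference implementation, and (2) a routine annotated with a pre-/postcondition contract. Tools like those in Table III establish such properties by proof over all inputs; the runtime asserts here merely stand in for the obligations a verifier would discharge statically.

```c
#include <stdint.h>
#include <assert.h>

/* Style 1: equivalence to a reference implementation.
   The reference is straightforward and easy to trust; the
   "optimized" version is branch-free. Functional correctness
   means: for all x and k < 32,
   mod_pow2_opt(x, k) == mod_pow2_ref(x, k). */
uint32_t mod_pow2_ref(uint32_t x, uint32_t k) { /* x mod 2^k */
    return x % (UINT32_C(1) << k);
}
uint32_t mod_pow2_opt(uint32_t x, uint32_t k) {
    return x & ((UINT32_C(1) << k) - 1);
}

/* Style 2: pre-/postconditions. A deductive verifier proves
   these contracts for ALL inputs; the asserts below only check
   them at runtime for the inputs actually seen. */
uint32_t incr_mod(uint32_t c, uint32_t n) {
    assert(n > 0 && c < n);             /* precondition  */
    uint32_t r = (c + 1 == n) ? 0 : c + 1;
    assert(r < n && r == (c + 1) % n);  /* postcondition */
    return r;
}
```

The equivalence in style 1 holds because, for unsigned x, x mod 2^k equals x with all but the low k bits cleared; a verifier proves this once for every input, whereas testing can only sample the 2^37 possible input pairs.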
follow a number of strict guidelines, e.g., they must avoid variable-time operations, control flow, and memory access patterns that depend on secret data. Unfortunately, complying with constant-time coding guidelines forces implementors to avoid natural but potentially insecure programming patterns, making enforcement error-prone.

Even worse, the observable properties of a program's execution are generally not evident from source code alone. Thus, software-invisible optimizations, e.g., compiler optimizations or data-dependent instruction set architecture (ISA) optimizations, can remove source-level countermeasures. Programmers also assume that the computing machine provides memory isolation, which is a strong and often unrealistic assumption in general-purpose hardware (e.g., due to isolation breaches allowed by speculative execution mechanisms).

How are these failures being addressed outside CAC? To check that implementations correctly adhere to constant-time coding guidelines, current solutions are (1) auditing, which is costly in both time and expertise, and (2) testing, which commits the fallacy of interpreting constant-time to mean constant wall-clock time. These solutions are inadequate: A botched patch for a timing vulnerability in TLS [145] led to the Lucky 13 timing vulnerability in OpenSSL [146]; in turn, the Lucky 13 patch led to yet another timing vulnerability [147]!

To prevent compiler optimizations from interfering with constant-time recipes applied at the source-code level, implementors simply avoid using compilers at all, instead choosing to implement cryptographic routines and constant-time recipes directly in assembly. Again, checking that countermeasures are implemented correctly is done through auditing and testing, but in a much more difficult, low-level setting.

Dealing with micro-architectural attacks that breach memory isolation, such as Spectre and Meltdown [148], [149], is still an open problem and seems to be out of reach of purely software-based countermeasures if there is to be any hope of achieving decent performance.

How can computer-aided cryptography help? Program analysis and verification tools can automatically (or semi-automatically) check whether a given implementation meets constant-time coding guidelines, thereby providing a formal foundation supporting heretofore informal best practices. Even further, some tools can automatically repair code that violates constant-time policies into compliant code. These approaches necessarily abstract the leakage interface available to real-world attackers, but, being precisely defined, they help clarify the gap between formal leakage models and real-world leakage.

What are the fine-print caveats? Implementation-level proofs are only as good as their models, e.g., of physically observable effects of hardware. Furthermore, new attacks may challenge these models. Implicit assumptions arise from gaps between code and verified artifacts.

What background do I need to know? Formal reasoning about side-channels is based on a leakage model. This model is defined over the semantics of the target language, abstractly representing what an attacker can observe during the computation process. For example, the leakage model for a branching operation may leak all program values associated with the branching condition. After having defined the appropriate leakage models, proving that an implementation is secure (with respect to the leakage models) amounts to showing that the leakage accumulated over the course of execution is independent of the values of secret data. This property is an instance of observational non-interference, an information flow property requiring that variations in secret data cause no differences in observable outputs [150].

The simplest leakage model is the program counter policy, where the program control flow is leaked during execution [151]. The most common leakage model, namely the constant-time policy, additionally assumes that memory accesses are leaked during execution. This leakage model is usually taken as the best practice to remove exploitable execution-time variations and a best effort against cache attacks launched by co-located processes. A more precise leakage model called the size-respecting policy also assumes that operand sizes are leaked for specific variable-time operations. For more information on leakage models, see the paper by Barthe et al. [150, Section IV.D].

B. Digital Side-Channel Tools: State of the Art

Table V presents a taxonomy of tools for verifying digital side-channel resistance. Tools are listed alphabetically and are categorized as follows.
Target (A,S). At what level is the analysis performed (e.g., source, assembly, binary)? To achieve the most reliable guarantees, analysis should be performed as close as possible to the executed machine code.

Method (A). The tools we consider all provide a means to verify the absence of timing leaks in a well-defined leakage model, but using different techniques:
• Static analysis techniques use type systems or data-flow analysis to keep track of data dependencies from secret inputs to problematic operations.
• Quantitative analysis techniques construct a rich model of a hardware feature, e.g., the cache, and derive an upper bound on the leaked information.
• Deductive verification techniques prove that the leakage traces of two executions of the program coincide if the public parts of the inputs match. These techniques are closely related to the techniques used for functional correctness.

Type-checking and data-flow analysis are more amenable to automation, and they guarantee non-interference by excluding all programs that could pass secret information to an operation that appears in the trace. The emphasis on automation, however, limits the precision of the techniques, which means that secure programs may be rejected by the tools (i.e., they are not complete). Tools based on deductive verification are usually complete, but require more user interaction. In some cases, users interact with the tool by annotating code, and in others the users use an interactive proof assistant to complete the proof. It is hard to reconcile a quantitative bound on leakage with standard cryptographic security notions, but such tools can also be used to prove a zero-leakage upper bound, which implies non-interference in the corresponding leakage model.

Synthesis (U). Can the tool take an insecure program and automatically generate a secure program? Tools that support synthesis (e.g., FaCT [136] and SC Eliminator [140]) can automatically generate secure implementations from insecure implementations. This allows developers to write code naturally, with constant-time coding recipes applied automatically.

Soundness (A, T). Is the analysis sound, i.e., does it only deem secure programs secure? Note that this is our baseline filter for consideration, but we make it explicit in the table. Still, it bears mentioning that some unsound tools are used in practice. One example is ctgrind [152], an extension of Valgrind that takes in a binary with taint annotations and checks for constant-address security via dynamic analysis. It supports public inputs but not public outputs, and is neither sound nor complete.

Completeness (A, S). Is the analysis complete, i.e., does it only deem insecure programs insecure?

Public input (S). Does the tool support public inputs? Support for public inputs allows differentiating between public and secret inputs. Implementations can benignly violate constant-time policies without introducing side-channel vulnerabilities by leaking no more information than the public inputs of computations. Unfortunately, tools without such support would reject these implementations as insecure; forcing execution behaviors to be fully input independent may lead to large performance overheads.

Public output (S). Does the tool support public outputs? Similarly, support for public outputs allows differentiating between public and secret outputs. The advantages of supporting public outputs are the same as those for supporting public inputs: for example, branching on a bit that is explicitly revealed to the attacker is fine.

Control flow leakage (S). Does the tool consider control-flow leakage? The leakage model includes values associated with conditional branching (e.g., if, switch, while, for statements) during program execution.

Memory access leakage (S). Does the tool consider memory access pattern leakage? The leakage model includes memory addresses accessed during program execution.

Variable-time operation leakage (S). Does the tool consider variable-time operation leakage? The leakage model includes inputs to variable-time operations (e.g., floating point operations [153]–[155], division and modulus operations on some architectures) classified according to timing-equivalent ranges.

C. Discussion

Achievements: Automatic verification of constant-time real-world code. There are several tools that can perform verification of constant-time code automatically, both for high-level code and low-level code. These tools have been applied to real-world libraries. For example, portions of the assembly code in OpenSSL have been verified using Vale [103], high-speed implementations of SHA-3 and TLS 1.3 ciphersuites have been verified using Jasmin [102], and various off-the-shelf libraries have been analyzed with FlowTracker [137].

Takeaway: Lowering the target provides better guarantees. Of the surveyed tools, several operate at the level of C code; others operate at the level of LLVM assembly; still others operate at the level of assembly or binary. The choice of target is important. To obtain a faithful correspondence with the executable program under an attacker's scrutiny, analysis should be performed as close as possible to the executed machine code. Given that mainstream compilers (e.g., GCC and Clang) are known to optimize away defensive code and even introduce new side-channels [156], compiler optimizations can interfere with countermeasures deployed and verified at source level.

Challenge: Secure, constant-time preserving compilation. Given that mainstream compilers can interfere with side-channel countermeasures, many cryptography engineers avoid using compilers at all, instead choosing to implement cryptographic routines directly in assembly, which means giving up the benefits of high-level languages.

An alternative solution is to use secure compilers that carry source-level countermeasures along the compilation chain down to machine code. This way, side-channel resistant code can be written in portable C, and the secure compiler takes care of preserving side-channel resistance on specific architectures. Barthe et al. [150] laid the theoretical foundations of constant-time preserving compilation. These ideas were subsequently realized in the verified CompCert C compiler [157].
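To make concrete what such source-level countermeasures look like, here is an illustrative sketch of the standard bitmask recipe (ours, not code from [150] or [157]). A secret-dependent branch violates the constant-time policy, while a branchless selection produces the same instruction and memory-address trace for either secret value, provided the compiler does not transform the mask back into a branch; preserving this property is exactly what constant-time preserving compilation guarantees.

```c
#include <stdint.h>
#include <assert.h>

/* Rejected by constant-time checkers: control flow depends on a secret. */
uint32_t select_leaky(uint32_t secret_bit, uint32_t a, uint32_t b) {
    if (secret_bit)      /* branch condition leaks secret_bit */
        return a;
    return b;
}

/* Constant-time recipe: mask is all-ones when secret_bit == 1 and
   all-zeros when secret_bit == 0, so the same instructions execute
   and the same addresses are touched for either secret value. */
uint32_t select_ct(uint32_t secret_bit, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)(-(int32_t)(secret_bit & 1));
    return (a & mask) | (b & ~mask);
}
```

Both functions compute the same result; only the second satisfies the constant-time policy's leakage model.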
Unfortunately, CompCert-generated assembly code is not as efficient as that generated by GCC and Clang, which in turn lags behind the performance of hand-optimized assembly.

Challenge: Protecting against micro-architectural attacks. The constant-time policy is designed to capture logical timing side channels in a simple model of hardware. Unfortunately, this simple model is inappropriate for modern hardware, as microarchitectural features, e.g., speculative or out-of-order execution, can be used for launching devastating side-channel attacks. Over the last year, the security world has been shaken by a series of attacks, including Spectre [148] and Meltdown [149]. A pressing challenge is to develop notions of constant-time security and associated verification methods that account for microarchitectural features.

Challenge: Rethinking the hardware-software contract from secure, formal foundations. An ISA describes (usually informally) what one needs to know to write a functionally correct program [158], [159]. However, current ISAs are an insufficient specification of the hardware-software contract when it comes to writing secure programs [160]. They do not capture hardware features that affect the temporal behavior of programs, which makes carrying side-channel countermeasures from the software level to the hardware level difficult.

To rectify this, researchers have called for new ISA designs that expose, for example, the temporal behaviors of hardware, which can support reasoning about them in software [160]. This, of course, poses challenging and competing requirements for hardware architects, but we believe developing formal foundations for verification and reasoning about security at the hardware-software interface can help. This line of work also seems to be the only path that can lead to a sound, formal treatment of micro-architectural attacks.

D. Further Reading

For lack of space, we had to omit several threads of relevant work, e.g., on verifying side-channel resistance in hardware [161]–[165], and on verifying masked implementations aimed at protecting against differential power analysis attacks [166]–[171].

V. CASE STUDY I: CONSOLIDATING GUARANTEES

Previous sections focus on specific guarantees: design-level security, functional correctness, efficiency, and side-channel resistance. This case study focuses on unifying approaches that can combine these guarantees. This is a natural and important step towards the Holy Grail of computer-aided cryptography: to deliver guarantees on executable code that match the strength and elegance of guarantees on cryptographic designs.

Table VI collects implementations that verifiably meet more than one guarantee. Implementations are grouped by year (demarcated by dashed lines), starting from 2014 and ending in 2019; within each year, implementations are listed alphabetically by author. We report on the primitives included, the languages targeted, the tools used, and the guarantees met.

Computational security. We categorize computational security guarantees as follows: verified, partially verified, not verified, and not applicable (−). The HACL∗-related implementations are partially verified, as only AEAD primitives have computational proofs, which are semi-mechanized [1]. Security guarantees do not apply to, e.g., elliptic curve implementations or bignum code.

Functional correctness. We categorize functional correctness guarantees as follows: target-level, source-level, and not verified. Target-level guarantees can be achieved in two ways: Either guarantees are established directly on assembly code, or guarantees are established at source level and a verified compiler is used.

Efficiency. We categorize efficiency as follows: comparable to assembly reference implementations, comparable to portable C reference implementations, and slower than portable C reference implementations.

Side-channel resistance. We categorize side-channel resistance guarantees as follows: target-level, source-level, and not verified.

Takeaway: Existing tools can be used to achieve the "grand slam" of guarantees for complex cryptographic primitives. Ideally, we would like computational security guarantees, (target-level) functional correctness, efficiency, and (target-level) side-channel guarantees to be connected in a formal, machine-checkable way (the "grand slam" of guarantees). Many implementations come close, but so far, only one meets all four. Almeida et al. [69] formally verify an efficient implementation of the sponge construction from the SHA-3 standard. It connects proofs of random oracle (RO) indifferentiability for a pseudo-code description of the sponge construction, and proofs of functional correctness and side-channel resistance for an efficient, vectorized implementation. The proofs are constructed using EasyCrypt and Jasmin.

Other works focus on either provable security or efficiency, plus functional correctness and side-channel resistance. This disconnect is somewhat expected. Provable security guarantees are established for pseudo-code descriptions of constructions, whereas efficiency considerations demand non-trivial optimizations at the level of C or assembly.

Takeaway: Integration can deliver strong and intuitive guarantees. Interpreting verification results that cover multiple requirements can be very challenging, especially because they may involve (all at once) designs, reference implementations, and optimized assembly implementations. To simplify their interpretation, Almeida et al. [174] provide a modular methodology to connect the different verification efforts, in the form of an informal meta-theorem, which concludes that an optimized assembly implementation is secure against implementation-level adversaries with side-channel capabilities. The meta-theorem states four conditions: (i) the design must be provably black-box secure in the (standard) computational model; (ii) the design is correctly implemented by a reference implementation; (iii) the reference implementation is functionally equivalent to the optimized implementation; (iv) the optimized implementation is protected against side-channels. These conditions yield a clear separation of concerns, which reflects the division of the previous sections.

Takeaway: Achieving broad scope and efficiency. Many
Implementation(s)                            Target(s)  Tool(s) used
RSA-OAEP [172]                               C          EasyCrypt, Frama-C, CompCert
Curve25519 scalar mult. loop [114]           asm        Coq, SMT
SHA-1, SHA-2, HMAC, RSA [131]                asm        Dafny, BoogieX86
HMAC-SHA-2 [173]                             C          FCF, VST, CompCert
MEE-CBC [174]                                C          EasyCrypt, Frama-C, CompCert
Salsa20, AES, ZUC, FFS, ECDSA, SHA-3 [175]   Java, C    Cryptol, SAW
Curve25519 [176]                             OCaml      F∗, Sage
Salsa20, Curve25519, Ed25519 [102]           asm        Jasmin
SHA-2, Poly1305, AES-CBC [103]               asm        Vale
HMAC-DRBG [177]                              C          FCF, VST, CompCert
HACL∗1 [5]                                   C          F∗
HACL∗1 [5]                                   C          F∗, CompCert
HMAC-DRBG [178]                              C          Cryptol, SAW
SHA-3 [69]                                   asm        EasyCrypt, Jasmin
ChaCha20, Poly1305 [117]                     asm        EasyCrypt, Jasmin
BGW multi-party computation protocol [179]   OCaml      EasyCrypt, Why3
Curve25519, P-256 [6]                        C          Fiat Crypto
Poly1305, AES-GCM [104]                      asm        F∗, Vale
Bignum code4 [98]                            C          CryptoLine
WHACL∗1, LibSignal∗ [180]                    Wasm       F∗
EverCrypt2 [7]                               C          F∗
EverCrypt3 [7]                               asm        F∗, Vale

Each implementation is additionally rated on computational security, functional correctness, efficiency, and side-channel resistance.
implementations target either C or assembly. This involves trade-offs between the portability and lighter verification effort of C code, and the efficiency that can be gained via hand-tuned assembly. EverCrypt [7] is one of the first systems to target both. This garners the advantages of both, and it helps explain, in part, the broad scope of algorithms EverCrypt covers. Generic functionality and outer loops can be efficiently written and verified in C, whereas performance-critical cores can be verified in assembly. Soundly mixing C and assembly requires careful modeling of interoperation between the two, including platform- and compiler-specific calling conventions, and differences in the "natural" memory and leakage models used to verify C versus assembly [7], [104].

VI. CASE STUDY II: LESSONS LEARNED FROM TLS

The Transport Layer Security (TLS) protocol is widely used to establish secure channels on the Internet, and is arguably the most important real-world deployment of cryptography to date. Before TLS version 1.3, the protocol's design phases did not involve substantial academic analysis, and the process was highly reactive: When an attack was found, interim patches would be released for the mainstream TLS libraries, or a longer-term fix would be incorporated in the next version of the standard. This resulted in an endless cycle of attacks and patches. Given the complexity of the protocol, early academic analyses considered only highly simplified cryptographic cores. However, once the academic community started considering more detailed aspects of the protocol, many new attacks were discovered, e.g., [181], [182].

The situation changed substantially during the proactive design process of TLS version 1.3: The academic community was actively consulted and encouraged to provide analysis during the process of developing multiple drafts. (See [183] for a more detailed account of TLS's standardization history.) On the computer-aided cryptography side of things, there were substantial efforts in verifying implementations of TLS 1.3 [1], [3] and using tools to analyze symbolic [2]–[4] and computational [3] models of TLS. Below, we collect the most important lessons learned from TLS throughout the years.

Lesson: The process of formally specifying and verifying a protocol can reveal flaws. The work surrounding TLS has shown that the process of formally verifying TLS, and perhaps even just formally specifying it, can reveal flaws. The implementation of TLS 1.2 with verified cryptographic security by Bhargavan et al. [70] discovered new alert fragmentation and fingerprinting attacks and led to the discovery of the Triple Handshake attacks [72]. The symbolic analysis of TLS 1.3 draft 10 using Tamarin by Cremers et al. [2] uncovered a potential attack allowing an adversary to impersonate a client during a PSK-resumption handshake, which was fixed in draft 11. The symbolic analysis of TLS 1.3 using ProVerif by Bhargavan et al. [3] uncovered a new attack on 0-RTT client authentication that was fixed in draft 13. The symbolic analysis of draft 21 using Tamarin by Cremers et al. [4] revealed unexpected behavior that inhibited certain strong authentication guarantees. In nearly all cases, these discoveries led to improvements to the protocol, and otherwise clarified documentation of security guarantees.
Lesson: Cryptographic protocol designs are moving targets; machine-checked proofs can be more easily updated. The TLS 1.3 specification was a rapidly moving target, with significant changes being effected on a fairly regular basis. As changes were made between a total of 28 drafts, previous analyses were often rendered stale within the space of a few months, requiring new analyses and proofs. An important benefit of machine-checked analyses and proofs over their manual counterparts is that they can be more easily and reliably updated from draft to draft as the protocol evolves [2]–[4]. Moreover, machine-checked analyses and proofs can ensure that new flaws are not introduced as components are changed.

Lesson: Standardization processes can facilitate analysis by embracing minor changes that simplify security arguments and help modular reasoning. In contrast to other protocol standards, the TLS 1.3 design incorporates many suggestions from the academic community. In addition to security fixes, these include changes proposed to simplify security proofs and automated analysis. For example, this includes changes to the key schedule that help with key separation, thus simplifying modular proofs; a consistent tagging scheme; and including more transcript information in exchanges, which simplifies consistency proofs. These changes have negligible impact on the performance of the protocol, and have helped make analyzing such a complex protocol feasible.

VII. CONCLUDING REMARKS

A. Recommendations to Authors

Our first recommendation concerns the clarity of trust assumptions. We observe that, in some papers, the distinction between what parts of an artifact are trusted/untrusted is not always clear, which runs the risk of hazy or exaggerated claims. On one hand, crisply delineating between what is trusted/untrusted may be difficult, especially when multiple tools are used, and authors may be reluctant to spell out an artifact's weaknesses. On the other hand, transparency and clarity of trust assumptions are vital for progress. We point to the paper by Beringer et al. [173] as an exemplar of how to clearly delineate between what is trusted/untrusted. At the same time, critics should understand that trust assumptions are often necessary to make progress at all.

Our second recommendation concerns the use of metrics. Metrics are useful for tracking progress over time when used appropriately. The HACL∗ [5] study uses metrics effectively: To quantify verification effort, the authors report proof-to-code ratios and person efforts for various primitives. While these are crude proxies, because the comparison is vertical (same tool, same developers), the numbers sensibly demonstrate that, e.g., code involving bignums requires more work to verify in F∗. Despite their limitations, we argue that even crude metrics (when used appropriately) are better than none for advancing the field. When used inappropriately, however, metrics become dangerous and misleading. Horizontal comparisons across disparate tools tend to be problematic and must be done with care if they are to be used. For example, lines of proof or analysis times across disparate tools are often incomparable, since modeling a problem in the exact same way is non-trivial.

B. Recommendations to Tool Developers

Although we are still in the early days of seeing verified cryptography deployed in the wild, one major pending challenge is how to make computer-aided cryptography artifacts maintainable. Because computer-aided cryptography tools sit at the bleeding edge of how cryptography is done, they are constantly evolving, often in non-backwards-compatible ways. When this happens, we must either allow the artifacts (e.g., machine-checked proofs) to become stale, or else muster significant human effort to keep them up to date. Moreover, because cryptography is a moving target, we should expect that even verified implementations (and their proofs) will require updates. This could be to add functionality, or in the worst case, to swiftly patch new vulnerabilities beyond what was verifiably accounted for. To this end, we hope to see more interplay between proof engineering research [184], [185] and computer-aided cryptography research in the coming years.

C. Recommendations to Standardization Bodies

Given its benefits in the TLS 1.3 standardization effort, we believe computer-aided cryptography should play an important role in standardization processes [186]. Traditionally, cryptographic standards are written in a combination of prose, formulas, and pseudocode, and can change drastically between drafts. On top of getting the cryptography right in the first place, standards must also focus on clarity, ease of implementation, and interoperability. Unsurprisingly, standardization processes can be long and arduous. And even when they are successful, the substantial gap between standards and implementations leaves plenty of room for error.

Security proofs can also become a double-edged sword in standardization processes. Proposals supported by hand-written security arguments often cannot be reasonably audited. A plausible claim with a proof that cannot be audited should not be taken as higher assurance than simply stating the claim; we believe the latter is a lesser evil, as it does not create a false sense of security. For example, Hales [187] discusses ill-intentioned security arguments in the context of the Dual EC pseudo-random generator [188]. Another example is the recent discovery of attacks against the AES-OCB2 ISO standard, which was previously believed to be secure [189].

To address these challenges, we advocate the use of computer-aided cryptography, not only to formally certify compliance to standards, but also to facilitate the role of auditors and evaluators in standardization processes, allowing the discussion to focus on the security claims, rather than on whether the supporting security arguments are convincing. We see the current NIST post-quantum standardization effort [190] as an excellent opportunity to put our recommendations into practice, and we encourage the computer-aided cryptography community to engage in the process.
ACKNOWLEDGMENTS

We thank the anonymous reviewers for their useful suggestions; Jason Gross, Boris Köpf, Steve Kremer, Peter Schwabe, and Alwen Tiu for feedback on earlier drafts of the paper; and Tiago Oliveira for help setting up Jasmin and benchmarks. Work by Manuel Barbosa was supported by National Funds through the Portuguese Foundation for Science and Technology (FCT) under project PTDC/CCI-INF/31698/2017. Work by Gilles Barthe was supported by the Office of Naval Research (ONR) under project N00014-15-1-2750. Work by Karthik Bhargavan was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 683032 - CIRCUS). Work by Bruno Blanchet was supported by the French National Research Agency (ANR) under project TECAP (decision no. ANR-17-CE39-0004-03). Work by Kevin Liao was supported by the National Science Foundation (NSF) through a Graduate Research Fellowship. Work by Bryan Parno was supported by a gift from Bosch, a fellowship from the Alfred P. Sloan Foundation, the NSF under Grant No. 1801369, and the Department of the Navy, Office of Naval Research under Grant No. N00014-18-1-2892.

REFERENCES

[1] A. Delignat-Lavaud, C. Fournet, M. Kohlweiss, J. Protzenko, A. Rastogi, N. Swamy, S. Z. Béguelin, K. Bhargavan, J. Pan, and J. K. Zinzindohoue, "Implementing and proving the TLS 1.3 record layer," in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2017, pp. 463–482.
[2] C. Cremers, M. Horvat, S. Scott, and T. van der Merwe, "Automated analysis and verification of TLS 1.3: 0-RTT, resumption and delayed authentication," in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2016, pp. 470–485.
[3] K. Bhargavan, B. Blanchet, and N. Kobeissi, "Verified models and reference implementations for the TLS 1.3 standard candidate," in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2017, pp. 483–502.
[4] C. Cremers, M. Horvat, J. Hoyland, S. Scott, and T. van der Merwe, "A comprehensive symbolic analysis of TLS 1.3," in ACM Conference on Computer and Communications Security (CCS). ACM, 2017, pp. 1773–1788.
[5] J. K. Zinzindohoué, K. Bhargavan, J. Protzenko, and B. Beurdouche, "HACL*: A verified modern cryptographic library," in ACM Conference on Computer and Communications Security (CCS). ACM, 2017, pp. 1789–1806.
[6] A. Erbsen, J. Philipoom, J. Gross, R. Sloan, and A. Chlipala, "Simple high-level code for cryptographic arithmetic - with proofs, without compromises," in IEEE Symposium on Security and Privacy (S&P). IEEE, 2019, pp. 1202–1219.
[7] J. Protzenko, B. Parno, A. Fromherz, C. Hawblitzel, M. Polubelova, K. Bhargavan, B. Beurdouche, J. Choi, A. Delignat-Lavaud, C. Fournet, N. Kulatova, T. Ramananandro, A. Rastogi, N. Swamy, C. Wintersteiger, and S. Zanella-Beguelin, "EverCrypt: A fast, verified, cross-platform cryptographic provider," in IEEE Symposium on Security and Privacy (S&P). IEEE, 2020.
[8] M. Bellare and P. Rogaway, "The security of triple encryption and a framework for code-based game-playing proofs," in Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), ser. LNCS, vol. 4004. Springer, 2006, pp. 409–426.
[9] R. Canetti, "Universally composable security: A new paradigm for cryptographic protocols," in IEEE Annual Symposium on Foundations of Computer Science (FOCS). IEEE Computer Society, 2001, pp. 136–145.
[10] S. Halevi, "A plausible approach to computer-aided cryptographic proofs," IACR Cryptology ePrint Archive, vol. 2005, p. 181, 2005.
[11] K. G. Paterson and G. J. Watson, "Plaintext-dependent decryption: A formal security treatment of SSH-CTR," in Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), ser. LNCS, vol. 6110. Springer, 2010, pp. 345–361.
[12] A. Boldyreva, J. P. Degabriele, K. G. Paterson, and M. Stam, "Security of symmetric encryption in the presence of ciphertext fragmentation," in Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), ser. LNCS, vol. 7237. Springer, 2012, pp. 682–699.
[13] J. P. Degabriele, K. G. Paterson, and G. J. Watson, "Provable security in the real world," IEEE Security & Privacy, vol. 9, no. 3, pp. 33–41, 2011.
[14] V. Shoup, "Sequences of games: A tool for taming complexity in security proofs," IACR Cryptology ePrint Archive, vol. 2004, p. 332, 2004. [Online]. Available: http://eprint.iacr.org/2004/332
[15] Y. Lindell, "How to simulate it - A tutorial on the simulation proof technique," in Tutorials on the Foundations of Cryptography. Springer International Publishing, 2017, pp. 277–346.
[16] S. F. Doghmi, J. D. Guttman, and F. J. Thayer, "Searching for shapes in cryptographic protocols," in International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), ser. LNCS, vol. 4424. Springer, 2007, pp. 523–537.
[17] J. Bengtson, K. Bhargavan, C. Fournet, A. D. Gordon, and S. Maffeis, "Refinement types for secure implementations," ACM Trans. Program. Lang. Syst., vol. 33, no. 2, pp. 8:1–8:45, 2011.
[18] M. Backes, C. Hriţcu, and M. Maffei, "Union, intersection and refinement types and reasoning about type disjointness for secure protocol implementations," J. Comput. Secur., vol. 22, no. 2, pp. 301–353, Mar. 2014.
[19] S. Escobar, C. A. Meadows, and J. Meseguer, "Maude-NPA: Cryptographic protocol analysis modulo equational properties," in Foundations of Security Analysis and Design (FOSAD), ser. LNCS, vol. 5705. Springer, 2007, pp. 1–50.
[20] B. Blanchet, "Modeling and verifying security protocols with the applied pi calculus and ProVerif," Foundations and Trends in Privacy and Security, vol. 1, no. 1–2, pp. 1–135, Oct. 2016.
[21] K. Bhargavan, C. Fournet, A. D. Gordon, and S. Tse, "Verified interoperable implementations of security protocols," ACM Transactions on Programming Languages and Systems, vol. 31, no. 1, 2008.
[22] V. Cheval, V. Cortier, and M. Turuani, "A little more conversation, a little less action, a lot more satisfaction: Global states in ProVerif," in IEEE Computer Security Foundations Symposium (CSF). IEEE Computer Society, 2018, pp. 344–358.
[23] D. L. Li and A. Tiu, "Combining ProVerif and automated theorem provers for security protocol verification," in International Conference on Automated Deduction (CADE), ser. LNCS, vol. 11716. Springer, 2019, pp. 354–365.
[24] M. Arapinis, E. Ritter, and M. D. Ryan, "StatVerif: Verification of stateful processes," in IEEE Computer Security Foundations Symposium (CSF). IEEE Computer Society, 2011, pp. 33–47.
[25] C. J. F. Cremers, "The Scyther tool: Verification, falsification, and analysis of security protocols," in International Conference on Computer-Aided Verification (CAV), ser. LNCS, vol. 5123. Springer, 2008, pp. 414–418.
[26] S. Meier, C. J. F. Cremers, and D. A. Basin, "Strong invariants for the efficient construction of machine-checked protocol security proofs," in IEEE Computer Security Foundations Symposium (CSF). IEEE Computer Society, 2010, pp. 231–245.
[27] S. Meier, B. Schmidt, C. Cremers, and D. A. Basin, "The TAMARIN prover for the symbolic analysis of security protocols," in International Conference on Computer-Aided Verification (CAV), ser. LNCS, vol. 8044. Springer, 2013, pp. 696–701.
[28] S. Kremer and R. Künnemann, "Automated analysis of security protocols with global state," in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2014, pp. 163–178.
[29] M. Turuani, "The CL-Atse protocol analyser," in International Conference on Term Rewriting and Applications (RTA), ser. LNCS, vol. 4098. Springer, 2006, pp. 277–286.
[30] D. A. Basin, S. Mödersheim, and L. Viganò, "OFMC: A symbolic model checker for security protocols," Int. J. Inf. Sec., vol. 4, no. 3, pp. 181–208, 2005.
[31] A. Armando and L. Compagna, "SATMC: A SAT-based model checker for security protocols," in European Conference on Logics in Artificial
Intelligence (JELIA), ser. LNCS, vol. 3229. Springer, 2004, pp. 730–733.
[32] R. Chadha, V. Cheval, Ş. Ciobâcă, and S. Kremer, "Automated verification of equivalence properties of cryptographic protocols," ACM Trans. Comput. Log., vol. 17, no. 4, pp. 23:1–23:32, 2016.
[33] V. Cheval, "APTE: An algorithm for proving trace equivalence," in International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), ser. LNCS, vol. 8413. Springer, 2014, pp. 587–592.
[34] V. Cheval, S. Kremer, and I. Rakotonirina, "DEEPSEC: Deciding equivalence properties in security protocols theory and practice," in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2018, pp. 529–546.
[35] V. Cortier, A. Dallon, and S. Delaune, "SAT-Equiv: An efficient tool for equivalence properties," in IEEE Computer Security Foundations Symposium (CSF). IEEE Computer Society, 2017, pp. 481–494.
[36] A. Tiu and J. E. Dawson, "Automating open bisimulation checking for the spi calculus," in IEEE Computer Security Foundations Symposium (CSF). IEEE Computer Society, 2010, pp. 307–321.
[37] J. K. Millen, "A necessarily parallel attack," in Workshop on Formal Methods and Security Protocols, 1999.
[38] N. Durgin, P. Lincoln, J. C. Mitchell, and A. Scedrov, "Multiset rewriting and the complexity of bounded security protocols," Journal of Computer Security, vol. 12, no. 2, pp. 247–311, 2004.
[39] S. Delaune and L. Hirschi, "A survey of symbolic methods for establishing equivalence-based properties in cryptographic protocols," J. Log. Algebr. Meth. Program., vol. 87, pp. 127–144, 2017.
[40] J. Dreier, C. Duménil, S. Kremer, and R. Sasse, "Beyond subterm-convergent equational theories in automated verification of stateful protocols," in International Conference on Principles of Security and Trust (POST). Springer-Verlag, 2017.
[41] C. Cremers and D. Jackson, "Prime, order please! Revisiting small subgroup and invalid curve attacks on protocols using Diffie-Hellman," in IEEE Computer Security Foundations Symposium (CSF). IEEE, 2019, pp. 78–93.
[42] L. C. Paulson, Isabelle - A Generic Theorem Prover (with a contribution by T. Nipkow), ser. LNCS. Springer, 1994, vol. 828.
[43] R. Milner, Communicating and mobile systems - the Pi-calculus. Cambridge University Press, 1999.
[44] M. Abadi and A. D. Gordon, "A calculus for cryptographic protocols: The spi calculus," in ACM Conference on Computer and Communications Security (CCS). ACM, 1997, pp. 36–47.
[45] M. Abadi and C. Fournet, "Mobile values, new names, and secure communication," in Symposium on Principles of Programming Languages (POPL). ACM, 2001, pp. 104–115.
[46] K. Bhargavan, C. Fournet, R. Corin, and E. Zalinescu, "Verified cryptographic implementations for TLS," ACM Trans. Inf. Syst. Secur., vol. 15, no. 1, pp. 3:1–3:32, 2012.
[47] N. Kobeissi, K. Bhargavan, and B. Blanchet, "Automated verification for secure messaging protocols and their implementations: A symbolic and computational approach," in IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2017, pp. 435–450.
[48] N. Kobeissi, G. Nicolas, and K. Bhargavan, "Noise Explorer: Fully automated modeling and verification for arbitrary Noise protocols," in IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2019, pp. 356–370.
[49] D. A. Basin, J. Dreier, L. Hirschi, S. Radomirovic, R. Sasse, and V. Stettler, "A formal analysis of 5G authentication," in ACM Conference on Computer and Communications Security (CCS). ACM, 2018, pp. 1383–1396.
[50] C. Cremers, M. Dehnel-Wild, and K. Milner, "Secure authentication in the grid: A formal analysis of DNP3 SAv5," Journal of Computer Security, vol. 27, no. 2, pp. 203–232, 2019.
[51] G. Girol, L. Hirschi, R. Sasse, D. Jackson, C. Cremers, and D. Basin, "A spectral analysis of Noise: A comprehensive, automated, formal analysis of Diffie-Hellman protocols," in USENIX Security Symposium (USENIX), 2020.
[52] B. Blanchet, M. Abadi, and C. Fournet, "Automated verification of selected equivalences for security protocols," Journal of Logic and Algebraic Programming, vol. 75, no. 1, pp. 3–51, Feb.–Mar. 2008.
[53] S. Santiago, S. Escobar, C. Meadows, and J. Meseguer, "A formal definition of protocol indistinguishability and its verification using Maude-NPA," in Security and Trust Management (STM), ser. LNCS, vol. 8743. Springer, Sep. 2014, pp. 162–177.
[54] D. Basin, J. Dreier, and R. Sasse, "Automated symbolic proofs of observational equivalence," in ACM Conference on Computer and Communications Security (CCS). ACM Press, Oct. 2015, pp. 1144–1155.
[55] G. Barthe, B. Grégoire, and B. Schmidt, "Automated proofs of pairing-based cryptography," in ACM Conference on Computer and Communications Security (CCS). ACM, 2015, pp. 1156–1168.
[56] G. Barthe, B. Grégoire, and S. Z. Béguelin, "Formal certification of code-based cryptographic proofs," in Symposium on Principles of Programming Languages (POPL). ACM, 2009, pp. 90–101.
[57] D. A. Basin, A. Lochbihler, and S. R. Sefidgar, "CryptHOL: Game-based proofs in higher-order logic," IACR Cryptology ePrint Archive, vol. 2017, p. 753, 2017.
[58] B. Blanchet, "A computationally sound mechanized prover for security protocols," IEEE Transactions on Dependable and Secure Computing, vol. 5, no. 4, pp. 193–207, Oct.–Dec. 2008.
[59] G. Barthe, B. Grégoire, S. Heraud, and S. Z. Béguelin, "Computer-aided security proofs for the working cryptographer," in International Cryptology Conference (CRYPTO), ser. LNCS, vol. 6841. Springer, 2011, pp. 71–90.
[60] N. Swamy, C. Hritcu, C. Keller, A. Rastogi, A. Delignat-Lavaud, S. Forest, K. Bhargavan, C. Fournet, P. Strub, M. Kohlweiss, J. K. Zinzindohoue, and S. Z. Béguelin, "Dependent types and multi-monadic effects in F∗," in Symposium on Principles of Programming Languages (POPL). ACM, 2016, pp. 256–270.
[61] A. Petcher and G. Morrisett, "The foundational cryptography framework," in International Conference on Principles of Security and Trust (POST), ser. LNCS, vol. 9036. Springer, 2015, pp. 53–72.
[62] G. Barthe, J. M. Crespo, B. Grégoire, C. Kunz, Y. Lakhnech, B. Schmidt, and S. Z. Béguelin, "Fully automated analysis of padding-based encryption in the computational model," in ACM Conference on Computer and Communications Security (CCS). ACM, 2013, pp. 1247–1260.
[63] "The Coq proof assistant." [Online]. Available: https://coq.inria.fr/
[64] "Isabelle." [Online]. Available: https://isabelle.in.tum.de/
[65] C. W. Barrett and C. Tinelli, "Satisfiability modulo theories," in Handbook of Model Checking. Springer, 2018, pp. 305–343.
[66] L. M. de Moura and N. Bjørner, "Z3: An efficient SMT solver," in International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), ser. LNCS, vol. 4963. Springer, 2008, pp. 337–340.
[67] B. Lipp, B. Blanchet, and K. Bhargavan, "A mechanised cryptographic proof of the WireGuard virtual private network protocol," in IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2019, pp. 231–246.
[68] J. B. Almeida, M. Barbosa, G. Barthe, M. Campagna, E. Cohen, B. Grégoire, V. Pereira, B. Portela, P. Strub, and S. Tasiran, "A machine-checked proof of security for AWS Key Management Service," in ACM Conference on Computer and Communications Security (CCS). ACM, 2019, pp. 63–78.
[69] J. B. Almeida, C. Baritel-Ruet, M. Barbosa, G. Barthe, F. Dupressoir, B. Grégoire, V. Laporte, T. Oliveira, A. Stoughton, and P. Strub, "Machine-checked proofs for cryptographic standards: Indifferentiability of sponge and secure high-assurance implementations of SHA-3," in ACM Conference on Computer and Communications Security (CCS). ACM, 2019, pp. 1607–1622.
[70] K. Bhargavan, C. Fournet, M. Kohlweiss, A. Pironti, and P. Strub, "Implementing TLS with verified cryptographic security," in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2013, pp. 445–459.
[71] K. Bhargavan, C. Fournet, M. Kohlweiss, A. Pironti, P.-Y. Strub, and S. Zanella-Béguelin, "Proving the TLS handshake secure (as it is)," in International Cryptology Conference (CRYPTO), 2014.
[72] K. Bhargavan, A. Delignat-Lavaud, C. Fournet, A. Pironti, and P. Strub, "Triple handshakes and cookie cutters: Breaking and fixing authentication over TLS," in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2014, pp. 98–113.
[73] B. Beurdouche, K. Bhargavan, A. Delignat-Lavaud, C. Fournet, M. Kohlweiss, A. Pironti, P. Strub, and J. K. Zinzindohoue, "A messy state of the union: Taming the composite state machines of TLS," in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2015, pp. 535–552.
[74] C. E. Landwehr, D. Boneh, J. C. Mitchell, S. M. Bellovin, S. Landau,
and M. E. Lesk, "Privacy and cybersecurity: The next 100 years," Proc. of the IEEE, vol. 100, no. Centennial-Issue, pp. 1659–1673, 2012.
[75] K. Liao, M. A. Hammer, and A. Miller, "ILC: A calculus for composable, computational cryptography," in ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). ACM, 2019, pp. 640–654.
[76] R. Canetti, A. Stoughton, and M. Varia, "EasyUC: Using EasyCrypt to mechanize proofs of universally composable security," in IEEE Computer Security Foundations Symposium (CSF). IEEE, 2019, pp. 167–183.
[77] A. Lochbihler, S. R. Sefidgar, D. A. Basin, and U. Maurer, "Formalizing constructive cryptography using CryptHOL," in IEEE Computer Security Foundations Symposium (CSF). IEEE, 2019, pp. 152–166.
[78] J. A. Akinyele, M. Green, and S. Hohenberger, "Using SMT solvers to automate design tasks for encryption and signature schemes," in ACM Conference on Computer and Communications Security (CCS). ACM, 2013, pp. 399–410.
[79] A. J. Malozemoff, J. Katz, and M. D. Green, "Automated analysis and synthesis of block-cipher modes of operation," in IEEE Computer Security Foundations Symposium (CSF). IEEE Computer Society, 2014, pp. 140–152.
[80] V. T. Hoang, J. Katz, and A. J. Malozemoff, "Automated analysis and synthesis of authenticated encryption schemes," in ACM Conference on Computer and Communications Security (CCS). ACM, 2015, pp. 84–95.
[81] G. Barthe, E. Fagerholm, D. Fiore, A. Scedrov, B. Schmidt, and M. Tibouchi, "Strongly-optimal structure preserving signatures from type II pairings: Synthesis and lower bounds," IET Information Security, vol. 10, no. 6, pp. 358–371, 2016.
[82] B. Parno, J. Howell, C. Gentry, and M. Raykova, "Pinocchio: Nearly practical verifiable computation," Commun. ACM, vol. 59, no. 2, pp. 103–112, 2016.
[83] C. Costello, C. Fournet, J. Howell, M. Kohlweiss, B. Kreuter, M. Naehrig, B. Parno, and S. Zahur, "Geppetto: Versatile verifiable computation," in IEEE Symposium on Security and Privacy (S&P), 2015, pp. 253–270.
[84] S. T. V. Setty, V. Vu, N. Panpalia, B. Braun, A. J. Blumberg, and M. Walfish, "Taking proof-based verified computation a few steps closer to practicality," in USENIX Security Symposium (USENIX). USENIX Association, 2012, pp. 253–268.
[85] E. Ben-Sasson, A. Chiesa, D. Genkin, E. Tromer, and M. Virza, "SNARKs for C: Verifying program executions succinctly and in zero knowledge," in International Cryptology Conference (CRYPTO), ser. LNCS, vol. 8043. Springer, 2013, pp. 90–108.
[86] J. B. Almeida, E. Bangerter, M. Barbosa, S. Krenn, A. Sadeghi, and T. Schneider, "A certifying compiler for zero-knowledge proofs of knowledge based on sigma-protocols," in European Symposium on Research in Computer Security (ESORICS), 2010, pp. 151–167.
[87] M. Fredrikson and B. Livshits, "Zø: An optimizing distributing zero-knowledge compiler," in USENIX Security Symposium (USENIX), 2014, pp. 909–924.
[88] S. Meiklejohn, C. C. Erway, A. Küpçü, T. Hinkle, and A. Lysyanskaya, "ZKPDL: A language-based system for efficient zero-knowledge proofs and electronic cash," in USENIX Security Symposium (USENIX). USENIX Association, 2010, pp. 193–206.
[89] M. Backes, M. Maffei, and K. Pecina, "Automated synthesis of secure distributed applications," in Symposium on Network and Distributed System Security (NDSS). The Internet Society, 2012.
[90] M. Hastings, B. Hemenway, D. Noble, and S. Zdancewic, "SoK: General purpose compilers for secure multi-party computation," in IEEE Symposium on Security and Privacy (S&P), 2019, pp. 1220–1237.
[91] J. B. Almeida, M. Barbosa, E. Bangerter, G. Barthe, S. Krenn, and S. Z. Béguelin, "Full proof cryptography: Verifiable compilation of efficient zero-knowledge protocols," in ACM Conference on Computer and Communications Security (CCS). ACM, 2012, pp. 488–500.
[92] J. B. Almeida, M. Barbosa, G. Barthe, F. Dupressoir, B. Grégoire, V. Laporte, and V. Pereira, "A fast and verified software stack for secure function evaluation," in ACM Conference on Computer and Communications Security (CCS). ACM, 2017, pp. 1989–2006.
[93] C. Fournet, C. Keller, and V. Laporte, "A certified compiler for verifiable computing," in IEEE Computer Security Foundations Symposium (CSF), 2016, pp. 268–280.
[94] A. Rastogi, N. Swamy, and M. Hicks, "Wys*: A DSL for verified secure multi-party computations," in International Conference on Principles of Security and Trust (POST), 2019, pp. 99–122.
[95] B. Blanchet, "Security protocol verification: Symbolic and computational models," in International Conference on Principles of Security and Trust (POST), ser. LNCS, vol. 7215. Springer, 2012, pp. 3–29.
[96] V. Cortier, S. Kremer, and B. Warinschi, "A survey of symbolic methods in computational analysis of cryptographic systems," J. Autom. Reasoning, vol. 46, no. 3–4, pp. 225–259, 2011.
[97] R. Dockins, A. Foltzer, J. Hendrix, B. Huffman, D. McNamee, and A. Tomb, "Constructing semantic models of programs with the software analysis workbench," in International Conference on Verified Software. Theories, Tools, and Experiments (VSTTE), ser. LNCS, vol. 9971, 2016, pp. 56–72.
[98] Y. Fu, J. Liu, X. Shi, M. Tsai, B. Wang, and B. Yang, "Signed cryptographic program verification with typed CryptoLine," in ACM Conference on Computer and Communications Security (CCS). ACM, 2019, pp. 1591–1606.
[99] K. R. M. Leino, "Dafny: An automatic program verifier for functional correctness," in International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR), ser. LNCS, vol. 6355. Springer, 2010, pp. 348–370.
[100] P. Cuoq, F. Kirchner, N. Kosmatov, V. Prevosto, J. Signoles, and B. Yakobowski, "Frama-C - A software analysis perspective," in International Conference on Software Engineering and Formal Methods (SEFM), ser. LNCS, vol. 7504. Springer, 2012, pp. 233–247.
[101] D. J. Bernstein and P. Schwabe, "gfverif: Fast and easy verification of finite-field arithmetic," 2016. [Online]. Available: http://gfverif.cryptojedi.org
[102] J. B. Almeida, M. Barbosa, G. Barthe, A. Blot, B. Grégoire, V. Laporte, T. Oliveira, H. Pacheco, B. Schmidt, and P. Strub, "Jasmin: High-assurance and high-speed cryptography," in ACM Conference on Computer and Communications Security (CCS). ACM, 2017, pp. 1807–1823.
[103] B. Bond, C. Hawblitzel, M. Kapritsos, K. R. M. Leino, J. R. Lorch, B. Parno, A. Rane, S. T. V. Setty, and L. Thompson, "Vale: Verifying high-performance cryptographic assembly code," in USENIX Security Symposium (USENIX). USENIX Association, 2017, pp. 917–934.
[104] A. Fromherz, N. Giannarakis, C. Hawblitzel, B. Parno, A. Rastogi, and N. Swamy, "A verified, efficient embedding of a verifiable assembly language," PACMPL, vol. 3, no. POPL, pp. 63:1–63:30, 2019.
[105] A. W. Appel, "Verified software toolchain (invited talk)," in European Symposium on Programming (ESOP), ser. LNCS, vol. 6602. Springer, 2011, pp. 1–17.
[106] J. Filliâtre and A. Paskevich, "Why3 - Where programs meet provers," in European Symposium on Programming (ESOP), ser. LNCS, vol. 7792. Springer, 2013, pp. 125–128.
[107] D. J. Bernstein and T. Lange, "eBACS: ECRYPT benchmarking of cryptographic systems," 2009. [Online]. Available: https://bench.cr.yp.to
[108] Z. Durumeric, J. Kasten, D. Adrian, J. A. Halderman, M. Bailey, F. Li, N. Weaver, J. Amann, J. Beekman, M. Payer, and V. Paxson, "The matter of Heartbleed," in Internet Measurement Conference (IMC). ACM, 2014, pp. 475–488.
[109] S. Gueron and V. Krasnov, "The fragility of AES-GCM authentication algorithm," in Proc. of the Conference on Information Technology: New Generations, Apr. 2014.
[110] B. B. Brumley, M. Barbosa, D. Page, and F. Vercauteren, "Practical realisation and elimination of an ECC-related software bug attack," in Cryptographers' Track at the RSA Conference (CT-RSA), ser. LNCS, vol. 7178. Springer, 2012, pp. 171–186.
[111] X. Leroy, "Formal verification of a realistic compiler," Commun. ACM, vol. 52, no. 7, pp. 107–115, 2009.
[112] T. Oliveira, J. L. Hernandez, H. Hisil, A. Faz-Hernández, and F. Rodríguez-Henríquez, "How to (pre-)compute a ladder - Improving the performance of X25519 and X448," in International Conference on Selected Areas in Cryptography (SAC), ser. LNCS, vol. 10719. Springer, 2017, pp. 172–191.
[113] T. Chou, "Sandy2x: New Curve25519 speed records," in International Conference on Selected Areas in Cryptography (SAC), ser. LNCS, vol. 9566. Springer, 2015, pp. 145–160.
[114] Y. Chen, C. Hsu, H. Lin, P. Schwabe, M. Tsai, B. Wang, B. Yang, and S. Yang, "Verifying Curve25519 software," in ACM Conference
on Computer and Communications Security (CCS). ACM, 2014, pp. 299–309.
[115] “curve25519-donna: Implementations of a fast elliptic-curve Diffie-Hellman primitive,” https://github.com/agl/curve25519-donna.
[116] D. J. Bernstein, “Curve25519: New Diffie-Hellman speed records,” in IACR International Conference on Practice and Theory of Public-Key Cryptography (PKC), ser. LNCS, vol. 3958. Springer, 2006, pp. 207–228.
[117] J. B. Almeida, M. Barbosa, G. Barthe, B. Grégoire, A. Koutsos, V. Laporte, T. Oliveira, and P. Strub, “The last mile: High-assurance and high-speed cryptographic implementations,” CoRR, vol. abs/1904.04606, 2019.
[118] J. P. Lim and S. Nagarakatte, “Automatic equivalence checking for assembly implementations of cryptography libraries,” in Proc. of the IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2019, pp. 37–49.
[119] P. L. Montgomery, “Modular multiplication without trial division,” Mathematics of Computation, vol. 44, no. 170, pp. 519–521, 1985.
[120] “The GNU Multiple Precision Arithmetic Library.” [Online]. Available: https://gmplib.org/
[121] G. Klein, J. Andronick, K. Elphinstone, T. C. Murray, T. Sewell, R. Kolanski, and G. Heiser, “Comprehensive formal verification of an OS microkernel,” ACM Trans. Comput. Syst., vol. 32, no. 1, pp. 2:1–2:70, 2014.
[122] R. Gu, J. Koenig, T. Ramananandro, Z. Shao, X. N. Wu, S. Weng, H. Zhang, and Y. Guo, “Deep specifications and certified abstraction layers,” in Symposium on Principles of Programming Languages (POPL). ACM, 2015, pp. 595–608.
[123] H. Mai, E. Pek, H. Xue, S. T. King, and P. Madhusudan, “Verifying security invariants in ExpressOS,” in International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). ACM, 2013, pp. 293–304.
[124] G. Morrisett, G. Tan, J. Tassarotti, J. Tristan, and E. Gan, “RockSalt: better, faster, stronger SFI for the x86,” in ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). ACM, 2012, pp. 395–404.
[125] X. Wang, D. Lazar, N. Zeldovich, A. Chlipala, and Z. Tatlock, “Jitk: A trustworthy in-kernel interpreter infrastructure,” in USENIX Conference on Operating Systems Design and Implementation (OSDI). USENIX Association, 2014, pp. 33–47.
[126] H. Chen, D. Ziegler, T. Chajed, A. Chlipala, M. F. Kaashoek, and N. Zeldovich, “Using crash Hoare logic for certifying the FSCQ file system,” in ACM Symposium on Operating Systems Principles (SOSP). ACM, 2015, pp. 18–37.
[127] A. Vasudevan, S. Chaki, L. Jia, J. M. McCune, J. Newsome, and A. Datta, “Design, implementation and verification of an extensible and modular hypervisor framework,” in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2013, pp. 430–444.
[128] J. R. Wilcox, D. Woos, P. Panchekha, Z. Tatlock, X. Wang, M. D. Ernst, and T. E. Anderson, “Verdi: a framework for implementing and formally verifying distributed systems,” in ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). ACM, 2015, pp. 357–368.
[129] O. Padon, K. L. McMillan, A. Panda, M. Sagiv, and S. Shoham, “Ivy: safety verification by interactive generalization,” in ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). ACM, 2016, pp. 614–630.
[130] C. Hawblitzel, J. Howell, M. Kapritsos, J. R. Lorch, B. Parno, M. L. Roberts, S. T. V. Setty, and B. Zill, “IronFleet: proving practical distributed systems correct,” in ACM Symposium on Operating Systems Principles (SOSP). ACM, 2015, pp. 1–17.
[131] C. Hawblitzel, J. Howell, J. R. Lorch, A. Narayan, B. Parno, D. Zhang, and B. Zill, “Ironclad apps: End-to-end security via automated full-system verification,” in USENIX Conference on Operating Systems Design and Implementation (OSDI). USENIX Association, 2014, pp. 165–181.
[132] J. B. Almeida, M. Barbosa, J. S. Pinto, and B. Vieira, “Formal verification of side-channel countermeasures using self-composition,” Sci. Comput. Program., vol. 78, no. 7, pp. 796–812, 2013.
[133] G. Doychev, D. Feld, B. Köpf, L. Mauborgne, and J. Reineke, “CacheAudit: A tool for the static analysis of cache side channels,” in USENIX Security Symposium (USENIX). USENIX Association, 2013, pp. 431–446.
[134] J. B. Almeida, M. Barbosa, G. Barthe, F. Dupressoir, and M. Emmi, “Verifying constant-time implementations,” in USENIX Security Symposium (USENIX). USENIX Association, 2016, pp. 53–70.
[135] C. Watt, J. Renner, N. Popescu, S. Cauligi, and D. Stefan, “CT-Wasm: type-driven secure cryptography for the web ecosystem,” PACMPL, vol. 3, no. POPL, pp. 77:1–77:29, 2019.
[136] S. Cauligi, G. Soeller, B. Johannesmeyer, F. Brown, R. S. Wahby, J. Renner, B. Grégoire, G. Barthe, R. Jhala, and D. Stefan, “FaCT: a DSL for timing-sensitive computation,” in ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). ACM, 2019, pp. 174–189.
[137] B. Rodrigues, F. M. Q. Pereira, and D. F. Aranha, “Sparse representation of implicit flows with applications to side-channel detection,” in International Conference on Compiler Construction (CC). ACM, 2016, pp. 110–120.
[138] B. Köpf, L. Mauborgne, and M. Ochoa, “Automatic quantification of cache side-channels,” in International Conference on Computer-Aided Verification (CAV), ser. LNCS, vol. 7358. Springer, 2012, pp. 564–580.
[139] J. Protzenko, J. K. Zinzindohoué, A. Rastogi, T. Ramananandro, P. Wang, S. Z. Béguelin, A. Delignat-Lavaud, C. Hritcu, K. Bhargavan, C. Fournet, and N. Swamy, “Verified low-level programming embedded in F*,” PACMPL, vol. 1, no. ICFP, pp. 17:1–17:29, 2017.
[140] M. Wu, S. Guo, P. Schaumont, and C. Wang, “Eliminating timing side-channel leaks using program repair,” in International Symposium on Software Testing and Analysis (ISSTA). ACM, 2018, pp. 15–26.
[141] G. Barthe, G. Betarte, J. D. Campo, C. D. Luna, and D. Pichardie, “System-level non-interference for constant-time cryptography,” in ACM Conference on Computer and Communications Security (CCS). ACM, 2014, pp. 1267–1279.
[142] D. Brumley and D. Boneh, “Remote timing attacks are practical,” in USENIX Security Symposium (USENIX). USENIX Association, 2003.
[143] D. J. Bernstein, “Cache-timing attacks on AES,” 2005.
[144] J.-P. Aumasson, “Guidelines for Low-Level Cryptography Software,” https://github.com/veorq/cryptocoding.
[145] B. Möller, “Security of CBC ciphersuites in SSL/TLS: Problems and countermeasures,” 2004. [Online]. Available: http://www.openssl.org/~bodo/tls-cbc.txt
[146] N. J. AlFardan and K. G. Paterson, “Lucky Thirteen: Breaking the TLS and DTLS record protocols,” in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2013, pp. 526–540.
[147] J. Somorovsky, “Curious padding oracle in OpenSSL (CVE-2016-2107),” 2016. [Online]. Available: https://web-in-security.blogspot.com/2016/05/curious-padding-oracle-in-openssl-cve.html
[148] P. Kocher, J. Horn, A. Fogh, D. Genkin, D. Gruss, W. Haas, M. Hamburg, M. Lipp, S. Mangard, T. Prescher, M. Schwarz, and Y. Yarom, “Spectre attacks: Exploiting speculative execution,” in IEEE Symposium on Security and Privacy (S&P). IEEE, 2019, pp. 1–19.
[149] M. Lipp, M. Schwarz, D. Gruss, T. Prescher, W. Haas, A. Fogh, J. Horn, S. Mangard, P. Kocher, D. Genkin, Y. Yarom, and M. Hamburg, “Meltdown: Reading kernel memory from user space,” in USENIX Security Symposium (USENIX). USENIX Association, 2018, pp. 973–990.
[150] G. Barthe, B. Grégoire, and V. Laporte, “Secure compilation of side-channel countermeasures: The case of cryptographic ‘constant-time’,” in IEEE Computer Security Foundations Symposium (CSF). IEEE Computer Society, 2018, pp. 328–343.
[151] D. Molnar, M. Piotrowski, D. Schultz, and D. A. Wagner, “The program counter security model: Automatic detection and removal of control-flow side channel attacks,” in International Conference on Information Security and Cryptology (ICISC), ser. LNCS, vol. 3935. Springer, 2005, pp. 156–168.
[152] A. Langley, “ctgrind,” 2010. [Online]. Available: https://github.com/agl/ctgrind/
[153] M. Andrysco, A. Nötzli, F. Brown, R. Jhala, and D. Stefan, “Towards verified, constant-time floating point operations,” in ACM Conference on Computer and Communications Security (CCS). ACM, 2018, pp. 1369–1382.
[154] M. Andrysco, D. Kohlbrenner, K. Mowery, R. Jhala, S. Lerner, and H. Shacham, “On subnormal floating point and abnormal timing,” in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2015, pp. 623–639.
[155] D. Kohlbrenner and H. Shacham, “On the effectiveness of mitigations against floating-point timing channels,” in USENIX Security Symposium (USENIX). USENIX Association, 2017, pp. 69–81.
[156] T. Kaufmann, H. Pelletier, S. Vaudenay, and K. Villegas, “When constant-time source yields variable-time binary: Exploiting curve25519-donna built with MSVC 2015,” in International Conference on Cryptology and Network Security (CANS), ser. LNCS, vol. 10052, 2016, pp. 573–582.
[157] G. Barthe, S. Blazy, B. Grégoire, R. Hutin, V. Laporte, D. Pichardie, and A. Trieu, “Formal verification of a constant-time preserving C compiler,” Proc. ACM Program. Lang., vol. 4, no. POPL, pp. 7:1–7:30, 2020.
[158] A. Reid, “Trustworthy specifications of ARM v8-A and v8-M system level architecture,” in Formal Methods in Computer-Aided Design (FMCAD). IEEE, 2016, pp. 161–168.
[159] A. Armstrong, T. Bauereiss, B. Campbell, A. Reid, K. E. Gray, R. M. Norton, P. Mundkur, M. Wassell, J. French, C. Pulte, S. Flur, I. Stark, N. Krishnaswami, and P. Sewell, “ISA semantics for ARMv8-A, RISC-V, and CHERI-MIPS,” PACMPL, vol. 3, no. POPL, pp. 71:1–71:31, 2019.
[160] G. Heiser, “For safety’s sake: We need a new hardware-software contract!” IEEE Design & Test, vol. 35, no. 2, pp. 27–30, 2018.
[161] D. Zhang, Y. Wang, G. E. Suh, and A. C. Myers, “A hardware design language for timing-sensitive information-flow security,” in International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). ACM, 2015, pp. 503–516.
[162] M. Tiwari, H. M. G. Wassel, B. Mazloom, S. Mysore, F. T. Chong, and T. Sherwood, “Complete information flow tracking from the gates up,” in International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). ACM, 2009, pp. 109–120.
[163] X. Li, V. Kashyap, J. K. Oberg, M. Tiwari, V. R. Rajarathinam, R. Kastner, T. Sherwood, B. Hardekopf, and F. T. Chong, “Sapper: a language for hardware-level security policy enforcement,” in International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). ACM, 2014, pp. 97–112.
[164] X. Li, M. Tiwari, J. Oberg, V. Kashyap, F. T. Chong, T. Sherwood, and B. Hardekopf, “Caisson: a hardware description language for secure information flow,” in ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). ACM, 2011, pp. 109–120.
[165] K. von Gleissenthall, R. G. Kici, D. Stefan, and R. Jhala, “IODINE: verifying constant-time execution of hardware,” in USENIX Security Symposium (USENIX). USENIX Association, 2019, pp. 1411–1428.
[166] H. Eldib, C. Wang, and P. Schaumont, “SMT-based verification of software countermeasures against side-channel attacks,” in International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), ser. LNCS, vol. 8413. Springer, 2014, pp. 62–77.
[167] A. G. Bayrak, F. Regazzoni, D. Novo, and P. Ienne, “Sleuth: Automated verification of software power analysis countermeasures,” in Conference on Cryptographic Hardware and Embedded Systems (CHES), ser. LNCS, vol. 8086. Springer, 2013, pp. 293–310.
[168] A. Moss, E. Oswald, D. Page, and M. Tunstall, “Compiler assisted masking,” in Conference on Cryptographic Hardware and Embedded Systems (CHES), ser. LNCS, vol. 7428. Springer, 2012, pp. 58–75.
[169] H. Eldib and C. Wang, “Synthesis of masking countermeasures against side channel attacks,” in International Conference on Computer-Aided Verification (CAV), ser. LNCS, vol. 8559. Springer, 2014, pp. 114–130.
[170] G. Barthe, S. Belaïd, F. Dupressoir, P. Fouque, B. Grégoire, and P. Strub, “Verified proofs of higher-order masking,” in Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), ser. LNCS, vol. 9056. Springer, 2015, pp. 457–485.
[171] G. Barthe, S. Belaïd, G. Cassiers, P. Fouque, B. Grégoire, and F. Standaert, “maskVerif: Automated verification of higher-order masking in presence of physical defaults,” in European Symposium on Research in Computer Security (ESORICS), ser. LNCS, vol. 11735. Springer, 2019, pp. 300–318.
[172] J. B. Almeida, M. Barbosa, G. Barthe, and F. Dupressoir, “Certified computer-aided cryptography: efficient provably secure machine code from high-level implementations,” in ACM Conference on Computer and Communications Security (CCS). ACM, 2013, pp. 1217–1230.
[173] L. Beringer, A. Petcher, K. Q. Ye, and A. W. Appel, “Verified correctness and security of OpenSSL HMAC,” in USENIX Security Symposium (USENIX). USENIX Association, 2015, pp. 207–221.
[174] J. B. Almeida, M. Barbosa, G. Barthe, and F. Dupressoir, “Verifiable side-channel security of cryptographic implementations: Constant-time MEE-CBC,” in International Conference on Fast Software Encryption (FSE), ser. LNCS, vol. 9783. Springer, 2016, pp. 163–184.
[175] A. Tomb, “Automated verification of real-world cryptographic implementations,” IEEE Security & Privacy, vol. 14, no. 6, pp. 26–33, 2016.
[176] J. K. Zinzindohoué, E. Bartzia, and K. Bhargavan, “A verified extensible library of elliptic curves,” in IEEE Computer Security Foundations Symposium (CSF). IEEE Computer Society, 2016, pp. 296–309.
[177] K. Q. Ye, M. Green, N. Sanguansin, L. Beringer, A. Petcher, and A. W. Appel, “Verified correctness and security of mbedTLS HMAC-DRBG,” in ACM Conference on Computer and Communications Security (CCS). ACM, 2017, pp. 2007–2020.
[178] A. Chudnov, N. Collins, B. Cook, J. Dodds, B. Huffman, C. MacCárthaigh, S. Magill, E. Mertens, E. Mullen, S. Tasiran, A. Tomb, and E. Westbrook, “Continuous formal verification of Amazon s2n,” in International Conference on Computer-Aided Verification (CAV), ser. LNCS, vol. 10982. Springer, 2018, pp. 430–446.
[179] K. Eldefrawy and V. Pereira, “A high-assurance evaluator for machine-checked secure multiparty computation,” in ACM Conference on Computer and Communications Security (CCS). ACM, 2019, pp. 851–868.
[180] J. Protzenko, B. Beurdouche, D. Merigoux, and K. Bhargavan, “Formally verified cryptographic web applications in WebAssembly,” in IEEE Symposium on Security and Privacy (S&P). IEEE, 2019, pp. 1256–1274.
[181] C. Meyer and J. Schwenk, “SoK: Lessons learned from SSL/TLS attacks,” in Proc. of the International Workshop on Information Security Applications (WISA), ser. LNCS, vol. 8267. Springer, 2013, pp. 189–209.
[182] J. Clark and P. C. van Oorschot, “SoK: SSL and HTTPS: revisiting past challenges and evaluating certificate trust model enhancements,” in IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2013, pp. 511–525.
[183] K. G. Paterson and T. van der Merwe, “Reactive and proactive standardisation of TLS,” in International Conference on Security Standardisation Research (SSR), ser. LNCS, vol. 10074. Springer, 2016, pp. 160–186.
[184] T. Ringer, K. Palmskog, I. Sergey, M. Gligoric, and Z. Tatlock, “QED at large: A survey of engineering of formally verified software,” Foundations and Trends in Programming Languages, vol. 5, no. 2-3, pp. 102–281, 2019.
[185] D. R. Jeffery, M. Staples, J. Andronick, G. Klein, and T. C. Murray, “An empirical research agenda for understanding formal methods productivity,” Information & Software Technology, vol. 60, pp. 102–112, 2015.
[186] K. Bhargavan, F. Kiefer, and P. Strub, “hacspec: Towards verifiable crypto standards,” in International Conference on Security Standardisation Research (SSR), ser. LNCS, vol. 11322. Springer, 2018, pp. 1–20.
[187] T. C. Hales, “The NSA back door to NIST,” Notices of the AMS, vol. 61, no. 2, pp. 190–192, 2014.
[188] S. Checkoway, J. Maskiewicz, C. Garman, J. Fried, S. Cohney, M. Green, N. Heninger, R. Weinmann, E. Rescorla, and H. Shacham, “A systematic analysis of the Juniper Dual EC incident,” in ACM Conference on Computer and Communications Security (CCS). ACM, 2016, pp. 468–479.
[189] A. Inoue, T. Iwata, K. Minematsu, and B. Poettering, “Cryptanalysis of OCB2: attacks on authenticity and confidentiality,” in International Cryptology Conference (CRYPTO), 2019, pp. 3–31.
[190] L. Chen, L. Chen, S. Jordan, Y.-K. Liu, D. Moody, R. Peralta, R. Perlner, and D. Smith-Tone, Report on post-quantum cryptography. US Department of Commerce, National Institute of Standards and Technology, 2016.