Journal of Computer Security 27 (2019) 137–163
DOI 10.3233/JCS-181136
IOS Press
Verifying constant-time implementations by
abstract interpretation
Sandrine Blazy ∗ , David Pichardie and Alix Trieu
Univ Rennes, Inria, CNRS, IRISA, France
Abstract. Constant-time programming is an established discipline to secure programs against timing attacks. Several real-world secure C libraries such as NaCl, mbedTLS, or Open Quantum Safe follow this discipline. We propose an advanced static
analysis, based on state-of-the-art techniques from abstract interpretation, to report time leakage during programming. To that
purpose, we analyze source C programs and use fully context-sensitive and arithmetic-aware alias analyses to track the tainted
flows.
We give semantic evidence of the correctness of our approach on a core language. We also present a prototype implementation
for C programs that is based on the CompCert compiler toolchain and its companion Verasco static analyzer. We present
verification results on various real-world constant-time programs and report on a successful verification of a challenging SHA-256 implementation that was out of scope of previous tool-assisted approaches.
Keywords: Abstract interpretation, constant-time programming, timing attacks, verification of C implementations
1. Introduction
To protect their implementations, cryptographers follow a very strict programming discipline called
constant-time programming. They avoid branchings controlled by secret data, as an attacker could use
timing attacks, a broad class of side-channel attacks that measure different execution times of
a program in order to infer some of its secret values [1,17,27,32]. They also avoid memory loads and
stores indexed by secret data because of cache-timing attacks. Several real-world secure C libraries such
as NaCl [12], mbedTLS [36], or Open Quantum Safe [40] follow this programming discipline.
Despite its name, constant-time programming does not ensure that the program runs in constant time, only that its running time does not depend on secrets. For instance, two different branches may
have different execution times, which is not harmful if the branching cannot leak information on secrets.
Balancing the running time of branches by adding no-op instructions may seem an attractive solution,
but these instructions may be removed by compilers, and the approach remains open to other attacks, as illustrated by [50].
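As a minimal illustration of the discipline (our own sketch, not code taken from the cited libraries; the function names are hypothetical), a secret-dependent branch can be replaced by branch-free selection through a bitmask:

```c
#include <stdint.h>

/* Leaky: the branch condition depends on the secret. */
uint32_t select_leaky(uint32_t secret_bit, uint32_t a, uint32_t b) {
    if (secret_bit)          /* secret-dependent branch: timing leak */
        return a;
    return b;
}

/* Constant-time: expand the secret bit into an all-ones or all-zeros
 * mask and select with bitwise operations only, without branching. */
uint32_t select_ct(uint32_t secret_bit, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)(-(int32_t)(secret_bit & 1u));
    return (a & mask) | (b & ~mask);
}
```

Both functions compute the same result; only the second obeys the discipline. Note that a compiler is in principle free to reintroduce a branch here, which is precisely why source-level verification must be coupled with a trustworthy compiler, as argued below.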
The constant-time programming discipline requires the transformation of source programs. These
transformations may be tricky and error-prone, mainly because they involve low-level features of C and
non-standard operations (e.g., bit-level manipulations). We argue that programmers need tool assistance
to use this programming discipline. First, they need feedback at the source level during programming,
in order to verify that their implementation is constant time and also to understand why a given implementation is not constant time as expected. Second, they need to trust that their compiler will not break
the security of source programs when translating the guarantees obtained at the source level. Indeed,
* Corresponding author. E-mail: sandrine.blazy@irisa.fr.
0926-227X/19/$35.00 © 2019 – IOS Press and the authors. All rights reserved
compiler optimizations could interfere with the previous constant-time transformations performed by
the programmer. In this paper, we choose to implement a static analysis at source level to simplify error
reporting, but couple the static analyzer to the highly trustworthy CompCert compiler [35]. This strategic
design choice allows us to take advantage of static analysis techniques that would be hard to apply at the
lowest program representation levels.
Static analysis is frequently used for identifying security vulnerabilities in software, for instance to detect security violations pertaining to information flow [23,30,44]. In this paper, we propose an advanced
static analysis, based on state-of-the-art techniques from abstract interpretation [19] (mainly fixpoint
iterations operating over source programs, use of widening operators, and computations performed by several abstract domains, including a memory abstract domain handling pointer arithmetic), to report time
leakage during programming.
Data originating from a statement where information may leak is tainted with the lowest security
level. Our static analysis uses two security levels, which we call secret (high level) and public (low level);
it analyzes source C programs and uses fully context-sensitive (i.e., the static analysis distinguishes the
different invocations of a same function) and arithmetic-aware (i.e., the cells of an array
are individually analyzed, even if they are accessed using pointer dereferencing and pointer arithmetic) alias analyses
to track the tainted flows.
We follow the abstract interpretation methodology: we design an abstract interpreter that executes over
security properties instead of concrete values, and use approximation of program executions to perform
fixpoint computations. We hence leverage the inference capabilities of advanced abstract interpretation
techniques such as relational numeric abstractions [37], abstract domain collaborations [28], and arithmetic-aware
alias analysis [14,38], to build a very precise taint analysis on C programs. As a consequence, even if a
program uses the same memory block to store both secret and public values during computations, our
analysis should be able to track it without generating too many false alarms. This programming
pattern appears in real-world implementations, such as the SHA-256 implementation in NaCl that we
are able to analyze.
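To illustrate this pattern (a simplified, hypothetical sketch written for this discussion, not NaCl's actual SHA-256 code; the function name is ours), the following C function reuses a single scratch buffer for both public and secret cells:

```c
#include <string.h>

/* One scratch buffer holds public cells (0..3) and secret cells (4..7).
 * A block-level taint analysis would taint the whole array as secret and
 * reject the public-indexed load below; a cell-precise, arithmetic-aware
 * analysis can keep the two halves apart. */
int reuse_scratch(const int public_in[4], const int secret_in[4]) {
    int scratch[8];
    memcpy(scratch, public_in, 4 * sizeof(int));      /* public half */
    memcpy(scratch + 4, secret_in, 4 * sizeof(int));  /* secret half */
    int idx = scratch[0] & 3;   /* index derived from public data only */
    return scratch[idx];        /* memory access indexed by public data */
}
```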
In this paper, we make the following contributions:
• We define a new methodology for verifying constant-time security of C programs. Our static analysis is fully automatic and sound by construction.
• We implement our approach in the Verasco static analyzer [31]. Verasco is a formally-verified static
analyzer that is connected to the formally-verified CompCert C compiler. We thus benefit from the
CompCert correctness theorem, stating roughly that a compiled program behaves as prescribed by
the semantics of its source program.
• We report our results obtained from a benchmark of representative cryptographic programs that are
known to be constant time. Thanks to the precision of our static analyzer, we are able to analyze
programs that are out of reach of state-of-the-art tools.
This paper is organized as follows. First, Section 2 presents the Verasco static analyzer. Then, Section 3 explains our methodology and details our abstract interpreter. Section 4 describes the experimental
evaluation of our static analyzer. Related work is described in Section 5, followed by conclusions.
2. The Verasco abstract interpreter
Verasco is a static analyzer based on abstract interpretation that is formally verified in Coq [31]. Its
proof of correctness ensures the absence of runtime errors (such as out-of-bound array accesses, null
pointer dereferences, and arithmetic exceptions) in the analyzed C programs. Verasco relies on several
abstract domains, including a memory domain that finely tracks properties related to memory contents,
taking into account type conversions and pointer arithmetic [14].
Verasco is connected to the CompCert C compiler, which is also formally verified in
Coq [35]. Its correctness theorem is a semantics preservation theorem; it states that compilation
does not introduce bugs in compiled programs. More precisely, Verasco operates over C#minor, a C-like
language that is the second intermediate language in the CompCert compilation pipeline.
Verasco raises an alarm as soon as it detects a potential runtime error. Its correctness theorem states
that if Verasco returns no alarm, then the analyzed program is safe (i.e., none of its observable behaviors
is an undefined behavior, according to the C#minor semantics). The design of Verasco is inspired by
Astrée [13], a milestone analyzer that was able to successfully analyze realistic safety-critical software
systems for aviation and space flights. Verasco follows a modular architecture similar to that of Astrée, which is
shown in Fig. 1.
First, at the bottom of the figure, a large hub of numerical abstract domains is provided to infer numerical invariants on programs. These properties can be relational, for example j + 1 ≤ i ≤ j + 2 in
a loop (with the Octagons or Polyhedra abstract domains). All these domains finely analyze the behavior of
machine integers and floating-point numbers (with potential overflows), while unsound analyzers would assume
ideal arithmetic. They are connected all together via communication channels that allow each domain
to improve its own precision via specific queries to other domains. As a consequence, Verasco is able to
infer subtle numerical invariants that require complex reasoning about linear arithmetic, congruence and
symbolic equalities.

Fig. 1. Architecture of the Verasco static analyzer.
Second, on top of these numerical abstractions sits an abstract memory functor [14] that tracks fine-grained aliases and interacts with the numerical domains. This functor can choose to represent every
cell of the same memory block with a single property, or to finely track each specific property of every
position in the block. Contrary to many other alias analyses, this approach allows us to reason on local
and global variables with the same level of precision, even when the memory addresses are manipulated
by the programmer. Some unavoidable approximations are performed when a memory
dereference corresponds to several possible targets, but Verasco keeps the impact of such imprecision as
limited as possible. Because of ubiquitous pointer arithmetic in C programs (even simple array accesses
are represented via pointer arithmetic in C semantics), the functor needs to ask advanced symbolic
numerical queries of the abstract numerical domains below it. In return, its role is to hide the
load and store operations from them, and to communicate only via symbolic numerical variables.
Third, the last piece of the analyzer is an advanced abstract interpreter that builds a fixpoint for the
analysis result. This task is a bit more complex than in standard dataflow analysis techniques that look
for the least solution of dataflow equation systems. In such settings, each equation is defined by means
of monotone operators in a well-chosen lattice without infinite ascending chains. By computing the
successive iterates of the transfer functions attached to each equation, starting from a bottom element,
the fixpoint computation always terminates on the least element of the lattice that satisfies all equations.
In contrast, the Verasco abstract interpreter relies on infinite lattices, where widening and narrowing
operators [19] are used for ensuring and accelerating the convergence. Smart iteration strategies are
crucial when using such accelerating operators because they directly impact the precision of the analysis
diagnosis. Verasco builds its strategy by following the structure of the program. On every program loop,
it builds a local fixpoint using accelerating techniques. At every function call, it makes a recursive call
of the abstract interpreter on the body of the callee. In the presence of function pointers, the callee may be resolved thanks to the state
abstraction functor. The recursive nature of the abstract interpreter
makes the analysis very precise because each function is independently analyzed as many times as there
are calling contexts that invoke it.
Furthermore, C#minor is classically structured in functions, statements, and expressions. Expressions
have no side effects; they include reading temporary variables (which do not reside in memory), taking
the address of a non-temporary variable, constants, arithmetic operations, and dereferencing addresses.
The arithmetic, logical, comparison, and conversion operators are roughly those of C, but without overloading: for example, distinct operators are provided for integer multiplication and floating-point multiplication. Likewise, there are no implicit casts: all conversions between numerical types are explicit.
Statements offer both structured control and goto with labels. C loops as well as break and continue statements are encoded as infinite loops with a multi-level exit n that jumps to the end of the
(n + 1)th enclosing block.
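The encoding can be sketched on a small example (our own; the commented C#minor-style form is only a rough approximation of what CompCert's front-end would produce, not its literal output):

```c
/* Sum of the even i in [0, n) with i < limit, using break and continue.
 * A C#minor-style encoding of the loop (sketch):
 *   block {                       // exit 1 leaves this block (break)
 *     loop {
 *       block {                   // exit 0 leaves this block (continue)
 *         if (!(i < n)) exit 1;
 *         if (i >= limit) exit 1;
 *         if (i % 2 != 0) exit 0;
 *         s = s + i;
 *       }
 *       i = i + 1;                // the for-loop increment
 *     }
 *   }
 * where `exit k` jumps to the end of the (k+1)-th enclosing block. */
int sum_even_below(int n, int limit) {
    int s = 0;
    for (int i = 0; i < n; i++) {
        if (i >= limit) break;
        if (i % 2 != 0) continue;
        s += i;
    }
    return s;
}
```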
3. Verifying constant-time security
Our static analyzer operates over C#minor programs. In this paper, we use a simpler While toy language for clarity. It is defined in the first part of this section. Then, we detail our model for constant-time leakage, and explain the tainting semantics we have defined to track data dependencies in programs.
Last, we explain the main algorithm of our static analyzer, and detail its proof of correctness.
Fig. 2. Syntax of While programs.
3.1. The While language
Our While language is classically structured in statements and expressions, as shown in Fig. 2. Expressions include integer constants, array identifiers, variable identifiers, arithmetic operations and tests.
Statements include skip statements, stores ∗x ← y, loads x ← ∗y, assignments x ← y, sequences, and if
and while statements.
Our While language is peculiar in that it supports arrays in order to model memory aliasing. We will mainly
use a for array identifiers and x for variable identifiers. As an example, the program x ← a + 2; y ←
∗(x + 3) first assigns the value a + 2 to variable x and then loads the value at offset 5 of the
array a into the variable y. In this example, x − 2 is an alias of a.
The semantics of While is defined in Fig. 3 using a small-step style for statements and a big-step style
for expressions, which supports reasoning about non-terminating programs. Contrary to the C language, the
semantics is deterministic (and so is the semantics of C#minor).
A location, usually named l, is a pair of an array identifier and an offset represented by a positive
integer. A value v can either be a location or an integer. An environment σ is a pair (σX , σA ) composed
of a partial map from variables in set X of variable identifiers to values and a partial map from memory
locations A×N to values where A is a set of array identifiers and values V are either locations or integers.
We will write σ (x) to mean σX (x) and σ (l) for σA (l).
Given an environment σ , an expression e evaluates to a value v (written σ, e → v). A constant
is interpreted as an integer. An array identifier a evaluates to its location; it is equivalent to writing
a + 0. To evaluate a variable, its value is looked up in the environment σ, and more precisely in its σX
component. Finally, to evaluate e1 ⊕ e2, one simply evaluates e1 and e2 separately and combines
the resulting values by interpreting ⊕ as its corresponding operator ⊞, where ⊞ is the usual semantics
of the operator ⊕ ∈ {+, −, ×, /, =, <, >}. For example, e1 = e2 returns 0 if the test is false, and 1
otherwise.
The execution of a statement s results in an updated state with a new environment σ ′ and a new
statement to execute s ′ , written σ, s → σ ′ , s ′ . We write σ (e) to denote the value of expression e in
state σ (i.e., σ, e → σ (e)) and we use σ [l → v] to denote the environment that behaves the same
as σ except that it returns value v for location l. We consider all arrays to be of finite size and initially
declared, similarly to global variables in C. Thus, σ(a, n) and σ[(a, n) → v] may fail, either because
a is not a valid array name or because the access is out of bounds. σ(l) = v means that l is a valid
location for σ, whereas σ(l) = ⊥ means the opposite. Similarly, σ[l → v] = σ′ indicates the success
of the update. We assume a memory model similar to C’s except that variables have no addresses but
behave more like registers.
To execute a store ∗e1 ← e2, e1 must first evaluate to a location l and e2 to a value v; the environment is then updated so that location l maps to v. Similarly, to execute a load x ← ∗e,
the expression e must first evaluate to a location l; its corresponding value v is then retrieved from the environment, and the environment is updated so that x maps to v. To execute the assignment x ← e,
one only needs to evaluate e and update the environment so that x maps to the resulting value. To execute a sequence p1; p2, either p1 is a skip and p2 is the only statement left to execute, or we first need
to execute p1, resulting in a new state σ′, p1′; then, p1′; p2 is left to execute in the new environment σ′.

Fig. 3. Semantics of While programs.
Classically, in order to execute a conditional branching if e then p1 else p2, one needs to
evaluate e and then execute the appropriate branch. Similarly, a loop while e do p stops if e
evaluates to false and continues otherwise.
As a remark, the evaluation of an expression can only be stuck in two ways, either because it is trying
to retrieve the value of an undefined variable (i.e., σ (x) fails when x is not defined in σ ), or because
v1 ⊞ v2 is not defined (e.g., because of a division by 0). Finally, the execution of statements can only be
stuck when a semantic rule evaluates an expression that gets stuck, when the corresponding result has the
wrong value type, or when the result is an invalid location. For instance, a branching statement cannot branch
on a location value, and σ(l) fails when the access is out of bounds or there is no associated value yet
in the environment.
The reflexive transitive closure of this small-step semantics represents the execution of a program.
When the program terminates (resp. diverges, e.g. when an infinite loop is executed), it is a finite (resp.
infinite) execution of steps. The execution of a program is safe iff either the program terminates (i.e., its
final semantic state is σ, skip, meaning that there is no more statement to execute) or the program
diverges. The execution of a program is stuck on σ, s when s differs from skip and no semantic
rule can be applied. A program is safe when all of its executions are safe. We write (σi , pi )i for the
execution σ0 , p0 → σ1 , p1 → . . . of program p0 with initial environment σ0 .
3.2. Constant-time security
In our model, we assume that branching statements and memory accesses may leak information
through their execution. We use a definition of constant-time security similar to the one given in [3]. We
define a leakage model L as a map from semantic states σ, p to sequences of observations L(σ, p),
with ε being the empty observation. Two executions are said to be indistinguishable when their observations are the same:

L(σ0, p0) · L(σ1, p1) · … = L(σ0′, p0′) · L(σ1′, p1′) · … .
Definition 1 (Constant-time leakage model). Our leakage model is such that the following equalities
hold:
(1) L(σ, if e then p1 else p2) = σ(e)
(2) L(σ, while e do p) = σ(e)
(3) L(σ, ∗e1 ← e2) = σ(e1)
(4) L(σ, x ← ∗e) = σ(e)
(5) L(σ, p1; p2) = L(σ, p1)
(6) L(σ, p) = ε otherwise
The first and second lines mean that the value of branching conditions is considered as leaked. The
third and fourth lines mean that the address of a store and load access is also considered as leaked. The
fifth line explains that a sequence leaks exactly what is leaked by the first part of the sequence; this is
due to the semantics of sequence which depends on the execution of the first statement. Finally, the other
statements produce a silent observation. Given this leakage model, the following lemma follows.
Lemma 1 (Same control-flow). If (σ1 , p1 ) → (σ2 , p2 ) and (σ1′ , p1′ ) → (σ2′ , p2′ ) such that p1 =
p1′ and L (σ1 , p1 ) = L (σ1′ , p1′ ) then p2 = p2′ .
Proof. By induction on (σ1 , p1 ) → (σ2 , p2 ):
• In the assign, store and load cases, p2 = p2′ = skip.
• In the skipseq case, there exists p such that p1 = p1′ = skip;p and thus p2 = p2′ = p.
• In the seq case, there exist q1, q1′, q1′′ and q2 such that p1 = p1′ = q1; q2 and (σ1 , q1 ) → (σ2 , q1′ )
and (σ1′ , q1 ) → (σ2′ , q1′′ ). In order to use the induction hypothesis to prove q1′ = q1′′ , we first
need to prove that L (σ1 , q1 ) = L (σ1′ , q1 ). This is true by definition since L (σ1 , p1 ) =
L (σ1 , q1 ; q2 ) = L (σ1 , q1 ) and also L (σ1′ , p1′ ) = L (σ1′ , q1 ; q2 ) = L (σ1′ , q1 ) and
L (σ1 , p1 ) = L (σ1′ , p1′ ). Thus, since p2 = q1′ ; q2 and p2′ = q1′′ ; q2 , we have finally p2 = p2′ .
• In the iftrue, iffalse, whiletrue and whilefalse cases, we simply use L (σ1 , p1 ) = L (σ1′ , p1′ ) to
justify that the same branch is taken.
Finally, the following theorem follows by induction.
Theorem 1. Two indistinguishable executions of a program necessarily have the same control flow.
Proof. Suppose we have two indistinguishable executions (σi , pi )i and (σi′ , pi′ )i such that p0 = p0′ .
We prove by induction on i that for all i, pi = pi′ .
• It is true by hypothesis for i = 0.
• Suppose that pi = pi′ .
∗ If the execution is stuck for σi , pi , then necessarily it is because pi tries to write or read an
invalid location (i.e. the value is not a location but a constant or it is an out-of-bound location) or
it tries to branch on a non-boolean value. However, by definition of indistinguishability and the
leakage model, these values must be the same in both executions, thus the execution is also stuck
for σi′ , pi′ .
∗ Symmetrically, if the execution is stuck for σi′ , pi′ , it is also stuck for σi , pi .
∗ If σi, pi → σi+1, pi+1, then there exists σi+1′, pi+1′ such that σi′, pi′ → σi+1′, pi+1′, or both
executions would have been stuck. By using the previous lemma, we prove that pi+1 = pi+1′.
Both executions have thus the same control flow.
Given a program, we assume that the attacker has access to the values of some of its inputs, which we
call the public input array variables, and does not have access to the other ones, which we call the secret
input array variables. Given a set X of array names, and two environments σ and σ ′ , we say that σ and
σ ′ are X-equivalent if σ and σ ′ both share the same public input values. Two executions (σi , pi )i and
(σi′ , pi′ )i are initially X-equivalent if σ0 and σ0′ are X-equivalent.
Definition 2 (Constant-time security). A program p is constant time if for any set Xi of public input
array variables, all of its initially Xi -equivalent executions are indistinguishable.
This definition means that, for a constant-time program, any pair of executions that differ only
on secrets must leak exactly the same information. This also gives a definition of constant-time
security for infinite executions.
3.3. Reducing security to safety
In order to prove that a program satisfies constant-time security as defined in Definition 2, we reduce
the problem to checking whether the program is safe in a different semantics. The issue is thus twofold:
first, we need to prove that safety in this instrumented semantics implies constant-time security in the
standard semantics; second, we need to design an analyzer for this second semantics. This can also
Fig. 4. Methodology.
be obtained by modifying an analyzer for the standard semantics, as illustrated by Fig. 4. Plain lines
indicate what we assume to already have, while dashed lines indicate what needs to be designed or
proved.
We introduce an intermediate tainting semantics for While programs in Fig. 5, and use a dedicated symbol
to distinguish its executions from those of the original semantics. The tainting semantics is an instrumentation of the While semantics
that tracks dependencies related to secret values. In the tainting semantics,
a program gets stuck if branchings or memory accesses depend on secrets. We introduce taints, either
H (High) or L (Low), to respectively represent secret and public values, and a union operator on taints
defined as follows: L ⊔ L = L and, for all t, H ⊔ t = t ⊔ H = H. It is used to compute the taint of
a binary expression. In the instrumented semantics, we take taints into account in semantic values: the
semantic state σ becomes a tainted state σ̄, where locations are now mapped to pairs made of a value
and a taint.
Let us note that for a dereferencing expression ∗e to have a value, the taint associated with e must be L.
Indeed, we forbid memory read accesses that might leak secret values. This concerns dereferencing
expressions (loads) and assignment statements. Similarly, test conditions in branching statements must
also have an L taint.
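The taint union and its use on binary expressions can be sketched in C (our own illustrative names, not part of the formal development):

```c
/* Two-point taint lattice: L (public) and H (secret). */
typedef enum { L_TAINT = 0, H_TAINT = 1 } taint;

/* Union operator: the join of two L taints is L, and H absorbs. */
taint taint_join(taint t1, taint t2) {
    return (t1 == H_TAINT || t2 == H_TAINT) ? H_TAINT : L_TAINT;
}

/* A tainted value: the taint of e1 (+) e2 is the join of the taints
 * of its operands, as in the tainting semantics of Fig. 5. */
typedef struct { int value; taint t; } tvalue;

tvalue tv_add(tvalue v1, tvalue v2) {
    tvalue r = { v1.value + v2.value, taint_join(v1.t, v2.t) };
    return r;
}
```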
The instrumented semantics strictly forbids more behaviors than the regular semantics (defined in
Fig. 3) as shown by the following lemma.
Lemma 2. Any execution (σ̄i, pi)i of program p0 in the tainting semantics implies that (σi, pi)i is
an execution of p0 in the regular semantics, where for all i, σi = E ∘ σ̄i, and E(a, b) = a for all pairs
(a, b) is an erasure function.
Proof. For all σ̄, σ̄′, p, p′ such that σ̄, p → σ̄′, p′ in the tainting semantics, we can easily prove by immediate induction that
σ, p → σ′, p′ in the regular semantics, where σ = E ∘ σ̄ and σ′ = E ∘ σ̄′.
Finally, by induction on the execution and using this single-step property, the lemma is easily proven.
However, the converse is not necessarily true. For instance, suppose that variable x contains a secret
value. Then, ∗(a + x) ← 2 is not safe in the instrumented semantics because a + x has taint H, while it
is safe in the regular semantics provided that a + x corresponds to a valid location.
An immediate consequence of the lemma is that the instrumented semantics preserves the regular
behavior of programs, as stated by the following theorem.
Fig. 5. Tainting semantics for While programs.
Theorem 2. Any safe execution (σ̄i, pi)i of program p0 in the tainting semantics implies that the
execution (σi, pi)i is also safe in the regular semantics.
As an immediate corollary, any safe program according to the tainting semantics is also safe according
to the regular semantics.
Proof. Let (σ̄i, pi)i be a safe execution of p0 in the tainting semantics. As it is a safe execution, it
either diverges or terminates.
• If (σ̄i, pi)i is diverging (i.e. infinite), then so is (σi, pi)i thanks to the previous lemma.
• If (σ̄i, pi)i is terminating, then there exists some n such that pn = skip, therefore (σi, pi)i≤n
is also terminating.
In both cases, (σi, pi)i is a safe execution in the regular semantics.
Theorem 2 is useful to prove our main theorem relating our instrumented semantics and the constant-time property we want to verify on programs.
Theorem 3. Any safe program w.r.t. the tainting semantics is constant time.
Proof. Let p0 be a safe program with respect to the tainting semantics. Let Xi be a set of public variables
and let (σi, pi)i and (σi′, pi′)i be two safe executions of p0 that are initially Xi-equivalent.
We now need to prove that both executions are indistinguishable. Let σ̄0 be such that for all x ∈ Xi,
n ∈ N, σ̄0(x, n) = (σ0(x, n), L), and for all x ∉ Xi, n ∈ N, σ̄0(x, n) = (σ0(x, n), H).
By safety of program p0 according to the tainting semantics, there exist states σ̄1, σ̄2, … such
that σ̄0, p0 → σ̄1, p1 → … is a safe execution in the tainting semantics. Let σn′ = E ∘ σ̄n; we prove by strong induction on n
that σn′ = σn.
• It is clearly true for n = 0 by definition of σ̄0.
• Suppose it is true for all k < n and let us prove it for n. By using Theorem 2, we know that there
exists a safe execution σ0, p0 → σ1′, p1′ → ⋯ → σn′, pn′ → ⋯. Furthermore, the semantics
is deterministic and we know that σ0, p0 → σ1, p1 → ⋯. Therefore, we have the following
series of equalities: σ1′ = σ1, p1′ = p1, …, σn′ = σn, pn′ = pn.
Thus, for all k ∈ N, the state σ̄k verifies σk = E ∘ σ̄k. Similarly, we define σ̄0′, σ̄1′, … for the second
execution, which also satisfies the same property by construction.
Finally, we need to prove that for all n ∈ N, L(σn, pn) = L(σn′, pn′).
First, we informally define the notation σn =L σn′ for all n ∈ N as: σ̄n and σ̄n′, as previously defined,
agree on the taints of both variables and locations, and if the taint is L, then they also agree on the value.
Formally, this means that for all r, where r is either a location l or a variable x, either σ̄n(r) and σ̄n′(r)
are both undefined, or there exists a taint t such that σ̄n(r) = (σn(r), t) and σ̄n′(r) = (σn′(r), t), and if t = L,
then σn(r) = σn′(r).
Second, we prove by induction on e that for all n and e, if σn =L σn′, σ̄n, e → (v, t) and
σ̄n′, e → (v′, t′), then t = t′, and if t = t′ = L, then v = v′.
• This is trivially true if e = n or e = a.
• If e = x, then it is true by definition of σn =L σn′.
• If e = e1 ⊕ e2, then we apply the induction hypotheses on σ̄n, e1 → (v1, t1) and σ̄n′, e1 → (v1′, t1′),
and on σ̄n, e2 → (v2, t2) and σ̄n′, e2 → (v2′, t2′). Since t = t1 ⊔ t2, t′ = t1′ ⊔ t2′, t1 = t1′
and t2 = t2′, we have t = t′. If t = t′ = L, then t1 = t1′ = L and t2 = t2′ = L, thus
v = v1 ⊞ v2 = v1′ ⊞ v2′ = v′.
This lemma is thus proven.
Finally, for all n ∈ N, let us prove by induction on pn that if pn = pn′ and σn =L σn′, then pn+1 = pn+1′
and σn+1 =L σn+1′.
• If pn = skip; p′, it is true because pn+1 = pn+1′ = p′, σn+1 = σn and σn+1′ = σn′.
• If pn = p; p′, it is true by induction hypothesis.
• If pn = if e … or pn = while e …, we have σn+1 = σn and σn+1′ = σn′. Furthermore, we
know that there exists some v such that σ̄n, e → (v, L), and similarly there exists v′ such that
σ̄n′, e → (v′, L), because of the safety in the tainting semantics. Since σn(e) = v, σn′(e) = v′ and
σn =L σn′, we have v = v′ by using the previous lemma, and thus pn+1 = pn+1′.
• If pn = x ← ∗e, we can prove as previously that σn(e) = σn′(e) = l. Furthermore, we have
pn+1 = pn+1′ = skip. It is left to prove that σn+1 =L σn+1′. If σ̄n(l) = (v, t) and σ̄n′(l) = (v′, t′),
then t = t′ since σn =L σn′. If t = t′ = L, then v = v′, σn+1 = σn[x → v] and σn+1′ = σn′[x →
v′], thus σn+1 =L σn+1′. Similarly, if t = t′ = H, then σn+1 =L σn+1′.
• If pn = x ← e, we know that pn+1 = pn+1′ = skip. Furthermore, there exist v, v′, t, t′ such that
σ̄n, e → (v, t) and σ̄n′, e → (v′, t′). By using the previous lemma, we know that t = t′, and if
t = t′ = L, then v = v′. Thus σn+1 = σn[x → v] =L σn′[x → v′] = σn+1′.
• If pn = ∗e1 ← e2, we have pn+1 = pn+1′ = skip. By using the same reasoning as previously,
we can prove that σn(e1) = σn′(e1) = l. There exist v, v′, t, t′ such that σ̄n, e2 → (v, t) and
σ̄n′, e2 → (v′, t′), and thus σn+1 = σn[l → v] and σn+1′ = σn′[l → v′]. By using the previous lemma,
we know that t = t′ and if t = t′ = L, then v = v′ and σn+1 =L σn+1′. If t = t′ = H, then
σn+1 =L σn+1′ by definition.
Finally, by exploiting this lemma, an induction proves that for all n ∈ N, pn = pn′ and σn =L σn′.
Furthermore, a direct consequence is that for all n ∈ N, L(σn, pn) = L(σn′, pn′), and thus both
executions are indistinguishable: the program is constant time.
3.4. Abstract interpreter
To prove that a program is safe according to the tainting semantics, we design a static analyzer based
on abstract interpretation. It computes a correct approximation of the executions of the analyzed program;
thus, if the approximated execution is safe, then the actual execution must necessarily be safe.
Similarly to how we built a tainting semantics from a regular semantics, we explain how to modify an
abstract interpreter for the regular semantics into an abstract interpreter for the tainting semantics. First,
we suppose that the regular abstract interpreter provides a domain of abstract values V♯ that supports an
operator concretize♯ : V♯ → P (V) which takes an abstract value and returns the concrete values represented by the abstract value. We also suppose that the abstract interpreter provides M♯ , an abstraction
of concrete environments that maps locations and variables to values. We do not need nor want to know
exactly how M♯ is defined, as it might use relational definitions which are quite complex. We only need
to use M♯ to modify the abstract analyzer.
Finally, we suppose that the abstract analyzer provides the following abstract operators:
• eval♯ : M♯ → expr → V♯ takes an abstract environment and an expression, evaluates the expression in the abstract environment, and returns the corresponding abstract value;
• assign♯ : M♯ → X → expr → M♯ takes an abstract environment, a variable identifier and an expression, and models an assignment to the variable;
• store♯ : M♯ → expr → expr → M♯ takes an abstract environment and two expressions e1 and e2, and models ∗e1 ← e2 ;
• load♯ : M♯ → X → expr → M♯ takes an abstract environment, a variable identifier and an expression, and models a load x ← ∗e;
Fig. 6. Abstract taint lattice.
• assert♯ : M♯ → expr → M♯ takes an abstract environment and an expression, and returns an abstract
environment where the expression holds. This is useful when analyzing a branching condition such
as x < 5: if we know beforehand that x ∈ [0, 42], we can restrict x to [0, 4] in the “then” branch,
and to [5, 42] in the “else” branch.
The abstract operators form an interface that is parameterized by V♯ and M♯ that we will name
AbMem(V♯ , M♯ ).
Now, in order for the analyzer to handle the tainting semantics, we need to introduce an abstraction
of taints T♯ = {L♯ , H♯ } which forms the lattice represented in Fig. 6. We will use L♯ to indicate a value
that has exactly taint L, while H♯ indicates that a value may have taint L or H. When analyzing a snippet
in which the value that will be assigned to variable x may carry either taint depending on the execution,
it is necessary to correctly approximate that taint: as it can either be L or H, we use the approximation H♯.
We could have used H♯ to indicate that a variable or location can only have an H value; however, constant-time
security is not interested in knowing that a value has exactly taint H, but only in knowing that it may have
taint H.
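For instance (a hypothetical snippet of our own; the original listing is not reproduced here), a load whose public index selects between a public cell and a secret cell yields a value whose taint is L on some executions and H on others, so a sound analyzer must approximate the taint of x by H♯:

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical example: t[0] is public, t[1] is secret.
   After the load below, x holds a value whose taint is L when
   i == 0 and H when i == 1, so the analyzer must use H# for x. */
uint32_t pick(const uint32_t t[2], int i /* public, 0 or 1 */) {
    uint32_t x = t[i]; /* taint of x: L if i == 0, H if i == 1 */
    return x;
}
```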
Now, we explain how to modify the analyzer so that it can track abstract taints. This effectively forms
a functor from the previous interface AbMem(V♯ , M♯ ) to a new interface AbMem(V̄♯ , M̄♯ ) that can track
abstract taints, where V̄♯ = V♯ × T♯ and M̄♯ = M♯ × ((X + L) → T♯ ).
We first start by defining taint♯ : M̄♯ → expr → T♯ + ⊥, which returns the abstract taint corresponding
to the evaluation of an expression. We use T (a, b) = b as tainting function, the companion of the erasure
function E.

taint♯ (σ ♯ , n) = L♯
taint♯ (σ ♯ , a) = L♯
taint♯ (σ ♯ , x) = T (σ ♯ )(x)
taint♯ (σ ♯ , e1 ⊕ e2 ) = taint♯ (σ ♯ , e1 ) ⊔♯ taint♯ (σ ♯ , e2 )
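This recursion over expressions is straightforward to implement; the following is our own toy rendition over a minimal expression AST (the types and names are assumptions, not the paper's code):

```c
#include <assert.h>

/* Abstract taints: L# = definitely public, H# = may be secret. */
typedef enum { TAINT_L, TAINT_H } taint_t;
typedef enum { E_CONST, E_ADDR, E_VAR, E_BINOP } ekind_t;

typedef struct expr {
    ekind_t kind;
    int var;                  /* for E_VAR: variable index        */
    const struct expr *l, *r; /* for E_BINOP: non-null operands   */
} expr_t;

/* taint#(σ#, e): constants and addresses are L#, a variable reads
   its abstract taint from the environment, and a binary operator
   joins (⊔#) the taints of its operands. */
static taint_t taint_abs(const taint_t *env, const expr_t *e) {
    switch (e->kind) {
    case E_CONST:
    case E_ADDR: return TAINT_L;
    case E_VAR:  return env[e->var];
    default:     /* E_BINOP: join of the operand taints */
        return (taint_abs(env, e->l) == TAINT_H ||
                taint_abs(env, e->r) == TAINT_H) ? TAINT_H : TAINT_L;
    }
}
```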
We now define the following abstract operators (i.e., transfer functions), writing the lifted operators with a bar:

• ēval♯ (σ ♯ , e) = (eval♯ (E (σ ♯ ), e), taint♯ (σ ♯ , e))
• āssign♯ (σ ♯ , x, e) = (assign♯ (E (σ ♯ ), x, e), T (σ ♯ )[x → taint♯ (σ ♯ , e)])
• āssert♯ (σ ♯ , e) = (assert♯ (E (σ ♯ ), e), T (σ ♯ ))
• s̄tore♯ (σ ♯ , e1 , e2 ) = (store♯ (E (σ ♯ ), e1 , e2 ), T (σ ♯ )[l → taint♯ (σ ♯ , e2 )]l∈concretize♯ (eval♯ (E (σ ♯ ),e1 )) )
• l̄oad♯ (σ ♯ , x, e) = (load♯ (E (σ ♯ ), x, e), T (σ ♯ )[x → ⊔♯l∈concretize♯ (eval♯ (E (σ ♯ ),e)) T (σ ♯ )(l)])
The definitions of ēval♯ , āssign♯ and āssert♯ reuse the operators of AbMem(V♯ , M♯ ) and slightly modify
the tainting part. The definitions of s̄tore♯ and l̄oad♯ are more complex. In both cases, we
need to use eval♯ to deduce all possible locations affected by the memory accesses and suitably update
the tainting parts. For s̄tore♯ (σ ♯ , e1 , e2 ), all possible write locations given by the concretization of
eval♯ (E (σ ♯ ), e1 ) are updated, while for l̄oad♯ (σ ♯ , x, e), we approximate the taints from all possible read
locations given by the concretization of eval♯ (E (σ ♯ ), e). This concludes the definition of AbMem(V̄♯ , M̄♯ ).
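The taint components of the store and load operators can be illustrated concretely. The following is our own simplified sketch, in which an explicit array of candidate locations stands in for the concretization of the evaluated address expression:

```c
#include <assert.h>
#include <stddef.h>

/* Abstract taints: L# = definitely public, H# = may be secret. */
typedef enum { TAINT_L, TAINT_H } taint_t;

/* Join on the two-point taint lattice: anything joined with H# is H#. */
static taint_t taint_join(taint_t a, taint_t b) {
    return (a == TAINT_H || b == TAINT_H) ? TAINT_H : TAINT_L;
}

/* Taint part of load#: x receives the join of the taints of all
   locations the address expression may denote. */
static taint_t abstract_load_taint(const taint_t *mem_taint,
                                   const size_t *locs, size_t nlocs) {
    taint_t t = TAINT_L; /* neutral element of the join */
    for (size_t i = 0; i < nlocs; i++)
        t = taint_join(t, mem_taint[locs[i]]);
    return t;
}

/* Taint part of store#: every location possibly written receives the
   taint of the stored expression, as in the definition in the text. */
static void abstract_store_taint(taint_t *mem_taint,
                                 const size_t *locs, size_t nlocs,
                                 taint_t val_taint) {
    for (size_t i = 0; i < nlocs; i++)
        mem_taint[locs[i]] = val_taint;
}
```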
Finally, the abstract analysis ❏p❑♯ (σ ♯ ) of program p starting with tainted abstract environment
σ ♯ is defined in Fig. 7. To analyze p1 ; p2 , first p1 is analyzed and then p2 is analyzed using the
environment given by the first analysis. Similarly, to analyze a statement (if e then p1 else p2),
p1 is analyzed assuming that e is true and p2 is analyzed assuming the opposite; ⊔♯ is then used to get
an over-approximation of both results.
Fig. 7. Abstract execution of statements.

The loop (while e do p) is the trickiest part to analyze, as the analysis cannot just analyze one
iteration of the loop body and then recursively analyze the loop again, since this may never terminate. The
analysis thus tries to find a loop invariant. The standard method in abstract interpretation is to compute
a post-fixpoint of the function iter(e, p, σ ♯0 , ·) defined in Fig. 7. It represents a loop invariant; the
final result is thus the invariant restricted to the case where the test condition no longer holds. In order to compute the
post-fixpoint, we use pfp(f ), which computes a post-fixpoint of a monotone function f by successively
computing ⊥, f (⊥), f (f (⊥)), . . ., and forces convergence using a widening-narrowing operator on the
M♯ part. Computing ⊥, f (⊥), f (f (⊥)), . . . converges toward the least fixpoint (i.e., the most precise
invariant) when the process terminates, according to Kleene's theorem, but this process is not guaranteed
to end. The usual solution in abstract interpretation to ensure termination is to use a widening operator ∇
which over-approximates the limit of ⊥, f (⊥), f (f (⊥)), . . . by instead computing the sequence x0 = ⊥,
xn+1 = xn ∇ f (xn ), which converges in a finite number of steps. However, the resulting post-fixpoint may
be grossly imprecise, and a narrowing operator can be used in the same way to improve precision.
The taint part does not require convergence help because taints form a finite lattice.
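The pfp iteration can be sketched on a toy interval domain (our own illustration; Verasco's actual widening and narrowing operators are far more elaborate). For a transfer function modeling i = 0; while (i < 10) i++;, widening yields the coarse invariant [0, +∞], and a single narrowing step (re-applying the transfer function) recovers [0, 10]:

```c
#include <assert.h>
#include <limits.h>

/* A toy interval abstract value [lo, hi]. */
typedef struct { long lo, hi; } itv_t;

/* Standard interval widening: an unstable bound jumps to infinity,
   forcing the Kleene iteration to converge in finitely many steps. */
static itv_t widen(itv_t a, itv_t b) {
    itv_t r;
    r.lo = (b.lo < a.lo) ? LONG_MIN : a.lo;
    r.hi = (b.hi > a.hi) ? LONG_MAX : a.hi;
    return r;
}

/* pfp(f): iterate x := x ∇ f(x) until f(x) ⊑ x, i.e. x is a
   post-fixpoint of f. */
static itv_t pfp(itv_t (*f)(itv_t), itv_t start) {
    itv_t x = start;
    for (;;) {
        itv_t fx = f(x);
        if (fx.lo >= x.lo && fx.hi <= x.hi) /* f(x) ⊑ x: done */
            return x;
        x = widen(x, fx);
    }
}

/* Abstract effect of one unfolding of "i = 0; while (i < 10) i++;"
   on the invariant candidate for i: f([0,h]) = [0, min(h+1, 10)]. */
static itv_t loop_body(itv_t x) {
    itv_t r = { 0, x.hi < 9 ? x.hi + 1 : 10 };
    return r;
}
```

Starting from [0, 0], the widened post-fixpoint is [0, +∞]; one narrowing step loop_body([0, +∞]) = [0, 10] restores the precise invariant.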
3.5. Correctness of the abstract interpreter
In order to specify and prove the correctness of the analyzer, we follow the usual methodology in
abstract interpretation and define a collecting semantics, aiming at facilitating the proof. The semantics
still expresses the dynamic behavior of programs but takes a form closer to the analysis. It operates over
properties of concrete environments, thus bridging the gap between concrete environments and abstract
environments, which represent sets of concrete environments.
The collecting semantics aims at describing the resulting environments that can be reached given a
specific instruction and a set of environments. The collecting semantics of a program p applied to a set of
concrete environments Σ is written ❏p❑(Σ).
Similarly to the abstract interpreter, we define Assign, Store, Load and Assert. They will respectively
serve as counterparts to āssign♯ , s̄tore♯ , l̄oad♯ and āssert♯ . We first start with Assign:

Assign(Σ, x, e) = {σ [x → (v, t)] | ∃v ∈ V, t ∈ T, σ (e) = (v, t) ∧ σ ∈ Σ}

Given a set of concrete environments Σ, Assign(Σ, x, e) computes the set of all environments reachable
from environments in Σ after executing x ← e in the tainting semantics.
Next are Store and Load:

Store(Σ, e1 , e2 ) = {σ [l → (v, t)] | ∃l ∈ L, v ∈ V, t ∈ T, σ (e1 ) = (l, L) ∧ σ (e2 ) = (v, t) ∧ σ ∈ Σ}

Load(Σ, x, e) = {σ [x → (v, t)] | ∃l ∈ L, v ∈ V, t ∈ T, σ (e) = (l, L) ∧ σ (l) = (v, t) ∧ σ ∈ Σ}

Given a set of concrete environments Σ, Store(Σ, e1 , e2 ) (resp. Load(Σ, x, e)) computes the set of all
environments reachable from environments in Σ after executing ∗e1 ← e2 (resp. x ← ∗e) in
the tainting semantics.
Assert removes the environments where e is not true:

Assert(Σ, e) = {σ ∈ Σ | ∃t, σ (e) = (true, t)}
Finally, the collecting semantics is defined in Fig. 8. Looking at the rules in Fig. 7 and Fig. 8, one can
notice that the collecting semantics closely follows the shape of the abstract interpreter. The collecting
semantics of assignment is defined using Assign, the counterpart of āssign♯ . Similarly to the abstract
interpreter, to evaluate conditional branchings, the first branch is evaluated assuming the condition is
true using Assert and the second branch is evaluated assuming the opposite. The results are then merged
to obtain all the possible states that can be reached.
We first start by proving that the collecting semantics is sound with regard to the tainting semantics.
Theorem 4. For any program p and environment σ , σ , p →∗ σ ′ , skip =⇒ σ ′ ∈ ❏p❑({σ }).
Fig. 8. Definition of the collecting semantics ❏·❑(·).
Proof. This is a fairly standard proof in abstract interpretation. As the theorem statement does not directly fit well with induction, we first start by proving the following more general lemma:

∀p, σ , σ ′ , Σ, σ ∈ Σ =⇒ σ , p →∗ σ ′ , skip =⇒ σ ′ ∈ ❏p❑(Σ)

The proof is by induction on p.
• If p = skip, it is trivially true.
• If p = ∗e1 ← e2 or p = x ← ∗e or p = x ← e, it is true by definition of Store, Load, Assign and
by definition of the tainting semantics.
• If p = p1 ; p2 , then there exists σ ′′ such that σ , p1 →∗ σ ′′ , skip and σ ′′ , p2 →∗ σ ′ , skip.
By induction hypothesis on the first execution, we obtain that σ ′′ ∈ ❏p1 ❑(Σ). Combining this
with the induction hypothesis on the second execution allows us to conclude that σ ′ ∈
❏p2 ❑(❏p1 ❑(Σ)) = ❏p1 ; p2 ❑(Σ) = ❏p❑(Σ).
• If p = if e then p1 else p2 , then either σ (e) = true and σ , p1 →∗ σ ′ , skip, or
σ (e) = false and σ , p2 →∗ σ ′ , skip. In the first case, σ ∈ Assert(Σ, e) and in the latter,
σ ∈ Assert(Σ, not e), which allows us to conclude in both cases by using the induction hypothesis.
• If p = while e do p, then we know that σ ′ (e) = false. Furthermore, we remark that for all
σ ′′ such that σ , while e do p →∗ σ ′′ , while e do p, σ ′′ ∈ I by definition of I. Thus,
σ ′ ∈ I and since σ ′ (e) = false, σ ′ ∈ Assert(I, not e).
The lemma is thus proven, and the theorem is a direct consequence of it.
The regular semantics also has a collecting semantics with the operators Assign, Store, Load, Assert
and a corresponding soundness theorem that we will not detail. The operators are defined as follows:

Assign(Σ, x, e) = {σ [x → v] | ∃v ∈ V, σ (e) = v ∧ σ ∈ Σ}
Store(Σ, e1 , e2 ) = {σ [l → v] | ∃l ∈ L, v ∈ V, σ (e1 ) = l ∧ σ (e2 ) = v ∧ σ ∈ Σ}
Load(Σ, x, e) = {σ [x → v] | ∃l ∈ L, v ∈ V, σ (e) = l ∧ σ (l) = v ∧ σ ∈ Σ}
Assert(Σ, e) = {σ ∈ Σ | σ (e) = true}
Finally, we also need to introduce the concept of concretization to state and prove the correctness of
our abstract interpreter. We already introduced concretize♯ , which is actually a concretization
function. We will rename it γV♯ , as γ is the usual name for a concretization function in abstract interpretation. We use v ∈ γV♯ (v ♯ ) to say that v is in the concretization of abstract value v ♯ , which means that v ♯
represents a set of concrete values of which v is a member.
The abstract memory domain M♯ also provides a concretization function γM♯ : M♯ → P (M), which is
used to state the correctness of the assign♯ , store♯ , load♯ and assert♯ operators:

Assign(γM♯ (σ ♯ ), x, e) ⊆ γM♯ (assign♯ (σ ♯ , x, e))
Store(γM♯ (σ ♯ ), e1 , e2 ) ⊆ γM♯ (store♯ (σ ♯ , e1 , e2 ))
Load(γM♯ (σ ♯ ), x, e) ⊆ γM♯ (load♯ (σ ♯ , x, e))
Assert(γM♯ (σ ♯ ), e) ⊆ γM♯ (assert♯ (σ ♯ , e))
We now need to define γT♯ : T♯ → P (T) and γM̄♯ : M̄♯ → P (M).
The first one is simple: γT♯ (L♯ ) = {L} and γT♯ (H♯ ) = {L, H}. L♯ corresponds to values that we know
are necessarily public data, while H♯ corresponds to values that we only know may depend on secrets.
Now, we define γM̄♯ :

γM̄♯ (σ ♯ ) = {σ | E ◦ σ ∈ γM♯ (E (σ ♯ )) ∧ ∀r, T (σ (r)) ∈ γT♯ (T (σ ♯ )(r))}

This means that an environment σ is in the concretization of σ ♯ if its erasure E ◦ σ belongs to
γM♯ (E (σ ♯ )) and if T (σ (r)) ∈ γT♯ (T (σ ♯ )(r)) for every location or variable r.
We now need to prove the correctness of the āssign♯ , s̄tore♯ , l̄oad♯ and āssert♯ operators:

Assign(γM̄♯ (σ ♯ ), x, e) ⊆ γM̄♯ (āssign♯ (σ ♯ , x, e))
Store(γM̄♯ (σ ♯ ), e1 , e2 ) ⊆ γM̄♯ (s̄tore♯ (σ ♯ , e1 , e2 ))
Load(γM̄♯ (σ ♯ ), x, e) ⊆ γM̄♯ (l̄oad♯ (σ ♯ , x, e))
Assert(γM̄♯ (σ ♯ ), e) ⊆ γM̄♯ (āssert♯ (σ ♯ , e))
Proof. We need to prove that for all σ ∈ Assign(γM̄♯ (σ ♯ ), x, e), we have σ ∈ γM̄♯ (āssign♯ (σ ♯ , x, e)). We first
define E (Σ) = {E ◦ σ | σ ∈ Σ} for all Σ ∈ P (M). We then notice that E (Assign(γM̄♯ (σ ♯ ), x, e)) =
Assign(γM♯ (E (σ ♯ )), x, e) by definition.
Then, by correctness of assign♯ , we have that Assign(γM♯ (E (σ ♯ )), x, e) ⊆ γM♯ (assign♯ (E (σ ♯ ), x, e)).
And by definition of āssign♯ , we have that E (āssign♯ (σ ♯ , x, e)) = assign♯ (E (σ ♯ ), x, e). Thus,
γM♯ (E (āssign♯ (σ ♯ , x, e))) = γM♯ (assign♯ (E (σ ♯ ), x, e)), which implies that E (Assign(γM̄♯ (σ ♯ ), x, e)) ⊆
γM♯ (E (āssign♯ (σ ♯ , x, e))), and therefore E ◦ σ ∈ γM♯ (E (āssign♯ (σ ♯ , x, e))).
It is then left to prove that for all r, T (σ (r)) ∈ γT♯ (T (āssign♯ (σ ♯ , x, e))(r)). By definition of āssign♯ ,
T (āssign♯ (σ ♯ , x, e)) = T (σ ♯ )[x → taint♯ (σ ♯ , e)]. By definition of Assign, we know that there exists
σ 1 ∈ γM̄♯ (σ ♯ ) such that σ = σ 1 [x → (v, t)] with σ 1 (e) = (v, t).
The correctness of taint♯ can easily be proven by induction on e:

σ ∈ γM̄♯ (σ ♯ ) =⇒ T (σ (e)) ∈ γT♯ (taint♯ (σ ♯ , e))

By exploiting this lemma, the correctness of āssign♯ is thus proven. The correctness of the other operators is proven similarly.
The following theorem, which states the correctness of the abstract analyzer with regard to the collecting semantics, can now be proven.
Theorem 5. For all abstract environments σ ♯ and programs p,

❏p❑(γM̄♯ (σ ♯ )) ⊆ γM̄♯ (❏p❑♯ (σ ♯ ))
Proof. We first remark that ❏p❑ is a monotone function, i.e., Σ1 ⊆ Σ2 =⇒ ❏p❑(Σ1 ) ⊆ ❏p❑(Σ2 ); this is
proven by induction on p. The theorem is also proven by induction on p. We have that:
• if p = skip, it is trivially true;
• if p = ∗e1 ← e2 or p = x ← e or p = x ← ∗e, it is a direct consequence of the correctness of the
corresponding operators;
• if p = p1 ; p2 , we have ❏p1 ❑(γM̄♯ (σ ♯ )) ⊆ γM̄♯ (❏p1 ❑♯ (σ ♯ )) by induction hypothesis on p1 and
❏p2 ❑(γM̄♯ (❏p1 ❑♯ (σ ♯ ))) ⊆ γM̄♯ (❏p2 ❑♯ (❏p1 ❑♯ (σ ♯ ))) on p2 . And by monotony of ❏p2 ❑, we have
❏p1 ; p2 ❑(γM̄♯ (σ ♯ )) = ❏p2 ❑(❏p1 ❑(γM̄♯ (σ ♯ ))) ⊆ ❏p2 ❑(γM̄♯ (❏p1 ❑♯ (σ ♯ ))) ⊆ γM̄♯ (❏p2 ❑♯ (❏p1 ❑♯ (σ ♯ ))) =
γM̄♯ (❏p1 ; p2 ❑♯ (σ ♯ )), which is what we needed to prove;
• if p = if e then p1 else p2 , it is a consequence of the correctness of āssert♯ ;
• if p = while e do p, it is a consequence of the correctness of pfp with regard to the invariant,
and the correctness of āssert♯ .
The theorem is thus proven.
This theorem intuitively means that the abstract analyzer is correct with regard to the collecting
semantics: if γM̄♯ (❏p❑♯ (σ ♯ )) is empty, then ❏p❑(γM̄♯ (σ ♯ )) must necessarily be empty too, and thus the
execution is stuck with regard to the collecting semantics.
Finally, combining Theorems 4 and 5, the following correctness theorem is a direct consequence:
Theorem 6. For all programs p, environments σ and abstract environments σ ♯ such that σ ∈ γM̄♯ (σ ♯ ), if
we have the execution σ , p →∗ σ ′ , skip, then we also have σ ′ ∈ γM̄♯ (❏p❑♯ (σ ♯ )).
This is the main correctness theorem of the abstract interpreter. It ensures that we compute correct
over-approximations of reachable states in the tainting semantics. We can then safely perform abstract
tests on the program to check that no execution of the tainting semantics may reach a stuck configuration.
Concretely, the analyzer may fail or raise alarms during the analysis. For instance, when analyzing if (e)
..., it may raise an alarm to say that e may potentially depend on a secret (i.e., it has a high taint)
at this program point. Hence, we can conclude that if no alarm is raised, then the program is safe with
regard to the tainting semantics and is thus constant-time.
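A minimal (hypothetical) program on which the analyzer raises such an alarm is a branch on a secret-tainted value:

```c
#include <assert.h>

/* Branching on a secret: the tainting semantics gets stuck on the
   condition (its taint is H), and the abstract analyzer raises an
   alarm there, since the condition's abstract taint is H#. */
int leaky(int secret /* taint H */, int x) {
    if (secret > 0)   /* condition may depend on a secret: alarm */
        return x + 1;
    return x;
}
```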
Finally, Fig. 9 summarizes the relationships between the different semantics and the theorems that
link them.
Fig. 9. Diagram relating the different semantics.
4. Implementation and experiments
Following the methodology presented in Section 3, we have implemented a prototype leveraging the
Verasco static analyzer. We have evaluated our prototype by verifying multiple real-world constant-time
C implementations taken from different cryptographic libraries such as NaCl [12], mbedTLS
[36], Open Quantum Safe [16] and curve25519-donna [24]. The implementation is available at the
following address: http://www.irisa.fr/celtique/ext/esorics17/.
In order to use our tool, the user simply has to indicate which variables are to be considered as secrets
and the prototype will either raise alarms indicating where secrets may leak, or indicate that the input
program is constant time. The user can either indicate a whole global variable to be considered as secret
at the start of the program, or use the verasco_any_int_secret built-in function to produce a
random signed integer to be considered as secret.
The While language we presented differs in a few ways from the C#minor language of CompCert
that we analyze using Verasco. First, C#minor allows more constructs, such as switch, and does not
use while loops, but infinite loops that must be exited using a break statement. Secondly, C#minor
expressions can contain memory reads, whereas our While language models a memory load as a statement. However, this is only a slight difference, as C#minor programs such as x = ∗y + ∗z are already
transformed into x1 = ∗y; x2 = ∗z; x = x1 + x2 by Verasco in order to improve the precision of the
analysis.
4.1. Memory separation
By leveraging Verasco, the prototype handles difficult issues such as memory
separation. For example, the small program of Fig. 10 is easily proven constant time. In this program,
an array t is initialized with random values, such that the values at odd offsets are considered secret,
contrary to the values at even offsets. The analyzer thus needs to be precise enough to distinguish between the
array cells and to take pointer arithmetic into account. The potential leak happens on line 6. However,
the condition on line 5 constrains i%2 == 0 to be true, and thus i must be even on line 6, so t[i]
does not contain a secret. A naive analyzer would taint the whole array as secret and would thus not be
able to prove the program constant time; our prototype, however, proves it without difficulty.

Fig. 10. An example program that is analyzed as constant time.
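A program in the spirit of Fig. 10 (our own reconstruction of the described pattern, not the exact figure; the secret source is a stand-in for Verasco's verasco_any_int_secret built-in) looks as follows:

```c
#include <assert.h>

/* Stand-in for Verasco's verasco_any_int_secret built-in, which
   produces a signed integer considered secret by the analysis. */
static int secret_source(void) { return 1234; }

int t[8];

/* Odd cells of t receive secret values, even cells public ones. */
void init(void) {
    for (int i = 0; i < 8; i++)
        t[i] = (i % 2 == 1) ? secret_source() : i;
}

int count_positive_public(void) {
    int s = 0;
    for (int i = 0; i < 8; i++)
        if (i % 2 == 0)     /* constrains i to be even (cf. line 5) */
            if (t[i] > 0)   /* branch on a public cell only (line 6) */
                s++;
    return s;
}
```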
Interestingly, an illustration of the problem can be found in real-world programs. For example, the
NaCl implementation of SHA-256 is not handled by [3] for this very reason. In this program, the hashing
function uses a C struct as an internal state that contains both secret and public values
during execution.
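The internal state is, in essence, the classical SHA-256 context. The following sketch is based on the three fields named in the surrounding discussion; the exact field types and sizes are our assumption, not a verbatim copy of the NaCl code:

```c
#include <stdint.h>
#include <assert.h>

/* Sketch of the SHA-256 internal state: state and buf may become
   secret-dependent during hashing, while count only tracks the
   (public) number of processed message bits. */
typedef struct {
    uint32_t state[8];   /* chaining values: may depend on secrets  */
    uint32_t count[2];   /* message bit length: public              */
    uint8_t  buf[64];    /* pending input block: may depend on secrets */
} sha256_ctx;
```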
The hashing function first initializes the internal state with constant values and then updates it using
the input value in, which is considered secret since it may, for instance, be a password that a user is trying to hash.
Both fields state and buf may contain secret-dependent values as a result of the update. Last,
crypto_hash_sha256_final contains a conditional branching that depends on the count field
of the internal state: if ((state->count[1] += bitlen[1]) < bitlen[1]). However,
the whole internal state struct is allocated as a single memory block at low level (i.e., LLVM), so
[3] does not manage to prove the memory separation and thus cannot ensure that the hashing function is
secure.
Table 1
Verification of cryptographic primitives

Example            Size    Loc    Time      Result
aes                1171    1399    41.39    ✗
curve25519-donna   1210     608   586.20    ✓
des                 229     436     2.28    ✗
rlwe_sample         145    1142    30.76    ✓
salsa20             341     652     5.34    ✓
sha3                531     251    57.62    ✓
snow                871     460     4.37    ✓
tea                 121     109     3.47    ✓

bear_aes_ct         803     766     1.97    ✓
bear_des_ct         454     560     2.54    ✓
bear_sha1           243     197     2.45    ✓
bear_sha256         259     329     2.83    ✓

nacl_chacha20       384     307     0.34    ✓
nacl_sha256         368     287     1.85    ✓

mbedtls_sha1        544     354     0.33    ✓
mbedtls_sha256      346     346     0.62    ✓
mbedtls_sha512      310     399     0.58    ✓

mee-cbc            1959     939   933.37    ✓
4.2. Cryptographic algorithms
We report in Table 1 our results on a set of cryptographic algorithms. All execution times reported
were obtained on a 3.1 GHz Intel i7 with 16 GB of RAM. Sizes are reported in numbers of
C#minor statements (i.e., close to C statements), lines of code are measured with cloc, and execution
times are reported in seconds. The last column indicates whether the analysis managed to prove the
corresponding program secure.
The first block of lines gathers test cases for the implementations of a representative set of cryptographic primitives, including TEA [47], an implementation of sampling in a discrete Gaussian distribution by Bos et al. [16] (rlwe_sample) taken from the Open Quantum Safe library [40], an implementation of elliptic curve arithmetic operations over Curve25519 [11] by Langley [24] (curve25519-donna), and various primitives such as AES, DES, etc. The second block reports on implementations
from the BearSSL library [9]. The third block reports on different implementations from the NaCl library [12]. The fourth block reports on implementations from the mbedTLS library [36]. Finally, the last
result corresponds to an implementation of MAC-then-Encode-then-CBC-Encrypt (MEE-CBC).
All the analyses are done using the interval abstract domain to track numerical values, as it is sufficient
in our case and is the fastest. Prior to analysis, the programs are slightly transformed using the funload option of Verasco, which lifts loads out of expressions. For instance, x = *(p + 3) + 4 is
transformed into y = *(p + 3); x = y + 4 where y is a fresh variable. This allows the analysis
to be more precise. Furthermore, Verasco is not able to compute a precise enough loop invariant for some
of the programs; we thus instruct Verasco to unroll these loops. Some of the timings differ from the
results reported in the conference version due to a bug in the modification made to Verasco which caused
it to silently fail. Unfortunately, nacl_sha512 was erroneously reported as verified in the conference
version.
All these examples are proven constant time, except for AES and DES, which both make use of lookup
tables. Our prototype rightfully reports memory accesses depending on secrets, so these two programs
are not constant time. Similarly to [3], rlwe_sample is only proven constant time provided that the
core random generator is also constant time, thus showing that it is the only possible source of leakage.
The last example, mee-cbc, is a full implementation of the MEE-CBC construction using low-level
primitives taken from the NaCl library. Our prototype is able to verify the constant-time property of this
example, showing that it scales to large code bases (939 loc).
Our prototype is able to verify a similar number of programs to [3], except for a constant-time fixed-point
operations library named libfixedtimefixedpoint [4], which unfortunately does not use
standard C and is not handled by CompCert. The library makes extensive use of a GNU extension known as
statement expressions and would require heavy rewriting to be accepted by our tool.
On the other hand, our tool shows its agility with memory separation on the SHA-256 program that
was out of reach for [3] and its restricted alias management. In terms of analysis time, our tool behaves
similarly to [3]: on a similar experimental platform, we observe a speedup between 0.1 and 10. This is
very encouraging for our tool, whose efficiency still has room for improvement, compared to the tool of
[3] that relies on decades of implementation effort behind the LLVM optimizer and the Boogie verifier.
5. Related work
This paper deals with static program verification for information-flow tracking [44]. Different formal
techniques have been used in this area. The type-based approach [39] provides an elegant modular
verification approach but requires program annotations, especially for each program function. Because
the same function can be called in very different security contexts, providing an expressive annotation
language is challenging and annotating programs is a difficult task. This approach has been mainly
proposed for programming languages with strong typing guarantees such as Java [39] and ML [41]. The
deductive approach [21] is based on logics more expressive than type systems and thus allows expressing
subtle program invariants. On the other hand, the loop invariant annotation effort requires strong formal
methods expertise and is very time-consuming.
The static analysis approach only requires minimal annotations (the input taints) and then tries to infer
all the remaining invariants in the restricted analysis logic. This approach has been followed to
efficiently track implicit flows using program dependence graphs [29,43]. We also follow a static approach, but
our backbone technique is an advanced value analysis for C, which we use to infer fine-grained memory
separation properties and finely track taints in an unfolded call graph of the program. Building a program
dependence graph for memory is a well-known challenge, and scaling this approach to a Verasco (or
Astrée) memory analysis is left for future work.
This paper deals, however, with a restricted notion of information flow: constant-time security. Here,
implicit flow tracking is simplified since we must reject¹ control-flow branching that depends on secret
inputs.

¹ We could accept some of them if we were able to prove that all branches provide a similar timing behavior.

Our abstract interpretation approach pairs a taint analysis with a powerful value
analysis. The tool tis-ct [46] uses a similar approach, but based on the Frama-C value analysis instead
of Verasco (and its Astrée architecture). The tool is developed by the TrustInSoft company and is not
associated with any scientific publication. It has been used to analyze OpenSSL. The Frama-C and Verasco
value analyses are based on different abstract interpretation techniques, and thus the tainting power of
tis-ct and our tool will differ. As an example of difference, Verasco provides relational abstraction of
memory contents, while tis-ct is restricted to non-relational analyses (like intervals). CacheAudit [25] is
also based on abstract interpretation but analyzes cache leakage at the binary level. Analyzing programs at
this low level hampers the inference capabilities for memory separation, because the memory is seen as a
single memory block. Verasco benefits from a source-level view where each function has its own region
for managing local variables.
In a previous work of the second author [8], C programs were compiled by CompCert to an abstraction
of assembly before being analyzed. A simple data-flow analysis was then performed, flow-insensitive for
every memory block except the memory stack, and constant-time security was verified. The precision
of this approach required fully inlining the program before analysis: every function call
was replaced by its function body until no function call remained. This has a serious impact on the
efficiency of the analysis, and a program like curve25519-donna was out of reach. The treatment
of the memory stack was also very limited, since no value analysis was available at this level of program
representation. There was no way to finely taint an array's content if this array lay in the stack (which
occurs when C arrays are declared as local variables). Hence, numerous manual program rewritings were
required before analysis. Our current approach lifts these restrictions but requires more trust in the
compiler (see our discussion in the conclusion).
A very complete treatment of constant-time security has recently been proposed by the ct-verif tool
[3]. Its verification is based on a reduction of constant-time security of a program P to the safety of a product
program Q that simulates two parallel executions of P . The tool is based on the LLVM compiler and
operates at the LLVM bytecode level, after LLVM optimizations and alias analyses. Once obtained, the
bytecode program is transformed into a product program which, in turn, is verified by the Boogie verifier
[6] and its traditional SMT tool suite. In Section 4, we made a direct experimental comparison with this
tool. We list here the main design differences between this work and ours.
First, we do not perform the analysis at a similar program representation. LLVM bytecode is interesting
because one can develop analyses that benefit from the rich collection of tools provided by the LLVM
platform. For example, [3] benefits from LLVM's data-structure analysis [33] to partition memory objects
into disjoint regions. Still, compiler alias analyses are voluntarily limited because compilers translate
programs in almost linear time. Verasco (and its ancestor Astrée) follows a more ambitious approach and
tracks very finely the content of the memory. Using Verasco requires a different tool design but opens
the door to more verified programs, as for example the SHA-256 example. Second, we target a more
restricted notion of constant-time security than [3], which relaxes the property with a so-called notion of
publicly observable outputs. The extension is out of scope of our current approach but seems promising
for specific programs. Only one program in our current experiments is affected by this limitation. Last,
we embed our tool in a more foundational semantic framework. Verasco and CompCert are formally
verified. This leaves the door open for a fully verified constant-time analyzer, while a fully verified ct-verif
tool would require proving SMT solvers, the Boogie verifier and LLVM correct. Vellvm [49] is a first attempt
in the direction of verifying the LLVM platform, but it is currently restricted to a core subset (essentially
the SSA generation) of the LLVM passes, and suffers from time-performance limitations.
Constant-time security is part of the larger field of high-assurance cryptography [7], which is a fertile
area that has spawned many recent projects. There are two broad categories of methods for ensuring high
assurance: either the code is formally verified using a proof assistant such as Coq or F*, or it is verified using
automatic tools such as Boogie. Each method has its own drawbacks: the former usually requires a highly
experienced user, while the latter is automatic but needs to trust unverified and
non-trivial tools such as SMT solvers. Our work places itself in the first category.
Vale [15] is a tool for producing verified cryptographic assembly code. Users write code in the Vale
language which is similar to assembly, and then add a functional specification of the code in Dafny
[34], an automatic program verifier. The tool then automatically verifies that the code complies with the
specification by using SMT solvers such as Z3 [22]. The authors also implemented a verified analyzer
to ensure the absence of timing and cache-based side channels. However, like [3], their analyzer does not
handle memory separation problems well and has to rely on handwritten annotations from the user. For
instance, they also cannot automatically handle the same SHA-256 example as [3].
HACL* [51] is a formally verified C cryptographic library. Similarly to [15], the library was created
by first writing cryptographic code in the proof assistant F* [45]. The code is then verified for functional
correctness, memory safety and freedom from timing side channels. However, unlike [15], proofs are entirely manual and thus require an
experienced user. Their work tackles both functional correctness and constant-time security of cryptographic programs,
whereas ours is automatic but only deals with constant-time security. Furthermore, as F* is a high-level language, it is more
difficult to make sure that these properties are preserved during compilation, while our work focuses on
C#minor, a language closer to assembly than F*.
Jasmin [2] is a formally-verified compiler from the Jasmin language down to assembly. The Jasmin
language is a small low-level language similar to Bernstein’s qhasm that also supports function calls
and high-level control-flow constructs such as loops. The authors have implemented a sound embedding
of Jasmin into Dafny and users can thus automatically prove memory safety and constant-time security
of their Jasmin programs using SMT solvers. Constant-time security is proven using product programs
similarly to the ct-verif tool [3]. However, the authors do not mention whether they suffer from the same
memory separation issues.
FaCT [18] proposes a domain-specific language (DSL) to replace C, as C is very prone to errors that
can enable side channels. Their DSL can basically be seen as C enhanced with new annotations for
expressing security levels, such as which inputs are to be considered secret or public. The language also
contains new instructions that map directly to useful hardware instructions, such as add-with-carry, that are
rarely produced by general-purpose compilers. They use the Z3 SMT solver to prove memory safety
of code written in this new language. Furthermore, as secret and public annotations are built into the
language, they can adjust the compiler to take advantage of constant-time aware optimizations.
Finally, as the tool is built upon LLVM, they can use the ct-verif tool [3] to verify that the generated
code is secure, but thus suffer from the same limitations.
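Add-with-carry, the hardware instruction FaCT exposes as a language construct, can only be expressed indirectly in portable C. A minimal sketch (hypothetical helper names, not FaCT's actual syntax) of the widening-arithmetic idiom that compilers must recognize to emit the instruction:

```c
#include <stdint.h>

/* One add-with-carry step, expressed via 64-bit widening arithmetic.
   FaCT provides this directly; in C the compiler must pattern-match it. */
static inline uint32_t addc32(uint32_t a, uint32_t b, uint32_t carry_in,
                              uint32_t *carry_out) {
    uint64_t wide = (uint64_t)a + (uint64_t)b + (uint64_t)carry_in;
    *carry_out = (uint32_t)(wide >> 32);  /* carry out is 0 or 1 */
    return (uint32_t)wide;                /* low 32 bits of the sum */
}

/* Multi-limb addition r = x + y over n limbs (little-endian limb order),
   the typical use of a carry chain in big-number cryptographic code. */
static uint32_t add_limbs(uint32_t *r, const uint32_t *x,
                          const uint32_t *y, int n) {
    uint32_t carry = 0;
    for (int i = 0; i < n; i++)
        r[i] = addc32(x[i], y[i], carry, &carry);
    return carry;  /* final carry out of the most significant limb */
}
```

Note that the loop bound and the carry propagation depend only on public data (the limb count), so the sketch follows the constant-time discipline discussed throughout this paper.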
Fiat-crypto [26] is a formally verified compiler specifically optimized for generating efficient elliptic-curve code used in cryptography. However, the proven properties only concern functional
correctness. The compilation results in straight-line code, so they do not have to worry about secret-dependent branching, but only about secret-dependent memory accesses. The resulting code is thus not
entirely proven constant-time.
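Straight-line code avoids secret-dependent branching by replacing conditionals with arithmetic selection. A minimal C sketch of this standard idiom (a generic illustration, not Fiat-crypto's actual output):

```c
#include <stdint.h>

/* Constant-time select: returns a if cond == 1, b if cond == 0,
   with no branch on the (possibly secret) condition.
   Precondition: cond is exactly 0 or 1. */
static inline uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)0 - cond;  /* all-ones if cond == 1, zero if cond == 0 */
    return (a & mask) | (b & ~mask);
}
```

An `if (cond) r = a; else r = b;` written this way executes the same instruction sequence regardless of the secret condition, which is exactly the property that straight-line generated code obtains by construction.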
In a series of publications [5,10,48], the authors leverage the Verified Software Toolchain and CompCert to prove the functional correctness of an assembly implementation of SHA-256 and an implementation
of HMAC with SHA-256, as well as the functional correctness and cryptographic security of an implementation of HMAC-DRBG. However, they do not prove anything about side-channel resistance.
Other approaches rely on dynamic analysis (e.g., [20], which extends Valgrind in order to check
constant-address security) or on statistical analysis of execution timings [42]. These approaches are not
sound, contrary to our approach.
6. Conclusion
In this paper, we presented a methodology to ensure that a software implementation is constant-time.
Our methodology is based on advanced abstract interpretation techniques and scales to commonly used
cryptographic libraries. Our implementation sits in a rich foundational semantic framework, Verasco
and CompCert, which gives strong semantic guarantees. The analysis is performed at source level and
can hence give useful feedback to programmers who need to understand why their program is not
constant-time.
There are multiple possible directions for future work. The first one concerns semantic soundness. By
inspecting CompCert transformation passes, we conjecture that they preserve the constant-time property
of source programs that have been successfully analyzed. Informally, no conditional branching is added
during compilation. Furthermore, memory accesses can only be added due to spilling during register
allocation; these spilled variables, however, are allocated at constant offsets on the stack and thus cannot
depend on secrets. We leave a formal proof of this conjecture as future work.
A second direction concerns expressiveness. In order to verify more relaxed properties, we could try to
mix the product-program approach of [3] with the Verasco analysis. The current loop invariant inference
and analysis of [3] are rather restricted. Using advanced alias analysis and relational numeric analysis
could strengthen the product-program approach, if it were performed at the same representation level as
Verasco.
Another possible direction is to also verify that multiplications and divisions are not secret-dependent,
as on some platforms their execution times may differ depending on the data they operate on.
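To illustrate the concern: on some processors the integer divider has operand-dependent latency, so even a branch-free division can leak. When the divisor is a public power of two, a shift gives the same result for unsigned operands with data-independent timing. A hedged C sketch (hypothetical helper names, assuming such a variable-latency divider):

```c
#include <stdint.h>

/* May be compiled to a hardware divide instruction whose latency can
   depend on the operand values on some platforms. */
static inline uint32_t div_by_8_variable(uint32_t secret) {
    return secret / 8u;
}

/* Equivalent for unsigned operands: a single shift, whose timing does
   not depend on the (secret) dividend. */
static inline uint32_t div_by_8_ct(uint32_t secret) {
    return secret >> 3;
}
```

An analysis extended in this direction would flag the first form when `secret` is tainted, while accepting the second.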
References
[1] O. Aciiçmez, Ç.K. Koç and J.-P. Seifert, On the power of simple branch prediction analysis, in: ACM Symposium on
Information, Computer and Communications Security, ACM, 2007, pp. 312–320.
[2] J. Almeida, M. Barbosa, G. Barthe, A. Blot, B. Grégoire, V. Laporte, T. Oliveira, H. Pacheco, B. Schmidt and P.-Y. Strub,
Jasmin: High-assurance and high-speed cryptography, in: CCS 2017 – Proceedings of the 2017 ACM SIGSAC Conference
on Computer and Communications Security, 2017.
[3] J.B. Almeida, M. Barbosa, G. Barthe, F. Dupressoir and M. Emmi, Verifying constant-time implementations, in: 25th
USENIX Security Symposium, USENIX Security 16, Austin, TX, USA, August 10–12, 2016, 2016, pp. 53–70.
[4] M. Andrysco, D. Kohlbrenner, K. Mowery, R. Jhala, S. Lerner and H. Shacham, On subnormal floating point and abnormal timing, in: Proceedings of the 2015 IEEE Symposium on Security and Privacy, SP ’15, IEEE Computer Society,
Washington, DC, USA, 2015, pp. 623–639. ISBN 978-1-4673-6949-7. doi:10.1109/SP.2015.44.
[5] A.W. Appel, Verification of a cryptographic primitive: SHA-256, ACM Transactions on Programming Languages and
Systems (TOPLAS) 37(2) (2015), 7. doi:10.1145/2701415.
[6] M. Barnett, B.E. Chang, R. DeLine, B. Jacobs and K.R.M. Leino, Boogie: A modular reusable verifier for object-oriented
programs, in: Proc. of FMCO 2005, 2005, pp. 364–387.
[7] G. Barthe, High-assurance cryptography: Cryptographic software we can trust, IEEE Security & Privacy 13(5) (2015),
86–89. doi:10.1109/MSP.2015.112.
[8] G. Barthe, G. Betarte, J.D. Campo, C.D. Luna and D. Pichardie, System-level non-interference for constant-time cryptography, in: ACM SIGSAC Conference on Computer and Communications Security, 2014, pp. 1267–1279.
[9] BearSSL, https://www.bearssl.org/.
[10] L. Beringer, A. Petcher, K.Q. Ye and A.W. Appel, Verified correctness and security of OpenSSL HMAC, in:
USENIX Security Symposium, 2015, pp. 207–221.
[11] D.J. Bernstein, Curve25519: New Diffie-Hellman speed records, in: Public Key Cryptography – PKC 2006, M. Yung, Y. Dodis, A. Kiayias and T. Malkin, eds,
Springer, Berlin, Heidelberg, 2006, pp. 207–228. ISBN 978-3-540-33852-9.
[12] D.J. Bernstein, T. Lange and P. Schwabe, The security impact of a new cryptographic library, in: International Conference
on Cryptology and Information Security in Latin America, Springer, 2012, pp. 159–176.
[13] B. Blanchet, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, D. Monniaux and X. Rival, A static analyzer for large
safety-critical software, in: Programming Language Design and Implementation, ACM, 2003, pp. 196–207.
[14] S. Blazy, V. Laporte and D. Pichardie, An abstract memory functor for verified C static analyzers, in: Proc. of the 21st
ACM SIGPLAN Int. Conference on Functional Programming (ICFP 2016), ACM, 2016.
[15] B. Bond, C. Hawblitzel, M. Kapritsos, K.R.M. Leino, J.R. Lorch, B. Parno, A. Rane, S. Setty and L. Thompson, Vale:
Verifying high-performance cryptographic assembly code, in: Proceedings of the USENIX Security Symposium, 2017.
[16] J.W. Bos, C. Costello, M. Naehrig and D. Stebila, Post-quantum key exchange for the TLS protocol from the ring learning
with errors problem, in: 2015 IEEE Symposium on Security and Privacy, SP 2015, San Jose, CA, USA, May 17–21, 2015,
2015, pp. 553–570.
[17] B. Canvel, A. Hiltgen, S. Vaudenay and M. Vuagnoux, Password interception in a SSL/TLS channel, in: CRYPTO, LNCS,
Vol. 2729, Springer, 2003, pp. 583–599.
[18] S. Cauligi, G. Soeller, F. Brown, B. Johannesmeyer, Y. Huang, R. Jhala and D. Stefan, FaCT: A flexible, constant-time programming language, in: Cybersecurity Development (SecDev), 2017 IEEE, IEEE, 2017, pp. 69–76. doi:10.1109/SecDev.
2017.24.
[19] P. Cousot and R. Cousot, Abstract interpretation: A unified lattice model for static analysis of programs by construction
or approximation of fixpoints, in: Conference Record of the Fourth ACM Symposium on Principles of Programming
Languages, ACM, 1977, pp. 238–252.
[20] ctgrind, https://github.com/agl/ctgrind.
[21] Á. Darvas, R. Hähnle and D. Sands, A theorem proving approach to analysis of secure information flow, in: Proc. 2nd International Conference on Security in Pervasive Computing, D. Hutter and M. Ullmann, eds, LNCS, Vol. 3450, Springer-Verlag, 2005, pp. 193–209. doi:10.1007/978-3-540-32004-3_20.
[22] L. De Moura and N. Bjørner, Z3: An efficient SMT solver, in: International Conference on Tools and Algorithms for the
Construction and Analysis of Systems, Springer, 2008, pp. 337–340.
[23] D.E. Denning, A lattice model of secure information flow, Commun. ACM 19(5) (1976), 236–243. doi:10.1145/360051.
360056.
[24] donna, https://code.google.com/archive/p/curve25519-donna.
[25] G. Doychev, D. Feld, B. Köpf, L. Mauborgne and J. Reineke, CacheAudit: A tool for the static analysis of cache side
channels, in: USENIX Conference on Security, USENIX Association, Berkeley, CA, USA, 2013, pp. 431–446.
[26] A. Erbsen, J. Philipoom, J. Gross, R. Sloan and A. Chlipala, Simple high-level code for cryptographic arithmetic – With
proofs, without compromises, in: Proceedings of the 40th IEEE Symposium on Security and Privacy (S&P’19), 2019.
[27] N.J.A. Fardan and K.G. Paterson, Lucky thirteen: Breaking the TLS and DTLS record protocols, in: Symp. on Security
and Privacy (SP 2013), IEEE Computer Society, 2013, pp. 526–540. doi:10.1109/SP.2013.42.
[28] J. Feret, Static analysis of digital filters, in: European Symposium on Programming (ESOP’04), LNCS, Vol. 2986,
Springer, 2004.
[29] C. Hammer and G. Snelting, Flow-sensitive, context-sensitive, and object-sensitive information flow control based on
program dependence graphs, Int. J. Inf. Sec. 8(6) (2009), 399–422. doi:10.1007/s10207-009-0086-1.
[30] D. Hedin and A. Sabelfeld, A perspective on information-flow control, in: Software Safety and Security – Tools for
Analysis and Verification, IOS Press, 2012, pp. 319–347.
[31] J.-H. Jourdan, V. Laporte, S. Blazy, X. Leroy and D. Pichardie, A formally-verified C static analyzer, in: Proc. of the 42nd
ACM SIGPLAN–SIGACT Symposium on Principles of Programming Languages, POPL 2015, ACM, 2015, pp. 247–259.
[32] P. Kocher, Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems, in: Advances in Cryptology – CRYPTO ’96, LNCS, Vol. 1109, Springer, 1996, pp. 104–113.
[33] C. Lattner, A. Lenharth and V.S. Adve, Making context-sensitive points-to analysis with heap cloning practical for the
real world, in: Proc. of PLDI’07, 2007, pp. 278–289. doi:10.1145/1250734.1250766.
[34] K.R.M. Leino, Dafny: An automatic program verifier for functional correctness, in: International Conference on Logic for
Programming Artificial Intelligence and Reasoning, Springer, 2010, pp. 348–370. doi:10.1007/978-3-642-17511-4_20.
[35] X. Leroy, Formal verification of a realistic compiler, Communications of the ACM 52(7) (2009), 107–115. doi:10.1145/
1538788.1538814.
[36] mbed TLS (formerly known as PolarSSL), https://tls.mbed.org/.
[37] A. Miné, The octagon abstract domain, Higher-Order and Symbolic Computation 19(1) (2006), 31–100. doi:10.1007/
s10990-006-8609-1.
[38] A. Miné, Field-sensitive value analysis of embedded C programs with union types and pointer arithmetics, in: Proc. of the
ACM SIGPLAN–SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES’06), ACM,
2006, pp. 54–63.
[39] A.C. Myers, JFlow: Practical mostly-static information flow control, in: Proc. of POPL’99, 1999, pp. 228–241. doi:10.
1145/292540.292561.
[40] Open Quantum Safe, https://openquantumsafe.org/.
[41] F. Pottier and V. Simonet, Information flow inference for ML, ACM Trans. Program. Lang. Syst. 25(1) (2003), 117–158.
doi:10.1145/596980.596983.
[42] O. Reparaz, J. Balasch and I. Verbauwhede, Dude, is my code constant time?, in: Proc. of DATE 2017, 2017.
[43] B. Rodrigues, F.M. Quintão Pereira and D.F. Aranha, Sparse representation of implicit flows with applications to side-channel detection, in: Compiler Construction, ACM, 2016, pp. 110–120.
[44] A. Sabelfeld and A.C. Myers, Language-based information-flow security, IEEE Journal on Selected Areas in Communications 21(1) (2003), 5–19. doi:10.1109/JSAC.2002.806121.
[45] N. Swamy, J. Chen, C. Fournet, P. Strub, K. Bhargavan and J. Yang, Secure distributed programming with value-dependent types, in: Proceedings of the 16th ACM SIGPLAN International Conference on Functional Programming, M.M.T. Chakravarty, Z. Hu and O. Danvy, eds, ACM, 2011, pp. 266–278, https://www.microsoft.com/en-us/
research/publication/secure-distributed-programming-with-value-dependent-types/. ISBN 978-1-4503-0865-6. doi:10.
1145/2034773.2034811.
[46] TIS-CT, http://trust-in-soft.com/tis-ct/.
[47] D.J. Wheeler and R.M. Needham, TEA, a tiny encryption algorithm, in: Fast Software Encryption, B. Preneel, ed., Springer, Berlin, Heidelberg, 1995,
pp. 363–366. ISBN 978-3-540-47809-6.
[48] K.Q. Ye, M. Green, N. Sanguansin, L. Beringer, A. Petcher and A.W. Appel, Verified correctness and security of mbedTLS
HMAC-DRBG, in: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS
’17, ACM, New York, NY, USA, 2017, pp. 2007–2020, http://doi.acm.org/10.1145/3133956.3133974. ISBN 978-1-4503-4946-8. doi:10.1145/3133956.3133974.
[49] J. Zhao, S. Zdancewic, S. Nagarakatte and M. Martin, Formalizing the LLVM intermediate representation for verified
program transformation, in: POPL’12, ACM, New York, NY, USA, 2012, pp. 427–440. doi:10.1145/2103656.2103709.
[50] M. Zhao and G.E. Suh, FPGA-based remote power side-channel attacks, in: 2018 IEEE Symposium on Security and
Privacy (SP), 2018, pp. 839–854. ISSN 2375-1207. doi:10.1109/SP.2018.00049.
[51] J.-K. Zinzindohoué, K. Bhargavan, J. Protzenko and B. Beurdouche, HACL*: A verified modern cryptographic library,
in: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS ’17, ACM, New
York, NY, USA, 2017, pp. 1789–1806, http://doi.acm.org/10.1145/3133956.3134043. ISBN 978-1-4503-4946-8. doi:10.
1145/3133956.3134043.