
1 Introduction

While most cryptosystems require access to a perfect source of randomness for their security, such sources are extremely difficult to obtain in practice. For this reason, concrete implementations of cryptographic schemes often use a pseudo-random number generator (PRNG). The latter allows one to generate a sequence of bits whose distribution is computationally indistinguishable from the uniform distribution, given as input a short secret random value, called the seed.

To get around the need for a truly random seed, Barak and Halevi [6] proposed a tweaked primitive, called PRNG with input, which still generates pseudo-random values but now remains secure even in the presence of a potentially biased random source. Moreover, they also proposed a new security notion, called robustness, which states that a PRNG with input should meet three security properties: resilience, forward security, and backward security. While resilience models the inability of an adversary to predict future PRNG outputs even when manipulating the entropy source, forward and backward security ensure that an adversary cannot predict past or future outputs of the PRNG even when compromising its internal state. More recently, Dodis et al. [10] extended the work of Barak and Halevi to integrate the process of entropy accumulation into the internal state. For this purpose, they refined the notion of robustness and proposed a very practical scheme satisfying it. Under the robustness security notion, an adversary can observe the inputs and outputs of a PRNG, manipulate its entropy source, and compromise its internal state.

Side-Channel Resistance for PRNGs. While the notion of robustness seems reasonably strong for practical purposes, it still does not fully consider the reality of embedded devices, which may be subject to side-channel attacks. In these attacks, an attacker can exploit the physical leakage of a device through several means such as power consumption, execution time or electromagnetic radiation. In order to consider such attacks, a first and important step was made by Micali and Reyzin [18], who proposed the framework of physically observable cryptography. In particular, they formally defined a classical assumption according to which only computation leaks information. Later, Dziembowski and Pietrzak went a step further by defining the leakage-resilient cryptography model [13]. In the latter, every computation leaks a limited amount of information whose size is bounded by some parameter \(\lambda \). It has the benefit of capturing most of the known side-channel attacks and was consequently used to build many recent primitives [14, 15, 19]. In a different direction, we should mention an important line of work initiated by Prouff and Rivain [20] and then extended by Duc et al. [12] to formally prove the security of masked implementations. In the latter works, the sensitive variables are split into different shares and the adversary needs to recover all of them to reconstruct the secret.

In the specific context of PRNGs and stream ciphers, several constructions have been proposed so far and proved secure in the leakage-resilient cryptography model (e.g., [22, 24, 25]). The work of Yu et al. [24], for instance, proposes a very efficient construction of a leakage-resilient PRNG. Likewise, the work of Standaert et al. [22] shows how to obtain very efficient constructions of leakage-resilient PRNGs by relying on empirically verifiable assumptions. None of these works, however, considers potentially biased random sources, which is our main goal here.

Our Contributions. In this paper, we aim to build a practical and robust PRNG with input that can resist side-channel attacks. Since the construction proposed by Dodis et al. [10] seems to be a good candidate, we use it as the basis of our work. In doing so, we extend its security model to include leakage resilience and we prove the whole construction secure under stronger requirements for the underlying deterministic pseudo-random generator. Since it is not obvious how to instantiate the construction to meet our stronger needs, we propose three solutions based on \(\mathsf {AES}\) in counter mode that are only slightly less efficient than the original instantiation proposed in [10]. Two of them are tweaked existing constructions and the third one is a new proposal which may be better in specific cases. All three instantiations only require that the implementation of \(\mathsf {AES}\) in counter mode is secure against Simple Power Analysis attacks, since very few calls are made with the same secret key.

Organization. On the theoretical side, we propose in Sect. 3 a new formal security model for PRNGs with input, which, in addition to encompassing all previous security notions [10], also guarantees security in the leakage-resilient cryptography model. In Sect. 4, we analyze the robust construction based on polynomial hash functions given in [10], showing why its instantiation may be vulnerable to side-channel attacks. We then prove that, under non-restrictive conditions on the underlying deterministic pseudo-random generator, the generic construction actually meets our stronger security property. Finally, in Sect. 5, we discuss the instantiations of this construction.

2 Preliminaries

2.1 Notations and Definitions

Probabilities. When X is a distribution, or a random variable following this distribution, we denote \(x \mathop {\leftarrow }\limits ^{{}_\$}X\) when x is sampled according to X. For a variable X and a set S, the notation \(X \mathop {\leftarrow }\limits ^{{}_\$}S\) denotes both assigning X a value uniformly chosen from S and letting X be a uniform random variable over S. The uniform distribution over n bits is denoted \(\mathcal {U}_n\).

Indistinguishability. Two distributions X and Y are said to be \((t,\varepsilon )\)-computationally indistinguishable (and we denote this property by \(\mathbf {CD}_t(X,Y)\)) if, for any distinguisher \(\mathcal {A}\) running within time t, its advantage in distinguishing a random variable following X from a random variable following Y, denoted \(|\Pr [\mathcal {A}(X) = 1] - \Pr [\mathcal {A}(Y) = 1]|\), is bounded by \(\varepsilon \). When \(t=\infty \), meaning \(\mathcal {A}\) is unbounded, we say that X and Y are \(\varepsilon \)-close.

Pseudo-Random Generators. A function \(\mathbf {G}:\{0,1\}^m\rightarrow \{0,1\}^n\) is a (deterministic) \((t,\varepsilon )\)-pseudo-random generator (PRG) if \(\mathbf {CD}_t(\mathbf {G}(\mathcal {U}_m),\mathcal {U}_n) \leqslant \varepsilon \).

Pseudo-Random Functions. A keyed family of functions \(\mathbf {F}: \{0,1\}^{\mu } \times \{0,1\}^{\mu }\) \(\rightarrow \{0,1\}^{\mu }\) is a \((t,q,\varepsilon )\)-pseudo-random function (PRF) if no adversary can have an advantage greater than \(\varepsilon \), within time t, in distinguishing, for a random key \(K\mathop {\leftarrow }\limits ^{{}_\$}\{0,1\}^\mu \), q answers \(F_K(x_i)\) for adaptively chosen inputs \((x_i)\), from q random answers \(y_i \mathop {\leftarrow }\limits ^{{}_\$}\{0,1\}^\mu \), for \(i=1,\ldots ,q\).

Entropy. For a discrete distribution X, we denote its min-entropy by \(\mathbf {H}_\infty (X) = \min _{x \in X} \{ - \log \Pr [X = x] \}\).
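As a quick illustration of this definition, the following small Python snippet (our own toy example, not part of the paper) computes the min-entropy of a biased source and compares it with a uniform one.

```python
# Toy illustration of min-entropy (not from the paper): H_inf(X) = -log2(max_x Pr[X = x]).
import math

def min_entropy(probs):
    return -math.log2(max(probs))

biased  = [3/4, 1/12, 1/12, 1/12]   # a biased 2-bit source
uniform = [1/4] * 4                 # the uniform 2-bit source

print(min_entropy(biased))          # ~0.415 bits
print(min_entropy(uniform))         # exactly 2 bits
```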

Extractors. Let \(\mathcal {H}= \{h_X~:~ \{0,1\}^n \rightarrow \{0,1\}^m\}_{X \in \{0,1\}^d}\) be a hash function family. We say that \(\mathcal {H}\) is a \((k, \varepsilon )\)-extractor if for any random variable I over \(\{0,1\}^n\) with \(\mathbf {H}_\infty (I) \geqslant k\), the distributions \((X, h_X(I))\) and \((X, U)\) are \(\varepsilon \)-close, where X is uniformly random over \(\{0,1\}^d\) and U is uniformly random over \(\{0,1\}^m\). We say that \(\mathcal {H}\) is \(\rho \)-universal if for any inputs \(I \ne I' \in \{0,1\}^n\) we have \(\Pr _{X \mathop {\leftarrow }\limits ^{{}_\$}\{0,1\}^d}[h_X(I) = h_{X}(I')] \leqslant \rho .\)

Lemma 1

(Leftover-Hash Lemma). [21, Theorem 8.37] Assume that \(\mathcal {H}\) is \(\rho \)-universal where \(\rho = (1+ \alpha )2^{-m}\) for some \(\alpha > 0\). Then, for any \(k > 0\), it is also a \((k, \varepsilon )\)-extractor for \(\varepsilon = \frac{1}{2}\sqrt{2^{m-k} + \alpha }\).
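As a hedged numerical illustration of Lemma 1, the snippet below (ours) evaluates the bound \(\varepsilon = \frac{1}{2}\sqrt{2^{m-k} + \alpha }\); the values m = 128 and k = 449 are chosen to match the parameters used later in Sect. 5.3, and \(\alpha = 0\) (a perfectly universal family) is an assumption made only for this example.

```python
# Numerical illustration of the Leftover-Hash Lemma bound of Lemma 1.
# m = 128 and k = 449 follow the parameters of Sect. 5.3; alpha = 0 is an
# assumption (perfectly universal family) made only for this example.
import math

def lhl_log2_epsilon(m, k, alpha=0.0):
    """Return log2 of epsilon = 0.5 * sqrt(2^(m-k) + alpha)."""
    return math.log2(0.5 * math.sqrt(2.0 ** (m - k) + alpha))

print(lhl_log2_epsilon(128, 449))   # ~ -161.5, i.e. epsilon ~ 2^(-161.5)
```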

2.2 Basic Security Model

In this section, we recall the notation and security notions for a PRNG with input and for a distribution sampler, as introduced in [10]. In Sect. 3, we will propose a stronger security model that takes into account possible leakage of information in the context of side-channel attacks.

Definition 1

(PRNG with Input). A PRNG with input is a triple of algorithms \({\mathcal {G}}= (\mathsf {setup},\mathsf {refresh},\mathsf {next})\), with \(n\) the state length, \(\ell \) the output length, and p the input length:

  • \(\mathsf {setup}\) is a probabilistic algorithm that outputs some public parameters \(\mathsf {seed}\);

  • \(\mathsf {refresh}\) is a deterministic algorithm that, given \(\mathsf {seed}\), a state \(S \in \{0,1\}^{n}\) and an additional input \(I \in \{0,1\}^{p}\), outputs a new state \(S' = \mathsf {refresh}(S, I;\mathsf {seed}) \in \{0,1\}^{n}\);

  • \(\mathsf {next}\) is a deterministic algorithm that, given \(\mathsf {seed}\) and a state \(S \in \{0,1\}^{n}\), outputs a pair \((S',R) = \mathsf {next}(S;\mathsf {seed})\) where \(S' \in \{0,1\}^{n}\) is the new state and \(R \in \{0,1\}^{\ell }\) is the output randomness.

The parameter \(\mathsf {seed}\) is public and fixed in the system once and for all. For the sake of clarity, we drop it in the notations and write \(S' = \mathsf {refresh}(S, I)\) instead of \(\mathsf {refresh}(S, I;\mathsf {seed})\) and \((S',R) = \mathsf {next}(S)\) instead of \(\mathsf {next}(S;\mathsf {seed})\). When a specific part of the \(\mathsf {seed}\) is required, we explicitly add it as input.

In practice, the global internal state contains all the PRNG features and some structured and redundant information, such as counters, in addition to a random pool of length n. When the adversary gets access to the state, it gets access to all of this information, with both reading (\(\mathsf {get}\text {-}\mathsf {state}\)) and writing (\(\mathsf {set}\text {-}\mathsf {state}\)) capabilities. However, when we write S in this paper, for the sake of simplicity, we refer to the randomness pool, which we expect to be truly random, or to have as much entropy as possible.

Adversary. We consider an attacker divided into two parts: a distribution sampler \(\mathcal {D}\) and a classical attacker \(\mathcal {A}\). The former generates seed-independent inputs that will be used by the PRNG to improve the quality of its entropy with the \(\mathsf {refresh}\) algorithm. These inputs, potentially biased and under partial adversarial control, are generated in practice from the device activities (e.g., system interrupts) and consequently cannot depend on the parameter \(\mathsf {seed}\). Note that, as explained in [10] and in [11], seed-independence is necessary to achieve security of the scheme.

Definition 2

(Distribution Sampler). A distribution sampler \(\mathcal {D}\) is a stateful and probabilistic algorithm which, given the current state \(\sigma \), outputs a tuple \((\sigma ',I,\gamma ,z)\) where \(\sigma '\) is the new state for \(\mathcal {D}\), \(I \in \{0,1\}^p\) will be the next input for the \(\mathsf {refresh}\) algorithm, \(\gamma \) is some entropy estimate for I, and z is the possible leakage about I given to the adversary \(\mathcal {A}\).

If q denotes an upper bound on the number of executions of \(\mathcal {D}\), such a distribution sampler is said to be legitimate if the min-entropy of every input \(I_j\) is not smaller than the entropy estimate \(\gamma _j\), even given all the additional information: \(\mathbf {H}_\infty (I_j \mid I_1,\dots ,I_{j-1},I_{j+1},\dots ,I_{q}, z_1,\ldots ,z_{q}, \gamma _1,\ldots ,\gamma _{q}) \geqslant \gamma _j\), for all \(j \in \{1,\dots ,q\}\), where \((\sigma _i,I_i,\gamma _i,z_i) = \mathcal {D}(\sigma _{i-1})\) for \(i \in \{1,\dots ,q\}\) and \(\sigma _0 = 0\).
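To make the interface concrete, here is a minimal toy distribution sampler in Python; the event-like entropy source, the 64-bit input length and the conservative estimate of one bit of entropy per byte are assumptions of this sketch, not the paper's sampler.

```python
# Toy distribution sampler matching the interface of Definition 2 (illustrative only).
import os

def toy_sampler(sigma):
    """Return (sigma', I, gamma, z): new state, refresh input, entropy estimate, leakage."""
    sigma_next = sigma + 1
    I = os.urandom(8)     # stand-in for noisy device events (interrupts, timings), p = 64 bits
    gamma = 8             # conservative claim: at least 1 bit of min-entropy per byte
    z = len(I)            # harmless example of side information handed to the adversary A
    return sigma_next, I, gamma, z

sigma = 0
for _ in range(3):
    sigma, I, gamma, z = toy_sampler(sigma)
    print(I.hex(), gamma, z)
```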

Robustness. We now recall the security game \(\mathsf {ROB}(\gamma ^*)\), from [10], that defines the main security notion for a PRNG with input, the robustness. We have slightly modified the initial definition, but in an equivalent way (see Fig. 1, without the leaking procedures nor the leakage function f as input to the \(\mathsf {initialize}\) procedure). In the security game,

  • the parameter \(\gamma ^*\) defines the minimal entropy that is required in the internal state of the PRNG so that the output looks random. Below this threshold, the PRNG has not accumulated enough entropy in its internal state and is thus not considered safe for generating random-looking outputs;

  • the variable c is an estimate of the actual entropy collected in the internal state of the PRNG. It does not make use of any entropy estimator, but just considers the lower bound provided by the distribution sampler on the entropy of the input. In the case of a legitimate distribution sampler, this lower bound is correct;

  • the flag/function \(\mathsf {compromised}\) is a Boolean variable that is \(\mathsf {true}\) if the actual entropy (the parameter c) is below the threshold \(\gamma ^*\). In such a case, the PRNG has an unsafe status, and the adversary may thus have some control over it;

  • the challenge b is a bit that will be used to challenge the adversary, whose goal is to guess it.

The game \(\mathsf {ROB}(\gamma ^*)\) starts with an \(\mathsf {initialize}\) procedure, uses procedures to answer oracle queries from the adversary \(\mathcal {A}\), and ends with a \(\mathsf {finalize}\) procedure. The procedure \(\mathsf {initialize}\) sets the parameter \(\mathsf {seed}\) with a call to algorithm \(\mathsf {setup}\), as well as the internal state S of the PRNG, c, and b. After all oracle queries, the adversary \(\mathcal {A}\) outputs a bit \(b^*\), given as input to the procedure \(\mathsf {finalize}\), which compares the response of \(\mathcal {A}\) to the challenge bit b. The procedures used to answer oracle queries are the following:

  • the procedures \(\mathsf {get}\text {-}\mathsf {state}/\mathsf {set}\text {-}\mathsf {state}\) allow the adversary \(\mathcal {A}\) to learn or to fix the whole internal state of the PRNG, including the structured part. However, as mentioned above, we just model the impact on the random pool in this analysis;

  • the procedure \(\mathsf {next}\text {-}\mathsf {ror}\) is used to challenge the adversary \(\mathcal {A}\) on its capability to distinguish the output of the PRNG from a truly random output. In the safe case (when \(c \geqslant \gamma ^*\)), the new estimated entropy of the internal state, in the variable c, is then set to the state length, that is n. In the unsafe case, as explained in [10], the real value of R, which might reveal non-trivial information about the weak internal state, is first output and then the new estimated entropy of the internal state, in the variable c, is reset to 0.

  • the procedure \(\mathcal {D}\text {-}\mathsf {refresh}\) allows the adversary \(\mathcal {A}\) to call the distribution sampler \(\mathcal {D}\) to get a new input and to run the \(\mathsf {refresh}\) algorithm with this specific input to improve the quality of the internal state. In addition to the input I for the PRNG, the distribution sampler \(\mathcal {D}\) also outputs the leakage z on input I that is given to \(\mathcal {A}\) and an estimate \(\gamma \) of the entropy of the input (with respect to all the other inputs and the leakage information z). We use a more conservative definition, considering that entropy really accumulates only if c was below \(\gamma ^*\); otherwise, c stays unchanged. The new estimated entropy of the internal state, in the variable c, is thus set to \(c + \gamma \) if c was below \(\gamma ^*\), but of course with a maximum of n.

Note that we dropped the \(\mathsf {get}\text {-}\mathsf {next}\) procedure from [10], but as noted by [4, 8], multiple calls to \(\mathsf {next}\text {-}\mathsf {ror}\) are enough to capture a similar security level.

Definition 3

(Robustness of PRNG with Input). A pseudo-random number generator with input \({\mathcal {G}}= (\mathsf {setup},\mathsf {refresh},\mathsf {next})\) is called \((t,{q_r},{q_n},{q_s},\gamma ^*,\varepsilon )\) -robust, if for any adversary \(\mathcal {A}\) running within time t, that first generates a legitimate distribution sampler \(\mathcal {D}\) (for the \(\mathcal {D}\text {-}\mathsf {refresh}\) procedure), that thereafter makes at most \({q_r}\) calls to \(\mathcal {D}\text {-}\mathsf {refresh}\), \({q_n}\) calls to \(\mathsf {next}\text {-}\mathsf {ror}\), and \({q_s}\) calls to \(\mathsf {get}\text {-}\mathsf {state}/\mathsf {set}\text {-}\mathsf {state}\), the advantage of \(\mathcal {A}\) in game \(\mathsf {ROB}(\gamma ^*)\) is at most \(\varepsilon \).

2.3 Model for Information Leakage

In this paper, we aim to protect the construction of Dodis et al. against side-channel attacks. For this purpose, we describe the leakage model hereafter.

Only Computation Leaks. Following the axiom “Only Computation Leaks” of Micali and Reyzin [18], we assume that only the data being manipulated in a computation can leak during this computation. This assumption fits well with practical observations. As a consequence, we can split our cryptographic primitives into small blocks that independently leak functions of their inputs. We also allow the adversary to choose a different leakage function for each block.

Bounded Leakage Per Iteration. Since it is better suited for a PRNG [7], we follow the leakage-resilient cryptography model, with the strong requirement of preserving reasonable performance. We let the adversary choose the polynomial-time leakage functions, with a bound \(\lambda \) on their output length. This parameter is closely related to the security parameter of the underlying cryptographic primitives and will be part of the global scheme’s security bound.

Non-adaptive Leakage. Leaving the choice of the leakage functions to the adversary reflects the desire to consider every possible component, whatever its way of leaking. However, we base our work on the practical observation that leakage functions depend entirely on the underlying device. In contrast, a few works (e.g., [13, 19]) give the adversary the possibility to modify its leakage functions according to its current knowledge. Even if this model aims to be more general, it leads to unrealistic scenarios, since the adversary would then be able to predict further steps of the algorithm through leakage functions that no real device could implement. For these reasons, this work, like many others before it [2, 15, 24, 25], only considers non-adaptive leakage functions.

3 Leakage-Resilient Robustness of a PRNG with Input

In the security model of [10], recalled in the previous section, the distribution sampler \(\mathcal {D}\) generates the external inputs used to refresh the PRNG and already gives the adversary \(\mathcal {A}\) some information about how the environment of the PRNG behaves when it generates these inputs. This information is modeled by z. In order to model information leakage during the executions of the PRNG algorithms \(\mathsf {refresh}\) and \(\mathsf {next}\), we give the adversary the choice of the leakage functions, which we globally name f, associated with each algorithm, or even with each small block. Since we restrict our model to non-adaptive leakage, we ask the adversary to choose them beforehand. They are therefore provided as input to the \(\mathsf {initialize}\) procedure by the adversary (see Fig. 1). Then, each leakage function will be implicitly used by our two new procedures named \(\mathsf {leak}\text {-}\mathsf {refresh}\) and \(\mathsf {leak}\text {-}\mathsf {next}\) that, in addition to the usual outputs, also provide some leakage L about the manipulated data, as described in Sect. 2.3. We thus have a new parameter \(\lambda \), which bounds the output length of the leakage functions. Our new Leakage-Resilient Robustness security game \(\mathsf {LROB}(\gamma ^*, \lambda )\) makes use of the procedures described in Fig. 1 and is described in detail below; a short sketch of the corresponding entropy-counter bookkeeping follows the list of procedures:

Fig. 1. Procedures in the leakage-resilient robustness security game \(\mathsf {LROB}(\gamma ^*, \lambda )\)

  • the parameter \(\gamma ^*\), the variable c, and the Boolean flag/function \(\mathsf {compromised}\) are the same as in Sect. 2.2 for the basic robustness;

  • the new parameter \(\lambda \) fixes the maximal information leakage which can be collected during the execution of operations \(\mathsf {refresh}\) and \(\mathsf {next}\). Namely, for each operation (\(\mathsf {refresh}\) or \(\mathsf {next}\)), the leakage functions globally output at most \(\lambda \) bits. Such a leakage will be available when querying the leaking procedures \(\mathsf {leak}\text {-}\mathsf {refresh}\) and \(\mathsf {leak}\text {-}\mathsf {next}\) below;

  • the new parameter \(\alpha \) is an integer that models the minimal expected entropy of S after a \(\mathsf {leak}\text {-}\mathsf {next}\) (\(\mathsf {next}\) with leakage) call, in a safe case (\(\mathsf {compromised}\) is \(\mathsf {false}\)), that is when the entropy of the internal state was assumed greater than \(\gamma ^*\). This captures both the creation of computational entropy during a \(\mathsf {next}\) execution and the smaller loss of entropy caused by the leakage. We could expect \(\alpha = n-\lambda \), but it may depend on the explicit construction;

  • the procedures \(\mathsf {initialize}(\mathcal {D},f)/\mathsf {finalize}(b^*)\) initiate the security game with the additional leakage function f, check whether the adversary has won the game and output 1 in this case or 0 otherwise. Contrary to the choice made in [10] (which is also valid), the initial state S is here set to zero (as well as the entropy counter) so that no assumption needs to be made on its initialization;

  • the procedures \(\mathsf {get}\text {-}\mathsf {state}/\mathsf {set}\text {-}\mathsf {state}\), \(\mathcal {D}\text {-}\mathsf {refresh}\), and \(\mathsf {next}\text {-}\mathsf {ror}\) are the same as for the basic robustness;

  • the procedure \(\mathsf {leak}\text {-}\mathsf {refresh}\) runs the \(\mathsf {refresh}\) algorithm but additionally provides some information leakage L on the input \((S, I)\) and \(\mathsf {seed}\), as above. As for the \(\mathsf {next}\text {-}\mathsf {ror}\)-queries, the leakage can reveal non-trivial information about a weak internal state even before the refresh takes effect, so we reduce c by \(\lambda \) bits; if it drops below the threshold \(\gamma ^*\), it is reset to 0. Again, we could have strengthened this definition, but we preferred to keep a conservative notion. Furthermore, this strict notion is important w.r.t. our new definitions of recovering and preserving security with leakage. Note that if the \(\mathcal {D}\text {-}\mathsf {refresh}\) algorithm is complex, several leakage functions can be defined at every step, but the global leakage is limited to \(\lambda \), hence the notation \(\{\ldots \}\), since they can be interleaved.

  • the procedure \(\mathsf {leak}\text {-}\mathsf {next}\) runs the \(\mathsf {next}\) algorithm but additionally provides some information leakage L on the input S and \(\mathsf {seed}\), according to the leakage function f provided to the \(\mathsf {initialize}\) procedure. If the status was safe, then the new entropy estimate c is set to \(\alpha \), otherwise, it is reset to 0 (as for the \(\mathsf {next}\text {-}\mathsf {ror}\)). As above, if the \(\mathsf {next}\) algorithm is complex, several leakage functions can be defined at each step, but the global leakage is limited to \(\lambda \).
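The short Python sketch below (ours) transcribes the entropy-counter bookkeeping of the main procedures. Only the counter c is modeled, not the actual state transitions; since Fig. 1 is not reproduced here, \(\mathsf {leak}\text {-}\mathsf {refresh}\) is only summarized in a comment, and the values \(\gamma = 61\), \(\alpha = 480\), n = 489 and \(\gamma ^* = 449\) are purely illustrative.

```python
# Illustrative bookkeeping of the entropy counter c in LROB(gamma*, lambda).
# leak-refresh is omitted: as described above, it additionally reduces c by
# lambda and resets it to 0 if it drops below gamma*; its exact pseudo-code is
# given in Fig. 1, which is not reproduced here.

def d_refresh(c, gamma, n, gamma_star):
    # Entropy accumulates only while the counter is below the threshold.
    return min(c + gamma, n) if c < gamma_star else c

def next_ror(c, n, gamma_star):
    # Safe case: the state is considered refilled to n bits; unsafe case: reset to 0.
    return n if c >= gamma_star else 0

def leak_next(c, alpha, gamma_star):
    # Safe case: alpha models the entropy left in S after a leaking next call.
    return alpha if c >= gamma_star else 0

# Short trace with n = 489 and gamma* = 449 (parameters reused in Sect. 5.3).
c = 0
for _ in range(8):
    c = d_refresh(c, gamma=61, n=489, gamma_star=449)   # 8 x 61 = 488 >= 449
print(c, next_ror(c, 489, 449), leak_next(489, alpha=480, gamma_star=449))
```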

As in [10], the attacker has two parts: a distribution sampler and a classical attacker, with the former only used to generate seed-independent inputs (potentially partially biased) from device activities. Examples of entropy traces for the procedures defined in [10] and in our new model are provided in Fig. 2. The threshold \(\gamma ^*\) has to be slightly higher in our new model, because for a similar \(\mathsf {next}\) algorithm, we need to accumulate a bit more entropy to maintain security even in the presence of leakage. Typically, it has to be increased by \(\lambda \). Now that we have detailed the new security game, we can define the notion of leakage-resilient robustness of a PRNG with input.

Fig. 2. Traces of entropy estimates in the model of [10] (top) and in our new model (bottom)

Definition 4

(Leakage-Resilient Robustness of PRNG with Input). A pseudo-random number generator with input \({\mathcal {G}}= (\mathsf {setup},\mathsf {refresh},\mathsf {next})\) is called \((t,{q_r},{q_n},{q_s},\gamma ^*,\lambda ,\varepsilon )\) -leakage-resilient robust, if for any adversary \(\mathcal {A}\) running in time t, that first generates a legitimate distribution sampler \(\mathcal {D}\) (for the \(\mathcal {D}\text {-}\mathsf {refresh}\)/ \(\mathsf {leak}\text {-}\mathsf {refresh}\) procedure), and that thereafter makes at most \({q_r}\) calls to \(\mathcal {D}\text {-}\mathsf {refresh}/\mathsf {leak}\text {-}\mathsf {refresh}\), \({q_n}\) calls to \(\mathsf {next}\text {-}\mathsf {ror}/\mathsf {leak}\text {-}\mathsf {next}\), and \({q_s}\) calls to \(\mathsf {get}\text {-}\mathsf {state}/\mathsf {set}\text {-}\mathsf {state}\), with a leakage bounded by \(\lambda \), the advantage of \(\mathcal {A}\) in game \(\mathsf {LROB}(\gamma ^*,\lambda )\) is at most \(\varepsilon \).

4 New Construction

In this section, we show how to modify the original construction of [10] to achieve robustness together with resistance against side-channel attacks.

4.1 Original Construction

We first recall the robust PRNG construction of [10], named \({\mathcal {G}}\). It makes use of a \((t,\varepsilon )\)-secure pseudo-random generator (PRG) \(\mathbf {G}:\{0,1\}^m\rightarrow \{0,1\}^{n+\ell }\). The \(\mathsf {seed}\) is a pair \((X,X')\), n is the state length, \(\ell \) is the output length, and \(p =n\) is the input length. This construction uses multiplication because it gives a proven seeded extractor that accumulates entropy, which we do not know how to do with a hash function; it is also more efficient (an illustrative sketch follows the description below):

  • \(\mathsf {setup}()\) outputs \(\mathsf {seed}= (X,X') \leftarrow \{0,1\}^{2n}\);

  • \(S' = \mathsf {refresh}(S,I;X) = S \cdot X + I\), where all operations are over \(\mathbb {F}_{2^n}\);

  • \((S',R) = \mathsf {next}(S;X')= \mathbf {G}(U)\), where \(U = [X' \cdot S]^{m}_1\), the truncation of \(X' \cdot S\).
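For concreteness, here is an illustrative Python sketch (ours) of this construction. The field \(\mathbb {F}_{2^{489}}\) is defined with the polynomial \(X^{489} + X^{83} + 1\) used in Sect. 5.3, the PRG \(\mathbf {G}\) is replaced by a SHAKE-based stand-in rather than the \(\mathsf {AES}\) counter-mode instantiation discussed later, and taking the most significant m bits for the truncation \([X' \cdot S]^{m}_1\) is a convention choice of this sketch.

```python
# Illustrative sketch of the construction of [10] (stand-in PRG, toy conventions).
import hashlib, os

N, M, ELL = 489, 128, 128                 # state, PRG-input and output lengths (bits)
MOD = (1 << 489) | (1 << 83) | 1          # X^489 + X^83 + 1, the field polynomial of Sect. 5.3

def gf_mul(a, b):
    """Carry-less multiplication modulo X^489 + X^83 + 1 (multiplication in F_{2^n})."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> N:
            a ^= MOD
    return r

def refresh(S, I, X):
    """S' = S * X + I over F_{2^n} (addition is XOR)."""
    return gf_mul(S, X) ^ I

def G(U):
    """Stand-in PRG {0,1}^m -> {0,1}^(n+l); the paper instantiates it with AES in counter mode."""
    out = hashlib.shake_256(U.to_bytes(M // 8, "big")).digest((N + ELL + 7) // 8)
    return int.from_bytes(out, "big") & ((1 << (N + ELL)) - 1)

def next_output(S, X_prime):
    U = gf_mul(X_prime, S) >> (N - M)     # truncation [X'.S]_1^m: top m bits (convention)
    T = G(U)
    return T >> ELL, T & ((1 << ELL) - 1) # new state S' (n bits) and output R (l bits)

# setup(): seed = (X, X') uniform over {0,1}^(2n); then one refresh and one next call.
rnd = lambda: int.from_bytes(os.urandom(N // 8 + 1), "big") % (1 << N)
X, X_prime, S = rnd(), rnd(), 0
S = refresh(S, rnd(), X)
S, R = next_output(S, X_prime)
print(hex(R))
```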

Unfortunately, even a secure PRG is not enough to resist information leakage. As shown below, the instantiation proposed in [10] is vulnerable to side-channel attacks. However, with a secure and leakage-resilient PRG, we prove that the whole construction remains secure even in the presence of leakage.

4.2 Limitations of the Original Construction

In the original paper [10], \(\mathbf {G}\) is instantiated with the pseudo-random function \(\mathsf {AES}\) in counter mode with the truncated product U as the secret key. Depending on the parameters, several calls to the PRF are required. We show hereafter that when the implementation is leaking, this construction faces vulnerabilities.

As shown in [7], several calls to \(\mathsf {AES}\) with known inputs and one single secret key may lead to very efficient side-channel attacks that can help to recover the secret key. Because of the numerous executions of \(\mathsf {AES}\) with the same key, one essentially performs a differential power analysis (DPA) attack. Then, for the above construction, during a \(\mathsf {leak}\text {-}\mathsf {next}\), even with a safe state, the DPA can reveal the secret key of the internal \(\mathsf {AES}\), which is also used to generate the new internal state from public plaintexts. This internal state, after the \(\mathsf {leak}\text {-}\mathsf {next}\), can thus be recovered, whereas it is considered safe in the security game. A \(\mathsf {next}\text {-}\mathsf {ror}\) challenge can then be easily broken.
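The following sketch (ours) only illustrates the consequence of such a key recovery, not the DPA itself: once the key U used inside \(\mathsf {next}\) is known, the whole output of \(\mathbf {G}\), and hence the new internal state, can be recomputed offline from public counter values. \(\mathsf {AES}\) is replaced by a SHA-256-based stand-in PRF, and \(\nu = 5\) follows the parameters recalled in Sect. 5.3.

```python
# Consequence of a DPA-recovered key in the original instantiation (illustrative).
import hashlib

def prf(key: bytes, counter: int) -> bytes:
    """16-byte stand-in for AES_key(counter)."""
    return hashlib.sha256(key + counter.to_bytes(16, "big")).digest()[:16]

def original_G(U: bytes, nu: int = 5) -> bytes:
    """G(U) = AES_U(0) || ... || AES_U(nu - 1), as recalled in Sect. 5.3."""
    return b"".join(prf(U, i) for i in range(nu))

# Suppose a DPA during leak-next recovered the key U (placeholder value below).
# The adversary then recomputes the full PRG output, i.e. the new "safe" state
# and the next output, with no further access to the device.
U_recovered = bytes(16)
print(len(original_G(U_recovered)) * 8, "bits recomputed from the key alone")
```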

Furthermore, even if we only make a few calls with the same key, with a counter as input, the adversary can predict future randomness. This vulnerability applies to \(\mathsf {AES}\) with predictable inputs. As determined by the security games, the adversary chooses a leakage function f to collect leakage during the product of the internal state S with the public seed \(X'\) and its truncation. Assume that this function is \(f(S,X')=\)

$$\begin{aligned}&\left[ \mathsf {AES}_{\left( \left[ X' \cdot \left( \mathsf {AES}_{[X'\cdot S]_1^m}(C_0) || \dots || \mathsf {AES}_{[X'\cdot S]_1^m}(C_0+\lceil \frac{n}{m} \rceil -1)\right) \right] _1^m \right) } \left( C_0 + \left\lceil \frac{n+\ell }{m} \right\rceil \right) \right] _1^{\lambda } \end{aligned}$$

with \(C_0\) an integer arbitrarily chosen by the attacker. With this leakage function set, the adversary can make a \(\mathsf {set}\text {-}\mathsf {state}\)-call and fix the counter C to \(C_0\). Indeed, this counter is a part of the global internal state which can be compromised by the adversary. Following this compromise, sufficiently many calls to \(\mathcal {D}\text {-}\mathsf {refresh}\) are made to refresh S so that its entropy increases above the threshold \(\gamma ^*\). Then, the attacker can ask a \(\mathsf {leak}\text {-}\mathsf {next}\)-query and gets back the leakage \(f(S,X')\) described above. Eventually, the attacker asks a challenge \(\mathsf {next}\text {-}\mathsf {ror}\)-query, and either gets the real output or a random one. The \(\lambda \) bits it got from the leakage are exactly the first \(\lambda \) bits of the real output. The attacker consequently has a significant advantage in the \(\mathsf {next}\text {-}\mathsf {ror}\) challenge.

4.3 New Assumption

We slightly modify the requirements of [10] on the PRG \(\mathbf {G}\), to keep the PRNG secure even in the presence of leakage: The PRG \(\mathbf {G}: \{0,1\}^m \rightarrow \{0,1\}^{n+\ell }\) instantiated with the truncated product \(U=[X' \cdot S]_1^m\) is now required to be a \((\alpha , \lambda )\)-leakage-resilient and \((t,\varepsilon )\)-secure PRG according to Definition 5. In that definition, \(\lambda \) denotes the leakage during one execution of \(\mathbf {G}\), and \(\alpha \) is the expected entropy of the output, even given the leakage.

Definition 5

A PRG \(\mathbf {G}: \{0,1\}^m \rightarrow \{0,1\}^{N}\) is \((\alpha ,\lambda )\)-leakage-resilient and \((t,\varepsilon )\)-secure if it is first a \((t,\varepsilon )\)-secure PRG and, in addition, for any adversary \(\mathcal {A}\), running within time t, that first outputs a leakage function f with \(\lambda \)-bit outputs, there exists a source \(\mathcal {S}\) that outputs couples \((L,T)\in \{0,1\}^{\lambda } \times \{0,1\}^{N}\), such that the entropy of T, conditioned on L, is greater than \(\alpha \), and the advantage with which \(\mathcal {A}\) can distinguish \((f(\mathcal {U}_m),\mathbf {G}(\mathcal {U}_m))\) from \((L, T)\) is bounded by \(\varepsilon \). Note that \(f(\mathcal {U}_m)\) denotes the information leakage generated by f during this execution of \(\mathbf {G}\) (on the inputs at the various atomic steps of the computation, which include \(\mathcal {U}_m\) and possibly some internal values).

This definition ensures that for one execution of \(\mathbf {G}\), its output is indistinguishable from a source of min-entropy \(\alpha \), with a leakage of size \(\lambda \) on the input of \(\mathbf {G}\).

4.4 Security Analysis

Theorem 1 shows that the PRNG \({\mathcal {G}}\) is leakage-resilient robust.

Theorem 1

Let m, n, \(\alpha \), and \(\gamma ^*\) be integers, such that \(n>m\) and \(\alpha > \gamma ^*\), and \(\mathbf {G}: \{0,1\}^m \rightarrow \{0,1\}^{n+\ell }\) an \((\alpha +\ell ,\lambda )\)-leakage-resilient and \((t,\varepsilon _{\mathbf {G}})\)-secure PRG. Then, the PRNG \({\mathcal {G}}\) previously defined and instantiated with \(\mathbf {G}\) is \((t',{q_r},{q_n},{q_s},\) \(\gamma ^*,\lambda , \varepsilon )\)-leakage-resilient robust where \(t' \approx t\), after at most \(q={q_r}+{q_n}+{q_s}\) queries, where \({q_r}\) is the number of \(\mathcal {D}\text {-}\mathsf {refresh}/\) \(\mathsf {leak}\text {-}\mathsf {refresh}\)-queries, \({q_n}\) the number of \(\mathsf {next}\text {-}\mathsf {ror}/\mathsf {leak}\text {-}\mathsf {next}\)-queries, and \({q_s}\) the number of \(\mathsf {get}\text {-}\mathsf {state}/\mathsf {set}\text {-}\mathsf {state}\)-queries, where \(\varepsilon \le q {q_n}\cdot \left( ({q_r}^2+1) \cdot \varepsilon _{ext}+3\varepsilon _{\mathbf {G}} \right) \) and \(\varepsilon _{ext} = \sqrt{2^{m+1-\delta }}\) for \(\delta =\min \{n-\log {q_r},\gamma ^*-\lambda \}\).

To prove Theorem 1, we need to adapt the notions of recovering and preserving introduced in [10] to also capture information leakage. We then prove an intermediate result which states that the combination of recovering and preserving, both with leakage, imply leakage-resilient robustness. Finally, we show that the PRNG \({\mathcal {G}}\) satisfies both the recovering security with leakage and the preserving security with leakage. Full details of the proof are given in the full version [3].

5 Instantiations of the PRG \(\mathbf {G}\)

In the previous section, we explained that the original instantiation in [10] is vulnerable to side-channel attacks and needs a stronger PRG \(\mathbf {G}\), namely a leakage-resilient PRG which takes as input a perfectly random m-bit string U and generates an \((n+\ell )\)-bit output \(T=(S,R)\) that looks random. Even in the case of leakage, S should have enough entropy. In this section, we first discuss the use of existing primitives for such a leakage-resilient PRG \(\mathbf {G}\). Then, we propose a new concrete instantiation that may achieve better performance in specific scenarios by taking advantage of the PRNG design. Finally, we provide a security analysis of our solution and implement it to give some benchmarks.

5.1 Existing Constructions

To instantiate the PRG \(\mathbf {G}\), we need a leakage-resilient construction which can make use of a bounded part of the internal state. We recall here two leakage-resilient constructions which can be tweaked to fit these requirements at a reasonable cost. The first one is a binary tree PRF introduced by Faust, Pietrzak and Schipper at CHES 2012 [15] and the second one is a sequential PRNG with minimum public randomness proposed by Yu and Standaert at CT-RSA 2013 [25]. We voluntarily ignore the chronological order and start the description with the second instantiation, since it will be used to complete the first one.

Sequential PRNG from [25]. The PRNG of Yu and Standaert comes with an internal state made of two randomly chosen values: a secret key \(K_0 \in \{0,1\}^\mu \) and a public seed \(s \in \{0,1\}^\mu \). The construction is made of two stages. In the upper stage, a (non leakage-resilient) generator \(\mathbf {F'}\) is processed in counter mode to expand the seed s into uniformly random values \(p_0, p_1, \dots \). In the lower stage, a (non leakage-resilient) PRF \(\mathbf {F}\) generates outputs with the public values \(p_i\) and updates the secret key so that it is never used more than twice. The parameter s can be included in our PRNG \(\mathsf {seed}\) (under the notation \(X''\)) since it shares the same properties as X and \(X'\). However, the current counter varies and thus needs to be stored in the deterministic part of the internal state. In the proof of [25], the counter is implicitly required to be different at each use since the public values \(p_i\) need to be independent. But in our model of leakage-resilient robustness, the deterministic part of the internal state can be definitively compromised. An attacker could, in this case, set the counter to a previous value, making the public values \(p_i\) no longer independent. To thwart this issue, we suggest extending the internal state so that the truncated part of full entropy can contain both the secret key \(K_0\) and a uniformly random counter used only for a single execution of \(\mathsf {next}\). Hence, no parameter can be compromised and we are back to the context of the original proof. The only difference in the security comes from the probability of collisions when using a uniformly random counter at each call.

This two-stage instantiation is illustrated in Fig. 3. One can note that the input U is split into two slices, which initialize the secret key \(K_0\) and the counter C, each of size \(\mu \). To relate these parameters to those of our PRNG from Sect. 4, recall that the latter provides an m-bit random string U as input to the PRG \(\mathbf {G}\) and expects back an N-bit string, where \(N=n+\ell \), that is \(\kappa =N/\mu \) blocks generated with \(\kappa \) keys. The \(\kappa \) blocks of output and new internal state are all generated using \(2 \kappa -1\) calls to \(\mathbf {F'}\) and \(2 \kappa -1\) calls to \(\mathbf {F}\).
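A minimal Python sketch (ours) of this two-stage construction is given below; \(\mathbf {F}\) and \(\mathbf {F'}\) are SHA-256-based stand-ins (in the instantiations of Sect. 5.3 both would be \(\mathsf {AES}\) with 128-bit keys), and the exact key-update schedule is a simplification of Fig. 3.

```python
# Sketch of the sequential construction of [25] with U = (C, K0) (stand-in PRFs).
import hashlib

MU = 16  # mu = 128 bits, in bytes

def prf(key: bytes, x: bytes) -> bytes:
    return hashlib.sha256(key + x).digest()[:MU]

def upper_stage(X2: bytes, C: int, count: int):
    """F' in counter mode expands the seed part X'' into public values p_0, p_1, ..."""
    return [prf(X2, (C + i).to_bytes(MU, "big")) for i in range(count)]

def lower_stage(K0: bytes, ps, kappa: int):
    """Each key K_i is used at most twice: once for an output block, once for re-keying."""
    K, out = K0, []
    for i in range(kappa):
        out.append(prf(K, ps[2 * i]))          # output block i
        if i < kappa - 1:
            K = prf(K, ps[2 * i + 1])          # derive the next key
    return b"".join(out)

def G_sequential(U: bytes, X2: bytes, kappa: int) -> bytes:
    """U = (C, K0); 2*kappa - 1 calls to F' and 2*kappa - 1 calls to F, as stated above."""
    C, K0 = int.from_bytes(U[:MU], "big"), U[MU:2 * MU]
    return lower_stage(K0, upper_stage(X2, C, 2 * kappa - 1), kappa)

print(len(G_sequential(bytes(32), b"seed-part-X''...", kappa=4)))   # 64 bytes = 4 blocks
```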

Fig. 3. Instantiation of generator \(\mathbf {G}\) from [25] with random input \(U = (C,K_0)\)

Tweaked Binary Tree PRF from [15]. The second solution was proposed by Faust et al. at CHES 2012 [15]. Thanks to its binary-tree structure, it requires fewer calls to \(\mathbf {F}\) and consequently outperforms the first solution. However, the original construction does not specify where the required randomness should come from. We therefore suggest using the same upper stage as in [25], which is recommended and proven secure in [2, 25]. Moreover, thanks to the advantageous reuse of the same two random values at each layer, the number of calls to \(\mathbf {F'}\) is limited to \(2 \log _2(\kappa )\), which is less than the number of \(\mathbf {F}\) calls. Finally, for the reasons described above, we also need to use a uniformly random counter, updated at each call to \(\mathsf {next}\). The tweaked construction is depicted in Fig. 4.

Fig. 4. Instantiation of generator \(\mathbf {G}\) from [15] with random input \(U = (C,K_0')\)

5.2 New Proposal

To thwart the first attack of Sect. 4.2, we still make use of a PRF with regular re-keying, whose frequency depends on the parameters of the underlying device. To thwart the second attack, and for the needs of the proof, we continue to use unpredictable values as inputs of the PRF. Combining these two solutions, we get close to the two stages exhibited by the existing constructions. However, while we keep the same upper stage, we modify the lower one to try to achieve better performance in function \(\mathsf {next}\). The latter still makes several calls to the PRF \(\mathbf {F}: \{0,1\}^{\mu } \times \{0,1\}^{\mu } \rightarrow \{0,1\}^{\mu }\), with public but uniformly distributed inputs and \(\kappa \) distinct secret keys (as in the second existing construction). However, the secret keys are all directly extracted from the input value \(U = [X' \cdot S]^{m}_1\), together with the counter C. In this way, there is no need to derive the next keys in function \(\mathsf {next}\), but the internal state is much larger: the extracted value U is of length \((\kappa +1) \mu \) instead of \(2 \mu \) in the previous constructions. The precise security requirements are formalized in Definition 6 and the performance comparison with existing solutions is given in Table 1, Sect. 5.3.
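A compact sketch (ours) of this lower stage is given below: all \(\kappa \) keys and the initial counter are taken directly from U, each key is used at most \(\nu \) times on public inputs produced by \(\mathbf {F'}\) in counter mode, and, as before, SHA-256-based stand-ins replace \(\mathsf {AES}\); the zeroing of the \(\log (\nu \kappa )\) least significant bits of the counter discussed after Theorem 2 is omitted for brevity.

```python
# Sketch of the new proposal with U = (C, K_0, ..., K_{kappa-1}) (stand-in PRFs).
import hashlib

MU = 16  # mu = 128 bits, in bytes

def prf(key: bytes, x: bytes) -> bytes:
    return hashlib.sha256(key + x).digest()[:MU]

def G_new(U: bytes, X2: bytes, kappa: int, nu: int) -> bytes:
    """|U| = (kappa + 1) * mu bits; output length N = nu * kappa * mu bits."""
    C = int.from_bytes(U[:MU], "big")
    keys = [U[(i + 1) * MU:(i + 2) * MU] for i in range(kappa)]
    out, j = [], 0
    for K in keys:                                        # kappa distinct keys from U
        for _ in range(nu):                               # each key used at most nu times
            p = prf(X2, (C + j).to_bytes(MU, "big"))      # public input from F' in counter mode
            out.append(prf(K, p))
            j += 1
    return b"".join(out)

print(len(G_new(bytes(5 * MU), b"seed-part-X''...", kappa=4, nu=2)))   # 128 bytes = nu*kappa blocks
```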

Definition 6

(Leakage-Resilient PRF). A PRF \(\mathbf {F}: \{0,1\}^{\mu } \times \{0,1\}^{\mu } \rightarrow \{0,1\}^{\mu }\) is \((\alpha ,\lambda )\)-leakage-resilient and \((t,q,\varepsilon )\)-secure if it is a \((t,q,\varepsilon )\)-PRF and if, for any adversary \(\mathcal {A}\), running within time t, that first outputs a leakage function f with \(\lambda \)-bit outputs, there exists a source \(\mathcal {S}\) that outputs \((L_i,P_i,T_i)_i \in (\{0,1\}^{\lambda }\times \{0,1\}^\mu \times \{0,1\}^\mu )^q\), with a uniform distribution for the P’s, so that the entropy of \((T_i)_i\), conditioned on \((L_i,P_i)_i\), is greater than \(\alpha \), and the advantage with which \(\mathcal {A}\) can distinguish the tuple \((f(K_i,P_i),P_i,\mathbf {F}_K(P_i))_i\) from \((L_i,P_i,T_i)_i\) is bounded by \(\varepsilon \).

When q is large, such a requirement implies security against DPA, but when q is small, only SPA is possible, which is limited in practice. Such an assumption is implicitly made in [25] with \(\alpha =\mu -\lambda \), since the loss of entropy in the output corresponds to the leakage. This new two-stage instantiation is illustrated in Fig. 5. The input U is split into \(\kappa +1\) slices, which initialize the \(\kappa \) keys \(\{K_i\}_{0 \leqslant i \leqslant \kappa -1}\) and the counter C, each of size \(\mu \). Theorem 2 shows that this proposal achieves the security requirements of Definition 5.

Fig. 5. New instantiation of generator \(\mathbf {G}\) with random input \(U = (C,K_0,\ldots ,K_{\kappa -1})\)

Theorem 2

Let \(\mu \) and \(\kappa \) be parameters such that \(\nu \kappa \mu =N\). Let \(\mathbf {F}:\{0,1\}^\mu \times \{0,1\}^\mu \rightarrow \{0,1\}^\mu \) be a \((\alpha /\kappa ,\lambda )\)-leakage-resilient and \((t,\nu ,\varepsilon _{\mathbf {F}})\)-secure PRF and \(\mathbf {F'}:\{0,1\}^\mu \times \{0,1\}^\mu \rightarrow \{0,1\}^\mu \) be a \((t,q\nu \kappa ,\varepsilon _{\mathbf {F'}})\)-secure PRF, where q is a bound on the global number of executions of \(\mathbf {G}\). The instantiation proposed for \(\mathbf {G}\) in Sect. 5.2 with \(\mathbf {F}\) and \(\mathbf {F'}\) provides an \((\alpha ,\lambda )\)-leakage-resilient and \((t,\varepsilon _{\mathbf {G}})\)-secure PRG where \(\varepsilon _{\mathbf {G}} \le \kappa \cdot \varepsilon _{\mathbf {F}} + \varepsilon _{\mathbf {F'}} + q^2 \nu \kappa /2^\mu \).

In the proposal, each call to \(\mathbf {G}\) makes \(\nu \kappa \) calls to the PRF \(\mathbf {F}\): \(\kappa \) keys are used at most \(\nu \) times. The inputs of \(\mathbf {F}\) are generated by \(\mathbf {F'}\) with the key \(X''\) (randomly set in \(\mathsf {seed}\)) on a counter C randomly initialized, and then incremented for each \(\mathbf {F'}\) call in an execution of \(\mathbf {G}\). The details of the proof which uses a similar argument as [25] in the \(\mathsf {minicrypt}\) world [16], can be found in the full version [3]. However, for the global security, we need all the intermediate values \((p_j^i)\) to be distinct and unpredictable to avoid the aforementioned attack. We thus require \(\mathbf {F'}\) to be secure after \({q_n}\nu \kappa \) queries and the inputs to be all distinct: by setting the \(\log (\nu \kappa )\) least significant bits of C to zero, we just have to avoid collisions on the \(\mu -\log (\nu \kappa )\) most significant bits for the \(q_n\) queries. The probability of collision is thus less than \({q_n}^2 \nu \kappa /2^\mu \) and can appear once and for all in the global security:

Corollary 1

Let us consider parameters n, m, and \(\ell \) in the construction of the PRNG with input \({\mathcal {G}}\) from Sect. 4.1, using the generator \(\mathbf {G}\) as described in this section. Let \(\mu \) and \(\kappa \) be parameters such that \(\nu \kappa \mu =n+\ell \), and \(\alpha >\gamma ^*\). Let \(\mathbf {F}:\{0,1\}^\mu \times \{0,1\}^\mu \rightarrow \{0,1\}^\mu \) be a \((\alpha /\kappa ,\lambda )\)-leakage-resilient and \((t,\nu ,\varepsilon _{\mathbf {F}})\)-secure PRF, and \(\mathbf {F'}:\{0,1\}^\mu \times \{0,1\}^\mu \rightarrow \{0,1\}^\mu \) be a \((t,{q_n}\nu \kappa ,\varepsilon _{\mathbf {F'}})\)-secure PRF. Then, \({\mathcal {G}}\) is \((t,{q_r},{q_n},{q_s},\gamma ^*,\lambda , \varepsilon )\)-leakage-resilient robust after at most \(q={q_r}+{q_n}+{q_s}\) queries, where \({q_r}\) is the number of \(\mathcal {D}\text {-}\mathsf {refresh}/\mathsf {leak}\text {-}\mathsf {refresh}\)-queries, \({q_n}\) the number of \(\mathsf {next}\text {-}\mathsf {ror}/\mathsf {leak}\text {-}\mathsf {next}\)-queries, and \({q_s}\) the number of \(\mathsf {get}\text {-}\mathsf {state}/\mathsf {set}\text {-}\mathsf {state}\)-queries, where \(\varepsilon \le q {q_n}\cdot \left( ({q_r}^2+1) \cdot \sqrt{2^{m+1-\delta }}+3 (\kappa \cdot \varepsilon _{\mathbf {F}} + \varepsilon _{\mathbf {F'}}) \right) + {q_n}^2\nu \kappa /2^\mu \), for \(\delta =\min \{n-\log {q_r},\gamma ^*-\lambda \}\).

It seems reasonable to have \((\alpha ,\lambda )\)-leakage resilience with \(\alpha = n+\ell - \nu \kappa \lambda \): with a large \(\gamma ^*\), \(\varepsilon \) can be made small.

5.3 Practical Analysis: Implementation and Benchmarks

We present some benchmarks of the construction of [10] and of the three instantiations. Since our leakage-resilient construction is based on [10], we use the latter as a reference when measuring efficiency. Thus, we simply implemented them on an Intel Core i7 processor to show that the new property does not significantly impact performance. This is mainly due to the use of SPA-resistant AES implementations instead of DPA-resistant (e.g., masked) ones. We used the same public cryptographic libraries as in [10] and, to achieve a similar security level to the construction of [10], our experiments show that the tweaked binary tree construction is less than 4 times slower.

General Benchmarks. We recall that our construction is based on the construction of [10]: \(\mathsf {refresh}(S,I) = S \cdot X + I \in \mathbb {F}_{2^n}\) and \(\mathsf {next}(S) = \mathbf {G}(U)\), with \(U = [X' \cdot S]^{m}_1\). In [10], the PRG \(\mathbf {G}\) is defined by \(\mathbf {G}(U) = \mathsf {AES}_U(0) \Vert \ldots \Vert \mathsf {AES}_U(\nu -1) \), where \(\nu \) is the number of calls to \(\mathsf {AES}\) with a 128-bit key U, and thus \(m=128\). For a security parameter \(k = 40\), the security analysis leads to \(n = 489\), \(\gamma ^*= 449\), and \(\nu = 5\). To achieve leakage resilience, we need additional security requirements for the PRG \(\mathbf {G}\). The three instantiations split \(\mathbf {G}\) between two PRFs \(\mathbf {F}\) and \(\mathbf {F'}\), where \(\mathbf {F}\) is used with public uniformly distributed inputs and \(\kappa \) different secret keys. In the existing constructions, a first key is extracted from the truncated product U and the other ones are derived through a re-keying process. In the new instantiation, all the secret keys are extracted from U. The public inputs of \(\mathbf {F}\) are generated by the PRF \(\mathbf {F'}\) in counter mode, with a secret initial value for the counter also extracted from U: \(m = 2 \cdot 128\) for the existing constructions or \(m= 128 (\kappa +1)\) for the new instantiation if both \(\mathbf {F}= \mathbf {F'}= \mathsf {AES}\) with 128-bit keys. To provide the security bounds of the three constructions, we need to fix the security bounds of the functions \(\mathbf {F}\) and \(\mathbf {F'}\). As far as we know, the best key recovery attacks on \(\mathsf {AES}\) without leakage [9] require a complexity of \(2^{162.1}\) with \(2^{88}\) data. However, our functions being executed at most twice (resp. 6 times) with the same secret keys for \(2^{-40}\) security (resp. for \(2^{-64}\) security), such a complexity is unreachable. As for the leakage, we give the adversary \(\lambda \) bits of useful information per leaking query. Nevertheless, it remains unclear, as of now, how these \(\lambda \) bits of information in a single trace may reduce the security bound of the \(\mathsf {AES}\). In [23] for instance, the authors show that a single trace on the \(\mathsf {AES}\) might give the adversary all the required knowledge to recover the secret key, namely, when a sufficient number of noisy Hamming Weight values are available. But summing the useful information of these noisy Hamming Weight values would give a very large \(\lambda \) for which we cannot guarantee anything. However, we can expect either a larger amount of noise, a desynchronization of the traces, or low leakage from the underlying component, which would result in a reasonable value for \(\lambda \). In this case, we can fix \(\varepsilon _{\mathbf {F}} = \varepsilon _{\mathbf {F'}} \approx 2^{-127}\). The resulting security bounds are given in Table 1 with the size n of the internal state, the number of 128 or 256-bit keys and the number of \(\mathsf {AES}\) calls in function \(\mathsf {next}\), for \(2^{-40}\) and \(2^{-64}\) security.

Table 1. Security bounds and complexity of the three instantiations

The best instantiation in terms of complexity is the construction from [15]. This is not surprising considering the advantageous binary-tree shape of this function. However, if we relax the security assumptions on the \(\mathsf {AES}\) with \(\varepsilon _F=\varepsilon _F'=2^{-126}\), the conditions of the security proof are not met and therefore we cannot guarantee its security based on Corollary 1. In these specific cases, our construction seems to be the best one to use since it guarantees that the conditions of the security proof are met. Note that for \(2^{-64}\) security, as explained below, we cannot get provable security with 128-bit input blocks, and we need \(\varepsilon _{\mathbf {F}}\) and \(\varepsilon _{\mathbf {F'}}\) to be smaller than \(2^{-200}\), and then use \(\mathsf {AES}\) with 256-bit keys. Since the implementation built from [15] appears to be the best one in the general case, we implement it to compare it with the instantiation of [10]. As in [10], we use \(\mathsf {fb\_mul\_lodah}\) and \(\mathsf {fb\_add}\) from the RELIC open source library [5], extended with the necessary fields (\(\mathbb {F}_{2^{489}}\), defined with \(X^{489} + X^{83} + 1\) and \(\mathbb {F}_{2^{896}}\), defined with \(X^{896} + X^{7} + X^{5} + X^{3} + 1\)). We use the public functions \(\mathsf {aes\_setkey\_enc}\) and \(\mathsf {aes\_crypt\_ctr}\) from the PolarSSL open source library [1]. As in [10], we measure the number of CPU cycles for a recovering process and a key generation process. The CPU cycle count is done using the ASM instruction \(\mathsf {RDTSC}\), and our C code is optimized with the \(\mathsf {O2}\) flag. We simulate a full recovery of the PRNG for the [15] and [10] implementations, with an input containing one bit of entropy per byte. Then, 8 inputs of size 489 bits are necessary to recover from a compromise for [10], whereas, for [15], 8 inputs of size 896 bits are necessary. Then we simulate the generation of 2048-bit keys, each of which requires 16 calls to \(\mathsf {next}\), as every call outputs 128 bits. Figure 6 gives the numbers of CPU cycles for 100 complete recovering experiments (left) and 100 key generations (right) for [10] and [15]. Both processes require on average about 4 times more CPU cycles for the [15] implementation than for the [10] implementation.
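As a small sanity check (ours) of the recovery figures quoted above, the snippet below recomputes the number of refresh inputs needed to cross the entropy threshold \(\gamma ^*\), assuming one bit of entropy per input byte as in the experiment.

```python
# Sanity check of the recovery figures: inputs needed to reach gamma* with 1 bit/byte.
import math

def inputs_needed(gamma_star, input_bits):
    return math.ceil(gamma_star / (input_bits // 8))   # one bit of entropy per byte

print(inputs_needed(449, 489))   # 8 inputs of 489 bits for the instantiation of [10]
print(inputs_needed(858, 896))   # 8 inputs of 896 bits for the tweaked construction of [15]
```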

Fig. 6. Benchmarks between [15] and [10]

The Tweaked Binary Tree Instantiation. We first recall the constraints (similar to Corollary 1): the quality of the pseudo-random number generator is measured by \(\varepsilon \le q {q_n}\cdot \left( ({q_r}^2+1) \cdot \sqrt{2^{m+1-\delta }}+3 (2\kappa \cdot \varepsilon _{\mathbf {F}} + \varepsilon _{\mathbf {F'}}) \right) + {q_n}^2(2\log _2(\kappa )+\nu \kappa )/2^\mu ,\) for \(\delta =\min \{n-\log {q_r},\gamma ^*-\lambda \}\). With \({q_r}= {q_n}= {q_s}= 2^k\), we get:

$$\begin{aligned} \varepsilon\le & {} 3\cdot 2^{2k} \cdot \left( (2^{2k}+1) \cdot \sqrt{2^{m+1-\delta }} +3 (2\kappa \cdot \varepsilon _{\mathbf {F}} + \varepsilon _{\mathbf {F'}})\right) + (2\log _2(\kappa )+\nu \kappa ) \cdot 2^{2k}/2^\mu \\\le & {} \varepsilon _1 + \varepsilon _2 + \varepsilon _3 + \varepsilon _4 \end{aligned}$$

with \(\varepsilon _1 = 2^{4k + 2 + (m+1-\delta )/2}\), \(\varepsilon _2 = 18 \kappa \cdot 2^{2k} \cdot \varepsilon _{\mathbf {F}}\), \(\varepsilon _3 = 9 \cdot 2^{2k} \cdot \varepsilon _{\mathbf {F'}}\) and \(\varepsilon _4 = 2^{2k-\mu } \cdot (2\log _2(\kappa )+\nu \kappa )\).

\(\mathbf{2}^{{\varvec{-v}}}\) Security. With \(m = 256\), \(\mu =128\), \(\varepsilon _{\mathbf {F}} = \varepsilon _{\mathbf {F'}} \approx 2^{-127}\): \(\varepsilon _1 < 2^{-v}\), as soon as \(8k + 2v+ 5 +m < \delta \), which is verified for \(n>9k+2v+5+m\) and \(\gamma ^*>n+\lambda -k\); \(\varepsilon _2 < 2^{-v}\), as soon as \(2k+v < 127 - \log _2(18\kappa )\); \(\varepsilon _3 < 2^{-v}\), as soon as \(2k+v < 127 - \log _2(9) < 123\); \(\varepsilon _4 < 2^{-v}\), as soon as \(2k+v < 128 - \log _2(2\log _2(\kappa )+\nu \kappa )\).

\(\mathbf{2}^{\mathbf{-40}}\) Security. For \(k=v=40\), the constraint on \(\varepsilon _3\) is satisfied. The constraints on \(\varepsilon _1\) are satisfied as soon as \(n > 701\) and \(\gamma ^*> n + \lambda - 40\). With \(\nu =2\), we need \(n = 256\kappa - 128 > 701\) and thus \(\kappa = 4\), which ensures that the constraints on \(\varepsilon _2\) and \(\varepsilon _4\) are satisfied. Finally, \(n = 896\) and \(\gamma ^*= 858\) for \(\lambda \approx 2\).
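The arithmetic behind this parameter choice can be checked mechanically; the snippet below (ours) evaluates the four constraints with k = v = 40, m = 256, \(\mu = 128\), \(\nu = 2\), \(\kappa = 4\), \(\lambda = 2\), n = 896 and \(\gamma ^* = 858\).

```python
# Mechanical check of the 2^-40 parameter choice given above.
import math

k, v, m, mu = 40, 40, 256, 128
nu, kappa, lam = 2, 4, 2
n, gamma_star = 896, 858

delta = min(n - k, gamma_star - lam)   # delta = min{n - log2(q_r), gamma* - lambda} with q_r = 2^k
checks = {
    "eps1": 8 * k + 2 * v + 5 + m < delta,
    "eps2": 2 * k + v < 127 - math.log2(18 * kappa),
    "eps3": 2 * k + v < 123,
    "eps4": 2 * k + v < 128 - math.log2(2 * math.log2(kappa) + nu * kappa),
}
print(delta, checks)                   # 856, and all four constraints hold
```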

Fig. 7. Example of instantiation of generator \(\mathbf {G}\) for higher security bounds

\(\mathbf{2}^{\mathbf{-64}}\) Security. Unfortunately, for \(k = v= 64\), one cannot get provable security with input blocks of size \(\mu =128\), because of the collisions on the counters. In order to increase the size of the input blocks, one can XOR PRPs to get a PRF on larger inputs [17]. This makes \(\varepsilon _4\) negligible: \(2^{2k-2\mu } = 2^{-128}\), and thus the factor \(\nu \kappa \) will not affect it. On the other hand, to make \(\varepsilon _2\) and \(\varepsilon _3\) small enough, we need \(\varepsilon _{\mathbf {F}}\) and \(\varepsilon _{\mathbf {F'}}\) to be smaller than \(2^{-200}\), and then use \(\mathsf {AES}\) with 256-bit keys. But then we have to use the same key 6 times in order to extract 384 bits (see a 3-block extraction in Fig. 7), where \(\kappa \) keys are used \(\nu =6\) times, and two counters \(C_0\) and \(C_1\) are extracted: \(m = 3 \cdot 128 = 384\), \(n = 3 \times 128 \times \kappa - 128 = 384 \kappa - 128\). As for the constraint on \(\varepsilon _1\), we need \(384\kappa > 1221\). We can take \(\kappa =4\). Then, \(n = 1408\) and \(\gamma ^*= 1346\).

6 Conclusion

We have put forward a new property for PRNGs with input, which captures security in a setting where partial sensitive information may leak. Then, we have tweaked the PRNG with input proposed by Dodis et al. to meet our new property of leakage-resilient robustness. Finally, we have proposed three secure instantiations of the new PRNG with input, including a new one, which provide the same level of security as the construction of [10] at a limited additional cost in efficiency and size.

As further work, the security bounds could be made tighter if the construction were proven robust with leakage without going through the preserving-with-leakage and recovering-with-leakage steps. Another interesting future work would be to implement the construction on constrained devices.