1 Introduction

Side-channel attacks have been considered as a major threat against cryptographic implementations since they appeared in the late 1990s. It was indeed shown that even a tiny dependence between the data processed by a device and its side-channel leakage (e.g. running time, power consumption, electromagnetic emanation) could allow devastating key-recovery attacks against the implementation of any cryptosystem secure in the black-box model (i.e. the model in which the adversary only sees the input-output behaviour of the cryptosystem) [15, 18, 19]. The so-called physical security of cryptographic implementations has since become a very active research area and many efficient countermeasures have been proposed to mitigate these side-channel attacks. However, most of these countermeasures are only empirically validated, or they are proven secure in a weak adversarial model where, for instance, an attacker only exploits a small part of the available leakage.

An important step towards a more formal treatment of side-channel security was made by Micali and Reyzin in 2004 in their physically observable cryptography framework [20]. In particular, they formalized the assumptions that a cryptographic device can at least keep some secrets and that only computation leaks information. This framework was then specialized into the leakage resilient cryptography model introduced by Dziembowski and Pietrzak in [14], which gave rise to a large body of subsequent work. In this model, a leaking computation is divided into elementary operations that are assumed to leak some information about their inputs through a leakage function whose range is bounded (i.e. taking values in \(\{0,1\}^\lambda \) for some parameter \(\lambda \)). Many new leakage-resilient cryptographic primitives were proposed, as well as so-called compilers that can make any computation secure in this model [16].

While the leakage-resilient literature has achieved considerable theoretical advances, the considered model does not fully capture the physical reality of power or electromagnetic leakages (see for instance [24]). In particular, for a leakage function \(f : \{0,1\}^m \rightarrow \{0,1\}^\lambda \), the parameter \(\lambda \) must be (significantly) smaller than m. This means, for instance, that the leakage of an AES computation should be smaller than 128 bits, whereas in practice a power trace resulting from an AES computation can take several kilobytes (or even megabytes). On the other hand, it is fair to assume that the side-channel leakage is noisy in such a way that the information f(x) leaked by an elementary operation on a variable x is not enough to fully recover x. This intuition was formalized in the noisy leakage model introduced by Prouff and Rivain in 2013 [21]. In a nutshell, this model considers that an elementary operation with some input x leaks a noisy leakage function f(x) (with a random tape parameter which is omitted from the presentation). The noisy nature of the leakage is then captured by assuming that an observation of f(x) only implies a bounded bias in the probability distribution of x. Namely, the statistical distance between the distribution of x and the distribution of x given the observation f(x) is bounded by some parameter \(\delta \), which shall be called the leakage rate in the following. The function f is then said to be a \(\delta \)-noisy leakage function. Notably, this model does not imply any restriction on the leakage size (i.e. on the range of f) but only on the amount of useful information it contains.

1.1 Related Works

Probing-Secure Circuits. In a seminal paper of 2003, Ishai, Sahai and Wagner considered the problem of building Boolean circuits secure against probing attacks [17]. In the so-called probing model, an adversary is allowed to adaptively probe up to t wires of the circuit. They show how to transform any circuit C with q logic gates into a circuit \(C'\) with \(O(q t^2)\) logic gates that is secure against a t-probing adversary. Their scheme consists in encoding each Boolean variable x as a random sharing \((x_1, x_2, \ldots , x_n)\) satisfying \(x_1 + x_2 + \ldots + x_n = x\) over \(\mathbb {F}_2\), where \(n=2t+1\). They show how to transform each logic gate into a gadget that works on encoded variables. Their construction is actually secure against an adversary that can adaptively place up to t probes per such gadget. The so-called ISW construction has since then served as a building block in many practical side-channel countermeasures known as higher-order masking schemes (see for instance [10, 11, 22]). Its efficiency has also been improved in recent works [5, 6] which respectively show how to optimize the randomness consumption and the number of multiplications (while preserving a similar asymptotic complexity).

Towards Noisy-Leakage Security. In [21], Prouff and Rivain proposed the first formal security proof for an ISW-like masking scheme in the noisy leakage model. In particular, they generalize the previous work of Chari et al. [7] and show that in the presence of noisy leakage on the shares \(x_1\), \(x_2\), ..., \(x_n\) the information on x becomes negligible as n grows. Specifically, they show that for any \(\delta \)-noisy leakage function f, the mutual information between x and the leakage \((f(x_1), f(x_2), \ldots , f(x_n))\) is of order \(O(\delta ^n)\). They also provide a security proof for a full masked computation in the noisy leakage model; however, their result has two important limitations. First, they assume the existence of a leak-free component that can refresh a sharing without leaking any information. Second, their proof can only tolerate a \(\delta \)-noisy leakage with \(\delta = O(1/n)\). Namely, the leakage rate must decrease linearly with the number of shares. Note that this second limitation is inherent to masking schemes based on the ISW construction since each share is involved in O(n) operations and hence leaks O(n) times. Some practical attacks have been exhibited that exploit this issue [4].

Avoiding Leak-Free Components. In [13], Duc, Dziembowski and Faust tackled the first of these two limitations. Namely, they show how to avoid the requirement for a leak-free component with a nice and conceptually simpler security proof. Applying the Chernoff bound, they show that the ISW scheme is secure in the \(\delta \)-random probing model in which each operation leaks its full input with a given probability \(\delta = O(1/n)\) (and leaks nothing with probability \(1-\delta \)). Their main contribution is then to show that any \(\delta '\)-noisy leakage f(x) can be simulated from a \(\delta \)-random probing leakage \(\phi (x)\) with \(\delta \le \delta ' \cdot |\mathcal {X}|\), where \(\mathcal {X}\) denotes the definition space of x. In other words, if the \(\delta \)-random probing leakage of a computation contains no significant information, then neither does any \(\delta '\)-noisy leakage of this computation as long as \(\delta ' \le \delta / |\mathcal {X}|\). The ISW scheme is therefore secure against \(\delta '\)-noisy leakage for \(\delta ' = O(1/n |\mathcal {X}|)\). Note that for an arithmetic program working over some field \(\mathbb {F}\), each elementary operation takes up to two inputs over \(\mathbb {F}\), meaning \(\mathcal {X} = \mathbb {F}^2\) and \(\delta ' = O(1/n |\mathbb {F}|^2)\). This way, the work of Duc et al. avoids the strong requirement of leak-free components. However, it still requires a leakage rate of O(1/n).

Towards a Constant Leakage Rate. This second limitation was addressed by Andrychowicz, Dziembowski, and Faust [3]. They propose a scheme –that we call the ADF scheme hereafter– which is based on Shamir’s secret sharing [23] and a refreshing algorithm from expander graphs due to Ajtai [1]. The number of instructions in the protected program is multiplied by a factor \(O(n^3)\) which can be reduced to \(O(n^2 \log n)\) using an FFT-based multiplication. They show that this construction achieves security in the stronger probing model where an adversary can adaptively probe up to an \(O(1/\log n)\) fraction of the wires. In the random probing model, the result is improved to a constant leakage rate. Applying the reduction from [13], they obtain security in the noisy model for a leakage rate \(\delta = O(1/|\mathbb {F}|^2)\). For the standard version of their scheme based on Shamir’s secret sharing, the base field \(\mathbb {F}\) must be of size O(n), which implies a leakage rate \(\delta = O(1/n^2)\) in the noisy leakage model. Fortunately, their scheme can be improved by using secret sharing based on algebraic geometric codes [8] (at the cost of weaker parameters). As argued in [3], these codes operate over fields of constant size and hence their basic operations can be implemented by constant-size Boolean circuits, which gives a \(\delta = O(1)\) noisy leakage rate with the reduction from [13]. We show in this paper that a simple reduction actually exists to achieve \(\delta = \tilde{O}(1)\) noisy leakage security from a random-probing secure scheme over a field of size \(|\mathbb {F}| = O(n)\). This reduction could also be used to get tight noisy-leakage security for the ADF scheme without algebraic geometric codes (i.e. with plain Shamir’s secret sharing).

Towards a Quasilinear Complexity. The leakage-secure schemes in the current state of the art imply a (quasi)quadratic blow-up of the complexity: the protected program (or circuit) scales from |P| arithmetic instructions to \(\tilde{O}(|P|\,n^2)\). Another challenging issue is hence to bridge the gap between this \(\tilde{O}(n^2)\) blow-up and the theoretically achievable \(\tilde{O}(n)\) blow-up. In [3], the authors claim that the complexity of their scheme can be improved by using packed secret sharing [2, 12]. As explained in [2], the use of packed secret sharing makes it possible to securely compute (in the presence of leakage) an addition or a multiplication on several encoded values in parallel at the same asymptotic cost as a single operation with standard secret sharing. Using the transform of [12], one can improve the complexity of the ADF scheme on an arithmetic program P from \(O(|P| \, n^2 \log n)\) to \(O(|P| \log |P| \, n^2 \log n / w)\) where w denotes the width of P. Roughly speaking, the width of P is the number of operations that can be computed in parallel throughout an execution of P, which satisfies \(w = O(|P|/d)\) where d is the depth of P (considered as a circuit). For a circuit of width \(w=\varTheta (n)\), this approach hence results in a complexity blow-up quasilinear in n. For a constant-size circuit (such as the AES cipher) on the other hand, only a constant factor can be saved and the complexity blow-up remains (quasi)quadratic.

1.2 Our Contribution

In this paper we show how to securely compute any arithmetic program (or circuit) in the noisy leakage model with a leakage rate \(\tilde{O}(1)\) and with complexity blow-up \(\tilde{O}(n)\). Our scheme is conceptually very simple and also practically efficient provided that the computation relies on a base field \(\mathbb {F}\) with appropriate structure.

We consider an arithmetic program P that executes basic arithmetic instructions over some prime field \(\mathbb {F}\) (additions, subtractions, and multiplications) satisfying \(|\mathbb {F}| = \alpha \cdot n+1 \), with n a power of 2 (in particular \(|\mathbb {F}| = O(n)\) as in [3]). Note that we prefer the terminology of (arithmetic) program composed of instructions to the terminology of (arithmetic) circuit composed of gates, but the two notions are strictly equivalent.

Each internal variable \(a \in \mathbb {F}\) of the computation is encoded into a random tuple \((a_0, a_1, \ldots , a_{n-1})\) that satisfies the relation \(a = \sum _{i=0}^{n-1} a_i \omega ^i\) for some random element \(\omega \in \mathbb {F}^*\). In other words, a is encoded as the coefficients of a random polynomial Q of degree \(n-1\) satisfying \(Q(\omega ) = a\). It is worth noting that the security of our scheme does not rely on the secrecy of \(\omega \) but on its random distribution. We then show how to transform each arithmetic instruction of P into a corresponding secure gadget that works on encoded variables. Using a fast Number Theoretic Transform (NTT), we then achieve a multiplication gadget with \(O(n \log n)\) instructions.

We first show that our scheme is secure in the \(\delta \)-random-probing model for a parameter \(\delta = O(1/\log n)\). Specifically, we show that for any program P with a constant number of instructions |P|, the advantage of a \(\delta \)-random-probing adversary can be upper bounded by \(\mathsf {negl}(\lambda ) + \mathsf {negl}'(n)\) where \(\mathsf {negl}\) and \(\mathsf {negl}'\) are some negligible functions and where \(\lambda \) denotes some security parameter that impacts the size of \(\mathbb {F}\) (specifically we have \(\lambda = \log \alpha \) where \(|\mathbb {F}| = \alpha \cdot n+1\)). We first show this at the level of a single NTT-based secure multiplication. Then we show how to achieve compositional security by interleaving the gadgets with a refreshing procedure that enjoys an input-output separability property. Using the Chernoff bound as in [13], we can then statistically bound the number of leaking intermediate variables in each gadget. Specifically, we show that the leakage in each gadget can be expressed as linear or quadratic combinations of the input shares that do not reveal any information with overwhelming probability (over the random choice of \(\omega \)).

From our result in the random probing model, the security reduction of Duc et al. [13] directly implies that our construction is secure in the \(\delta '\)-noisy leakage model for \(\delta ' = O(1/|\mathbb {F}|^2 \log n)\). However, since we require \(|\mathbb {F}| = O(n)\) (as in the standard ADF scheme) this reduction is not satisfactory. We then refine the granularity of our computation by considering the noisy leakage model on logical instructions working on constant-size machine words. In this model, we provide a generic reduction from the random-probing model over \(\mathbb {F}\) to the noisy leakage model on logical instructions. Namely we show that any arithmetic program \(\varPi \) secure under a \(\delta \)-random-probing leakage gives rise to a functionally equivalent program \(\varPi '\) that is secure under a \(\delta '\)-noisy leakage at the logical instruction level where \(\delta ' = \delta / O(\log |\mathbb {F}| \log \log |\mathbb {F}|)\). Applying this reduction, our construction achieves security in the \(\delta '\)-noisy leakage model with \(\delta ' = O \big (1/((\log n)^2 \log \log n)\big )\) for a complexity blow-up of \(O(n \log n)\).

Table 1 hereafter gives an asymptotic comparison of our scheme and the previous schemes in the literature (with noisy leakage security): the ISW scheme [17] with the reduction from [13], the ADF scheme with algebraic geometric codes [3], and the ADF scheme improved with packed secret sharing [2, 3] (ADF-PSS). We emphasize that for the latter case no detailed description or analysis has been provided.

Table 1. Asymptotic comparison of secure schemes in the noisy leakage model.

The paper is organized as follows. Section 2 provides background notions on the noisy leakage model and the considered adversary. In Sect. 3 we describe our secure quasilinear multiplication scheme and we prove its security in the random probing model. Section 4 then presents the refreshing procedure used to get compositional security and provides a security proof for a full arithmetic program. In Sect. 5 we give our generic reduction from the random-probing model over \(\mathbb {F}\) to the noisy leakage model on logical instructions and we apply this reduction to our scheme to get our final result. We finally discuss practical aspects of our scheme and related open problems in Sect. 6.

2 Leakage and Adversary

In the rest of the paper, we shall denote by \(x\leftarrow \mathcal {X}\) the action of picking x uniformly at random over some set \( \mathcal {X}\). Similarly, for a probabilistic algorithm \(\mathcal {A}\), we denote by \(y\leftarrow \mathcal {A}(x)\) the action of running \(\mathcal {A}\) on input x with a fresh random tape and setting y to the obtained result.

2.1 Noisy Leakage Model

The noisy leakage model introduced by Prouff and Rivain in [21] follows the only computation leaks paradigm [20]. In this paradigm, the computation is divided into subcomputations, each of which works on a part x of the current computation state and leaks some information f(x), where f is called the leakage function. In practice, f is a so-called randomized function that takes two arguments, the input variable x and a random tape \(\rho \) that is large enough to model the leakage noise. A subcomputation with input variable x hence leaks \(f(x,\rho )\) for a fresh random tape \(\rho \). For the sake of simplicity, in the sequel we shall omit the parameter \(\rho \) and see f(x) as a random realization of \(f(x,\rho )\). Moreover, the definition space of the input x shall be called the domain of f, and we shall write \(f : \mathcal {X} \rightarrow \mathcal {Y}\) for a randomized function with domain \(\mathcal {X}\) and image space \(\mathcal {Y}\).

In the noisy leakage model [21], a noisy leakage function f is defined as a randomized function such that an observation f(x) only implies a bounded bias in the probability distribution of x. Namely, the statistical distance between the distributions of x and \((x\mid f(x))\) is assumed to be bounded by some bias \(\delta \). Let X and \(X'\) be two random variables defined over some set \(\mathcal {X}\). We recall that the statistical distance between X and \(X'\) is defined as:

$$\begin{aligned} \varDelta (X;X') = \frac{1}{2} \sum _{x \in \mathcal {X}} |\Pr (X=x)-\Pr (X'=x)|. \end{aligned}$$
(1)

The notion of noisy leakage function is then formalized as follows:

Definition 1

([21]). A \(\delta \)-noisy leakage function is a randomized function \(f : \mathcal {X} \rightarrow \mathcal {Y}\) satisfying

$$\begin{aligned} \sum _{y \in \mathcal {Y}} \Pr (f(X)=y) \cdot \varDelta (X;(X\mid f(X)=y)) \le \delta , \end{aligned}$$
(2)

where X is a uniform random variable over \(\mathcal {X}\).

In practice, the leaking input x might not be uniformly distributed but one must specify a distribution to have a consistent definition, and as argued in [21], the uniform distribution is a natural choice. Also note that in the original paper [21], the \(L_2\) norm was used for the definition of the statistical distance while, as argued in [13], the \(L_1\) norm is a more standard choice (that we also adopt in this paper).
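
To make Definition 1 concrete, the following sketch computes the bias \(\delta \) of a toy leakage function (the Hamming weight of a 4-bit value plus a one-bit additive noise) by direct enumeration of Eq. (2); the leakage function and all parameters below are illustrative choices of ours, not taken from [21].

```python
from collections import defaultdict
from fractions import Fraction

# Toy leakage function (illustrative, not from the paper): f(x) = HW(x) + e,
# with e uniform in {0, 1} and X uniform over 4-bit values.
X = range(16)
noise = [0, 1]

# Joint distribution Pr(X = x, f(X) = y), enumerated exactly.
joint = defaultdict(Fraction)
for x in X:
    for e in noise:
        y = bin(x).count("1") + e
        joint[(x, y)] += Fraction(1, len(X) * len(noise))

p_x = Fraction(1, len(X))                        # uniform prior on X
ys = {y for (_, y) in joint}

delta = Fraction(0)
for y in ys:
    p_y = sum(joint[(x, y)] for x in X)          # Pr(f(X) = y)
    # Statistical distance between the prior on X and the posterior of X given f(X) = y.
    sd = sum(abs(joint[(x, y)] / p_y - p_x) for x in X) / 2
    delta += p_y * sd                            # the sum of Eq. (2)

print("f is a delta-noisy leakage function with delta =", delta, "=", float(delta))
```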

A conceptually simpler model, known as the random probing model, was first used in [17] and formalized in the work of Duc, Dziembowski, and Faust [13]. Informally speaking, this model restricts the noisy leakage model to leakage functions that leak their entire input with a given probability. These random-probing leakage functions are formalized in the following definition.

Definition 2

A \(\delta \)-random-probing leakage function is a randomized function \(\phi : \mathcal {X} \rightarrow \mathcal {X}\,\cup \{\perp \}\) satisfying

$$\begin{aligned} \phi (x) = {\left\{ \begin{array}{ll} \perp &{}\text { with probability } 1-\delta \\ x &{}\text { with probability } \delta \end{array}\right. } \end{aligned}$$
(3)

It can be checked that such a function is a special case of \(\delta \)-noisy leakage function. Moreover, it has been shown by Duc, Dziembowski, and Faust [13] that every noisy leakage function f can be expressed as a composition \(f = f' \,{\circ }\, \phi \) where \(\phi \) is a random-probing leakage function. This important result makes it possible to reduce noisy-leakage security to random-probing security. It is recalled hereafter:

Lemma 1

([13]). Let \(f : \mathcal {X} \rightarrow \mathcal {Y}\) be a \(\delta \)-noisy leakage function with \(\delta < \frac{1}{|\mathcal {X}|}\). There exists a \(\delta '\)-random-probing leakage function \(\phi : \mathcal {X} \rightarrow \mathcal {X}\,\cup \{\perp \}\) and a randomized function \(f':\mathcal {X}\,\cup \{\perp \}\rightarrow \mathcal {Y}\) such that for every \(x\in \mathcal {X}\) we have

$$\begin{aligned} f(x) = f'(\phi (x)) ~~~ \text {and} ~~~ \delta ' \le \delta \cdot |\mathcal {X}|. \end{aligned}$$
(4)
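
As a quick sanity check of the remark above that a \(\delta \)-random-probing function is itself a noisy leakage function, the following sketch verifies by exact enumeration that it is \(\delta (1-1/|\mathcal {X}|)\)-noisy in the sense of Definition 1 (the parameters are toy values of our own choice).

```python
from fractions import Fraction

# Toy check that a delta-random-probing function (Definition 2) is a
# delta*(1 - 1/|X|)-noisy leakage function in the sense of Definition 1.
card = 16                       # |X|, with X uniform over X
delta = Fraction(1, 10)         # probing probability
p_x = Fraction(1, card)

bias = Fraction(0)              # the output ⊥ (probability 1 - delta) contributes distance 0
for _ in range(card):           # each output y = x0 has probability delta / |X| ...
    # ... and the posterior is then a point mass at x0:
    sd = (abs(1 - p_x) + (card - 1) * p_x) / 2
    bias += (delta / card) * sd

print(bias, "==", delta * (1 - p_x), "->", bias == delta * (1 - p_x))
```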

In the random-probing model, the total number of leaking operations can be statistically bounded using the Chernoff bound as suggested in [13, 17]. We shall follow this approach in the present paper by using the following corollary.

Corollary 1

(Chernoff bound [9]). The \(\delta \)-random probing leakage of a computation composed of N elementary operations reveals the inputs of \(\ell \) or more of these elementary operations, for any \(\ell > \delta N\), with probability at most

$$\begin{aligned} \psi (\ell ,N) = \exp \Big (-\frac{(\ell -\delta \,N)^2}{\ell +\delta \,N}\Big ). \end{aligned}$$
(5)

In particular, for \(\ell = \alpha n\) and \(N = \beta n\), for some \(\alpha \), \(\beta \) and n with \(\alpha / \beta > \delta \), the above bound becomes

$$\begin{aligned} \psi (\alpha n, \beta n) = \exp \Big (-\frac{(\alpha -\delta \beta )^2}{\alpha +\delta \beta } ~n\Big ). \end{aligned}$$
(6)
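
For intuition, the following sketch evaluates the bound \(\psi (\ell ,N)\) and compares it with the empirical tail probability of a simulated \(\delta \)-random-probing leakage; the parameters are arbitrary toy values.

```python
import math
import random

def psi(l, N, delta):
    """Bound of Corollary 1 on the probability that l or more of N delta-probed operations leak."""
    return math.exp(-(l - delta * N) ** 2 / (l + delta * N))

# Toy parameters (illustrative only): N operations, each leaking with probability delta.
delta, N, l = 0.03, 1000, 45            # l > delta * N = 30
trials = 2000
hits = sum(1 for _ in range(trials)
           if sum(random.random() < delta for _ in range(N)) >= l)
print("empirical Pr[>= l leaks]:", hits / trials, "  Chernoff bound psi:", psi(l, N, delta))
```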

2.2 Leakage Adversary

We consider computation schemes that encode the data of a program in order to make the leakage on the encoded data useless. An encoding \(\mathsf {Enc}\) is a randomized function that maps an element \(x \in \mathbb {F}\) to an n-tuple \(\mathsf {Enc}(x) \in \mathbb {F}^n\), where n is called the encoding length, and for which a deterministic function \(\mathsf {Dec}: \mathbb {F}^n \rightarrow \mathbb {F}\) exists that satisfies \(\Pr (\mathsf {Dec}(\mathsf {Enc}(x)) = x) = 1\) for every \(x\in \mathbb {F}\) (where the latter probability is taken over the encoding randomness).

Consider an arithmetic program P taking a string \(\varvec{x} \in \mathbb {F}^s\) as input and executing a sequence of instructions of the form \(\mu _i \leftarrow \mu _j * \mu _k\), where \(*\) denotes some operation over \(\mathbb {F}\) (addition, subtraction, or multiplication) and where \([\mu _0, \mu _1, \ldots , \mu _T]\) denotes the memory of the program, which is initialized with \(\varvec{x}\) (and some constants). To achieve leakage security, the program P is transformed into a functionally equivalent arithmetic program \(\varPi \) taking as input an encoded string \(\mathsf {Enc}(\varvec{x})\) (where the encoding simply applies to each coordinate of \(\varvec{x}\)). According to the defined leakage model, each executed instruction of \(\varPi \) is then assumed to leak some noisy function \(f(\mu _j, \mu _k)\) of its pair of inputs. It is further assumed that \(\varPi \) includes random sampling instructions \(\mu _i \leftarrow \mathbb {F}\), each of which leaks a noisy function \(f(\mu _i)\) of the generated random element. We denote the overall leakage by \(\mathcal {L}(\varPi , \varvec{x})\). The compiler is then said to be leakage secure if an observation of \(\mathcal {L}(\varPi , \varvec{x})\) does not reveal significant information about \(\varvec{x}\). More specifically, the leakage \(\mathcal {L}(\varPi , \varvec{x})\) must be indistinguishable from the leakage \(\mathcal {L}(\varPi , \varvec{x}')\) for every \(\varvec{x}'\in \mathbb {F}^s\). This security notion is formalized as follows:

Definition 3

(Leakage Security). The program \(\varPi \) is \(\varepsilon \)-leakage secure (w.r.t. leakage functions \(\mathcal {L}\)) if every adversary \(\mathcal {A}\) has advantage at most \(\varepsilon \) of distinguishing \(\mathcal {L}(\varPi , \varvec{x}_0)\) from \(\mathcal {L}(\varPi , \varvec{x}_1)\) for chosen \(\varvec{x}_0\) and \(\varvec{x}_1\), i.e. we have:

$$\begin{aligned} \mathrm {Adv}_{\mathcal {A}}^{\,\varPi ,\,\mathcal {L}} := \Big | \mathrm {Succ}_{\mathcal {A}}^{\,\varPi ,\,\mathcal {L}} - \frac{1}{2} \Big | \le \varepsilon \end{aligned}$$
(7)

where

$$\begin{aligned} \mathrm {Succ}_{\mathcal {A}}^{\,\varPi ,\,\mathcal {L}} = \Pr \left( \begin{array}{cl} (\varvec{x}_0,\varvec{x}_1,\mu ) \leftarrow \mathcal {A}(\bot ) \\ b \leftarrow \{0,1\} &{} : \mathcal {A}(\varvec{x}_0, \varvec{x}_1, \mu , \varvec{\ell }) = b \\ \varvec{\ell } \leftarrow \mathcal {L}(\varPi , \varvec{x}_b) \end{array} \right) \!. \end{aligned}$$
(8)

In the above definition, \(\mu \in \{0,1\}^*\) denotes any auxiliary information computed by the adversary during the first round when she chooses the inputs \(\varvec{x}_0\) and \(\varvec{x}_1\). Note that for the definition to be sound, we only consider adversaries \(\mathcal {A}\) such that \(\mathcal {A}(\bot )\) takes values over \(\mathbb {F}^s \times \mathbb {F}^s \times \{0,1\}^*\) and \(\mathcal {A}(\varvec{x}_0, \varvec{x}_1, \mu , \varvec{\ell })\) takes values over \(\{0,1\}\) for every input \((\varvec{x}_0, \varvec{x}_1, \mu , \varvec{\ell }) \in \mathbb {F}^s \times \mathbb {F}^s \times \{0,1\}^* \times \mathrm {span}(\mathcal {L})\).
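
The distinguishing experiment of Definition 3 can be phrased as the following small simulation sketch; the toy program, leakage function, and adversary below are illustrative stand-ins of ours (they are not the constructions of this paper, and the auxiliary information \(\mu \) is omitted) and only serve to make the success probability of Eq. (8) concrete.

```python
import random

def random_probing_leak(intermediates, delta):
    # delta-random-probing leakage: each intermediate value is revealed with probability delta,
    # and replaced by None (standing for the symbol ⊥) otherwise.
    return [v if random.random() < delta else None for v in intermediates]

def advantage(program, adv_choose, adv_guess, delta, trials=20000):
    """Monte-Carlo estimate of |Succ - 1/2| for the experiment of Eq. (8)."""
    wins = 0
    for _ in range(trials):
        x0, x1 = adv_choose()
        b = random.randrange(2)
        leakage = random_probing_leak(program(x1 if b else x0), delta)
        wins += (adv_guess(x0, x1, leakage) == b)
    return abs(wins / trials - 0.5)

# Toy example: an unprotected "program" whose only intermediate value is its input itself.
choose = lambda: (3, 11)
guess = lambda x0, x1, leak: 1 if x1 in leak else 0
print("estimated advantage:", advantage(lambda x: [x], choose, guess, delta=0.2))
```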

Lemma 1 provides a security reduction from the noisy leakage model to the random probing model. This is formalized in the following corollary:

Corollary 2

Let \(\varPi \) be an arithmetic program that is \(\varepsilon \)-leakage secure w.r.t. \(\delta \)-random-probing leakage functions. Then \(\varPi \) is \(\varepsilon \)-leakage secure w.r.t. \(\delta '\)-noisy leakage functions, where \( \delta ' = \delta / |\mathbb {F}|^2\).

Note that in the original version of Lemma 1 (see [13]), the authors need the additional requirement that \(f'\) is efficiently decidable so that \(f'(\phi (x))\) is computable in polynomial time in \(|\mathcal {X}|\). We ignore this property in the present paper since our security statements consider adversaries with unlimited computational power.

3 Secure Multiplication in Quasilinear Complexity

In this section, we describe our encoding scheme and the associated secure multiplication. An important requirement of our construction is that the size n of the underlying encoding must divide \(\frac{p-1}{2}\) where p is the characteristic of \(\mathbb {F}\), that is \(\mathbb {F}\) must contain the 2n-th roots of unity. This implies that the size of the elements of \(\mathbb {F}\) is in \(\varOmega (\log n)\). Without loss of generality, we further assume that n is a power of 2.
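
As an illustration of this requirement, the following sketch (with a naive primality test, suitable for toy sizes only; the function names are ours) searches for a prime \(p = \alpha \cdot n + 1\) with \(\alpha \) even, so that \(2n \mid p-1\), together with a primitive 2n-th root of unity.

```python
def is_prime(m):
    # Naive trial division; fine for the toy sizes used here.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def find_parameters(n):
    """Return (p, xi) with p = alpha*n + 1 prime, 2n | p-1, and xi of order exactly 2n mod p."""
    assert n >= 2 and n & (n - 1) == 0, "n must be a power of 2"
    alpha = 2
    while not is_prime(alpha * n + 1):       # alpha even ensures that 2n divides p - 1
        alpha += 2
    p = alpha * n + 1
    for g in range(2, p):
        xi = pow(g, (p - 1) // (2 * n), p)   # candidate whose order divides 2n
        if pow(xi, n, p) == p - 1:           # xi^n = -1  <=>  the order is exactly 2n
            return p, xi
    raise RuntimeError("unreachable for a prime p")

print(find_parameters(8))     # -> (17, 3): 3 is a primitive 16-th root of unity modulo 17
```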

3.1 Our Encoding

Let \(\xi \) denote a primitive 2nth root of unity in \(\mathbb {F}\). Our encoding is based on a random element \(\omega \in \mathbb {F}^*\) and is defined as follows:

Definition 4

Let \(\omega \in \mathbb {F}^*\) and \(a \in \mathbb {F}\). An \(\omega \)-encoding of a is a tuple \((a_i)_{i=0}^{n-1} \in \mathbb {F}^n\) satisfying \(\sum _{i=0}^{n-1}a_i\omega ^i = a\).

Our encoding function \(\mathsf {Enc}\) maps an element \(a \in \mathbb {F}\) to a random element \(\omega \in \mathbb {F}^*\) and a random uniform \(\omega \)-encoding of a:

$$\begin{aligned} \mathsf {Enc}(a) = \langle \omega , (a_0, a_1, \dots , a_{n-1}) \rangle . \end{aligned}$$
(9)

The corresponding decoding function \(\mathsf {Dec}\) is defined as:

$$\begin{aligned} \mathsf {Dec}\big ( \langle \omega , (a_0, a_1, \dots , a_{n-1}) \rangle \big ) := \mathsf {Dec}_{\omega }(a_0, a_1, \dots , a_{n-1}) := \sum _{i=0}^{n-1}a_i\omega ^i \end{aligned}$$
(10)

It is easy to check that we have \(\Pr (\mathsf {Dec}(\mathsf {Enc}(a))=a)=1\) for every \(a\in \mathbb {F}\). It is worth noting that the security of our scheme does not rely on the secrecy of \(\omega \) but on its uniformity. Besides, we will consider that \(\omega \) is systematically leaked to the adversary.
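
A minimal sketch of this encoding follows, using the toy parameters p = 17 and n = 8 found in the previous sketch; the helper names enc and dec are ours.

```python
import random

p, n = 17, 8                      # toy parameters: p prime with 2n | p - 1, n a power of 2

def enc(a, omega):
    """Return a uniform omega-encoding (a_0, ..., a_{n-1}) of a, i.e. sum_i a_i * omega^i = a."""
    shares = [random.randrange(p) for _ in range(n - 1)]          # a_1, ..., a_{n-1} uniform
    partial = sum(s * pow(omega, i + 1, p) for i, s in enumerate(shares)) % p
    return [(a - partial) % p] + shares                           # a_0 fixes the relation

def dec(shares, omega):
    return sum(s * pow(omega, i, p) for i, s in enumerate(shares)) % p

omega = random.randrange(1, p)    # omega uniform over F*; it may be leaked to the adversary
a = 5
assert dec(enc(a, omega), omega) == a
```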

3.2 Multiplication of Encoded Variables

Let \((a_i)_{i=0}^{n-1}\) be an \(\omega \)-encoding of a and \((b_i)_{i=0}^{n-1}\) be an \(\omega \)-encoding of b. To compute an \(\omega \)-encoding \((c_i)_{i=0}^{n-1}\) of \(c=a \cdot b\) we use the NTT-based polynomial multiplication.

Specifically, we first apply the NTT on \((a_i)_i\) and \((b_i)_i\) to obtain the polynomial evaluations \(u_j = \sum _{i=0}^{n-1}a_i(\xi ^j)^i\) and \(r_j = \sum _{i=0}^{n-1}b_i(\xi ^j)^i\) for \(j\in \![\![0, 2n-1\!]\!]\). These evaluations are then pairwise multiplied to get evaluations of the product \(s_j = (2n)^{-1} u_j \cdot r_j\) for \(j \in \![\![0, 2n-1\!]\!]\) (with a multiplicative factor \((2n)^{-1}\)). Afterwards, we apply the inverse NTT to get coefficients \(t_i\) that satisfy \(\sum _{i=0}^{2n-1}t_i\omega ^i = (\sum _{i=0}^{n-1}a_i\omega ^i)\cdot (\sum _{i=0}^{n-1}b_i\omega ^i)\). Finally, we apply a compression procedure to recover an n-size \(\omega \)-encoding from the 2n-size \(\omega \)-encoding \((t_i)_i\). Due to the particular form of roots of unity, an NTT can be evaluated with a divide-and-conquer strategy in \(3 \, n \log n\) arithmetic instructions (a detailed description is given in Appendix A).

The overall process is summarized as follows:

$$\begin{aligned} (u_0, u_1, \ldots , u_{2 n -1})&\leftarrow \mathsf {NTT}_{\xi }(a_0, a_1, \dots , a_{n-1}, 0, \dots , 0)\\ (r_0, r_1, \ldots , r_{2 n -1})&\leftarrow \mathsf {NTT}_{\xi }(b_0, b_1, \dots , b_{n-1}, 0, \dots , 0)\\ (s_0, s_1, \ldots , s_{2 n -1})&\leftarrow (2n)^{-1} (u_0 \cdot r_0, u_1 \cdot r_1, \ldots , u_{2n-1} \cdot r_{2n-1})\\ (t_0, t_1, \dots , t_{2n-1})&\leftarrow \mathsf {NTT}_{\xi ^{-1}}(s_{0}, s_{1}, \dots , s_{2 n-1})\\ (c_0, c_1, \dots , c_{n-1})&\leftarrow \mathsf {compress}(t_0, t_1, \dots , t_{2n-1}) \end{aligned}$$

Compression Procedure. After computing the inverse NTT, we get a double-size encoding \((t_i)_{i=0}^{2n-1}\) satisfying \(\sum _{i=0}^{2n-1} t_i \, \omega ^i = a \cdot b\). In order to obtain a standard encoding with n shares, we simply set \(c_i = t_i + t_{n+i} \omega ^n\) for \(i \in [\![0,n-1]\!]\). The result is consistent since \(\sum _{i=0}^{n-1} c_i \omega ^i = \sum _{i=0}^{n-1} t_i \omega ^i + \omega ^n \sum _{i=0}^{n-1} t_{n+i} \omega ^{i} = \sum _{i=0}^{2n-1} t_i \omega ^i = a \cdot b\).
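
The whole multiplication can be sketched as follows (toy parameters p = 17, n = 8, \(\xi = 3\) as above; the recursive ntt and the helper names are ours). The sketch only checks functional correctness, not the leakage properties analysed below.

```python
import random

p, n, xi = 17, 8, 3                              # toy parameters; xi has order 2n modulo p
inv2n = pow(2 * n, p - 2, p)                     # (2n)^{-1} mod p (Fermat inversion)

def ntt(coeffs, root):
    """Evaluate the polynomial `coeffs` at root^0, ..., root^(m-1), m = len(coeffs) a power of 2."""
    m = len(coeffs)
    if m == 1:
        return list(coeffs)
    even = ntt(coeffs[0::2], root * root % p)
    odd = ntt(coeffs[1::2], root * root % p)
    out, w = [0] * m, 1
    for j in range(m // 2):                      # butterfly step, using root^(m/2) = -1
        t = w * odd[j] % p
        out[j], out[j + m // 2] = (even[j] + t) % p, (even[j] - t) % p
        w = w * root % p
    return out

def ntt_mult(a, b, omega):
    """Multiply two omega-encodings of length n into an omega-encoding of length n."""
    u = ntt(a + [0] * n, xi)
    r = ntt(b + [0] * n, xi)
    s = [inv2n * ui * ri % p for ui, ri in zip(u, r)]
    t = ntt(s, pow(xi, p - 2, p))                # NTT with xi^{-1}, i.e. the inverse transform
    return [(t[i] + t[n + i] * pow(omega, n, p)) % p for i in range(n)]   # compression

dec = lambda shares, omega: sum(v * pow(omega, i, p) for i, v in enumerate(shares)) % p

omega = random.randrange(1, p)
a = [random.randrange(p) for _ in range(n)]      # a uniform omega-encoding of dec(a, omega)
b = [random.randrange(p) for _ in range(n)]
c = ntt_mult(a, b, omega)
assert dec(c, omega) == dec(a, omega) * dec(b, omega) % p
```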

3.3 Security in the Random Probing Model

We first focus on the NTT leakage security as it is the most complex part of our scheme, and then provide a security proof for the whole multiplication.

Security of the NTT. We have the following result:

Theorem 1

Let \(\omega \) be a uniform random element of \(\mathbb {F}^*\), let \((a_i)_{i=0}^{n-1}\) be a uniform \(\omega \)-encoding of some variable a and let \(\delta < 1 /(6 \log n)\). The \(\mathsf {NTT}_\xi \) procedure on input \((a_i)_{i=0}^{n-1}\) is \(\varepsilon \)-leakage secure in the \(\delta \)-random-probing leakage model, where

$$\begin{aligned} \varepsilon = \frac{n}{|\mathbb {F}|} + \exp \Big (-\frac{(1-6 \delta \log n)^2}{4} ~n\Big ). \end{aligned}$$
(11)

The rest of the section gives a proof of Theorem 1. During the computation of the NTT on an \(\omega \)-encoding \((a_i)_{i=0}^{n-1}\) of a, all the leaking intermediate variables (i.e. the inputs of arithmetic instructions) are linear combinations of the \(a_i\)’s. Specifically, every intermediate variable v occurring in the NTT computation can be expressed as

$$\begin{aligned} v = \sum _{i=0}^{n-1}\alpha _i a_i \end{aligned}$$
(12)

where the \(\alpha _i\)’s are constant coefficients over \(\mathbb {F}\). In the following, we shall use the notation

$$\begin{aligned}{}[v]=(\alpha _0, \alpha _1, \dots , \alpha _{n-1})^\mathsf {t}\end{aligned}$$
(13)

for the column vector of coefficients of such an intermediate variable. Similarly, we shall denote \([a] = (1, \omega , \omega ^2, \ldots , \omega ^{n-1})^\mathsf {t}\) since we have \(a = \sum _{i=0}^{n-1}\omega ^i a_i\) by definition. Moreover, we will denote by \([v_0, v_1, \dots , v_{\ell }]\) the matrix with column vectors \([v_0]\), \([v_1]\), ..., \([v_{\ell }]\). In particular, we have \([a_0, a_1, \dots , a_{n-1}] = I_{n}\) (where \(I_n\) stands for the identity matrix of dimension n over \(\mathbb {F}\)) and for \(u_i = \sum _{j=0}^{n-1} a_j (\xi ^i)^j\) (the output elements of the NTT), the matrix \([u_0, u_1, \ldots , u_{n-1}]\) is a Vandermonde matrix.
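
The following sketch (toy parameters as before, with a naive re-implementation of the NTT) illustrates this notation: feeding the standard basis through \(\mathsf {NTT}_\xi \) recovers the coefficient vectors \([u_j]\), and one checks that they are exactly the Vandermonde columns \(((\xi ^j)^0, \ldots , (\xi ^j)^{n-1})^\mathsf {t}\).

```python
p, n, xi = 17, 8, 3                  # toy parameters (p prime, 2n | p - 1, xi of order 2n)

def ntt(coeffs, root):
    m = len(coeffs)
    if m == 1:
        return list(coeffs)
    even = ntt(coeffs[0::2], root * root % p)
    odd = ntt(coeffs[1::2], root * root % p)
    out, w = [0] * m, 1
    for j in range(m // 2):
        t = w * odd[j] % p
        out[j], out[j + m // 2] = (even[j] + t) % p, (even[j] - t) % p
        w = w * root % p
    return out

# Image of the i-th unit vector (zero-padded to length 2n) = column i of the linear map NTT_xi.
images = [ntt([int(k == i) for k in range(n)] + [0] * n, xi) for i in range(n)]
for j in range(2 * n):
    coeff_vector_of_u_j = [images[i][j] for i in range(n)]        # this is [u_j]
    assert coeff_vector_of_u_j == [pow(xi, i * j, p) for i in range(n)]
print("every output u_j is the linear combination sum_i a_i (xi^j)^i, as claimed")
```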

First consider an adversary that recovers \(\ell <n\) intermediate variables in the computation of the NTT, denoted \(v_1, v_2, \dots , v_{\ell }\). Without loss of generality, we assume that these intermediate variables are linearly independent (otherwise the adversary equivalently gets less than \(\ell \) intermediate variables), which means that the matrix \([v_1, v_2, \dots , v_{\ell }]\) has full rank. The following lemma gives a necessary and sufficient condition for such a leakage to be statistically independent of a.

Lemma 2

Let \(v_1, v_2, \dots , v_{\ell }\) be a set of \(\ell <n\) intermediate variables of the NTT on input a uniform \(\omega \)-encoding of a variable a. The distribution of the tuple \((v_1, v_2, \dots , v_{\ell })\) is statistically independent of a iff

$$\begin{aligned}{}[a] \notin \mathrm {span}([v_1, \dots , v_{\ell }])~, \end{aligned}$$
(14)

where \(\mathrm {span}(\cdot )\) refers to the linear span of the input matrix.

Proof

If \([a] \in \mathrm {span}([v_1, \dots , v_{\ell }])\) then there exist constants \(\gamma _1\), \(\gamma _2\), ..., \(\gamma _\ell \) such that \([a] = \sum _i \gamma _i [v_i]\) implying \(a = \sum _i \gamma _i v_i\), and the distribution \((v_1, v_2, \dots , v_{\ell })\) is hence statistically dependent on a. On the other hand, if \([a] \notin \mathrm {span}([v_1, \dots , v_{\ell }])\), then the system

$$\begin{aligned} {\left\{ \begin{array}{ll} a = \mathop \sum \nolimits _{j=0}^{n-1} \omega ^j a_j &{} = \gamma _0 \\ v_1 = \mathop \sum \nolimits _{j=0}^{n-1} \alpha _{1,j} a_j &{} = \gamma _1\\ v_2 = \mathop \sum \nolimits _{j=0}^{n-1} \alpha _{2,j} a_j &{} = \gamma _2 \\ \vdots \\ v_\ell = \mathop \sum \nolimits _{j=0}^{n-1} \alpha _{\ell ,j} a_j &{} = \gamma _\ell \end{array}\right. } \end{aligned}$$

has \(|\mathbb {F}|^{n-(\ell +1)}\) solutions \((a_0, a_1, \ldots , a_{n-1})\) for every \((\gamma _0, \gamma _1, \ldots , \gamma _\ell ) \in \mathbb {F}^{\ell +1}\). This implies the statistical independence between a and \((v_1, v_2, \dots , v_{\ell })\). \(\square \)

The following lemma gives an upper bound on the probability that the above condition is not fulfilled.

Lemma 3

Let \(\omega \) be a uniform random element in \(\mathbb {F}^*\) and let \(v_1, v_2, \dots , v_{\ell }\) be a set of \(\ell <n\) linearly independent intermediate variables of the NTT on input an \(\omega \)-encoding of a variable a. We have:

$$\begin{aligned} \Pr \big ([a] \in \mathrm {span}([v_1, \dots , v_{\ell }]) \big )\le \frac{\ell }{|\mathbb {F}|-1} < \frac{n}{|\mathbb {F}|} ~, \end{aligned}$$
(15)

where the above probability is taken over a uniform random choice of \(\omega \).

Proof

Let us denote \(A(x) = \sum _{i=0}^{n-1} a_i x^i\) so that \([A(\alpha )] = (1, \alpha , \alpha ^2, \ldots , \alpha ^{n-1})^\mathsf {t}\) for every \(\alpha \in \mathbb {F}\), and in particular \([a] = [A(\omega )]\). For any \(\ell +1\) distinct elements \(\alpha _1, \alpha _2, \ldots , \alpha _{\ell +1} \in \mathbb {F}^*\), the matrix \([A(\alpha _1), A(\alpha _2), \ldots , A(\alpha _{\ell +1})]\) has full rank since it is a Vandermonde matrix with distinct input entries. This directly implies:

$$\begin{aligned} \underbrace{\mathrm {span}([A(\alpha _1), A(\alpha _2), \ldots , A(\alpha _{\ell +1})])}_{\mathrm {dim}~{\ell +1}} \nsubseteq \underbrace{\mathrm {span}([v_1, \dots , v_{\ell }])}_{\mathrm {dim}~\ell }, \end{aligned}$$
(16)

hence the set \(\varOmega = \{ \alpha \mid [A(\alpha )] \in \mathrm {span}([v_1, v_2, \dots , v_{\ell }])\}\) has cardinality at most \(\ell \). By the uniform distribution of \(\omega \), we then have a probability at most \(\ell /(|\mathbb {F}|-1) < n/|\mathbb {F}|\) to have \(\omega \in \varOmega \), that is, to have \([a] \in \mathrm {span}([v_1, \dots , v_{\ell }])\). \(\square \)

We now have all the ingredients to prove Theorem 1.

Proof

(Theorem 1) We will show that for any adversary \(\mathcal {A}\), the advantage \(\mathrm {Adv}^{\mathsf {NTT},\,\mathcal {L}}_{\mathcal {A}}\) in distinguishing \(\mathcal {L}(\mathsf {NTT}, \mathsf {Enc}(a^{(0)}))\) from \(\mathcal {L}(\mathsf {NTT}, \mathsf {Enc}(a^{(1)}))\) for any chosen elements \(a^{(0)},a^{(1)} \in \mathbb {F}\) is lower than \(\varepsilon \), where \(\mathcal {L}(\mathsf {NTT}, \mathsf {Enc}(a))\) denotes the \(\delta \)-random-probing leakage of the procedure \(\mathsf {NTT}_\xi \) on input \(\mathsf {Enc}(a) = \langle \omega , (a_i)_{i=0}^{n-1} \rangle \). Note that this leakage is a tuple in which each coordinate corresponds to an arithmetic instruction in the computation of \(\mathsf {NTT}_{\xi }\) that either equals \(\bot \) (with probability \(1 - \delta \)) or the input of the instruction. We recall that the advantage is defined as \(\mathrm {Adv}^{\mathsf {NTT},\,\mathcal {L}}_{\mathcal {A}} = \big | \mathrm {Succ}_{\mathcal {A}}^{\mathsf {NTT},\,\mathcal {L}} - \frac{1}{2} \big |\) where

$$\begin{aligned} \mathrm {Succ}_{\mathcal {A}}^{\mathsf {NTT},\,\mathcal {L}} = \Pr \left( \begin{array}{cl} (a^{(0)},a^{(1)}, \mu ) \leftarrow \mathcal {A}(\bot ) \\ b \leftarrow \{0,1\} &{} : ~ \mathcal {A}(a^{(0)},a^{(1)}, \mu , \varvec{\ell }) = b \\ \varvec{\ell } \leftarrow \mathcal {L}(\mathsf {NTT}, a^{(b)}) \\ \end{array} \right) \end{aligned}$$
(17)

Without loss of generality, we assume \(\mathrm {Succ}_{\mathcal {A}}^{\mathsf {NTT},\,\mathcal {L}} \ge \frac{1}{2}\). Indeed, for any adversary with success probability \(\frac{1}{2} - \mathrm {Adv}^{\mathsf {NTT},\,\mathcal {L}}_{\mathcal {A}}\), there exists an adversary \(\mathcal {A}'\) with success probability \(\frac{1}{2} + \mathrm {Adv}^{\mathsf {NTT},\,\mathcal {L}}_{\mathcal {A}}\) (defined as \(\mathcal {A}'(a^{(0)},a^{(1)}, \mu , \varvec{\ell }) = 1 - \mathcal {A}(a^{(0)},a^{(1)},\mu , \varvec{\ell })\)).

The procedure \(\mathsf {NTT}_\xi \) is composed of \(N=3n\log n\) arithmetic instructions. In the \(\delta \)-random-probing model, each of these instructions leaks its input(s) with probability \(\delta \). The number of instructions that leak hence follows a binomial distribution with parameters N and \(\delta \). Let us denote by \(\mathsf {max}_\ell \) the event that \(\ell \) or fewer instructions leak in the random-probing leakage \(\mathcal {L}(\mathsf {NTT}, \mathsf {Enc}(a))\). Since each instruction takes at most two inputs over \(\mathbb {F}\), the adversary gets the values of at most \(2\ell \) intermediate variables whenever \(\mathsf {max}_\ell \) occurs. By the Chernoff bound (see Corollary 1), the probability that more than \(\ell \) arithmetic instructions leak, for \(\ell > N \delta \), namely the probability that \(\lnot \mathsf {max}_\ell \) occurs, satisfies:

$$\begin{aligned} \Pr (\lnot \mathsf {max}_\ell ) \le \psi (\ell ,N). \end{aligned}$$
(18)

From \(N = 3n\log n\) and \(\ell <\frac{n}{2}\), we get that:

$$\begin{aligned} \Pr (\lnot \mathsf {max}_\ell ) \le \exp \Big (-\frac{(1-6 \delta \log n)^2}{2 + 12 \delta \log n} ~n\Big ) \le \exp \Big (-\frac{(1-6 \delta \log n)^2}{4} ~n\Big ). \end{aligned}$$
(19)

Now let us assume that \(\mathsf {max}_\ell \) occurs for some \(\ell < \frac{n}{2}\) and let us denote by \(v_1\), \(v_2\), ..., \(v_{2\ell }\) the recovered intermediate variables. Without loss of generality, we assume that the recovered intermediate variables are linearly independent. Let us then denote by \(\mathsf {free}\) the event that \([a] \notin \mathrm {span}([v_1, \dots , v_{2\ell }])\). By Lemma 3, we have

$$\begin{aligned} \Pr (\lnot \mathsf {free}) < \frac{n}{|\mathbb {F}|}. \end{aligned}$$
(20)

Finally, let us denote by \(\mathsf {succ}\) the event that \(\mathcal {A}\) outputs the right bit b on input \((a^{(0)},a^{(1)}, \mu , \varvec{\ell })\), so that \(\mathrm {Succ}_{\mathcal {A}}^{\mathsf {NTT},\,\mathcal {L}} = \Pr (\mathsf {succ})\). We can then write:

$$\begin{aligned} \mathrm {Succ}_{\mathcal {A}}^{\mathsf {NTT},\,\mathcal {L}} = \Pr (\mathsf {max}_\ell ) \Pr (\mathsf {succ} \mid \mathsf {max}_\ell )&+ \Pr (\lnot \mathsf {max}_\ell ) \Pr (\mathsf {succ} \mid \lnot \mathsf {max}_\ell )\nonumber \\&\le \Pr (\mathsf {succ} \mid \mathsf {max}_\ell ) + \Pr (\lnot \mathsf {max}_\ell ). \end{aligned}$$
(21)

In the same way, we have

$$\begin{aligned} \Pr (\mathsf {succ} \mid \mathsf {max}_\ell ) \le \Pr (\mathsf {succ} \mid \mathsf {max}_\ell \cap \mathsf {free}) + \Pr (\lnot \mathsf {free}). \end{aligned}$$
(22)

By Lemma 2, we have that the leakage \(\varvec{\ell }\) is statistically independent of \(a^{(b)}\) in (17) whenever \(\mathsf {max}_\ell \cap \mathsf {free}\) occurs. This directly implies \(\Pr (\mathsf {succ} \mid \mathsf {max}_\ell \cap \mathsf {free}) = \frac{1}{2}\), which gives

$$\begin{aligned} \mathrm {Succ}_{\mathcal {A}}^{\mathsf {NTT},\,\mathcal {L}} < \frac{1}{2} + \Pr (\lnot \mathsf {max}_\ell ) + \Pr (\lnot \mathsf {free}). \end{aligned}$$
(23)

Hence, we finally get

$$\begin{aligned} \mathrm {Adv}_{\mathcal {A}}^{\mathsf {NTT},\,\mathcal {L}} < \Pr (\lnot \mathsf {max}_\ell ) + \Pr (\lnot \mathsf {free}) \le \frac{n}{|\mathbb {F}|} + \exp \Big (-\frac{(1-6 \delta \log n)^2}{4} ~n\Big ), \end{aligned}$$
(24)

which concludes the proof. \(\square \)

Security of the Full Multiplication. We now prove the security of the full multiplication. We have the following result:

Theorem 2

Let \(\omega \) be a uniform random element of \(\mathbb {F}^*\), let \((a_i)_{i=0}^{n-1}\) and \((b_i)_{i=0}^{n-1}\) be uniform \(\omega \)-encodings of some variables a and b, and let \(\delta < 1/(21\log n)\). The above NTT-based multiplication procedure on input \((a_i)_{i=0}^{n-1}\) and \((b_i)_{i=0}^{n-1}\) is \(\varepsilon \)-leakage secure in the \(\delta \)-random-probing leakage model, where

$$\begin{aligned} \varepsilon = \frac{2\,n}{|\mathbb {F}|} + 5 \exp \Big (-\frac{(1-21 \delta \log n)^2}{14} ~n\Big ). \end{aligned}$$
(25)

Proof

The full multiplication is composed of five successive steps:

  1. the NTT on input \((a_i)_i\),

  2. the NTT on input \((b_i)_i\),

  3. the pairwise multiplications \((2n)^{-1} \cdot u_i \cdot r_i\),

  4. the NTT on input \((s_i)_i\),

  5. the final compression on input \((t_i)_i\).

Let us denote by \(\ell _1\), \(\ell _2\), ..., \(\ell _5\) the number of operations that leak at each of these steps. Since each operation takes up to 2 input variables, the adversary then gets:

  • up to \(2\ell _1\) variables from the first NTT, each variable providing a linear equation in the \(a_i\)’s;

  • up to \(2\ell _2\) variables from the second NTT, each variable providing a linear equation in the \(b_i\)’s;

  • up to \(\ell _3\) pairs \((u_i,r_i)\), each pair providing a linear equation in the \(a_i\)’s and a linear equation in the \(b_i\)’s;

  • up to \(2 \ell _4\) variables in the third NTT (the inverse NTT), each variable providing a linear equation in the \(s_j\)’s;

  • up to \(\ell _5\) pairs \((t_i,t_{i+n})\), each pair providing two linear equations in the \(s_j\)’s.

To sum up, the adversary gets a system composed of

  • up to \(\ell _1^* = 2 \ell _1 + \ell _3\) linear equations of the form

    $$\begin{aligned} \sum _{i=1}^n \alpha _{k,i} \cdot a_i = \eta _k ~~~\text {for}\, k=1, \ldots , \ell _1^* \end{aligned}$$
    (26)
  • up to \(\ell _2^* = 2 \ell _2 + \ell _3\) linear equations of the form

    $$\begin{aligned} \sum _{i=1}^n \beta _{k,i} \cdot b_i = \nu _k ~~~\text {for}\, k=1, \ldots , \ell _2^* \end{aligned}$$
    (27)
  • up to \(\ell _3^* = 2 \ell _4 + 2 \ell _5\) linear equations of the form

    $$\begin{aligned} \sum _{j=1}^{2n} \gamma _{k,j} \cdot s_j = \chi _k ~~~\text {for}\, k=1, \ldots , \ell _3^* \end{aligned}$$
    (28)

Since we have \(s_j = (2n)^{-1} u_j r_j\) for every j, and since \(u_j\) and \(r_j\) can be expressed as linear combinations of the \((a_i)_i\) and of the \((b_i)_i\) respectively, the last \(\ell _3^*\) equations can be rewritten as:

$$\begin{aligned} \sum _{i=1}^n \gamma '_{k,i} \cdot b_i = \chi _k ~~~\text {for}\, k=1, \ldots , \ell _3^* \end{aligned}$$
(29)

where the \(\gamma '_{k,i}\)’s are coefficients that depend on the \(a_i\)’s.

From these equations, the attacker gains the knowledge that:

  1. the encoding \((a_i)_{i=0}^{n-1}\) belongs to some affine subspace

    $$\begin{aligned} \mathcal {S}_1 = \{\varvec{x}\in \mathbb {F}^n ~;~ M_1 \cdot \varvec{x} = \varvec{\eta }\} \end{aligned}$$
    (30)

    of dimension at least \(n - \ell _1^*\) where \(M_1\) is the matrix with coefficients \(\alpha _{k,i}\)’s, and \(\varvec{\eta }\) is the vector with coordinates \(\eta _k\),

  2. the encoding \((b_i)_{i=0}^{n-1}\) then belongs to some affine subspace

    $$\begin{aligned} \mathcal {S}_2 = \{\varvec{x}\in \mathbb {F}^n ~;~ M_2 \cdot \varvec{x} = (\varvec{\nu },\varvec{\chi })\} \end{aligned}$$
    (31)

    of dimension at least \(n - \ell _2^* - \ell _3^*\) where \(M_2\) is the matrix with coefficients \(\beta _{k,i}\)’s and \(\gamma '_{k,i}\)’s and \((\varvec{\nu },\varvec{\chi })\) is the vector with coordinates \(\nu _k\) and \(\chi _k\).

Following the proof of Lemma 2, it can be checked that if

$$\begin{aligned} (1, \omega , \ldots , \omega ^{n-1}) \notin \mathrm {span}(M_1) ~~~\text {and}~~~ (1, \omega , \ldots , \omega ^{n-1}) \notin \mathrm {span}(M_2), \end{aligned}$$

then the full leakage of the multiplication is statistically independent of a and b, namely the leakage security holds. These two events are denoted \(\mathsf {free}_1\) and \(\mathsf {free}_2\) hereafter.

Then, following the proof of Lemma 3, \(\mathsf {free}_1\) occurs with probability at least \(1 - \frac{n}{|\mathbb {F}|}\) over a random choice of \(\omega \), provided that we have \(\mathrm {rank}(M_1) < n\). Moreover, since the affine subspace \(\mathcal {S}_1\) is independent of \(\omega \), any possible choice of \((a_i)_{i=0}^{n-1} \in \mathcal {S}_1\) gives rise to some coefficients \(\gamma '_{k,i}\)’s independent of \(\omega \), and we have that \(\mathsf {free}_2\) occurs with probability at least \(1 - \frac{n}{|\mathbb {F}|}\) over a random choice of \(\omega \) as long as we have \(\mathrm {rank}(M_2) < n\). The two conditions on the ranks of \(M_1\) and \(M_2\) are then fulfilled whenever we have

$$\begin{aligned} \ell ^*_1 = 2 \ell _1 + \ell _3 < n, \end{aligned}$$
(32)

and

$$\begin{aligned} \ell ^*_2 + \ell ^*_3 = 2 \ell _2 + \ell _3 + 2 \ell _4 + 2 \ell _5 < n. \end{aligned}$$
(33)

Let us denote by \(\mathsf {max}_i\) the event that the number of leaking operations \(\ell _i\) at step i is lower than n / 7, for every i. If \(\mathsf {max}_i\) occurs for every \(i \in \{1,2,3,4,5\}\), then the two inequalities above are indeed satisfied.

By applying the Chernoff bound, we hence get:

$$\begin{aligned} \Pr (\lnot \mathsf {max}_i) \le \psi \big (\frac{n}{7}, N_i\big ), \end{aligned}$$
(34)

where \(N_i\) is the number of operations at step i, which satisfies \(N_i \le 3n \log n\). This gives

$$\begin{aligned} \Pr (\lnot \mathsf {max}_i) \le \psi \big (\frac{n}{7}, 3n \log n\big ) \le \exp \Big (-\frac{(1-21 \delta \log n)^2}{14} ~n\Big ). \end{aligned}$$
(35)

We finally get that the multiplication is \(\varepsilon \)-leakage secure with

$$\begin{aligned}&\varepsilon< \Pr (\lnot \mathsf {max}_1) + \Pr (\lnot \mathsf {max}_2) + \cdots + \Pr (\lnot \mathsf {max}_5) \nonumber \\&\qquad +\, \underbrace{\Pr (\lnot \mathsf {free}_1 \mid \mathsf {max}_1 \wedge \ldots \wedge \mathsf {max}_5)}_{< {n}/{|\mathbb {F}|}} + \underbrace{\Pr (\lnot \mathsf {free}_2 \mid \mathsf {max}_1 \wedge \ldots \wedge \mathsf {max}_5)}_{< {n}/{|\mathbb {F}|}}. \end{aligned}$$
(36)

\(\square \)

4 Compositional Security for Arithmetic Programs

In this section we show how to obtain leakage security for a full arithmetic program, composed of several multiplications, additions and subtractions. Since computing addition and subtraction on encoded variables is quite simple, our main contribution is to describe a refreshing procedure which allows us to achieve compositional security.

We first describe our refreshing procedure before explaining how to transform an arithmetic program into a leakage-secure equivalent arithmetic program. Then we provide our compositional security proof.

4.1 Refreshing Procedure

Our refreshing procedure is based on the common approach of adding an encoding of 0. Let \((a_i)_{i=0}^{n-1}\) be an \(\omega \)-encoding of a variable a. We refresh it into an \(\omega \)-encoding \((a'_i)_{i=0}^{n-1}\) of a as follows:

  1. sample a random \(\omega \)-encoding \((r_0, r_1, \ldots , r_{n-1}) \leftarrow \mathsf {Enc}_{\omega }(0)\),

  2. set \(a'_i = a_i + r_i\) for \(i=0\) to \(n-1\).

The main issue with such an approach is the design of a scheme to sample an encoding of 0 that has the right features for compositional security. As detailed later, we can prove compositional security as long as our construction satisfies the following two properties:

  • Uniformity: it outputs a uniform \(\omega \)-encoding of 0;

  • Output linearity: its intermediate variables (i.e. the input of elementary operations in the sampler) can each be expressed as a linear combination of the output shares \((r_i)_i\).

We now describe an \(\mathsf {Enc}_{\omega }(0)\) sampler which satisfies these two properties.

Sampling Encodings of 0. At the beginning of the computation of \(\varPi \), a random \(\omega \)-encoding of 0 is generated. This is simply done by randomly picking \(n-1\) of the n shares and computing the last one accordingly. We will denote by \((e_i)_{i=0}^{n-1}\) this encoding. Note that just as for \(\omega \), this encoding can be fully leaked to the adversary. Our sampler then works as follows:

  1. pick \(n-1\) random values \(u_0\), \(u_1\), ..., \(u_{n-2}\) over \(\mathbb {F}\),

  2. output \((r_i)_{i=0}^{n-1} = \mathsf {NTTMult}((u_0, u_1, \dots , u_{n-2},0), (e_0, e_1, \dots , e_{n-1}))\),

where \(\mathsf {NTTMult}\) is the NTT-based multiplication described in Sect. 3.

It is not hard to see that the result is indeed an encoding of 0: since the \((e_i)_i\) encode 0, the encoded product is also 0. The uniformity is slightly trickier to see. We claim that, with overwhelming probability (over the random choice of \((e_i)_i\)), the function:

$$\begin{aligned} (u_0, u_1, \dots , u_{n-2}) \mapsto \mathsf {NTTMult}((u_0, u_1, \dots , u_{n-2},0), (e_0, e_1, \dots , e_{n-1})), \end{aligned}$$
(37)

is injective. This function is indeed linear and it can be seen as a multiplication by an \((n-1) \times n\) matrix. We empirically validated that this matrix is of rank \(n-1\) with overwhelming probability. By discarding one column we can get a full-rank square matrix of dimension \(n-1\), allowing the recovery of \((u_0, u_1, \dots , u_{n-2})\) from the output encoding. Therefore, we have a one-to-one mapping between the vectors \((u_0, u_1, \dots , u_{n-2}) \in \mathbb {F}^{n-1}\) and the \(\omega \)-encodings of 0, i.e. the tuples \((r_i)_{i=0}^{n-1} \in \mathbb {F}^{n}\) with \(\mathsf {Dec}_{\omega } ((r_i)_{i=0}^{n-1}) = 0\).

The output linearity is a direct consequence of the above. Since the \(u_i\)’s can be expressed as linear combinations of the \(r_i\)’s, and since all the intermediate variables of the sampling procedure are linear combinations of the \(u_i\)’s (the encoding \((e_i)_i\) being fixed and public), these intermediate variables can be expressed as linear combinations of the \(r_i\)’s as well.
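
The sampler and the resulting refreshing procedure can be sketched as follows (toy parameters p = 17, n = 8; the helper names are ours). For brevity, schoolbook polynomial multiplication followed by the same compression step stands in for \(\mathsf {NTTMult}\): both compute exactly the same function, only more slowly. The final lines check, on this toy instance, the rank claim underlying the uniformity property.

```python
import random

p, n = 17, 8                                       # toy parameters

def dec(shares, omega):
    return sum(v * pow(omega, i, p) for i, v in enumerate(shares)) % p

def mult_compress(a, b, omega):
    # Schoolbook stand-in for NTTMult: product-polynomial coefficients, then compression.
    t = [0] * (2 * n)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            t[i + j] = (t[i + j] + ai * bj) % p
    return [(t[i] + t[n + i] * pow(omega, n, p)) % p for i in range(n)]

omega = random.randrange(1, p)
e = [random.randrange(p) for _ in range(n - 1)]    # public omega-encoding of 0: last share fixed
e.append(-dec(e + [0], omega) * pow(pow(omega, n - 1, p), p - 2, p) % p)
assert dec(e, omega) == 0

def sample_zero_encoding():
    u = [random.randrange(p) for _ in range(n - 1)] + [0]
    return mult_compress(u, e, omega)              # an omega-encoding of 0

def refresh(a):
    r = sample_zero_encoding()
    return [(ai + ri) % p for ai, ri in zip(a, r)]

a = [random.randrange(p) for _ in range(n)]
assert dec(refresh(a), omega) == dec(a, omega)

# Rank of the linear map u -> r (output linearity / uniformity claim): expected n - 1.
cols = [mult_compress([int(k == i) for k in range(n)], e, omega) for i in range(n - 1)]
M = [[cols[i][row] for i in range(n - 1)] for row in range(n)]     # n rows, n - 1 columns
rank = 0
for col in range(n - 1):
    piv = next((r for r in range(rank, n) if M[r][col]), None)
    if piv is None:
        continue
    M[rank], M[piv] = M[piv], M[rank]
    inv = pow(M[rank][col], p - 2, p)
    for row in range(n):
        if row != rank and M[row][col]:
            f = M[row][col] * inv % p
            M[row] = [(M[row][c] - f * M[rank][c]) % p for c in range(n - 1)]
    rank += 1
print("rank of the u -> r map:", rank, "(n - 1 =", n - 1, "expected with overwhelming probability)")
```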

4.2 Arithmetic Program Compiler

We consider an arithmetic program P processing variables defined over a prime field \(\mathbb {F}\). We show how to transform such a program into a leakage-secure arithmetic program \(\varPi \). Each arithmetic instruction of P gives rise to a corresponding gadget in \(\varPi \) that works on encodings. We describe these different gadgets hereafter.

Copy Gadget. The copy gadget simply consists in applying the refreshing procedure to copy an encoded variable into a fresh encoding of the same variable. Let \((a_i)_{i=0}^{n-1}\) be an \(\omega \)-encoding of a. The copy gadget computes an \(\omega \)-encoding \((a'_i)_{i=0}^{n-1}\) of a as:

$$\begin{aligned} (a'_0, a'_1, \dots , a'_{n-1})&\leftarrow \mathsf {refresh}(a_0, a_1, \dots , a_{n-1}) \\ \end{aligned}$$

The copy gadget is used whenever an output \(\omega \)-encoding \((a_i)_{i=0}^{n-1}\) from some previous gadget is used as an input of several following gadgets. If \((a_i)_{i=0}^{n-1}\) is to be used as an input of N following gadgets, one makes \(N-1\) extra copies (in such a way that each new copy enters the next copy gadget):

$$\begin{aligned} (a_i)_{i=0}^{n-1} \rightarrow (a_i^{(2)})_{i=0}^{n-1} \rightarrow \cdots \rightarrow (a_i^{(N)})_{i=0}^{n-1} \end{aligned}$$

This way, each fresh encoding \((a_i^{(j)})_{i=0}^{n-1}\) enters at most two different gadgets: the copy gadget and one of the N computation gadgets.

Addition Gadget. Let \((a_i)_{i=0}^{n-1}\) be an \(\omega \)-encoding of a and \((b_i)_{i=0}^{n-1}\) be an \(\omega \)-encoding of b. To compute an \(\omega \)-encoding \((c_i)_{i=0}^{n-1}\) of \(c=a+b\), we simply compute:

$$\begin{aligned} (c_0,c_1,\dots ,c_{n-1}) \leftarrow \mathsf {refresh}(a_0+b_0, a_1+b_1, \dots , a_{n-1}+b_{n-1}) \end{aligned}$$

Subtraction Gadget. Let \((a_i)_{i=0}^{n-1}\) be an \(\omega \)-encoding of a and \((b_i)_{i=0}^{n-1}\) be an \(\omega \)-encoding of b. To compute an \(\omega \)-encoding \((c_i)_{i=0}^{n-1}\) of \(c=a-b\), we simply compute:

$$\begin{aligned} (c_0,c_1,\dots ,c_{n-1}) \leftarrow \mathsf {refresh}(a_0-b_0, a_1-b_1, \dots , a_{n-1}-b_{n-1}) \end{aligned}$$

Multiplication Gadget. Let \((a_i)_{i=0}^{n-1}\) be an \(\omega \)-encoding of a and \((b_i)_{i=0}^{n-1}\) be an \(\omega \)-encoding of b. To compute an \(\omega \)-encoding \((c_i)_{i=0}^{n-1}\) of \(c=a \cdot b\), we simply compute:

$$\begin{aligned} (c_0,c_1,\dots ,c_{n-1}) \leftarrow \mathsf {refresh}\big (\mathsf {NTTMult}((a_0, a_1, \dots , a_{n-1}), (b_0, b_1, \dots , b_{n-1}))\big ) \end{aligned}$$

where \(\mathsf {NTTMult}\) denotes the NTT-based multiplication described in Sect. 3.

4.3 Compositional Security

The compositional security of our construction is based on the two following properties of the refreshing procedure:

  • Uniformity: for a given \(\omega \in \mathbb {F}^*\) and a given value \(a \in \mathbb {F}\), the \(\omega \)-encoding \((a'_i)_{i=0}^{n-1}\) in output of the refreshing procedure is uniformly distributed and independent of the input \(\omega \)-encoding \((a_i)_{i=0}^{n-1}\);

  • I/O linear separability: the intermediate variables of the refreshing procedure can each be expressed as a deterministic function of a linear combination of the \((a_i)_i\) and a linear combination of the \((a'_i)_i\).

The uniformity property is a direct consequence of the uniformity of the \(\mathsf {Enc}_{\omega }(0)\) sampler. The I/O linear separability holds from the output linearity of the \(\mathsf {Enc}_{\omega }(0)\) sampler since the shares \((r_i)_i\) output by the sampler satisfy \(r_i = a'_i -a_i\) for every i, implying that any linear combination \({\sum }_i \gamma _{i} r_i\) equals \({\sum }_i \gamma _{i} a_i' - {\sum }_i \gamma _{i} a_i\) and is hence a deterministic function of a linear combination \({\sum }_i \gamma _{i} a_i'\) and a linear combination \({\sum }_i \gamma _{i} a_i\).

The I/O linear separability of the refreshing procedure implies that its leakage can be split into some leakage depending only on its input encoding, which is the output (before refreshing) from a previous gadget, and some leakage depending only on its output encoding, which is the input of a next gadget. This way, the full leakage can be split into subleakages each depending on the input/output of one gadget. Moreover, the uniformity property implies that all these subleakages are mutually independent. They can hence be analyzed separately: if none of them reveal information, then the full leakage does not reveal information either.

The compositional security of our construction is formalized in the following theorem.

Theorem 3

Let P be an arithmetic program taking some input \(\varvec{x} \in \mathbb {F}^s\) and let \(\varPi \) denote the corresponding program protected with n-size encodings as described above. For every \(\delta < 1/(33\log n)\), \(\varPi \) is \(\varepsilon \)-leakage secure in the \(\delta \)-random-probing model where

$$\begin{aligned} \varepsilon = 3|P| \cdot \left( 2 \exp \Big (-\frac{(1-33 \delta \log n)^2}{22} ~n\Big ) + \frac{2 n}{|\mathbb {F}|} \right) , \end{aligned}$$
(38)

where |P| denotes the size of P i.e. its number of arithmetic instructions.

Proof

Let \(|\varPi |\) denote the number of gadgets in \(\varPi \). Since the output of each gadget is refreshed (and nothing more), the number of calls to the refreshing procedure is also \(|\varPi |\). Each arithmetic instruction in P gives rise to one associated gadget, plus up to 2 copy gadgets if necessary. We hence deduce \(|\varPi | \le 3 |P|\).

Let us denote by \(\mathsf {rmax}\) the event that at most \(\frac{n}{11}\) operations leak in each refreshing. By applying the Chernoff bound (see Corollary 1), we have

$$\begin{aligned} \Pr (\lnot \mathsf {rmax}) \le |\varPi | \cdot \psi \big (\frac{n}{11},N_{\mathrm {ref}}\big ), \end{aligned}$$
(39)

where \(N_{\mathrm {ref}}\) denotes the number of elementary operations in the refreshing procedure. Let us further denote by \(\mathsf {gmax}\) the event that at most \(\frac{n}{11}\) operations leak in each gadget (without refreshing). In the same way as above, we have

$$\begin{aligned} \Pr (\lnot \mathsf {gmax}) \le \sum _{i=1}^{|\varPi |} \psi \big (\frac{n}{11},N^{(i)}\big ) \le |\varPi | \cdot \psi \big (\frac{n}{11},N_{\mathrm {gad}}\big ), \end{aligned}$$
(40)

where \(N^{(i)}\) denotes the number of elementary operations in the ith gadget and where \(N_{\mathrm {gad}}\) denotes the maximum over all gadgets (which is reached by the multiplication gadget).

In the following, we shall denote by \((a^{(i)}_j)_{j}\) and \((b^{(i)}_j)_{j}\) the input encodings of the ith gadget of \(\varPi \) and by \((c^{(i)}_j)_{j}\) the output encoding (before refreshing) of the ith gadget of \(\varPi \). Let us further denote by \(\mathcal {L}\) the full \(\delta \)-random-probing leakage of \(\varPi \), so that we have:

$$\begin{aligned} \mathcal {L} = \bigcup _{i=1}^{|\varPi |} \mathcal {G}^{(i)} ~ \cup ~\bigcup _{i=1}^{|\varPi |} \mathcal {R}^{(i)} \end{aligned}$$
(41)

where \(\mathcal {G}^{(i)}\) denotes the leakage from the ith gadget (without refreshing) and where \(\mathcal {R}^{(i)}\) denotes the leakage of the ith refresh. Specifically, \(\mathcal {G}^{(i)}\) and \(\mathcal {R}^{(i)}\) are families of intermediate variables (inputs of elementary operations) that are revealed by the \(\delta \)-random-probing leakage. If \(\mathsf {rmax}\) and \(\mathsf {gmax}\) occur, we have \(|\mathcal {G}^{(i)}| \le \frac{2n}{11}\) and \(|\mathcal {R}^{(i)}| \le \frac{2n}{11}\) (each leaking operation reveals its pair of inputs, hence the factor 2).

According to the I/O linear separability property of the refreshing procedure, we can define a separated leakage \(\mathcal {L}'\) as

$$\begin{aligned} \mathcal {L}' = \bigcup _{i=1}^{|\varPi |} \big (\mathcal {G}^{(i)} \cup \mathcal {A}^{(i)} \cup \mathcal {B}^{(i)} \cup \mathcal {C}^{(i)} \big ) \end{aligned}$$
(42)

where \(\mathcal {A}^{(i)}\) is a set of linear combinations of \((a^{(i)}_{j})_j\), \(\mathcal {B}^{(i)}\) is a set of linear combinations of \((b^{(i)}_{j})_j\), and \(\mathcal {C}^{(i)}\) is a set of linear combinations of \((c^{(i)}_{j})_j\), such that \(\mathcal {L}\) is a deterministic function of \(\mathcal {L}'\). This implies that if \(\mathcal {L}'\) is statistically independent of the program input \(\varvec{x}\), then so is \(\mathcal {L}\). The remainder of the proof consists in showing that the former holds with overwhelming probability (for a sound choice of the parameters).

We shall bound the probability (over the distribution of \(\omega \)) that the family \(\mathcal {L}'\) is statistically dependent on \(\varvec{x}\), hereafter denoted \(\varvec{x} \,\triangledown \, \mathcal {L}'\). We have

$$\begin{aligned} \varvec{x} \,\triangledown \, \mathcal {L}' = \bigvee _{i=1}^{|\varPi |} \big (\varvec{x} \,\triangledown \, \mathcal {G}^{(i)} \cup \mathcal {A}^{(i)} \cup \mathcal {B}^{(i)} \cup \mathcal {C}^{(i)} \big ). \end{aligned}$$
(43)

By the uniformity property of the refreshing, we have that, given the program input \(\varvec{x}\), the different families of input/output shares \(\big \{(a^{(i)}_j)_{j=0}^{n-1}, (b^{(i)}_j)_{j=0}^{n-1}, (c^{(i)}_j)_{j=0}^{n-1}\big \}\) are mutually independent. We hence get

$$\begin{aligned} \Pr (\varvec{x} \,\triangledown \, \mathcal {L}') \le \sum _{i=1}^{|\varPi |} \Pr \big (\varvec{x} \,\triangledown \, \mathcal {G}^{(i)} \cup \mathcal {A}^{(i)} \cup \mathcal {B}^{(i)} \cup \mathcal {C}^{(i)} \big ). \end{aligned}$$
(44)

We can then upper bound the probability \(\Pr \big (\varvec{x} \,\triangledown \, \mathcal {G}^{(i)} \cup \mathcal {A}^{(i)} \cup \mathcal {B}^{(i)} \cup \mathcal {C}^{(i)} \big )\) when the ith gadget is a secure multiplication by following the proof of Theorem 2. The only difference is that the attacker gets additional linear combinations of the input/output shares from the refreshing procedures. Specifically, we would have

  • up to \(\ell ^*_1 = 2 \ell _1 + \ell _3 + 2\ell '_1\) linear combinations of the form \(\sum _{i} \alpha _{k,i} a_i,\) for \(1\le k\le \ell ^*_1\);

  • up to \(\ell ^*_2 = 2 \ell _2 + \ell _3+ 2\ell '_2\) linear combinations of the form \(\sum _{i} \beta _{k,i} b_i,\) for \(1\le k\le \ell ^*_2\);

  • up to \(\ell ^*_3 = 2 \ell _4 + 2 \ell _5+ 2 \ell '_3\) linear combinations of the form \(\sum _{j} \gamma _{k,j} s_j\), for \(1\le k\le \ell ^*_3\);

where \(\ell '_1\), \(\ell '_2\) and \(\ell '_3\) are the numbers of leaking operations in the input/output refreshing procedures. Under the constraints \(\ell _i < \frac{n}{11}\) and \(\ell '_i < \frac{n}{11}\) for every i, we still get \(\ell ^*_1 + \ell ^*_3 < n\) and \(\ell ^*_2 + \ell ^*_3 < n\). That is, if \(\mathsf {rmax}\) and \(\mathsf {gmax}\) occur, we get

$$\begin{aligned} \Pr \big (\varvec{x} \,\triangledown \, \mathcal {G}^{(i)} \cup \mathcal {A}^{(i)} \cup \mathcal {B}^{(i)} \cup \mathcal {C}^{(i)} \mid \mathsf {rmax}\wedge \mathsf {gmax}\big ) \le \frac{2 n}{|\mathbb {F}|}. \end{aligned}$$
(45)

For copy, addition and subtraction gadgets, the proof is quite simple. When an operation leaks in such a gadget, it reveals one share from each input encoding. We hence get at most \(\frac{n}{11}\) linear combinations on each input encoding (from the gadget leakage), plus \(\frac{2n}{11}\) linear combinations on each input encoding (from their respective refreshings), plus \(\frac{2n}{11}\) linear combinations on the output encoding, which can be split into independent linear combinations on the two input encodings. This makes at most \(\frac{n}{11} + \frac{2n}{11} + \frac{2n}{11} = \frac{5n}{11} < n\) linear combinations on each encoding, which allows us to apply Lemma 3 and to obtain (45) for every kind of gadget.

We finally get

$$\begin{aligned} \Pr (\varvec{x} \,\triangledown \, \mathcal {L}') &\le \Pr (\lnot \mathsf {rmax}) + \Pr (\lnot \mathsf {gmax}) + \sum _{i=1}^{|\varPi |} \Pr \big (\varvec{x} \,\triangledown \, \mathcal {G}^{(i)} \cup \mathcal {A}^{(i)} \cup \mathcal {B}^{(i)} \cup \mathcal {C}^{(i)} \mid \mathsf {rmax}\wedge \mathsf {gmax}\big ) \\ &\le |\varPi | \cdot \Big ( \psi \big (\tfrac{n}{11},N_{\mathrm {ref}}\big ) + \psi \big (\tfrac{n}{11},N_{\mathrm {gad}}\big ) + \frac{2 n}{|\mathbb {F}|} \Big ), \end{aligned}$$

which together with \(N_{\mathrm {ref}},N_{\mathrm {gad}}< 3 n\log n\) concludes the proof. \(\square \)
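To get a feel for the bound (38), the following snippet evaluates \(\varepsilon \) for sample parameters. It is only an illustrative sketch: the parameter values are not ones prescribed by this work, the logarithm is taken in base 2 here (an assumption of the sketch), and the function name \(\mathsf{epsilon\_bound}\) is hypothetical.

```python
# Numerical evaluation of the bound of Eq. (38) with illustrative parameters.
import math

def epsilon_bound(size_P, n, field_size, delta):
    log_n = math.log2(n)  # base-2 logarithm assumed in this sketch
    assert delta < 1.0 / (33 * log_n), "Theorem 3 requires delta < 1/(33 log n)"
    term_exp = 2 * math.exp(-((1 - 33 * delta * log_n) ** 2) / 22 * n)
    term_field = 2 * n / field_size
    return 3 * size_P * (term_exp + term_field)

# Example: n = 2**10 shares, a 256-bit field, |P| = 10**4 instructions.
print(epsilon_bound(size_P=10**4, n=2**10, field_size=2**256, delta=0.001))
```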

5 From Arithmetic Random Probing to Noisy Leakage

5.1 Logical Programs

The definition of a logical program is analogous to the definition of an arithmetic program but it is composed of logical instructions over \(\{0,1\}^w\) such as the bitwise AND, OR, XOR, logical shifts and rotations, as well as the addition, subtraction, and multiplication modulo \(2^w\) (namely typical instructions of a w-bit processor). In the \(\varepsilon \)-noisy leakage model, a logical program leaks an \(\varepsilon \)-noisy leakage function \(f(\mu _j, \mu _k)\) of the pair of inputs of each logical instruction \(\mu _i \leftarrow \mu _j * \mu _k\).

The security reduction of Duc et al. (Lemma 1) then implies that a logical program \(\varPi \) that is secure against \(\delta \)-random-probing leakage is also secure against \(\delta '\)-noisy leakage with \(\delta ' = \delta /{2^{2w}}\).

5.2 A Generic Reduction

We then have the following reduction from the random-probing model for logical programs to the random-probing model for arithmetic programs:

Lemma 4

Let \(\varPi \) be an \(\varepsilon \)-leakage secure arithmetic program in the \(\delta \)-random-probing model. Then there exists a functionally equivalent logical program \(\varPi '\) that is \(\varepsilon \)-leakage secure in the \(\delta '\)-random-probing model for some \(\delta '\) satisfying

$$\begin{aligned} \delta ' = 1 - (1 - \delta )^{1/N} \ge \frac{\delta }{N} ~~~\text {with}~~~ N = O\Big (\frac{1}{w}\log |\mathbb {F}| \log \big ( \frac{1}{w}\log |\mathbb {F}| \big ) \Big ). \end{aligned}$$
(46)

Proof

The logical program \(\varPi '\) is simply the program \(\varPi \) in which each arithmetic instruction is implemented from several w-bit logical instructions. It is well known that the addition and subtraction on \(\mathbb {F}\) can be computed with \(N = O\big (\frac{1}{w}\log |\mathbb {F}|\big )\) elementary (w-bit) operations, and that the multiplication on \(\mathbb {F}\) can be computed with \(N = O\big (\frac{1}{w}\log |\mathbb {F}| \log \big ( \frac{1}{w}\log |\mathbb {F}| \big ) \big )\) elementary (w-bit) operations.
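As a minimal illustration of this word-level decomposition (a sketch, not the specific routines used by the compiler; the word size W and the function name are illustrative), schoolbook addition of field elements split into w-bit limbs uses a constant number of w-bit operations per limb, hence \(O\big (\frac{1}{w}\log |\mathbb {F}|\big )\) operations in total; the reduction modulo \(|\mathbb {F}|\) is omitted for brevity.

```python
# Schoolbook multi-limb addition from w-bit word operations (sketch).
W = 32                     # illustrative word size w
MASK = (1 << W) - 1

def add_limbs(a_limbs, b_limbs):
    """Add two integers given as little-endian lists of w-bit limbs."""
    out, carry = [], 0
    for x, y in zip(a_limbs, b_limbs):
        s = x + y + carry
        out.append(s & MASK)   # constant number of w-bit operations per limb
        carry = s >> W
    return out, carry
```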

Assume that there exists an adversary \(\mathcal {A}'\) with advantage \(\varepsilon \) that makes use of a \(\delta '\)-random-probing leakage on \(\varPi '\); we then show that there exists an adversary \(\mathcal {A}\) with advantage \(\varepsilon \) that makes use of a \(\delta \)-random-probing leakage on \(\varPi \). Since by assumption no such adversary \(\mathcal {A}\) exists, by contraposition no such adversary \(\mathcal {A}'\) exists either, meaning that \(\varPi '\) is indeed \(\varepsilon \)-leakage secure in the \(\delta '\)-random-probing model.

We construct an adversary \(\mathcal {A}\) that is given the full input to an arithmetic instruction of \(\varPi \) whenever at least one of the corresponding logical instructions leaks in \(\varPi '\). Informally, it is clear that this can only increase the success probability. To make this reasoning formal, we need to construct an adversary \(\mathcal {A}\) that receives the strengthened leakage, resamples it to make its distribution identical to that of the \(\delta '\)-random-probing leakage on \(\varPi '\), and then calls \(\mathcal {A}'\). When \(\mathcal {A}\) receives \(\bot \) as leakage for an arithmetic instruction, it simply sends \(\bot \) to \(\mathcal {A}'\) for all the corresponding logical instructions. When it receives the full input of the arithmetic instruction (meaning that at least one corresponding logical instruction of \(\varPi '\) must leak), it can compute all the inputs of the corresponding logical instructions in \(\varPi '\) and reveals each of them to \(\mathcal {A}'\) with some (biased) probability. Since we do not consider the computational complexity of the adversaries, the easiest way to achieve a perfect simulation is to use rejection sampling. Namely, for every logical instruction in the group, the input is revealed with probability \(\delta '\). If, at the end of the group, no input was revealed, the revealing process is simply restarted for the same group. This way, we have constructed an adversary \(\mathcal {A}\) using a \(\delta \)-random-probing leakage on \(\varPi \) where

$$ \delta =1-(1-\delta ')^{N}, $$

for \(N = O\big (\frac{1}{w}\log |\mathbb {F}| \log \big ( \frac{1}{w}\log |\mathbb {F}| \big ) \big )\). Since by assumption no such adversary exists, this means that no adversary \(\mathcal {A}'\) with advantage \(\varepsilon \) exists that makes use of a \(\delta '\)-random-probing leakage on \(\varPi '\). \(\square \)
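The rejection-sampling step of this simulation can be pictured as follows (a minimal sketch under the stated assumptions: the simulator knows which logical instructions form the group implementing a given arithmetic instruction, and can compute their inputs from the arithmetic-level leakage when it is revealed; the function name is hypothetical).

```python
# Sketch of the rejection-sampling step used by the adversary A to turn the
# stronger arithmetic-level leakage into a perfectly distributed
# delta'-random-probing leakage on the N logical instructions of one group.
import random

def simulate_group_leakage(arith_leak, logical_inputs, delta_prime):
    """arith_leak is None (i.e. bottom) or the full input of the arithmetic
    instruction; logical_inputs are the inputs of the N corresponding logical
    instructions (computable from arith_leak when it is revealed)."""
    if arith_leak is None:
        # The arithmetic instruction did not leak: no logical instruction leaks.
        return [None] * len(logical_inputs)
    # At least one logical instruction must leak: reveal each input with
    # probability delta_prime and restart until the group leaks at least once.
    while True:
        leak = [x if random.random() < delta_prime else None
                for x in logical_inputs]
        if any(v is not None for v in leak):
            return leak
```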

Combining the above lemma with Lemma 1, and considering a constant word-size w, we get a tight reduction from the security of logical programs in the noisy leakage model to the security of arithmetic programs in the random-probing model:

Lemma 5

Let \(\varPi \) be an \(\varepsilon \)-leakage secure arithmetic program in the \(\delta \)-random-probing model. Then there exists a functionally equivalent logical program \(\varPi '\) that is \(\varepsilon \)-leakage secure in the \(\delta '\)-noisy leakage model for some \(\delta '\) satisfying

$$\begin{aligned} \delta ' = \frac{\delta }{O(\log |\mathbb {F}| \log \log |\mathbb {F}|)}. \end{aligned}$$
(47)

5.3 Application to Our Scheme

In the previous section we have shown that for \(\delta = O (1/\log n)\) our construction is \(\varepsilon \)-leakage secure in the \(\delta \)-random-probing model with

$$\begin{aligned} \varepsilon = \mathsf {negl}(\lambda ) + \mathsf {negl}'(n) \end{aligned}$$
(48)

where \(\mathsf {negl}\) and \(\mathsf {negl}'\) are some negligible functions and where \(\lambda \) is some security parameter such that \(\log |\mathbb {F}| = \lambda + \log n\).

By applying the above reduction to our construction (and recalling that we have \(|\mathbb {F}| = O(n)\)), we obtain the following corollary of Theorem 3:

Corollary 3

Let \(\varPi '\) denote the secure logical program corresponding to our construction (see Sect. 4). \(\varPi '\) is \(\varepsilon \)-leakage secure in the \(\delta '\)-noisy leakage model where \(\varepsilon = \mathsf {negl}(\lambda ) + \mathsf {negl}'(n)\) and \(\delta ' = O \big (1/((\log n)^2 \log \log n)\big )\).

6 Practical Aspects and Open Problems

Securing Arbitrary Computation. Although our scheme is described to work on a finite field \(\mathbb {F}\) with a specific structure, it can be used to secure any arbitrary computation represented as a Boolean circuit. Indeed, it is possible to embed a Boolean circuit into an arithmetic program over \(\mathbb {F}\). Each bit is simply represented by an element \(a \in \{0,1\} \subseteq \mathbb {F}\). The binary multiplication then coincides with the \(\mathbb {F}\)-multiplication over this subset. Regarding the binary addition \(\oplus \), it can be implemented with operations over \(\mathbb {F}\) as:

$$\begin{aligned} a \oplus b = a + b - 2 a b, \end{aligned}$$
(49)

for every \(a,b \in \{0,1\} \subseteq \mathbb {F}\). Of course, such an embedding comes at a high cost in practice and our scheme would not be efficient for protecting, e.g., an AES computation. However, our scheme is asymptotically more efficient than previous ISW-based schemes, meaning that there exists some masking order n for which an implementation of our scheme would be more efficient than an implementation of a previous scheme. Moreover, and as discussed hereafter, we think that our scheme could be practically improved in many ways.
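A toy sketch of this embedding over a small prime field (the field size and function names are illustrative only) is:

```python
# Embedding Boolean gates into arithmetic over F (toy prime for illustration;
# the scheme itself works over a much larger field).
p = 97  # illustrative prime

def xor_F(a, b):
    # a XOR b = a + b - 2ab (Eq. (49)), valid for a, b in {0, 1} seen in F
    return (a + b - 2 * a * b) % p

def and_F(a, b):
    # binary AND coincides with the field multiplication on {0, 1}
    return (a * b) % p

for a in (0, 1):
    for b in (0, 1):
        assert xor_F(a, b) == (a ^ b)
        assert and_F(a, b) == (a & b)
```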

Practical Efficiency. For any cryptographic computation on a base field \(\mathbb {F}\) with appropriate structure, our scheme should be very efficient in practice. We recall that the field should be such that \(|\mathbb {F}| = \alpha \cdot n+1 \), for n being a power of 2 and \(\alpha \) being large enough so that \(n/\alpha \) is negligible. A 256-bit prime field such as those used in Elliptic Curve Cryptography could for instance satisfy these criteria. An interesting open issue would be to extend our scheme to work on other algebraic structures and in particular on binary fields (e.g. to efficiently secure the AES) or on rings used in lattice-based cryptography.
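As an illustration of the constraint \(|\mathbb {F}| = \alpha \cdot n + 1\) with n a power of 2, the following sketch searches for such a prime. The bit lengths and function names are illustrative, and the primality test is a plain probabilistic Miller–Rabin included only to keep the sketch self-contained; taking the target bit length much larger than \(\log n\) keeps the ratio \(n/\alpha \) small.

```python
# Searching for a prime field size |F| = alpha * n + 1 with n = 2**k (sketch).
import random

def is_probable_prime(p, rounds=20):
    if p < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if p % small == 0:
            return p == small
    d, s = p - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, p - 1)
        x = pow(a, d, p)
        if x in (1, p - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, p)
            if x == p - 1:
                break
        else:
            return False
    return True

def find_ntt_friendly_prime(k, target_bits):
    """Return a prime p = alpha * 2**k + 1 of roughly target_bits bits."""
    n = 1 << k
    alpha = (1 << (target_bits - k)) + 1
    while True:
        p = alpha * n + 1
        if is_probable_prime(p):
            return p, alpha
        alpha += 2  # keep alpha odd, so the power-of-two part of p - 1 is exactly n

# Example: n = 2**10 and a roughly 64-bit modulus (illustrative sizes).
print(find_ntt_friendly_prime(k=10, target_bits=64))
```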

On the Size of the Field. We note that we need a ‘big’ field (typically of size \(128 + 2 \log n\)) in order to have enough randomness when picking \(\omega \). However this might be a proof artefact and the scheme could be secure for some constant \(\omega \) and/or using smaller fields. Another direction of improvement would be to mitigate or remove this constraint with an improved construction and/or proof technique.

Packing Encodings. Finally, our scheme could probably also be improved by using the principle of packed secret sharing, as suggested in [2, 3], since our encoding is a kind of randomized Shamir's secret sharing.