Abstract
We construct a two-message oblivious transfer (OT) protocol without setup that guarantees statistical privacy for the sender even against malicious receivers. Receiver privacy is game-based and relies on the hardness of learning with errors (LWE). This flavor of OT has been a central building block for minimizing the round complexity of witness-indistinguishable and zero-knowledge proof systems, non-malleable commitment schemes and multi-party computation protocols, as well as for achieving circuit privacy for homomorphic encryption in the malicious setting. Prior to this work, all candidates in the literature from standard assumptions relied on number-theoretic assumptions and were thus insecure in the post-quantum setting. This work provides the first (presumed) post-quantum secure candidate, and thus makes it possible to instantiate the aforementioned applications in a post-quantum secure manner.
Technically, we rely on the transference principle: Either a lattice or its dual must have short vectors. Short vectors, in turn, can be translated to information loss in encryption. Thus encrypting one message with respect to the lattice and one with respect to its dual guarantees that at least one of them will be statistically hidden.
The full version of this paper is available at https://eprint.iacr.org/2018/530.
Z. Brakerski—Supported by the Israel Science Foundation (Grant No. 468/14), Binational Science Foundation (Grants No. 2016726, 2014276), and by the European Union Horizon 2020 Research and Innovation Program via ERC Project REACT (Grant 756482) and via Project PROMETHEUS (Grant 780701).
1 Introduction
Oblivious transfer (OT), introduced by Rabin [32], is one of the most fundamental cryptographic tasks. A sender (S) holds two values \(\mu _0, \mu _1\) and a receiver (R) holds a bit \(\beta \). The functionality should allow the receiver to learn \(\mu _\beta \) and nothing else, while the sender learns nothing. OT has been a fundamental building block for many cryptographic applications, in particular ones related to secure multi-party computation (MPC), starting with [15, 35].
A central measure for the complexity of a protocol or a proof system is its round complexity. One could imagine a protocol implementing the OT functionality with only two messages: a first message from the receiver to the sender, and a second message from the sender to the receiver. Indeed, in the semi-honest setting, where parties are assumed to follow the protocol, this can be achieved based on a variety of concrete cryptographic assumptions (Decisional Diffie-Hellman, Quadratic Residuosity, Decisional Composite Residuosity, Learning with Errors, to name a few), as well as based on generic assumptions such as trapdoor permutations, additively homomorphic encryption and public key encryption with oblivious public key generation (e.g. [7, 13]).
In the malicious setting, where an adversarial party might deviate from the designated protocol, the ultimate simulation-based security notion cannot be achieved in a two-message protocol (without assuming setup such as a common random string or a random oracle) [16]. The standard security notion in this setting, which originated from the works of Naor and Pinkas [27] and Aiello et al. [1], and was further studied in [3, 18, 21], provides a meaningful relaxation of the standard (simulation-based) security notion. This definition requires that the receiver’s only message is computationally indistinguishable between the cases of \(\beta =0\) and \(\beta =1\), and that regardless of the receiver’s first message, the sender’s message statistically hides at least one of \(\mu _0, \mu _1\). Alternative equivalent formulations are simulation using a computationally unbounded (or exponential-time) simulator, or the existence of a computationally unbounded (or exponential-time) extractor that can extract a \(\beta \) value from any receiver message.
With the aforementioned connection to secure MPC, it is not surprising that this notion of malicious statistical sender-private OT (SSP-OT) has found numerous applications, particularly in recent years as the round complexity of MPC and related objects is pushed to the necessary minimum. Badrinarayanan et al. [3], Jain et al. [19] and Kalai et al. [22] used it to construct two-message witness-indistinguishable proof systems, and even restricted forms of zero-knowledge proof systems.
Badrinarayanan et al. [4] used similar techniques to present malicious MPC with minimal round complexity (4-rounds). In particular, their building blocks are SSP-OT and a 3-round semi-malicious MPC protocol (a comparable result was achieved by Halevi et al. [17] using different techniques, in particular requiring NIZK/ZAP). Khurana and Sahai [24] used SSP-OT to construct two-message non-malleable commitment schemes (with respect to the commitment), and Khurana [23] used it (together with ZAPs) to achieve 3-round non-malleable commitments from polynomial assumptions. Badrinarayanan et al. [5] relied on SSP-OT to construct 3-round concurrent MPC.
Ostrovsky, Paskin-Cherniavsky and Paskin-Cherniavsky [28] used SSP-OT to show that any fully homomorphic encryption scheme (FHE) can be converted to one that is statistically circuit private even against maliciously generated public keys and ciphertexts.
Our Results and Applications. Prior to this work it was only known how to construct SSP-OT from number-theoretic assumptions such as DDH [1, 27], QR and DCR [18]. If setup is allowed, specifically a common random string, then an LWE-based construction by Peikert, Vaikuntanathan and Waters [31] achieves strong simulation security (even in the UC model). However, the aforementioned applications require a construction without setup and could therefore not be instantiated in a post-quantum secure manner. In this work, we construct SSP-OT from the learning with errors (LWE) assumption [33] with polynomial noise ratio, which translates to the hardness of approximating short-vector problems (such as SIVP or GapSVP) to within a polynomial factor. Currently, no polynomial-time quantum algorithm is known for these problems, and thus they serve as a major candidate for constructing post-quantum secure cryptography.
Relying on our construction, it is possible for the first time to instantiate the works of [3, 5, 19, 22, 24] from LWE, i.e. in a post-quantum secure manner, and obtain proof systems with witness-indistinguishable or (limited) zero-knowledge properties, as well as non-malleable commitment schemes and concurrent MPC protocols. It is also possible to construct a round-optimal malicious MPC protocol from LWE by applying the result of [4] to our SSP-OT and the LWE-based 3-round semi-malicious MPC of Brakerski et al. [8]. Lastly, our result makes it possible to achieve maliciously circuit-private FHE from LWE by instantiating the [28] result with our LWE-based SSP-OT and relying on the numerous existing LWE-based FHE schemes. We stress that none of these applications had prior post-quantum secure candidates.
1.1 Technical Overview
Our construction relies on some fundamental properties of lattices. For our purposes we will only consider the so called q-ary lattices that can be described as follows. Given a matrix \(\mathbf {{A}}\in \mathbb {Z}_q^{n \times m}\) for some modulus q and \(m \ge n\), we can define \(\varLambda _q(\mathbf {{A}}) = \{ \mathbf {{y}} \in \mathbb {Z}^m : \mathbf {{y}} = \mathbf {{s}}\mathbf {{A}} \pmod {q} \}\) which is the lattice defined by the row-span of \(\mathbf {{A}}\), and \(\varLambda ^{\perp }_q(\mathbf {{A}}) = \{ \mathbf {{x}} \in \mathbb {Z}^m : \mathbf {{A}}\mathbf {{x}} = \mathbf {{0}} \pmod {q} \}\) which is the lattice defined by the kernel of \(\mathbf {{A}}\). Note that both lattices have rank m over the integers, i.e. they contain a set of m linearly independent vectors over the integers (but not modulo q), since they contain \(q \cdot \mathbb {Z}^m\). There is a duality relation between these two lattices, both induced by the matrix \(\mathbf {{A}}\), and this relation will be instrumental for our methods.
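To make the two q-ary lattices concrete, the following toy Python sketch (parameters q = 7, n = 2, m = 4 are purely illustrative, far from cryptographic size) checks membership in \(\varLambda _q(\mathbf {{A}})\) and \(\varLambda ^{\perp }_q(\mathbf {{A}})\) by brute force, and verifies that both contain \(q \cdot \mathbb {Z}^m\) and that the two lattices are orthogonal modulo q, reflecting the duality relation.

```python
import numpy as np
from itertools import product

# Toy q-ary lattice example (illustrative parameters, not cryptographic).
q, n, m = 7, 2, 4
A = np.array([[1, 2, 3, 4],
              [0, 1, 5, 6]])  # n x m, full rank mod q

def in_row_lattice(y):
    """y in Lambda_q(A) iff y = s*A (mod q) for some s in Z_q^n."""
    y = np.asarray(y) % q
    return any(np.array_equal(np.array(s) @ A % q, y)
               for s in product(range(q), repeat=n))

def in_kernel_lattice(x):
    """x in Lambda_q^perp(A) iff A*x = 0 (mod q)."""
    return bool(np.all(A @ np.asarray(x) % q == 0))

# Both lattices contain q * Z^m, hence have full rank m over the integers.
for i in range(m):
    qe = np.zeros(m, dtype=int)
    qe[i] = q
    assert in_row_lattice(qe) and in_kernel_lattice(qe)

# Duality: <y, x> = 0 (mod q) for y in Lambda_q(A), x in Lambda_q^perp(A),
# reflecting (Lambda_q(A))^* = (1/q) * Lambda_q^perp(A).
y = np.array([3, 5]) @ A % q
kernel_pts = [np.array(x) for x in product(range(q), repeat=m)
              if in_kernel_lattice(x)]
assert all(int(y @ x) % q == 0 for x in kernel_pts)
```

Since A is full rank modulo the prime q, the kernel lattice has exactly \(q^{m-n}\) points per residue cube, which the enumeration above confirms.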
An important fact about lattices is that a good basis enables decoding. Specifically, if \(\varLambda ^{\perp }_q(\mathbf {{A}})\) contains m linearly independent vectors (over the integers) of length at most \(\ell \), then it is possible to decode vectors of the form \(\mathbf {{s}}\mathbf {{A}}+\mathbf {{e}}\pmod {q}\), i.e. to recover \(\mathbf {{s}}, \mathbf {{e}}\), whenever \(\left\| {\mathbf {{e}}} \right\| \) is sufficiently smaller than \(q/\ell \). Such a short basis is sometimes called a trapdoor for \(\mathbf {{A}}\).
Consider sampling \(\mathbf {{s}}\) uniformly in \(\mathbb {Z}_q^n\) and \(\mathbf {{e}}\) from a Gaussian s.t. \(\left\| {\mathbf {{e}}} \right\| \) is slightly below the decoding capability \(q/\ell \). Then if \(\varLambda ^{\perp }_q(\mathbf {{A}})\) indeed has an \(\ell \)-basis then \(\mathbf {{s}}, \mathbf {{e}}\) can be recovered from \(\mathbf {{s}}\mathbf {{A}}+\mathbf {{e}}\pmod {q}\). However, a critical observation for us is that this encoding becomes lossy if the lattice \(\varLambda _q(\mathbf {{A}})\) contains a vector of norm \({\ll } q/\ell \). That is, in this case it is information theoretically impossible to recover the original \(\mathbf {{s}}\). This is because the component of \(\mathbf {{s}}\mathbf {{A}}\) that is in the direction of the short vector is masked by the noise \(\mathbf {{e}}\) (which is Gaussian and thus has a component in every direction). This property was also used by Goldreich and Goldwasser [14] to show that some lattice problems are in \(\mathbf {coAM}\).
To utilize this structure for our purposes, we specify the OT receiver message to be a matrix \(\mathbf {{A}}\). Then the OT sender generates \(\mathbf {{s}}\mathbf {{A}}+\mathbf {{e}}\pmod {q}\) and encodes one of its inputs, say \(\mu _1\), using entropy from the vector \(\mathbf {{s}}\) (e.g. using a randomness extractor). We get that this value is recoverable if \(\mathbf {{A}}\) has an \(\ell \)-basis and information-theoretically hidden if \(\varLambda _q(\mathbf {{A}})\) has a short vector. If the receiver’s choice bit is 1, all it needs to do is generate \(\mathbf {{A}}\) that has an \(\ell \)-trapdoor, and there are many well-known methods to generate such \(\mathbf {{A}}\)’s that are statistically indistinguishable from uniform (starting from [2], with numerous follow-ups). In order to complete the OT functionality we need to find a way to encode \(\mu _0\) in a way that is lossy if \(\varLambda _q(\mathbf {{A}})\) has no short vector. This will guarantee that regardless of the (possibly malicious) choice of matrix \(\mathbf {{A}}\), either \(\mu _0\) or \(\mu _1\) is information-theoretically hidden.
Let us examine the case where all vectors in \(\varLambda _q(\mathbf {{A}})\) are of length \({\gg } t\) for some parameter t. Then the duality relations expressed in Banaszczyk’s transference theorems [6] guarantee that \(\varLambda ^{\perp }_q(\mathbf {{A}})\) has a basis of length \({\ll } q/t\). In such a case we can use the smoothing principle to conclude that if \(\mathbf {{x}}\) is a discrete Gaussian with parameter q / t then \(\mathbf {{A}}\mathbf {{x}}\pmod {q}\) is statistically close to uniform. We can thus instruct the sender to compute \(\mathbf {{A}}\mathbf {{x}} + \mathbf {{d}}\pmod {q}\) for some vector \(\mathbf {{d}}\), and encode \(\mu _0\) using entropy extracted from \(\mathbf {{d}}\). This guarantees lossiness if \(\varLambda _q(\mathbf {{A}})\) has no short vectors, as required. Furthermore, it is possible to generate a pseudorandom \(\mathbf {{A}}\) (under the LWE assumption) and specify \(\mathbf {{d}}\) such that \(\mathbf {{d}}\) is recoverable (this \(\mathbf {{A}}\) corresponds to the public key in Regev’s original encryption scheme [33]).
All that is left is to set the relation between \(\ell , t, q\) so as to make sure that if one mode of the OT is decodable then the other is lossy. One may wonder whether a valid setting of parameters exists at all, but in fact there is considerable slack in the choice of parameters. We can start by setting \(\ell , t\) to be some fixed polynomials in n that suffice to guarantee correct recovery in the respective cases. This can be done regardless of the value of q. We will set the parameter q to ensure that if \(\mu _1\) is recoverable then \(\mu _0\) is not, which is sufficient to guarantee statistical sender privacy against a malicious receiver. Specifically, if \(\mu _1\) is recoverable then \(\varLambda _q(\mathbf {{A}})\) contains no vector of length at most \(q/(k \ell )\), where k is some polynomial in n (that does not depend on q), and thus \(\varLambda ^{\perp }_q(\mathbf {{A}})\) has a \(k \ell \)-basis. We therefore require that \(q/t \gg k\ell \), or equivalently \(q \gg k \ell t\), which guarantees that \(\mu _0\) is not recoverable in this case. Since \(k, \ell , t\) are fixed polynomials in n, it is sufficient to choose q to be a sufficiently larger polynomial than the product \(k \ell t\) to guarantee security. Receiver privacy is guaranteed since \(\mathbf {{A}}\) is either statistically indistinguishable from uniform if the choice bit \(\beta \) is 1, or computationally indistinguishable from uniform if \(\beta = 0\).
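As a sanity check on this parameter discussion, the short Python sketch below plugs in hypothetical polynomial choices for \(\ell , t, k\) (these placeholders are ours, not the paper's concrete choices) and verifies the two inequalities \(q/t \gg k\ell \) and \(q \gg k\ell t\) across a range of dimensions.

```python
# Sanity check of the parameter relations in the outline above. The concrete
# polynomials below are hypothetical placeholders, chosen only to illustrate
# that a fixed polynomial q can dominate the product k * ell * t.
def check_params(n):
    ell = n**2   # assumed bound on the trapdoor basis length (placeholder)
    t = n**2     # assumed shortness threshold for Lambda_q(A) (placeholder)
    k = n        # assumed slack factor: polynomial in n, independent of q
    q = n**6     # modulus: a sufficiently larger polynomial than k * ell * t
    # mu_0 is lossy when the Gaussian parameter q/t dominates the k*ell
    # basis bound obtained from the transference argument:
    return q // t > k * ell and q > k * ell * t

assert all(check_params(n) for n in (4, 16, 64, 256))
```

The point is only that all constraints are simultaneously satisfiable with polynomial q; the actual scheme fixes its own polynomials.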
Disadvantages of the Basic Solution, and Our Actual Improved Scheme. The proposal above can indeed be used to implement an SSP-OT. However, when actual parameters are assigned, it becomes apparent that the argument about the lossiness of \(\mathbf {{s}}\) given \(\mathbf {{s}}\mathbf {{A}}+\mathbf {{e}}\pmod {q}\), when \(\varLambda _q(\mathbf {{A}})\) has some short vector, does not produce sufficient randomness to allow extraction. This can be resolved by repetition (many \(\mathbf {{s}}\) values with the same \(\mathbf {{A}}\)). However, the lossiness argument for \(\mathbf {{d}}\) guarantees much more, and in fact allows extracting random bits from \(\mathbf {{d}}\) deterministically. The consequence is an unnecessarily inefficient scheme. In particular, the information rate is inverse polynomial in the security parameter of the scheme.
The scheme we actually introduce and analyze is therefore a balanced version of the above outline, where we “pay” in weakening the lossiness in \(\mathbf {{d}}\) in exchange for strengthening the lossiness for \(\mathbf {{s}}\), which leads to a scheme with information rate \(\widetilde{\varOmega }(1)\) (achieving constant information rate while preserving statistical security remains an intriguing question). Towards this end, we introduce refinements of known lattice tools that may be of independent interest.
The idea is to improve the lossiness in \(\mathbf {{s}}\) by considering the case where \(\varLambda _q(\mathbf {{A}})\) has multiple short vectors, instead of just one. Intuitively, this will introduce entropy into additional components of \(\mathbf {{s}}\), thus increasing the lossiness. We formalize this by considering the Gaussian measure of \(\varLambda _q(\mathbf {{A}})\). A high Gaussian measure translates (at least intuitively) to the existence of a multitude of short vectors; formally, it characterizes the potency of \(\mathbf {{e}}\) to hide information about \(\mathbf {{s}}\). The formal argument goes through the optimal Voronoi-cell decoder; see Sect. 3 for a formal statement and additional details.
Of course the lossiness in \(\mathbf {{s}}\) needs to be complemented by lossiness in \(\mathbf {{d}}\) if the Gaussian measure of \(\varLambda _q(\mathbf {{A}})\) is small, which translates to having few independent short vectors in \(\varLambda _q(\mathbf {{A}})\). We show that in this case we can derive a partial smoothing guarantee, where for a Gaussian \(\mathbf {{x}}\), the value \(\mathbf {{A}}\mathbf {{x}}\pmod {q}\) is no longer uniform, but rather is uniform over some subspace modulo q. If the dimension of this subspace is large enough, we can get lossiness for the vector \(\mathbf {{d}}\) and complete the security proof. Partial smoothing and its implications are discussed in Sect. 4.
To apply these principles we need to slightly modify the definition of the vector \(\mathbf {{d}}\) and the matrix \(\mathbf {{A}}\) in the case of \(\beta =0\). Now \(\mathbf {{A}}\) will no longer correspond to the public key of the Regev scheme but rather, interestingly, to the public key of the batched scheme introduced in [31] (which is also concerned with constructing OT, but allowing setup). The complete construction and analysis can be found in Sect. 5.
2 Preliminaries
2.1 Statistical Sender-Private Two-Message Oblivious Transfer
We now define the object of main interest in this work, namely SSP-OT. We only define the two-message perfect-correctness variant, since this is what we achieve in this work. A two-message oblivious transfer protocol consists of a tuple of ppt algorithms \((\mathsf {OTR}, \mathsf {OTS}, \mathsf {OTD})\) with the following syntax.
-
\(\mathsf {OTR}(1^\lambda , \beta )\) takes the security parameter \(\lambda \) and a selection bit \(\beta \) and outputs a message \(\mathsf {ot_1}\) and secret state \(\mathsf {st}\).
-
\(\mathsf {OTS}(1^\lambda , (\mu _0, \mu _1), \mathsf {ot_1})\) takes the security parameter \(\lambda \), two inputs \((\mu _0, \mu _1) \in \{0,1\}^\mathsf {len}\) (where \(\mathsf {len}\) is a parameter of the scheme) and a message \(\mathsf {ot_1}\). It outputs a message \(\mathsf {ot_2}\).
-
\(\mathsf {OTD}(1^\lambda , \beta , \mathsf {st}, \mathsf {ot_2})\) takes the security parameter, the bit \(\beta \), secret state \(\mathsf {st}\) and message \(\mathsf {ot_2}\) and outputs \(\mu ' \in \{0,1\}^\mathsf {len}\).
Correctness and security are defined as follows.
Definition 2.1
A tuple \((\mathsf {OTR}, \mathsf {OTS}, \mathsf {OTD})\) is a SSP-OT scheme if the following hold.
-
Correctness. For all \(\lambda , \beta , \mu _0, \mu _1\), letting \((\mathsf {ot_1}, \mathsf {st}) = \mathsf {OTR}(1^\lambda , \beta )\), \(\mathsf {ot_2}= \mathsf {OTS}(1^\lambda , (\mu _0, \mu _1), \mathsf {ot_1})\), \(\mu ' = \mathsf {OTD}(1^\lambda , \beta , \mathsf {st}, \mathsf {ot_2})\), it holds that \(\mu ' = \mu _\beta \) with probability 1.
-
Receiver Privacy. Consider the distribution \(\mathcal{D}_\beta (\lambda )\) defined by running \((\mathsf {ot_1}, \mathsf {st}) = \mathsf {OTR}(1^\lambda , \beta )\) and outputting \(\mathsf {ot_1}\). Then \(\mathcal{D}_0, \mathcal{D}_1\) are computationally indistinguishable.
-
Statistical Sender Privacy. There exists an extractor \(\mathsf {OTExt}\) (possibly computationally unbounded) s.t. for any sequence of messages \(\mathsf {ot_1}= \mathsf {ot_1}(\lambda )\) and inputs \((\mu _0, \mu _1) = (\mu _0(\lambda ), \mu _1(\lambda ))\), the distribution ensembles \(\mathsf {OTS}(1^\lambda , (\mu _0, \mu _1), \mathsf {ot_1})\) and \(\mathsf {OTS}(1^\lambda , (\mu _{\beta '}, \mu _{\beta '}), \mathsf {ot_1})\), where \(\beta '=\mathsf {OTExt}(\mathsf {ot_1})\), are statistically indistinguishable.
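For readers who prefer code, the following Python sketch pins down the \((\mathsf {OTR}, \mathsf {OTS}, \mathsf {OTD})\) syntax and the perfect-correctness condition. The instantiation is a deliberately trivial placeholder with no privacy whatsoever (both inputs travel in the clear); it only illustrates the two-message flow and the requirement \(\mu ' = \mu _\beta \).

```python
# Syntax-only sketch of a two-message OT (OTR, OTS, OTD). This placeholder
# instantiation has NO privacy whatsoever -- it exists purely to illustrate
# the interfaces and the perfect-correctness condition of Definition 2.1.
def OTR(lam, beta):
    ot1 = b"first message"      # in the real scheme: the matrix A
    st = beta                   # secret state kept by the receiver
    return ot1, st

def OTS(lam, mus, ot1):
    mu0, mu1 = mus
    ot2 = (mu0, mu1)            # in the real scheme: lossy encodings
    return ot2

def OTD(lam, beta, st, ot2):
    return ot2[st]              # receiver recovers mu_beta

def run_ot(beta, mu0, mu1, lam=128):
    ot1, st = OTR(lam, beta)
    ot2 = OTS(lam, (mu0, mu1), ot1)
    return OTD(lam, beta, st, ot2)

# Correctness: mu' = mu_beta with probability 1, for all inputs.
assert run_ot(0, "m0", "m1") == "m0"
assert run_ot(1, "m0", "m1") == "m1"
```

In the actual construction of Sect. 5, \(\mathsf {ot_1}\) carries the matrix \(\mathbf {{A}}\) and \(\mathsf {ot_2}\) carries the two lossy encodings described in the overview.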
2.2 Linear Algebra, Min-Entropy and Extractors
Random Matrices: The probability that a uniformly random matrix \(\mathbf {{A}} \leftarrow \mathbb {Z}_2^{n \times m}\) (with \(m \ge n\)) has full rank is given by
$$\Pr [\mathbf {{A}} \text { has full rank}] = \prod _{i=0}^{n-1} \left( 1 - 2^{i-m} \right) \ge 1 - \sum _{i=0}^{n-1} 2^{i-m} \ge 1 - 2^{n-m},$$
where the first inequality follows from the union-bound.
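The full-rank probability over \(\mathbb {Z}_2\) can be verified exhaustively in a toy dimension. The sketch below (our illustration, with \(n = 2\), \(m = 3\)) compares the exact count against the standard product formula \(\prod _{i=0}^{n-1}(1 - 2^{i-m})\) and the union-bound estimate \(1 - 2^{n-m}\).

```python
# Exhaustive check of the full-rank probability for uniform 2 x 3 matrices
# over Z_2. A 2-row matrix over Z_2 has full rank iff both rows are nonzero
# and distinct (the only dependencies over Z_2 are r = 0 and r1 = r2).
n, m = 2, 3
total = full_rank = 0
for r1 in range(2**m):          # rows encoded as m-bit integers
    for r2 in range(2**m):
        total += 1
        if r1 != 0 and r2 != 0 and r1 != r2:
            full_rank += 1

exact = full_rank / total
formula = (1 - 2**(0 - m)) * (1 - 2**(1 - m))   # prod_{i<n} (1 - 2^(i-m))
assert abs(exact - formula) < 1e-12
assert exact >= 1 - 2**(n - m)                   # union-bound lower bound
```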
Average Conditional Min-Entropy. Let X be a random variable supported on a finite set \(\mathcal {X}\) and let Z be a (possibly correlated) random variable supported on a finite set \(\mathcal {Z}\). The average conditional min-entropy \(\tilde{H}_\infty ( X | Z )\) of X given Z is defined as
$$\tilde{H}_\infty ( X | Z ) = -\log \left( \mathop {\mathbb {E}}_{z \leftarrow Z} \left[ \max _{x \in \mathcal {X}} \Pr [X = x \mid Z = z] \right] \right) .$$
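The definition can be evaluated directly on small joint distributions. The following Python helper (our illustration) computes \(\tilde{H}_\infty (X|Z)\) as minus the log of the average best guessing probability; in the example, X is two uniform bits and Z reveals the first bit, leaving exactly one bit of average min-entropy.

```python
from collections import defaultdict
from math import log2

def avg_cond_min_entropy(joint):
    """H~_inf(X|Z) = -log2( E_z [ max_x Pr[X = x | Z = z] ] ),
    computed from a joint distribution given as a dict {(x, z): prob}."""
    by_z = defaultdict(dict)
    for (x, z), p in joint.items():
        by_z[z][x] = by_z[z].get(x, 0.0) + p
    # E_z[max_x Pr[x|z]] = sum_z max_x Pr[x, z]  (the z-marginal cancels)
    guess = sum(max(px.values()) for px in by_z.values())
    return -log2(guess)

# X = two uniform bits, Z = first bit: one bit of entropy remains on average.
joint = {((a, b), a): 0.25 for a in (0, 1) for b in (0, 1)}
assert avg_cond_min_entropy(joint) == 1.0
```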
We will use the following easy-to-establish fact about uniform distributions on binary vector spaces: If \(\mathsf {U}, \mathsf {V} \subseteq \mathbb {Z}_2^n\) are sub-vector spaces of \(\mathbb {Z}_2^n\), and if \(\mathbf {{u}}\) is uniform on \(\mathsf {U}\) and \(\mathbf {{v}}\) is uniform on \(\mathsf {V}\) (independently), then it holds that \(\mathbf {{u}} + \mathbf {{v}}\) is uniform on \(\mathsf {U} + \mathsf {V}\).
Extractors. A function \(\mathsf {Ext}: \{ 0,1 \}^d \times \mathcal {X} \rightarrow \{0,1\}^\ell \) is called a seeded strong average-case \((k,\epsilon )\)-extractor if, for all random variables X with support \(\mathcal {X}\) and Z defined on some finite support with \(\tilde{H}_\infty (X | Z) \ge k\), it holds that
$$\varDelta \big ( (\mathsf {seed}, \mathsf {Ext}(\mathsf {seed}, X), Z), \ (\mathsf {seed}, \mathbf {{u}}, Z) \big ) \le \epsilon ,$$
where \(\mathsf {seed}\) is uniform on \(\{0,1\}^d\) and \(\mathbf {{u}}\) is uniform on \(\{0,1\}^\ell \). Such extractors can be constructed from universal hash functions [11, 12]. In fact, any extractor is an average-case extractor for slightly worse parameters, by the averaging principle.
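A minimal sketch of the universal-hash construction, assuming the seed is the hash function itself: a uniformly random binary matrix \(\mathbf {{M}}\), so that \(\mathsf {Ext}(\mathbf {{M}}, \mathbf {{x}}) = \mathbf {{M}} \mathbf {{x}}\) over \(\mathbb {Z}_2\). By the leftover hash lemma this family is a strong extractor; the code only exercises the syntax and output length, not the statistical guarantee.

```python
import random

def ext(seed_matrix, x):
    """Strong extractor from a 2-universal hash: Ext(M, x) = M * x over Z_2.
    The seed is the random binary matrix M itself (leftover hash lemma)."""
    return tuple(sum(row[j] * x[j] for j in range(len(x))) % 2
                 for row in seed_matrix)

rng = random.Random(7)
n_bits, out_len = 8, 3
seed = [[rng.randrange(2) for _ in range(n_bits)] for _ in range(out_len)]
x = (1, 0, 1, 1, 0, 0, 1, 0)
out = ext(seed, x)
assert len(out) == out_len and all(b in (0, 1) for b in out)
```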
2.3 Lattices
We recall the standard facts about lattices. A lattice \(\varLambda \subseteq {\mathbb R}^m\) is the set of all integer-linear combinations of a set of linearly independent basis-vectors, i.e. for every lattice \(\varLambda \) there exists a full-rank matrix \(\mathbf {{B}} \in {\mathbb R}^{k \times m}\) such that \(\varLambda = \varLambda (\mathbf {{B}}) = \{ \mathbf {{z}} \cdot \mathbf {{B}} \ | \ \mathbf {{z}} \in \mathbb {Z}^k \}\). We call k the rank of \(\varLambda \) and \(\mathbf {{B}}\) a basis of \(\varLambda \). More generally, for a set \(S \subseteq \varLambda \) we denote by \(\varLambda (S)\) the smallest sub-lattice of \(\varLambda \) which contains S. Moreover, we will write \(\mathsf {rank}(S)\) to denote \(\mathsf {rank}(\varLambda (S))\).
The dual lattice \(\varLambda ^*= \varLambda ^*(\varLambda )\) of a lattice \(\varLambda \) is defined by \(\varLambda ^*(\varLambda ) = \{ \mathbf {{x}} \in \mathrm {span}(\varLambda ) \ | \ \forall \mathbf {{y}} \in \varLambda : \langle \mathbf {{x}}, \mathbf {{y}} \rangle \in \mathbb {Z}\}\). Note that it holds that \((\varLambda ^*)^*= \varLambda \). The determinant of a lattice \(\varLambda \) is defined by \(\det \varLambda = \sqrt{\det (\mathbf {{B}} \cdot \mathbf {{B}}^\top )}\) where \(\mathbf {{B}}\) is any basis of \(\varLambda \). It holds that \(\det \varLambda ^*= 1 / \det \varLambda \). If \(\varLambda = \varLambda (\mathbf {{B}})\) and the norm of each row of \(\mathbf {{B}}\) is at most \(\ell \), then an argument using Gram-Schmidt orthogonalization establishes \(\det \varLambda \le \ell ^k\).
For a basis \(\mathbf {{B}} \in {\mathbb R}^{k \times m}\) of \(\varLambda \), we define the parallelepiped of \(\mathbf {{B}}\) by \(\mathcal {P}(\mathbf {{B}}) = \{ \mathbf {{x}} \cdot \mathbf {{B}} \ | \ \mathbf {{x}} \in [-1/2,1/2 )^k \}\). Abusing notation, we write \(\mathcal {P}(\varLambda )\) to denote \(\mathcal {P}(\mathbf {{B}})\) for some canonical basis \(\mathbf {{B}}\) of \(\varLambda \) (such as e.g. a Hermite basis). For lattices \(\varLambda \subseteq \varLambda _0\), we will use \(\mathcal {P}(\varLambda ) \cap \varLambda _0\) as a system of (unique) representatives for the quotient group \(\varLambda _0 / \varLambda \).
We say that a lattice is q-ary if \((q \mathbb {Z})^m \subseteq \varLambda \subseteq \mathbb {Z}^m\). In particular, for every q-ary lattice \(\varLambda \) there exists a matrix \(\mathbf {{A}} \in \mathbb {Z}_q^{k \times m}\) such that \(\varLambda = \varLambda _q(\mathbf {{A}}) = \{ \mathbf {{y}} \in \mathbb {Z}^m \ | \ \exists \mathbf {{x}} \in \mathbb {Z}_q^k: \mathbf {{y}} = \mathbf {{x}} \cdot \mathbf {{A}} ( \ \mathsf {mod} \ q) \}\). We also define the lattice \(\varLambda _q^\bot (\mathbf {{A}}) = \{ \mathbf {{y}} \in \mathbb {Z}^m \ | \ \mathbf {{A}} \cdot \mathbf {{y}} = 0 ( \ \mathsf {mod} \ q)\}\). It holds that \((\varLambda _q(\mathbf {{A}}))^*= \frac{1}{q} \varLambda _q^\bot (\mathbf {{A}})\).
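The counting behind these determinant and duality facts can be checked exhaustively for tiny parameters. The sketch below (our toy example, q = 3, n = 1, m = 3, with a fixed full-rank A) counts lattice points per fundamental cube \([0,q)^m\) and confirms that the determinants of \(\varLambda _q(\mathbf {{A}})\) and \(\frac{1}{q}\varLambda _q^\bot (\mathbf {{A}})\) multiply to 1.

```python
import numpy as np
from itertools import product

# Toy check of the q-ary lattice counting/duality facts (q = 3, n = 1, m = 3).
q, n, m = 3, 1, 3
A = np.array([[1, 2, 0]])  # 1 x 3, full rank mod q

pts = list(product(range(q), repeat=m))
row_pts = {p for p in pts
           if any(tuple((s * A[0]) % q) == p for s in range(q))}
kern_pts = {p for p in pts if np.all((A @ np.array(p)) % q == 0)}

# Lambda_q(A) has q^n points per cube [0,q)^m, so det = q^(m-n);
# Lambda_q^perp(A) has q^(m-n) points, so det = q^n.
assert len(row_pts) == q**n and len(kern_pts) == q**(m - n)
det_row = q**m / len(row_pts)               # det of Lambda_q(A)
det_kern_scaled = (q**m / len(kern_pts)) / q**m   # det of (1/q) Lambda_q^perp
# Duality (Lambda_q(A))^* = (1/q) Lambda_q^perp(A): determinants multiply to 1.
assert abs(det_row * det_kern_scaled - 1.0) < 1e-9
```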
Gaussians. The Gaussian function \(\rho _\sigma : {\mathbb R}^m \rightarrow {\mathbb R}\) is defined by
$$\rho _\sigma (\mathbf {{x}}) = e^{- \pi \Vert \mathbf {{x}} \Vert ^2 / \sigma ^2}.$$
For a countable set \(S \subseteq {\mathbb R}^m\) we write \(\rho _\sigma (S) = \sum _{\mathbf {{x}} \in S} \rho _\sigma (\mathbf {{x}})\).
For a lattice \(\varLambda \subseteq {\mathbb R}^m\) and a parameter \(\sigma > 0\), we define the discrete Gaussian distribution \(D_{\varLambda ,\sigma }\) on \(\varLambda \) as the distribution with probability-mass function \(\Pr [\mathbf {{x}} = \mathbf {{x}}'] = \rho _\sigma (\mathbf {{x}}') / \rho _\sigma (\varLambda )\) for all \(\mathbf {{x}}' \in \varLambda \). In the following, let \(\mathcal {B}= \{ \mathbf {{x}} \in {\mathbb R}^m \ | \ \Vert \mathbf {{x}} \Vert \le 1 \}\) be the closed ball of radius 1 in \({\mathbb R}^m\). A standard concentration inequality for discrete Gaussians on general lattices is provided by Banaszczyk’s theorem.
Theorem 2.2
([6]). For any lattice \(\varLambda \subseteq {\mathbb R}^m\), parameter \(\sigma > 0\) and \(u \ge 1/\sqrt{2\pi }\) it holds that
$$\rho _\sigma \left( \varLambda \setminus u \sigma \sqrt{m} \cdot \mathcal {B}\right) \le 2^{- c_u \cdot m} \cdot \rho _\sigma (\varLambda ),$$
where \(c_u = - \log (\sqrt{2 \pi e} u \cdot e^{-\pi u^2})\).
Setting \(\varLambda = \mathbb {Z}^m\) and \(u = 1\) in Theorem 2.2 we obtain the following corollary.
Corollary 2.3
Let \(\sigma > 0\) and \(\mathbf {{x}} \leftarrow D_{\mathbb {Z}^m, \sigma }\). Then it holds that \(\Vert \mathbf {{x}} \Vert \le \sigma \cdot \sqrt{m}\), except with probability \(2^{-m}\).
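The tail bound of Corollary 2.3 is easy to observe empirically. The sketch below samples \(D_{\mathbb {Z},\sigma }\) coordinate-wise by explicit inversion over a truncated support (an illustrative sampler, not a constant-time or exact one) and checks \(\Vert \mathbf {{x}} \Vert \le \sigma \sqrt{m}\) on every draw.

```python
import math
import random

def sample_dgauss_z(sigma, rng, tail=10):
    """Sample D_{Z, sigma} over the truncated support [-tail*sigma, tail*sigma];
    the discarded tail mass is negligible for the parameters used here."""
    lo = -int(tail * sigma) - 1
    support = list(range(lo, -lo + 1))
    weights = [math.exp(-math.pi * (x / sigma) ** 2) for x in support]
    return rng.choices(support, weights=weights)[0]

rng = random.Random(1)
m, sigma = 64, 4.0
for _ in range(50):
    x = [sample_dgauss_z(sigma, rng) for _ in range(m)]
    norm = math.sqrt(sum(v * v for v in x))
    # Corollary 2.3: ||x|| <= sigma * sqrt(m), except with probability 2^-m.
    assert norm <= sigma * math.sqrt(m)
```

In fact the typical norm is far below the bound (around \(\sigma \sqrt{m/2\pi }\)), which is why the assertion never fires in practice.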
Uniform Matrix Distributions with Decoding Trapdoor. For our construction we will need an efficiently samplable ensemble of matrices which is statistically close to uniform and is equipped with an efficient bounded-distance-decoder. Such an ensemble was first constructed by Ajtai [2] for q-ary lattices with prime q. We use a more efficient ensemble due to Micciancio and Peikert [25] which works for arbitrary modulus.
Lemma 2.4
([25]). Let \(\kappa (n) = \omega (\sqrt{\log (n)})\) be any function that grows faster than \(\sqrt{\log (n)}\) and let \(\tau \) be a sufficiently large constant. There exists a pair of algorithms \((\mathsf {SampleWithTrapdoor},\mathsf {Decode})\) such that if \((\mathbf {{A}},\mathsf {td}) \leftarrow \mathsf {SampleWithTrapdoor}(q,n)\), then \(\mathbf {{A}}\) is of size \(n \times m\) with \(m = m(q,n) = O(n \cdot \log (q))\) and \(\mathbf {{A}}\) is \(2^{-n}\)-close to uniform. For any \(\mathbf {{s}} \in \mathbb {Z}_q^n\) and \(\varvec{{\eta }} \in \mathbb {Z}^m\) with \(\Vert \varvec{{\eta }} \Vert < \frac{q}{\sqrt{m} \cdot \kappa (n)}\), the algorithm \(\mathsf {Decode}\) on input \(\mathsf {td}\) and \(\mathbf {{s}} \cdot \mathbf {{A}} + \varvec{{\eta }}\) will output \(\mathbf {{s}}\).
2.4 Learning with Errors
The learning with errors (LWE) problem was defined by Regev [33]. In this work we exclusively use the decisional version. The \(\mathrm {LWE}_{n,m,q,\chi }\) problem, for \(n,m,q\in {\mathbb N}\) and for a distribution \(\chi \) supported over \(\mathbb {Z}\), is to distinguish between the distributions \((\mathbf {{A}}, \mathbf {{s}}\mathbf {{A}}+\mathbf {{e}} \pmod {q})\) and \((\mathbf {{A}}, \mathbf {{u}})\), where \(\mathbf {{A}}\) is uniform in \(\mathbb {Z}_q^{n \times m}\), \(\mathbf {{s}}\) is a uniform row vector in \(\mathbb {Z}_q^n\), \(\mathbf {{e}}\) is a row vector drawn from \(\chi ^m\), and \(\mathbf {{u}}\) is a uniform vector in \(\mathbb {Z}_q^m\). Often we consider the hardness of solving \(\mathrm {LWE}\) for any \(m=\mathrm{poly}(n \log q)\). This problem is denoted \(\mathrm {LWE}_{n,q,\chi }\). The matrix version of this problem asks to distinguish \((\mathbf {{A}},\mathbf {{S}}\cdot \mathbf {{A}} + \mathbf {{E}})\) from \((\mathbf {{A}},\mathbf {{U}})\), where \(\mathbf {{S}} \leftarrow \mathbb {Z}_q^{k \times n}\), \(\mathbf {{E}} \leftarrow \chi ^{k \times m}\), and \(\mathbf {{U}} \leftarrow \mathbb {Z}_q^{k \times m}\). The hardness of the matrix version for any \(k = \mathrm{poly}(n)\) can be established from \(\mathrm {LWE}_{n,m,q,\chi }\) via a routine hybrid-argument.
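The two LWE distributions are easy to write down explicitly. The following sketch is ours: noise is approximated by a rounded continuous Gaussian rather than an exact \(D_{\mathbb {Z},\sigma }\) sampler, and the parameters are far too small to be secure; it produces one "LWE" pair and one uniform pair.

```python
import numpy as np

def lwe_samples(n, m, q, sigma, rng):
    """Return (A, b = s*A + e mod q) and a uniform pair (A, u). The noise is
    a rounded continuous Gaussian -- an approximation of D_{Z, sigma} for
    illustration only."""
    A = rng.integers(0, q, size=(n, m))
    s = rng.integers(0, q, size=n)
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(int)
    b = (s @ A + e) % q
    u = rng.integers(0, q, size=m)
    return (A, b), (A, u)

rng = np.random.default_rng(0)
(A, b), (_, u) = lwe_samples(n=8, m=16, q=97, sigma=2.0, rng=rng)
assert A.shape == (8, 16) and b.shape == (16,)
assert ((0 <= b) & (b < 97)).all() and ((0 <= u) & (u < 97)).all()
```

The decisional assumption is precisely that no efficient algorithm can tell which of the two returned pairs is which.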
As shown in [30, 33], the \(\mathrm {LWE}_{n,q,\chi }\) problem with \(\chi \) being the discrete Gaussian distribution with parameter \(\sigma = \alpha q \ge 2 \sqrt{n}\) (i.e. the distribution over \(\mathbb {Z}\) where the probability of x is proportional to \(e^{-\pi (\left| {x} \right| /\sigma )^2}\), see more details below), is at least as hard as approximating the shortest independent vector problem (\(\mathsf {SIVP}\)) to within a factor of \(\gamma = {\widetilde{O}}({n}/\alpha )\) in worst case dimension n lattices. This is proven using a quantum reduction. Classical reductions (to a slightly different problem) exist as well [9, 29] but with somewhat worse parameters. The best known (classical or quantum) algorithms for these problems run in time \(2^{{\widetilde{O}}(n/\log \gamma )}\), and in particular they are conjectured to be intractable for \(\gamma = \mathrm{poly}(n)\).
3 Lossy Modes for q-Ary Lattices
The following lemmata borrow techniques of the proofs of two lemmata by Chung et al. [10] (Lemmas 3.3 and 3.4), but are not directly implied by these lemmata. In this section and Sect. 4, it will be instructive to think of \(\varLambda _0\) as \(\mathbb {Z}^n\), which will be the case in our application in Sect. 5.
Lemma 3.1
Let \(\varLambda \subseteq \varLambda _0 \subseteq {\mathbb R}^m\) be full rank lattices and let \(T \subseteq \varLambda _0\) be a system of coset representatives of \(\varLambda _0 / \varLambda \), i.e. we can write every \(\mathbf {{x}} \in \varLambda _0\) as \(\mathbf {{x}} = \mathbf {{t}} + \mathbf {{z}}\) for unique \(\mathbf {{t}} \in T\) and \(\mathbf {{z}} \in \varLambda \). Then it holds for any parameter \(\sigma > 0\) that
$$\rho _\sigma (T) \le \frac{\rho _\sigma (\varLambda _0)}{\rho _\sigma (\varLambda )}.$$
Proof
As the sets \(T + \mathbf {{y}}\) for \(\mathbf {{y}} \in \varLambda \) cover \(\varLambda _0\), it holds that
$$\rho _\sigma (\varLambda _0) = \frac{1}{2} \sum _{\mathbf {{y}} \in \varLambda } \big ( \rho _\sigma (T + \mathbf {{y}}) + \rho _\sigma (T - \mathbf {{y}}) \big ) \ge \sum _{\mathbf {{y}} \in \varLambda } \rho _\sigma (\mathbf {{y}}) \cdot \rho _\sigma (T) = \rho _\sigma (\varLambda ) \cdot \rho _\sigma (T),$$
where the first equality follows from the fact that \(\sum _{\mathbf {{y}} \in \varLambda }\rho _\sigma (T + \mathbf {{y}}) = \sum _{\mathbf {{y}} \in \varLambda }\rho _\sigma (T - \mathbf {{y}}) = \rho _\sigma (\varLambda _0)\), and the inequality holds term by term since \(\rho _\sigma (\mathbf {{t}} + \mathbf {{y}}) + \rho _\sigma (\mathbf {{t}} - \mathbf {{y}}) = \rho _\sigma (\mathbf {{t}}) \rho _\sigma (\mathbf {{y}}) \left( e^{-2\pi \langle \mathbf {{t}}, \mathbf {{y}} \rangle /\sigma ^2} + e^{2\pi \langle \mathbf {{t}}, \mathbf {{y}} \rangle /\sigma ^2} \right) \ge 2 \rho _\sigma (\mathbf {{t}}) \rho _\sigma (\mathbf {{y}})\). The claim follows immediately.
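The bound \(\rho _\sigma (T) \le \rho _\sigma (\varLambda _0)/\rho _\sigma (\varLambda )\) can be checked numerically in dimension one. The sketch below (our illustration) takes \(\varLambda _0 = \mathbb {Z}\), \(\varLambda = 5\mathbb {Z}\) and the centered coset representatives \(T = \{-2,\dots ,2\}\), with sums truncated at a range where the Gaussian tail is negligible.

```python
import math

def rho(sigma, points):
    """Gaussian mass rho_sigma(S) = sum_{x in S} exp(-pi * x^2 / sigma^2)."""
    return sum(math.exp(-math.pi * (x / sigma) ** 2) for x in points)

sigma = 2.0
Z_trunc = range(-200, 201)        # truncation of Lambda_0 = Z (tiny tail)
L_trunc = range(-200, 201, 5)     # truncation of Lambda   = 5Z
T = [-2, -1, 0, 1, 2]             # coset representatives of Z / 5Z

# Lemma 3.1 in dimension one: rho(T) <= rho(Lambda_0) / rho(Lambda).
assert rho(sigma, T) <= rho(sigma, Z_trunc) / rho(sigma, L_trunc)
```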
Lemma 3.2
Fix a matrix \(\mathbf {{A}}\in \mathbb {Z}_q^{n \times m}\) with \(m = O(n \log (q))\) and a parameter \(0< \sigma < \frac{q}{2 \sqrt{m}}\). Let \(\mathbf {{s}} \leftarrow \mathbb {Z}_q^n\) be uniform and let \(\mathbf {{e}} \leftarrow D_{\mathbb {Z}^m, \sigma }\). Then it holds that \(\tilde{H}_\infty ( \mathbf {{s}} | \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}} \ \mathsf {mod} \ q) \ge -\log \left( \frac{1}{\rho _{\sigma }(\varLambda _q(\mathbf {{A}}))} + 2^{-m}\right) \).
Proof
Given arbitrary \(\mathbf {{A}}\) and \(\mathbf {{y}}\), we would like to find an \(\mathbf {{s}}^*\) that maximizes the probability \(\Pr [\mathbf {{s}} = \mathbf {{s}}^*| \mathbf {{y}} = \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}}]\). By Bayes’ rule, it holds that
$$\Pr [\mathbf {{s}} = \mathbf {{s}}^*\mid \mathbf {{y}} = \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}}] = \frac{\Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}]}{\sum _{\mathbf {{s}}'} \Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}' \mathbf {{A}}]}.$$
As the denominator \(\sum _{\mathbf {{s}}'} \Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}' \mathbf {{A}}]\) is independent of \(\mathbf {{s}}^*\), it suffices to maximize the numerator \(\Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}]\) with respect to \(\mathbf {{s}}^*\). As \( \Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}] = \frac{\rho _{\sigma }(\mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}})}{\rho _{\sigma }(\mathbb {Z}^m)}\) is monotonically decreasing in \(\Vert \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}\Vert \), this probability is maximal for the \(\mathbf {{s}}^*\) that minimizes \(\Vert \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}\Vert \).
Let \(V \subseteq \mathbb {Z}^m\) be the discretized Voronoi-cell of \(\varLambda _q(\mathbf {{A}})\), that is, V consists of all the points in \(\mathbb {Z}^m\) that are (strictly) closer to 0 than to any other point in \(\varLambda _q(\mathbf {{A}})\); for any point \(\mathbf {{x}} \in \mathbb {Z}^m\) that is equi-distant to several lattice points \(\mathbf {{z}}_1,\dots ,\mathbf {{z}}_\ell \) (where \(\mathbf {{z}}_1 = 0\)), we assume that there is some tie-breaking rule \(\mathbf {{x}} \mapsto i(\mathbf {{x}})\), such that \(\mathbf {{x}} - \mathbf {{z}}_{i(\mathbf {{x}})} \in V\), but for all \(j \in [\ell ] \backslash \{ i(\mathbf {{x}}) \}\) it holds that \(\mathbf {{x}} - \mathbf {{z}}_j \notin V\). By construction, V is a system of coset representatives of \(\mathbb {Z}^m / \varLambda _q(\mathbf {{A}})\).
Moreover, for the maximum-likelihood \(\mathbf {{s}}^*\) it holds that \(\Pr [\mathbf {{s}} = \mathbf {{s}}^*| \mathbf {{y}} = \mathbf {{s}}\mathbf {{A}}+ \mathbf {{e}}] = \Pr [\mathbf {{e}} \ \mathsf {mod} \ q \in V]\). By Corollary 2.3 it holds that \(\Vert \mathbf {{e}} \Vert \le \sigma \cdot \sqrt{m} < q/2\), except with probability \(2^{-m}\). Moreover, conditioned on \(\Vert \mathbf {{e}} \Vert < q/2\) the events \(\mathbf {{e}} \ \mathsf {mod} \ q \in V\) and \(\mathbf {{e}} \in V\) are equivalent. We can therefore bound \(\Pr [\mathbf {{e}} \ \mathsf {mod} \ q \in V] \le \Pr [\mathbf {{e}} \in V] + 2^{-m}\). By Lemma 3.1 we obtain \(\Pr [\mathbf {{e}} \in V] \le \frac{\rho _\sigma (V)}{\rho _\sigma (\mathbb {Z}^m)} \le \frac{1}{\rho _{\sigma }(\varLambda _q(\mathbf {{A}}))}\) and therefore \( \Pr [\mathbf {{e}} \ \mathsf {mod} \ q \in V] \le \frac{1}{\rho _{\sigma }(\varLambda _q(\mathbf {{A}}))} + 2^{-m}\).
We conclude that \(\max _{\mathbf {{s}}^*\in \mathbb {Z}_q^n} \Pr [\mathbf {{s}} = \mathbf {{s}}^*| \mathbf {{y}} = \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}}] =\Pr [\mathbf {{e}} \ \mathsf {mod} \ q \in V] \le \frac{1}{\rho _{\sigma }(\varLambda _q(\mathbf {{A}}))} + 2^{-m}\). Thus, it holds that \(\tilde{H}_\infty (\mathbf {{s}} \ | \ \mathbf {{s}}\mathbf {{A}} + \mathbf {{e}}) \ge - \log \left( \frac{1}{\rho _{\sigma }(\varLambda _q(\mathbf {{A}}))} + 2^{-m} \right) \).
4 Partial Smoothing
In this section we will state a variant of the smoothing lemma of Micciancio and Regev [26]. Consider a discrete Gaussian \(D_{\varLambda _0,\sigma }\) on a lattice \(\varLambda _0\). As in the setting of the smoothing lemma of [26], we want to analyze what happens to the distribution of this Gaussian when we reduce it modulo a sublattice \(\varLambda \subseteq \varLambda _0\). The new lemma states that if the mass of the Fourier-transform of the probability-mass function of \(D_{\varLambda _0,\sigma } \ \mathsf {mod} \ \varLambda \) is concentrated on short vectors of the dual lattice \(\varLambda ^*\), then \(D_{\varLambda _0,\sigma } \ \mathsf {mod} \ \varLambda \) will be uniform on a certain sublattice \(\varLambda _1\) with \(\varLambda \subseteq \varLambda _1 \subseteq \varLambda _0\).
Lemma 4.1
Let \(\sigma > 0\) and let \(\varLambda \subseteq \varLambda _0 \subseteq {\mathbb R}^n\) be full-rank lattices where \(\det (\varLambda _0) = 1\). Furthermore, let \(\gamma > 0\). Define \(\varLambda _1 = \{ \mathbf {{z}} \in \varLambda _0 \ | \ \forall \mathbf {{y}} \in \varLambda ^*\cap \gamma \mathcal {B}: \langle \mathbf {{y}},\mathbf {{z}} \rangle \in \mathbb {Z}\}\). Given that \(\rho _{1/\sigma }(\varLambda ^*\backslash \gamma \mathcal {B}) \le \epsilon \), it holds that \(\mathbf {{y}} \approx _{\epsilon } (\mathbf {{y}} + \mathbf {{u}}) \ \mathsf {mod} \ \varLambda \), where \(\mathbf {{y}} \leftarrow D_{\varLambda _0,\sigma } \ \mathsf {mod} \ \varLambda \) and \(\mathbf {{u}}\) is uniformly random on \(\mathcal {P}(\varLambda ) \cap \varLambda _1\).
Notice that for the case of \(\varLambda ^*\cap \gamma \mathcal {B}= \{ 0 \}\) we recover the standard smoothing lemma of [26]. The proof of Lemma 4.1 uses standard Fourier-analytic techniques akin to [26] and is deferred to Appendix A. We will make use of the following consequence of Lemma 4.1.
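For intuition, the standard-smoothing special case \(\varLambda ^*\cap \gamma \mathcal {B}= \{ 0 \}\) can be verified numerically in dimension one. The sketch below is ours (the choices \(q = 10\), \(\sigma = 50\) and the truncation bound are illustrative, not from the paper): for \(\varLambda _0 = \mathbb {Z}\), \(\varLambda = q\mathbb {Z}\) and \(\sigma \) well above the smoothing parameter of \(q\mathbb {Z}\), the discrete Gaussian \(D_{\mathbb {Z},\sigma }\) reduced mod q is statistically close to uniform on \(\mathbb {Z}_q\).

```python
import math

# Numerical sanity check of the smoothing effect in dimension one (the
# special case Lambda^* ∩ gamma*B = {0}): D_{Z,sigma} mod q is nearly
# uniform on Z_q when sigma >> smoothing parameter of q*Z.
# All concrete numbers here are illustrative choices, not the paper's.

def rho(x, sigma):
    return math.exp(-math.pi * x * x / (sigma * sigma))

def gaussian_mod_q(sigma, q, trunc):
    """Probability mass of D_{Z,sigma} mod q, truncated to |z| <= trunc."""
    mass = [0.0] * q
    for z in range(-trunc, trunc + 1):
        mass[z % q] += rho(z, sigma)
    total = sum(mass)
    return [m / total for m in mass]

q, sigma = 10, 50.0
pmf = gaussian_mod_q(sigma, q, trunc=2000)
stat_dist = 0.5 * sum(abs(p - 1.0 / q) for p in pmf)
# The Fourier-side bound predicts a distance on the order of
# rho_{1/sigma}((q*Z)^* \ {0}) ~ exp(-pi * sigma^2 / q^2), tiny here.
print(stat_dist)
```

The computed distance is dominated by floating-point noise, far below any cryptographically meaningful bound, matching the exponentially small Fourier-side prediction.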
Corollary 4.2
Let \(q > 0\) be an integer and let \(\gamma > 0\). Let \(\mathbf {{A}}\in \mathbb {Z}_q^{n \times m}\) and let \(\sigma > 0\) and \(\epsilon > 0\) be such that \(\rho _{q/\sigma }(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B}) \le \epsilon \). Let \(\mathbf {{D}} \in \mathbb {Z}_q^{k \times m}\) be a full-rank (and therefore minimal) matrix with \(\varLambda _q^\bot (\mathbf {{D}}) = \{ \mathbf {{x}} \in \mathbb {Z}^m \ | \ \forall \mathbf {{y}} \in \varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}: \langle \mathbf {{x}},\mathbf {{y}} \rangle = 0 \pmod {q} \}\). Let \(\mathbf {{x}} \leftarrow D_{\mathbb {Z}^m,\sigma }\) and let \(\mathbf {{u}}\) be uniformly random on \(\varLambda _q^{\bot }(\mathbf {{D}}) \cap [0,q)^m\). Then it holds that \(\mathbf {{A}} \mathbf {{x}} \ \mathsf {mod} \ q \approx _\epsilon \mathbf {{A}} (\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ q\).
Proof
Setting \(\varLambda _0 = \mathbb {Z}^m\), \(\varLambda = \varLambda _q^\bot (\mathbf {{A}})\) and \(\gamma ' = \gamma /q\), it holds that \(\varLambda ^*= \frac{1}{q} \varLambda _q(\mathbf {{A}})\) and \(\rho _{1/\sigma }(\varLambda ^*\backslash \gamma ' \mathcal {B}) = \rho _{q/\sigma }(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B}) \le \epsilon \).
Therefore, we can set \(\varLambda _1 = \varLambda _q^{\bot }(\mathbf {{D}})\).
Now it holds by Lemma 4.1 that \(\mathbf {{x}} \ \mathsf {mod} \ \varLambda _q^{\bot }(\mathbf {{A}}) \approx _\epsilon (\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ \varLambda _q^{\bot } (\mathbf {{A}})\). Write \(\mathbf {{y}}_1 = \mathbf {{x}} \ \mathsf {mod} \ \varLambda _q^{\bot }(\mathbf {{A}})\) as \(\mathbf {{y}}_1 = \mathbf {{x}} + \mathbf {{z}}_1 \ \mathsf {mod} \ q\) for a suitable \(\mathbf {{z}}_1 \in \varLambda _q^{\bot }(\mathbf {{A}})\). Likewise, we can write \(\mathbf {{y}}_2 = \mathbf {{x}} + \mathbf {{u}} \ \mathsf {mod} \ \varLambda _q^{\bot }(\mathbf {{A}})\) as \(\mathbf {{y}}_2 = \mathbf {{x}} + \mathbf {{u}} + \mathbf {{z}}_2 \ \mathsf {mod} \ q\) for a suitable \(\mathbf {{z}}_2 \in \varLambda _q^{\bot }(\mathbf {{A}})\). Thus it holds that \(\mathbf {{A}} \mathbf {{y}}_1 = \mathbf {{A}} \mathbf {{x}} \ \mathsf {mod} \ q\) and \(\mathbf {{A}} \mathbf {{y}}_2 = \mathbf {{A}} (\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ q\), and consequently \(\mathbf {{A}} \mathbf {{x}} \ \mathsf {mod} \ q \approx _\epsilon \mathbf {{A}} (\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ q\).
We will also use the following lower bound on the Gaussian measure of lattices that have many short linearly independent vectors. The proof of Lemma 4.3 is technically similar to the proof of the transference theorem in [6].
Lemma 4.3
Let \(\varLambda \subseteq {\mathbb R}^m\), \(\sigma > 0\) and \(\gamma > 0\) be such that \(\varLambda \cap \gamma \mathcal {B}\) contains at least k linearly independent vectors. Then it holds that \(\rho _\sigma (\varLambda ) \ge (\sigma /\gamma )^k\).
Proof
Let \(\varLambda ' \subseteq \varLambda \) be the sublattice generated by the vectors in \(\varLambda \cap \gamma \mathcal {B}\) and let k be the dimension of the span of \(\varLambda '\). As \(\varLambda ' \subseteq \varLambda \), it holds that \(\rho _\sigma (\varLambda ) \ge \rho _\sigma (\varLambda ')\). As \(\varLambda '\) has a basis of length at most \(\gamma \), we have that \(\det (\varLambda ') \le \gamma ^k\) and conclude \(\det ((\varLambda ')^*) = 1/\det (\varLambda ') \ge \frac{1}{\gamma ^k}\). By the Poisson-summation formula, we get that \(\rho _\sigma (\varLambda ') = \sigma ^k \cdot \det ((\varLambda ')^*) \cdot \rho _{1/\sigma }((\varLambda ')^*) \ge \frac{\sigma ^k}{\gamma ^k}\),
as \(\rho _{1/\sigma }((\varLambda ')^*) \ge 1\). Thus we conclude that \(\rho _\sigma (\varLambda ) \ge (\sigma /\gamma )^k\).
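The bound of Lemma 4.3 can be spot-checked numerically in an easy special case. For the sublattice \(\varLambda ' = \gamma \mathbb {Z}^k\) (k orthogonal vectors of length \(\gamma \)), the Gaussian mass factors as \(\rho _\sigma (\gamma \mathbb {Z}^k) = \rho _\sigma (\gamma \mathbb {Z})^k\), so it suffices to check the one-dimensional bound \(\rho _\sigma (\gamma \mathbb {Z}) \ge \sigma /\gamma \). The concrete values of \(\sigma , \gamma , k\) and the truncation in the sketch below are our illustrative choices.

```python
import math

# Numerical spot check of the Lemma 4.3 lower bound for the orthogonal
# sublattice gamma * Z^k, reduced to one dimension via the product
# structure rho_sigma(gamma*Z^k) = rho_sigma(gamma*Z)^k.
# The parameter choices are illustrative, not the paper's.

def rho_scaled_Z(sigma, gamma, trunc=10000):
    """Truncated Gaussian mass sum_z exp(-pi * (gamma*z)^2 / sigma^2)."""
    return sum(math.exp(-math.pi * (gamma * z) ** 2 / sigma ** 2)
               for z in range(-trunc, trunc + 1))

checks = []
for sigma, gamma, k in [(2.0, 1.0, 3), (3.0, 2.0, 5)]:
    mass_1d = rho_scaled_Z(sigma, gamma)
    checks.append(mass_1d ** k >= (sigma / gamma) ** k)
print(checks)
```

By the Poisson-summation argument above, the one-dimensional mass equals \((\sigma /\gamma ) \cdot \rho _{\gamma /\sigma }(\mathbb {Z}) \ge \sigma /\gamma \), so both checks succeed with a small positive margin.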
5 Our Oblivious Transfer Protocol
We are now ready to provide our statistically sender private oblivious transfer protocol. In the following, let \(q,n,\ell = \mathrm{poly}(\lambda )\) and assume that q is of the form \(q = 2p\) for an odd p. Let \((\mathsf {SampleWithTrapdoor},\mathsf {Decode})\) be the pair of algorithms provided in Lemma 2.4 and let \(m = m(q,n)\) be such that the matrices \(\mathbf {{A}}\) generated by \(\mathsf {SampleWithTrapdoor}(q,2n)\) are elements of \(\mathbb {Z}_q^{2n \times m}\). Let \(\mathsf {Ext}_0: \{0,1\}^d \times \{0,1\}^n \rightarrow \{0,1\}^\ell \) and \(\mathsf {Ext}_1: \{0,1\}^d \times \mathbb {Z}_q^{2n}\rightarrow \{0,1\}^\ell \) be seeded extractors, both with seed-length d and \(\ell \) bits of output. Finally, let \(\sigma _0, \sigma _1 > 0\) be parameters for discrete Gaussians and \(\chi \) be an LWE error-distribution.
The protocol \(\mathsf {OT}= (\mathsf {OTR},\mathsf {OTS},\mathsf {OTD})\) is given as follows.
-
\(\mathsf {OTR}(1^\lambda ,\beta \in \{0,1\})\):
-
If \(\beta = 0\), choose a matrix \(\mathbf {{A}}_1 \leftarrow \mathbb {Z}_q^{n \times m}\), a matrix \(\mathbf {{S}} \leftarrow \mathbb {Z}_q^{n \times n}\) and a matrix \(\mathbf {{E}} \leftarrow \chi ^{n \times m}\). Set \(\mathbf {{A}}_2 \leftarrow \mathbf {{S}} \cdot \mathbf {{A}}_1 + \mathbf {{E}}\) and \(\mathbf {{A}} \leftarrow \begin{pmatrix} \mathbf {{A}}_1 \\ \mathbf {{A}}_2 \end{pmatrix}\). Repeat this step until \(\mathbf {{A}} \ \mathsf {mod} \ 2\) has full rank.
Output \(\mathsf {ot}_1 \leftarrow \mathbf {{A}}\) and \(\mathsf {st}\leftarrow \mathbf {{S}}\).
-
If \(\beta = 1\), sample \((\mathbf {{A}},\mathsf {td}) \leftarrow \mathsf {SampleWithTrapdoor}(q,2n)\). Repeat this step until \(\mathbf {{A}} \ \mathsf {mod} \ 2\) has full rank. Output \(\mathsf {ot}_1 \leftarrow \mathbf {{A}}\) and \(\mathsf {st}\leftarrow \mathsf {td}\).
-
-
\(\mathsf {OTS}(1^\lambda , (\mu _0,\mu _1) \in (\{0,1\}^\ell )^2, \mathsf {ot}_1 = \mathbf {{A}})\):
-
Check if \(\mathbf {{A}} \ \mathsf {mod} \ 2\) has full rank, if not output \(\bot \).
-
Parse \(\mathbf {{A}} = \begin{pmatrix} \mathbf {{A}}_1 \\ \mathbf {{A}}_2 \end{pmatrix}\). Sample \(\mathbf {{x}} \leftarrow D_{\mathbb {Z}^m,\sigma _0}\) and reject until \(\Vert \mathbf {{x}} \Vert < \sigma _0 \sqrt{m}\). Choose a uniformly random \(\mathbf {{r}} \leftarrow \{0,1\}^n\) and a random seed \(\mathsf {s}_0\) for the extractor \(\mathsf {Ext}_0\). Compute \(\mathbf {{y}}_1 \leftarrow \mathbf {{A}}_1\mathbf {{x}}\) and \(\mathbf {{y}}_2 \leftarrow \mathbf {{A}}_2 \mathbf {{x}} + \frac{q}{2} \cdot \mathbf {{r}}\). Set \(\mathsf {c}_0 \leftarrow (\mathbf {{y}}_1,\mathbf {{y}}_2,\mathsf {s}_0,\mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}) \oplus \mu _0)\).
-
Sample \(\varvec{{\eta }} \leftarrow D_{\mathbb {Z}^m,\sigma _1}\) and reject until \(\Vert \varvec{{\eta }} \Vert < \sigma _1 \sqrt{m}\). Choose a uniformly random \(\mathbf {{t}} \leftarrow \mathbb {Z}_q^{2n}\) and a seed \(\mathsf {s}_1\) for the extractor \(\mathsf {Ext}_1\). Compute \(\mathbf {{y}} \leftarrow \mathbf {{t}} \cdot \mathbf {{A}}+\varvec{{\eta }}\) and set \(\mathsf {c}_1 \leftarrow (\mathbf {{y}},\mathsf {s}_1,\mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}) \oplus \mu _1)\).
-
Output \(\mathsf {ot}_2 \leftarrow (\mathsf {c}_0,\mathsf {c}_1)\).
-
-
\(\mathsf {OTD}(\beta ,\mathsf {st},\mathsf {ot}_2 = (\mathsf {c}_0,\mathsf {c}_1))\):
-
If \(\beta = 0\): Parse \(\mathsf {st}= \mathbf {{S}}\) and \(\mathsf {c}_0 = (\mathbf {{y}}_1,\mathbf {{y}}_2,\mathsf {s}_0,\tau )\). Compute \(\mathbf {{r}}' \leftarrow \left\lfloor \mathbf {{y}}_2 - \mathbf {{S}} \cdot \mathbf {{y}}_1 \right\rceil _{q/2}\) and output \(\mu _0' \leftarrow \mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}') \oplus \tau \).
-
If \(\beta = 1\): Parse \(\mathsf {st}= \mathsf {td}\) and \(\mathsf {c}_1 = (\mathbf {{y}},\mathsf {s}_1,\tau )\). Compute \(\mathbf {{t}}' \leftarrow \mathsf {Decode}(\mathsf {td},\mathbf {{y}})\) and output \(\mu _1' \leftarrow \mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}') \oplus \tau \).
-
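As a sanity check, the three algorithms above can be sketched in Python with toy parameters. This is our own illustrative sketch, not the paper's implementation: the extractors \(\mathsf {Ext}_0, \mathsf {Ext}_1\) are omitted (the raw bits \(\mathbf {{r}}\) and \(\mathbf {{t}} \ \mathsf {mod} \ 2\) serve directly as one-time pads), the discrete Gaussians are replaced by bounded coin flips, the noise \(\varvec{{\eta }}\) is set to zero so that the trapdoor decoder of Lemma 2.4 can be replaced by brute force, and the parameters are far too small for any security.

```python
import random

random.seed(7)
# Toy parameters (ours, far too small for security): q = 2p with p odd,
# n = 1, so A has 2n = 2 rows; the real scheme sets q, n, m = poly(lambda).
p = 31
q, n, m = 2 * p, 1, 8

def centered(v):
    """Representative of v mod q in (-q/2, q/2]."""
    v %= q
    return v - q if v > q // 2 else v

def full_rank_mod2(A):
    """Rank-2 test over GF(2) for the 2 x m matrix A."""
    r1 = tuple(a % 2 for a in A[0])
    r2 = tuple(a % 2 for a in A[1])
    return any(r1) and any(r2) and r1 != r2

def otr(beta):
    """Receiver's first message. For beta = 1 the real scheme samples A
    with a lattice trapdoor; here the 'trapdoor' is brute-force decoding."""
    while True:
        if beta == 0:
            A1 = [random.randrange(q) for _ in range(m)]
            s = random.randrange(q)                      # LWE secret S (1 x 1)
            E = [random.choice([-1, 0, 1]) for _ in range(m)]
            A = [A1, [(s * a + e) % q for a, e in zip(A1, E)]]
            st = s
        else:
            A = [[random.randrange(q) for _ in range(m)] for _ in range(2)]
            st = A
        if full_rank_mod2(A):
            return A, st

def ots(mu0, mu1, A):
    """Sender's message for one-bit mu0, mu1; extractors omitted."""
    assert full_rank_mod2(A)
    x = [random.randrange(2) for _ in range(m)]          # stand-in for D_{Z^m,sigma_0}
    r = random.randrange(2)
    y1 = sum(A[0][j] * x[j] for j in range(m)) % q
    y2 = (sum(A[1][j] * x[j] for j in range(m)) + (q // 2) * r) % q
    c0 = (y1, y2, mu0 ^ r)
    t = [random.randrange(q), random.randrange(q)]
    # eta = 0 in this toy so that brute-force decoding is exact; the real
    # scheme adds discrete Gaussian noise with ||eta|| < sigma_1 sqrt(m).
    y = [(t[0] * A[0][j] + t[1] * A[1][j]) % q for j in range(m)]
    c1 = (y, mu1 ^ (t[0] % 2))
    return c0, c1

def otd(beta, st, ot2):
    c0, c1 = ot2
    if beta == 0:
        y1, y2, tau = c0
        v = (y2 - st * y1) % q                           # = <E,x> + (q/2) r mod q
        r = 0 if min(v, q - v) < abs(v - q // 2) else 1  # round to {0, q/2}
        return tau ^ r
    y, tau = c1
    A = st
    _, t0, _ = min((sum(centered(y[j] - (t0 * A[0][j] + t1 * A[1][j])) ** 2
                        for j in range(m)), t0, t1)
                   for t0 in range(q) for t1 in range(q))
    return tau ^ (t0 % 2)

A0, st0 = otr(0)
out0 = otd(0, st0, ots(1, 0, A0))
A1, st1 = otr(1)
out1 = otd(1, st1, ots(0, 1, A1))
print(out0, out1)
```

In the \(\beta = 0\) branch the rounding succeeds because \(|\langle \mathbf {{E}},\mathbf {{x}} \rangle | \le m < q/4\) by construction; in the \(\beta = 1\) branch the full-rank-mod-2 check guarantees that \(\mathbf {{t}} \ \mathsf {mod} \ 2\), and hence the pad, is uniquely determined by \(\mathbf {{y}}\).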
We will first show correctness of our protocol.
Lemma 5.1
(Correctness). Assume that the distribution \(\chi \) is B-bounded. Provided that \(\sigma _0 \le \frac{q}{4 B \cdot m }\) and \(\sigma _1 \le \frac{q}{m \cdot \kappa (n)}\) (where \(\kappa (n) = \omega (\sqrt{\log (n)})\) as in Lemma 2.4), the protocol \(\mathsf {OT}\) is perfectly correct.
Proof
First note that as \(m \ge n \cdot \log (q)\), it holds for a uniformly random \(\mathbf {{A}} \leftarrow \mathbb {Z}_q^{2n \times m}\) that \(\mathbf {{A}} \ \mathsf {mod} \ 2\) has full rank, except with negligible probability \(2^{n - m}\) (as detailed in Sect. 2.2). Moreover, for \(\mathbf {{x}} \leftarrow D_{\mathbb {Z}^m,\sigma _0}\) and \(\varvec{{\eta }} \leftarrow D_{\mathbb {Z}^m,\sigma _1}\) it holds by Corollary 2.3 that \(\Vert \mathbf {{x}} \Vert < \sigma _0 \sqrt{m}\) and \(\Vert \varvec{{\eta }} \Vert < \sigma _1 \sqrt{m}\), except with negligible probability. Thus, rejection in \(\mathsf {OTR}\) and \(\mathsf {OTS}\) happens only with negligible probability.
In the case of \(\beta = 0\), it holds that \(\mathbf {{y}}_2 - \mathbf {{S}} \cdot \mathbf {{y}}_1 = \mathbf {{A}}_2 \mathbf {{x}} + \frac{q}{2} \cdot \mathbf {{r}} - \mathbf {{S}} \mathbf {{A}}_1 \mathbf {{x}} = \mathbf {{E}} \mathbf {{x}} + \frac{q}{2} \cdot \mathbf {{r}}\).
By the Cauchy-Schwarz inequality it holds for each row \(\mathbf {{e}}_i\) of \(\mathbf {{E}}\) that \(| \langle \mathbf {{e}}_i , \mathbf {{x}} \rangle | \le \Vert \mathbf {{e}}_i \Vert \cdot \Vert \mathbf {{x}} \Vert \). As the entries of \(\mathbf {{e}}_i\) are chosen according to \(\chi \), we can bound \(\Vert \mathbf {{e}}_i \Vert \) by \(\Vert \mathbf {{e}}_i \Vert \le B \cdot \sqrt{m}\). As \(\Vert \mathbf {{x}} \Vert < \sigma _0 \cdot \sqrt{m}\), we have that \(| \langle \mathbf {{e}}_i , \mathbf {{x}} \rangle | < B \cdot \sigma _0 \cdot m \le q/4\),
as \(\sigma _0 \le \frac{q}{4 B \cdot m }\). We conclude that \(\mathbf {{r}}' = \left\lfloor \mathbf {{y}}_2 - \mathbf {{S}} \cdot \mathbf {{y}}_1 \right\rceil _{q/2}\) is identical to the vector \(\mathbf {{r}}\) used during encryption. Consequently, it holds that \(\mu _0' = \mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}') \oplus \tau = \mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}') \oplus \mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}) \oplus \mu _0 = \mu _0\).
For the case of \(\beta = 1\), as \(\Vert \varvec{{\eta }} \Vert < \sigma _1 \sqrt{m} \le \frac{q}{\sqrt{m} \cdot \kappa (n)}\) it holds by Lemma 2.4 that \(\mathsf {Decode}(\mathsf {td},\mathbf {{y}})\) outputs the correct \(\mathbf {{t}}' = \mathbf {{t}}\). We conclude that \(\mu '_1 = \mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}') \oplus \tau = \mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}') \oplus \mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}) \oplus \mu _1 = \mu _1\).
We now show that \(\mathsf {OT}\) has computational receiver privacy under the decisional matrix LWE assumption.
Lemma 5.2
(Computational Receiver Security). Given that the decisional \(LWE_{n,q,\chi }\)-assumption holds, the protocol \(\mathsf {OT}= (\mathsf {OTR},\mathsf {OTS},\mathsf {OTD})\) has receiver privacy.
Proof
Let \((\mathbf {{A}},\mathsf {st}_0) \leftarrow \mathsf {OTR}(1^\lambda ,0)\) and \((\mathbf {{A}}',\mathsf {st}_1) \leftarrow \mathsf {OTR}(1^\lambda ,1)\). Assume towards contradiction that there exists a PPT-distinguisher \(\mathcal {D}\) which distinguishes \(\mathbf {{A}}\) and \(\mathbf {{A}}'\) with non-negligible advantage \(\epsilon \). We can immediately use \(\mathcal {D}\) to distinguish decisional matrix LWE. Decomposing \(\mathbf {{A}} = \begin{pmatrix} \mathbf {{A}}_1 \\ \mathbf {{A}}_2 \end{pmatrix}\), it holds that \(\mathbf {{A}}_1\) is uniformly random and \(\mathbf {{A}}_2 = \mathbf {{S}} \cdot \mathbf {{A}}_1 + \mathbf {{E}}\), i.e. \((\mathbf {{A}}_1,\mathbf {{A}}_2)\) is a sample of the matrix LWE distribution. On the other hand, due to the uniformity property of \(\mathsf {SampleWithTrapdoor}\) (provided in Lemma 2.4) it holds that \(\mathbf {{A}}' \approx _s \mathbf {{A}}^*\) for a uniformly random \(\mathbf {{A}}^*\leftarrow \mathbb {Z}_q^{2n \times m}\). Consequently, \(\mathcal {D}\) distinguishes the matrix LWE distribution from uniform with advantage at least \(\epsilon - \mathrm{negl}(\lambda )\), which contradicts the hardness of decisional matrix LWE.
We will now show that \(\mathsf {OT}\) is statistically sender-private.
Theorem 5.3
(Statistical Sender Security). Let \(q = 2p\) for an odd p. If \(\sigma _0 \cdot \sigma _1 \ge 4 \sqrt{m} \cdot q\), \(\sigma _1 < \frac{q}{2 \sqrt{m}}\) and both \(\mathsf {Ext}_0\) and \(\mathsf {Ext}_1\) are strong average-case \((n/2,\mathrm{negl})\)-extractors, then the above scheme enjoys statistical sender security.
Proof
Fix a maliciously generated \(\mathsf {ot}_1\)-message \(\mathsf {ot}_1 = \mathbf {{A}}\). Let in the following \(\gamma := \sqrt{m} \cdot \frac{q}{\sigma _0}\). Consider the following two cases.
-
1.
\(\rho _{q /\sigma _0}(\varLambda _q(\mathbf {{A}})) > 2^{n/2 + 1}\) or \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}) > n/2\).
-
2.
\(\rho _{q /\sigma _0}(\varLambda _q(\mathbf {{A}})) \le 2^{n/2 + 2}\) and \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}) \le n/2\).
First notice that the two cases are slightly overlapping, but for any choice of \(\mathbf {{A}}\) at least one of the two cases must be true.
The unbounded message extractor \(\mathsf {OTExt}\) takes input \(\mathbf {{A}}\) and decides whether item 1 or item 2 holds. If item 1 holds it outputs 0, otherwise 1. Note that \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B})\) can be computed exactly. On the other hand, it is sufficient to approximate \(\rho _{q /\sigma _0}(\varLambda _q(\mathbf {{A}}))\) to a certain precision to determine which case holds.
We will now show that in case 1 the sender-message \(\mu _1\) is statistically hidden, whereas in case 2 the sender-message \(\mu _0\) is statistically hidden.
Case 1. We will start with the (easier) first case. We will show that either statement implies \(\rho _{\sigma _1}(\varLambda _q(\mathbf {{A}})) \ge 2^{n/2 + 1}\). If it holds that \(\rho _{q /\sigma _0}(\varLambda _q(\mathbf {{A}})) > 2^{n/2 +1}\), we can directly conclude that \(\rho _{\sigma _1}(\varLambda _q(\mathbf {{A}})) \ge \rho _{q/\sigma _0}(\varLambda _q(\mathbf {{A}})) > 2^{n/2 + 1}\), as \(\sigma _0 \cdot \sigma _1 \ge 4 \sqrt{m} \cdot q\) implies \(\sigma _1 \ge q/\sigma _0\).
If the second statement \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}) > n/2\) holds, Lemma 4.3 implies \(\rho _{\sigma _1}(\varLambda _q(\mathbf {{A}})) \ge (\sigma _1/\gamma )^{n/2} \ge 2^{n} \ge 2^{n/2 + 1}\),
as \(\sigma _1 \ge 4 \gamma \).
Now let \(\mathsf {c}_1 = (\mathbf {{y}},\mathsf {s}_1,\tau )\), where \(\mathbf {{y}} \leftarrow \mathbf {{t}} \cdot \mathbf {{A}}+\varvec{{\eta }}\). Note that we can switch to a hybrid in which the distribution of \(\varvec{{\eta }}\) is \(D_{\mathbb {Z}^m,\sigma _1}\) instead of the truncated version while only incurring a negligible statistical error.
As \(\rho _{\sigma _1}(\varLambda _q(\mathbf {{A}})) \ge 2^{n/2 + 1}\) and \(\sigma _1 < \frac{q}{2 \sqrt{m}}\), Lemma 3.2 implies that \(\tilde{H}_\infty (\mathbf {{t}} \ | \ \mathbf {{y}}) \ge n/2\).
Thus, as \(\mathsf {Ext}_1\) is a strong \((n/2,\mathrm{negl})\)-extractor, we get that \(\mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}})\) is statistically close to uniform given \(\mathbf {{y}}\). Consequently, \(\tau = \mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}) \oplus \mu _1\) is statistically close to uniform given \(\mathsf {s}_1\) and \(\mathbf {{y}}\), which concludes the first case.
Case 2. We will now turn to the second case, i.e. it holds that \(\rho _{q /\sigma _0}(\varLambda _q(\mathbf {{A}})) \le 2^{n/2 + 2}\) and \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}) \le n/2\). Theorem 2.2 yields that \(\rho _{q/\sigma _0}(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B}) \le 2^{-C \cdot m} \cdot \rho _{q/\sigma _0}(\varLambda _q(\mathbf {{A}})) \le 2^{-C \cdot m} \cdot 2^{n/2 + 2}\),
where \(C > 0\) is a constant. This expression is negligible as \(m \ge n \cdot \log (q)\). Consequently, the precondition \(\rho _{q/\sigma _0}(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B}) \le \mathrm{negl}\) of Corollary 4.2 is fulfilled.
Now let \(\mathbf {{D}} \in \mathbb {Z}_q^{k \times m}\) be a full-rank matrix with \(\varLambda _q^\bot (\mathbf {{D}}) = \{ \mathbf {{z}} \in \mathbb {Z}^m \ | \ \forall \mathbf {{v}} \in \varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}: \langle \mathbf {{z}},\mathbf {{v}} \rangle = 0 \pmod {q} \}\). Thus it holds that \(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}\subset \varLambda _q(\mathbf {{D}})\) and there is no matrix with fewer than k rows with this property. As \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}) \le n/2\), it holds that \(k \le n/2\).
Decompose the matrix \(\mathbf {{A}}\) into \(\mathbf {{A}} = \begin{pmatrix} \mathbf {{A}}_1 \\ \mathbf {{A}}_2 \end{pmatrix}\) with \(\mathbf {{A}}_1 \in \mathbb {Z}_q^{n \times m}\) and \(\mathbf {{A}}_2 \in \mathbb {Z}_q^{n \times m}\). Let \(\mathsf {c}_0 = (\mathbf {{y}}_1,\mathbf {{y}}_2,\mathsf {s}_0,\tau )\), where \(\mathbf {{y}}_1 = \mathbf {{A}}_1 \mathbf {{x}}\) and \(\mathbf {{y}}_2 = \mathbf {{A}}_2 \mathbf {{x}} + \frac{q}{2} \mathbf {{r}}\) with \(\mathbf {{x}} \leftarrow D_{\mathbb {Z}^m,\sigma _0}\) and \(\mathbf {{r}} \leftarrow \{0,1\}^n\). As \(\rho _{q/\sigma _0}(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B}) \le \epsilon \), Corollary 4.2 implies that \(\mathbf {{A}} \mathbf {{x}} \ \mathsf {mod} \ q \approx _\epsilon \mathbf {{A}} (\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ q\),
where \(\mathbf {{u}}\) is uniformly random on \(\varLambda _q^{\bot }(\mathbf {{D}}) \cap [0,q)^m\). We can therefore switch to a hybrid experiment in which we replace \(\mathbf {{x}}\) with \(\mathbf {{x}} + \mathbf {{u}}\), i.e. set \(\mathbf {{y}}_1' = \mathbf {{A}}_1 (\mathbf {{x}} + \mathbf {{u}})\) and \(\mathbf {{y}}_2' = \mathbf {{A}}_2 (\mathbf {{x}} + \mathbf {{u}}) + \frac{q}{2} \mathbf {{r}}\), while only incurring negligible statistical distance. We will now show that \(\tilde{H}_\infty (\mathbf {{r}} \ | \ \mathbf {{y}}'_1,\mathbf {{y}}'_2 ) \ge n/2\).
As \(q = 2p\) and p is odd, it holds by the Chinese remainder theorem that \(\mathbb {Z}_q \cong \mathbb {Z}_2 \times \mathbb {Z}_p\).
Note that \(\mathbf {{u}} \ \mathsf {mod} \ 2\) and \(\mathbf {{u}} \ \mathsf {mod} \ p\) are independent. As the \( \ \mathsf {mod} \ p\) part does not depend on \(\mathbf {{r}}\), we only need to consider the \( \ \mathsf {mod} \ 2\) part. In the following, a hat on a variable denotes reduction modulo 2, e.g. \(\hat{\mathbf {{x}}} = \mathbf {{x}} \ \mathsf {mod} \ 2\). It holds that \(\hat{\mathbf {{u}}}\) is chosen uniformly from \(\mathsf {ker}(\hat{\mathbf {{D}}}) = \{ \mathbf {{w}} \in \mathbb {Z}_2^m \ | \ \hat{\mathbf {{D}}}\cdot \mathbf {{w}} = 0 \}\). The dimension of \(\mathsf {ker}(\hat{\mathbf {{D}}})\) is at least \(m - k \ge m - n/2\). Let \(\hat{\mathbf {{B}}} \in \mathbb {Z}_2^{m \times (m-k)}\) be a matrix whose columns form a basis of \(\mathsf {ker}(\hat{\mathbf {{D}}})\). As \(\hat{\mathbf {{A}}}\) has full rank and therefore \(\mathsf {rank}(\mathsf {ker}(\hat{\mathbf {{A}}})) = m - 2n\), it holds that \(\mathsf {rank}(\hat{\mathbf {{A}}} \cdot \hat{\mathbf {{B}}}) \ge 2n - k \ge \frac{3}{2} n\). Therefore \(\hat{\mathbf {{A}}} \cdot \hat{\mathbf {{u}}}\) is uniformly random in a subspace of dimension at least \(\frac{3}{2}n\). But this means that \((\hat{\mathbf {{y}}}_1',\hat{\mathbf {{y}}}_2') = (\hat{\mathbf {{A}}}_1 \hat{\mathbf {{x}}} + \hat{\mathbf {{A}}}_1 \hat{\mathbf {{u}}}, \hat{\mathbf {{A}}}_2 \hat{\mathbf {{x}}} + \hat{\mathbf {{A}}}_2 \hat{\mathbf {{u}}} + \mathbf {{r}})\) reveals at most n / 2 bits of information about \(\mathbf {{r}}\) (cf. Sect. 2.2). Consequently, it holds that \(\tilde{H}_\infty (\mathbf {{r}} \ | \ \mathbf {{y}}_1',\mathbf {{y}}_2') \ge n/2\). Therefore, as \(\mathsf {Ext}_0\) is a strong \((n/2,\mathrm{negl})\)-extractor, we get that \(\mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}})\) is statistically close to uniform given \(\mathbf {{y}}_1',\mathbf {{y}}_2'\).
Finally, \(\tau = \mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}) \oplus \mu _0\) is statistically close to uniform given \(\mathsf {s}_0\) and \(\mathbf {{y}}_1',\mathbf {{y}}_2'\), which concludes the second case.
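The mod-2 counting step at the heart of Case 2 is pure linear algebra over GF(2) and can be checked in code: if \(\hat{\mathbf {{A}}}\) (\(2n \times m\)) has full rank and \(\hat{\mathbf {{D}}}\) has rank k, then the image of \(\mathsf {ker}(\hat{\mathbf {{D}}})\) under \(\hat{\mathbf {{A}}}\) has dimension at least \(2n - k\). The sketch below is ours; rows are encoded as m-bit integers (bit j = column j), and the dimensions \(n, m, k\) are illustrative.

```python
import random

random.seed(1)

def rank_gf2(rows):
    """Rank over GF(2) via an xor-basis with distinct leading bits."""
    piv = {}
    for row in rows:
        while row:
            c = row.bit_length() - 1
            if c in piv:
                row ^= piv[c]
            else:
                piv[c] = row
                break
    return len(piv)

def kernel_basis_gf2(rows, m):
    """Basis of {v : M v = 0} over GF(2), M given by bitmask rows."""
    piv = {}                                   # pivot column -> RREF row
    for row in rows:
        for c in list(piv):
            if (row >> c) & 1:
                row ^= piv[c]
        if row:
            c = row.bit_length() - 1
            for c2 in piv:                     # keep the RREF invariant
                if (piv[c2] >> c) & 1:
                    piv[c2] ^= row
            piv[c] = row
    basis = []
    for f in range(m):                         # one vector per free column
        if f not in piv:
            v = 1 << f
            for c, prow in piv.items():
                if (prow >> f) & 1:
                    v |= 1 << c
            basis.append(v)
    return basis

def apply_gf2(rows, v):
    """Matrix-vector product over GF(2), output as a bitmask."""
    out = 0
    for i, row in enumerate(rows):
        if bin(row & v).count("1") % 2:
            out |= 1 << i
    return out

m, n, k = 16, 4, 2
D = [random.getrandbits(m) for _ in range(k)]
while True:                                    # resample until A is full rank
    A = [random.getrandbits(m) for _ in range(2 * n)]
    if rank_gf2(A) == 2 * n:
        break
images = [apply_gf2(A, b) for b in kernel_basis_gf2(D, m)]
dim_image = rank_gf2(images)
print(dim_image)
```

The inequality \(\dim \hat{\mathbf {{A}}}(\mathsf {ker}(\hat{\mathbf {{D}}})) \ge (m - k) - (m - 2n) = 2n - k\) holds unconditionally once \(\hat{\mathbf {{A}}}\) has full rank, so the check succeeds for every sampled instance.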
5.1 Setting the Parameters
We will now show that the parameters of the scheme can be chosen such that correctness, statistical sender privacy and computational receiver privacy hold.
-
By Lemma 5.1, \(\mathsf {OT}\) is correct if \(\sigma _0 \le \frac{q}{4 B \cdot m }\) and \(\sigma _1 \le \frac{q}{m \cdot \kappa (n)}\) (where \(\kappa (n) = \omega (\sqrt{\log (n)})\)).
-
By Theorem 5.3, \(\mathsf {OT}\) is statistically sender private if \(\sigma _0 \cdot \sigma _1 \ge 4 \sqrt{m} \cdot q\) and \(\sigma _1 < \frac{q}{2 \sqrt{m}}\).
These requirements can be met if
\(4 \sqrt{m} \cdot q \le \frac{q}{4 B \cdot m} \cdot \frac{q}{m \cdot \kappa (n)},\)
which is equivalent to
\(q \ge 16 B \cdot m^{5/2} \cdot \kappa (n). \qquad (1)\)
If \(\chi \) is a discrete Gaussian on \(\mathbb {Z}\) with parameter \(\alpha q\), i.e. \(\chi = D_{\mathbb {Z},\alpha q}\), then, given that \(\alpha q \ge \eta _\epsilon (\mathbb {Z}) = \omega (\sqrt{\log (n)})\), it holds that \(\chi \) is \(\alpha q\)-bounded, i.e. \(B \le \alpha q\) (with overwhelming probability). This means that
\(\alpha \le \frac{1}{16 \cdot m^{5/2} \cdot \kappa (n)}\)
implies inequality (1). Thus, we get a worst-case approximation factor \(\tilde{O}(n / \alpha ) = \tilde{O}(n^{3.5})\) for SIVP (compared to \(\tilde{O}(n^{1.5})\) for primal Regev encryption). With this choice of \(\alpha \), we can choose \(q = \tilde{O}(n^3)\), \(\sigma _0 = \tilde{O}(n^{2.5})\) and \(\sigma _1= \tilde{O}(n)\).
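As a concrete feasibility check, the four constraints can be instantiated with explicit numbers. All values below (\(n\), \(p\), \(B\), \(\kappa \)) are our own stand-ins chosen only to make the inequalities checkable in code; the paper fixes only the asymptotics.

```python
import math

# Concrete feasibility check for the constraints of Lemma 5.1 and
# Theorem 5.3. The numbers are illustrative stand-ins: kappa plays the
# role of omega(sqrt(log n)) and B bounds the error distribution chi.
n = 512
p = 2 ** 52 + 1                      # odd, so q = 2p as required
q = 2 * p
m = 2 * n * math.ceil(math.log2(q))  # m >= n log q
B, kappa = 40, 10

sigma1 = q / (m * kappa)             # largest sigma_1 allowed by correctness
sigma0 = q / (4 * B * m)             # largest sigma_0 allowed by correctness

ok = (sigma0 <= q / (4 * B * m)                     # correctness, beta = 0
      and sigma1 <= q / (m * kappa)                 # correctness, beta = 1
      and sigma1 < q / (2 * math.sqrt(m))           # sender security, case 1
      and sigma0 * sigma1 >= 4 * math.sqrt(m) * q)  # sender security
print(ok)
```

Taking both \(\sigma _0\) and \(\sigma _1\) at their correctness limits leaves enough slack for the sender-security product constraint precisely because q satisfies inequality (1) for these stand-in values.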
Notes
- 1.
Notice that it is impossible to achieve statistical indistinguishability in this setting, at least against non-uniform malicious receivers.
- 2.
While the form \(\mathbf {{s}}\mathbf {{A}} + \mathbf {{e}}\pmod {q}\) bears resemblance to an instance of the LWE problem (to be discussed below), the matrix \(\mathbf {{A}}\) in our setting might be chosen by a malicious party and therefore cannot be assumed to be close to uniform.
- 3.
I.e. a simple application of Markov’s inequality.
- 4.
Where nice enough means that \(\int _{\mathbf {{x}} \in {\mathbb R}^m} |f(\mathbf {{x}})| d\mathbf {{x}}\) is finite.
References
Aiello, B., Ishai, Y., Reingold, O.: Priced oblivious transfer: how to sell digital goods. In: Pfitzmann, B. (ed.) EUROCRYPT 2001. LNCS, vol. 2045, pp. 119–135. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44987-6_8
Ajtai, M.: Generating hard instances of the short basis problem. In: Wiedermann, J., van Emde Boas, P., Nielsen, M. (eds.) ICALP 1999. LNCS, vol. 1644, pp. 1–9. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48523-6_1
Badrinarayanan, S., Garg, S., Ishai, Y., Sahai, A., Wadia, A.: Two-message witness indistinguishability and secure computation in the plain model from new assumptions. In: Takagi, T., Peyrin, T. (eds.) ASIACRYPT 2017. LNCS, vol. 10626, pp. 275–303. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70700-6_10
Badrinarayanan, S., Goyal, V., Jain, A., Kalai, Y.T., Khurana, D., Sahai, A.: Promise zero knowledge and its applications to round optimal MPC. IACR Cryptology ePrint Archive 2017, 1088 (2017). http://eprint.iacr.org/2017/1088
Badrinarayanan, S., Goyal, V., Jain, A., Khurana, D., Sahai, A.: Round optimal concurrent MPC via strong simulation. In: Kalai and Reyzin [20], pp. 743–775. https://doi.org/10.1007/978-3-319-70500-2_25
Banaszczyk, W.: New bounds in some transference theorems in the geometry of numbers. Math. Ann. 296(1), 625–635 (1993)
Bellare, M., Micali, S.: Non-interactive oblivious transfer and applications. In: Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 547–557. Springer, New York (1990). https://doi.org/10.1007/0-387-34805-0_48
Brakerski, Z., Halevi, S., Polychroniadou, A.: Four round secure computation without setup. In: Kalai and Reyzin [20], pp. 645–677. https://doi.org/10.1007/978-3-319-70500-2_22
Brakerski, Z., Langlois, A., Peikert, C., Regev, O., Stehlé, D.: Classical hardness of learning with errors. In: Boneh, D., Roughgarden, T., Feigenbaum, J. (eds.) Symposium on Theory of Computing Conference, STOC 2013, Palo Alto, CA, USA, 1–4 June 2013, pp. 575–584. ACM (2013). http://doi.acm.org/10.1145/2488608.2488680
Chung, K., Dadush, D., Liu, F., Peikert, C.: On the lattice smoothing parameter problem. In: IEEE Conference on Computational Complexity, pp. 230–241. IEEE Computer Society (2013)
Dodis, Y., Ostrovsky, R., Reyzin, L., Smith, A.D.: Fuzzy extractors: how to generate strong keys from biometrics and other noisy data. SIAM J. Comput. 38(1), 97–139 (2008)
Dodis, Y., Reyzin, L., Smith, A.: Fuzzy extractors: how to generate strong keys from biometrics and other noisy data. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 523–540. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24676-3_31
Gertner, Y., Kannan, S., Malkin, T., Reingold, O., Viswanathan, M.: The relationship between public key encryption and oblivious transfer. In: FOCS, pp. 325–335. IEEE Computer Society (2000)
Goldreich, O., Goldwasser, S.: On the limits of non-approximability of lattice problems. In: Vitter, J.S. (ed.) Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, Dallas, Texas, USA, 23–26 May 1998, pp. 1–9. ACM (1998). http://doi.acm.org/10.1145/276698.276704
Goldreich, O., Micali, S., Wigderson, A.: How to play any mental game or a completeness theorem for protocols with honest majority. In: Aho, A.V. (ed.) Proceedings of the 19th Annual ACM Symposium on Theory of Computing, 1987, New York, New York, USA, pp. 218–229. ACM (1987). http://doi.acm.org/10.1145/28395.28420
Goldreich, O., Oren, Y.: Definitions and properties of zero-knowledge proof systems. J. Cryptol. 7(1), 1–32 (1994)
Halevi, S., Hazay, C., Polychroniadou, A., Venkitasubramaniam, M.: Round-optimal secure multi-party computation. IACR Cryptology ePrint Archive 2017, 1056 (2017). http://eprint.iacr.org/2017/1056
Halevi, S., Kalai, Y.T.: Smooth projective hashing and two-message oblivious transfer. J. Cryptol. 25(1), 158–193 (2012). https://doi.org/10.1007/s00145-010-9092-8
Jain, A., Kalai, Y.T., Khurana, D., Rothblum, R.: Distinguisher-dependent simulation in two rounds and its applications. In: Katz, J., Shacham, H. (eds.) CRYPTO 2017. LNCS, vol. 10402, pp. 158–189. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63715-0_6
Kalai, Y., Reyzin, L. (eds.): TCC 2017. LNCS, vol. 10677. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70500-2
Kalai, Y.T.: Smooth projective hashing and two-message oblivious transfer. In: Cramer, R. (ed.) EUROCRYPT 2005. LNCS, vol. 3494, pp. 78–95. Springer, Heidelberg (2005). https://doi.org/10.1007/11426639_5
Kalai, Y.T., Khurana, D., Sahai, A.: Statistical witness indistinguishability (and more) in two messages. In: Nielsen, J.B., Rijmen, V. (eds.) EUROCRYPT 2018. LNCS, vol. 10822, pp. 34–65. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-78372-7_2
Khurana, D.: Round optimal concurrent non-malleability from polynomial hardness. In: Kalai, Y., Reyzin, L. (eds.) TCC 2017. LNCS, vol. 10678, pp. 139–171. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70503-3_5
Khurana, D., Sahai, A.: How to achieve non-malleability in one or two rounds. In: Umans, C. (ed.) 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, 15–17 October 2017, pp. 564–575. IEEE Computer Society (2017). https://doi.org/10.1109/FOCS.2017.58
Micciancio, D., Peikert, C.: Trapdoors for lattices: simpler, tighter, faster, smaller. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 700–718. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29011-4_41
Micciancio, D., Regev, O.: Worst-case to average-case reductions based on Gaussian measures. SIAM J. Comput. 37(1), 267–302 (2007)
Naor, M., Pinkas, B.: Efficient oblivious transfer protocols. In: Kosaraju, S.R. (ed.) Proceedings of the Twelfth Annual Symposium on Discrete Algorithms, 7–9 January 2001, Washington, DC, USA, pp. 448–457. ACM/SIAM (2001). http://dl.acm.org/citation.cfm?id=365411.365502
Ostrovsky, R., Paskin-Cherniavsky, A., Paskin-Cherniavsky, B.: Maliciously circuit-private FHE. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014. LNCS, vol. 8616, pp. 536–553. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44371-2_30
Peikert, C.: Public-key cryptosystems from the worst-case shortest vector problem: extended abstract. In: Mitzenmacher, M. (ed.) STOC, pp. 333–342. ACM (2009)
Peikert, C., Regev, O., Stephens-Davidowitz, N.: Pseudorandomness of Ring-LWE for any ring and modulus. In: Hatami, H., McKenzie, P., King, V. (eds.) Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, 19–23 June 2017, pp. 461–473. ACM (2017). http://doi.acm.org/10.1145/3055399.3055489
Peikert, C., Vaikuntanathan, V., Waters, B.: A framework for efficient and composable oblivious transfer. In: Wagner, D. (ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 554–571. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-85174-5_31
Rabin, M.O.: How to exchange secrets with oblivious transfer. Harvard University Technical Report (1981). http://eprint.iacr.org/2005/187
Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. In: Gabow, H.N., Fagin, R. (eds.) STOC, pp. 84–93. ACM (2005). Full version in [34]
Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. J. ACM 56(6), 34 (2009)
Yao, A.C.C.: How to generate and exchange secrets (extended abstract). In: FOCS, pp. 162–167 (1986)
Acknowledgement
We would like to thank the anonymous TCC 2018 reviewers for insightful comments that helped to improve the presentation of the paper.
A Appendix
In this Section we will provide the proof for Lemma 4.1. We will first provide some additional preliminaries.
1.1 A.1 Additional Preliminaries
Fourier Transforms. We now recall a few basic facts about Fourier-transforms on lattices. Let \(f: {\mathbb R}^m \rightarrow {\mathbb C}\) and \(\varLambda \) be a lattice, if it exists, we will write \(f(\varLambda ) := \sum _{\mathbf {{x}} \in \varLambda } f(\mathbf {{x}})\). For a nice enoughFootnote 4 function \(f: {\mathbb R}^m \rightarrow {\mathbb C}\), we define the continuous Fourier-transform \(\hat{f}: {\mathbb R}^m \rightarrow {\mathbb C}\) by \(\hat{f}(\varvec{{\omega }}) = \int _{\mathbf {{x}} \in {\mathbb R}^m} f(\mathbf {{x}}) \cdot e^{-2 \pi i \cdot \langle \varvec{{\omega }},\mathbf {{x}} \rangle } d\mathbf {{x}}\). The Poisson summation formula states that \(f(\varLambda ) = \det (\varLambda ^*) \cdot \hat{f}(\varLambda ^*)\). The Fourier-transform of the Gaussian function \(\rho _\sigma (\mathbf {{x}})\) is \(\sigma ^m \cdot \rho _{1/\sigma }(\varvec{{\omega }})\). Consequently, we get by the Poisson summation formula that \(\rho _\sigma (\varLambda ) = \sigma ^m \cdot \det (\varLambda ^*) \cdot \rho _{1/\sigma }(\varLambda ^*)\).
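The Poisson summation identity for the Gaussian can be verified numerically in the simplest case \(\varLambda = \mathbb {Z}\) (so \(\det (\varLambda ^*) = 1\) and \(\varLambda ^*= \mathbb {Z}\)), where it reads \(\rho _\sigma (\mathbb {Z}) = \sigma \cdot \rho _{1/\sigma }(\mathbb {Z})\). The truncation bound and the value of \(\sigma \) in the sketch below are our own choices.

```python
import math

# Numerical check of Poisson summation for the Gaussian over Z:
# rho_sigma(Z) = sigma * rho_{1/sigma}(Z). Truncation bounds are ours;
# both series converge extremely fast.

def rho_lattice(sigma, trunc=5000):
    """Truncated Gaussian mass sum_z exp(-pi * z^2 / sigma^2)."""
    return sum(math.exp(-math.pi * z * z / sigma ** 2)
               for z in range(-trunc, trunc + 1))

sigma = 3.0
lhs = rho_lattice(sigma)
rhs = sigma * rho_lattice(1.0 / sigma)
print(abs(lhs - rhs))
```

The two sides agree up to floating-point error, illustrating why the dual-side sum \(\rho _{1/\sigma }\) is the natural quantity to control in the smoothing arguments above.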
Fix a full-rank lattice \(\varLambda _0 \subseteq {\mathbb R}^m\) and assume henceforth that \(\varLambda \subseteq \varLambda _0\). We say a function \(f: \varLambda _0 \rightarrow {\mathbb C}\) is \(\varLambda \)-periodic if it holds for all \(\mathbf {{x}} \in \varLambda _0\) and all \(\mathbf {{z}} \in \varLambda \) that \(f(\mathbf {{x}} + \mathbf {{z}}) = f(\mathbf {{x}})\). Now let \(f: \varLambda _0 \rightarrow {\mathbb C}\) be a \(\varLambda \)-periodic function. We define the discrete Fourier transform \(\hat{f}: \varLambda ^*\rightarrow {\mathbb C}\) of f by \(\hat{f}(\varvec{{\omega }}) = \sum _{\mathbf {{x}} \in \mathcal {P}(\varLambda ) \cap \varLambda _0} f(\mathbf {{x}}) \cdot e^{-2 \pi i \cdot \langle \varvec{{\omega }},\mathbf {{x}} \rangle }\).
Here, \(\mathcal {P}(\varLambda ) \cap \varLambda _0\) can be replaced by any system of representatives for the quotient group \(\varLambda _0 / \varLambda \). Using Fourier-inversion, we can express f as \(f(\mathbf {{x}}) = \frac{1}{|\varLambda _0 / \varLambda |} \sum _{\varvec{{\omega }} \in \mathcal {P}(\varLambda _0^*) \cap \varLambda ^*} \hat{f}(\varvec{{\omega }}) \cdot e^{2 \pi i \cdot \langle \varvec{{\omega }},\mathbf {{x}} \rangle }\).
Note that \(\hat{f}\) is \(\varLambda _0^*\) periodic.
Let \(\mathbf {{x}}\) and \(\mathbf {{y}}\) be random variables defined on \(\varLambda _0 / \varLambda \). Let the probability-mass function of the distribution of \(\mathbf {{x}}\) be given by a \(\varLambda \)-periodic function \(X:\varLambda _0 \rightarrow {\mathbb R}\), and let the probability-mass function of \(\mathbf {{y}}\) be given by a \(\varLambda \)-periodic function \(Y:\varLambda _0 \rightarrow {\mathbb R}\). Finally, let \(Z: \varLambda _0 \rightarrow {\mathbb R}\) be the probability mass function of \(\mathbf {{x}} + \mathbf {{y}}\). The convolution theorem states that it holds that \(\hat{Z}(\varvec{{\omega }}) = \hat{X}(\varvec{{\omega }}) \cdot \hat{Y}(\varvec{{\omega }})\)
for all \(\varvec{{\omega }} \in \varLambda ^*\).
If \(\mathbf {{x}}\) is distributed according to a discrete Gaussian \(D_{\varLambda _0,\sigma }\) and \(\varLambda \subseteq \varLambda _0\), then \(\mathbf {{x}} \ \mathsf {mod} \ \varLambda \) has the probability-mass function of a periodic Gaussian given by \(Y(\mathbf {{x}}) = \frac{\rho _\sigma (\mathbf {{x}} + \varLambda )}{\rho _\sigma (\varLambda _0)}\)
and it holds that \(\hat{Y}(\varvec{{\omega }}) = \frac{\rho _{1/\sigma }(\varvec{{\omega }} + \varLambda _0^*)}{\rho _{1/\sigma }(\varLambda _0^*)}\)
for \(\varvec{{\omega }} \in \varLambda ^*\).
We define the Dirac-function \(\delta : \varLambda _0 \rightarrow {\mathbb R}\) as \(\delta (0) = 1\) and \(\delta (\mathbf {{x}}) = 0\) for \(\mathbf {{x}} \ne 0\). If \(\varLambda \subseteq \varLambda _1 \subset \varLambda _0\) and \(\mathbf {{u}}\) is distributed uniformly random on \(\mathcal {P}(\varLambda ) \cap \varLambda _1\), then \(\mathbf {{u}}\) has the probability-mass function \(U(\mathbf {{x}}) = \frac{1}{|\varLambda _1 / \varLambda |} \cdot \sum _{\mathbf {{z}} \in \varLambda _1 \cap \mathcal {P}(\varLambda )} \delta (\mathbf {{x}} - \mathbf {{z}})\)
and the Fourier-transform \(\hat{U}(\varvec{{\omega }}) = 1\) if \(\varvec{{\omega }} \in \varLambda _1^*\) and \(\hat{U}(\varvec{{\omega }}) = 0\) otherwise.
1.2 A.2 Proof of the Partial Smoothing Lemma
Lemma A.1
Let \(\sigma > 0\) and let \(\varLambda \subseteq \varLambda _0 \subseteq {\mathbb R}^n\) be full-rank lattices where \(\det (\varLambda _0) = 1\). Furthermore, let \(\gamma > 0\). Define \(\varLambda _1 = \{ \mathbf {{z}} \in \varLambda _0 \ | \ \forall \mathbf {{y}} \in \varLambda ^*\cap \gamma \mathcal {B}: \langle \mathbf {{y}},\mathbf {{z}} \rangle \in \mathbb {Z}\}\). Given that \(\rho _{1/\sigma }(\varLambda ^*\backslash \gamma \mathcal {B}) \le \epsilon \), it holds that \(\mathbf {{y}} \approx _{\epsilon } (\mathbf {{y}} + \mathbf {{u}}) \ \mathsf {mod} \ \varLambda \), where \(\mathbf {{y}} \leftarrow D_{\varLambda _0,\sigma } \ \mathsf {mod} \ \varLambda \) and \(\mathbf {{u}}\) is uniformly random on \(\mathcal {P}(\varLambda ) \cap \varLambda _1\).
Proof
First notice that \(\varLambda \subseteq \varLambda _1 \subseteq \varLambda _0\) and \(\varLambda ^*\cap \gamma \mathcal {B}\subseteq \varLambda _1^*\). The probability-mass function of \(\mathbf {{y}}\) is given by \(Y(\mathbf {{x}}) = \frac{\rho _\sigma (\mathbf {{x}} + \varLambda )}{\rho _\sigma (\varLambda _0)}\)
for \(\mathbf {{x}} \in \mathcal {P}(\varLambda ) \cap \varLambda _0\). The Fourier-transform of Y is \(\hat{Y}(\varvec{{\omega }}) = \frac{\rho _{1/\sigma }(\varvec{{\omega }} + \varLambda _0^*)}{\rho _{1/\sigma }(\varLambda _0^*)}\)
for \(\varvec{{\omega }} \in \mathcal {P}(\varLambda _0^*) \cap \varLambda ^*\).
The probability-mass function of \(\mathbf {{u}}\) is \(U(\mathbf {{x}}) = \frac{1}{|\varLambda _1 / \varLambda |} \cdot \sum _{\mathbf {{z}} \in \varLambda _1 \cap \mathcal {P}(\varLambda )} \delta (\mathbf {{x}} - \mathbf {{z}})\)
for \(\mathbf {{x}} \in \mathcal {P}(\varLambda ) \cap \varLambda _0\).
Note that \(U(\mathbf {{x}})\) is \(\varLambda \)-periodic as \(\varLambda \subseteq \varLambda _1\). We can therefore compute the Fourier-transform of U and obtain that \(\hat{U}(\varvec{{\omega }})\) is constant 1 on \(\mathcal {P}(\varLambda _0^*) \cap \varLambda _1^*\) and 0 everywhere else.
By the convolution theorem, the Fourier-transform of the probability mass function of \(\mathbf {{r}} = \mathbf {{y}} + \mathbf {{u}} \ \mathsf {mod} \ \varLambda \) is \(\hat{R}(\varvec{{\omega }}) = \hat{Y}(\varvec{{\omega }}) \cdot \hat{U}(\varvec{{\omega }})\)
for \(\varvec{{\omega }} \in \mathcal {P}(\varLambda _0^*) \cap \varLambda ^*\).
Consequently, we can bound the statistical distance between \(\mathbf {{y}}\) and \(\mathbf {{r}}\) by \(\varDelta (\mathbf {{y}},\mathbf {{r}}) \le \sum _{\varvec{{\omega }} \in (\mathcal {P}(\varLambda _0^*) \cap \varLambda ^*) \backslash \varLambda _1^*} |\hat{Y}(\varvec{{\omega }})| = \sum _{\varvec{{\omega }} \in (\mathcal {P}(\varLambda _0^*) \cap \varLambda ^*) \backslash \varLambda _1^*} \frac{\rho _{1/\sigma }(\varvec{{\omega }} + \varLambda _0^*)}{\rho _{1/\sigma }(\varLambda _0^*)} \le \sum _{\varvec{{\omega }} \in (\mathcal {P}(\varLambda _0^*) \cap \varLambda ^*) \backslash \varLambda _1^*} \rho _{1/\sigma }(\varvec{{\omega }} + \varLambda _0^*) \le \rho _{1/\sigma }(\varLambda ^*\backslash \gamma \mathcal {B}) \le \epsilon .\)
The second inequality holds as \(\frac{1}{\rho _{1/\sigma }(\varLambda _0^*)} \le 1\) and the third inequality is an application of the triangle inequality.
© 2018 International Association for Cryptologic Research
Brakerski, Z., Döttling, N. (2018). Two-Message Statistically Sender-Private OT from LWE. In: Beimel, A., Dziembowski, S. (eds.) Theory of Cryptography. TCC 2018. Lecture Notes in Computer Science, vol. 11240. Springer, Cham. https://doi.org/10.1007/978-3-030-03810-6_14