Abstract
The question whether there exist verifiable random functions with exponential-sized input space and full adaptive security based on a non-interactive, constant-size assumption is a long-standing open problem. We construct the first verifiable random functions which achieve all these properties simultaneously.
Our construction can securely be instantiated in groups with symmetric bilinear map, based on any member of the \((n-1)\) -linear assumption family with \(n \ge 3\). This includes, for example, the 2-linear assumption, which is also known as the decision linear (DLIN) assumption.
Supported by DFG grants HO 4534/2-2 and HO 4534/4-1.
1 Introduction
A verifiable random function (VRF) \(V_{{ sk }}\) is essentially a pseudorandom function, but with the additional feature that it is possible to create a non-interactive and publicly verifiable proof \(\pi \) that a given function value Y was computed correctly as \(Y = V_{{ sk }}(X)\). VRFs are useful ingredients for applications as various as resettable zero-knowledge proofs [37], lottery systems [38], transaction escrow schemes [31], updatable zero-knowledge databases [34], or e-cash [4, 5].
Desired Properties of VRFs. The standard security properties required from VRFs are pseudorandomness (when no proof is given, of course) and unique provability. The latter means that for each X there is only one unique value Y such that a proof for the statement “\(Y = V_{{ sk }}(X)\)” exists. Unique provability is a very strong requirement, because not even the party that creates sk (possibly maliciously) may be able to create fake proofs. For example, the natural attempt of constructing a VRF by combining a pseudorandom function with a non-interactive zero-knowledge proof system fails, because zero-knowledge proofs are simulatable, which contradicts uniqueness.
Most known constructions of verifiable random functions allow only a polynomially bounded input space, or do not achieve full adaptive security, or are based on an interactive complexity assumption. In the sequel, we will say that a VRF has all desired properties, if it has an exponential-sized input space and a proof of full adaptive security under a non-interactive complexity assumption.
VRFs with all Desired Properties. All known examples of VRFs that possess all desired properties are based on so-called Q -type complexity assumptions. For example, the VRF of Hohenberger and Waters [28] relies on the assumption that, given a list of group elements
and a bilinear map \(e : \mathbb {G} \times \mathbb {G} \rightarrow \mathbb {G} _T\), it is computationally infeasible to distinguish \(t = e(g,h)^{x^Q}\) from a random element of \(\mathbb {G} _T\) with probability significantly better than 1/2. Note that the assumption is parametrized by an integer \(Q \), which determines the number of group elements in a given problem instance.
The main issue with \(Q \)-type assumptions is that they get stronger with increasing \(Q \), as demonstrated by Cheon [18]. For example, the VRF described in [28] is based on a \(Q\)-type assumption with \(Q = \varTheta (q\cdot k)\), where \(k\) is the security parameter and q is the number of function evaluations queried by the attacker in the security experiment. Constructions from weaker \(Q \)-type assumptions were described by Boneh et al. [11] and Abdalla et al. [2], both of which require \(Q = \varTheta (k)\). A VRF-security proof for the classical verifiable unpredictable function of Lysyanskaya [35], which requires a \(Q\)-type assumption with only \(Q = O(\log k)\), was recently given in [29]. Even though this complexity assumption is relatively weak, it is still \(Q\)-type.
In summary, the construction of a VRF with all desired security properties, which is based on a standard, constant-size assumption (like the decision-linear assumption, for example), is a long-standing open problem, posed for example in [28, 29]. Some authors even asked whether it is possible to prove that a \(Q\)-type assumption is inherently necessary to construct such VRFs [28]. Indeed, by adapting the techniques of [30] to the setting of VRFs, one can prove [33] that some known VRF constructions are equivalent to certain \(Q\)-type assumptions, which means that a security proof under a strictly weaker assumption is impossible. This includes the VRFs of Dodis-Yampolskiy [20] and Boneh et al. [11]. It is also known that it is impossible to construct verifiable random functions from one-way permutations [13] or even trapdoor permutations in a black-box manner [23].
Our Contribution. We construct the first verifiable random functions with exponential-sized input space, and give a proof of full adaptive security under any member of the \((n-1)\) -linear assumption family with \(n \ge 3\) in symmetric bilinear groups. The \((n-1)\)-linear assumption is a family of non-interactive, constant-size complexity assumptions, which get progressively weaker with larger n [42]. A widely-used special case is the 2-linear assumption, which is also known as the decision-linear (DLIN) assumption [10].
Recently, a lot of progress has been made in proving the security of cryptosystems which previously required a \(Q \)-type assumption, see [16, 26, 43], for example. Verifiable random functions with all desired properties were one of the last cryptographic applications that required \(Q \)-type assumptions. Our work eliminates VRFs from this list.
The New Construction and Proof Idea. The starting point for our construction is the VRF of Lysyanskaya [35]. Her function is in fact the Naor-Reingold pseudorandom function [40] with
$$\begin{aligned} V_{{ sk }}(X) = g^{\prod _{i=1}^{k} a_{i,x_i}}, \end{aligned}$$
where \(X=(x_1,\dots ,x_k)\), and the \(a_{i,b}\) are randomly chosen exponents. However, unlike [40], Lysyanskaya considers this function in a “Diffie-Hellman gap group”.Footnote 1 The corresponding verification key consists of all \(g^{a_{i,x_i}}\). Relative to this verification key, an image y can be proven to be of the form \(g^{\prod _{i=1}^ka_{i,x_i}}\) by publishing all “partial products in the exponent”, that is, all values \(\pi _\ell := g^{\prod _{i=1}^\ell a_{i,x_i}}\) for \(\ell \in \{2,\ldots ,k-1\}\). (Since the Decisional Diffie-Hellman problem is assumed to be easy, these partial products can be checked for consistency with the \(g^{a_{i,b}}\) one after the other.)
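To make the partial-product proofs concrete, here is a minimal Python sketch. It is purely illustrative and insecure: group elements are modelled by their discrete logarithms modulo a toy prime, so each DDH consistency check on the tuple \((g, g^{a_{i,x_i}}, \pi_{i-1}, \pi_i)\) becomes, on logarithms, the relation \(\pi_i = \pi_{i-1} \cdot a_{i,x_i}\). All concrete parameters and function names are our own choices for this sketch.

```python
import random

# Toy parameters; a real instantiation would use a cryptographic group.
p = 101          # toy prime group order
k = 8            # input length

random.seed(1)
# secret key: random exponents a_{i,b}; in this log model, the logs of the
# public values g^{a_{i,b}} in the verification key coincide with a itself
a = [[random.randrange(1, p) for _b in range(2)] for _i in range(k)]

def evaluate(X):
    """Image log y = prod_i a_{i,x_i} and proof (all partial products)."""
    acc, partial = 1, []
    for i, x in enumerate(X):
        acc = (acc * a[i][x]) % p
        partial.append(acc)
    return partial[-1], partial[:-1]

def verify(X, y, proof):
    """Check each step: is (g, g^{a_{i,x_i}}, pi_{i-1}, pi_i) a DDH tuple?
    On logarithms this is the relation pi_i = pi_{i-1} * a_{i,x_i}."""
    chain, prev = proof + [y], 1
    for i, x in enumerate(X):
        if chain[i] != (prev * a[i][x]) % p:
            return False
        prev = chain[i]
    return True

X = [1, 0, 1, 1, 0, 0, 1, 0]
y, proof = evaluate(X)
assert verify(X, y, proof)
assert not verify(X, (y + 1) % p, proof)   # any other image is rejected
```

Note how the final assertion reflects unique provability in miniature: once the verification key and X are fixed, the chain of checks pins down every partial product, and hence the image.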
Note that pseudorandomness of this construction is not obvious. Indeed, Lysyanskaya’s analysis requires a computational assumption that offers \(k\) group elements in a computational challenge. (Recently, this “size-\(k\)” assumption was reduced to a “size-\(\log (k)\)” assumption [29].) One reason for this apparent difficulty lies in the verifiability property of a VRF. For instance, the original Naor-Reingold analysis of [40] (that shows that this \(V_{{ sk }}\) is a PRF) can afford to gradually substitute images given to the adversary by random images, using a hybrid argument. Such a proof is not possible in a setting in which the adversary can ask for “validity proofs” for some of these images. (Note that by the uniqueness property of a VRF, we cannot expect to be able to simulate such validity proofs for non-images.) As a result, security proofs for VRFs have so far used “one-shot reductions” to suitable computational assumptions (which then turned out to be rather complex).
We circumvent this problem by a more complex function (with more complex public parameters) that can be modified gradually, using simpler computational assumptions. Following [22], in the sequel we will write \(g^{\mathbf {u}}\), where \(\mathbf {u} = (u_1,\ldots ,u_n)^\top \in \mathbb {Z} _p^n\) is a vector, to denote the vector \(g^{\mathbf {u}} := (g^{u_1},\ldots ,g^{u_n})\). We will also extend this notation to matrices in the obvious way. To explain our approach, consider the function
$$\begin{aligned} G_{{ sk }}(X) = g^{\mathbf {u} ^\top \cdot \prod _{i=1}^{k} \mathbf {M} _{i,x_i}} \end{aligned}$$
for random (quadratic) matrices \(\mathbf {M} _{i,x_i}\) and a random vector \(\mathbf {u} \). The function \(G_{{ sk }}\) will not be the VRF \(V_{{ sk }}\) we seek, but it will form the basis for it. (In fact, \(V_{{ sk }}\) will only postprocess \(G_{{ sk }}\)’s output, in a way we will explain below.) \(V_{{ sk }}\)’s verification key will include \(g^{\mathbf {u}}\) and the \(g^{\mathbf {M} _{i,b}}\). As in the VRF described above, validity proofs of images contain all partial products \(g^{\mathbf {u} ^\top \cdot \prod _{i=1}^\ell \mathbf {M} _{i,x_i}}\). (However, note that to check proofs, we now need a bilinear map, and not only an efficient DDH-solver, as with Lysyanskaya’s VRF.)
To show pseudorandomness, let us first consider the case of selective security in which the adversary \(\mathcal {A}\) first commits to a challenge preimage \(X^*\). Then, \(\mathcal {A}\) receives the verification key and may ask for arbitrary images \(V_{{ sk }}(X)\) and proofs for \(X\ne X^*\). Additionally, \(\mathcal {A}\) gets either \(V_{{ sk }}(X^*)\) (without proof), or a random image, and has to decide which it is.
In this setting, we can gradually adapt the \(g^{\mathbf {M} _{i,b}}\) given to \(\mathcal {A}\) such that \(\prod _{i=1}^k\mathbf {M} _{i,x_i}\) has full rank if and only if \(X=X^*\). To this end, we choose \(\mathbf {M} _{i,b}\) as a full-rank matrix exactly for \(b=x^*_i\). (This change can be split up in a number of local changes, each of which changes only one \(\mathbf {M} _{i,b}\) and can be justified with the \((n-1)\)-linear assumption, where n is the dimension of \(\mathbf {M} _{i,b}\).) Even more: we show that if we perform these changes carefully, and in a “coordinated” way, we can achieve that \(\mathbf {v} ^\top :=\mathbf {u} ^\top \prod _{i=1}^k\mathbf {M} _{i,x_i}\) lies in a fixed subspace \(\mathfrak {U}^\top \) if and only if \(X\ne X^*\). In other words, if we write \(\mathbf {v} =\sum _{i=1}^n\beta _i\mathbf {b} _i\) for a basis \(\{\mathbf {b} _i\}_{i=1}^n\) such that \(\{\mathbf {b} _i\}_{i=1}^{n-1}\) is a basis of \(\mathfrak {U}\), then we have that \(\beta _n=0\) if and only if \(X\ne X^*\). Put differently: \(\mathbf {v} \) has a \(\mathbf {b} _n\)-component if and only if \(X=X^*\).
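The effect of this “coordinated” choice of matrices can be checked numerically. The sketch below (toy parameters, plain Python, arithmetic over \(\mathbb {Z} _p\); all names are our own) picks each \(\mathbf {M} _{i,b}\) of full rank exactly for \(b = x^*_i\). Since the rank of a product never exceeds the rank of any factor, the product \(\prod _i \mathbf {M} _{i,x_i}\) has full rank if and only if \(X = X^*\).

```python
import random

p, n, k = 101, 3, 5
random.seed(3)

def mat_mul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(n)) % p
             for j in range(n)] for i in range(n)]

def rank_mod_p(M):
    """Rank over Z_p via Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)          # inverse mod p (Fermat)
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(n):
            if i != r and M[i][c]:
                M[i] = [(M[i][j] - M[i][c] * M[r][j]) % p for j in range(n)]
        r += 1
    return r

def rand_mat_of_rank(r):
    """Sample an n x n matrix of rank exactly r as a product of n x r and r x n."""
    while True:
        A = [[random.randrange(p) for _ in range(r)] for _ in range(n)]
        B = [[random.randrange(p) for _ in range(n)] for _ in range(r)]
        M = [[sum(A[i][l] * B[l][j] for l in range(r)) % p
              for j in range(n)] for i in range(n)]
        if rank_mod_p(M) == r:
            return M

X_star = [1, 0, 1, 1, 0]
# full rank exactly at b = x*_i, rank n-1 otherwise
M = [[rand_mat_of_rank(n if b == X_star[i] else n - 1) for b in range(2)]
     for i in range(k)]

def product_rank(X):
    P = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for i, x in enumerate(X):
        P = mat_mul(P, M[i][x])
    return rank_mod_p(P)

assert product_rank(X_star) == n
assert product_rank([0, 0, 1, 1, 0]) < n     # differs from X* in one position
```

The paper's actual construction is more refined: the matrices are chosen in a coordinated way so that the distinguished component \(\beta_n\) of the image vector vanishes exactly for \(X \ne X^*\), but the rank intuition above is the core of it.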
Hence, we could hope to embed (part of) a challenge from a computational hardness assumption into \(\mathbf {b} _n\). For instance, to obtain a VRF secure under the Bilinear Decisional Diffie-Hellman (BDDH) assumption, one could set \(V_{{ sk }}(X)=e(G_{{ sk }}(X),g^{\alpha })^{\beta }\) for a pairing e and random \(\alpha ,\beta \). A BDDH challenge can then be embedded into \(\mathbf {b} _n\), \(\alpha \), and \(\beta \). (Of course, also validity proofs need to be adapted suitably.)
In the main part of the paper, we show how to generalize this idea simultaneously to adaptive security (with a semi-generic approach that employs admissible hash functions), and based on the \((n-1)\)-linear assumption for arbitrary \(n \ge 3\) (instead of the BDDH assumption).
We note that we pay a price for a reduction to a standard assumption: since our construction relies on matrix multiplication (instead of multiplication of exponents), it is less efficient than previous constructions. For instance, compared to Lysyanskaya’s VRF, our VRF has less compact proofs (by a factor of about n, when building on the \((n-1)\)-linear assumption), and requires more pairing operations (by a factor of about \(n^2\)) for verification.
Programmable Vector Hash Functions. The proof strategy sketched above is implemented by a new tool that we call programmable vector hash functions (PVHFs). Essentially, PVHFs can be seen as a variant of programmable hash functions of Hofheinz and Kiltz [27], which captures the “coordinated” setup of \(G_{{ sk }}\) described above in a modular building block. We hope that this building block will be useful for other cryptographic constructions.
More Related Work. VRFs were introduced by Micali, Rabin, and Vadhan [36]. Number-theoretic constructions of VRFs were described in [1, 2, 11, 19, 20, 28, 29, 35, 36]. Abdalla et al. [1, 2] also gave a generic construction from a special type of identity-based key encapsulation mechanism. Most of these either do not achieve full adaptive security for large input spaces, or are based on interactive complexity assumptions; the exceptions [2, 11, 28, 29] were mentioned above. We wish to avoid interactive assumptions to prevent circular arguments, as explained by Naor [39].
The notion of weak VRFs was proposed by Brakerski et al. [13], along with simple and efficient constructions, and proofs that neither VRFs, nor weak VRFs can be constructed (in a black-box way) from one-way permutations. Several works introduced related primitives, like simulatable VRFs [15] and constrained VRFs [25].
Other Approaches to Avoid \(Q\) -type Assumptions. One may ask whether the techniques presented by Chase and Meiklejohn [16], which in certain applications allow one to replace \(Q \)-type assumptions with constant-size subgroup hiding assumptions, give rise to alternative constructions of VRFs from constant-size assumptions. This technique is based on the dual-systems approach of Waters [43], and requires adding randomization to group elements. This randomization makes it difficult to construct VRFs that meet the unique provability requirement. Consequently, Chase and Meiklejohn were able to prove that the VRF of Dodis and Yampolskiy [20] forms a secure pseudorandom function under a static assumption, but not that it is a secure VRF.
Open Problems. The verifiable random functions constructed in this paper are relatively inefficient, when compared to the q-type-based constructions of [2, 11, 28, 29], for example. An interesting open problem is therefore the construction of more efficient VRFs from standard assumptions. In particular, it is not clear whether the constructions in this paper can also be instantiated from the SXDH assumption in asymmetric bilinear groups. This would potentially yield a construction with smaller matrices, and thus shorter proofs.
2 Certified Bilinear Group Generators
In order to be able to prove formally that a given verifiable random function satisfies uniqueness in the sense of Definition 8, we extend the notion of certified trapdoor permutations [7, 8, 32] to certified bilinear group generators. Previous works on verifiable random functions were more informal in this aspect, e.g., by requiring that group membership can be tested efficiently and that each group element has a unique representation.
Definition 1
A bilinear group generator is a probabilistic polynomial-time algorithm \(\mathsf {GrpGen}\) that takes as input a security parameter \(k\) (in unary) and outputs \(\varPi = (p,\mathbb {G},\mathbb {G} _T,\circ ,\circ _T,e,\phi (1)) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {GrpGen} (1^k)\) such that the following requirements are satisfied.
-
1.
p is prime and \(\log (p)\in \varOmega (k)\).
-
2.
\(\mathbb {G} \) and \(\mathbb {G} _T\) are subsets of \(\{0,1\} ^*\), defined by algorithmic descriptions of maps \(\phi : \mathbb {Z} _p \rightarrow \mathbb {G} \) and \(\phi _T : \mathbb {Z} _p \rightarrow \mathbb {G} _T\).
-
3.
\(\circ \) and \(\circ _T\) are algorithmic descriptions of efficiently computable (in the security parameter) maps \(\circ : \mathbb {G} \times \mathbb {G} \rightarrow \mathbb {G} \) and \(\circ _T : \mathbb {G} _T \times \mathbb {G} _T \rightarrow \mathbb {G} _T\), such that
-
(a)
\((\mathbb {G},\circ )\) and \((\mathbb {G} _T,\circ _T)\) form algebraic groups and
-
(b)
\(\phi \) is a group isomorphism from \((\mathbb {Z} _p,+)\) to \((\mathbb {G},\circ )\) and
-
(c)
\(\phi _T\) is a group isomorphism from \((\mathbb {Z} _p,+)\) to \((\mathbb {G} _T,\circ _T)\).
-
4.
e is an algorithmic description of an efficiently computable (in the security parameter) bilinear map \(e : \mathbb {G} \times \mathbb {G} \rightarrow \mathbb {G} _T\). We require that e is non-degenerate, that is,
$$\begin{aligned} x \ne 0 \implies e(\phi (x),\phi (x)) \ne \phi _T(0) \end{aligned}$$
Definition 2
We say that group generator \(\mathsf {GrpGen}\) is certified, if there exists a deterministic polynomial-time algorithm \(\mathsf {GrpVfy}\) with the following properties.
-
Parameter Validation. Given a string \(\varPi \) (which is not necessarily generated by \(\mathsf {GrpGen} \)), algorithm \(\mathsf {GrpVfy} (\varPi )\) outputs 1 if and only if \(\varPi \) has the form
$$\begin{aligned} \varPi = (p,\mathbb {G},\mathbb {G} _T,\circ ,\circ _T,e,\phi (1)) \end{aligned}$$and all requirements from Definition 1 are satisfied.
-
Recognition and Unique Representation of Elements of \(\mathbb {G}\) . Furthermore, we require that each element in \(\mathbb {G} \) has a unique representation, which can be efficiently recognized. That is, on input two strings \(\varPi \) and s, \(\mathsf {GrpVfy} (\varPi ,s)\) outputs 1 if and only if \(\mathsf {GrpVfy} (\varPi )=1\) and it holds that \(s = \phi (x)\) for some \(x \in \mathbb {Z} _p\). Here \(\phi : \mathbb {Z} _p \rightarrow \mathbb {G} \) denotes the fixed group isomorphism contained in \(\varPi \) to specify the representation of elements of \(\mathbb {G} \) (see Definition 1).
3 Programmable Vector Hash Functions
Notation. As explained in the introduction, for a vector \(\mathbf {u} = (u_1,\ldots ,u_n)^\top \in \mathbb {Z} _p^n\) we will write \(g^{\mathbf {u}}\) to denote the vector \(g^{\mathbf {u}} := (g^{u_1},\ldots ,g^{u_n})\), and we will generalize this notation to matrices in the obvious way. Moreover, whenever the reference to a group generator \(g \in \mathbb {G} \) is clear (note that a generator \(g = \phi (1)\) is always contained in the group parameters \(\varPi \) generated by \(\mathsf {GrpGen} \)), we will henceforth follow [22] and simplify our notation by writing \(\left[ x\right] := g^x \in \mathbb {G} \) for an integer \(x \in \mathbb {Z} _p\), \(\left[ \mathbf {u} \right] := g^{\mathbf {u}} \in \mathbb {G} ^n\) for a vector \(\mathbf {u} \in \mathbb {Z} _p^n\), and \(\left[ \mathbf {M} \right] := g^{\mathbf {M}}\in \mathbb {G} ^{n \times n}\) for a matrix \(\mathbf {M} \in \mathbb {Z} _p^{n \times n}\). We also extend our notation for bilinear maps: we write \(e([\mathbf {A} ],[\mathbf {B} ])\) (for matrices \(\mathbf {A} =(a_{i,j})_{i,j}\in \mathbbm {Z} _p^{n_1\times n_2}\) and \(\mathbf {B} =(b_{i,j})_{i,j}\in \mathbbm {Z} _p^{n_2\times n_3}\)) for the matrix whose (i, j)-th entry is \(\prod _{\ell =1}^{n_2}e([a_{i,\ell }],[b_{\ell ,j}])\). In other words, we have \(e([\mathbf {A} ],[\mathbf {B} ])=e(g,g)^{\mathbf {A} \mathbf {B} }\).
For a vector space \(\mathfrak {U}\subseteq \mathbbm {Z} _p^{n}\) of column vectors, we write \(\mathfrak {U}^\top :=\{\mathbf {u} ^\top \mid \mathbf {u} \in \mathfrak {U}\}\) for the respective set of row vectors. Furthermore, we write \(\mathfrak {U}^\top \cdot \mathbf {M} :=\{\mathbf {u} ^\top \cdot \mathbf {M} \mid \mathbf {u} ^\top \in \mathfrak {U}^\top \}\) for an element-wise vector-matrix multiplication. Finally, we denote with \(\mathrm {GL} _n(\mathbbm {Z} _p)\subset \mathbbm {Z} _p^{n\times n}\) the set of invertible n-by-n matrices over \(\mathbbm {Z} _p\). Recall that a uniformly random \(\mathbf {M} \in \mathbbm {Z} _p^{n\times n}\) is invertible except with probability at most n/p. (Hence, the uniform distributions on \(\mathrm {GL} _n(\mathbbm {Z} _p)\) and \(\mathbbm {Z} _p^{n\times n}\) are statistically close.)
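To illustrate the bracket notation and the pairing on matrices, the following toy Python example works in the order-11 subgroup of \(\mathbb {Z} _{23}^*\) and simulates a symmetric pairing by brute-forcing discrete logs (feasible only because the group is tiny); it checks the identity \(e([\mathbf {A} ],[\mathbf {B} ])=e(g,g)^{\mathbf {A} \mathbf {B}}\) entry by entry. All concrete numbers are illustrative, not parameters from the paper.

```python
import random

q, p, g = 23, 11, 4          # <g> is the subgroup of prime order 11 in Z_23^*
n = 2
random.seed(4)

def dlog(h):                 # brute-force discrete log; fine for 11 elements
    x = 1
    for e_ in range(p):
        if x == h:
            return e_
        x = (x * g) % q
    raise ValueError("not in <g>")

def pair(ga, gb):            # toy symmetric pairing: e(g^a, g^b) = e(g,g)^{ab}
    return pow(g, (dlog(ga) * dlog(gb)) % p, q)   # reuse g as e(g,g)

A = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
B = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
gA = [[pow(g, x, q) for x in row] for row in A]   # [A]
gB = [[pow(g, x, q) for x in row] for row in B]   # [B]

# e([A],[B]): the (i,j) entry is the product over l of e([a_il],[b_lj])
E = [[1] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        for l in range(n):
            E[i][j] = (E[i][j] * pair(gA[i][l], gB[l][j])) % q

# compare with e(g,g)^{AB}
AB = [[sum(A[i][l] * B[l][j] for l in range(n)) % p for j in range(n)]
      for i in range(n)]
assert E == [[pow(g, x, q) for x in row] for row in AB]
```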
3.1 Vector Hash Functions
Definition 3
Let \(\mathsf {GrpGen}\) be a group generator algorithm and let \(n \in \mathbb {N} \) be a positive integer. A verifiable vector hash function (VHF) for \(\mathsf {GrpGen}\) with domain \(\{0,1\} ^k\) and range \(\mathbb {G} ^n\) consists of algorithms \((\mathsf {Gen}_\mathsf {VHF},\mathsf {Eval}_\mathsf {VHF},\mathsf {Vfy}_\mathsf {VHF})\) with the following properties.
-
\(\mathsf {Gen}_\mathsf {VHF}\) takes as input parameters \(\varPi \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {GrpGen} (1^k)\) and outputs a verification key \( vk \) and an evaluation key \( ek \) as \(( vk , ek ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {Gen}_\mathsf {VHF} (\varPi )\).
-
\(\mathsf {Eval}_\mathsf {VHF}\) takes as input an evaluation key \( ek \) and a string \(X \in \{0,1\} ^k\). It outputs \((\left[ \mathbf {v} \right] , \pi ) \leftarrow \mathsf {Eval}_\mathsf {VHF} ( ek ,X)\), where \(\left[ \mathbf {v} \right] = (\left[ v_1\right] , \ldots , \left[ v_n\right] )^\top \in \mathbb {G} ^n\) is the function value and \(\pi \in \{0,1\} ^*\) is a corresponding proof of correctness.
-
\(\mathsf {Vfy}_\mathsf {VHF}\) takes as input a verification key \( vk \), vector \(\left[ \mathbf {v} \right] \in \mathbb {G} ^n\), proof \(\pi \in \{0,1\} ^*\), and \(X \in \{0,1\} ^k\), and outputs a bit: \(\mathsf {Vfy}_\mathsf {VHF} ( vk ,\left[ \mathbf {v} \right] ,\pi ,X) \in \{0,1\} \).
We require correctness and unique provability in the following sense.
-
Correctness. We say that \((\mathsf {Gen}_\mathsf {VHF},\mathsf {Eval}_\mathsf {VHF},\mathsf {Vfy}_\mathsf {VHF})\) is correct, if for all \(\varPi \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {GrpGen} (1^k)\), all \(( vk , ek ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {Gen}_\mathsf {VHF} (\varPi )\), and all \(X \in \{0,1\} ^k\) it holds that
$$\begin{aligned} \Pr \left[ \mathsf {Vfy}_\mathsf {VHF} ( vk ,\left[ \mathbf {v} \right] ,\pi ,X) = 1 : \begin{array}{l} ( vk , ek ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {Gen}_\mathsf {VHF} (\varPi ), \\ (\left[ \mathbf {v} \right] , \pi ) \leftarrow \mathsf {Eval}_\mathsf {VHF} ( ek ,X) \end{array} \right] = 1 \end{aligned}$$ -
Unique Provability. We say that a VHF has unique provability, if for all strings \( vk \in \{0,1\} ^*\) (not necessarily generated by \(\mathsf {Gen}_\mathsf {VHF}\)) and all \(X \in \{0,1\} ^k\) there does not exist any tuple \((\left[ \mathbf {v} _0\right] ,\pi _0,\left[ \mathbf {v} _1\right] ,\pi _1)\) with \(\left[ \mathbf {v} _0\right] \ne \left[ \mathbf {v} _1\right] \) and \(\left[ \mathbf {v} _0\right] , \left[ \mathbf {v} _1\right] \in \mathbb {G} ^n\) such that
$$\begin{aligned} \mathsf {Vfy}_\mathsf {VHF} ( vk ,\left[ \mathbf {v} _0\right] ,\pi _0,X) = \mathsf {Vfy}_\mathsf {VHF} ( vk ,\left[ \mathbf {v} _1\right] ,\pi _1,X) = 1 \end{aligned}$$
3.2 Selective Programmability
Definition 4
We say that VHF \((\mathsf {Gen}_\mathsf {VHF},\mathsf {Eval}_\mathsf {VHF},\mathsf {Vfy}_\mathsf {VHF})\) is selectively programmable, if additional algorithms \(\mathsf {Trap}_\mathsf {VHF} =(\mathsf {TrapGen}_\mathsf {VHF},\mathsf {TrapEval}_\mathsf {VHF})\) exist, with the following properties.
-
\(\mathsf {TrapGen}_\mathsf {VHF}\) takes group parameters \(\varPi \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {GrpGen} (1^k)\), matrix \(\left[ \mathbf {B} \right] \in \mathbb {G} ^{n \times n}\), and \(X^{(0)} \in \{0,1\} ^k\). It computes \(( vk , td ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {TrapGen}_\mathsf {VHF} (\varPi ,\left[ \mathbf {B} \right] ,X^{(0)})\), where \( vk \) is a verification key with corresponding trapdoor evaluation key \( td \).
-
\(\mathsf {TrapEval}_\mathsf {VHF}\) takes as input a trapdoor evaluation key \( td \) and a string \(X \in \{0,1\} ^k\). It outputs a vector \(\mathbf {\varvec{\upbeta }} \leftarrow \mathsf {TrapEval}_\mathsf {VHF} ( td ,X)\) with \(\mathbf {\varvec{\upbeta }} \in \mathbb {Z} _p^n\) and a proof \(\pi \in \{0,1\} ^*\).
We furthermore have the following requirements.
-
Correctness. For all \(\varPi \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {GrpGen} (1^k)\), all \(\left[ \mathbf {B} \right] \in \mathbb {G} ^{n \times n}\), and all \(X,X^{(0)} \in \{0,1\} ^k\) we have
$$\begin{aligned} \Pr \left[ \mathsf {Vfy}_\mathsf {VHF} ( vk ,\left[ \mathbf {v} \right] ,\pi ,X) = 1 : \begin{array}{l} ( vk , td ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {TrapGen}_\mathsf {VHF} (\varPi ,\left[ \mathbf {B} \right] ,X^{(0)}) \\ (\mathbf {\varvec{\upbeta }},\pi ) \leftarrow \mathsf {TrapEval}_\mathsf {VHF} ( td ,X)\\ \left[ \mathbf {v} \right] := \left[ \mathbf {B} \right] \cdot \mathbf {\varvec{\upbeta }} \end{array} \right] = 1 \end{aligned}$$ -
Indistinguishability. Verification keys generated by \(\mathsf {TrapGen}_\mathsf {VHF} \) are computationally indistinguishable from keys generated by \(\mathsf {Gen}_\mathsf {VHF}\). More precisely, we require that for all PPT algorithms \(\mathcal {A} = (\mathcal {A} _0,\mathcal {A} _1)\) the advantage \(\mathsf {Adv}^{\mathsf {vhf-sel-ind}}_{\mathsf {VHF},\mathsf {Trap}_\mathsf {VHF}} (k)\)
is negligible, where oracles \(\mathcal {O} _0\) and \(\mathcal {O} _1\) are defined in Fig. 1.
-
Well-distributed Outputs. Let \(q = q(k) \in \mathbb {N} \) be a polynomial, and let \(\beta ^{(i)}_n\) denote the n-th coordinate of vector \(\mathbf {\varvec{\upbeta }} ^{(i)}\in \mathbb {Z} _p^n\). There exists a polynomial \(\mathsf {poly}\) such that for all \((X^{(0)}, \ldots , X^{(q)}) \in (\{0,1\} ^k)^{q+1}\) with \(X^{(0)} \ne X^{(i)}\) for \(i \ge 1\) it holds that
We note that in our security definitions, \(\mathbf {B} \) is always a random invertible matrix, although \(\mathsf {TrapGen}_\mathsf {VHF} \) would also work on arbitrary \(\mathbf {B} \).
Furthermore, note that we only require a noticeable “success probability” in our “well-distributed outputs” requirement above. This is sufficient for our application; however, our (selectively secure) PVHF construction achieves a success probability of 1. (On the other hand, our adaptively secure construction only achieves well-distributedness in the sense above, with a significantly lower – but of course still noticeable – success probability.)
3.3 Adaptive Programmability
Definition 5
We say that VHF \((\mathsf {Gen}_\mathsf {VHF},\mathsf {Eval}_\mathsf {VHF},\mathsf {Vfy}_\mathsf {VHF})\) is (adaptively) programmable, if algorithms \(\mathsf {Trap}_\mathsf {VHF} =(\mathsf {TrapGen}_\mathsf {VHF},\mathsf {TrapEval}_\mathsf {VHF})\) exist, which have exactly the same syntax and requirements on correctness, indistinguishability, and well-distributed outputs as in Definition 4, with the following differences:
-
\(\mathsf {TrapGen}_\mathsf {VHF} (\varPi ,\left[ \mathbf {B} \right] )\) does not take an additional string \(X^{(0)}\) as input.
-
In the indistinguishability experiment, \(\mathcal {A} _0\) is the trivial algorithm, which outputs the empty string \(\emptyset \), while \(\mathcal {A} _1\) additionally gets access to oracle \(\mathcal {O}_\mathsf {check} \) (see Fig. 1). We stress that this oracle always uses \( td \) to compute its output, independently of \(\overline{b}\). We denote with \(\mathsf {Adv}^{\mathsf {vhf-ad-ind}}_{\mathsf {VHF},\mathsf {Trap}_\mathsf {VHF}} (k)\) the corresponding advantage function.
4 A PVHF Based on the Matrix-DDH Assumption
Overview. In this section, we present a programmable vector hash function, whose security is based upon the “Matrix-DDH” assumption introduced in [22] (which generalizes the matrix-DDH assumption of Boneh et al. [12] and the matrix d-linear assumption of Naor and Segev [41]). This assumption can be viewed as a relaxation of the \((n-1)\)-linear assumption, so that in particular our construction will be secure under the \((n-1)\)-linear assumption with \(n \ge 3\).
Assumption 6
The n-rank assumption states that \([\mathbf {M} _{n-1}] \mathop {\approx }\limits ^{\mathrm {c}} [\mathbf {M} _{n}]\), where \(\mathbf {M} _{i}\in \mathbbm {Z} _p^{n\times n}\) is a uniformly distributed rank-i matrix, i.e., that the advantage
$$\begin{aligned} \mathsf {Adv}^{n\text {-rank}}_{\mathcal {A}}(k) := \Pr \left[ \mathcal {A} (1^k,[\mathbf {M} _{n-1}]) = 1\right] - \Pr \left[ \mathcal {A} (1^k,[\mathbf {M} _{n}]) = 1\right] \end{aligned}$$
is negligible for every PPT adversary \(\mathcal {A}\).
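A standard way to sample a (close to) uniformly random rank-i matrix over \(\mathbb {Z} _p\), as used in the n-rank assumption, is to multiply a random \(n \times i\) matrix by a random \(i \times n\) matrix, retrying until the product really has rank i. The Python sketch below does this with toy parameters (our own choices). Note that with the matrix given in the clear, as here, computing the rank is easy; the assumption is about distinguishing the group encodings \([\mathbf {M} ]\) only.

```python
import random

p, n = 101, 4
random.seed(5)

def rank_mod_p(M):
    """Rank over Z_p via Gaussian elimination."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [(M[i][j] - M[i][c] * M[r][j]) % p for j in range(cols)]
        r += 1
    return r

def sample_rank(i):
    """Sample a uniformly random n x n matrix of rank exactly i."""
    while True:
        A = [[random.randrange(p) for _ in range(i)] for _ in range(n)]
        B = [[random.randrange(p) for _ in range(n)] for _ in range(i)]
        M = [[sum(A[r][l] * B[l][c] for l in range(i)) % p
              for c in range(n)] for r in range(n)]
        if rank_mod_p(M) == i:
            return M

assert rank_mod_p(sample_rank(n - 1)) == n - 1
assert rank_mod_p(sample_rank(n)) == n
```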
4.1 The Construction
Assume a bilinear group generator \(\mathsf {GrpGen}\) and an integer \(n\in \mathbb {N} \) as above. Consider the following vector hash function \(\mathsf {VHF}\):
-
\(\mathsf {Gen}_\mathsf {VHF} (\varPi )\) uniformly chooses \(2k\) invertible matrices \(\mathbf {M} _{i,b}\in \mathbb {Z} _p^{n\times n}\) (for \(1\le i\le k\) and \(b\in \{0,1\}\)) and a nonzero vector \(\mathbf {u} ^\top \in \mathbb {Z} _p^{n}\setminus \{0\}\). The output is \(( vk , ek )\) with
$$\begin{aligned} vk&= \big ( ([\mathbf {M} _{i,b}])_{1\le i\le k,b\in \{0,1\}}, [\mathbf {u} ] \big )&ek&= \big ( (\mathbf {M} _{i,b})_{1\le i\le k,b\in \{0,1\}}, \mathbf {u} \big ). \end{aligned}$$ -
\(\mathsf {Eval}_\mathsf {VHF} ( ek ,X)\) (for \(X=(x_1,\dots ,x_{k})\)) computes and outputs an image \([\mathbf {v} ]=[\mathbf {v} _{k}]\in \mathbbm {G} ^{n}\) and a proof \(\pi =([\mathbf {v} _1],\dots ,[\mathbf {v} _{k-1}])\in (\mathbbm {G} ^{n})^{k-1}\), where
$$\begin{aligned} \mathbf {v} _i^\top = \mathbf {u} ^\top \cdot \prod _{j=1}^i\mathbf {M} _{j,x_j} . \end{aligned}$$(1) -
\(\mathsf {Vfy}_\mathsf {VHF} ( vk ,[\mathbf {v} ],\pi ,X)\) outputs 1 if and only if
$$\begin{aligned} e ( [\mathbf {v} _i^\top ], [1] )&= e ( [\mathbf {v} _{i-1}^\top ],[\mathbf {M} _{i,x_i}]), \end{aligned}$$(2)holds for all i with \(1\le i\le k\), where we set \([\mathbf {v} _0]:=[\mathbf {u} ]\) and \([\mathbf {v} _{k}]:=[\mathbf {v} ]\).
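The following is a minimal Python sketch of this construction, with group elements replaced by their discrete logs modulo a toy prime, so that the pairing check (2) becomes the vector-matrix equation \(\mathbf {v} _i^\top = \mathbf {v} _{i-1}^\top \cdot \mathbf {M} _{i,x_i}\) on logarithms. The parameters p, n, k and all names are illustrative only; the real scheme publishes these values in the exponent.

```python
import random

p, n, k = 101, 2, 6
random.seed(6)

def vec_mat(v, M):                      # row vector times matrix, mod p
    return tuple(sum(v[i] * M[i][j] for i in range(n)) % p for j in range(n))

def invertible():                       # rejection-sample an invertible matrix
    while True:
        M = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
        if (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % p:   # det != 0 (n = 2)
            return M

def gen():
    M = [[invertible(), invertible()] for _ in range(k)]
    u = (0,) * n
    while u == (0,) * n:
        u = tuple(random.randrange(p) for _ in range(n))
    return M, u       # in the real scheme, vk holds these "in the exponent"

def evaluate(M, u, X):
    v, partial = u, []
    for i, x in enumerate(X):
        v = vec_mat(v, M[i][x])
        partial.append(v)
    return partial[-1], partial[:-1]    # image [v] and proof ([v_1..v_{k-1}])

def vfy(M, u, v, proof, X):
    chain, prev = proof + [v], u
    for i, x in enumerate(X):
        if chain[i] != vec_mat(prev, M[i][x]):   # Eq. (2), on logarithms
            return False
        prev = chain[i]
    return True

M, u = gen()
X = [1, 0, 0, 1, 1, 0]
v, proof = evaluate(M, u, X)
assert vfy(M, u, v, proof, X)
bad = list(proof); bad[2] = tuple((c + 1) % p for c in bad[2])
assert not vfy(M, u, v, bad, X)          # a tampered proof is rejected
```

Observe that each chain element is forced by the previous one, which is exactly the induction behind Theorem 1 below.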
Theorem 1
(Correctness and Uniqueness of \(\mathsf {VHF}\) ). \(\mathsf {VHF}\) is a vector hash function. In particular, \(\mathsf {VHF}\) satisfies the correctness and uniqueness conditions from Definition 3.
Proof
First, note that (2) is equivalent to \(\mathbf {v} _i^{\top }=\mathbf {v} _{i-1}^\top \cdot \mathbf {M} _{i,x_i}\). By induction, it follows that (2) holds for all i if and only if \(\mathbf {v} _i^{\top }=\mathbf {u} ^\top \cdot \prod _{j=1}^i\mathbf {M} _{j,x_j}\) for all i. By definition of \(\mathsf {Eval}_\mathsf {VHF}\), this yields correctness. Furthermore, we get that \(\mathsf {Vfy}_\mathsf {VHF}\) outputs 1 for precisely one value \([\mathbf {v} ]=[\mathbf {v} _{k}]\) (even if the \(\mathbf {M} _{i,b}\) are not invertible). In fact, the proof \(\pi \) is uniquely determined by \( vk \) and X.
4.2 Selective Security
We proceed to show the selective security of \(\mathsf {VHF}\):
Theorem 2
(Selective Programmability of \(\mathsf {VHF}\) ). \(\mathsf {VHF}\) is selectively programmable in the sense of Definition 4.
The Trapdoor Algorithms. We split up the proof of Theorem 2 into three lemmas (that show correctness, well-distributed outputs, and indistinguishability of \(\mathsf {VHF}\)). But first, we define the corresponding algorithms \(\mathsf {TrapGen}_\mathsf {VHF}\) and \(\mathsf {TrapEval}_\mathsf {VHF}\).
-
\(\mathsf {TrapGen}_\mathsf {VHF} (\varPi ,[\mathbf {B} ],X^{(0)})\) first chooses \(k+1\) subspaces \(\mathfrak {U}_{i}\) of \(\mathbb {Z} _p^{n}\) (for \(0\le i\le k\)), each of dimension \(n-1\). Specifically,
-
the first \(k\) subspaces \(\mathfrak {U}_{i}\) (for \(0\le i\le k-1\)) are chosen independently and uniformly,
-
the last subspace \(\mathfrak {U}_{k}\) is the subspace spanned by the first \(n-1\) unit vectors. (That is, \(\mathfrak {U}_{k}\) contains all vectors whose last component is 0.)
Next, \(\mathsf {TrapGen}_\mathsf {VHF}\) uniformly chooses \(\mathbf {u} \in \mathbbm {Z} _p^n\setminus \mathfrak {U}_{0}\) and \(2k\) matrices \(\mathbf {R} _{i,b}\) (for \(1\le i\le k\) and \(b\in \{0,1\}\)), as follows:
$$\begin{aligned} \begin{aligned} \mathbf {R} _{i,1-x_i^{(0)}}&\text { uniformly of rank }n-1\,\, \text {subject to }&\mathfrak {U}_{i-1}^\top \cdot \mathbf {R} _{i,1-x_i^{(0)}}&=\mathfrak {U}_{i}^\top \\ \mathbf {R} _{i,x_i^{(0)}}&\text { uniformly of rank }n\,\, \text {subject to }&\mathfrak {U}_{i-1}^\top \cdot \mathbf {R} _{i,x_i^{(0)}}&=\mathfrak {U}_{i}^\top . \end{aligned} \end{aligned}$$(3)Finally, \(\mathsf {TrapGen}_\mathsf {VHF}\) sets
$$\begin{aligned} \begin{aligned}&\qquad \qquad \qquad \qquad \;\,\mathbf {M} _{i,b} = \mathbf {R} _{i,b} \quad \text {for }1\le i\le k-1 \\&[\mathbf {M} _{k,0}] = [\mathbf {R} _{k,0}\cdot \mathbf {B} ^\top ] \qquad \qquad [\mathbf {M} _{k,1}] = [\mathbf {R} _{k,1}\cdot \mathbf {B} ^\top ], \end{aligned} \end{aligned}$$(4)and outputs
$$\begin{aligned} td&= \big ( (\mathbf {R} _{i,b})_{i\in [k-1],b\in \{0,1\}}, \mathbf {u}, [\mathbf {B} ] \big )&vk&= \big ( ([\mathbf {M} _{i,b}])_{i\in [k],b\in \{0,1\}}, [\mathbf {u} ] \big ). \end{aligned}$$ -
-
\(\mathsf {TrapEval}_\mathsf {VHF} ( td ,X)\) first computes an image \([\mathbf {v} ]=[\mathbf {v} _{k}]\), along with a corresponding proof \(\pi =[\mathbf {v} _1,\dots ,\mathbf {v} _{k-1}]\) exactly like \(\mathsf {Eval}_\mathsf {VHF}\), using (1). (Note that \(\mathsf {TrapEval}_\mathsf {VHF}\) can compute all \([\mathbf {v} _i]\) from its knowledge of \(\mathbf {u} \), all \(\mathbf {R} _{i,b}\), and \([\mathbf {B} ]\).) Next, observe that the image \([\mathbf {v} ]\) satisfies
$$\begin{aligned} \mathbf {v} ^\top&= \mathbf {v} _{k}^\top = \mathbf {u} ^\top \cdot \prod _{j=1}^{k}\mathbf {M} _{j,x_j} = \Big (\underbrace{\mathbf {u} ^\top \cdot \prod _{j=1}^{k}\mathbf {R} _{j,x_j}}_{=:\mathbf {\varvec{\upbeta }} ^\top } \Big ) \cdot \mathbf {B} ^\top . \end{aligned}$$(5)Hence, \(\mathsf {TrapEval}_\mathsf {VHF}\) outputs \((\mathbf {\varvec{\upbeta }},\pi )\).
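To make the chain computation above concrete, here is a toy sketch (illustrative Python with small, hypothetical parameters) that evaluates \(\mathbf {v} _i^\top =\mathbf {u} ^\top \cdot \prod _{j=1}^i\mathbf {M} _{j,x_j}\) directly over \(\mathbb {Z} _p\); the real algorithms perform the same products on group encodings "in the exponent".

```python
# Toy sketch of the Eval/TrapEval chain v_i^T = u^T * prod_{j<=i} M_{j,x_j} over Z_p.
# All parameters are illustrative; the real scheme only manipulates encodings.
import random

p = 10007      # small prime stand-in for the group order
n, k = 4, 6    # vector dimension and input length

def vec_mat(v, M):
    """Row vector times n x n matrix over Z_p."""
    return [sum(v[i] * M[i][j] for i in range(n)) % p for j in range(n)]

def eval_chain(u, matrices, X):
    """Return the image v_k and the proof (v_1, ..., v_{k-1})."""
    v, chain = u[:], []
    for i, bit in enumerate(X):
        v = vec_mat(v, matrices[i][bit])
        chain.append(v)
    return chain[-1], chain[:-1]

random.seed(0)
u = [random.randrange(1, p) for _ in range(n)]
matrices = [[[[random.randrange(p) for _ in range(n)] for _ in range(n)]
             for _b in range(2)] for _i in range(k)]
X = [1, 0, 1, 1, 0, 0]
image, proof = eval_chain(u, matrices, X)
```

Note how each pairing check \(e ( [\mathbf {v} _i^\top ], [1] ) = e ( [\mathbf {v} _{i-1}^\top ],[\mathbf {M} _{i,x_i}])\) corresponds, in this transparent model, to simply recomputing one step \(\mathbf {v} _i^\top =\mathbf {v} _{i-1}^\top \cdot \mathbf {M} _{i,x_i}\).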
Lemma 1
(Correctness of \(\mathsf {Trap}_\mathsf {VHF} \) ). The trapdoor algorithms \(\mathsf {TrapGen}_\mathsf {VHF} \) and \(\mathsf {TrapEval}_\mathsf {VHF} \) above satisfy correctness in the sense of Definition 4.
Proof
This follows directly from (5).
Lemma 2
(Well-distributedness of \(\mathsf {Trap}_\mathsf {VHF} \) ). The above trapdoor algorithms \(\mathsf {TrapGen}_\mathsf {VHF} \) and \(\mathsf {TrapEval}_\mathsf {VHF} \) enjoy well-distributed outputs in the sense of Definition 4.
Proof
Fix any preimage \(X^{(0)}=(x^{(0)}_i)_{i=1}^{k}\in \{0,1\}^{k}\), matrix \([\mathbf {B} ]\in \mathbb {G} ^{n\times n}\), and corresponding keypair \(( td , vk )\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {TrapGen}_\mathsf {VHF} (\varPi ,[\mathbf {B} ],X^{(0)})\). We will show first that for all \(X=(x_i)_{i=1}^{k}\in \{0,1\}^{k}\), the corresponding vectors \(\mathbf {v} _i^\top \) computed during evaluation satisfy
$$\begin{aligned} \mathbf {u} ^\top \cdot \prod _{j=1}^{i}\mathbf {R} _{j,x_j}\in \mathfrak {U}_{i}^\top \quad \Longleftrightarrow \quad x_j\ne x^{(0)}_j\ \text {for some}\ j\le i. \end{aligned}$$(6)
Equation (6) can be proven by induction over i. The case \(i=0\) follows from the setup of \(\mathbf {u} \notin \mathfrak {U}_{0}\). For the induction step, assume (6) holds for \(i-1\). To show (6) for i, we distinguish two cases:
-
If \(x_i=x^{(0)}_i\), then \(\mathbf {R} _{i,x_i}\) has full rank and maps \(\mathfrak {U}_{i-1}^\top \) to \(\mathfrak {U}_{i}^\top \). Thus, \(\mathbf {u} ^\top \cdot \prod _{j=1}^i\mathbf {R} _{j,x_j}\in \mathfrak {U}_{i}^\top \) if and only if \(\mathbf {u} ^\top \cdot \prod _{j=1}^{i-1}\mathbf {R} _{j,x_j}\in \mathfrak {U}_{i-1}^\top \). By the induction hypothesis, and since \(x_i=x^{(0)}_i\), this holds if and only if \(x_j\ne x^{(0)}_j\) for some \(j\le i\). This shows (6).
-
If \(x_i\ne x^{(0)}_i\), then \(\mathbf {R} _{i,x_i}\) has rank \(n-1\). Together with \(\mathfrak {U}_{i-1}^\top \cdot \mathbf {R} _{i,x_i}=\mathfrak {U}_{i}^\top \), this implies that in fact \((\mathbb {Z} _p^{n})^\top \cdot \mathbf {R} _{i,x_i}=\mathfrak {U}_{i}^\top \). Hence, both directions of (6) hold.
This shows that (6) holds for all i. In particular, if we write \(\mathbf {\varvec{\upbeta }} ^\top =\mathbf {u} ^\top \cdot \prod _{j=1}^{k}\mathbf {R} _{j,x_j}\) (as in (5)), then \(\mathbf {\varvec{\upbeta }} \in \mathfrak {U}_{k}\) if and only if \(X\ne X^{(0)}\). By definition of \(\mathfrak {U}_{k}\), this means that \(\beta _n=0\Leftrightarrow X\ne X^{(0)}\). Well-distributedness as in Definition 4 follows.
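The case distinction in this proof can be checked numerically. The following sketch (illustrative Python; hypothetical small parameters) fixes every subspace \(\mathfrak {U}_{i}\) to the span of the first \(n-1\) unit vectors — a simplification, since \(\mathsf {TrapGen}_\mathsf {VHF}\) samples the intermediate subspaces uniformly — and confirms that \(\beta _n=0\) exactly when \(X\ne X^{(0)}\).

```python
# Numeric sanity check of Lemma 2 over Z_p, simplified so that every subspace
# U_i is span(e_1, ..., e_{n-1}); this fixed choice serves only the check.
import random

p, n, k = 10007, 3, 4
random.seed(1)

def vec_mat(v, M):
    return [sum(v[i] * M[i][j] for i in range(n)) % p for j in range(n)]

def rand_invertible(m):
    """Rejection-sample an invertible m x m matrix mod p."""
    while True:
        M = [[random.randrange(p) for _ in range(m)] for _ in range(m)]
        A, r = [row[:] for row in M], 0
        for c in range(m):
            piv = next((i for i in range(r, m) if A[i][c]), None)
            if piv is None:
                break
            A[r], A[piv] = A[piv], A[r]
            inv = pow(A[r][c], p - 2, p)
            for i in range(r + 1, m):
                f = A[i][c] * inv % p
                A[i] = [(A[i][j] - f * A[r][j]) % p for j in range(m)]
            r += 1
        if r == m:
            return M

def full_rank_R():
    """Invertible R with U^T . R = U^T: zero last column above a nonzero entry."""
    A = rand_invertible(n - 1)
    R = [A[i] + [0] for i in range(n - 1)]
    R.append([random.randrange(p) for _ in range(n - 1)] + [random.randrange(1, p)])
    return R

def low_rank_R():
    """Rank-(n-1) R mapping all of (Z_p^n)^T into U^T: last column all zero."""
    A = rand_invertible(n - 1)
    R = [A[i] + [0] for i in range(n - 1)]
    R.append([random.randrange(p) for _ in range(n - 1)] + [0])
    return R

X0 = [0, 1, 1, 0]
R = [{b: full_rank_R() if b == X0[i] else low_rank_R() for b in (0, 1)}
     for i in range(k)]
u = [random.randrange(p) for _ in range(n - 1)] + [random.randrange(1, p)]  # u not in U_0

def beta(X):
    v = u[:]
    for i, bit in enumerate(X):
        v = vec_mat(v, R[i][bit])
    return v
```

On the matching input \(X^{(0)}\) every factor is full rank and preserves "being outside \(\mathfrak {U}\)", so the last coordinate stays nonzero; any mismatch applies a rank-\((n-1)\) matrix that collapses everything into \(\mathfrak {U}\), and it stays there.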
Lemma 3
(Indistinguishability of \(\mathsf {Trap}_\mathsf {VHF} \) ). If the \(n\)-rank assumption holds relative to \(\mathsf {GrpGen}\), then the above algorithms \(\mathsf {TrapGen}_\mathsf {VHF} \) and \(\mathsf {TrapEval}_\mathsf {VHF} \) satisfy the indistinguishability property from Definition 4. Specifically, for every adversary \(\mathcal {A}\), there exists an adversary \(\mathcal {B}\) (of roughly the same complexity) with
$$\begin{aligned} \mathsf {Adv}^{n\text {-rank}}_{\mathsf {GrpGen},\mathcal {B}}(k) \;\ge \; \frac{1}{k}\cdot \mathsf {Adv}^{\mathsf {sel\text {-}ind}}_{\mathsf {VHF},\mathcal {A}}(k)-\frac{2}{p}. \end{aligned}$$
Proof
Fix an adversary \(\mathcal {A}\). We proceed in games.
Game 0. Game 0 is identical to the indistinguishability game with \(\overline{b}=0\). In this game, \(\mathcal {A}\) first selects a “target preimage” \(X^{(0)}\), and then gets a verification key \( vk \) generated by \(\mathsf {Gen}_\mathsf {VHF}\), and oracle access to an evaluation oracle \(\mathcal {O}\). Let \(G_{0} \) denote \(\mathcal {A}\) ’s output in this game. (More generally, let \(G_{i} \) denote \(\mathcal {A}\) ’s output in \(\text {Game}\,\, i\).) Our goal will be to gradually change this setting such that finally, \( vk \) is generated by \(\mathsf {TrapGen}_\mathsf {VHF} (\varPi ,[\mathbf {B} ],X^{(0)})\) (for an independently uniform invertible \(\mathbf {B} \)), and \(\mathcal {O}\) uses the corresponding trapdoor to generate images and proofs. Of course, \(\mathcal {A}\) ’s output must remain the same (or change only negligibly) during these transitions.
Game 1.\(\ell \) (for \(0\le \ell \le k\) ). In \(\text {Game 1}.\ell \), \( vk \) is generated in part as by \(\mathsf {TrapGen}_\mathsf {VHF}\), and in part as by \(\mathsf {Gen}_\mathsf {VHF}\). (\(\mathcal {O}\) is adapted accordingly.) Specifically, \(\text {Game 1}.\ell \) proceeds like Game 0, except for the following changes:
-
Initially, the game chooses \(\ell +1\) subspaces \(\mathfrak {U}_{i}\) (for \(0\le i\le \ell \)) of dimension \(n-1\) independently and uniformly, and picks \(\mathbf {u} \in \mathbbm {Z} _p^n\setminus \mathfrak {U}_{0}\). (Note that, unlike in an execution of \(\mathsf {TrapGen}_\mathsf {VHF}\), \(\mathfrak {U}_{k}\) is also chosen uniformly when \(\ell =k\).)
-
Next, the game chooses \(2k\) matrices \(\mathbf {R} _{i,b}\) (for \(1\le i\le k\) and \(b\in \{0,1\}\)), as follows. For \(i\le \ell \), the \(\mathbf {R} _{i,b}\) are chosen as by \(\mathsf {TrapGen}_\mathsf {VHF}\), and thus conform to (3). For \(i>\ell \), the \(\mathbf {R} _{i,b}\) are chosen uniformly and independently (but invertible).
-
Finally, the game sets up \(\mathbf {M} _{i,b}:=\mathbf {R} _{i,b}\) for all i, b. (Again, note the slight difference to \(\mathsf {TrapGen}_\mathsf {VHF}\), which follows (4).)
The game hands the resulting verification key \( vk \) to \(\mathcal {A}\); since all \(\mathbf {M} _{i,b}\) are known over \(\mathbb {Z} _p\), oracle \(\mathcal {O}\) can be implemented as \(\mathsf {Eval}_\mathsf {VHF}\).
Now let us take a closer look at the individual games \(\text {Game 1}.\ell \). First, observe that \(\text {Game 1}.0\) is essentially Game 0: all \(\mathbf {M} _{i,b}\) are chosen independently and uniformly, and \(\mathcal {O}\) calls are answered in the only possible way (given \( vk \)). The only difference is that \(\mathbf {u}\) is chosen independently uniformly from \(\mathbb {Z} _p^{n}\setminus \{0\}\) in Game 0, and from \(\mathbb {Z} _p^{n}\setminus \mathfrak {U}_{0}\) (for a uniform dimension-\((n-1)\) subspace \(\mathfrak {U}_{0}\)) in \(\text {Game 1}.0\). However, both choices lead to the same distribution of \(\mathbf {u}\), so we obtain
$$\begin{aligned} \Pr \left[ G_{0} =1\right] = \Pr \left[ G_{1.0} =1\right] . \end{aligned}$$(7)
Next, we investigate the change from \(\text {Game 1}.(\ell -1)\) to \(\text {Game 1}.\ell \). We claim the following:
Lemma 4
There is an adversary \(\mathcal {B}\) on the \(n\)-rank problem with
$$\begin{aligned} \mathsf {Adv}^{n\text {-rank}}_{\mathsf {GrpGen},\mathcal {B}}(k) \;\ge \; \frac{1}{k}\cdot \sum _{\ell =1}^{k}\Big (\Pr \left[ G_{1.(\ell -1)} =1\right] -\Pr \left[ G_{1.\ell } =1\right] \Big )-\frac{2}{p}. \end{aligned}$$(8)
We postpone a proof of Lemma 4 until after the main proof.
Game 2. Finally, in Game 2, we slightly change the way \(\mathfrak {U}_{k}\) and the matrices \(\mathbf {M} _{k,b}\) (for \(b\in \{0,1\}\)) are set up:
-
Instead of setting up \(\mathfrak {U}_{k}\) uniformly (like all other \(\mathfrak {U}_{i}\)), we set up \(\mathfrak {U}_{k}\) like \(\mathsf {TrapGen}_\mathsf {VHF}\) would (i.e., as the subspace spanned by the first \(n-1\) unit vectors).
-
Instead of setting up \(\mathbf {M} _{k,b}=\mathbf {R} _{k,b}\), we set \(\mathbf {M} _{k,b}=\mathbf {R} _{k,b}\cdot \mathbf {B} ^\top \) for an independently and uniformly chosen invertible \(\mathbf {B}\) , exactly like \(\mathsf {TrapGen}_\mathsf {VHF}\) would.
Observe that since \(\mathbf {B}\) is invertible, these modifications do not alter the distribution of the matrices \(\mathbf {M} _{k,b}\) (compared to \(\text {Game 1}.k\)). Indeed, in both cases, both \(\mathbf {M} _{k,b}\) map \(\mathfrak {U}_{k-1}^\top \) to the same uniformly chosen \((n-1)\)-dimensional subspace. In \(\text {Game 1}.k\), this subspace is \(\mathfrak {U}_{k}\), while in Game 2, it is the subspace spanned by the first \(n-1\) columns of \(\mathbf {B}\) . We obtain:
$$\begin{aligned} \Pr \left[ G_{1.k} =1\right] = \Pr \left[ G_{2} =1\right] . \end{aligned}$$
Finally, it is left to observe that Game 2 is identical to the indistinguishability experiment with \(\overline{b}=1\): \( vk \) is prepared exactly as with \(\mathsf {TrapGen}_\mathsf {VHF} (\varPi ,[\mathbf {B} ],X^{(0)})\) for a random \(\mathbf {B}\) , and \(\mathcal {O}\) outputs the images and proofs uniquely determined by \( vk \). Hence, \(\Pr \left[ G_{2} =1\right] \) equals the probability that \(\mathcal {A}\) outputs 1 in the indistinguishability experiment with \(\overline{b}=1\), as desired.
It remains to prove Lemma 4.
Proof
(Proof of Lemma 4 ). We describe an adversary \(\mathcal {B}\) on the \(n\)-rank problem. \(\mathcal {B}\) gets as input a matrix \([\mathbf {A} ]\) “in the exponent,” such that \(\mathbf {A} \) is either of rank \(n\), or of rank \(n-1\). Initially, \(\mathcal {B}\) uniformly picks \(\ell \in \{1,\dots ,k\}\). Our goal is to construct \(\mathcal {B}\) such that it internally simulates \(\text {Game 1}.(\ell -1)\) or \(\text {Game 1}.\ell \), depending on \(\mathbf {A}\) ’s rank. To this end, \(\mathcal {B}\) sets up \( vk \) as follows:
-
Like \(\text {Game 1}.(\ell -1)\), \(\mathcal {B}\) chooses \(\ell \) subspaces \(\mathfrak {U}_{0},\dots ,\mathfrak {U}_{\ell -1}\), and \(\mathbf {u} \in \mathbbm {Z} _p^n\setminus \mathfrak {U}_{0}\) uniformly.
-
For \(i<\ell \), \(\mathcal {B}\) chooses matrices \(\mathbf {R} _{i,b}\) like \(\mathsf {TrapGen}_\mathsf {VHF}\) does, ensuring (3). For \(i>\ell \), all \(\mathbf {R} _{i,b}\) are chosen independently and uniformly but invertible. The case \(i=\ell \) is more complicated and will be described next.
-
To set up \(\mathbf {M} _{\ell ,0}\) and \(\mathbf {M} _{\ell ,1}\), \(\mathcal {B}\) first asks \(\mathcal {A}\) for its challenge input \(X^{(0)}=(x^{(0)}_i)_{i=1}^k\). Next, \(\mathcal {B}\) embeds its own challenge \([\mathbf {A} ]\) as \([\mathbf {R} _{\ell ,1-x^{(0)}_\ell }]:=[\mathbf {A} ]\). To construct an \([\mathbf {R} _{\ell ,x^{(0)}_\ell }]\) that achieves (3) (for \(i=\ell \)), \(\mathcal {B}\) first uniformly chooses a basis \(\{\mathbf {c} _1,\dots ,\mathbf {c} _{n}\}\) of \(\mathbb {Z} _p^{n}\), such that \(\{\mathbf {c} _1,\dots ,\mathbf {c} _{n-1}\}\) forms a basis of \(\mathfrak {U}_{\ell -1}\). (Note that \(\mathcal {B}\) chooses the subspace \(\mathfrak {U}_{\ell -1}\) on its own and over \(\mathbbm {Z} _p\), so this is possible efficiently for \(\mathcal {B}\).) In the sequel, let \(\mathbf {C}\) be the matrix whose i-th row is \(\mathbf {c} _i^\top \), and let \(\mathbf {C} ^{-1}\) be the inverse of \(\mathbf {C} \). Jumping ahead, the purpose of \(\mathbf {C} ^{-1}\) is to help translate vectors from \(\mathfrak {U}_{\ell -1}\) (as obtained through a partial product \(\mathbf {u} ^\top \prod _{j=1}^{\ell -1}\mathbf {M} _{j,x_j}\)) to a “more accessible” form.
Next, \(\mathcal {B}\) samples \(n-1\) random vectors \([\mathbf {c} '_i]\) (for \(1\le i\le n-1\)) in the image of \([\mathbf {A} ]\) (e.g., by choosing random \(\mathbf {r} _i^\top \) and setting \([\mathbf {c} '_i]=\mathbf {r} _i^\top \cdot [\mathbf {A} ]\)). Furthermore, \(\mathcal {B}\) samples \(\mathbf {c} '_{n}\in \mathbb {Z} _p^{n}\) randomly. Let \([\mathbf {C} ']\) be the matrix whose i-th row is \([{\mathbf {c} '_i}^\top ]\). The purpose of \(\mathbf {C} '\) is to define the image of \(\mathbf {R} _{\ell ,x^{(0)}_\ell }\). Specifically, \(\mathcal {B}\) computes
$$\begin{aligned}{}[\mathbf {R} _{\ell ,x^{(0)}_\ell }]\;:=\;\mathbf {C} ^{-1}\cdot [\mathbf {C} ']. \end{aligned}$$(Note that \(\mathcal {B}\) can compute \([\mathbf {R} _{\ell ,x^{(0)}_\ell }]\) efficiently, since \(\mathbf {C} ^{-1}\) is known “in the clear.”) We will show below that, depending on the rank of \(\mathbf {A} \), either \(\mathfrak {U}_{\ell -1}^\top \cdot \mathbf {R} _{\ell ,x^{(0)}_\ell }=\mathfrak {U}_{\ell -1}^\top \cdot \mathbf {A} \), or \(\mathfrak {U}_{\ell -1}^\top \cdot \mathbf {R} _{\ell ,x^{(0)}_\ell }\) is an independently random subspace of dimension \(n-1\).
-
Finally, \(\mathcal {B}\) sets \([\mathbf {M} _{i,b}]=[\mathbf {R} _{i,b}]\) for all i, b, and hands \(\mathcal {A}\) the resulting verification key \( vk \).
Furthermore, \(\mathcal {B}\) implements oracle \(\mathcal {O} \) as follows: if \(\mathcal {A}\) queries \(\mathcal {O} \) with some \(X=(x_i)_{i=1}^{k}\in \{0,1\}^{k}\), then \(\mathcal {B}\) can produce the (uniquely determined) image and proof from the values
$$\begin{aligned} \left[ \mathbf {v} _i^\top \right] =\Big [\mathbf {u} ^\top \cdot \prod _{j=1}^{i}\mathbf {M} _{j,x_j}\Big ] \quad \text {for }1\le i\le k. \end{aligned}$$(10)Note that \(\mathcal {B}\) can compute all \([\mathbf {v} _i^\top ]\) efficiently, since it knows all factors in (10) over \(\mathbb {Z} _p\), except for (at most) one factor \([\mathbf {M} _{\ell ,x_\ell }]\).
Finally, \(\mathcal {B}\) outputs whatever \(\mathcal {A}\) outputs.
We now analyze this simulation. First, note that \( vk \) and \(\mathcal {O}\) are simulated exactly as in both \(\text {Game 1}.(\ell -1)\) and \(\text {Game 1}.\ell \), except for the definition of \([\mathbf {R} _{\ell ,x^{(0)}_\ell }]\) and \([\mathbf {R} _{\ell ,1-x^{(0)}_\ell }]\). Now consider how these matrices are set up depending on the rank of \(\mathcal {B}\) ’s challenge \(\mathbf {A} \).
-
If \(\mathbf {A} \) is of rank \(n\), then \(\mathbf {R} _{\ell ,x^{(0)}_\ell }\) and \(\mathbf {R} _{\ell ,1-x^{(0)}_\ell }\) are (statistically close to) independently and uniformly random invertible matrices. Indeed, then each row \({\mathbf {c} '_i}^\top \) of \(\mathbf {C} '\) is independently and uniformly random: \(\mathbf {c} '_1,\dots ,\mathbf {c} '_{n-1}\) are independently random elements of the image of \(\mathbf {A}\) (which is \(\mathbbm {Z} _p^n\)), and \(\mathbf {c} '_{n}\) is independently random by construction. Hence, \(\mathbf {C} '\) is independently and uniformly random (and thus invertible, except with probability n / p). On the other hand, \(\mathbf {R} _{\ell ,x^{(0)}_\ell }=\mathbf {A} \) is uniformly random and invertible by assumption.
-
If \(\mathbf {A} \) is of rank \(n-1\), then \(\mathbf {R} _{\ell ,x^{(0)}_\ell }\) and \(\mathbf {R} _{\ell ,1-x^{(0)}_\ell }\) are (statistically close to) distributed as in (3). Indeed, then the rank of \(\mathbf {R} _{\ell ,1-x^{(0)}_\ell }=\mathbf {A} \) is \(n-1\), and the rank of \(\mathbf {R} _{\ell ,x^{(0)}_\ell }=\mathbf {C} ^{-1}\cdot \mathbf {C} '\) is \(n\), except with probability at most \(n/p\). Moreover, if we write \(\mathfrak {U}_{\ell }^\top :=(\mathbbm {Z} _p^{n})^\top \cdot \mathbf {A} \), then by construction \((\mathbbm {Z} _p^{n})^\top \cdot \mathbf {R} _{\ell ,1-x^{(0)}_\ell }=\mathfrak {U}_{\ell }^\top \), but also
$$\begin{aligned} (\mathbbm {Z} _p^{n})^\top \cdot \mathbf {R} _{\ell ,x^{(0)}_\ell } = \mathfrak {W}^\top \cdot \mathbf {C} ' = \mathfrak {U}_{\ell }^\top , \end{aligned}$$where \(\mathfrak {W}\) is the vector space spanned by the first \(n-1\) unit vectors. Furthermore, \(\mathbf {R} _{\ell ,x^{(0)}_\ell }\) and \(\mathbf {R} _{\ell ,1-x^{(0)}_\ell }\) are distributed uniformly with these properties.
Hence, summarizing, up to a statistical defect of at most 2 / p, \(\mathcal {B}\) simulates \(\text {Game 1}.(\ell -1)\) if \(\mathbf {A}\) is of rank \(n\), and \(\text {Game 1}.\ell \) if \(\mathbf {A}\) is of rank \(n-1\). This shows (8).
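The rank transfer at the heart of this reduction can also be checked numerically. The sketch below (illustrative Python with small, hypothetical parameters; the real \(\mathcal {B}\) sees \([\mathbf {A} ]\) and \([\mathbf {C} ']\) only in the exponent) builds \(\mathbf {R} =\mathbf {C} ^{-1}\cdot \mathbf {C} '\) and confirms that the basis rows \(\mathbf {c} _i\) of \(\mathfrak {U}_{\ell -1}\) are mapped into the row space of \(\mathbf {A}\).

```python
# Toy check of the embedding R = C^{-1} * C' from the proof of Lemma 4.
import random

p, n = 10007, 4
random.seed(2)

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % p
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M):
    A, r = [row[:] for row in M], 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)
        for i in range(r + 1, len(A)):
            f = A[i][c] * inv % p
            A[i] = [(A[i][j] - f * A[r][j]) % p for j in range(len(A[0]))]
        r += 1
    return r

def rand_mat(rows, cols):
    return [[random.randrange(p) for _ in range(cols)] for _ in range(rows)]

def inverse(M):
    """Gauss-Jordan inversion mod p (M must be invertible)."""
    m = len(M)
    A = [M[i][:] + [int(i == j) for j in range(m)] for i in range(m)]
    for c in range(m):
        piv = next(i for i in range(c, m) if A[i][c])
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], p - 2, p)
        A[c] = [x * inv % p for x in A[c]]
        for i in range(m):
            if i != c:
                f = A[i][c]
                A[i] = [(A[i][j] - f * A[c][j]) % p for j in range(2 * m)]
    return [row[m:] for row in A]

def embed(A):
    """B's construction: the first n-1 rows of C' lie in A's row space."""
    while True:                       # sample an invertible basis matrix C
        C = rand_mat(n, n)
        if rank(C) == n:
            break
    Cp = matmul(rand_mat(n - 1, n), A)                   # rows in A's row space
    Cp.append([random.randrange(p) for _ in range(n)])   # fresh random last row
    return C, matmul(inverse(C), Cp)

A_low = matmul(rand_mat(n, n - 1), rand_mat(n - 1, n))   # challenge of rank <= n-1
C, R = embed(A_low)
```

Since \(\mathbf {C} \cdot \mathbf {R} =\mathbf {C} '\), the i-th basis row of \(\mathbf {C} \) is mapped exactly to the i-th row of \(\mathbf {C} '\), which for \(i\le n-1\) was sampled from the row space of \(\mathbf {A} \) by construction.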
4.3 Adaptive Security
The idea behind the adaptively secure construction is very similar to the selective case. Both the construction and the security proof are essentially identical, except that we apply an admissible hash function (AHF) \(\mathsf {AHF}:\{0,1\}^k\rightarrow \{0,1\}^{\ell _{\mathsf {AHF}}}\) (cf. Definition 7) to the inputs X of \(\mathsf {Eval}_\mathsf {VHF} \) and \(\mathsf {TrapEval}_\mathsf {VHF} \) before computing the matrix products. (We mention that suitable AHFs with \({\ell _{\mathsf {AHF}}}=O(k)\) exist [24, 35].) Correctness and unique provability follow immediately. In order to prove well-distributedness, we rely on the properties of the admissible hash function. By a slightly more careful, AHF-dependent embedding of low-rank matrices in the verification key, these properties ensure that, for any sequence of queries issued by the adversary, it holds with non-negligible probability that the vector \(\left[ \mathbf {v^{(0)}} \right] \) assigned to input \(X^{(0)}\) does not lie in the subspace generated by \((\mathbf {b_1}, \ldots , \mathbf {b_{n-1}})\), while all vectors \(\left[ \mathbf {v^{(i)}} \right] \) assigned to inputs \(X^{(i)}\) do, which then yields the required well-distributedness property.
Admissible Hash Functions. To obtain adaptive security, we rely on a semi-blackbox technique based on admissible hash functions (AHFs, [3, 9, 14, 24, 35]). In the following, we use the formalization of AHFs from [24]:
Definition 7
(AHF). For a function \(\mathsf {AHF}:\{0,1\}^{k} \rightarrow \{0,1\}^{{\ell _{\mathsf {AHF}}}}\) (for a polynomial \({\ell _{\mathsf {AHF}}}={\ell _{\mathsf {AHF}}}(k)\)) and \(K\in (\{0,1,\bot \})^{\ell _{\mathsf {AHF}}}\), define the function \(F_K:\{0,1\}^{k}\rightarrow \{0,1\}\) through
$$\begin{aligned} F_K(X)=0 \quad \Longleftrightarrow \quad \mathsf {AHF} (X)_i\ne K_i\ \text {for all}\ i\in \{1,\dots ,{\ell _{\mathsf {AHF}}}\}, \end{aligned}$$
where \(\mathsf {AHF} (X)_i\) denotes the i-th component of \(\mathsf {AHF} (X)\). We say that \(\mathsf {AHF}\) is q-admissible if there exists a PPT algorithm \(\mathsf {KGen}\) and a polynomial \(\mathsf {poly} (k)\), such that for all \(X^{(0)},\dots ,X^{(q)}\in \{0,1\}^k\) with \(X^{(0)}\not \in \{X^{(i)}\}\),
$$\begin{aligned} \Pr \left[ F_K(X^{(0)})=0\ \wedge \ F_K(X^{(1)})=\dots =F_K(X^{(q)})=1\right] \ge 1/\mathsf {poly} (k), \end{aligned}$$(11)
where the probability is over \(K\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {KGen} (1^k)\). We say that \(\mathsf {AHF}\) is an admissible hash function (AHF) if \(\mathsf {AHF}\) is q-admissible for all polynomials \(q=q(k)\).
There are efficient constructions of admissible hash functions [24, 35] with \({\ell _{\mathsf {AHF}}}=O(k)\) from error-correcting codes.
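The partitioning induced by an AHF key can be sketched as follows (illustrative Python; the concrete \(\mathsf {AHF}\) is abstracted away, and the convention shown is the one matching the matrix choices of \(\mathsf {TrapGen}_\mathsf {VHF} '\) below, where an agreement \(K_i=b\) selects a rank-\((n-1)\) matrix).

```python
# Partitioning induced by an AHF key K in {0,1,BOT}^ell: F(K, AHF(X)) = 0 iff
# no position of AHF(X) agrees with K. In the construction, an agreement at
# position i selects the rank-(n-1) matrix, forcing beta_n = 0.
import random

BOT = None  # stands for the "bottom" symbol in K

def F(K, hashed_x):
    return 1 if any(k == x for k, x in zip(K, hashed_x)) else 0

random.seed(3)
ell = 8
K = [BOT] * ell
positions = random.sample(range(ell), 2)   # a sparse key: two fixed positions
for i in positions:
    K[i] = random.randrange(2)

# A "target" input that dodges K everywhere gets F = 0 (it escapes the
# subspace), while any input agreeing with K somewhere gets F = 1.
X0 = [1 - K[i] if K[i] is not BOT else random.randrange(2) for i in range(ell)]
X1 = X0[:]
X1[positions[0]] = K[positions[0]]
```

The admissibility property then says that for a key \(K\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {KGen} (1^k)\), the "good" event — target at 0, all queries at 1 — occurs with noticeable probability.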
A Hashed Variant of \(\mathsf {VHF}\) . Fix an AHF \(\mathsf {AHF}:\{0,1\}^k\rightarrow \{0,1\}^{\ell _{\mathsf {AHF}}}\) and a corresponding \(\mathsf {KGen}\) algorithm. Essentially, we will hash preimages (using \(\mathsf {AHF}\)) before feeding them into \(\mathsf {VHF}\) to obtain a slight variant \(\mathsf {VHF} '\) of \(\mathsf {VHF}\) that we can prove adaptively secure. More specifically, let \(\mathsf {VHF} '\) be the verifiable hash function that is defined like \(\mathsf {VHF}\), except for the following differences:
-
\(\mathsf {Gen}_\mathsf {VHF} '(\mathsf {GrpGen})\) proceeds like \(\mathsf {Gen}_\mathsf {VHF} (\mathsf {GrpGen})\), but samples \(2{\ell _{\mathsf {AHF}}}\) (not \(2k\)) matrices \(\mathbf {M} _{i,b}\).
-
\(\mathsf {Eval}_\mathsf {VHF} '( ek ,X)\) (for \(X\in \{0,1\}^{k}\)), first computes \(X'=(x'_i)_{i=1}^{\ell _{\mathsf {AHF}}}=\mathsf {AHF} (X)\in \{0,1\}^{\ell _{\mathsf {AHF}}}\), and then outputs an image \([\mathbf {v} ]=[\mathbf {v} _{{\ell _{\mathsf {AHF}}}}]\) and a proof
$$\begin{aligned} \pi =([\mathbf {v} _1],\dots ,[\mathbf {v} _{{\ell _{\mathsf {AHF}}}-1}]) \end{aligned}$$where \(\mathbf {v} _i^\top =\mathbf {u} ^\top \cdot \prod _{j=1}^i\mathbf {M} _{j,x'_j}\).
-
\(\mathsf {Vfy}_\mathsf {VHF} '( vk ,[\mathbf {v} ],\pi ,X)\) computes \(X'=(x'_i)_{i=1}^{\ell _{\mathsf {AHF}}}=\mathsf {AHF} (X)\in \{0,1\}^{\ell _{\mathsf {AHF}}}\) and outputs 1 if and only if \(e ( [\mathbf {v} _i^\top ], [1] ) = e ( [\mathbf {v} _{i-1}^\top ],[\mathbf {M} _{i,x'_i}])\) holds for all i with \(1\le i\le {\ell _{\mathsf {AHF}}}\), where \([\mathbf {v} _0]:=[\mathbf {u} ]\) and \([\mathbf {v} _{{\ell _{\mathsf {AHF}}}}]:=[\mathbf {v} ]\).
Theorem 3
(Adaptive Programmability of \(\mathsf {VHF} '\) ). \(\mathsf {VHF} '\) is adaptively programmable in the sense of Definition 5.
The Trapdoor Algorithms. We proceed similarly to the selective case and start with a description of the algorithms \(\mathsf {TrapGen}_\mathsf {VHF} '\) and \(\mathsf {TrapEval}_\mathsf {VHF} '\).
-
\(\mathsf {TrapGen}_\mathsf {VHF} '(\varPi ,[\mathbf {B} ])\) proceeds like algorithm \(\mathsf {TrapGen}_\mathsf {VHF}\) from Sect. 4.2, except that
-
\(\mathsf {TrapGen}_\mathsf {VHF} '\) initializes \(K\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {KGen} (1^k)\) and includes K in \( td \).
-
\(\mathsf {TrapGen}_\mathsf {VHF} '\) chooses \({\ell _{\mathsf {AHF}}}+1\) (and not \(k+1\)) subspaces \(\mathfrak {U}_{i}\) (for \(0\le i\le {\ell _{\mathsf {AHF}}}\)). (The last subspace \(\mathfrak {U}_{{\ell _{\mathsf {AHF}}}}\) is chosen in a special way, exactly like \(\mathfrak {U}_{k}\) is chosen by \(\mathsf {TrapGen}_\mathsf {VHF}\).)
-
\(\mathsf {TrapGen}_\mathsf {VHF} '\) chooses \(2{\ell _{\mathsf {AHF}}}\) (and not \(2k\)) matrices \(\mathbf {R} _{i,b}\) (for \(1\le i\le {\ell _{\mathsf {AHF}}}\) and \(b\in \{0,1\}\)), as follows:
-
If \(K_i=b\), then \(\mathbf {R} _{i,b}\) is chosen uniformly of rank \(n-1\), subject to
$$\begin{aligned}\mathfrak {U}_{i-1}^\top \cdot \mathbf {R} _{i,b}=\mathfrak {U}_{i}^\top \end{aligned}$$ -
If \(K_i\ne b\), then \(\mathbf {R} _{i,b}\) is chosen uniformly of rank \(n\), subject to
$$\begin{aligned} \mathfrak {U}_{i-1}^\top \cdot \mathbf {R} _{i,b}=\mathfrak {U}_{i}^\top \end{aligned}$$
-
\(\mathsf {TrapEval}_\mathsf {VHF} '( td ,X)\) proceeds like algorithm \(\mathsf {TrapEval}_\mathsf {VHF}\) on input the preimage \(\mathsf {AHF} (X)\in \{0,1\}^{\ell _{\mathsf {AHF}}}\). Specifically, \(\mathsf {TrapEval}_\mathsf {VHF} '\) computes \([\mathbf {v} ]=[\mathbf {v} _{{\ell _{\mathsf {AHF}}}}]\), along with a corresponding proof \(\pi =[\mathbf {v} _1,\dots ,\mathbf {v} _{{\ell _{\mathsf {AHF}}}-1}]\) exactly like \(\mathsf {Eval}_\mathsf {VHF} '\). Finally, and analogously to \(\mathsf {TrapEval}_\mathsf {VHF}\), \(\mathsf {TrapEval}_\mathsf {VHF} '\) outputs \((\mathbf {\varvec{\upbeta }},\pi )\) for \(\mathbf {\varvec{\upbeta }} ^\top :=\mathbf {u} ^\top \cdot \prod _{j=1}^{{\ell _{\mathsf {AHF}}}}\mathbf {R} _{j,x'_j}\), where \((x'_i)_{i=1}^{\ell _{\mathsf {AHF}}}=\mathsf {AHF} (X)\).
Correctness and indistinguishability follow as for \(\mathsf {TrapGen}_\mathsf {VHF}\) and \(\mathsf {TrapEval}_\mathsf {VHF}\), so we state without proof:
Lemma 5
(Correctness of \(\mathsf {Trap}_\mathsf {VHF} '\) ). The trapdoor algorithms \(\mathsf {TrapGen}_\mathsf {VHF} '\) and \(\mathsf {TrapEval}_\mathsf {VHF} '\) above satisfy correctness in the sense of Definition 5.
Lemma 6
(Indistinguishability of \(\mathsf {Trap}_\mathsf {VHF} '\) ). If the \(n\)-rank assumption holds relative to \(\mathsf {GrpGen}\), then the above algorithms \(\mathsf {TrapGen}_\mathsf {VHF} '\) and \(\mathsf {TrapEval}_\mathsf {VHF} '\) satisfy the indistinguishability property from Definition 5. Specifically, for every adversary \(\mathcal {A}\), there exists an adversary \(\mathcal {B}\) (of roughly the same complexity) with
$$\begin{aligned} \mathsf {Adv}^{n\text {-rank}}_{\mathsf {GrpGen},\mathcal {B}}(k) \;\ge \; \frac{1}{{\ell _{\mathsf {AHF}}}}\cdot \mathsf {Adv}^{\mathsf {ad\text {-}ind}}_{\mathsf {VHF} ',\mathcal {A}}(k)-\frac{2}{p}. \end{aligned}$$
The (omitted) proof of Lemma 6 proceeds exactly like that of Lemma 3, only adapted to \(\mathsf {AHF}\)-hashed inputs. Note that the additional oracle \(\mathcal {O}_\mathsf {check} \) an adversary gets in the adaptive indistinguishability game can be readily implemented with the key K generated by \(\mathsf {TrapGen}_\mathsf {VHF} '\). (The argument from the proof of Lemma 3 does not rely on a secret \(X^{(0)}\), and so its straightforward adaptation could even expose the full key K to an adversary.)
Lemma 7
(Well-distributedness of \(\mathsf {Trap}_\mathsf {VHF} '\) ). The above trapdoor algorithms \(\mathsf {TrapGen}_\mathsf {VHF} '\) and \(\mathsf {TrapEval}_\mathsf {VHF} '\) have well-distributed outputs in the sense of Definition 5.
Proof
First, we make an observation. Fix a matrix \([\mathbf {B} ]\), and a corresponding keypair \(( td , vk )\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {TrapGen}_\mathsf {VHF} '(\varPi ,[\mathbf {B} ])\). Like (6), we can show that for all \(X'=(x'_i)_{i=1}^{{\ell _{\mathsf {AHF}}}}\), the corresponding vectors \(\mathbf {v} _i^\top \) computed during evaluation satisfy
$$\begin{aligned} \mathbf {u} ^\top \cdot \prod _{j=1}^{i}\mathbf {R} _{j,x'_j}\in \mathfrak {U}_{i}^\top \quad \Longleftrightarrow \quad x'_j=K_j\ \text {for some}\ j\le i. \end{aligned}$$
Hence, \(\mathbf {\varvec{\upbeta }} \in \mathfrak {U}_{{\ell _{\mathsf {AHF}}}}\) (and thus \(\beta _{n}=0\)) for the value \(\mathbf {\varvec{\upbeta }}\) that is computed by \(\mathsf {TrapEval}_\mathsf {VHF} '( td ,X)\) if and only if \(F_K(X)=1\). By property (11) of \(\mathsf {AHF}\), the lemma follows.
5 VRFs from Verifiable PVHFs
Let \((\mathsf {Gen}_\mathsf {VRF},\mathsf {Eval}_\mathsf {VRF},\mathsf {Vfy}_\mathsf {VRF})\) be a tuple of algorithms with the following syntax.
-
Algorithm \(( vk ,{ sk }) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {Gen}_\mathsf {VRF} (1^k)\) takes as input a security parameter \(k\) and outputs a key pair \(( vk ,{ sk })\). We say that \({ sk } \) is the secret key and \( vk \) is the verification key.
-
Algorithm \((Y,\pi ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {Eval}_\mathsf {VRF} ({ sk },X)\) takes as input secret key \({ sk } \) and \(X \in \{0,1\} ^k\), and outputs a function value \(Y \in \mathcal {Y} \), where \(\mathcal {Y} \) is a finite set, and a proof \(\pi \). We write \(V_{{ sk }}(X)\) to denote the function value Y computed by \(\mathsf {Eval}_\mathsf {VRF} \) on input \(({ sk },X)\).
-
Algorithm \(\mathsf {Vfy}_\mathsf {VRF} ( vk ,X,Y,\pi ) \in \{0,1\} \) takes as input verification key \( vk \), \(X \in \{0,1\} ^k\), \(Y \in \mathcal {Y} \), and proof \(\pi \), and outputs a bit.
Definition 8
We say that a tuple of algorithms \((\mathsf {Gen}_\mathsf {VRF},\mathsf {Eval}_\mathsf {VRF},\mathsf {Vfy}_\mathsf {VRF})\) is a verifiable random function (VRF), if all the following properties hold.
-
Correctness. Algorithms \(\mathsf {Gen}_\mathsf {VRF} \), \(\mathsf {Eval}_\mathsf {VRF} \), \(\mathsf {Vfy}_\mathsf {VRF} \) are polynomial-time algorithms, and for all \(( vk ,{ sk }) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {Gen}_\mathsf {VRF} (1^k)\) and all \(X \in \{0,1\} ^k\) the following holds: if \((Y,\pi ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {Eval}_\mathsf {VRF} ({ sk },X)\), then we have \(\mathsf {Vfy}_\mathsf {VRF} ( vk ,X,Y,\pi ) = 1\).
-
Unique Provability. For all strings \(( vk ,{ sk })\) (which are not necessarily generated by \(\mathsf {Gen}_\mathsf {VRF}\)) and all \(X \in \{0,1\} ^k\), there does not exist any \((Y_0,\pi _0,Y_1,\pi _1)\) such that \(Y_0 \ne Y_1\) and \(\mathsf {Vfy}_\mathsf {VRF} ( vk ,X,Y_0,\pi _0) = \mathsf {Vfy}_\mathsf {VRF} ( vk ,X,Y_1,\pi _1) = 1\).
-
Pseudorandomness. Let \(\mathsf {Exp}_{\mathcal {B}}^{\mathsf {VRF}} \) be the security experiment defined in Fig. 2, played with adversary \(\mathcal {B}\). We require that the advantage function
$$\begin{aligned} \mathsf {Adv}_{\mathcal {B}}^{\mathsf {VRF}} (k) := 2 \cdot \Pr \left[ \mathsf {Exp}_{\mathcal {B}}^{\mathsf {VRF}} (1^k)=1 \right] - 1 \end{aligned}$$is negligible for all PPT \(\mathcal {B}\) that never query \(\mathbf {Evaluate}\) on input \(X^*\).
5.1 A Generic Construction from Verifiable PVHFs
Let \((\mathsf {Gen}_\mathsf {VHF},\mathsf {Eval}_\mathsf {VHF},\mathsf {Vfy}_\mathsf {VHF})\) be a vector hash function according to Definition 3, and let \((\mathsf {GrpGen},\mathsf {GrpVfy})\) be a certified bilinear group generator according to Definitions 1 and 2. Let \((\mathsf {Gen}_\mathsf {VRF},\mathsf {Eval}_\mathsf {VRF},\mathsf {Vfy}_\mathsf {VRF})\) be the following algorithms.
-
Key Generation. \(\mathsf {Gen}_\mathsf {VRF} (1^k)\) runs \(\varPi \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {GrpGen} (1^k)\) to generate bilinear group parameters, and then \(( ek , vk ')\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {Gen}_\mathsf {VHF} (\varPi )\). Then it chooses a random vector \(\mathbf {w} \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}(\mathbb {Z} _p^*)^n\), defines \({ sk }:= (\varPi , ek , \mathbf {w})\) and \( vk := (\varPi , vk ',\left[ \mathbf {w} \right] )\), and outputs \(( vk ,{ sk })\).
-
Function Evaluation. On input \({ sk }:= (\varPi , ek ,\mathbf {w})\) with \(\mathbf {w} = (w_1,\ldots ,w_n)^\top \in (\mathbb {Z} _p^*)^n\) and \(X \in \{0,1\} ^k\), algorithm \(\mathsf {Eval}_\mathsf {VRF} ({ sk },X)\) first runs
$$\begin{aligned} (\left[ \mathbf {v} \right] ,\pi ') \leftarrow \mathsf {Eval}_\mathsf {VHF} ( ek ,X). \end{aligned}$$Then it computes the function value Y and an additional proof \(\left[ \mathbf {z} \right] \in \mathbb {G} ^n\) as
$$\begin{aligned} Y := \prod _{i=1}^n \left[ \frac{v_i}{w_i}\right] \qquad \text {and}\qquad \left[ \mathbf {z} \right] := \left[ (z_1,\ldots ,z_n)^\top \right] := \left[ \left( \frac{v_1}{w_1},\ldots ,\frac{v_n}{w_n}\right) ^\top \right] \end{aligned}$$Finally, it sets \(\pi := (\left[ \mathbf {v} \right] ,\pi ',\left[ \mathbf {z} \right] )\) and outputs \((Y,\pi )\).
-
Proof Verification. On input \(( vk ,X,Y,\pi )\), \(\mathsf {Vfy}_\mathsf {VRF}\) outputs 0 if any of the following properties is not satisfied.
-
1.
\( vk \) has the form \( vk = (\varPi , vk ',\left[ \mathbf {w} \right] )\), such that \(\left[ \mathbf {w} \right] = (\left[ w_1\right] ,\ldots ,\left[ w_n\right] )\) and the bilinear group parameters and group elements contained in \( vk \) are valid. That is, it holds that \(\mathsf {GrpVfy} (\varPi ) = 1\) and \(\mathsf {GrpVfy} (\varPi ,\left[ w_i\right] ) = 1\) for all \(i \in \{1,\ldots ,n\}\).
-
2.
\(X \in \{0,1\} ^k\).
-
3.
\(\pi \) has the form \(\pi = (\left[ \mathbf {v} \right] , \pi ',\left[ \mathbf {z} \right] )\) with \(\mathsf {Vfy}_\mathsf {VHF} ( vk ',\left[ \mathbf {v} \right] ,\pi ',X) = 1\) and both vectors \(\left[ \mathbf {v} \right] \) and \(\left[ \mathbf {z} \right] \) contain only validly-encoded group elements, which can be checked by running \(\mathsf {GrpVfy} \).
-
4.
It holds that \(\left[ z_i\right] = \left[ v_i/w_i\right] \) for all \(i \in \{1,\ldots ,n\}\) and \(Y = \left[ \sum _{i=1}^n v_i/w_i\right] \). This can be checked by testing
$$\begin{aligned} e\left( \left[ z_i\right] ,\left[ w_i\right] \right) \mathop {=}\limits ^{{\scriptscriptstyle ?}}e(\left[ v_i\right] ,\left[ 1\right] ) \quad \forall i \in \{1,\ldots ,n\} \qquad \text {and}\qquad Y \mathop {=}\limits ^{{\scriptscriptstyle ?}}\prod _{i=1}^n \left[ z_i\right] \end{aligned}$$
If all the above checks are passed, then \(\mathsf {Vfy}_\mathsf {VRF}\) outputs 1.
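In the same toy model as before (illustrative Python; a pairing \(e([a],[b])=[ab]\) is modeled as multiplication of the visible exponents, so this only illustrates which equations step 4 tests):

```python
# What Vfy_VRF's step 4 tests, in a dlog-transparent toy model:
# e([z_i],[w_i]) = e([v_i],[1]) becomes z_i * w_i = v_i (mod p), and
# Y = prod_i [z_i] becomes Y = sum_i z_i (mod p).
p = 10007

def pairing(a, b):
    return a * b % p          # e([a],[b]) -> target-group element [ab]

def vfy_step4(v, w, z, Y):
    checks = all(pairing(z[i], w[i]) == pairing(v[i], 1) for i in range(len(v)))
    return checks and Y == sum(z) % p

w = [3, 7, 11]
v = [100, 200, 300]
z = [vi * pow(wi, p - 2, p) % p for vi, wi in zip(v, w)]   # honest z_i = v_i/w_i
Y = sum(z) % p
```

Because each group element has a unique encoding, a tampered \([z_i]\) (or a wrong Y) makes at least one of these equations fail, which is exactly what the uniqueness argument below exploits.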
5.2 Correctness, Unique Provability, and Pseudorandomness
Theorem 4
(Correctness and Unique Provability). The triplet of algorithms \((\mathsf {Gen}_\mathsf {VRF},\mathsf {Eval}_\mathsf {VRF},\mathsf {Vfy}_\mathsf {VRF})\) forms a correct verifiable random function, and it satisfies the unique provability requirement in the sense of Definition 8.
Proof
Correctness is straightforward to verify, therefore we turn directly to unique provability. We have to show that there does not exist any \((Y_0,\pi _0,Y_1,\pi _1)\) such that \(Y_0 \ne Y_1\) and \(\mathsf {Vfy}_\mathsf {VRF} ( vk ,X,Y_0,\pi _0) = \mathsf {Vfy}_\mathsf {VRF} ( vk ,X,Y_1,\pi _1) = 1\). Let us first make the following observations.
-
First of all, note that \(\mathsf {Vfy}_\mathsf {VRF} \) on input \(((\varPi , vk ',\left[ \mathbf {w} \right] ),X,Y,(\left[ \mathbf {v} \right] ,\pi ',\left[ \mathbf {z} \right] ))\) checks whether \(\varPi \) contains valid certified bilinear group parameters by running \(\mathsf {GrpVfy} (\varPi )\). Moreover, it checks whether all group elements contained in \(\left[ \mathbf {w} \right] \), \(\left[ \mathbf {v} \right] \), and \(\left[ \mathbf {z} \right] \) are valid group elements with respect to \(\varPi \). Thus, we may assume in the sequel that all these group elements are valid and have a unique encoding. In particular, \(\left[ \mathbf {w} \right] \) is uniquely determined by \( vk \).
-
Furthermore, it is checked that \(X \in \{0,1\} ^k\). The unique provability property of the vector hash function \((\mathsf {Gen}_\mathsf {VHF},\mathsf {Eval}_\mathsf {VHF},\mathsf {Vfy}_\mathsf {VHF})\) guarantees that for all strings \( vk ' \in \{0,1\} ^*\) and all \(X \in \{0,1\} ^k\) there does not exist any tuple \((\left[ \mathbf {v} _0\right] ,\pi _0,\left[ \mathbf {v} _1\right] ,\pi _1)\) with \(\left[ \mathbf {v} _0\right] \ne \left[ \mathbf {v} _1\right] \) and \(\left[ \mathbf {v} _0\right] , \left[ \mathbf {v} _1\right] \in \mathbb {G} ^n\) such that
$$\begin{aligned} \mathsf {Vfy}_\mathsf {VHF} ( vk ',\left[ \mathbf {v} _0\right] ,\pi _0,X) = \mathsf {Vfy}_\mathsf {VHF} ( vk ',\left[ \mathbf {v} _1\right] ,\pi _1,X) = 1 \end{aligned}$$Thus, we may henceforth use that there is only one vector of group elements \(\left[ \mathbf {v} \right] \) which passes the test \(\mathsf {Vfy}_\mathsf {VHF} ( vk ',\left[ \mathbf {v} \right] ,\pi ',X) = 1\) performed by \(\mathsf {Vfy}_\mathsf {VRF} \). Hence, \(\left[ \mathbf {v} \right] \) is uniquely determined by X and the values \(\varPi \) and \( vk '\) contained in \( vk \).
- Finally, note that \(\mathsf {Vfy}_\mathsf {VRF} \) tests whether \(\left[ z_i\right] = \left[ v_i/w_i\right] \) holds. Because the bilinear group is certified, each group element has a unique encoding and the bilinear map is non-degenerate; hence for each \(i \in \{1,\ldots ,n\}\) there exists only one group element encoding \(\left[ z_i\right] \) such that the equality \(\left[ z_i\right] = \left[ v_i/w_i\right] \) holds.
Therefore the value \(Y = \prod _{i=1}^n \left[ z_i\right] \) is uniquely determined by \(\left[ \mathbf {v} \right] \) and \(\left[ \mathbf {w} \right] \), which in turn are uniquely determined by X and the verification key \( vk \). This proves unique provability. \(\square \)
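To make the uniqueness argument concrete, here is a toy Python sketch working directly with exponents modulo a small prime (the names and the modulus are illustrative, not part of the construction): once \(\left[ \mathbf {v} \right] \) and \(\left[ \mathbf {w} \right] \) are fixed, each \(z_i = v_i/w_i\) is the unique field element satisfying \(z_i \cdot w_i = v_i\), and hence Y is pinned down as well.

```python
# Toy exponent-level sketch (illustrative names and modulus): with v and w
# fixed, each z_i = v_i / w_i is the unique field element with z_i * w_i = v_i,
# so Y = prod_i [z_i] = [sum_i z_i] is uniquely determined as well.
p = 101                      # stand-in for the (large) group order
w = [13, 5, 42]              # exponents of the unique [w] contained in vk
v = [77, 9, 30]              # exponents of the unique [v] determined by vk and X
z = [(vi * pow(wi, -1, p)) % p for vi, wi in zip(v, w)]
assert all((zi * wi) % p == vi for zi, wi, vi in zip(z, w, v))  # z_i is unique
Y = sum(z) % p               # exponent of the unique function value Y
```

A real verifier never sees these exponents; it checks \(\left[ z_i\right] = \left[ v_i/w_i\right] \) via the bilinear map, but the uniqueness of each \(z_i\) is exactly the field-level fact asserted above.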
Assumption 9
The \((n-1)\) -linear assumption states that \(\left[ \mathbf {c},\mathbf {d},\sum _{i=1}^{n}d_i/c_i\right] \mathop {\approx }\limits ^{\mathrm {c}} [\mathbf {c},\mathbf {d},r],\) where \(\mathbf {c} =(c_1,\dots ,c_{n})^\top \in (\mathbbm {Z} _p^*)^{n}\), \(\mathbf {d} =(d_1,\dots ,d_{n})^\top \in \mathbbm {Z} _p^{n}\), and \(r\in \mathbbm {Z} _p\) are uniformly random. That is, we require that
$$\begin{aligned} \left| \Pr \left[ \mathcal {A} \left( \left[ \mathbf {c},\mathbf {d},\textstyle \sum _{i=1}^{n}d_i/c_i\right] \right) =1\right] - \Pr \left[ \mathcal {A} \left( \left[ \mathbf {c},\mathbf {d},r\right] \right) =1\right] \right| \end{aligned}$$
is negligible for every PPT adversary \(\mathcal {A}\).
We remark that the above formulation is an equivalent formulation of the standard \((n-1)\)-linear assumption, cf. [21, Page 9], for instance.
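The two distributions in Assumption 9 are easy to state algorithmically. The following Python sketch samples them at the level of exponents modulo a toy prime (the function name and the parameters `p`, `n` are illustrative; a real instantiation would output group elements \(\left[ \mathbf {c} \right] , \left[ \mathbf {d} \right] , \left[ t\right] \) over a cryptographically large p):

```python
import secrets

# Toy sampler for the two sides of the (n-1)-linear assumption, working with
# exponents modulo a small prime p (illustrative parameters, not a real group).
p = 101
n = 4  # corresponds to the (n-1)-linear assumption with n - 1 = 3

def sample_challenge(real: bool):
    """Return (c, d, t) with t = sum_i d_i/c_i if real, else t uniform."""
    c = [1 + secrets.randbelow(p - 1) for _ in range(n)]  # c_i in Z_p^*
    d = [secrets.randbelow(p) for _ in range(n)]          # d_i in Z_p
    if real:
        t = sum(di * pow(ci, -1, p) for ci, di in zip(c, d)) % p
    else:
        t = secrets.randbelow(p)
    return c, d, t
```

The assumption asserts precisely that no efficient adversary, given only the corresponding group elements, can tell `sample_challenge(True)` from `sample_challenge(False)` with non-negligible advantage.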
Theorem 5
(Pseudorandomness). If \((\mathsf {Gen}_\mathsf {VHF},\mathsf {Eval}_\mathsf {VHF},\mathsf {Vfy}_\mathsf {VHF})\) is an adaptively programmable VHF in the sense of Definition 3 and the \((n-1)\)-linear assumption holds relative to \(\mathsf {GrpGen}\), then the algorithms \((\mathsf {Gen}_\mathsf {VRF},\mathsf {Eval}_\mathsf {VRF},\mathsf {Vfy}_\mathsf {VRF})\) form a verifiable random function which satisfies the pseudorandomness requirement in the sense of Definition 8.
Proof Sketch. The proof is based on a reduction to the indistinguishability and well-distributedness of the programmable vector hash function. The well-distributedness provides the leverage to embed the given instance of the \((n-1)\)-linear assumption in the view of the adversary, following the approach sketched already in the introduction. Given the PVHF as a powerful building block, the main remaining difficulty of the proof lies in dealing with the fact that the “partitioning” proof technique provided by PVHFs is incompatible with “decisional” complexity assumptions. This is a well-known difficulty, which has appeared in many previous works. It stems from the fact that different sequences of queries of the VRF-adversary may lead to different abort probabilities in the security proof. We overcome this issue by employing the standard artificial abort technique [44], which has also been used to prove security of Waters' IBE scheme [44] and the VRF of Hohenberger and Waters [28], for example.
Proof
Let \(\mathcal {A}\) be an adversary in the VRF security experiment from Definition 8. We will construct an adversary \(\mathcal {B}\) against the \((n-1)\)-linear assumption, which simulates the VRF pseudorandomness security experiment for \(\mathcal {A}\). However, before we can construct this adversary, we have to make some changes to the security experiment. Consider the following sequence of games, where we let \(\mathsf {Exp}_{\mathcal {A}}^{i} (1^k)\) denote the experiment executed in Game i, and we write \(\mathsf {Adv}_{\mathcal {A}}^{i} (k) := \Pr \left[ \mathsf {Exp}_{\mathcal {A}}^{i} (1^k)=1\right] \) to denote the advantage of \(\mathcal {A}\) in Game i.
Game 0. This is the original VRF security experiment, executed with algorithms \((\mathsf {Gen}_\mathsf {VRF},\mathsf {Eval}_\mathsf {VRF},\mathsf {Vfy}_\mathsf {VRF})\) as constructed above. Clearly, we have
$$\begin{aligned} \mathsf {Adv}_{\mathcal {A}}^{0} (k) = \mathsf {Adv}_{\mathcal {A}}^{\mathsf {VRF}} (k). \end{aligned}$$
In the sequel we write \( vk '_0\) to denote the VHF-key generated by \(( vk '_0, ek )\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {Gen}_\mathsf {VHF} (\varPi )\) in the experiment.
Game 1. This game proceeds exactly as before, except that it additionally samples a uniformly random invertible matrix \(\mathbf {B} \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathrm {GL} _n(\mathbbm {Z} _p)\) and generates an additional key for the vector hash function as \(( vk '_1, td ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {TrapGen}_\mathsf {VHF} (\varPi ,\left[ \mathbf {B} \right] )\), which is not given to the adversary. That is, the adversary in Game 1 receives as input a VRF verification key \( vk = (\varPi , vk _0',\left[ \mathbf {w} \right] )\), where \( vk _0'\) is generated by \(\mathsf {Gen}_\mathsf {VHF} \), exactly as in Game 0.
Whenever \(\mathcal {A}\) issues an \(\mathbf{Evaluate} (X^{(i)})\)-query on some input \(X^{(i)}\), the experiment proceeds as in Game 0, and additionally computes \(((\beta _1,\ldots ,\beta _n),\pi ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {TrapEval}_\mathsf {VHF} ( td ,X^{(i)})\). If \(\beta _n \ne 0\), then the experiment aborts and outputs a random bit. Moreover, when \(\mathcal {A}\) issues a \(\mathbf{Challenge} (X^{(0)})\)-query, then the experiment computes \(((\beta _1,\ldots ,\beta _n),\pi ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {TrapEval}_\mathsf {VHF} ( td ,X^{(0)})\). If \(\beta _n = 0\), then the experiment aborts and outputs a random bit.
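The abort rule of Game 1 is a partitioning: the experiment survives exactly when the trapdoor maps every \(\mathbf{Evaluate}\)-query to \(\beta _n = 0\) and the challenge to \(\beta _n \ne 0\). A minimal sketch of this predicate (function and argument names illustrative):

```python
# Minimal sketch of Game 1's partitioning rule: continue only if beta_n != 0
# on the Challenge-query and beta_n = 0 on every Evaluate-query.
def continues(beta_n_challenge: int, beta_n_queries: list) -> bool:
    return beta_n_challenge != 0 and all(b == 0 for b in beta_n_queries)

assert continues(3, [0, 0, 0])       # good partition: no abort
assert not continues(0, [0, 0])      # challenge falls on the wrong side
assert not continues(5, [0, 2, 0])   # some Evaluate-query triggers an abort
```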
The well-distributedness of the PVHF guarantees that there is a polynomial \(\mathsf {poly}\) such that for all possible queries \(X^{(0)}, X^{(1)}, \ldots , X^{(q)}\) the probability that the experiment is not aborted is at least \(\lambda := 1/\mathsf {poly} (k)\), so that \(\lambda \) is a non-negligible lower bound on the probability of not aborting.
Artificial Abort. Note that the probability that the experiment aborts depends on the particular sequence of queries issued by \(\mathcal {A}\). This is problematic, because different sequences of queries may have different abort probabilities (cf. Appendix A). Therefore the experiment in Game 1 performs an additional artificial abort step, which ensures that the experiment is aborted always with (almost) the same probability \(1-\lambda \), independent of the particular sequence of queries issued by \(\mathcal {A}\). To this end, the experiment proceeds as follows.
After \(\mathcal {A}\) terminates and outputs a bit B, the experiment estimates the probability \(\eta (\mathbf {X})\) of not aborting for the sequence of queries \(\mathbf {X} := (X^{(0)}, \ldots , X^{(q)})\) issued by \(\mathcal {A}\). To this end, the experiment:
1. Computes an estimate \(\eta '\) of \(\eta (\mathbf {X})\), by R-times repeatedly sampling trapdoors \(( vk _j', td _j) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {TrapGen}_\mathsf {VHF} (\varPi ,\left[ \mathbf {B} \right] )\) and checking whether the abort condition \(\beta _n^{(0)} = 0\) or \(\beta _n^{(i)} \ne 0\) for some \(i \ge 1\) occurs, where
$$\begin{aligned} ((\beta _1^{(i)},\ldots ,\beta _{n}^{(i)}),\pi ) \leftarrow \mathsf {TrapEval}_\mathsf {VHF} ( td _j,X^{(i)}) \quad \text {for}\quad i \in \{0,\ldots ,q\} \end{aligned}$$for sufficiently large R; the estimate \(\eta '\) is the fraction of trials in which no abort occurs. Here \(\epsilon \) is defined such that \(2\cdot \epsilon \) is a lower bound on the advantage of \(\mathcal {A}\) in the original security experiment.
2. If \(\eta ' \ge \lambda \), then the experiment aborts artificially with probability \((\eta '-\lambda )/\eta '\), and outputs a random bit.
Note that if \(\eta '\) were exact, that is, \(\eta ' = \eta (\mathbf {X})\), then the total probability of not aborting would always be \(\eta (\mathbf {X}) \cdot (1-(\eta '-\lambda )/\eta ') = \lambda \), independent of the particular sequence of queries issued by \(\mathcal {A}\). In this case we would have \(\mathsf {Adv}_{\mathcal {A}}^{1} (k) = \lambda \cdot \mathsf {Adv}_{\mathcal {A}}^{0} (k)\). However, the estimate \(\eta '\) of \(\eta (\mathbf {X})\) is not necessarily exact. By applying the standard analysis technique from [44] (see also [17, 28]), one can show that setting \(R := \mathbf {O} (\epsilon ^{-2} \ln (1/\epsilon )\lambda ^{-1}\ln (1/\lambda ))\) is sufficient to ensure that \(\mathsf {Adv}_{\mathcal {A}}^{1} (k)\) remains within a constant factor of \(\lambda \cdot \mathsf {Adv}_{\mathcal {A}}^{0} (k)\), and in particular non-negligible whenever \(\mathsf {Adv}_{\mathcal {A}}^{0} (k)\) is.
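The estimate-and-abort logic can be sketched as follows (a simplified model: the callable `resample_beta_n` stands in for re-running \(\mathsf {TrapGen}_\mathsf {VHF} \)/\(\mathsf {TrapEval}_\mathsf {VHF} \) on the fixed query sequence, and all names are illustrative):

```python
import random

# Simplified model of the artificial abort: eta' estimates the probability of
# NOT aborting over fresh trapdoors; if eta' >= lambda, abort artificially
# with probability (eta' - lambda)/eta', flattening the non-abort probability
# to (roughly) lambda for every query sequence.
def estimate_eta(resample_beta_n, R: int) -> float:
    """Fraction of R fresh trapdoors under which no abort occurs."""
    good = 0
    for _ in range(R):
        beta_n = resample_beta_n()  # beta_n[0]: challenge, beta_n[1:]: queries
        if beta_n[0] != 0 and all(b == 0 for b in beta_n[1:]):
            good += 1
    return good / R

def artificial_abort(eta_est: float, lam: float, rng=random.random) -> bool:
    """Return True if the experiment should abort artificially."""
    if eta_est >= lam and eta_est > 0:
        return rng() < (eta_est - lam) / eta_est
    return False
```

With an exact estimate, the combined probability of surviving both the natural and the artificial abort is \(\eta \cdot (1 - (\eta - \lambda )/\eta ) = \lambda \), which is the flattening property used in the analysis above.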
Game 2. The experiment now provides the adversary with the trapdoor VHF verification key \( vk '_1\), by including it in the VRF verification key \( vk = (\varPi , vk _1',\left[ \mathbf {w} \right] )\) in place of \( vk '_0\). Moreover, the experiment now evaluates the VHF on inputs X by running \((\mathbf {\varvec{\upbeta }},\pi ) \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathsf {TrapEval}_\mathsf {VHF} ( td ,X)\) and then computing \(\left[ \mathbf {v} \right] := \left[ \mathbf {B} \right] \cdot \mathbf {\varvec{\upbeta }} \). The rest of the experiment proceeds exactly as before.
We claim that any adversary \(\mathcal {A}\) distinguishing Game 2 from Game 1 implies an adversary \(\mathcal {B}\) breaking the indistinguishability of the VHF according to Definition 5. Adversary \(\mathcal {B} ^{\mathcal {O} _b,\mathcal {O}_\mathsf {check}}( vk _b')\) receives as input a verification key \( vk _b'\), which is either generated by \(\mathsf {Gen}_\mathsf {VHF} (\varPi )\) or \(\mathsf {TrapGen}_\mathsf {VHF} (\varPi ,\left[ \mathbf {B} \right] )\) for a uniformly random invertible matrix \(\mathbf {B} \). It simulates the security experiment from Game 2 for \(\mathcal {A}\) as follows.
- The given VHF verification key \( vk _b'\) is embedded in the VRF verification key \( vk = (\varPi , vk _b',\left[ \mathbf {w} \right] )\), where b is the random bit chosen by the indistinguishability security experiment played by \(\mathcal {B}\). All other values are computed exactly as before.
- In order to evaluate the VHF on input X, \(\mathcal {B}\) is able to query its oracle \(\mathcal {O} _b\), which either computes and returns \((\left[ \mathbf {v} \right] ,\pi ) \leftarrow \mathsf {Eval}_\mathsf {VHF} ( ek ,X)\) (in case \(b=0\)), or it computes \((\mathbf {\varvec{\upbeta }},\pi ) \leftarrow \mathsf {TrapEval}_\mathsf {VHF} ( td ,X)\) and returns \(\left[ \mathbf {v} \right] := \left[ \mathbf {B} \right] \cdot \mathbf {\varvec{\upbeta }} \) (in case \(b=1\)).
- To test whether a given value X requires a (non-artificial) abort, \(\mathcal {B}\) queries \(\mathcal {O}_\mathsf {check} (X)\), which returns 1 if and only if \(((\beta _1,\ldots ,\beta _n),\pi )\leftarrow \mathsf {TrapEval}_\mathsf {VHF} ( td ,X)\) yields \(\beta _n\ne 0\).
- The artificial abort step is performed by \(\mathcal {B}\) exactly as in Game 1.
Note that if \(b=0\), then the view of \(\mathcal {A}\) is identical to Game 1, while if \(b=1\) then it is identical to Game 2. Thus, by the adaptive indistinguishability of the VHF, we have
$$\begin{aligned} \left| \mathsf {Adv}_{\mathcal {A}}^{2} (k) - \mathsf {Adv}_{\mathcal {A}}^{1} (k)\right| \le \mathsf {negl} (k) \end{aligned}$$
for some negligible function \(\mathsf {negl} (k)\).
Game 3. Finally, we have to make one last technical modification before we are able to describe our reduction to the \((n-1)\)-linear assumption. Game 3 proceeds exactly as before, except that matrix \(\left[ \mathbf {B} \right] \) has a slightly different distribution. In Game 2, \(\mathbf {B} \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathrm {GL} _n(\mathbbm {Z} _p)\) is chosen uniformly at random (and invertible). In Game 3, we instead choose matrix \(\mathbf {B} \) by sampling \(\mathbf {b_1}, \ldots , \mathbf {b_{n-1}} \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}(\mathbb {Z} _p^*)^n\) and \(\mathbf {b_n} \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathbb {Z} _p^n\), defining \(\mathbf {B} := (\mathbf {b_1}, \ldots , \mathbf {b_n})\), and then computing \(\left[ \mathbf {B} \right] \). Thus, we ensure that the first \(n-1\) column vectors do not have any component which equals the identity element. This is done to adjust the distribution of \(\left[ \mathbf {B} \right] \) to the distribution chosen by our reduction algorithm.
By applying the union bound, we have \(\left| \mathsf {Adv}_{\mathcal {A}}^{3} (k)-\mathsf {Adv}_{\mathcal {A}}^{2} (k)\right| \le n^2/p\). Since n is polynomially bounded and \(\log (p)\in \varOmega (k)\), we have
$$\begin{aligned} \left| \mathsf {Adv}_{\mathcal {A}}^{3} (k)-\mathsf {Adv}_{\mathcal {A}}^{2} (k)\right| \le \mathsf {negl} (k) \end{aligned}$$
for some negligible function \(\mathsf {negl} (k)\).
The Reduction to the \((n-1)\) -Linear Assumption. We now describe our actual reduction algorithm \(\mathcal {B}\). Adversary \(\mathcal {B}\) receives as input an \((n-1)\)-linear challenge \(\left[ \mathbf {c}, \mathbf {d}, t\right] \), where \(\mathbf {c} = (c_1,\ldots ,c_n)^\top \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}(\mathbb {Z} _p^*)^n\), \(\mathbf {d} \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathbb {Z} _p^n\), and either \(t = \sum _{i=1}^{n} d_i/c_i\) or \(t \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathbb {Z} _p\). It simulates the VRF security experiment exactly as in Game 3, with the following differences.
Initialization and Set-up of Parameters. Matrix \(\left[ \mathbf {B} \right] \) is computed as follows. First, \(\mathcal {B}\) chooses \(n(n-1)\) random integers \(\alpha _{i,j} \mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\mathbb {Z} _p^*\) for \(i \in \{1,\ldots ,n-1\}\) and \(j \in \{1,\ldots ,n\}\). Then it sets \(\left[ \mathbf {b_i} \right] := \left[ (\alpha _{i,1}c_1, \ldots , \alpha _{i,n}c_n)^\top \right] \) for \(i \in \{1,\ldots ,n-1\}\) and \(\left[ \mathbf {b_n} \right] := \left[ \mathbf {d} \right] \), and finally \(\left[ \mathbf {B} \right] := \left[ \mathbf {b_1},\ldots ,\mathbf {b_n} \right] \). Vector \(\left[ \mathbf {w} \right] \) is set to \(\left[ \mathbf {w} \right] := \left[ \mathbf {c} \right] \).
Note that matrix \(\left[ \mathbf {B} \right] \) and vector \(\left[ \mathbf {w} \right] \) are distributed exactly as in Game 3. Observe also that the first \(n-1\) column vectors of \(\left[ \mathbf {B} \right] \) depend on \(\mathbf {c} \), while the last vector is equal to \(\mathbf {d} \).
Answering Evaluate-Queries. Whenever \(\mathcal {A}\) issues an \(\mathbf{Evaluate}\)-query on input \(X^{(j)}\), then \(\mathcal {B}\) computes \(\mathbf {\varvec{\upbeta }} = (\beta _1,\ldots ,\beta _n)^\top \leftarrow \mathsf {TrapEval}_\mathsf {VHF} ( td ,X^{(j)})\). If \(\beta _n \ne 0\), then \(\mathcal {B}\) aborts and outputs a random bit. Otherwise it computes
$$\begin{aligned} \left[ \mathbf {v} \right] := \left[ \mathbf {B} \right] \cdot \mathbf {\varvec{\upbeta }} = \left[ (\gamma _1 c_1, \ldots , \gamma _n c_n)^\top \right] \end{aligned}$$
for integers \(\gamma _1, \ldots , \gamma _n\), which are efficiently computable from \(\mathbf {\varvec{\upbeta }} \) and the \(\alpha _{i,j}\)-values chosen by \(\mathcal {B}\) above. Here we use that \(\beta _n = 0\) holds for all \(\mathbf{Evaluate}\)-queries that do not cause an abort.
Next, \(\mathcal {B}\) computes the proof elements in \(\left[ \mathbf {z} \right] \) by setting \(\left[ z_i\right] := \left[ \gamma _i\right] \) for all \(i \in \{1,\ldots ,n\}\). Note that, due to our setup of \(\left[ \mathbf {w} \right] \), it holds that
$$\begin{aligned} \left[ z_i\right] = \left[ \gamma _i\right] = \left[ \gamma _i c_i / c_i\right] = \left[ v_i / w_i\right] , \end{aligned}$$
thus all proof elements can be computed correctly by \(\mathcal {B}\). Finally, \(\mathcal {B}\) sets
$$\begin{aligned} Y := \prod _{i=1}^{n} \left[ z_i\right] = \left[ \sum _{i=1}^{n} \gamma _i\right] , \end{aligned}$$
which yields the correct function value. Thus, all \(\mathbf{Evaluate} \)-queries can be answered by \(\mathcal {B}\) exactly as in Game 3.
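At the level of exponents, the setup and an \(\mathbf{Evaluate}\)-query can be checked mechanically. The following Python sketch (toy prime and illustrative names; a real reduction only sees the group elements \(\left[ \mathbf {c} \right] , \left[ \mathbf {d} \right] \), never the exponents) verifies that \(v_i = \gamma _i c_i\) whenever \(\beta _n = 0\), so the proof elements \(\left[ z_i\right] = \left[ \gamma _i\right] \) require no knowledge of \(\mathbf {c} \):

```python
import secrets

# Exponent-level sketch (toy prime, illustrative names) of the reduction's
# setup and one Evaluate-query: columns b_i = (alpha_{i,1}c_1,...,alpha_{i,n}c_n)
# for i < n and b_n = d give v_i = gamma_i * c_i when beta_n = 0, so the proof
# element z_i = v_i / w_i = gamma_i is computable without c.
p, n = 101, 4
c = [1 + secrets.randbelow(p - 1) for _ in range(n)]   # challenge c; w := c
d = [secrets.randbelow(p) for _ in range(n)]           # challenge d
alpha = [[1 + secrets.randbelow(p - 1) for _ in range(n)] for _ in range(n - 1)]

B_cols = [[(alpha[i][j] * c[j]) % p for j in range(n)] for i in range(n - 1)]
B_cols.append(d)                                       # last column is d

beta = [3, 0, 5, 0]                                    # illustrative; beta_n = 0
v = [sum(B_cols[j][i] * beta[j] for j in range(n)) % p for i in range(n)]
gamma = [sum(alpha[j][i] * beta[j] for j in range(n - 1)) % p for i in range(n)]
assert all(v[i] == (gamma[i] * c[i]) % p for i in range(n))  # v_i = gamma_i * c_i
Y = sum(gamma) % p                                     # exponent of Y = [sum gamma_i]
```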
Answering the Challenge-Query. When \(\mathcal {A}\) issues a \(\mathbf{Challenge}\)-query on input \(X^{(0)}\), then \(\mathcal {B}\) computes \(\mathbf {\varvec{\upbeta }} = (\beta _1,\ldots ,\beta _n)^\top \leftarrow \mathsf {TrapEval}_\mathsf {VHF} ( td ,X^{(0)})\). If \(\beta _n = 0\), then \(\mathcal {B}\) aborts and outputs a random bit. Otherwise it again computes the \(\gamma _i\)-values in
$$\begin{aligned} \left[ \mathbf {v} \right] := \left[ \mathbf {B} \right] \cdot \mathbf {\varvec{\upbeta }} = \left[ (\gamma _1 c_1 + d_1 \beta _n, \ldots , \gamma _n c_n + d_n \beta _n)^\top \right] \end{aligned}$$
Writing \(v_i\) and \(d_i\) to denote the i-th component of \(\mathbf {v} \) and \(\mathbf {d} \), respectively, it thus holds that \(v_i = \gamma _i c_i + d_i \beta _n\). Observe that then the function value is
$$\begin{aligned} Y = \left[ \sum _{i=1}^{n} v_i/w_i\right] = \left[ \sum _{i=1}^{n} \gamma _i + \beta _n \cdot \sum _{i=1}^{n} d_i/c_i\right] \end{aligned}$$
\(\mathcal {B}\) computes and outputs \(\left[ t \cdot \beta _n\right] \cdot \left[ \sum _{i=1}^{n} \gamma _i\right] = \left[ t \cdot \beta _n + \sum _{i=1}^{n} \gamma _i\right] \). Observe that if \(\left[ t\right] = \left[ \sum _{i=1}^{n}d_i/c_i\right] \), then it holds that
$$\begin{aligned} \left[ t \cdot \beta _n + \sum _{i=1}^{n} \gamma _i\right] = \left[ \beta _n \cdot \sum _{i=1}^{n} d_i/c_i + \sum _{i=1}^{n} \gamma _i\right] = Y. \end{aligned}$$
Thus, if \(\left[ t\right] = \left[ \sum _{i=1}^{n}d_i/c_i\right] \), then \(\mathcal {B}\) outputs the correct function value Y. However, if \(\left[ t\right] \) is uniformly random, then \(\mathcal {B}\) outputs a uniformly random group element.
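The same exponent-level modeling (toy prime, illustrative names) confirms the \(\mathbf{Challenge}\)-query computation: when \(t = \sum _{i} d_i/c_i\), the reduction's output \(\left[ t \cdot \beta _n + \sum _{i} \gamma _i\right] \) coincides with the true function value \(\left[ \sum _{i} v_i/w_i\right] \).

```python
import secrets

# Exponent-level check (toy prime, illustrative names): the Challenge-query
# output t*beta_n + sum gamma_i equals the true function value sum v_i/c_i
# exactly when t = sum d_i/c_i.
p, n = 101, 4
c = [1 + secrets.randbelow(p - 1) for _ in range(n)]
d = [secrets.randbelow(p) for _ in range(n)]
alpha = [[1 + secrets.randbelow(p - 1) for _ in range(n)] for _ in range(n - 1)]
beta = [2, 7, 1, 5]  # illustrative; beta_n != 0 on the Challenge-query

gamma = [sum(alpha[j][i] * beta[j] for j in range(n - 1)) % p for i in range(n)]
v = [(gamma[i] * c[i] + d[i] * beta[n - 1]) % p for i in range(n)]

Y_true = sum(vi * pow(ci, -1, p) for vi, ci in zip(v, c)) % p  # [sum v_i/w_i]
t_real = sum(di * pow(ci, -1, p) for ci, di in zip(c, d)) % p
assert Y_true == (t_real * beta[n - 1] + sum(gamma)) % p
```

For a uniformly random t the same formula instead yields a uniformly random exponent, which is exactly the dichotomy the reduction exploits.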
Finally, \(\mathcal {B}\) performs an artificial abort step exactly as in Game 1. Note that \(\mathcal {B}\) provides a perfect simulation of the experiment in Game 3, which implies that the advantage of \(\mathcal {B}\) against the \((n-1)\)-linear assumption equals \(\mathsf {Adv}_{\mathcal {A}}^{3} (k)\), which is non-negligible if \(\mathsf {Adv}_{\mathcal {A}}^{\mathsf {VRF}} (k)\) is. \(\square \)
Notes
- 1.
In a Diffie-Hellman gap group, the Decisional Diffie-Hellman problem is easy, but the Computational Diffie-Hellman problem is hard. Prominent candidates for such groups are pairing-friendly groups.
- 2.
Recall our notation from Sect. 3.
- 3.
To see this, observe that except with probability \((n-1)/p\), the first \(n-1\) columns of \(\mathbf {C} '\) are linearly independent (as they are random elements in the image of the rank-\((n-1)\) matrix \(\mathbf {A} \)). Further, the last row (which is independently and uniformly random) does not lie in the span of the first \(n-1\) rows, except with probability at most 1/p.
References
Abdalla, M., Catalano, D., Fiore, D.: Verifiable random functions from identity-based key encapsulation. In: Joux, A. (ed.) EUROCRYPT 2009. LNCS, vol. 5479, pp. 554–571. Springer, Heidelberg (2009)
Abdalla, M., Catalano, D., Fiore, D.: Verifiable random functions: relations to identity-based key encapsulation and new constructions. J. Cryptology 27(3), 544–593 (2014)
Abdalla, M., Fiore, D., Lyubashevsky, V.: From selective to full security: semi-generic transformations in the standard model. In: Fischlin, M., Buchmann, J., Manulis, M. (eds.) PKC 2012. LNCS, vol. 7293, pp. 316–333. Springer, Heidelberg (2012)
Au, M.H., Susilo, W., Mu, Y.: Practical compact E-cash. In: Pieprzyk, J., Ghodosi, H., Dawson, E. (eds.) ACISP 2007. LNCS, vol. 4586, pp. 431–445. Springer, Heidelberg (2007)
Belenkiy, M., Chase, M., Kohlweiss, M., Lysyanskaya, A.: Compact E-cash and simulatable VRFs revisited. In: Shacham, H., Waters, B. (eds.) Pairing 2009. LNCS, vol. 5671, pp. 114–131. Springer, Heidelberg (2009)
Bellare, M., Ristenpart, T.: Simulation without the artificial abort: simplified proof and improved concrete security for Waters' IBE scheme. In: Joux, A. (ed.) EUROCRYPT 2009. LNCS, vol. 5479, pp. 407–424. Springer, Heidelberg (2009)
Bellare, M., Yung, M.: Certifying cryptographic tools: the case of trapdoor permutations. In: Brickell, E.F. (ed.) CRYPTO 1992. LNCS, vol. 740, pp. 442–460. Springer, Heidelberg (1993)
Bellare, M., Yung, M.: Certifying permutations: noninteractive zero-knowledge based on any trapdoor permutation. J. Cryptology 9(3), 149–166 (1996)
Boneh, D., Boyen, X.: Secure identity based encryption without random oracles. In: Franklin, M. (ed.) CRYPTO 2004. LNCS, vol. 3152, pp. 443–459. Springer, Heidelberg (2004)
Boneh, D., Boyen, X., Shacham, H.: Short group signatures. In: Franklin, M. (ed.) CRYPTO 2004. LNCS, vol. 3152, pp. 41–55. Springer, Heidelberg (2004)
Boneh, D., Montgomery, H.W., Raghunathan, A.: Algebraic pseudorandom functions with improved efficiency from the augmented cascade. In: Proceedings of the ACM Conference on Computer and Communications Security 2010, pp. 131–140. ACM (2010)
Boneh, D., Halevi, S., Hamburg, M., Ostrovsky, R.: Circular-secure encryption from decision Diffie-Hellman. In: Wagner, D. (ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 108–125. Springer, Heidelberg (2008)
Brakerski, Z., Goldwasser, S., Rothblum, G.N., Vaikuntanathan, V.: Weak verifiable random functions. In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 558–576. Springer, Heidelberg (2009)
Cash, D., Hofheinz, D., Kiltz, E., Peikert, C.: Bonsai trees, or how to delegate a lattice basis. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 523–552. Springer, Heidelberg (2010)
Chase, M., Lysyanskaya, A.: Simulatable VRFs with applications to multi-theorem NIZK. In: Menezes, A. (ed.) CRYPTO 2007. LNCS, vol. 4622, pp. 303–322. Springer, Heidelberg (2007)
Chase, M., Meiklejohn, S.: Déjà Q: using dual systems to revisit q-type assumptions. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 622–639. Springer, Heidelberg (2014)
Chatterjee, S., Sarkar, P.: On (hierarchical) identity-based encryption protocols with short public parameters (with an exposition of Waters' artificial abort technique). Cryptology ePrint Archive, Report 2006/279 (2006). http://eprint.iacr.org/
Cheon, J.H.: Security analysis of the strong Diffie-Hellman problem. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 1–11. Springer, Heidelberg (2006)
Dodis, Y.: Efficient construction of (distributed) verifiable random functions. In: Desmedt, Y.G. (ed.) PKC 2003. LNCS, vol. 2567, pp. 1–17. Springer, Heidelberg (2002)
Dodis, Y., Yampolskiy, A.: A verifiable random function with short proofs and keys. In: Vaudenay, S. (ed.) PKC 2005. LNCS, vol. 3386, pp. 416–431. Springer, Heidelberg (2005)
Escala, A., Herold, G., Kiltz, E., Ráfols, C., Villar, J.: An algebraic framework for Diffie-Hellman assumptions. Cryptology ePrint Archive, Report 2013/377 (2013). http://eprint.iacr.org/
Escala, A., Herold, G., Kiltz, E., Ràfols, C., Villar, J.: An algebraic framework for Diffie-Hellman assumptions. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 129–147. Springer, Heidelberg (2013)
Fiore, D., Schröder, D.: Uniqueness is a different story: impossibility of verifiable random functions from trapdoor permutations. In: Cramer, R. (ed.) TCC 2012. LNCS, vol. 7194, pp. 636–653. Springer, Heidelberg (2012)
Freire, E.S.V., Hofheinz, D., Paterson, K.G., Striecks, C.: Programmable hash functions in the multilinear setting. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part I. LNCS, vol. 8042, pp. 513–530. Springer, Heidelberg (2013)
Fuchsbauer, G.: Constrained verifiable random functions. In: Abdalla, M., De Prisco, R. (eds.) SCN 2014. LNCS, vol. 8642, pp. 95–114. Springer, Heidelberg (2014)
Gerbush, M., Lewko, A., O’Neill, A., Waters, B.: Dual form signatures: an approach for proving security from static assumptions. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 25–42. Springer, Heidelberg (2012)
Hofheinz, D., Kiltz, E.: Programmable hash functions and their applications. In: Wagner, D. (ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 21–38. Springer, Heidelberg (2008)
Hohenberger, S., Waters, B.: Constructing verifiable random functions with large input spaces. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 656–672. Springer, Heidelberg (2010)
Jager, T.: Verifiable random functions from weaker assumptions. In: Dodis, Y., Nielsen, J.B. (eds.) TCC 2015, Part II. LNCS, vol. 9015, pp. 121–143. Springer, Heidelberg (2015)
Jao, D., Yoshida, K.: Boneh-Boyen signatures and the strong Diffie-Hellman problem. In: Shacham, H., Waters, B. (eds.) Pairing 2009. LNCS, vol. 5671, pp. 1–16. Springer, Heidelberg (2009)
Jarecki, S.: Handcuffing big brother: an abuse-resilient transaction escrow scheme. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 590–608. Springer, Heidelberg (2004)
Kakvi, S.A., Kiltz, E., May, A.: Certifying RSA. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 404–414. Springer, Heidelberg (2012)
Lauer, S.: Verifiable random functions. Master's thesis, Ruhr-University Bochum (2015)
Liskov, M.: Updatable zero-knowledge databases. In: Roy, B. (ed.) ASIACRYPT 2005. LNCS, vol. 3788, pp. 174–198. Springer, Heidelberg (2005)
Lysyanskaya, A.: Unique signatures and verifiable random functions from the DH-DDH separation. In: Yung, M. (ed.) CRYPTO 2002. LNCS, vol. 2442, pp. 597–612. Springer, Heidelberg (2002)
Micali, S., Rabin, M.O., Vadhan, S.P.: Verifiable random functions. In: Proceedings of the FOCS 1999, pp. 120–130. IEEE Computer Society (1999)
Micali, S., Reyzin, L.: Soundness in the public-key model. In: Kilian, J. (ed.) CRYPTO 2001. LNCS, vol. 2139, pp. 542–565. Springer, Heidelberg (2001)
Micali, S., Rivest, R.L.: Micropayments revisited. In: Preneel, B. (ed.) CT-RSA 2002. LNCS, vol. 2271, pp. 149–163. Springer, Heidelberg (2002)
Naor, M.: On cryptographic assumptions and challenges. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 96–109. Springer, Heidelberg (2003)
Naor, M., Reingold, O.: Number-theoretic constructions of efficient pseudo-random functions. J. ACM 51(2), 231–262 (2004)
Naor, M., Segev, G.: Public-key cryptosystems resilient to key leakage. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 18–35. Springer, Heidelberg (2009)
Shacham, H.: A Cramer-Shoup encryption scheme from the linear assumption and from progressively weaker linear variants. Cryptology ePrint Archive, Report 2007/074 (2007). http://eprint.iacr.org/
Waters, B.: Dual system encryption: realizing fully secure IBE and HIBE under simple assumptions. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 619–636. Springer, Heidelberg (2009)
Waters, B.: Efficient identity-based encryption without random oracles. In: Cramer, R. (ed.) EUROCRYPT 2005. LNCS, vol. 3494, pp. 114–127. Springer, Heidelberg (2005)
A The Need for an Artificial Abort
The “artificial abort” technique of Waters [44] has become standard for security proofs that combine a “partitioning” proof technique with a “decisional” complexity assumption. For example, it is also used to analyze Waters’ IBE scheme [44], the verifiable random function of Hohenberger and Waters [28], and many other works.
Unfortunately, the artificial abort is necessary, because our \((n-1)\)-linear reduction algorithm \(\mathcal {B}\) is not able to use the output of \(\mathcal {A}\) directly in case the experiment is not aborted. This is because the abort probability may depend on the particular sequence of queries issued by \(\mathcal {A}\). For example, it may hold that \(\Pr \left[ B = b\right] = 1/2 + \epsilon \) for some non-negligible \(\epsilon \), which means that \(\mathcal {A}\) has a non-trivial advantage in breaking the VRF-security, while \(\Pr \left[ B = b \mid \lnot \mathsf {abort} \right] = 1/2\), which means that \(\mathcal {B}\) does not have any non-trivial advantage in breaking the \((n-1)\)-linear assumption. Essentially, the artificial abort ensures that \(\mathcal {B}\) aborts for all sequences of queries made by \(\mathcal {A}\) with approximately the same probability.
Alternatively, we could avoid the artificial abort by following the approach of Bellare and Ristenpart [6], which yields a tighter (but more complex) reduction. To this end, we would have to define and construct a PVHF which guarantees sufficiently close upper and lower bounds on the abort probability. This is possible by adapting the idea of balanced admissible hash functions (AHFs) from [29] to “balanced PVHFs”. Indeed, instantiating our adaptively-secure PVHF with the balanced AHF from [29] yields such a balanced PVHF. However, this would have made the definition of PVHFs much more complex. We preferred to keep this novel definition as simple as possible, and thus used the artificial abort approach of Waters [44].
Copyright information
© 2016 International Association for Cryptologic Research
About this paper
Cite this paper
Hofheinz, D., Jager, T. (2016). Verifiable Random Functions from Standard Assumptions. In: Kushilevitz, E., Malkin, T. (eds) Theory of Cryptography. TCC 2016. Lecture Notes in Computer Science(), vol 9562. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-49096-9_14
DOI: https://doi.org/10.1007/978-3-662-49096-9_14
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-662-49095-2
Online ISBN: 978-3-662-49096-9