Abstract
In this work, we present:
-
the first adaptively secure ABE for DFA from the k-Lin assumption in prime-order bilinear groups; this resolves one of the open problems posed by Waters [CRYPTO’12];
-
the first ABE for NFA from the k-Lin assumption, provided the number of accepting paths is smaller than the order of the underlying group; the scheme achieves selective security;
-
the first compact adaptively secure ABE (supporting unbounded multi-use of attributes) for branching programs from the k-Lin assumption, which generalizes and simplifies the recent result of Kowalczyk and Wee for boolean formula (NC1) [EUROCRYPT’19].
Our adaptively secure ABE for DFA relies on a new combinatorial mechanism avoiding the exponential security loss in the number of states when naively combining two recent techniques from CRYPTO’19 and EUROCRYPT’19. This requires us to design a selectively secure ABE for NFA; we give a construction which is sufficient for our purpose and of independent interest. Our ABE for branching programs leverages insights from our ABE for DFA.
J. Gong—Supported by NSFC-ISF Joint Scientific Research Program (61961146004) and ERC Project aSCEND (H2020 639554).
H. Wee—Supported by ERC Project aSCEND (H2020 639554).
1 Introduction
Attribute-based encryption (ABE) [12, 19] is an advanced form of public-key encryption that supports fine-grained access control for encrypted data. Here, ciphertexts are associated with an attribute x and keys with a policy \({\Gamma }\); decryption is possible only when \({\Gamma }(x)=1\). One important class of policies we would like to support are those specified using deterministic finite automata (DFA). Such policies capture many real-world applications involving simple computation on data of unbounded size such as network logging application, tax returns and virus scanners.
Since the seminal work of Waters [21] introducing ABE for DFA and providing the first instantiation from pairings, substantial progress has been made in the design and analysis of ABE schemes for DFA [1,2,3,4,5, 11], proving various trade-offs between security assumptions and security guarantees. However, two central problems posed by Waters [21] remain open. The first question pertains to security and assumptions:
Q1: Can we build an ABE for DFA with adaptive security from static assumptions in bilinear groups, notably the k-Lin assumption in prime-order bilinear groups?
From both a practical and theoretical standpoint, we would like to base cryptography on weaker and better understood assumptions, as is the case with the k-Lin assumption, while also capturing more realistic adversarial models, as is the case with adaptive security. Prior ABE schemes for DFA achieve either adaptive security from less desirable q-type assumptions [1, 4, 5, 21], where the complexity of the assumption grows with the length of the string x, or, very recently, selective security from the k-Lin assumption [2, 11]. Indeed, this open problem was reiterated in the latter work [11], emphasizing a security loss that is polynomial (and not exponential) in the size of the DFA.
The next question pertains to expressiveness:
Q2: Can we build an ABE for nondeterministic finite automata (NFA) with a polynomial dependency on the NFA size?
The efficiency requirement rules out the naive approach of converting an NFA to a DFA, which incurs an exponential blow-up in size. Here, we do not know any construction even if we only require selective security under q-type assumptions. Partial progress was made very recently by Agrawal et al. [3] in the more limited secret-key setting, where encryption requires access to the master secret key. Throughout the rest of this work, we refer only to the standard public-key setting for ABE, where the adversary can make an a priori unbounded number of secret key queries.
1.1 Our Results
In this work, we address the aforementioned open problems:
-
We present an adaptively secure ABE for DFA from the k-Lin assumption in prime-order bilinear groups, which affirmatively answers the first open problem. Our scheme achieves ciphertext and key sizes with linear complexity, as well as security loss that is polynomial in the size of the DFA and the number of key queries. Concretely, over the binary alphabet and under the SXDH (=1-Lin) assumption, our ABE for DFA achieves ciphertext and key sizes 2–3 times that of Waters’ scheme (cf. Fig. 4), while simultaneously improving on both the assumptions and security guarantees.
-
We present a selectively secure ABE for NFA also from the k-Lin assumption, provided the number of accepting paths is smaller than p, where p is the order of the underlying group. We also present a simpler ABE for NFA with the same restriction from the same q-type assumption used in Waters’ ABE for DFA. Both ABE schemes for NFA achieve ciphertext and key sizes with linear complexity.
-
Finally, we present the first compact adaptively secure ABE for branching programs from the k-Lin assumption, which generalizes and simplifies the recent result of Kowalczyk and Wee [15] for boolean formula (NC1). Here, “compact” is also referred to as “unbounded multi-use of attributes” in [5]; each attribute/input bit can appear in the formula/program an unbounded number of times. Our construction leverages insights from our ABE for DFA, and works directly with any layered branching program and avoids both the pre-processing step in the latter work for transforming boolean formulas into balanced binary trees of logarithmic depth, as well as the delicate recursive pebbling strategy for binary trees.
We summarize the state of the art of ABE for DFA, NFA and branching programs in Figs. 1, 2, 3, respectively.
In the rest of this section, we focus on our three ABE schemes that rely on the k-Lin assumption, all of which follow the high-level proof strategy in [11, 15]. We design a series of hybrids that traces through the computation, and the analysis carefully combines (i) a “nested, two-slot” dual system argument [8, 13, 16,17,18, 20], (ii) a new combinatorial mechanism for propagating entropy along the NFA computation path, and (iii) the piecewise guessing framework [14, 15] for achieving adaptive security. We proceed to outline and motivate several of our key ideas. From now on, we use GWW to refer to the ABE for DFA by Gong et al. [11].
Adaptively Secure ABE for DFA. Informally, the piecewise guessing framework [14, 15] for ABE adaptive security says that if we have a selectively secure ABE scheme where proving indistinguishability of every pair of adjacent hybrids requires only knowing \(\log L\) bits of information about the challenge attribute x, then the same scheme is adaptively secure with a security loss of L. Moreover, when combined with the dual system argument, it suffices to consider selective security when the adversary only gets a single key corresponding to a single DFA.
In the GWW security proof, proving indistinguishability of adjacent hybrids requires knowing the subset of DFA states that are reachable from the accept states by “back-tracking” the computation. This corresponds to \(\log L = Q\)—we need Q bits to specify an arbitrary subset of [Q]—and a security loss of \(2^Q\). Our key insight for achieving adaptive security is that via a suitable transformation to the DFA, we can ensure that the subset of reachable states per input are always singleton sets, which corresponds to \(\log L = \log Q\) and a security loss of Q. The transformation is very simple: run the DFA “in reverse”! That is, start from the accept states, read the input bits in reverse order and the transitions also in reverse, and accept if we reach the start state. It is easy to see that this actually corresponds to an NFA computation, which means that we still need to design a selectively secure ABE for NFA. Also, back-tracking along this NFA corresponds to normal computation in the original DFA, and therefore always reaches singleton sets of states during any intermediate computation.
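To make the “run the DFA in reverse” idea concrete, here is a small illustrative Python sketch (the 3-state DFA, alphabet and input below are hypothetical examples, not taken from the paper): it checks that the reversed NFA accepts exactly when the DFA does, and that back-tracking along the reversed NFA, i.e., forward DFA computation, always visits singleton state sets.

```python
# Toy illustration (not part of the scheme): running a DFA "in reverse" yields an NFA.
# Hypothetical 3-state DFA over {'0','1'}: delta maps (state, symbol) -> state.
delta = {(0, '0'): 1, (0, '1'): 0,
         (1, '0'): 1, (1, '1'): 2,
         (2, '0'): 2, (2, '1'): 2}
start, accept = 0, {2}

def dfa_accepts(x):
    u = start
    for sym in x:
        u = delta[(u, sym)]
    return u in accept

def reversed_nfa_accepts(x):
    # Start from the accept states, read x in reverse, follow transitions backwards,
    # and accept if the start state is reached.
    states = set(accept)
    for sym in reversed(x):
        states = {u for (u, s), v in delta.items() if s == sym and v in states}
    return start in states

# Back-tracking along the reversed NFA is just forward DFA computation,
# so the reachable set at every intermediate step is a singleton.
x = '0110'
u, backtrack = start, [{start}]
for sym in x:
    u = delta[(u, sym)]
    backtrack.append({u})
assert all(len(S) == 1 for S in backtrack)
assert dfa_accepts(x) == reversed_nfa_accepts(x)
```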
ABE for NFA. Next, we sketch our ABE for NFA, which uses an asymmetric bilinear group \((G_1,G_2,G_T,e)\) of prime order p where \(e: G_1 \times G_2 \rightarrow G_T\). As in Waters’ ABE for DFA [21], an encryption of \(x = (x_1,\ldots ,x_\ell ) \in \{0,1\}^\ell \) contains random scalars \(s_0,\ldots ,s_\ell \leftarrow \mathbb {Z}_p\) in the exponent in \(G_1\). In the secret key, we pick a random scalar \(d_u \leftarrow \mathbb {Z}_p\) for each state \(u \in [Q]\). We can now describe the invariant used during decryption with \(g_1,g_2\) being respective generators of \(G_1,G_2\):
-
In Waters’ ABE for DFA, if the computation reaches a state \(u_i \in [Q]\) upon reading \(x_1,\ldots ,x_i\), decryption computes \(e(g_1,g_2)^{s_i d_{u_i}}\). In particular, the scheme allows the decryptor to compute the ratios
$$\begin{aligned} e(g_1,g_2)^{s_j d_v - s_{j-1} d_u}, \; \forall j \in [\ell ], u \in [Q], v = \delta (u,x_j) \in [Q] \end{aligned}$$(1)
where \(\delta : [Q] \times \{0,1\}\rightarrow [Q]\) is the DFA transition function.
-
The natural way to extend (1) to account for non-deterministic transitions in an NFA is to allow the decryptor to compute
$$\begin{aligned} e(g_1,g_2)^{s_j d_v - s_{j-1} d_u}, \; \forall j \in [\ell ], u \in [Q], v \in \delta (u,x_j) \subseteq [Q] \end{aligned}$$(2)
where \(\delta : [Q] \times \{0,1\}\rightarrow 2^{[Q]}\) is the NFA transition function. As noted by Waters [21], such an ABE scheme for NFA is broken via a so-called “back-tracking attack”, which we describe in the full paper.
-
In our ABE for NFA, we allow the decryptor to compute
$$\begin{aligned} e(g_1,g_2)^{s_j (\sum _{v \in \delta (u,x_j)} d_v) - s_{j-1} d_u}, \; \forall j \in [\ell ], u \in [Q] \end{aligned}$$(3)
A crucial distinction between (3) and (2) is that the decryptor can only compute one quantity for each j, u in the former (as is the case also in (1)), and up to Q quantities in the latter. The ability to compute multiple quantities in (2) is exactly what enables the back-tracking attack.
We clarify that our ABE for NFA imposes an extra restriction on the NFA, namely that the total number of accepting pathsFootnote 1 be non-zero \(\bmod \,p\) for accepting inputs; we use NFA\(^{\oplus _p}\) to denote such NFAs. In particular, this is satisfied by standard NFA where the total number of accepting paths is less than p for all inputs. This is in general a non-trivial restriction since the number of accepting paths for an arbitrary NFA can be as large as \(Q^\ell \). Fortunately, for NFAs obtained by running a DFA “in reverse”, the number of accepting paths is always either 0 or 1.
Indeed, the above idea, along with a suitable modification of Waters’ proof strategy, already yields our selectively secure ABE for NFA\(^{\oplus _p}\) under q-type assumptions in asymmetric bilinear groups of prime order p. We defer the details to the full paper.
-
To obtain a selectively secure scheme based on k-Lin, we apply the same modifications as in GWW [11]. For the proof of security, entropy propagation is defined via back-tracking the NFA computation, in a way analogous to that for back-tracking the DFA computation.
-
To obtain an adaptively secure scheme based on k-Lin, we adapt the selectively secure scheme to the piecewise guessing framework [15]. One naive approach is to introduce a new semi-functional space. In contrast, we introduce one extra component into the master public key, secret key and ciphertext, respectively. With these extra components, we can avoid adding a new semi-functional subspace by reusing an existing subspace, as in the previous unbounded ABE of [8]. Under the k-Lin assumption, our technique roughly saves \(k \cdot \ell \) elements in the ciphertext and \(k \cdot (2|\varSigma |+2)Q\) elements in the secret key over the generic approach. This way, we obtain ciphertext and key sizes that are almost the same as those of the GWW selectively secure scheme.
ABE for Branching Programs. We build our compact adaptively secure ABE for branching programs (BP) in two steps, analogous to our adaptively secure ABE for DFA. In particular, we first show how to transform branching programs into a subclass of nondeterministic branching programs (NBP) and then construct an adaptively secure ABE for this class of NBP. Note that the latter suffices to capture the special case of BP with permutation transition functions (without transforming BP to NBP) and readily simplifies the result of Kowalczyk and Wee [15] for boolean formula (NC1).
1.2 Technical Overview
We start by recalling the standard definitions of DFA and NFA using vector-matrix notation: that is, we describe the start and accept states using their characteristic vectors, and specify the transition function via transition matrices. The use of vector-matrix notation enables a more compact description of our ABE schemes, and also clarifies the connection to branching programs.
NFA, DFA, \(\mathbf {NFA}^{\varvec{\oplus _p}}\). An NFA \({\Gamma }\) is specified using \((\,Q,\varSigma ,\{\mathbf {M}_{\sigma }\}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f}\,)\) where \(\varSigma \) is the alphabet, \(Q \in \mathbb {N}\) is the number of states, \(\mathbf {u},\mathbf {f}\in \{0,1\}^{1 \times Q}\) describe the start and accept states respectively, and \(\mathbf {M}_\sigma \in \{0,1\}^{Q \times Q}\) describe the transition function.
The NFA \({\Gamma }\) accepts an input \(x = (x_1,\ldots ,x_\ell ) \in \varSigma ^\ell \), denoted by \({\Gamma }(x)=1\), if
$$\begin{aligned} \mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_2}\mathbf {M}_{x_1} \mathbf {u}^{\!\scriptscriptstyle {\top }}> 0 \end{aligned}$$(4)
and rejects the input otherwise, denoted by \({\Gamma }(x) = 0\). We will also refer to the quantity \(\mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_2}\mathbf {M}_{x_1} \mathbf {u}^{\!\scriptscriptstyle {\top }}\) as the number of accepting paths for x. The above relation (4) is equivalent to
The unusual choice of notation is to simplify the description of our ABE scheme. Let \(\mathcal {E}_Q\) be the collection of Q elementary row vectors of dimension Q.
-
A DFA \({\Gamma }\) is a special case of NFA where \(\mathbf {u}\in \mathcal {E}_Q\) and each column in every matrix \(\mathbf {M}_\sigma \) is an elementary column vector (i.e., contains exactly one 1).
-
An NFA\(^{\oplus _p}\), parameterized by a prime p, is the same as an NFA except we change the accept criterion in (4) to:
$$ \mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_2}\mathbf {M}_{x_1} \mathbf {u}^{\!\scriptscriptstyle {\top }}\ne 0 \bmod p $$
Note that this coincides with the standard NFA definition whenever the total number of accepting paths for all inputs is less than p.
Throughout the rest of this work, when we refer to NFA, we mean NFA\(^{\oplus _p}\) unless stated otherwise.
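As a concrete illustration of the vector-matrix notation (a toy sketch; the 2-state automaton, alphabet and prime below are hypothetical), the product \(\mathbf {f}\mathbf {M}_{x_\ell }\cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }}\) counts accepting paths, and the NFA and NFA\(^{\oplus _p}\) acceptance criteria only differ in how this count is interpreted:

```python
import numpy as np

p = 101                                    # toy prime; the scheme uses a large prime p
# Hypothetical 2-state NFA over {'a','b'} in vector-matrix form:
M = {'a': np.array([[1, 0], [1, 1]]),      # M_sigma[v, u] = 1 iff v in delta(u, sigma)
     'b': np.array([[0, 1], [1, 0]])}
u = np.array([1, 0])                       # start state(s)
f = np.array([0, 1])                       # accept state(s)

def num_accepting_paths(x):
    vec = u.copy()
    for sym in x:                          # computes f M_{x_l} ... M_{x_1} u^T
        vec = M[sym].dot(vec)
    return int(f.dot(vec))

x = ['a', 'b', 'a']
paths = num_accepting_paths(x)
nfa_accepts = paths > 0                    # NFA criterion
nfa_mod_p_accepts = paths % p != 0         # NFA^{xor_p} criterion
print(paths, nfa_accepts, nfa_mod_p_accepts)
```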
ABE for NFA\(^{\varvec{\oplus _p}}\). Following our overview in Sect. 1.1, an encryption of \(x = (x_1,\ldots ,x_\ell ) \in \varSigma ^\ell \) contains random scalars \(s_0,\ldots ,s_\ell \) in the exponent, where the plaintext is masked by \(e(g_1,g_2)^{s_\ell \alpha }\). To generate a secret key for an NFA\(^{\oplus _p}\) \({\Gamma }\), we first pick \(\mathbf {d}= (d_1,\ldots ,d_Q) \leftarrow \mathbb {Z}_p^Q\) as before. We allow the decryptor to compute the following quantities in the exponent over \(G_T\):
$$\begin{aligned} \mathrm{(i)}\ s_\ell \,(\alpha \mathbf {f}- \mathbf {d}), \qquad \mathrm{(ii)}\ s_j\, \mathbf {d}\mathbf {M}_{x_j} - s_{j-1}\, \mathbf {d}\ \ \forall j \in [\ell ], \qquad \mathrm{(iii)}\ s_0\, \mathbf {d}\mathbf {u}^{\!\scriptscriptstyle {\top }} \end{aligned}$$(5)
If we write \(\mathbf {u}_{j,x}^{\!\scriptscriptstyle {\top }}= \mathbf {M}_{x_j}\cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }}\) for all \(j\in [\ell ]\) and \(\mathbf {u}_{0,x}=\mathbf {u}\), then we have
$$\begin{aligned} s_0\, \mathbf {d}\mathbf {u}^{\!\scriptscriptstyle {\top }}+ \sum _{j=1}^{\ell }\bigl (s_j\, \mathbf {d}\mathbf {M}_{x_j} - s_{j-1}\, \mathbf {d}\bigr )\mathbf {u}_{j-1,x}^{\!\scriptscriptstyle {\top }}+ s_\ell \,(\alpha \mathbf {f}- \mathbf {d})\,\mathbf {u}_{\ell ,x}^{\!\scriptscriptstyle {\top }}= s_\ell \,\alpha \cdot \mathbf {f}\mathbf {u}_{\ell ,x}^{\!\scriptscriptstyle {\top }}. \end{aligned}$$(6)
This means that whenever \(\mathbf {f}\mathbf {u}_{\ell ,x}^{\!\scriptscriptstyle {\top }}\ne 0 \bmod p\), as is the case when \({\Gamma }(x) = 1\), the decryptor will be able to recover \(e(g_1,g_2)^{s_\ell \alpha }\).
Indeed, it is straightforward to verify that the following ABE scheme satisfies the above requirements, where \([\cdot ]_1,[\cdot ]_2,[\cdot ]_T\) denote component-wise exponentiations in respective groups \(G_1,G_2,G_T\) [10].
In the full paper, we prove that this scheme is selectively secure under \(\ell \)-EBDHE assumption; this is the assumption underlying Waters’ selectively secure ABE for DFA [21].
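The telescoping identity (6) can be checked numerically. The following sketch combines the quantities (i)–(iii) of (5) for random scalars over \(\mathbb {Z}_p\) (a toy check using the same hypothetical 2-state automaton and small prime as above):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 101
M = {'a': np.array([[1, 0], [1, 1]]), 'b': np.array([[0, 1], [1, 0]])}
u = np.array([1, 0]); f = np.array([0, 1])
x = ['a', 'b', 'a']
ell, Q = len(x), 2

alpha = int(rng.integers(1, p))
d = rng.integers(0, p, Q)                          # one scalar d_u per state
s = rng.integers(1, p, ell + 1)                    # s_0, ..., s_ell

# u_{j,x}^T = M_{x_j} ... M_{x_1} u^T mod p
u_jx = [u % p]
for sym in x:
    u_jx.append(M[sym].dot(u_jx[-1]) % p)

# combine the quantities (i), (ii), (iii) of (5) as in (6)
total = s[0] * d.dot(u_jx[0])                                    # (iii)
for j in range(1, ell + 1):
    diff = s[j] * d.dot(M[x[j - 1]]) - s[j - 1] * d              # (ii), a row vector
    total += diff.dot(u_jx[j - 1])
total += s[ell] * (alpha * f - d).dot(u_jx[ell])                 # (i)
assert total % p == (s[ell] * alpha * f.dot(u_jx[ell])) % p      # equals s_l * alpha * f u_{l,x}^T
```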
Selective Security from k-Lin. Following the GWW proof strategy which in turn builds on the dual system argument, we design a series of games \(\mathsf {G}_0,\ldots ,\mathsf {G}_\ell \) such that in \(\mathsf {G}_i\), the quantities \(s_i\) and \(\mathbf {d}\) have some extra entropy in the so-called semi-functional space (which requires first modifying the above scheme). The entropy in \(\mathbf {d}\) is propagated from \(\mathsf {G}_0\) to \(\mathsf {G}_1\), then \(\mathsf {G}_2\), and finally to \(\mathsf {G}_\ell \) via a combination of a computational and combinatorial arguments. In \(\mathsf {G}_\ell \), we will have sufficient entropy to statistically mask \(\alpha \) in the secret key, which allows us to argue that \(e(g_1,g_2)^{s_\ell \alpha }\) statistically masks the plaintext. In this overview, we focus on the novel component, namely the combinatorial argument which exploits specific properties of our scheme for NFA\(^{\oplus _p}\); the computational steps are completely analogous to those in GWW.
In more detail, we want to replace \(\mathbf {d}\) with \(\mathbf {d}+ \mathbf {d}'_i\) in \(\mathsf {G}_i\), where \(\mathbf {d}'_i \in \mathbb {Z}_p^Q\) corresponds to the extra entropy we introduce into the secret keys in the semi-functional space. Note that \(\mathbf {d}'_i\) will depend on both the challenge attribute \(x^*\) as well as the underlying NFA\(^{\oplus _p}\). We have the following constraints on \(\mathbf {d}'_i\)’s, arising from the fact that an adversarial distinguisher for \(\mathsf {G}_0,\ldots ,\mathsf {G}_\ell \) can always compute what a decryptor can compute in (5):
-
to mask \(\alpha \) in \(\mathsf {G}_\ell \), we set \(\mathbf {d}'_\ell = \varDelta \mathbf {f}\) where \(\varDelta \leftarrow \mathbb {Z}_p\), so that
$$\begin{aligned} \alpha \mathbf {f}- (\mathbf {d}+ \mathbf {d}'_\ell ) = (\alpha -\varDelta )\mathbf {f}- \mathbf {d}\end{aligned}$$
perfectly hides \(\alpha \);
-
(ii) implies that \(s_i\, \mathbf {d}'_i \mathbf {M}_{x^*_i} \approx _s s_{i-1}\, \mathbf {d}'_{i-1}\); this is to prevent a distinguishing attackFootnote 2 between \(\mathsf {G}_{i-1}\) and \(\mathsf {G}_i\) by computing \(s_i \mathbf {d}\mathbf {M}_{x^*_i} - s_{i-1} \mathbf {d}\) in both games;
-
(iii) implies that \( s_0 (\mathbf {d}+ \mathbf {d}'_0) \mathbf {u}^{\!\scriptscriptstyle {\top }}= s_0 \mathbf {d}\mathbf {u}^{\!\scriptscriptstyle {\top }}\), and therefore, \(\mathbf {d}'_0 \mathbf {u}^{\!\scriptscriptstyle {\top }}= 0 \bmod p\). This is to prevent a distinguishing attackFootnote 3 between the real keys and those in \(\mathsf {G}_0\).
In particular, we can satisfy the first two constraints by settingFootnote 4
$$\begin{aligned} \mathbf {d}'_i := \varDelta \cdot \mathbf {f}\mathbf {M}_{x^*_\ell }\cdots \mathbf {M}_{x^*_{i+1}}, \quad \forall i \in [0,\ell ], \end{aligned}$$
so that \(s_i\, \mathbf {d}'_i \mathbf {M}_{x^*_i} = s_i \varDelta \cdot \mathbf {f}\mathbf {M}_{x^*_\ell }\cdots \mathbf {M}_{x^*_{i}} \approx _s s_{i-1} \varDelta \cdot \mathbf {f}\mathbf {M}_{x^*_\ell }\cdots \mathbf {M}_{x^*_{i}} = s_{i-1}\, \mathbf {d}'_{i-1}\), where \(\approx _s\) holds over \(\varDelta \leftarrow \mathbb {Z}_p\), as long as \(s_0,\ldots ,s_\ell \ne 0\). Whenever \({\Gamma }(x^*) = 0\), we have
$$\begin{aligned} \mathbf {d}'_0\, \mathbf {u}^{\!\scriptscriptstyle {\top }}= \varDelta \cdot \mathbf {f}\mathbf {M}_{x^*_\ell }\cdots \mathbf {M}_{x^*_{1}} \mathbf {u}^{\!\scriptscriptstyle {\top }}= 0 \bmod p, \end{aligned}$$
and therefore the third constraint is also satisfied.
Two clarifying remarks. First, the quantity \(\mathbf {f}\mathbf {M}_{x^*_\ell }\cdots \mathbf {M}_{x^*_{i+1}}\) used in defining \(\mathbf {d}'_i\) has a natural combinatorial interpretation: its u’th coordinate corresponds to the number of paths from the accept states to u, while back-tracking along \(x^*_\ell ,\ldots ,x^*_{i+1}\). In the specific case of a DFA, this value is 1 if u is reachable from an accept state, and 0 otherwise. It is then easy to see that our proof strategy generalizes that of GWW for DFA: the latter adds \(\varDelta \) to \(d_u\) in \(\mathsf {G}_i\) whenever u is reachable from an accept state while back-tracking along the last \(\ell -i\) bits of the challenge attribute (cf. [11, Sec. 3.2]). Second, the “naive” (and insecure) ABE for NFA that captures non-deterministic transitions as in (2) introduces more equations in (ii) in (5); this in turn yields more –and ultimately unsatisfiable– constraints on the \(\mathbf {d}'_i\)’s.
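For concreteness, the following sketch (same hypothetical toy automaton and prime as above; \(\varDelta \) and the rejected input \(x^*\) are arbitrary illustrative choices) computes the back-tracking counts \(\mathbf {f}\mathbf {M}_{x^*_\ell }\cdots \mathbf {M}_{x^*_{i+1}}\) and checks the constraints for \(\mathbf {d}'_i = \varDelta \cdot \mathbf {f}\mathbf {M}_{x^*_\ell }\cdots \mathbf {M}_{x^*_{i+1}}\):

```python
import numpy as np

p = 101
M = {'a': np.array([[1, 0], [1, 1]]), 'b': np.array([[0, 1], [1, 0]])}
u = np.array([1, 0]); f = np.array([0, 1])
x_star = ['b', 'b']                        # a rejected input for this toy automaton
ell = len(x_star)
Delta = 7                                  # an arbitrary nonzero scalar

# back-tracking counts f M_{x*_l} ... M_{x*_{i+1}} mod p, for i = l, ..., 0
f_i = [None] * (ell + 1)
f_i[ell] = f % p
for i in range(ell - 1, -1, -1):
    f_i[i] = f_i[i + 1].dot(M[x_star[i]]) % p

d_prime = [Delta * v % p for v in f_i]     # d'_i = Delta * f M_{x*_l} ... M_{x*_{i+1}}

assert np.array_equal(d_prime[ell], Delta * f % p)           # first constraint: d'_l = Delta f
for i in range(1, ell + 1):                                   # second constraint: d'_i M_{x*_i}
    assert np.array_equal(d_prime[i].dot(M[x_star[i - 1]]) % p,
                          Delta * f_i[i - 1] % p)             #   equals Delta * f_{i-1}
assert int(f_i[0].dot(u)) % p == 0         # Gamma(x*) = 0, so d'_0 u^T = 0 (third constraint)
```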
Finally, we remark that our ABE for NFA\(^{\oplus _p}\) (as well as the GWW ABE for DFA) can be proven secure in the semi-adaptive model [9], which is weaker than adaptive security but stronger than both the selective and the selective* models used in [3].
Adaptive Security for Restricted NFA\(^{\varvec{\oplus _p}}\) and DFA. Fix a set \(\mathcal {F}\subseteq \mathbb {Z}^Q\). We say that an NFA or an NFA\(^{\oplus _p}\) \({\Gamma }=(\,Q,\varSigma ,\{\mathbf {M}_{\sigma }\}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f}\,)\) is \(\mathcal {F}\)-restricted if
$$ \mathbf {f}\mathbf {M}_{x_\ell }\cdots \mathbf {M}_{x_{i+1}} \in \mathcal {F}, \quad \forall \ell \in \mathbb {N},\, x \in \varSigma ^\ell ,\, i \in [0,\ell ]. $$
Note that \(\mathbf {f}\mathbf {M}_{x^*_\ell }\cdots \mathbf {M}_{x^*_{i+1}}\) corresponding to the challenge attribute \(x^*\) is exactly what is used to define \(\mathbf {d}'_i\) in the previous paragraph. Moreover, following GWW, knowing this quantity is sufficient to prove indistinguishability of \(\mathsf {G}_{i-1}\) and \(\mathsf {G}_i\). This means that to prove selective security for \(\mathcal {F}\)-restricted NFAs, it suffices to know \(\log |\mathcal {F}|\) bits about the challenge attribute, and via the piecewise guessing framework, this yields adaptive security with a security loss of \(|\mathcal {F}|\). Unfortunately, \(|\mathcal {F}|\) is in general exponentially large for general NFAs and DFAs. In particular, DFAs are \(\{0,1\}^Q\)-restricted, and naively applying this argument would yield adaptively secure DFAs with a \(2^Q\) security loss.
Instead, we show how to transform DFAs into \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\), where \(\mathcal {E}_Q \subset \{0,1\}^Q\) is the collection of Q elementary row vectors of dimension Q; this yields adaptively secure ABE for DFAs with a security loss of \(|\mathcal {E}_Q| = Q\). Concretely, our adaptively secure ABE for DFA uses an adaptively secure ABE for \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\), and proceeds as follows:
-
to encrypt \(x = (x_1,\ldots ,x_\ell )\), use the ABE for NFA to encrypt \(x^{\!\scriptscriptstyle {\top }}= (x_\ell ,\ldots ,x_1)\);Footnote 5
-
to generate a secret key for a DFA \({\Gamma }= (Q,\varSigma ,\{\mathbf {M}_\sigma \},\mathbf {u},\mathbf {f})\), use the ABE for NFA to generate a key for \({\Gamma }^{\!\scriptscriptstyle {\top }}=(Q,\varSigma ,\{\mathbf {M}^{\!\scriptscriptstyle {\top }}_\sigma \},\mathbf {f},\mathbf {u})\).
Note that we reversed x during encryption, transposed \(\mathbf {M}_\sigma \) and switched \(\mathbf {u},\mathbf {f}\) during key generation. Correctness essentially follows from the equality
$$ \mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }}= \mathbf {u}\mathbf {M}^{\!\scriptscriptstyle {\top }}_{x^{\!\scriptscriptstyle {\top }}_\ell } \cdots \mathbf {M}^{\!\scriptscriptstyle {\top }}_{x^{\!\scriptscriptstyle {\top }}_1}\mathbf {f}^{\!\scriptscriptstyle {\top }}\quad \text {where } x^{\!\scriptscriptstyle {\top }}=(x_\ell ,\ldots ,x_1). $$
Furthermore, \({\Gamma }^{\!\scriptscriptstyle {\top }}=(Q,\varSigma ,\{\mathbf {M}^{\!\scriptscriptstyle {\top }}_\sigma \},\mathbf {f},\mathbf {u})\) is indeed an \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\). This follows from the fact that for any DFA \({\Gamma }\):
$$ (\mathbf {M}_{x_i}\cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }})^{\!\scriptscriptstyle {\top }}\in \mathcal {E}_Q, \quad \forall \ell \in \mathbb {N},\, x\in \varSigma ^\ell ,\, i\in [0,\ell ], $$
which is implied by the property of DFA: \(\mathbf {u}\in \mathcal {E}_Q\) and each column in every matrix \(\mathbf {M}_\sigma \) contains exactly one 1. We give an example of reversing a DFA in the full paper.
1.3 Discussion
Tracing Executions. Recall that a DFA is specified using a transition function \(\delta : [Q] \times \varSigma \rightarrow [Q]\). A forward computation upon reading \(\sigma \) goes from a state u to \(v = \delta (u,\sigma )\), whereas back-tracking upon reading \(\sigma \) goes from v to u if \(v = \delta (u,\sigma )\).
-
GWW selective ABE for DFA: Decryption follows normal “forward” computation keeping track of whether a state is reachable from the start state, whereas the security proof introduces entropy based on whether a state is reachable from the accept states via “back-tracking”.
-
Our adaptive ABE for DFA and branching programs: Decryption uses back-tracking and keeps track of whether a state is reachable from the accept states, whereas the security proof introduces entropy based on whether a state is reachable from the start state via forward computation. To achieve polynomial security loss, we crucially rely on the fact that when reading i input bits, exactly one state is reachable from the start state via forward computation.
-
Naive and insecure ABE for NFA\(^{\oplus _p}\): Decryption follows normal forward computation keeping track of whether a state is reachable from the start state.
-
Our selective ABE for NFA\(^{\oplus _p}\): Decryption follows normal forward computation keeping track of the number of paths from the start state, whereas the security proof introduces entropy scaled by the number of back-tracking paths from the accept states.
We summarize the discussion in Fig. 5.
ABE for DFA vs Branching Programs. Our work clarifies that the same obstacle (having to guess a large subset of states that are reached upon back-tracking) arose in constructing adaptive ABE for DFA and compact adaptive ABE for branching programs from k-Lin, and presents a new technique that solves both problems simultaneously in the setting of KP-ABE. Furthermore, our results and techniques can carry over to the CP-ABE settings using more-or-less standard (but admittedly non-black-box) arguments, following e.g. [4, Sec. 8] and [6, Sec. 4]. See the full paper for adaptively secure CP-ABE for DFA and branching programs, respectively.
Interestingly, the very recent work of Agrawal et al. [2, 3] shows a related connection: namely that compact and unbounded adaptive KP and CP-ABE for branching programsFootnote 6 –for which they do not provide any instantiations– yields compact adaptive KP-ABE (as well as CP-ABE) for DFA. In particular, just getting to KP-ABE for DFA already requires both KP and CP-ABE for branching programs and also incurs a larger polynomial blow-up in the parameters compared to our constructions; furthermore, simply getting to compact, unbounded, adaptive KP-ABE for branching programs would also require most of the technical machinery used in this work, notably the “nested, two-slot” dual system argument and the piecewise guessing framework. Nonetheless, there is significant conceptual appeal to having a generic and modular transformation that also yields both KP-ABE and CP-ABE schemes. That said, at the core of our constructions and analysis is a very simple combinatorial object sketched in Sect. 1.2. We leave the question of properly formalizing this object and building a generic compiler to full-fledged KP-ABE and CP-ABE schemes to further work; in particular, such a compiler should (i) match or improve upon the concrete efficiency of our schemes, as with prior compilers such as [5, 7], and (ii) properly decouple the combinatorial arguments that are specific to DFA, NFA and branching programs from the computational arguments that are oblivious to the underlying computational model.
Organization. The next section gives some background knowledge. Section 3 shows the transformation from DFA to \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\). We show our selectively secure ABE for NFA\(^{\oplus _p}\) in Sect. 4 and upgrade to adaptive security for \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\) in Sect. 5. The latter implies our adaptively secure ABE for DFA. See the full paper for the concrete description and our basic selectively secure ABE for NFA\(^{\oplus _p}\) from a q-type assumption. We also defer our compact adaptively secure ABE for branching programs to the full paper.
2 Preliminaries
Notation. We denote by \(s \leftarrow S\) the fact that s is picked uniformly at random from a finite set S; by \(U(S)\), we denote the uniform distribution over a finite set S. We use \(\approx _s\) to denote two distributions being statistically indistinguishable, and \(\approx _c\) to denote two distributions being computationally indistinguishable. We use \(\langle \mathcal {A},\mathsf {G}\rangle =1\) to denote that an adversary \(\mathcal {A}\) wins in an interactive game \(\mathsf {G}\). We use lowercase boldface to denote row vectors and uppercase boldface to denote matrices. We use \(\mathbf {e}_i\) to denote the i’th elementary (row) vector (with 1 at the i’th position and 0 elsewhere) and let \(\mathcal {E}_Q\) denote the set of all elementary vectors of dimension Q. For a matrix \(\mathbf {A}\), we use \(\mathsf {span}(\mathbf {A})\) to denote the row span of \(\mathbf {A}\) and use \(\mathsf {basis}(\mathbf {A})\) to denote a basis of the column span of \(\mathbf {A}\). Throughout the paper, we use the prime number p to denote the order of the underlying groups.
2.1 Attribute-Based Encryption
Syntax. An attribute-based encryption (ABE) scheme for some class \(\mathcal {C}\) consists of four algorithms:
-
\(\mathsf {Setup}(1^\lambda ,\mathcal {C})\rightarrow ( \mathsf {mpk}, \textsf {msk})\). The setup algorithm gets as input the security parameter \(1^\lambda \) and class description \(\mathcal {C}\). It outputs the master public key \( \mathsf {mpk}\) and the master secret key \( \textsf {msk}\). We assume \( \mathsf {mpk}\) defines the message space \(\mathcal {M}\).
-
\(\mathsf {Enc}( \mathsf {mpk},x,m)\rightarrow \mathsf {ct}_x\). The encryption algorithm gets as input \( \mathsf {mpk}\), an input x and a message \(m \in \mathcal {M}\). It outputs a ciphertext \( \mathsf {ct}_{x}\). Note that x is public given \( \mathsf {ct}_x\).
-
\(\mathsf {KeyGen}( \mathsf {mpk}, \textsf {msk},{\Gamma })\rightarrow \mathsf {sk}_{{\Gamma }}\). The key generation algorithm gets as input \( \mathsf {mpk}\), \( \textsf {msk}\) and \({\Gamma }\in \mathcal {C}\). It outputs a secret key \( \mathsf {sk}_{{\Gamma }}\). Note that \({\Gamma }\) is public given \( \mathsf {sk}_{\Gamma }\).
-
\(\mathsf {Dec}( \mathsf {mpk}, \mathsf {sk}_{{\Gamma }}, \mathsf {ct}_x ) \rightarrow m\). The decryption algorithm gets as input \( \mathsf {sk}_{\Gamma }\) and \( \mathsf {ct}_x\) such that \({\Gamma }(x)=1\) along with \( \mathsf {mpk}\). It outputs a message m.
Correctness. For all inputs x and \({\Gamma }\) with \({\Gamma }(x)=1\) and all \(m\in \mathcal {M}\), we require
$$ \Pr \left[ \mathsf {Dec}( \mathsf {mpk}, \mathsf {sk}_{\Gamma }, \mathsf {ct}_x)=m \;:\; \begin{array}{l} ( \mathsf {mpk}, \textsf {msk})\leftarrow \mathsf {Setup}(1^\lambda ,\mathcal {C}),\\ \mathsf {sk}_{\Gamma }\leftarrow \mathsf {KeyGen}( \mathsf {mpk}, \textsf {msk},{\Gamma }),\\ \mathsf {ct}_x\leftarrow \mathsf {Enc}( \mathsf {mpk},x,m) \end{array} \right] = 1. $$
Security Definition. For a stateful adversary \(\mathcal {A}\), we define the advantage function
$$ \mathsf {Adv}^{\textsc {abe}}_{\mathcal {A}}(\lambda ) := \Pr \left[ b=b' \;:\; \begin{array}{l} ( \mathsf {mpk}, \textsf {msk})\leftarrow \mathsf {Setup}(1^\lambda ,\mathcal {C}),\\ (x^*,m_0,m_1)\leftarrow \mathcal {A}^{\mathsf {KeyGen}( \mathsf {mpk}, \textsf {msk},\cdot )}( \mathsf {mpk}),\\ b\leftarrow \{0,1\};\; \mathsf {ct}_{x^*}\leftarrow \mathsf {Enc}( \mathsf {mpk},x^*,m_b),\\ b'\leftarrow \mathcal {A}^{\mathsf {KeyGen}( \mathsf {mpk}, \textsf {msk},\cdot )}( \mathsf {ct}_{x^*}) \end{array} \right] - \frac{1}{2} $$
with the restriction that all queries \({\Gamma }\) that \(\mathcal {A}\) sends to \(\mathsf {KeyGen}( \mathsf {mpk}, \textsf {msk},\cdot )\) satisfy \({\Gamma }(x^*) = 0\). An ABE scheme is adaptively secure if for all PPT adversaries \(\mathcal {A}\), the advantage \(\mathsf {Adv}^{\textsc {abe}}_{\mathcal {A}}(\lambda )\) is a negligible function in \(\lambda \). Selective security is defined analogously except that the adversary \(\mathcal {A}\) selects \(x^*\) before seeing \( \mathsf {mpk}\). A notion between selective and adaptive security is the so-called semi-adaptive security [9], where the adversary \(\mathcal {A}\) is allowed to select \(x^*\) after seeing \( \mathsf {mpk}\) but before making any queries.
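The experiment behind the advantage function can be sketched as follows (schematic only; the abe and adversary interfaces below are placeholders and not part of the formal definition):

```python
import random

def adaptive_game(abe, adversary, lam, C):
    # Schematic restatement of the adaptive-security experiment.
    mpk, msk = abe.setup(lam, C)
    queried = []
    def keygen(gamma):                       # adaptive key-generation oracle
        queried.append(gamma)
        return abe.keygen(mpk, msk, gamma)
    x_star, m0, m1 = adversary.challenge(mpk, keygen)
    b = random.randrange(2)
    ct = abe.enc(mpk, x_star, (m0, m1)[b])
    b_guess = adversary.guess(ct, keygen)    # further key queries are allowed
    assert all(gamma(x_star) == 0 for gamma in queried)
    return b_guess == b                      # Adv = |Pr[return True] - 1/2|
```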
2.2 Prime-Order Groups
A generator \(\mathcal {G}\) takes as input a security parameter \(1^\lambda \) and outputs a description \(\mathbb {G}:= (p,G_1,G_2,G_T,e)\), where p is a prime of \(\varTheta (\lambda )\) bits, \(G_1\), \(G_2\) and \(G_T\) are cyclic groups of order p, and \(e : G_1 \times G_2 \rightarrow G_T\) is a non-degenerate bilinear map. We require that the group operations in \(G_1\), \(G_2\), \(G_T\) and the bilinear map e are computable in deterministic polynomial time in \(\lambda \). Let \(g_1 \in G_1\), \(g_2 \in G_2\) and \(g_T = e(g_1,g_2) \in G_T\) be the respective generators. We employ the implicit representation of group elements: for a matrix \(\mathbf {M}\) over \(\mathbb {Z}_p\), we define \([\mathbf {M}]_1:=g_1^{\mathbf {M}},[\mathbf {M}]_2:=g_2^{\mathbf {M}},[\mathbf {M}]_T:=g_T^{\mathbf {M}}\), where exponentiation is carried out component-wise. Also, given \([\mathbf {A}]_1,[\mathbf {B}]_2\), we let \(e([\mathbf {A}]_1,[\mathbf {B}]_2) = [\mathbf {A}\mathbf {B}]_T\). We recall the matrix Diffie-Hellman (MDDH) assumption on \(G_1\) [10]:
Assumption 1 (MDDH\(^{d}_{k,k'}\) Assumption). Let \(k' > k \ge 1\) and \(d \ge 1\). We say that the \(\text {MDDH}^{d}_{k,k'}\) assumption holds if for all PPT adversaries \(\mathcal {A}\), the following advantage function is negligible in \(\lambda \):
$$ \mathsf {Adv}^{\mathrm {MDDH}^{d}_{k,k'}}_{\mathcal {A}}(\lambda ) := \big |\,\Pr [\mathcal {A}(\mathbb {G},[\mathbf {M}]_1,[\mathbf {M}\mathbf {S}]_1)=1] - \Pr [\mathcal {A}(\mathbb {G},[\mathbf {M}]_1,[\mathbf {U}]_1)=1]\,\big | $$
where \(\mathbb {G}:= (p,G_1,G_2,G_T,e) \leftarrow \mathcal {G}(1^\lambda )\), \(\mathbf {M}\leftarrow \mathbb {Z}_p^{k' \times k}\), \(\mathbf {S}\leftarrow \mathbb {Z}_p^{k \times d}\) and \(\mathbf {U}\leftarrow \mathbb {Z}_p^{k' \times d}\).
The MDDH assumption on \(G_2\) can be defined in an analogous way. Escala et al. [10] showed that the k-Lin assumption implies \(\text {MDDH}^{d}_{k,k'}\) with a tight security reduction. We will use \(\mathsf {Adv}^{k\text {-}\textsc {lin}}_{\mathcal {B}}(\lambda )\) to denote the advantage function w.r.t. the k-Lin assumption.
3 DFA, NFA, and Their Relationships
Let p be a global parameter and \(\mathcal {E}_Q=\{\mathbf {e}_1,\ldots ,\mathbf {e}_Q\}\) be the set of all elementary row vectors of dimension Q. This section describes various notions of DFA and NFA and studies their relationships.
Finite Automata. We use \({\Gamma }=(\,Q,\varSigma ,\{\mathbf {M}_{\sigma }\}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f}\,)\) to describe deterministic finite automata (DFA for short), nondeterministic finite automata (NFA for short), p-bounded NFA (NFA\(^{<p}\) for short) and mod-p NFA (NFA\(^{\oplus _p}\) for short), where \(Q \in \mathbb {N}\) is the number of states, vectors \(\mathbf {u},\mathbf {f}\in \{0,1\}^{1 \times Q}\) describe the start and accept states, and a collection of matrices \(\mathbf {M}_\sigma \in \{0,1\}^{Q \times Q}\) describes the transition function. Let \(x=(x_1,\ldots ,x_\ell )\) denote an input; then,
-
for DFA \({\Gamma }\), we have \(\mathbf {u}\in \mathcal {E}_Q\), each column in every matrix \(\mathbf {M}_\sigma \) is an elementary column vector (i.e., contains exactly one 1) and
$${\Gamma }(x)=1 \iff \mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }}=1;$$
-
for NFA \({\Gamma }\), we have
$${\Gamma }(x)=1\iff \mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }}> 0;$$
-
for NFA\(^{<p}\) \({\Gamma }\), we have \(\mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }}<p\) and
$${\Gamma }(x)=1\iff \mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }}> 0;$$
-
for NFA\(^{\oplus _p}\) \({\Gamma }\), we have
$${\Gamma }(x)=1\iff \mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }}\ne 0 \bmod p.$$
We immediately have: DFA \(\subset \) NFA\(^{<p}\) \(\subset \) NFA \(\cap \) NFA\(^{\oplus _p}\).
\(\varvec{\mathcal {E}_Q}\)-Restricted NFA\(^{\varvec{\oplus _p}}\). We introduce the notion of \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\), which is an NFA\(^{\oplus _p}\) \({\Gamma }=(\,Q,\varSigma ,\{\mathbf {M}_{\sigma }\}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f}\,)\) with an additional property: for all \(\ell \in \mathbb {N}\) and all \(x\in \varSigma ^\ell \), it holds that
$$ \mathbf {f}\mathbf {M}_{x_\ell }\cdots \mathbf {M}_{x_{i+1}} \in \mathcal {E}_Q, \quad \forall i \in [0,\ell ]. $$
Here \(\mathbf {M}_{x_\ell }\cdots \mathbf {M}_{x_{i+1}}\) for \(i=\ell \) refers to \(\mathbf {I}\) of size \(Q \times Q\).
Transforming DFA to \(\varvec{\mathcal {E}_Q}\)-Restricted NFA\(^{\varvec{\oplus _p}}\). In general, a DFA is not necessarily an \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\). The next lemma says that we can nonetheless transform any DFA into an \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\):
Lemma 1
(DFA to \(\varvec{\mathcal {E}_Q}\)-restricted NFA\(^{\oplus _p}\)). For each DFA \({\Gamma }=(\,Q,\varSigma ,\{\mathbf {M}_{\sigma }\}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f}\,)\), the NFA\(^{\oplus _p}\) \({\Gamma }^{\!\scriptscriptstyle {\top }}=(Q,\varSigma ,\{\mathbf {M}^{\!\scriptscriptstyle {\top }}_{\sigma }\}_{\sigma \in \varSigma },\mathbf {f},\mathbf {u})\) satisfies:
-
1.
\({\Gamma }^{\!\scriptscriptstyle {\top }}\) is \(\mathcal {E}_Q\)-restricted;
-
2.
for all \(\ell \in \mathbb {N}\) and \(x=(x_1,\ldots ,x_\ell )\in \varSigma ^\ell \), it holds that
$$\begin{aligned} {\Gamma }(x) = 1 \iff {\Gamma }^{\!\scriptscriptstyle {\top }}(x^{\!\scriptscriptstyle {\top }})=1 \quad \text { where }\,x^{\!\scriptscriptstyle {\top }}=(x_\ell ,\ldots ,x_1)\in \varSigma ^\ell . \end{aligned}$$(7)
Proof
Recall that the definition of DFA implies two properties:
$$\begin{aligned} \mathbf {f}\in \{0,1\}^{1\times Q}, \end{aligned}$$(8)
$$\begin{aligned} (\mathbf {M}_{x_i}\cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }})^{\!\scriptscriptstyle {\top }}\in \mathcal {E}_Q, \quad \forall \ell \in \mathbb {N},\, x\in \varSigma ^\ell ,\, i\in [0,\ell ]. \end{aligned}$$(9)
Property (9) comes from the facts that \(\mathbf {u}\in \mathcal {E}_Q\) and each column in every matrix \(\mathbf {M}_\sigma \) is an elementary column vector.
We parse \(x^{\!\scriptscriptstyle {\top }}=(x^{\!\scriptscriptstyle {\top }}_1,\ldots ,x^{\!\scriptscriptstyle {\top }}_\ell )\) and prove the two parts of the lemma as below.
-
1.
\({\Gamma }^{\!\scriptscriptstyle {\top }}\) is \(\mathcal {E}_Q\)-restricted since we have
$$ \mathbf {u}\mathbf {M}^{\!\scriptscriptstyle {\top }}_{x^{\!\scriptscriptstyle {\top }}_\ell }\cdots \mathbf {M}^{\!\scriptscriptstyle {\top }}_{x^{\!\scriptscriptstyle {\top }}_{i+1}} =(\mathbf {M}_{x_{\ell -i}}\cdots \mathbf {M}_{x_{1}}\mathbf {u}^{\!\scriptscriptstyle {\top }})^{\!\scriptscriptstyle {\top }}\in \mathcal {E}_Q,\quad \forall i\in [0,\ell ] $$
where the equality is implied by the structure of \({\Gamma }^{\!\scriptscriptstyle {\top }},x^{\!\scriptscriptstyle {\top }}\) and we use property (9).
-
2.
To prove (7), we rely on the fact
$$\begin{aligned} {\Gamma }(x) = 1\iff & {} \mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_1} \mathbf {u}^{\!\scriptscriptstyle {\top }}= 1 \\\iff & {} \mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_1} \mathbf {u}^{\!\scriptscriptstyle {\top }}\ne 0 \bmod p \\\iff & {} \mathbf {u}\mathbf {M}_{x^{\!\scriptscriptstyle {\top }}_\ell }^{\!\scriptscriptstyle {\top }}\cdots \mathbf {M}_{x^{\!\scriptscriptstyle {\top }}_1}^{\!\scriptscriptstyle {\top }}\mathbf {f}^{\!\scriptscriptstyle {\top }}\ne 0 \bmod p \\\iff & {} {\Gamma }^{\!\scriptscriptstyle {\top }}(x^{\!\scriptscriptstyle {\top }})=1. \end{aligned}$$
The second \(\iff \) follows from the fact that \(\mathbf {f}\mathbf {M}_{x_\ell } \cdots \mathbf {M}_{x_1} \mathbf {u}^{\!\scriptscriptstyle {\top }}\in \{0,1\}\) which is implied by property (8) and (9) while the third \(\iff \) is implied by the structure of \({\Gamma }^{\!\scriptscriptstyle {\top }},x^{\!\scriptscriptstyle {\top }}\). \(\square \)
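As a sanity check of Lemma 1, the following sketch verifies both items on a hypothetical 2-state DFA (the matrices, alphabet and test inputs are illustrative choices, not taken from the paper):

```python
import numpy as np

p = 101
# Hypothetical 2-state DFA over {'a','b'}: each column of M_sigma is elementary.
M = {'a': np.array([[0, 0], [1, 1]]),
     'b': np.array([[1, 1], [0, 0]])}
u = np.array([1, 0])                       # start state, an elementary vector
f = np.array([0, 1])                       # accept states

E_Q = {(1, 0), (0, 1)}                     # elementary row vectors of dimension Q = 2

def gamma(x):                              # Gamma(x) = 1 iff f M_{x_l} ... M_{x_1} u^T = 1
    vec = u.copy()
    for sym in x:
        vec = M[sym].dot(vec)
    return int(f.dot(vec)) == 1

def gamma_T(y):                            # Gamma^T: transposed matrices, u and f swapped
    vec = f.copy()
    for sym in y:
        vec = M[sym].T.dot(vec)
    return int(u.dot(vec)) % p != 0        # NFA^{xor_p} acceptance

for w in ['', 'a', 'b', 'ab', 'ba', 'aab', 'abba']:
    x = list(w)
    assert gamma(x) == gamma_T(list(reversed(x)))     # Lemma 1, item 2
    # Lemma 1, item 1: the forward products (M_{x_i}...M_{x_1} u^T)^T stay elementary,
    # which is exactly the E_Q-restriction of Gamma^T.
    vec = u.copy()
    assert tuple(int(v) for v in vec) in E_Q
    for sym in x:
        vec = M[sym].dot(vec)
        assert tuple(int(v) for v in vec) in E_Q
```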
4 Semi-adaptively Secure ABE for NFA\(^{\oplus _p}\)
In this section, we present our ABE for NFA\(^{\oplus _p}\) in prime-order groups. The scheme achieves semi-adaptive security under the k-Lin assumption. Our construction is based on the GWW ABE for DFA [11], along with an extension of the key structure and decryption to NFA; the security proof follows that of GWW with our novel combinatorial arguments regarding our NFA extension. (See Sect. 1.2 for an overview.) We remark that our scheme and proof work well for a more general form of NFA\(^{\oplus _p}\) where \(\mathbf {u},\mathbf {f},\mathbf {M}_\sigma \) are over \(\mathbb {Z}_p\) instead of \(\{0,1\}\).
4.1 Basis
We will use the same basis as GWW [11]:
$$\begin{aligned} \mathbf {A}_1 \leftarrow \mathbb {Z}_p^{k \times (2k+1)},\quad \mathbf {a}_2 \leftarrow \mathbb {Z}_p^{1 \times (2k+1)},\quad \mathbf {A}_3 \leftarrow \mathbb {Z}_p^{k \times (2k+1)} \end{aligned}$$(10)
and use \((\mathbf {A}^{\!\scriptscriptstyle {\Vert }}_1 \mid \mathbf {a}^{\!\scriptscriptstyle {\Vert }}_2\mid \mathbf {A}^{\!\scriptscriptstyle {\Vert }}_3)\) to denote the dual basis so that \(\mathbf {A}_i \mathbf {A}^{\!\scriptscriptstyle {\Vert }}_i = \mathbf {I}\) (known as non-degeneracy) and \(\mathbf {A}_i \mathbf {A}^{\!\scriptscriptstyle {\Vert }}_j = \mathbf {0}\) if \(i \ne j\) (known as orthogonality). For notational convenience, we always consider \(\mathbf {a}^{\!\scriptscriptstyle {\Vert }}_2\) as a column vector. We review the \(\text {SD}^{G_1}_{{\mathbf {A}_1} \mapsto {\mathbf {A}_1},{\mathbf {A}_3}}\) and \(\text {DDH}^{G_2}_{d,Q}\) assumptions from [8], which are parameterized for the basis (10) and tightly implied by the k-Lin assumption. By symmetry, we may permute the indices for \(\mathbf {A}_1,\mathbf {a}_2,\mathbf {A}_3\).
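The basis and its dual can be illustrated numerically as follows (a sketch with toy parameters; it samples a random full-rank basis over \(\mathbb {Z}_p\), which matches the real sampling only up to a negligible failure probability, and also checks the decomposition of a matrix \(\mathbf {W}\) into its \(\mathbf {A}_1\)-, \(\mathbf {a}_2\)- and \(\mathbf {A}_3\)-components used later in Sect. 4.3):

```python
from sympy import randMatrix, eye, zeros

p, k = 101, 2
n = 2 * k + 1
modp = lambda M: M.applyfunc(lambda x: x % p)

# Sample a random full-rank basis over Z_p and stack it as rows: (A_1; a_2; A_3).
while True:
    A = randMatrix(n, n, 0, p - 1)
    if A.det() % p != 0:
        break
A1, a2, A3 = A[:k, :], A[k:k + 1, :], A[k + 1:, :]

# Dual basis: the columns of A^{-1} mod p, split the same way.
Ainv = A.inv_mod(p)
A1d, a2d, A3d = Ainv[:, :k], Ainv[:, k:k + 1], Ainv[:, k + 1:]

# Non-degeneracy and orthogonality.
assert modp(A1 * A1d) == eye(k) and modp(A3 * A3d) == eye(k)
assert modp(A1 * a2d) == zeros(k, 1) and modp(A1 * A3d) == zeros(k, k)

# Any W decomposes as W = A_1^||(A_1 W) + a_2^||(a_2 W) + A_3^||(A_3 W) mod p,
# so mpk, ct and sk can be rebuilt from their a_2-components plus the other projections.
W = randMatrix(n, k, 0, p - 1)
assert modp(A1d * (A1 * W) + a2d * (a2 * W) + A3d * (A3 * W)) == W
```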
Lemma 2
(\(\mathbf {MDDH}_{k,2k}\Rightarrow {\mathbf {SD}^{G_1}_{{\mathbf {A}_1}\mapsto {\mathbf {A}_1},{\mathbf {A}_3}}}\) [8]). Under the \(\text {MDDH}_{k,2k}\) assumption in \(G_1\), there exists an efficient sampler outputting random \(([\mathbf {A}_1]_1,[\mathbf {a}_2]_1,[\mathbf {A}_3]_1)\) along with bases \(\mathsf {basis}(\mathbf {A}^{\!\scriptscriptstyle {\Vert }}_1)\), \(\mathsf {basis}(\mathbf {a}^{\!\scriptscriptstyle {\Vert }}_2)\), \(\mathsf {basis}(\mathbf {A}^{\!\scriptscriptstyle {\Vert }}_1,\mathbf {A}_3^{\!\scriptscriptstyle {\Vert }})\) (of arbitrary choice) such that the following advantage function is negligible in \(\lambda \).
where
More concretely, we have, for all \(\mathcal {A}\), there exists \(\mathcal {B}\) with \(\mathsf {Time}(\mathcal {B})\approx \mathsf {Time}(\mathcal {A})\) such that .
Lemma 3
(\(\mathbf {MDDH}^{d}_{k,k+d}\Rightarrow \mathbf {DDH}^{G_2}_{d,Q}\) [8]). Let \(d,Q \in \mathbb {N}\). Under the \(\text {MDDH}^{d}_{k,k+d}\) assumption in \(G_2\), the following advantage function is negligible in \(\lambda \).
where \(\mathbf {W}\leftarrow \mathbb {Z}_p^{d \times k}\), \(\mathbf {B}\leftarrow \mathbb {Z}_p^{k \times k}\), \(\mathbf {R}\leftarrow \mathbb {Z}_p^{k \times Q}\) and \(\mathbf {U}\leftarrow \mathbb {Z}_p^{d \times Q}\). More concretely, we have, for all \(\mathcal {A}\), there exists \(\mathcal {B}\) with \(\mathsf {Time}(\mathcal {B})\approx \mathsf {Time}(\mathcal {A})\) such that .
Lemma 4
(statistical lemma [8]). With probability \(1-1/p\) over \(\mathbf {A}_1,\mathbf {a}_2,\mathbf {A}_3,\mathbf {A}^{\!\scriptscriptstyle {\Vert }}_1,\mathbf {a}^{\!\scriptscriptstyle {\Vert }}_2,\mathbf {A}^{\!\scriptscriptstyle {\Vert }}_3\), the following two distributions are statistically identical:
$$ \{\,\mathbf {A}_1\mathbf {W},\ \mathbf {a}_2\mathbf {W},\ \mathbf {A}_3\mathbf {W}\,\} \quad \text {and}\quad \{\,\mathbf {A}_1\mathbf {W},\ \mathbf {w},\ \mathbf {A}_3\mathbf {W}\,\} $$
where \(\mathbf {W}\leftarrow \mathbb {Z}_p^{(2k+1) \times k}\) and \(\mathbf {w}\leftarrow \mathbb {Z}_p^{1 \times k}\).
4.2 Scheme
Our ABE for NFA\(^{\oplus _p}\) in prime-order groups is described as follows:
-
\(\mathsf {Setup}(1^\lambda ,\varSigma ):\) Run \(\mathbb {G}= (p,G_1,G_2,G_T,e) \leftarrow \mathcal {G}(1^\lambda )\). Sample
$$ \mathbf {A}_1 \leftarrow \mathbb {Z}_p^{k \times (2k+1)},\mathbf {k}\leftarrow \mathbb {Z}_p^{1 \times (2k+1)},\, {\mathbf {W}_\text {start}},\,\mathbf {Z}_b,\,\mathbf {W}_{\sigma ,b},\,{\mathbf {W}_\text {end}}\leftarrow \mathbb {Z}_p^{(2k+1) \times k} $$
for all \(\sigma \in \varSigma \) and \(b\in \{0,1\}\). Output
$$\begin{aligned} \mathsf {mpk}&= \big (\,[\,\mathbf {A}_1,\,\mathbf {A}_1{\mathbf {W}_\text {start}},\,\{\,\mathbf {A}_1\mathbf {Z}_b,\,\mathbf {A}_1\mathbf {W}_{\sigma ,b}\,\}_{\sigma \in \varSigma ,b\in \{0,1\}},\,\mathbf {A}_1{\mathbf {W}_\text {end}}\,]_1,\,[\mathbf {A}_1\mathbf {k}^{\!\scriptscriptstyle {\top }}]_T\,\big ) \\ \textsf {msk}&= \big (\,\mathbf {k},\,{\mathbf {W}_\text {start}},\,\{\,\mathbf {Z}_b,\,\mathbf {W}_{\sigma ,b}\,\}_{\sigma \in \varSigma ,b\in \{0,1\}},\,{\mathbf {W}_\text {end}}\,\big ). \end{aligned}$$ -
\(\mathsf {Enc}( \mathsf {mpk},x,m):\) Let \(x = (x_1,\ldots ,x_\ell ) \in \varSigma ^\ell \) and \(m \in G_T\). Pick \( \mathbf {s}_0,\mathbf {s}_1,\ldots ,\mathbf {s}_\ell \leftarrow \mathbb {Z}_p^{1\times k} \) and output
-
\(\mathsf {KeyGen}( \mathsf {mpk}, \textsf {msk},{\Gamma }):\) Let \({\Gamma }=(\,Q,\varSigma ,\{\mathbf {M}_{\sigma }\}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f}\,)\). Pick \(\mathbf {D}\leftarrow \mathbb {Z}_p^{(2k+1)\times Q}\), \(\mathbf {R}\leftarrow \mathbb {Z}_p^{k \times Q}\) and output
$$ \mathsf {sk}_{\Gamma }= \begin{pmatrix} [\mathbf {D}\mathbf {u}^{\!\scriptscriptstyle {\top }}+ {\mathbf {W}_\text {start}}\mathbf {R}\mathbf {u}^{\!\scriptscriptstyle {\top }}]_2,[\mathbf {R}\mathbf {u}^{\!\scriptscriptstyle {\top }}]_2\\ \big \{ [-\mathbf {D}+ \mathbf {Z}_b\mathbf {R}]_2,[\mathbf {D}\mathbf {M}_\sigma + \mathbf {W}_{\sigma ,b}\mathbf {R}]_2,[\mathbf {R}]_2\big \}_{\sigma \in \varSigma ,b\in \{0,1\}}\\ [ \mathbf {k}^{\!\scriptscriptstyle {\top }}\mathbf {f}- \mathbf {D}+ {\mathbf {W}_\text {end}}\mathbf {R}]_2,[\mathbf {R}]_2 \end{pmatrix}. $$ -
\(\mathsf {Dec}( \mathsf {mpk}, \mathsf {sk}_{\Gamma }, \mathsf {ct}_x):\) Parse ciphertext for \(x= (x_1,\ldots ,x_\ell )\) and key for \({\Gamma }= (\,Q,\varSigma ,\{\mathbf {M}_\sigma \}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f})\) as:
We define
$$\begin{aligned} \mathbf {u}_{j,x}^{\!\scriptscriptstyle {\top }}= \mathbf {M}_{x_j}\cdots \mathbf {M}_{x_1}\mathbf {u}^{\!\scriptscriptstyle {\top }}\bmod p,\ \forall j\in [0,\ell ] \end{aligned}$$(11)
and proceed as follows:
-
1.
Compute
$$\begin{aligned} B_0 = e([\mathbf {c}_{0,1}]_1,[\mathbf {k}_{0}^{\!\scriptscriptstyle {\top }}]_2) \cdot e([\mathbf {c}_{0,2}]_1,[\mathbf {r}_{0}^{\!\scriptscriptstyle {\top }}]_2)^{-1}; \end{aligned}$$ -
2.
For all \(j \in [\ell ]\), compute
$$ [\mathbf {b}_{j}]_T = e([\mathbf {c}_{j-1,1}]_1,[\mathbf {K}_{j \,{\bmod }\, 2}]_2) \cdot e([\mathbf {c}_{j,1}]_1,[\mathbf {K}_{x_j,j \,{\bmod }\, 2}]_2) \cdot e([-\mathbf {c}_{j,2}]_1,[\mathbf {R}]_2)$$$$ \quad \text{ and }\quad B_j = [\mathbf {b}_j\mathbf {u}_{j-1,x}^{\!\scriptscriptstyle {\top }}]_T; $$ -
3.
Compute
$$ [\mathbf {b}_\text {end}]_T = e([\mathbf {c}_{\ell ,1}]_1,[\mathbf {K}_{\text {end}}]_2) \cdot e([-\mathbf {c}_{\text {end}}]_1,[\mathbf {R}]_2) \quad \text{ and }\quad B_\text {end} = [\mathbf {b}_\text {end}\mathbf {u}_{\ell ,x}^{\!\scriptscriptstyle {\top }}]_T;$$ -
4.
Compute
$$\begin{aligned} \textstyle B_\text {all} = B_0\cdot \prod _{j=1}^\ell B_{j} \cdot B_\text {end} \quad \text{ and }\quad B = B_\text {all}^{(\mathbf {f}\mathbf {u}_{\ell ,x}^{\!\scriptscriptstyle {\top }})^{-1}} \end{aligned}$$
and output the message \(m' \leftarrow C \cdot B^{-1}\).
Correctness. For \(x = (x_1,\ldots ,x_\ell )\) and \({\Gamma }=(\,Q,\varSigma ,\{\mathbf {M}_\sigma \}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f})\) such that \({\Gamma }(x)=1\), we have:
Here (16) is trivial; (14) and (18) follow from
by the definition in (11); the remaining equalities follow from [7]. More details can be found in the full paper.
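A minimal sketch of the exponent bookkeeping behind steps 1–4 (toy prime and a hypothetical automaton, not part of the scheme): it computes \(\mathbf {u}_{j,x}\) as in (11) and the exponent \((\mathbf {f}\mathbf {u}_{\ell ,x}^{\!\scriptscriptstyle {\top }})^{-1} \bmod p\) used to form B, which exists exactly when \({\Gamma }(x)=1\) for an NFA\(^{\oplus _p}\):

```python
import numpy as np

p = 101
M = {'a': np.array([[1, 0], [1, 1]]), 'b': np.array([[0, 1], [1, 0]])}
u = np.array([1, 0]); f = np.array([0, 1])
x = ['a', 'b', 'a']

u_jx = [u % p]
for sym in x:                                # u_{j,x}^T = M_{x_j} ... M_{x_1} u^T mod p, cf. (11)
    u_jx.append(M[sym].dot(u_jx[-1]) % p)

acc = int(f.dot(u_jx[-1])) % p               # f u_{l,x}^T mod p
assert acc != 0                              # Gamma(x) = 1 for this NFA^{xor_p}
exp_inv = pow(acc, -1, p)                    # exponent used in B = B_all^{(f u_{l,x}^T)^{-1}}
assert acc * exp_inv % p == 1
```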
Security. We have the following theorem stating that our construction is selectively secure. We remark that our construction achieves semi-adaptive security as is and the proof is almost the same.
Theorem 1
(Selectively secure ABE for \(\mathbf {NFA}^{\oplus _p}\)). The ABE scheme for NFA\(^{\oplus _p}\) in prime-order bilinear groups described above is selectively secure (cf. Sect. 2.1) under the k-Lin assumption with security loss \(O(\ell \cdot |\varSigma |)\). Here \(\ell \) is the length of the challenge input \(x^*\).
4.3 Game Sequence
The proof is analogous to GWW’s proof. We show the proof in the one-key setting where the adversary asks for at most one secret key; this is sufficient to motivate the proof in the next section. As in [11], it is straightforward to handle many keys, see the full paper for more details. Let \(x^* \in \varSigma ^\ell \) denote the selective challenge and let \(\bar{\ell }= \ell \bmod 2\). Without loss of generality, we assume \(\ell > 1\). We begin with some auxiliary distributions.
Auxiliary Distributions. We describe the auxiliary ciphertext and key distributions that we use in the proof. Throughout, the distributions are the same as the original distributions except for the so-called \(\mathbf {a}_2\)-components, which are defined below.
\(\mathbf {a}_2\)-Components. For a ciphertext in the following form, capturing real and all auxiliary ciphertexts (defined below):
where \(\mathbf {c}_j = \mathbf {s}_j \mathbf {A}_1 + s_j \mathbf {a}_2 + \tilde{\mathbf {s}}_j \mathbf {A}_3\) with \(\mathbf {s}_j,\tilde{\mathbf {s}}_j \in \mathbb {Z}_p^k\) and \(s_j \in \mathbb {Z}_p\), we define its \(\mathbf {a}_2\)-components, denoted by \( \mathsf {ct}_x[2]\), as follows:
For a key in the following form, capturing real and all auxiliary keys (defined below):
where \(\mathbf {k}_0 \in \mathbb {Z}_p^{1\times (2k+1)}\), \(\mathbf {K}_b,\mathbf {K}_{\sigma ,b},\mathbf {K}_\text {end}\in \mathbb {Z}_p^{(2k+1)\times Q}\) and \(\mathbf {r}_0 \in \mathbb {Z}_p^{1\times k}, \mathbf {R}\in \mathbb {Z}_p^{k \times Q}\), we define its \(\mathbf {a}_2\)-components, denoted by \( \mathsf {sk}_{\Gamma }[2]\), as follows:
To simplify the notation for \( \mathsf {ct}_x[2]\) and \( \mathsf {sk}_{\Gamma }[2]\) involving \(\mathbf {k},\mathbf {D},{\mathbf {W}_\text {start}},{\mathbf {W}_\text {end}},\mathbf {Z}_b,\mathbf {W}_{\sigma ,b}\), we write
and call them the \(\mathbf {a}_2\)-components of \(\mathbf {k}^{\!\scriptscriptstyle {\top }},\mathbf {D},{\mathbf {W}_\text {start}},{\mathbf {W}_\text {end}},\mathbf {Z}_b,\mathbf {W}_{\sigma ,b}\), respectively. We also omit zeroes and adjust the order of terms in \( \mathsf {ct}_x[2]\). Furthermore, for all \(\mathbf {A}_1,\mathbf {a}_2,\mathbf {A}_3\), \( \mathsf {mpk}\) and various forms of \( \mathsf {ct}_x, \mathsf {sk}_{\Gamma }\) we will use in the proof, we have
where \(\widetilde{\mathbf {k}} \leftarrow \mathbb {Z}_p^{1\times (2k+1)}, \widetilde{\mathbf {D}}\leftarrow \mathbb {Z}_p^{(2k+1)\times Q}, \widetilde{\mathbf {W}}_\text {start}, \widetilde{\mathbf {W}}_\text {end}, \widetilde{\mathbf {Z}}_b, \widetilde{\mathbf {W}}_{\sigma ,b}\leftarrow \mathbb {Z}_p^{(2k+1)\times k}\) are fresh. This follows from Lemma 4 and the fact that all matrices \(\mathbf {W}\in \mathbb {Z}_p^{(2k+1) \times k'}\) with \(k' \in \mathbb {N}\) can be decomposed as
The property allows us to simulate \( \mathsf {mpk}, \mathsf {ct}_x, \mathsf {sk}_{\Gamma }\) from \( \mathsf {ct}_x[2], \mathsf {sk}_{\Gamma }[2]\) and \(\mathbf {A}_1,\mathbf {a}_2,\mathbf {A}_3\) so that we can focus on the crucial argument over \(\mathbf {a}_2\)-components in the proofs, e.g., those in Sects. 4.4, 4.5 and 4.6.
Ciphertext Distributions. We sample \(s_0,s_1,\ldots ,s_\ell \leftarrow \mathbb {Z}_p\) and define:
-
for \(i\in [0,\ell ]\): \( \mathsf {ct}^i_{x^*}\) is the same as \( \mathsf {ct}_{x^*}\) except we replace \(\mathbf {s}_i\mathbf {A}_1\) with \(\mathbf {s}_i\mathbf {A}_1 + s_i \mathbf {a}_2\);
-
for \(i\in [\ell ]\): \( \mathsf {ct}^{i-1,i}_{x^*}\) is the same as \( \mathsf {ct}_{x^*}\) except we replace \(\mathbf {s}_{i-1}\mathbf {A}_1,\mathbf {s}_i\mathbf {A}_1\) with \(\mathbf {s}_{i-1}\mathbf {A}_1 + s_{i-1} \mathbf {a}_2,\mathbf {s}_i\mathbf {A}_1 + s_i \mathbf {a}_2\).
That is, we have: writing \(\tau = i \bmod 2\),
They are exactly the same as those used in GWW’s proof [11].
Secret Key Distributions. Given \(x^* \in \varSigma ^\ell \) and \({\Gamma }= (\,Q,\varSigma ,\{\mathbf {M}_\sigma \}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f})\), we define
$$\begin{aligned} \mathbf {f}_{i,x^*} = \mathbf {f}\mathbf {M}_{x^*_\ell }\cdots \mathbf {M}_{x^*_{i+1}} \bmod p, \quad \forall i \in [0,\ell ]. \end{aligned}$$(22)
For all \(i\in [\ell ]\), we sample \(\varDelta \leftarrow \mathbb {Z}_p\) and define:
-
\( \mathsf {sk}_{{\Gamma }}^{0}\) is the same as \( \mathsf {sk}_{\Gamma }\) except we replace \(\mathbf {D}\) with \(\mathbf {D}+\mathbf {a}^{\!\scriptscriptstyle {\Vert }}_2 \cdot s_0^{-1} \varDelta \cdot \mathbf {f}_{0,x^*}\) in the term \([\mathbf {D}\mathbf {u}^{\!\scriptscriptstyle {\top }}+ {\mathbf {W}_\text {start}}\mathbf {R}\mathbf {u}^{\!\scriptscriptstyle {\top }}]_2\);
-
\( \mathsf {sk}^{i}_{{\Gamma }}\) is the same as \( \mathsf {sk}_{\Gamma }\) except we replace \(\mathbf {D}\) with \(\mathbf {D}+ \mathbf {a}^{\!\scriptscriptstyle {\Vert }}_2 \cdot s_i^{-1} \varDelta \cdot \mathbf {f}_{i,x^*}\) in the term \([\mathbf {D}\mathbf {M}_{x^*_i} + \mathbf {W}_{x^*_i,i \,{\bmod }\, 2} \mathbf {R}]_2\);
-
\( \mathsf {sk}^{i-1,i}_{{\Gamma }}\) is the same as \( \mathsf {sk}_{\Gamma }\) except we replace \(-\mathbf {D}\) with \(-\mathbf {D}+ \mathbf {a}^{\!\scriptscriptstyle {\Vert }}_2 \cdot s_{i-1}^{-1} \varDelta \cdot \mathbf {f}_{i-1,x^*}\) in the term \([-\mathbf {D}+ \mathbf {Z}_{i \,{\bmod }\, 2} \mathbf {R}]_2\);
-
\( \mathsf {sk}_{\Gamma }^{\ell ,*}\) is the same as \( \mathsf {sk}_{\Gamma }\) except we replace \(-\mathbf {D}\) with \(-\mathbf {D}+ \mathbf {a}^{\!\scriptscriptstyle {\Vert }}_2 \cdot s_\ell ^{-1}\varDelta \cdot \mathbf {f}_{\ell ,x^*}\) in the term \([\mathbf {k}^{\!\scriptscriptstyle {\top }}\mathbf {f}-\mathbf {D}+{\mathbf {W}_\text {end}}\mathbf {R}]_2\).
That is, we have: writing \(\tau = i \bmod 2\),
They are analogous to those used in GWW’s proof [11] with a novel way to change \(\mathbf {a}_2\)-componentsFootnote 7. Following the notations in Sect. 1.2, we use \(\mathbf {d}'_i = s_i^{-1}\varDelta \cdot \mathbf {f}_{i,x^*}\) rather than \(\mathbf {d}'_i=\varDelta \cdot \mathbf {f}_{i,x^*}\). We remark that they are essentially the same but the former helps to simplify the exposition of the proof. Also, we note that \(s_i\) is independent of the challenge input \(x^*\) which will be crucial for the adaptive security in the next section.
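The scaled choice \(\mathbf {d}'_i = s_i^{-1}\varDelta \cdot \mathbf {f}_{i,x^*}\) can be checked numerically: by Lemma 5, \(s_i\, \mathbf {d}'_i \mathbf {M}_{x^*_i} = \varDelta \cdot \mathbf {f}_{i-1,x^*} = s_{i-1}\, \mathbf {d}'_{i-1} \bmod p\), which is one way to see why this scaling simplifies the bookkeeping across adjacent distributions. A toy sketch of this identity (hypothetical automaton and values, as before):

```python
import numpy as np

p = 101
M = {'a': np.array([[1, 0], [1, 1]]), 'b': np.array([[0, 1], [1, 0]])}
u = np.array([1, 0]); f = np.array([0, 1])
x_star = ['b', 'a', 'b']
ell = len(x_star)

rng = np.random.default_rng(1)
s = [int(v) for v in rng.integers(1, p, ell + 1)]          # s_0, ..., s_ell, all nonzero
Delta = 5

# f_{i,x*} = f M_{x*_l} ... M_{x*_{i+1}} mod p, as in (22)
f_i = [None] * (ell + 1)
f_i[ell] = f % p
for i in range(ell - 1, -1, -1):
    f_i[i] = f_i[i + 1].dot(M[x_star[i]]) % p

inv = lambda a: pow(int(a), -1, p)
d_prime = [Delta * inv(s[i]) * f_i[i] % p for i in range(ell + 1)]   # d'_i = s_i^{-1} Delta f_{i,x*}

for i in range(1, ell + 1):
    lhs = s[i] * d_prime[i].dot(M[x_star[i - 1]]) % p                # s_i d'_i M_{x*_i}
    rhs = s[i - 1] * d_prime[i - 1] % p                              # s_{i-1} d'_{i-1}
    assert np.array_equal(lhs, rhs)
```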
Game Sequence. As in GWW’s proof, we prove Theorem 1 via a series of games summarized in Fig. 6:
-
\(\mathsf {G}_0\): Identical to the real game.
-
\(\mathsf {G}_1\): Identical to \(\mathsf {G}_0\) except that the challenge ciphertext is \( \mathsf {ct}^{0}_{{x^*}}\).
-
\(\mathsf {G}_{2.i.0},\,i \in [\ell ]\): In this game, the challenge ciphertext is \( \mathsf {ct}^{i-1}_{{x^*}}\) and the secret key is \( \mathsf {sk}_{\Gamma }^{i-1}\).
-
\(\mathsf {G}_{2.i.1},\,i \in [\ell ]\): Identical to \(\mathsf {G}_{2.i.0}\) except that the secret key is \( \mathsf {sk}_{\Gamma }^{i-1,i}\).
-
\(\mathsf {G}_{2.i.2},\,i \in [\ell ]\): Identical to \(\mathsf {G}_{2.i.1}\) except that the challenge ciphertext is \( \mathsf {ct}^{i-1,i}_{{x^*}}\).
-
\(\mathsf {G}_{2.i.3},\,i \in [\ell ]\): Identical to \(\mathsf {G}_{2.i.2}\) except that the secret key is \( \mathsf {sk}_{\Gamma }^i\).
-
\(\mathsf {G}_{2.i.4},\,i \in [\ell ]\): Identical to \(\mathsf {G}_{2.i.3}\) except that the challenge ciphertext is \( \mathsf {ct}^{i}_{{x^*}}\).
-
\(\mathsf {G}_{3}\): Identical to \(\mathsf {G}_{2.\ell .4}\) except that secret key is \( \mathsf {sk}_{\Gamma }^{\ell ,*}\).
Note that \(\mathsf {G}_{2.1.0}\) is identical to \(\mathsf {G}_1\) except that the secret key is \( \mathsf {sk}^0_{\Gamma }\), and we have \(\mathsf {G}_{2.i.0}=\mathsf {G}_{2.i-1.4}\) for all \(i\in [2,\ell ]\). The remainder of this section is devoted to proving the indistinguishability of each pair of adjacent games described above. The proofs are analogous to those for GWW; however, they crucially use the properties of \(\mathbf {f}_{0,x^*},\ldots ,\mathbf {f}_{\ell ,x^*}\). Due to lack of space, we focus on the proofs that use these properties; the other proofs are completely analogous to GWW and can be found in the full paper.
Useful Lemmas. Before proceeding to the proofs, we state the next lemma describing the properties of \(\mathbf {f}_{0,x^*},\ldots ,\mathbf {f}_{\ell ,x^*}\).
Lemma 5
(Property of \(\{\mathbf {f}_{i,x^*}\}_{i\in [0,\ell ]}\)). For any NFA\(^{\oplus _p}\) \({\Gamma }=(Q,\varSigma ,\{\mathbf {M}_\sigma \},\mathbf {u},\mathbf {f})\) and input \(x^*\in \varSigma ^\ell \), we have:
-
1.
\({\Gamma }(x^*)=0\iff \mathbf {f}_{0,x^*}\mathbf {u}^{\!\scriptscriptstyle {\top }}= 0 \bmod p\);
-
2.
\( \mathbf {f}_{i-1,x^*} = \mathbf {f}_{i,x^*}\mathbf {M}_{x^*_i}\bmod p \) for all \(i \in [\ell ]\);
-
3.
\(\mathbf {f}_{\ell ,x^*}=\mathbf {f}\).
Proof
The lemma directly follows from the definitions of NFA\(^{\oplus _p}\) in Sect. 3 and \(\mathbf {f}_{0,x^*},\ldots ,\mathbf {f}_{\ell ,x^*}\) in (22). \(\square \)
4.4 Initializing
It is standard to prove \(\mathsf {G}_0\approx _c\mathsf {G}_1\), see the full paper. We only show the proof sketch for \(\mathsf {G}_1 \approx _c \mathsf {G}_{2.1.0}\).
Lemma 6
(\(\mathsf {G}_1=\mathsf {G}_{2.1.0}\)). For all \(\mathcal {A}\), we have
Proof
Roughly, we will prove that
where we have
and
This follows from the statement:
which is implied by the fact \({\Gamma }(x^*)=0 \Longleftrightarrow \mathbf {f}_{0,x^*} \mathbf {u}^{\!\scriptscriptstyle {\top }}= 0 \bmod p\) (see Lemma 5). This is sufficient for the proof. \(\square \)
4.5 Switching Secret Keys II
This section proves \(\mathsf {G}_{2.i.2}\approx _c \mathsf {G}_{2.i.3}\) for all \(i\in [\ell ]\) using the transition lemma from GWW [11].
Lemma 7
(\((\mathbf {z},\mathbf {w})\)-transition lemma [11]). For all \(s_{i-1},s_i \ne 0\) and \(\bar{\varDelta } \in \mathbb {Z}_p\), we have
where \(\mathsf {aux}= ([\mathbf {z}\mathbf {B},\mathbf {w}\mathbf {B},\mathbf {B}]_2)\) and \(\mathbf {z},\mathbf {w}\leftarrow \mathbb {Z}_p^{1 \times k}\), \(\mathbf {B}\leftarrow \mathbb {Z}_p^{k \times k}\), \(\mathbf {r}\leftarrow \mathbb {Z}_p^{1\times k}\). Concretely, the advantage function \(\mathsf {Adv}^{\textsc {trans}}_{{\mathcal {B}}}(\lambda )\) is bounded by with \(\mathsf {Time}(\mathcal {B}_0)\approx \mathsf {Time}(\mathcal {B})\).
Lemma 8
(\(\mathsf {G}_{2.i.2}\approx _c \mathsf {G}_{2.i.3}\)). For all \(i\in [\ell ]\) and all \(\mathcal {A}\), there exists \(\mathcal {B}\) with \(\mathsf {Time}(\mathcal {B}) \approx \mathsf {Time}(\mathcal {A})\) such that
Overview. This roughly means
more concretely, we want to prove the following statement over \(\mathbf {a}_2\)-components:
given \(\mathbf {d},\varDelta ,s_{i-1},s_i,s_{i-1}\mathbf {z}_\tau + s_i \mathbf {w}_{x^*_i,\tau }\) revealed by \( \mathsf {ct}_{x^*}^{i-1,i}\). The first row corresponds to \( \mathsf {sk}^{i-1,i}_{{\Gamma }}[2]\) while the second corresponds to \( \mathsf {sk}^{i}_{{\Gamma }}[2]\). This can be handled by the \((\mathbf {z}_\tau ,\mathbf {w}_{x^*_i,\tau })\)-transition lemma and the fact that \(\mathbf {f}_{i-1,x^*} = \mathbf {f}_{i,x^*}\mathbf {M}_{x^*_i}\bmod p\) (see Lemma 5).
Proof
Recall that \(\tau = i \bmod 2\). By Lemma 4, it suffices to prove the lemma over \(\mathbf {a}_2\)-components which roughly means:
in the presence of
One can sample basis \(\mathbf {A}_1,\mathbf {a}_2,\mathbf {A}_3,\mathbf {A}^{\!\scriptscriptstyle {\Vert }}_1,\mathbf {a}^{\!\scriptscriptstyle {\Vert }}_2,\mathbf {A}^{\!\scriptscriptstyle {\Vert }}_3\) and trivially simulate \( \mathsf {mpk}\), \( \mathsf {ct}^{i-1,i}_{x^*}\) and secret key using terms given out above. Furthermore, we prove this using \((\mathbf {z}_\tau ,\mathbf {w}_{x^*_i,\tau })\)-transition lemma. On input
where and
with \(\mathbf {z}_\tau ,\mathbf {w}_{x^*_i,\tau } \leftarrow \mathbb {Z}_p^{1 \times k}\), \(\mathbf {B}\leftarrow \mathbb {Z}_p^{k \times k}\), \(\mathbf {r}\leftarrow \mathbb {Z}_p^{1 \times k}\) and \(\bar{\varDelta } \leftarrow \mathbb {Z}_p\), we sample \(\alpha \leftarrow \mathbb {Z}_p,\mathbf {w}_\text {start},\mathbf {z}_{1-\tau }\), \(\mathbf {w}_{\sigma ,1-\tau },\mathbf {w}_\text {end}\leftarrow \mathbb {Z}_p^{1 \times k}\) for all \(\sigma \in \varSigma \) and \(\mathbf {w}_{\sigma ,\tau }\leftarrow \mathbb {Z}_p^{1\times k}\) for all \(\sigma \ne x^*_i\) and proceed as follows:
-
(Simulating challenge ciphertext) On input \((m_0,m_1)\), we trivially simulate \( \mathsf {ct}^{i-1,i}_{x^*}[2]\) using \(s_{i-1},s_i,s_{i-1}\mathbf {z}_\tau + s_i \mathbf {w}_{x^*_i,\tau }\) in \(\mathsf {aux}\), together with \(\alpha \), \(\mathbf {w}_\text {start}\), \(\mathbf {w}_{\sigma ,1-\tau }\), \(\mathbf {z}_{1-\tau }\), \(\mathbf {w}_\text {end}\).
-
(Simulating secret key) On input \({\Gamma }\), we want to return a secret key for \({\Gamma }\) in the form:
where . Observe that
-
when , the distribution is identical to ;
-
when , the distribution is identical to since \(\mathbf {f}_{i-1,x^*} = \mathbf {f}_{i,x^*}\mathbf {M}_{x^*_{i}} \bmod p\) (see Lemma 5).
-
We sample \(\mathbf {d}\leftarrow \mathbb {Z}_p^{1 \times Q}\) and \(\widetilde{\mathbf {R}} \leftarrow \mathbb {Z}_p^{k \times Q}\) and implicitly set
We then generate the key for \({\Gamma }\) as follows:
-
We simulate \([\mathbf {R}]_2\) from \([\mathbf {r}^{\!\scriptscriptstyle {\top }}]_2,[\mathbf {B}]_2\) and \(\mathbf {f}_{i-1,x^*},\widetilde{\mathbf {R}}\).
-
We rewrite the terms in the dashed box as follows:
$$\begin{aligned}{}[-\mathbf {d}+ (\bar{\varDelta }_0 + \mathbf {z}_\tau \mathbf {r}^{\!\scriptscriptstyle {\top }}) \cdot \mathbf {f}_{i-1,x^*} + \mathbf {z}_\tau \mathbf {B}\cdot \widetilde{\mathbf {R}}]_2,\ [\mathbf {d}\mathbf {M}_{x^*_i} + (\bar{\varDelta }_1 + \mathbf {w}_{x^*_i,\tau }\mathbf {r}^{\!\scriptscriptstyle {\top }}) \cdot \mathbf {f}_{i-1,x^*} + \mathbf {w}_{x^*_i,\tau } \mathbf {B}\cdot \widetilde{\mathbf {R}} ]_2 \end{aligned}$$and simulate them using \([\bar{\varDelta }_{0} + \mathbf {z}_\tau \mathbf {r}^{\!\scriptscriptstyle {\top }}]_2, [\bar{\varDelta }_{1} + \mathbf {w}_{x^*_i,\tau }\mathbf {r}^{\!\scriptscriptstyle {\top }}]_2, [\mathbf {z}_\tau \mathbf {B}]_2,[\mathbf {w}_{x^*_i,\tau }\mathbf {B}]_2\) and \(\mathbf {d},\mathbf {f}_{i-1,x^*},\widetilde{\mathbf {R}}\).
-
We simulate all remaining terms using \([\mathbf {R}]_2\) and \(\alpha \), \(\mathbf {d}\), \(\mathbf {w}_\text {start}\), \(\mathbf {z}_{1-\tau }\), \(\{\mathbf {w}_{\sigma ,\tau }\}_{\sigma \ne x^*_i}\), \(\{\mathbf {w}_{\sigma ,1-\tau }\}_{\sigma \in \varSigma }\), \(\mathbf {w}_\text {end}\).
Observe that, when , we have , then the secret key is and the simulation is identical to \(\mathsf {G}_{2.i.2}\); when , we have , then the secret key is and the simulation is identical to \(\mathsf {G}_{2.i.3}\). This completes the proof. \(\square \)
4.6 Finalize
We finally prove that the adversary wins \(\mathsf {G}_3\) with probability 1/2.
Lemma 9
\(\Pr [\langle \mathcal {A},\mathsf {G}_3\rangle =1] \approx 1/2\).
Proof
First, we argue that the secret key \( \mathsf {sk}_{\Gamma }^{\ell ,*}\) in this game perfectly hides the \(\mathbf {a}_2\)-component of \(\mathbf {k}^{\!\scriptscriptstyle {\top }}\), i.e., \(\alpha =\mathbf {a}_2\mathbf {k}^{\!\scriptscriptstyle {\top }}\). Recall the \(\mathbf {a}_2\)-components of the secret key:
By the property \(\mathbf {f}_{\ell ,x^*}=\mathbf {f}\) (see Lemma 5), we can see that \( \mathsf {sk}^{\ell ,*}_{\Gamma }[2]\) can be simulated using \( \alpha + s_\ell ^{-1}\varDelta \); since \(\varDelta \) is uniformly random and appears nowhere else, this value is uniform and independent of \(\alpha \), so the secret key perfectly hides \(\alpha =\mathbf {a}_2\mathbf {k}^{\!\scriptscriptstyle {\top }}\). Therefore, the unique term involving \(\mathbf {k}\) in \( \mathsf {ct}_{x^*}^{\ell }\), i.e., \([\mathbf {s}_\ell \mathbf {A}_1\mathbf {k}^{\!\scriptscriptstyle {\top }}+s_\ell \mathbf {a}_2\mathbf {k}^{\!\scriptscriptstyle {\top }}]_T\), is independently and uniformly distributed and thus statistically hides the message \(m_\beta \). \(\square \)
5 Adaptively Secure ABE for \(\mathcal {E}_Q\)-Restricted NFA\(^{\oplus _p}\) and DFA
In this section, we present our adaptively secure ABE for \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\). By our transformation from DFA to \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\) (cf. Lemma 1), this readily gives us an adaptively secure ABE for DFA. We defer the concrete construction to the full paper.
Overview. Our starting point is the selectively secure ABE scheme in Sect. 4. To achieve adaptive security, we handle key queries one by one following the standard dual system method [20]; for each key, we carry out the one-key selective proof in Sect. 4 within the piecewise guessing framework [15]. However, this does not work immediately; we must make some changes to the scheme and the proof in Sect. 4.
Recall that, in the one-key setting, the (selective) proof in Sect. 4 roughly tells us
The two-key setting, for example, is expected to be handled by hybrid arguments:
The first step seems feasible with some natural extensions, but the second one is problematic. Since we cannot switch the challenge ciphertext back to \( \mathsf {ct}_{x^*}\) due to the presence of \( \mathsf {sk}^{\ell ,*}_{{\Gamma }_1}\), the argument (23) cannot be applied to the second key \( \mathsf {sk}_{{\Gamma }_2}\) literally. In more detail, recall that
leaks information about \(\mathbf {w}_{x^*_\ell ,\bar{\ell }}\) and \(\mathbf {w}_\text {end}\), while we need them to be hidden in some steps of the one-key proof; for example, Lemma 8 in Sect. 4.5 for \(\mathsf {G}_{2.i.2}\approx _c\mathsf {G}_{2.i.3}\). We quickly argue that the natural solution of adding an extra subspace for fresh copies of \(\mathbf {w}_{x^*_\ell ,\bar{\ell }}\) and \(\mathbf {w}_\text {end}\) blows up the ciphertext and key sizes (see Sect. 1.1 for a discussion).
Our approach reuses the existing \(\mathbf {a}_2\)-components as in [8]. Recall that our one-key proof (23) uses a series of hybrids with random coins \(s_0,s_1,\ldots \) and finally stops at a hybrid with \(s_\ell \) (cf. (23) and (24)). Roughly, we change the scheme by adding an extra random coin s into the ciphertext and move one more step in the proof, so that we finally stop at a new hybrid involving only the new s. This allows us to release \(s_\ell \) and reuse \(\mathbf {w}_{x^*_\ell ,\bar{\ell }},\mathbf {w}_\text {end}\) for the next key. More concretely, starting with the scheme in Sect. 4.2, we introduce a new component \([\mathbf {W}]_1\in G_1^{(2k+1)\times k}\) into \( \mathsf {mpk}\):
-
during encryption, we pick one more random coin \(\mathbf {s}\leftarrow \mathbb {Z}_p^{1\times k}\) and replace the last three components in \( \mathsf {ct}_x\) with
$$ {[\mathbf {s}\mathbf {A}_1]_1},[\mathbf {s}_\ell \mathbf {A}_1{\mathbf {W}_\text {end}}+\mathbf {s}\mathbf {A}_1\mathbf {W}]_1,\,[\mathbf {s}\mathbf {A}_1\mathbf {k}^{\!\scriptscriptstyle {\top }}]_T \cdot m; $$ this connects the last random coin \(\mathbf {s}_\ell \) with the newly introduced \(\mathbf {s}\); and \(\mathbf {s}\) corresponds to s in the proof;
-
during key generation, we replace the last two components in \( \mathsf {sk}_{\Gamma }\) with
$$ [ - \mathbf {D}+ {\mathbf {W}_\text {end}}\mathbf {R}]_2,[ \mathbf {k}^{\!\scriptscriptstyle {\top }}\mathbf {f}+ \mathbf {W}\mathbf {R}]_2,[\mathbf {R}]_2; $$ the decryption will recover \([\mathbf {s}\mathbf {A}_1\mathbf {k}^{\!\scriptscriptstyle {\top }}\mathbf {f}-\mathbf {s}_\ell \mathbf {A}_1\mathbf {D}]_T\) instead of \([\mathbf {s}_\ell \mathbf {A}_1\mathbf {k}^{\!\scriptscriptstyle {\top }}\mathbf {f}-\mathbf {s}_\ell \mathbf {A}_1\mathbf {D}]_T\);
-
during the proof, we extend the proof in Sect. 4.3 by one more step (see the dashed box):
so that \( \mathsf {ct}_{x^*}^*[2]\) is in the following form:
$$ \mathsf {ct}^*_{x^*}[2] = \big ( [s\mathbf {w}]_1,[s]_1,[s\alpha ]_1\cdot m_\beta \big ) $$ which leaks \(\mathbf {w}= \mathbf {a}_2\mathbf {W}\) instead of \(\mathbf {w}_{x^*_\ell ,\bar{\ell }},\mathbf {w}_\text {end}\); by this, we can carry out the one-key proof (23) for the next key (with some natural extensions).
Conceptually, we can interpret this as letting the NFA move to a specific dummy state whenever it accepts the input; such a modification was mentioned in [4] as a way to simplify the description rather than to improve security or efficiency. In our formal description below, we rename \({\mathbf {W}_\text {end}},\mathbf {W},\mathbf {s},s\) as \({\mathbf {Z}_\text {end}},{\mathbf {W}_\text {end}},\mathbf {s}_\text {end},s_\text {end}\), respectively.
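The following toy sketch is our own reading of this remark, not the scheme itself (the end-of-input marker `#`, the helper `add_dummy_state`, and the toy automaton are hypothetical; the construction below instead realizes the end-of-input step via \({\mathbf {Z}_\text {end}},{\mathbf {W}_\text {end}}\)): appending a dummy state \(q_\text {acc}\) together with one end-of-input transition that routes the accepting mass of \(\mathbf {f}\) into \(q_\text {acc}\) leaves the acceptance value unchanged.

```python
# Toy illustration only: the "dummy accept state" reading of the remark above.
# accept_value computes f * M_{x_l} ... M_{x_1} * u^T mod p for an NFA^{(+)_p}.
import numpy as np

def accept_value(M, u, f, x, p):
    v = np.array(u) % p                     # column of start states
    for sym in x:                           # apply M_{x_1}, ..., M_{x_l}
        v = (np.array(M[sym]) @ v) % p
    return int(np.array(f) @ v) % p

def add_dummy_state(M, u, f):
    """Append a sink state q_acc; an extra end-of-input matrix moves the
    accepting mass of f into q_acc, whose indicator becomes the new f."""
    Q = len(u)
    Mp = {s: np.pad(np.array(M[s]), ((0, 1), (0, 1))) for s in M}  # q_acc is a sink
    M_end = np.zeros((Q + 1, Q + 1), dtype=int)
    M_end[Q, :Q] = f                        # accept states -> q_acc
    up = np.append(np.array(u), 0)
    fp = np.zeros(Q + 1, dtype=int); fp[Q] = 1
    return Mp, M_end, up, fp

p = 101
M = {'a': np.array([[0, 0], [1, 0]])}       # reading 'a' moves state 0 to state 1
u, f = [1, 0], [0, 1]
Mp, M_end, up, fp = add_dummy_state(M, u, f)
Mp['#'] = M_end                             # '#' plays the role of "end of input"
for x in (['a'], ['a', 'a']):
    assert accept_value(Mp, up, fp, x + ['#'], p) == accept_value(M, u, f, x, p)
```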
5.1 Scheme
Our adaptively secure ABE for \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\) in prime-order groups uses the same basis as described in Sect. 4.1 and proceeds as follows:
-
\(\mathsf {Setup}(1^\lambda ,\varSigma ):\) Run \(\mathbb {G}= (p,G_1,G_2,G_T,e) \leftarrow \mathcal {G}(1^\lambda )\). Sample
$$ \mathbf {A}_1 \leftarrow \mathbb {Z}_p^{k \times (2k+1)},\,\mathbf {k}\leftarrow \mathbb {Z}_p^{1 \times (2k+1)},\, {\mathbf {W}_\text {start}},\mathbf {Z}_b,\mathbf {W}_{\sigma ,b},{\mathbf {Z}_\text {end}},{\mathbf {W}_\text {end}}\leftarrow \mathbb {Z}_p^{(2k+1) \times k} $$ for all \(\sigma \in \varSigma \) and \(b\in \{0,1\}\). Output
$$\begin{aligned} \mathsf {mpk}&= \big ([\mathbf {A}_1,\mathbf {A}_1{\mathbf {W}_\text {start}},\{\mathbf {A}_1\mathbf {Z}_b,\mathbf {A}_1\mathbf {W}_{\sigma ,b}\}_{\sigma \in \varSigma ,b\in \{0,1\}},\mathbf {A}_1{\mathbf {Z}_\text {end}},\mathbf {A}_1{\mathbf {W}_\text {end}}\,]_1,[\mathbf {A}_1\mathbf {k}^{\!\scriptscriptstyle {\top }}]_T\big ) \\ \textsf {msk}&= \big (\,\mathbf {k},{\mathbf {W}_\text {start}},\{\,\mathbf {Z}_b,\mathbf {W}_{\sigma ,b}\}_{\sigma \in \varSigma ,b\in \{0,1\}},{\mathbf {Z}_\text {end}},{\mathbf {W}_\text {end}}\big ). \end{aligned}$$ -
\(\mathsf {Enc}( \mathsf {mpk},x,m):\) Let \(x = (x_1,\ldots ,x_\ell ) \in \varSigma ^\ell \) and \(m \in G_T\). Pick \(\mathbf {s}_0,\mathbf {s}_1,\ldots ,\mathbf {s}_\ell ,\mathbf {s}_\text {end}\leftarrow \mathbb {Z}_p^{1\times k}\) and output
-
\(\mathsf {KeyGen}( \mathsf {mpk}, \textsf {msk},{\Gamma }):\) Let \({\Gamma }=(\,Q,\varSigma ,\{\mathbf {M}_{\sigma }\}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f}\,)\). Pick \(\mathbf {D}\leftarrow \mathbb {Z}_p^{(2k+1)\times Q}\), \(\mathbf {R}\leftarrow \mathbb {Z}_p^{k \times Q}\) and output
$$ \mathsf {sk}_{{\Gamma }} = \begin{pmatrix} [\mathbf {D}\mathbf {u}^{\!\scriptscriptstyle {\top }}+ {\mathbf {W}_\text {start}}\mathbf {R}\mathbf {u}^{\!\scriptscriptstyle {\top }}]_2,[\mathbf {R}\mathbf {u}^{\!\scriptscriptstyle {\top }}]_2\\ \big \{ [-\mathbf {D}+ \mathbf {Z}_b\mathbf {R}]_2,[\mathbf {D}\mathbf {M}_\sigma + \mathbf {W}_{\sigma ,b}\mathbf {R}]_2,[\mathbf {R}]_2\big \}_{\sigma \in \varSigma ,b\in \{0,1\}}\\ [ - \mathbf {D}+ {\mathbf {Z}_\text {end}}\mathbf {R}]_2,[ \mathbf {k}^{\!\scriptscriptstyle {\top }}\mathbf {f}+ {\mathbf {W}_\text {end}}\mathbf {R}]_2,[\mathbf {R}]_2 \end{pmatrix}. $$ -
\(\mathsf {Dec}( \mathsf {mpk}, \mathsf {sk}_{\Gamma }, \mathsf {ct}_x):\) Parse ciphertext for \(x= (x_1,\ldots ,x_\ell )\) and key for \({\Gamma }= (\,Q,\varSigma ,\{\mathbf {M}_\sigma \}_{\sigma \in \varSigma },\mathbf {u},\mathbf {f})\) as
$$ \mathsf {ct}_x = \begin{pmatrix} [\mathbf {c}_{0,1}]_1,[\mathbf {c}_{0,2}]_1\\ \big \{\, [\mathbf {c}_{j,1}]_1,[\mathbf {c}_{j,2}]_1 \,\big \}_{j}\\ [\mathbf {c}_{\text {end},1}]_1,[\mathbf {c}_{\text {end},2}]_1,C \end{pmatrix} \quad \text{ and }\quad \mathsf {sk}_{\Gamma }=\begin{pmatrix} [\mathbf {k}_{0}^{\!\scriptscriptstyle {\top }}]_2,[\mathbf {r}_{0}^{\!\scriptscriptstyle {\top }}]_2\\ \big \{\, [\mathbf {K}_{b}]_2, [\mathbf {K}_{\sigma ,b}]_2, [\mathbf {R}]_2\,\big \}_{\sigma ,b}\\ [\mathbf {K}_{\text {end},1}]_2,[\mathbf {K}_{\text {end},2}]_2,[\mathbf {R}]_2 \end{pmatrix} $$We define \(\mathbf {u}_{j,x}^{\!\scriptscriptstyle {\top }}\) for all \(j\in [0,\ell ]\) as (11) in Sect. 4.2 and proceed as follows:
1. Compute
$$ B_0 = e([\mathbf {c}_{0,1}]_1,[\mathbf {k}_{0}^{\!\scriptscriptstyle {\top }}]_2) \cdot e([\mathbf {c}_{0,2}]_1,[\mathbf {r}_{0}^{\!\scriptscriptstyle {\top }}]_2)^{-1}; $$
2. For all \(j \in [\ell ]\), compute
$$ [\mathbf {b}_{j}]_T = e([\mathbf {c}_{j-1,1}]_1,[\mathbf {K}_{j \,{\bmod }\, 2}]_2) \cdot e([\mathbf {c}_{j,1}]_1,[\mathbf {K}_{x_j,j \,{\bmod }\, 2}]_2) \cdot e([-\mathbf {c}_{j,2}]_1,[\mathbf {R}]_2) \quad \text{ and }\quad B_j = [\mathbf {b}_j\mathbf {u}_{j-1,x}^{\!\scriptscriptstyle {\top }}]_T; $$
3. Compute
$$ [\mathbf {b}_\text {end}]_T = e([\mathbf {c}_{\ell ,1}]_1,[\mathbf {K}_{\text {end},1}]_2) \cdot e([\mathbf {c}_{\text {end},1}]_1,[\mathbf {K}_{\text {end},2}]_2) \cdot e([-\mathbf {c}_{\text {end},2}]_1,[\mathbf {R}]_2) \quad \text{ and }\quad B_\text {end} = [\mathbf {b}_\text {end}\mathbf {u}_{\ell ,x}^{\!\scriptscriptstyle {\top }}]_T; $$
4. Compute
$$\begin{aligned} \textstyle B_\text {all} = B_0\cdot \prod _{j=1}^\ell B_{j} \cdot B_\text {end} \quad \text{ and }\quad B = B_\text {all}^{(\mathbf {f}\mathbf {u}_{\ell ,x}^{\!\scriptscriptstyle {\top }})^{-1}} \end{aligned}$$ and output the message \(m' \leftarrow C \cdot B^{-1}\).
Correctness can be verified directly as in Sect. 4.2; see the full paper for more details. A shape-level sketch of \(\mathsf {Setup}\) and \(\mathsf {KeyGen}\) is given below.
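As a sanity check on the shapes involved, the following is a minimal sketch of \(\mathsf {Setup}\) and \(\mathsf {KeyGen}\) over plain \(\mathbb {Z}_p\) matrices (our own illustration under simplifying assumptions: the group encodings \([\cdot ]_1,[\cdot ]_2\), the pairing, and a cryptographically large modulus are omitted, and the toy automaton is arbitrary).

```python
# Shape-level sketch of Setup/KeyGen: plain Z_p matrices stand in for the
# exponents; group elements and the pairing are omitted, modulus is a toy prime.
import numpy as np

rng = np.random.default_rng(0)
p, k, Q = 101, 2, 3                   # toy modulus; in the scheme p is the group order
Sigma = ['a', 'b']

def rand(rows, cols):
    return rng.integers(0, p, size=(rows, cols))

# Setup: A1 in Z_p^{k x (2k+1)}, k-vector in Z_p^{1 x (2k+1)},
#        W_start, Z_b, W_{sigma,b}, Z_end, W_end in Z_p^{(2k+1) x k}
A1, kvec = rand(k, 2 * k + 1), rand(1, 2 * k + 1)
W_start, Z_end, W_end = rand(2 * k + 1, k), rand(2 * k + 1, k), rand(2 * k + 1, k)
Z = {b: rand(2 * k + 1, k) for b in (0, 1)}
W = {(s, b): rand(2 * k + 1, k) for s in Sigma for b in (0, 1)}

# KeyGen for Gamma = (Q, Sigma, {M_sigma}, u, f): D in Z_p^{(2k+1) x Q}, R in Z_p^{k x Q}
M = {s: rng.integers(0, 2, size=(Q, Q)) for s in Sigma}   # arbitrary toy transitions
u = np.eye(Q, dtype=int)[:1]          # u in Z_p^{1 x Q}: start state 0
f = np.eye(Q, dtype=int)[-1:]         # f in Z_p^{1 x Q}: accept state Q-1
D, R = rand(2 * k + 1, Q), rand(k, Q)

sk = {
    'start': ((D @ u.T + W_start @ (R @ u.T)) % p, (R @ u.T) % p),
    'steps': {(s, b): ((-D + Z[b] @ R) % p, (D @ M[s] + W[(s, b)] @ R) % p, R)
              for s in Sigma for b in (0, 1)},
    'end':   ((-D + Z_end @ R) % p, (kvec.T @ f + W_end @ R) % p, R),
}
assert sk['end'][1].shape == (2 * k + 1, Q)   # k^T f + W_end R has (2k+1) rows
```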
Security. We prove the following theorem stating the adaptive security of the above ABE for \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\). This readily implies our adaptively secure ABE for DFA thanks to Lemma 1.
Theorem 2
(Adaptively secure ABE for \(\mathcal {E}_Q\)-restricted \(\mathbf {NFA}^{\oplus _p}\)). The ABE scheme for \(\mathcal {E}_Q\)-restricted NFA\(^{\oplus _p}\) in prime-order bilinear groups described above is adaptively secure (cf. Sect. 2.1) under the k-Lin assumption with security loss \(O(q\cdot \ell \cdot |\varSigma |^3 \cdot Q^2)\). Here \(\ell \) is the length of the challenge input \(x^*\) and \(q\) is the number of key queries.
5.2 Proof of Main Theorem
At a high level, we employ the standard dual system proof, switching the challenge ciphertext and the keys to their semi-functional forms one by one. To switch a secret key, we employ the proof technique for the one-key selective setting in Sect. 4 within the piecewise guessing framework [14, 15]. We will capture this by a core lemma. Let \(x^* \in \varSigma ^\ell \) denote the adaptive challenge. We begin with auxiliary distributions and use the notation for \(\mathbf {a}_2\)-components from Sect. 4.3.
Auxiliary Distributions. We sample \(s_\text {end}\leftarrow \mathbb {Z}_p\), \(\varDelta \leftarrow \mathbb {Z}_p\) and define semi-functional ciphertext and key:
-
\( \mathsf {ct}_{x^*}^*\) is the same as \( \mathsf {ct}_{x^*}\) except we replace \(\mathbf {s}_\text {end}\mathbf {A}_1\) with \(\mathbf {s}_\text {end}\mathbf {A}_1 + s_\text {end}\mathbf {a}_2\);
-
\( \mathsf {sk}_{\Gamma }^*\) is the same as \( \mathsf {sk}_{\Gamma }\) except we replace \(\mathbf {k}^{\!\scriptscriptstyle {\top }}\) with \(\mathbf {k}^{\!\scriptscriptstyle {\top }}+\mathbf {a}^{\!\scriptscriptstyle {\Vert }}_2 \cdot s_\text {end}^{-1} \varDelta \) in the term \([\mathbf {k}^{\!\scriptscriptstyle {\top }}\mathbf {f}+ {\mathbf {W}_\text {end}}\mathbf {R}]_2\).
That is, we have:
Game Sequence and Core Lemma. We prove Theorem 2 via a series of games following the standard dual system method [20]:
-
\(\mathsf {G}_0\): Identical to the real game.
-
\(\mathsf {G}_1\): Identical to \(\mathsf {G}_0\) except that the challenge ciphertext is semi-functional, i.e., \( \mathsf {ct}^*_{x^*}\).
-
\(\mathsf {G}_{2.\kappa }\) for \(\kappa \in [0,q]\): Identical to \(\mathsf {G}_1\) except that the first \(\kappa \) secret keys are semi-functional, i.e., \( \mathsf {sk}^*_{\Gamma }\).
-
\(\mathsf {G}_{3}\): Identical to \(\mathsf {G}_{2.q}\) except that the challenge ciphertext is an encryption of a random message.
Here we have \(\mathsf {G}_{2.0}=\mathsf {G}_1\). It is standard to prove \(\mathsf {G}_0\approx _c\mathsf {G}_1\) and \(\mathsf {G}_{2.q}\approx _s\mathsf {G}_{3}\), and to show that the adversary in \(\mathsf {G}_3\) has no advantage; we sketch these proofs in the full paper. To prove \(\mathsf {G}_{2.\kappa -1}\approx _c \mathsf {G}_{2.\kappa }\) for all \(\kappa \in [q]\), we use the following core lemma:
Lemma 10
(Core lemma). For all \(\mathcal {A}\), there exists \(\mathcal {B}\) with \(\mathsf {Time}(\mathcal {B}) \approx \mathsf {Time}(\mathcal {A})\) and
where, for all \(b \in \{0,1\}\), we define:
where
with \({\mathbf {W}_\text {start}},\mathbf {Z}_0,\mathbf {Z}_1,\mathbf {W}_{\sigma ,0},\mathbf {W}_{\sigma ,1},{\mathbf {Z}_\text {end}},{\mathbf {W}_\text {end}}\leftarrow \mathbb {Z}_p^{(2k+1) \times k}\), \(\mathbf {B}\leftarrow \mathbb {Z}_p^{k \times k}\), \(\mathbf {r}\leftarrow \mathbb {Z}_p^{1\times k}\), \(s_\text {end},\varDelta \leftarrow \mathbb {Z}_p\) and the two oracles work as follows:
-
\(\mathsf {OEnc}(x^*,m)\): output \( \mathsf {ct}^*_{x^*}\) using \(s_\text {end}\) in \(\mathsf {aux}_2\);
-
: output if \(b=0\); output using \(\varDelta \) and \(s_\text {end}\) in \(\mathsf {aux}_2\) if \(b = 1\);
with the restrictions that (1) \(\mathcal {A}\) makes only one query to each oracle; (2) queries \({\Gamma }\) and \(x^*\) satisfy \({\Gamma }(x^*)=0\).
It is straightforward to see that the core lemma implies \(\mathsf {G}_{2.\kappa -1}\approx _c \mathsf {G}_{2.\kappa }\); here \(\mathsf {aux}_1\) and \(\mathsf {aux}_2\) suffice to simulate the other \(q-1\) keys, which are either \( \mathsf {sk}_{\Gamma }\) or \( \mathsf {sk}_{\Gamma }^*\); see the full paper for more details.
Notes
- 1.
An accepting path on input \(x\in \{0,1\}^\ell \) is described by a sequence of states \(u_0,\ldots ,u_\ell \in [Q]\) where \(u_0\) is the start state, \(u_\ell \) is an accept state and \(u_j\in \delta (u_{j-1},x_j)\) for all \(j\in [\ell ]\).
- 2.
Looking ahead to the proof of security in Sect. 4, this “simplified” attack corresponds roughly to using \( \mathsf {ct}_{x^*}^{i-1,i}\) to distinguish \( \mathsf {sk}_{\Gamma }^{i-1,i}\) and \( \mathsf {sk}_{\Gamma }^i\); this comes up in the proof of \(\mathsf {G}_{2.i.2} \approx _c \mathsf {G}_{2.i.3}\) in Lemma 8.
- 3.
- 4.
We adopt the standard convention that the product of an empty sequence of matrices is the identity matrix. This means \(\mathbf {d}'_\ell = \varDelta \cdot \mathbf {f}\).
- 5.
We acknowledge that writing \(x^{\!\scriptscriptstyle {\top }}\) constitutes an abuse of notation, but it is nonetheless convenient by analogy with \(\mathbf {M}^{\!\scriptscriptstyle {\top }}_\sigma \).
- 6.
The statement in [3] refers to monotone span programs, which are a more powerful object, but we believe that branching programs suffice.
- 7.
We also change the definition of \( \mathsf {sk}_{\Gamma }^i\), \(i\in [0,\ell ]\), with the goal of improving the exposition.
References
Agrawal, S., Chase, M.: Simplifying design and analysis of complex predicate encryption schemes. In: Coron, J.-S., Nielsen, J.B. (eds.) EUROCRYPT 2017, Part I. LNCS, vol. 10210, pp. 627–656. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56620-7_22
Agrawal, S., Maitra, M., Yamada, S.: Attribute based encryption (and more) for nondeterministic finite automata from LWE. In: Boldyreva, A., Micciancio, D. (eds.) CRYPTO 2019. LNCS, vol. 11693, pp. 765–797. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26951-7_26
Agrawal, S., Maitra, M., Yamada, S.: Attribute based encryption for deterministic finite automata from \(\sf DLIN\). In: Hofheinz, D., Rosen, A. (eds.) TCC 2019. LNCS, vol. 11892, pp. 91–117. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-36033-7_4
Attrapadung, N.: Dual system encryption via doubly selective security: framework, fully secure functional encryption for regular languages, and more. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 557–577. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-55220-5_31
Attrapadung, N.: Dual system encryption framework in prime-order groups via computational pair encodings. In: Cheon, J.H., Takagi, T. (eds.) ASIACRYPT 2016, Part II. LNCS, vol. 10032, pp. 591–623. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-53890-6_20
Attrapadung, N., Yamada, S.: Duality in ABE: converting attribute based encryption for dual predicate and dual policy via computational encodings. In: Nyberg, K. (ed.) CT-RSA 2015. LNCS, vol. 9048, pp. 87–105. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16715-2_5
Chen, J., Gay, R., Wee, H.: Improved dual system ABE in prime-order groups via predicate encodings. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015, Part II. LNCS, vol. 9057, pp. 595–624. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46803-6_20
Chen, J., Gong, J., Kowalczyk, L., Wee, H.: Unbounded ABE via bilinear entropy expansion, revisited. In: Nielsen, J.B., Rijmen, V. (eds.) EUROCRYPT 2018, Part I. LNCS, vol. 10820, pp. 503–534. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-78381-9_19
Chen, J., Wee, H.: Semi-adaptive attribute-based encryption and improved delegation for Boolean formula. In: Abdalla, M., De Prisco, R. (eds.) SCN 2014. LNCS, vol. 8642, pp. 277–297. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10879-7_16
Escala, A., Herold, G., Kiltz, E., Ràfols, C., Villar, J.: An algebraic framework for Diffie-Hellman assumptions. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 129–147. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40084-1_8
Gong, J., Waters, B., Wee, H.: ABE for DFA from k-Lin. In: Boldyreva, A., Micciancio, D. (eds.) CRYPTO 2019, Part II. LNCS, vol. 11693, pp. 732–764. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26951-7_25
Goyal, V., Pandey, O., Sahai, A., Waters, B.: Attribute-based encryption for fine-grained access control of encrypted data. In: Juels, A., Wright, R.N., Vimercati, S. (eds.) ACM CCS 2006, pp. 89–98. ACM Press, October/November 2006. Available as Cryptology ePrint Archive Report 2006/309
Hofheinz, D., Koch, J., Striecks, C.: Identity-based encryption with (almost) tight security in the multi-instance, multi-ciphertext setting. In: Katz, J. (ed.) PKC 2015. LNCS, vol. 9020, pp. 799–822. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46447-2_36
Jafargholi, Z., Kamath, C., Klein, K., Komargodski, I., Pietrzak, K., Wichs, D.: Be adaptive, avoid overcommitting. In: Katz, J., Shacham, H. (eds.) CRYPTO 2017, Part I. LNCS, vol. 10401, pp. 133–163. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63688-7_5
Kowalczyk, L., Wee, H.: Compact adaptively secure ABE for \(\sf NC^1\) from k-Lin. In: Ishai, Y., Rijmen, V. (eds.) EUROCRYPT 2019, Part I. LNCS, vol. 11476, pp. 3–33. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-17653-2_1
Lewko, A., Waters, B.: New techniques for dual system encryption and fully secure HIBE with short ciphertexts. In: Micciancio, D. (ed.) TCC 2010. LNCS, vol. 5978, pp. 455–479. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-11799-2_27
Lewko, A., Waters, B.: Unbounded HIBE and attribute-based encryption. In: Paterson, K.G. (ed.) EUROCRYPT 2011. LNCS, vol. 6632, pp. 547–567. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-20465-4_30
Okamoto, T., Takashima, K.: Fully secure unbounded inner-product and attribute-based encryption. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 349–366. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34961-4_22
Sahai, A., Waters, B.: Fuzzy identity-based encryption. In: Cramer, R. (ed.) EUROCRYPT 2005. LNCS, vol. 3494, pp. 457–473. Springer, Heidelberg (2005). https://doi.org/10.1007/11426639_27
Waters, B.: Dual system encryption: realizing fully secure IBE and HIBE under simple assumptions. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 619–636. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03356-8_36
Waters, B.: Functional encryption for regular languages. In: Safavi-Naini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417, pp. 218–235. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32009-5_14
Acknowledgments
We thank Brent Waters for insightful discussions on adaptive security, as well as the anonymous reviewers for constructive feedback on our write-up.