A Cryptographic Solution To A Game Theoretic Problem: Yevgeniy Dodis Shai Halevi Tal Rabin
Abstract. Although Game Theory and Cryptography seem to have some similar scenarios in common, it is very rare to find instances where tools from one area are applied in the other. In this work we use cryptography to solve a game-theoretic problem. The problem that we discuss arises naturally in the game theory area of two-party strategic games. In these games there are two players. Each player decides on a move (according to some strategy), and then the players execute the game, i.e. the two players make their moves simultaneously. Once these moves are played, each player receives a payoff, which depends on both moves. Each player only cares to maximize its payoff. In the game theory literature it was shown that higher payoffs can be achieved by the players if they use correlated strategies. This is enabled through the introduction of a trusted third party (a mediator), who assists the players in choosing their moves. However, the property of the game being a two-player game is then lost. It is natural to ask whether there can exist a game which is a two-player game yet maintains the high payoffs which the mediator-aided strategy offered. We answer this question affirmatively. We extend the game by adding an initial step in which the two players communicate, and then they proceed to execute the game as usual. For this extended game we can prove (informally speaking) the following: any correlated strategy for 2-player games can be achieved, provided that the players are computationally bounded and can communicate before playing the game. We obtain an efficient solution to the above game-theoretic problem by providing a cryptographic protocol for the following Correlated Element Selection problem. Both Alice and Bob know a list of pairs (a_1, b_1), ..., (a_n, b_n) (possibly with repetitions), and they want to pick a random index i such that Alice learns only a_i and Bob learns only b_i. We believe that this problem has other applications, beyond our application to game theory.
Our solution is quite efficient: it has a constant number of rounds, negligible error probability, and uses only very simple zero-knowledge proofs. The protocol that we describe in this work uses blindable encryption schemes (such as ElGamal or Goldwasser-Micali) as a basic building block. We note that such schemes seem to be a very useful general primitive for constructing efficient protocols. As an example, we show a simple 1-out-of-N oblivious transfer protocol based on any such encryption scheme.
Key words. Game theory, Nash equilibria, Correlated equilibria, Element selection, Correlated coins, Coin Flipping, Blindable encryption, Oblivious transfer.
Lab. of Computer Science, Massachusetts Institute of Technology, 545 Tech Square, Cambridge, MA 02139, USA. Email: yevgen@theory.lcs.mit.edu. IBM T.J. Watson Research Center, P.O. Box 704, Yorktown Heights, New York 10598, USA. Email: {shaih,talr}@watson.ibm.com.
1 Introduction
The research areas of Game Theory and Cryptography are both extensively studied fields with many problems and solutions. Yet, the cross-over between them is surprisingly small: very rarely (if at all) are tools from one area borrowed to address problems in the other. In this paper we exhibit the benefits which arise in such a combined setting. Namely, we show how cryptographic tools can be used to address a natural problem in the Game Theory world. We hope that this work will encourage greater synergy between these classical fields.
Theorem 1 If secure two-party protocols exist for non-trivial functions, then for any Correlated equilibrium s of the original game G, there exists an extended game G' with a computational Nash equilibrium σ, such that the payoffs for both players are the same in s and σ. In other words, any Correlated equilibrium payoffs of G can be achieved using a computational Nash equilibrium of G'. Thus, the mediator can be avoided if the players are computationally bounded and can communicate prior to the game. We stress that although this theorem is quite natural from a cryptographic point of view, the models of Game Theory and Cryptography are different, so proving it in the Game Theory framework requires some care. In particular, two-party cryptographic protocols always assume that at least one player is honest, while the other player could be arbitrarily malicious. In the game-theoretic setting, on the other hand, both players are selfish and rational: they (certainly) deviate from the protocol if they benefit from it, and (can be assumed to) follow their protocol otherwise. Also, it is important to realize that in this setting we cannot use cryptography to enforce honest behavior. The only thing that the players are able to do is to choose their moves and execute them at the end of the game. Hence, even the most elaborate protocol would be useless if a cheating player could simply ignore the fact that it was caught cheating during the protocol, and nonetheless choose a move that maximizes its profit. We discuss these issues further in Section 3.
In other words, given that player 1 follows s_1, the strategy s_2 is an optimal response for player 2, and vice versa. Correlated equilibrium. While Nash equilibrium is quite a natural and appealing notion (since the players can follow their strategies independently of each other), one can wonder whether it is possible to achieve higher expected payoffs by allowing correlated strategies.
3 For example, extensive games with perfect information and simultaneous moves; see [30].
In a correlated strategy profile [2], the induced distribution over A_1 x A_2 can be an arbitrary distribution, not necessarily a product distribution. This can be implemented by having a trusted party (called the mediator) sample a pair of actions (a_1, a_2) according to some joint probability distribution s, and recommend the action a_i to player i. We stress that knowing a_1, player 1 now knows a conditional distribution over the actions of the other player (which can be different for different a_1's), but knows nothing more. We denote these conditional distributions by s_2|a_1 and s_1|a_2. For any a_1, a_1' in A_1, let u_1(a_1' | a_1) be the expected value of u_1(a_1', a_2) when a_2 is distributed according to s_2|a_1 (and similarly for player 2). In other words, u_1(a_1' | a_1) measures the expected payoff of player 1 if his recommended action was a_1 (thus, a_2 is distributed according to s_2|a_1), but it decided to play a_1' instead. As before, we let u_i(s) be the expected value of u_i(a_1, a_2) when (a_1, a_2) is drawn according to s. Analogously to the notion of Nash equilibrium, we now define a Correlated equilibrium, which ensures that the players have no incentive to deviate from the recommendation they got from the mediator. Definition 2 A Correlated equilibrium is a strategy profile s such that for any pair (a_1, a_2) in the support of s, and any a_1' in A_1 and a_2' in A_2, we have u_1(a_1' | a_1) <= u_1(a_1 | a_1) and u_2(a_2' | a_2) <= u_2(a_2 | a_2).
Given a Nash (resp. Correlated) equilibrium s, we say that s achieves Nash (resp. Correlated) equilibrium payoffs (u_1(s), u_2(s)). It is known that Correlated equilibria can give equilibrium payoffs outside of (and better than!) anything in the convex hull of the Nash equilibria payoffs, as demonstrated in the following simple example, first observed by Aumann [2], who also defined the notion of Correlated equilibrium.

Game of Chicken. We consider a simple game, the so-called game of Chicken, shown in the following table (much more dramatic examples can be shown in larger games):

              C      D
        C    4,4    1,5
        D    5,1    0,0
             Chicken

Here each player can either "dare" (D) or "chicken out" (C). The combination (D,D) has a devastating effect on both players (payoffs (0,0)), while (C,C) is quite good (payoffs (4,4)), and each player would ideally prefer to dare while the other chickens out (giving him 5 and the opponent 1). While the wisest pair of actions is (C,C), this is not a Nash equilibrium, since both players are willing to deviate to D (believing that the other player will stay at C). The game is easily seen to have three Nash equilibria: s1 = (D,C), s2 = (C,D), and the mixed equilibrium s3 in which each player plays C and D with probability 1/2 each, inducing the product distribution below:

              C      D
        C    1/4    1/4
        D    1/4    1/4
           Mixed Nash

The respective Nash equilibrium payoffs are (5,1), (1,5) and (5/2, 5/2). We see that the first two pure-strategy Nash equilibria are unfair, while the mixed equilibrium has small payoffs, since the mutually undesirable outcome (D,D) happens with non-zero probability 1/4 in the product distribution. The best fair equilibrium in the convex hull of the Nash equilibria is the combination (s1 + s2)/2, yielding payoffs (3,3). On the other hand, the profile s placing probability 1/3 on each of (C,C), (C,D) and (D,C), shown below, is a Correlated equilibrium yielding payoffs (10/3, 10/3), which is better than any convex combination of the Nash equilibria:

              C      D
        C    1/3    1/3
        D    1/3     0
           Correlated

To briefly verify this, consider the row player (the same argument works for the column player). If it is recommended to play C, its expected payoff is (4+1)/2 = 5/2 since, conditioned on a_1 = C, player 2 is recommended to play C and D with probability 1/2 each. If player 1 switched to D, its expected payoff would still be (5+0)/2 = 5/2, making player 1 reluctant to switch. Similarly, if player 1 is recommended D, it knows that player 2 plays C (as (D,D) is never played in s), so its payoff is 5. Since this is the maximum payoff of the game, player 1 would not benefit from switching to C in this case. Thus, we indeed have a Correlated equilibrium, where each player's payoff is (4+1+5)/3 = 10/3, as claimed.
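The incentive calculation above can be checked mechanically. The following sketch (illustrative code of ours, not from the paper) verifies that the distribution placing 1/3 on each of (C,C), (C,D) and (D,C) yields payoffs (10/3, 10/3) and that no conditional deviation is profitable:

```python
from fractions import Fraction as F

# Payoffs for the Game of Chicken: u1[(a1, a2)] is player 1's payoff.
# Actions: "C" (chicken out), "D" (dare).
u1 = {("C", "C"): 4, ("C", "D"): 1, ("D", "C"): 5, ("D", "D"): 0}
u2 = {(a, b): u1[(b, a)] for (a, b) in u1}   # the game is symmetric

# Candidate correlated equilibrium: 1/3 on each of (C,C), (C,D), (D,C).
s = {("C", "C"): F(1, 3), ("C", "D"): F(1, 3),
     ("D", "C"): F(1, 3), ("D", "D"): F(0)}

def expected_payoff(u, dist):
    return sum(p * u[a] for a, p in dist.items())

def deviation_gain(u, dist, player, rec, dev):
    """Expected gain for `player` from playing `dev` instead of the
    recommended action `rec`, conditioned on being recommended `rec`."""
    cond = [(a, p) for a, p in dist.items() if a[player] == rec and p > 0]
    total = sum(p for _, p in cond)
    gain = F(0)
    for a, p in cond:
        swapped = (dev, a[1]) if player == 0 else (a[0], dev)
        gain += (p / total) * (u[swapped] - u[a])
    return gain

# Each player's equilibrium payoff is 10/3 ...
assert expected_payoff(u1, s) == F(10, 3)
assert expected_payoff(u2, s) == F(10, 3)
# ... and no recommendation admits a profitable deviation.
for player, u in ((0, u1), (1, u2)):
    for rec in ("C", "D"):
        for dev in ("C", "D"):
            assert deviation_gain(u, s, player, rec, dev) <= 0
```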
Punishment for Deviations. We employ the standard game-theoretic solution, which is to punish the cheating player down to his minimax level. This is the smallest payoff that one player can force the other player to have. Namely, the minimax level of player 1 is v_1 = min_{s_2} max_{s_1} u_1(s_1, s_2), and similarly the minimax level of player 2 is v_2 = min_{s_1} max_{s_2} u_2(s_1, s_2). To complete the description of our proposed equilibrium, we let each player punish the other player down to its minimax level if the other player deviates from the protocol (and is caught). Namely, if player 2 cheats, player 1 will play in the last stage of the game the strategy achieving the minimax payoff v_2 for player 2, and vice versa. First, we observe the following simple fact: Lemma 1 In any Correlated equilibrium s, each player's payoff is at least its minimax level: u_1(s) >= v_1 and u_2(s) >= v_2.
Proof: Consider player 1. Let s_2* be the marginal strategy of player 2 in the Correlated equilibrium s, and let b_1 be the best (independent) response of player 1 to s_2*. (The strategy b_1 can be thought of as what player 1 should do if it knows that player 2 plays according to s_2*, but it did not get any recommendation from the mediator.) Since s is a Correlated equilibrium, it follows that u_1(s) >= u_1(b_1, s_2*), since a particular deviation of player 1 from the correlated equilibrium is to ignore its recommendation and always play b_1, and we know that no such deviation can increase the payoff of player 1. Also, recall that b_1 is the best (independent) strategy in response to s_2*, so we have u_1(b_1, s_2*) = max_{s_1} u_1(s_1, s_2*) >= min_{s_2} max_{s_1} u_1(s_1, s_2) = v_1. Hence we get u_1(s) >= v_1.
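For a concrete instance of Lemma 1, the minimax level of a player in the Game of Chicken can be computed numerically. The sketch below (illustrative, not from the paper) grid-searches over the opponent's mixed strategies; for this game the inner maximum is piecewise linear and the minimum falls on a grid point (the opponent always daring), giving v_1 = 1, which is indeed below the correlated equilibrium payoff 10/3:

```python
from fractions import Fraction as F

# Game of Chicken payoffs for player 1: u1[(own action, opponent action)].
u1 = {("C", "C"): 4, ("C", "D"): 1, ("D", "C"): 5, ("D", "D"): 0}

def best_response_value(q_C):
    """Player 1's best payoff when player 2 chickens out with probability q_C."""
    payoff = lambda a: q_C * u1[(a, "C")] + (1 - q_C) * u1[(a, "D")]
    return max(payoff("C"), payoff("D"))

# Minimax level of player 1: player 2 picks the mix minimizing player 1's
# best response.  A pure best response suffices for the inner maximization.
v1 = min(best_response_value(F(k, 100)) for k in range(101))
assert v1 == 1          # player 2 always daring holds player 1 down to 1

# Lemma 1: the correlated equilibrium payoff 10/3 exceeds the minimax level.
assert F(10, 3) >= v1
```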
The same holds for player 2. We are now ready to prove Theorem 1, which we recall from the introduction: Theorem 1 If secure two-party protocols exist for non-trivial functions, then for any Correlated equilibrium s of the original game G, there exists an extended game G' with a computational Nash equilibrium σ, such that the payoffs for both players are the same in s and σ. Proof: (sketch) The extended game G' is the game G, preceded by a cryptographic protocol P for securely computing the (randomized) function that samples a pair of recommendations according to s. (Such a protocol exists by our assumption.) The computational Nash equilibrium σ consists of both players following their parts in the protocol P, and then executing the moves that they got from this protocol. Clearly, this strategy achieves the same payoffs as the original Correlated equilibrium. We only need to show that it is a computational Nash equilibrium. Indeed, if a player believes that the other player follows its designated strategy (i.e., behaves honestly in the protocol P), then the correctness of P implies that the probability of this player cheating without getting caught is negligibly small. If we denote this probability of cheating by ε, and denote the highest payoff that player 1 can get in the original game by H, then the expected payoff of a strategy that includes cheating in P is at most
ε·H + (1 − ε)·v_1 <= ε·H + (1 − ε)·u_1(s), where the inequality follows from Lemma 1. Since H is just some constant which does not depend on the security parameter, this is at most negligibly larger than u_1(s). (We notice that a particular type of cheating, in the cryptographic setting, is early stopping. Since the extended game must always result in the players getting payoffs, stopping is not an issue in game theory: it will be punished down to the minimax level as well.) Finally, if no player has cheated in P, the privacy of P implies that we achieve exactly the same effect as with the mediator: each player learns only its own move, and learns nothing about the other player's move except for what is implied by its own move (and possibly except for a negligible additional advantage). Since s is a Correlated equilibrium, both players will indeed take the actions they output in P.
A few remarks are in order. First, since the extended game produces some distribution over the moves of the players, any Nash equilibrium payoffs achievable in the extended game are also achievable as Correlated equilibrium payoffs of the original game. Thus, Theorem 1 is the best possible result we could hope for.6 Also, one question that may come to mind is why a player would want to carry out the minimax punishment when it catches the other player cheating (since this punishment may also hurt the punishing player). However, the notion of Nash equilibrium only requires the players' actions to be optimal provided that the other player follows its strategy. Thus, it is acceptable to carry out the punishment even if this results in a loss for both players (the cheating player should have been rational and should not have cheated in the first place). We note that this oddity (known as an "empty threat" in the game-theoretic literature) is not just a feature of our solution, but rather an inherent weakness of the notion of Nash equilibrium.
For any message m' (also referred to as the blinding factor) and any ciphertext c in Encrypt_pk(m), Blind_pk(c, m') produces a random encryption of m + m'. Namely, the distribution of Blind_pk(c, m') should be equal to the distribution of Encrypt_pk(m + m'):
6 For example, there is no way to enforce the desirable outcome (C,C) in the Game of Chicken, no matter which cryptographic protocols we design. Indeed, both players will want to deviate to D before making their final moves. 7 Choosing from the list with a distribution other than the uniform one can be accommodated by having a list with repetitions, where a high-probability pair appears many times.
    {Blind_pk(c, m')} = {Encrypt_pk(m + m')}    (1)
If r_1 and r_2 are the random coins used by two successive blindings, then for any ciphertext c and any two blinding factors m_1, m_2:

    Blind_pk(Blind_pk(c, m_1; r_1), m_2; r_2) = Blind_pk(c, m_1 + m_2; r_1 ∘ r_2)    (2)

(where ∘ denotes an efficiently computable operation for combining the random coins).
Thus, in a blindable encryption scheme anyone can randomly translate an encryption of m into an encryption of m + m', without knowledge of m or the secret key, and there is an efficient way of combining several blindings into one operation. Both the ElGamal and the Goldwasser-Micali encryption schemes can be extended into blindable encryption schemes. We note that most of the components of our solution are independent of the specific underlying blindable encryption scheme, but there are some aspects that still have to be tailored to each scheme. (Specifically, proving that the key generation process was done correctly is handled differently for different schemes; see Section 4.4.) The reader is referred to Appendix B for further discussion of the blindable encryption primitive.
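To make equations (1) and (2) concrete, here is a toy instantiation (an "exponential ElGamal" over deliberately tiny, insecure parameters; all function names are ours, for illustration only). Blinding adds the blinding factor to the plaintext, and the coins of two successive blindings combine by addition modulo the group order:

```python
import random

# Toy "exponential ElGamal" over the quadratic residues mod p = 23 (q = 11).
# WARNING: parameters far too small to be secure; this only exercises the
# two blinding equations from the text.
p, q, g = 23, 11, 4               # g = 4 generates the order-11 subgroup

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)        # secret key x, public key h = g^x

def encrypt(h, m, r=None):
    r = random.randrange(q) if r is None else r
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def blind(h, c, m_blind, s=None):
    """Translate an encryption of m into an encryption of m + m_blind,
    without the secret key (blinding factor m_blind, coins s)."""
    s = random.randrange(q) if s is None else s
    a, b = c
    return (a * pow(g, s, p) % p, b * pow(g, m_blind, p) * pow(h, s, p) % p)

def decrypt(x, c):
    a, b = c
    v = b * pow(a, q - x, p) % p              # a^{-x} = a^{q-x} in the group
    return next(m for m in range(q) if pow(g, m, p) == v)

x, h = keygen()
c = encrypt(h, 3)
# Equation (1): blinding an encryption of 3 by 2 yields an encryption of 5.
assert decrypt(x, blind(h, c, 2)) == 5
# Equation (2): two successive blindings with coins s1, s2 equal one blinding
# with factor m1 + m2 and combined coins s1 + s2 (the "combine" operation).
s1, s2 = 4, 7
assert blind(h, blind(h, c, 1, s1), 2, s2) == blind(h, c, 3, (s1 + s2) % q)
```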
Protocol CES-1

Common inputs: a list of pairs {(a_i, b_i)}, i = 1, ..., n, and the Preparer's public key pk (the Preparer P also knows the corresponding secret key sk).

1. Permute and Encrypt. P picks a random permutation π over {1, ..., n} and sets (c_i, d_i) = (Encrypt_pk(a_{π(i)}), Encrypt_pk(b_{π(i)})) for all i. It sends the list {(c_i, d_i)} to the Chooser C.

2. Choose and Blind. C picks a random index ℓ in {1, ..., n} and a random blinding factor β, and sets (e, f) = (Blind_pk(c_ℓ, 0), Blind_pk(d_ℓ, β)). It sends (e, f) to P.

3. Decrypt and Output. P sets a = Decrypt_sk(e) and b' = Decrypt_sk(f). It sends b' to C and outputs a.

4. Unblind and Output. C sets b = b' − β and outputs b.
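An honest run of the four steps above can be simulated end to end with a toy additively blindable scheme (a minimal "exponential ElGamal" with insecure toy parameters; a correctness-only sketch of ours, not the paper's implementation, and without the zero-knowledge proofs that the real protocol adds):

```python
import random

# Toy additively blindable scheme (insecure parameters; correctness only).
p, q, g = 23, 11, 4

def keygen():
    x = random.randrange(1, q); return x, pow(g, x, p)

def enc(h, m):
    r = random.randrange(q)
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def blind(h, c, m):                       # adds m to the hidden plaintext
    s = random.randrange(q); a, b = c
    return (a * pow(g, s, p) % p, b * pow(g, m, p) * pow(h, s, p) % p)

def dec(x, c):
    a, b = c; v = b * pow(a, q - x, p) % p
    return next(m for m in range(q) if pow(g, m, p) == v)

pairs = [(1, 4), (2, 5), (1, 6)]          # the public list of pairs
x, h = keygen()                           # Preparer's keys

# Step 1 (Preparer): permute and encrypt.
perm = random.sample(range(len(pairs)), len(pairs))
enc_list = [(enc(h, pairs[j][0]), enc(h, pairs[j][1])) for j in perm]

# Step 2 (Chooser): pick an index; blind the a-part by 0, the b-part by beta.
l = random.randrange(len(pairs))
beta = random.randrange(q)
e, f = blind(h, enc_list[l][0], 0), blind(h, enc_list[l][1], beta)

# Step 3 (Preparer): decrypt; output a, return the blinded b.
a_out = dec(x, e)
b_blinded = dec(x, f)

# Step 4 (Chooser): unblind to recover b.
b_out = (b_blinded - beta) % q

assert (a_out, b_out) in pairs            # the parties hold a matching pair
```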
The proof will go as follows: for each i, the Preparer sends the element b_{π(i)} and the corresponding random string that was used to obtain its ciphertext in the first step. The Chooser can then check that the element that it chose in Step 2 was encrypted correctly, and learn the corresponding plaintext. Clearly, in this protocol the Chooser gets more information than just the decryption of its chosen ciphertext (specifically, it gets the decryptions of all the b-ciphertexts). However, this does not affect the security of the protocol, as the Chooser now sees a decryption of a permutation of a list that it knew at the onset of the protocol. This permutation of all the b's does not give any information about the output of the Preparer, other than what is implied by the Chooser's own output b. In particular, notice that if b appears more than once in the list, then the Chooser does not know which of these occurrences corresponds to its chosen ciphertext. Next, we observe that after the above change there is no need for the Chooser to send the blinded b-ciphertext to the Preparer; it is sufficient for the Chooser to send only the blinded a-ciphertext in Step 2, since it can compute the decryption of b by itself. A weaker condition in the second proof-of-knowledge. Finally, we observe that since the security of the Chooser relies on an information-theoretic argument, the second proof-of-knowledge (in which the Chooser proves that it knows the index ℓ) does not have to be fully zero-knowledge. In fact, tracing through the proof of security, one can verify that it is sufficient for this proof to be witness hiding in the sense of Feige and Shamir [17]. The resulting protocol is described in Figure 2. Remark. Notice that for the modified protocol we did not use the full power of blindable encryption, since we only used blindings by zero. Namely, all that was used in these protocols is that we can transform any ciphertext into a random encryption of the same plaintext. (The zero-knowledge proofs also use only blindings by zero.) This is exactly the random self-reducibility property used by Sander et al. [31].
Protocol CES-2

Common inputs: a list of pairs {(a_i, b_i)}, i = 1, ..., n, and the Preparer's public key pk (the Preparer P also knows the corresponding secret key sk).

1. Permute and Encrypt. P encrypts the list in a canonical manner, blinds each ciphertext with zero, and permutes the resulting list with a random permutation π. It sends the resulting list of ciphertext pairs to the Chooser C.
   Sub-protocol: P proves in zero-knowledge that it knows the randomness and the permutation π that were used to obtain the list.

2. Choose and Blind. C picks a random index ℓ and sends to P the ciphertext e obtained by blinding with zero the a-ciphertext of the ℓ-th pair.
   Sub-protocol: C proves in a witness-hiding manner that it knows the randomness and the index ℓ that were used to obtain e.

3. Decrypt and Reveal. P decrypts e and outputs the result a. It also sends to C the permuted list of b's, together with the randomness that was used to blind their canonical encryptions in Step 1.

4. Verify and Output. Denote by b', r' the ℓ-th entries in these lists. C checks that blinding with zero (and randomness r') the canonical encryption of b' indeed yields the b-ciphertext of the ℓ-th pair; if so, it outputs b'.
0. Initially, the Preparer chooses the keys for the blindable encryption scheme, sends the public key to the Chooser, and proves in zero-knowledge that the encryption is committing and has the blinding property. As we said above, this proof must be tailored to the particular encryption scheme that is used. Also, this step can be carried out only once, and the resulting keys can be used for many instances of the protocol. (Alternatively, we can use a trusted third party for this step.)

1. The Preparer encrypts the known list of pairs in some canonical manner, blinds with zero the list of ciphertexts, and permutes it with a random permutation π. It sends the resulting list to the Chooser, and uses the ELC protocol to prove in zero-knowledge that it knows the permutation that was used.

2. The Chooser blinds with zeros the list of ciphertexts, and re-permutes it with a random permutation of its own. It sends the resulting list to the Preparer, and again uses the ELC protocol to prove that it knows the permutation that was used. Here we can optimize the proof somewhat, since we later only use a single ciphertext from the list, and also because the proof only needs to be witness hiding.

3. The Preparer decrypts the first ciphertext in the list that it received and outputs the resulting value a. It also sends to the Chooser the list of the b's, permuted according to π, together with the randomness that was used to blind their canonical encryptions to get the ciphertexts in Step 1.

4. The Chooser lets b and r denote the relevant element and randomness, respectively, in the last lists that it got from the Preparer. It checks that blinding with zero (and randomness r) the canonical encryption of b indeed yields the corresponding ciphertext from Step 1. If this is correct, it outputs b.
Although we can no longer use the general theorems about secure two-party protocols, the security proof is nonetheless quite standard. Specifically, we can prove: Theorem 3 Protocol CES-2 securely computes the (randomized) function of the Correlated Element Selection problem. Proof omitted. Efficiency. We note that all the protocols involved are quite simple. In terms of the number of communication flows, the key generation step (Step 0 above) takes at most five flows, Step 1 takes five flows, Step 2 takes three flows, and Step 3 consists of just one flow. Moreover, these flows can be piggybacked on each other. Hence, we can implement the protocol with only five flows of communication, which equals the five flows required by a single proof. In terms of the number of operations, the complexity of the protocol is dominated by the complexity of the proofs in Steps 1 and 2. The proof in Step 1 requires O(nk) blinding operations (for a list of size n and security parameter k), and the proof of Step 2 can be optimized to about the same order of blinding operations on average. Hence, the whole protocol requires O(nk) blinding operations.8
8 We note that the protocol includes just a single decryption operation, in Step 3. In schemes where encryption is much more efficient than decryption, such as the Goldwasser-Micali encryption, this may have a significant impact on the performance of the protocol.

References

[1] M. Abe. Universally Verifiable Mix-net with Verification Work Independent of the Number of Mix-centers. In Proceedings of EUROCRYPT '98, pp. 437-447, 1998.
[2] R. Aumann. Subjectivity and Correlation in Randomized Strategies. Journal of Mathematical Economics, 1, pp. 67-95, 1974.

[3] I. Barany. Fair distribution protocols or how the players replace fortune. Mathematics of Operations Research, 17(2):327-340, May 1992.

[4] M. Bellare, R. Impagliazzo, and M. Naor. Does parallel repetition lower the error in computationally sound protocols? In 38th Annual Symposium on Foundations of Computer Science, pages 374-383. IEEE, 1997.

[5] J. Benaloh. Dense Probabilistic Encryption. In Proc. of the Workshop on Selected Areas in Cryptography, pp. 120-128, 1994.

[6] M. Ben-Or, S. Goldwasser, and A. Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation. In Proceedings of the 20th Annual ACM Symposium on Theory of Computing, pages 1-10, 1988.

[7] M. Blum. Coin flipping by telephone: A protocol for solving impossible problems. In Advances in Cryptology - CRYPTO '81. ECE Report 82-04, ECE Dept., UCSB, 1982.

[8] G. Brassard, D. Chaum, and C. Crépeau. Minimum disclosure proofs of knowledge. JCSS, 37(2):156-189, 1988.

[9] D. Chaum. Blind signatures for untraceable payments. In Advances in Cryptology - CRYPTO '82, pages 199-203. Plenum Press, 1982.

[10] D. Chaum and H. Van Antwerpen. Undeniable signatures. In G. Brassard, editor, Advances in Cryptology - CRYPTO '89, pages 212-217, Berlin, 1989. Springer-Verlag. Lecture Notes in Computer Science No. 435.

[11] D. Chaum, C. Crépeau, and I. Damgård. Multiparty unconditionally secure protocols. In Advances in Cryptology - CRYPTO '87, volume 293 of Lecture Notes in Computer Science, page 462. Springer-Verlag, 1988.

[12] D. Chaum and T. Pedersen. Wallet databases with observers. In E. Brickell, editor, Advances in Cryptology - CRYPTO '92, pages 89-105, Berlin, 1992. Springer-Verlag. Lecture Notes in Computer Science No. 740.

[13] R. Cramer, I. Damgård, and P. MacKenzie. Efficient zero-knowledge proofs of knowledge without intractability assumptions. To appear in 2000 International Workshop on Practice and Theory in Public Key Cryptography, January 2000, Melbourne, Australia.

[14] C. Crépeau and J. Kilian. Weakening security assumptions and oblivious transfer. In Advances in Cryptology - CRYPTO '88, volume 403 of Lecture Notes in Computer Science, pages 2-7. Springer-Verlag, 1990.

[15] C. Dwork, M. Naor, and A. Sahai. Concurrent zero knowledge. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing, pages 409-418. ACM Press, 1998.

[16] T. ElGamal. A public key cryptosystem and a signature scheme based on discrete logarithms. In Advances in Cryptology - CRYPTO '84, volume 196 of Lecture Notes in Computer Science, pages 10-18. Springer-Verlag, 1985.

[17] U. Feige and A. Shamir. Witness indistinguishable and witness hiding protocols. In Proceedings of the 22nd Annual ACM Symposium on Theory of Computing, pages 416-426. ACM Press, 1990.

[18] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1992.

[19] J. Garay, R. Gennaro, C. Jutla, and T. Rabin. Secure distributed storage and retrieval. In Proc. 11th International Workshop on Distributed Algorithms (WDAG '97), volume 1320 of Lecture Notes in Computer Science, pages 275-289. Springer-Verlag, 1997.

[20] O. Goldreich, S. Micali, and A. Wigderson. Proofs that yield nothing but their validity and a methodology of cryptographic protocol design. In 27th Annual Symposium on Foundations of Computer Science, pages 174-187. IEEE, 1986.

[21] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing, pages 218-229, 1987.

[22] S. Goldwasser and S. Micali. Probabilistic encryption. Journal of Computer and System Sciences, 28(2):270-299, April 1984.

[23] S. Goldwasser, S. Micali, and C. Rackoff. The knowledge complexity of interactive proof systems. SIAM Journal on Computing, 18(1):186-208, 1989.

[24] M. Jakobsson. A Practical Mix. In Proceedings of EUROCRYPT '98, pp. 448-461, 1998.

[25] E. Lehrer and S. Sorin. One-shot public mediated talk. Discussion Paper 1108, Northwestern University, 1994.

[26] J. Kilian. Founding Cryptography on Oblivious Transfer. In Proc. of STOC, pp. 20-31, 1988.

[27] P. MacKenzie. Efficient ZK Proofs of Knowledge. Unpublished manuscript, 1998.

[28] M. Naor and B. Pinkas. Oblivious transfer with adaptive queries. In Advances in Cryptology - CRYPTO '99, volume 1666 of Lecture Notes in Computer Science, pages 573-590. Springer-Verlag, 1999.

[29] J. F. Nash. Non-Cooperative Games. Annals of Mathematics, 54, pages 286-295, 1951.

[30] M. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, 1994.

[31] T. Sander, A. Young, and M. Yung. Non-interactive CryptoComputing for NC1. In 40th Annual Symposium on Foundations of Computer Science, pages 554-567. IEEE, 1999.

[32] A. C. Yao. Protocols for secure computations (extended abstract). In 23rd Annual Symposium on Foundations of Computer Science, pages 160-164. IEEE, Nov. 1982.
In this section we provide efficient implementations for the two sub-protocols used in CES-2. Recall that in the first, the Preparer needs to prove that its list of ciphertexts is a permuted encryption of the known list of pairs, and in the second, the Chooser needs to prove that its ciphertext was obtained by blinding-with-zero one of the ciphertexts in the list. Both sub-protocols are derived from a simple zero-knowledge proof for a problem which we call Encrypted List Correspondence, which was studied in the context of mix-networks.
Protocol ELC (Figure 3)

Common input: two lists of ciphertexts (c_1, ..., c_n) and (c'_1, ..., c'_n). The prover knows a permutation π and random strings r_1, ..., r_n such that c'_i = Blind(c_{π(i)}, 0; r_i) for all i.

1. The prover chooses a random permutation λ over {1, ..., n} and random strings s_1, ..., s_n, sets d_i = Blind(c_{λ(i)}, 0; s_i) for all i, and sends the list (d_1, ..., d_n) to the verifier.

2. The verifier chooses a random bit σ and sends it to the prover.

3. If σ = 0, the prover reveals λ and the strings s_1, ..., s_n.
   If σ = 1, the prover reveals the permutation ρ = π^{-1} ∘ λ and the combined random strings u_i = s_i ∘ r_{ρ(i)}.

4. If σ = 0, the verifier checks that d_i = Blind(c_{λ(i)}, 0; s_i) for all i.
   If σ = 1, the verifier checks that d_i = Blind(c'_{ρ(i)}, 0; u_i) for all i.
   In either case, the verifier rejects if any check fails.
In this problem, a prover P wants to prove in zero-knowledge that two lists of ciphertexts, (c_1, ..., c_n) and (c'_1, ..., c'_n), are permuted encryptions of the same list. More precisely, P wants to prove that one list was obtained by blinding-with-zero and permuting the other, and that he knows a permutation π and random coins r_1, ..., r_n such that c'_i = Blind(c_{π(i)}, 0; r_i) for all i.9 An efficient zero-knowledge proof for this problem was described by Abe [1]. (Although Abe's proof assumes ElGamal encryption, it is easy to see that any blindable encryption will do.) For self-containment, we describe this proof in Figure 3. The protocol in Figure 3 achieves knowledge error of 1/2. There are known transformations that can be used to derive a constant-round, negligible-error protocol from a three-round, constant-error one. (Specifically, we can get a 5-round, negligible-error zero-knowledge proof-of-knowledge for Encrypted List Correspondence.) We discuss this issue further in Appendix D.
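One round of such a cut-and-choose proof can be sketched as follows. A toy rerandomizable "exponential ElGamal" (insecure parameters) stands in for the blindable encryption, and the round structure is our reading of the Abe-style proof, with coins combining additively as in equation (2); it is illustrative, not the paper's exact protocol:

```python
import random

# Toy rerandomizable encryption: blinding by zero with explicit coins s.
# Insecure parameters (p = 23, q = 11); coins combine additively mod q.
p, q, g = 23, 11, 4
x = 3; h = pow(g, x, p)                       # any fixed key works here

def enc(m, r):
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def blind0(c, s):                             # rerandomize with coins s
    a, b = c
    return (a * pow(g, s, p) % p, b * pow(h, s, p) % p)

n = 4
c = [enc(m, random.randrange(q)) for m in (1, 2, 3, 4)]

# Prover's witness: permutation pi and coins r with c2[i] = blind0(c[pi[i]], r[i]).
pi = random.sample(range(n), n)
r = [random.randrange(q) for _ in range(n)]
c2 = [blind0(c[pi[i]], r[i]) for i in range(n)]

# Commitment move: a fresh blinded permutation d of the list c.
lam = random.sample(range(n), n)
t = [random.randrange(q) for _ in range(n)]
d = [blind0(c[lam[i]], t[i]) for i in range(n)]

for challenge in (0, 1):                      # the verifier's coin
    if challenge == 0:
        # Open the link c -> d: reveal lam and the coins t.
        assert all(d[i] == blind0(c[lam[i]], t[i]) for i in range(n))
    else:
        # Open the link c2 -> d: reveal rho = pi^{-1} o lam and adjusted coins,
        # computable because the prover knows both t and r (equation (2)).
        inv_pi = {pi[i]: i for i in range(n)}
        rho = [inv_pi[lam[i]] for i in range(n)]
        u = [(t[i] - r[rho[i]]) % q for i in range(n)]
        assert all(d[i] == blind0(c2[rho[i]], u[i]) for i in range(n))
```

A cheating prover who knows no linking permutation can answer at most one of the two challenges, giving the knowledge error of 1/2 mentioned above.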
In the first sub-protocol, the Preparer needs to prove that its list of ciphertexts is a permuted encryption of the known list (a_1, b_1), ..., (a_n, b_n). This can be done by having the prover encrypt the list in some canonical manner (say, using the all-zero random string) to generate a ciphertext list, and then prove using the ELC protocol that the two lists are encryptions of the same list. The only problem here is that the original list of ciphertexts was not obtained by blinding the canonical list, but rather by directly encrypting the plaintext list. Hence, to use this protocol we slightly change the protocol CES-2 itself, and have the Preparer encrypt the plaintext list by first encrypting it in a canonical manner, and then blinding with zero and permuting the latter list. We stress that due to Equation (2) (from the definition of blindable encryption), this change in the protocol does not change the distribution of the ciphertext list.
9 We remark that a similar proof can be shown even when the two lists were not generated as blindings of each other, and even if the two lists were obtained using two different blindable encryption schemes.
At first glance, the problem in the second sub-protocol seems substantially different from the Encrypted List Correspondence problem. Furthermore, constructing a simple protocol for proving that a ciphertext was obtained as a blinding of some ciphertext in the list (without revealing which one) seems rather hard. Nonetheless, we can pose this problem too as an instance of Encrypted List Correspondence. To this end, we again slightly modify the protocol CES-2, by having the Chooser blind with zero and re-permute the entire list (rather than just pick and blind one ciphertext), and then prove in zero-knowledge that it knows the corresponding permutation. Specifically, the Chooser selects a random permutation over {1, ..., n}, permutes the entire list of ciphertexts according to it, and then blinds with zero each of the ciphertexts. The entire new list is then sent to the Preparer. The Preparer only uses a single, agreed-upon ciphertext from the list (say, the first one) in the rest of the protocol, but it uses the whole list to verify the proof. (Therefore, the effective index chosen by the Chooser is the one that its permutation maps to the agreed-upon position.) To prove that it chose the list of ciphertexts according to the prescribed protocol, the Chooser now proves that it knows the permutation, which is exactly what is done in the ELC protocol. Two optimizations. Since we only use one ciphertext in the CES-2 protocol (and we do not really care whether the other ciphertexts were obtained properly), we can somewhat simplify and optimize the proof, as follows: The Chooser can send only that one ciphertext (just as it is done in the protocol CES-2). Then, in the zero-knowledge proof, he prepares the full set of encryptions (as if he had actually prepared and sent the whole list). Then, depending on the query bit, he either reveals the correspondence between the original ciphertexts and all the prepared ones, or the correspondence between the sent ciphertext and one of the prepared ones. Another optimization takes advantage of the fact that in the context of the CES-2 protocol, we only need this proof to be witness hiding, rather than fully zero-knowledge.
It is therefore possible to just repeat the protocol from Figure 3 many times in parallel to get a negligible error (and we do not need to worry about adding extra flows of commitments).
B.1 ElGamal Encryption
The key generation algorithm picks a random k-bit prime p of the form p = 2q + 1, where q is prime, and a generator g for the subgroup of quadratic residues modulo p. Then it picks a random x in Z_q and sets h = g^x. The secret key is x; the public key is (p, g, h). To encrypt a message m, one picks a random r in Z_q and sets c = (g^r, m * h^r). The decryption of c = (A, B) outputs B * A^{-x}. The encryption scheme is well known to be semantically secure under the decisional Diffie-Hellman assumption (DDH). To blind a ciphertext (A, B) with blinding factor m', we compute (A * g^s, B * m' * h^s), where s is chosen at random from Z_q. Indeed, if (A, B) = (g^r, m * h^r) (for some unknown m and r), then the result is (g^{r+s}, m * m' * h^{r+s}), which is a random encryption of m * m', since r + s is random when s is. The operation for combining coins is just addition modulo q. Checking that the public key is kosher can be done by verifying that p, q are primes, p = 2q + 1, and g is an element of order q. Thus, no interaction is needed in the key generation phase. Proving that a ciphertext is an encryption of a message requires proving equality of the discrete logarithms of two
known elements with respect to two known bases, which can be done using several known simple protocols [10, 12].
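The blinding operation described above can be exercised directly. The sketch below uses toy parameters p = 23, q = 11, chosen only for illustration and far too small for security; the function names are ours:

```python
import random

# ElGamal as a blindable encryption scheme (multiplicative blinding).
# Toy parameters: p = 2q + 1 = 23; g = 4 generates the quadratic residues.
p, q, g = 23, 11, 4

x = random.randrange(1, q)               # secret key
h = pow(g, x, p)                         # public key h = g^x

def encrypt(m):
    r = random.randrange(q)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def decrypt(c):
    A, B = c
    return B * pow(A, q - x, p) % p      # A^{-x} = A^{q-x} in the order-q group

def blind(c, m_blind):
    """Multiply the hidden plaintext by m_blind without the secret key."""
    s = random.randrange(q)
    A, B = c
    return (A * pow(g, s, p) % p, B * m_blind * pow(h, s, p) % p)

m, m_blind = 2, 3                        # both quadratic residues mod 23
c = encrypt(m)
assert decrypt(c) == m
assert decrypt(blind(c, m_blind)) == m * m_blind % p
# Blinding by the identity element just rerandomizes the ciphertext:
assert decrypt(blind(c, 1)) == m
```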
B.2 Goldwasser-Micali Encryption
This is the original semantically secure scheme introduced by Goldwasser and Micali [22]. The message is encrypted bit by bit. The key generation algorithm picks a random N = pq, a product of two k-bit primes, together with any quadratic non-residue y with Jacobi symbol +1 (in case N is a Blum integer, i.e. p = q = 3 (mod 4), one can fix y = -1). The public key is N and y, and the secret key is p and q. To encrypt a bit b, one picks a random r in Z*_N and returns y^b * r^2 mod N (i.e. a random square for b = 0 and a random non-square for b = 1). To decrypt, one simply determines whether the ciphertext is a square modulo N, i.e. modulo both p and q. The scheme is semantically secure under the quadratic residuosity assumption. The scheme is clearly blindable, since if c = y^b * r^2 and s is random, then c * y^{b'} * s^2 = y^{b+b'} * (rs)^2 (where the addition b + b' is done modulo 2, and all the other operations are done modulo N). Proving that a public key is committing requires proving that y is a quadratic non-residue modulo N, which can be done efficiently. Proving that c is an encryption of b can be done by proving quadratic residuosity.
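The XOR-blinding property can likewise be exercised with toy parameters (N = 7 * 11 = 77 and y = 6, a non-residue modulo both primes; insecure, for illustration only, and the function names are ours):

```python
import random

# Goldwasser-Micali with toy parameters.  y = 6 is a non-residue modulo
# both 7 and 11, hence it has Jacobi symbol +1 modulo N = 77.
P, Q = 7, 11
N, y = P * Q, 6

def random_unit():
    """A random element of Z*_N (invertible modulo N)."""
    r = random.randrange(1, N)
    while r % P == 0 or r % Q == 0:
        r = random.randrange(1, N)
    return r

def encrypt(bit):
    r = random_unit()
    return pow(y, bit, N) * r * r % N        # a square iff bit == 0

def is_square_mod(c, prime):
    return pow(c % prime, (prime - 1) // 2, prime) == 1

def decrypt(c):                              # uses the factorization of N
    return 0 if is_square_mod(c, P) and is_square_mod(c, Q) else 1

def blind(c, bit):
    """XOR the hidden bit with `bit`, without the factorization of N."""
    s = random_unit()
    return c * pow(y, bit, N) * s * s % N

for b in (0, 1):
    for b_blind in (0, 1):
        assert decrypt(blind(encrypt(b), b_blind)) == b ^ b_blind
```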
B.3
It is often useful when an encryption scheme has an efficient zero-knowledge proof for claims of the form "$c$ is an encryption of $m$". Although we do not know how to construct such an efficient protocol based only on the blindable encryption property (other than using generic constructions), we can show a simple and efficient proof for the special case where the decryption algorithm also reveals the random coins that were used in the encryption (as in Goldwasser-Micali). The following simple 3-round protocol is such a proof with soundness error of $1/2$. For this protocol, we assume without loss of generality that

$\mathrm{Blind}(pk, E_{pk}(m), m') \equiv E_{pk}(m \oplus m') \qquad (3)$

Indeed, let $c_0$ be any fixed encryption of $0$ (the identity of $\oplus$). We can then redefine the encryption process to be $E'(m) = \mathrm{Blind}(pk, c_0, m)$. Equation (1) from the definition of blindable encryption shows that $E'(m)$ is indeed a random encryption of $m$, while Equation (2) now immediately implies Equation (3).
We show below a very simple implementation of 1-out-of-$N$ Oblivious Transfer using blindable encryption. Recall that a 1-out-of-$N$ Oblivious Transfer protocol implements the following functionality. There are two players, a Preparer $P$ and a Chooser $C$, who have some private inputs: $P$ has a set of $N$ strings $m_1, \ldots, m_N$, and $C$ has an index $i \in \{1, \ldots, N\}$. At the end of the protocol $C$ should learn $m_i$ and nothing else, while $P$ should learn nothing at all. There are other flavors of oblivious transfer, all known to be equivalent to the above. We let $P$ commit to his input by encrypting each $m_j$, i.e. set $c_j = E(m_j)$ (for all $j$) and send $c_1, \ldots, c_N$ to $C$. Now we are in the situation where the Chooser wants the Preparer to decrypt $c_i$ without telling him $i$. A simple solution that works in the honest-but-curious model is as follows: $C$ chooses a random blinding factor $\beta$, sets $c = \mathrm{Blind}(c_i, \beta)$, asks $P$ to decrypt $c$, and subtracts $\beta$ from the result to recover the correct $m_i$. Since $c$ is the encryption of a random element $m_i \oplus \beta$, $P$ indeed does not learn any information about $i$. To adjust this protocol to work against malicious players, $C$ needs to prove that he knows the index $i$ and the blinding factor $\beta$, and $P$ needs to prove that it decrypted $c$ correctly. The proof of $C$ is essentially the same problem as in the sub-protocol in our CES-2 protocol, with the only difference being that now we also have the blinding factor $\beta$. Accordingly, the protocol for solving it is nearly identical to the ELC protocol, with the only difference being that the prover blinds the encrypted lists with random elements rather than with zeros (and shows the blinding factors when he is asked to open the blinding). Due to space limitations, we omit further details. We note, though, that a small modification of the above protocol implements random 1-out-of-$N$ oblivious transfer, where $C$ should learn $m_i$ for a random $i$. To implement that, $P$ simply chooses a random permutation $\pi$ in the first step and sets $c_j = E(m_{\pi(j)})$.
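The honest-but-curious protocol can be traced step by step. The sketch below instantiates it with the toy ElGamal-based scheme described earlier (parameter choices and variable names are ours); since $\oplus$ is multiplication mod $p$ there, "subtracting" the blinding factor becomes division mod $p$:

```python
from random import randrange

# Toy ElGamal parameters (illustrative only), as in Appendix B.1.
p, q, g = 23, 11, 4

def enc(h, m):
    r = randrange(1, q)
    return (pow(g, r, p), pow(h, r, p) * m % p)

def dec(x, ct):
    c1, c2 = ct
    return c2 * pow(c1, p - 1 - x, p) % p

# Preparer P: key pair, and the encrypted list of his private strings
# (all chosen in QR_p so they are valid plaintexts).
x = randrange(1, q); h = pow(g, x, p)
msgs = [2, 3, 4, 6]
cts = [enc(h, m) for m in msgs]          # P commits by sending these to C

# Chooser C: blind the chosen ciphertext with a random factor beta.
i, beta = 2, 13                          # C's private index; 13 is in QR_p
c1, c2 = cts[i]
s = randrange(1, q)
blinded = (c1 * pow(g, s, p) % p, c2 * pow(h, s, p) * beta % p)

# P decrypts the blinded ciphertext; it encrypts the random element
# msgs[i] * beta, so P learns nothing about i.
opened = dec(x, blinded)

# C "subtracts" beta (here: divides mod p) to recover msgs[i].
recovered = opened * pow(beta, -1, p) % p
assert recovered == msgs[i]
```

(The modular inverse via `pow(beta, -1, p)` requires Python 3.8 or later.)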
Below we describe a known transformation from any 3-round, constant-error zero-knowledge proof-of-knowledge into a 5-round, negligible-error zero-knowledge proof-of-knowledge, that uses trapdoor commitment schemes. We were not able to trace the origin of this transformation, although related ideas and techniques can be found in [15, 27, 13]. Assume that we have some 3-round, constant-error zero-knowledge proof-of-knowledge protocol, and consider the 3-round protocol that we get by running the constant-error protocol many times in parallel. Denote the first prover message in the resulting protocol by $a$, the verifier message by $c$, and the last prover message by $z$. Note that since the original protocol was 3-round, parallel repetition reduces the error exponentially (see proof in [4]). However, this protocol is no longer zero-knowledge. To get a zero-knowledge protocol, we use a trapdoor (or Chameleon) commitment scheme [8]. Roughly, this is a commitment scheme which is computationally binding and unconditionally secret, with the extra property that there exists trapdoor information, knowledge of which enables one to open a commitment in any way one wants. In the zero-knowledge protocol, the prover sends to the verifier in the first round the public key of the trapdoor commitment scheme. The verifier then commits to $c$, the prover sends $a$, the verifier opens the commitment to $c$, and the prover sends $z$ and also the trapdoor for the commitment. The zero-knowledge simulator follows the one for the standard 4-round protocol. The knowledge extractor, on the other hand, first runs one instance of the proof to get the trapdoor, and then it can effectively ignore the commitment in the second round, so it can use the extractor of the original 3-round protocol.
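To make the round structure concrete, here is a sketch of the message flow, using a Pedersen-style commitment as our illustrative trapdoor commitment (this particular instantiation is our choice, not the paper's): $\mathrm{com}(c; r) = g^c h^r$ over a prime-order group, with trapdoor $t = \log_g h$.

```python
from random import randrange

# Toy group (illustrative only): g generates the order-Q subgroup mod P.
P, Q = 23, 11
g = 4

# Round 1: prover picks a commitment key (g, h) and keeps the trapdoor t.
t = randrange(1, Q)
h = pow(g, t, P)

def commit(c, r):
    """Pedersen-style trapdoor commitment: com(c; r) = g^c * h^r mod P."""
    return pow(g, c, P) * pow(h, r, P) % P

# Round 2: verifier commits to its challenge c before seeing anything else.
c, r = randrange(Q), randrange(Q)
com = commit(c, r)

# Round 3: prover would send the first message `a` of the parallel protocol.
# Round 4: verifier opens the commitment; prover checks the opening.
assert commit(c, r) == com

# Round 5: prover would answer challenge c with `z`, and reveals trapdoor t.
# Knowing t, anyone can open `com` to any other challenge c2: since
# com = g^(c + t*r), solve c + t*r = c2 + t*r2 (mod Q) for r2.  This
# equivocation is what lets the extractor reuse the 3-round extractor.
c2 = randrange(Q)
r2 = (r + (c - c2) * pow(t, -1, Q)) % Q
assert commit(c2, r2) == com
```

The two assertions exhibit exactly the two properties used in the analysis: the honest opening verifies (binding, for parties without the trapdoor), and the trapdoor holder can open the same commitment to a different value.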