
A Cryptographic Solution to a Game Theoretic Problem

Yevgeniy Dodis Shai Halevi Tal Rabin

Abstract. Although Game Theory and Cryptography seem to have some similar scenarios in common, it is very rare to find instances where tools from one area are applied in the other. In this work we use cryptography to solve a game-theoretic problem. The problem that we discuss arises naturally in the game-theory area of two-party strategic games. In these games there are two players. Each player decides on a move (according to some strategy), and then the players execute the game, i.e. the two players make their moves simultaneously. Once these moves are played each player receives a payoff, which depends on both moves. Each player only cares to maximize its payoff. In the game-theory literature it was shown that higher payoffs can be achieved by the players if they use correlated strategies. This is enabled through the introduction of a trusted third party (a mediator), who assists the players in choosing their move. However, the property of the game being a two-player game is then lost. It is natural to ask whether a game can exist which would be a two-player game yet maintain the high payoffs which the mediator-aided strategy offered. We answer this question affirmatively. We extend the game by adding an initial step in which the two players communicate, and then they proceed to execute the game as usual. For this extended game we can prove (informally speaking) the following: any correlated strategy for 2-player games can be achieved, provided that the players are computationally bounded and can communicate before playing the game. We obtain an efficient solution to the above game-theoretic problem by providing a cryptographic protocol for the following Correlated Element Selection problem. Alice and Bob both know a list of pairs (a_1, b_1), ..., (a_n, b_n) (possibly with repetitions), and they want to pick a random index i such that Alice learns only a_i and Bob learns only b_i. We believe that this problem has other applications, beyond our application to game theory.
Our solution is quite efficient: it has a constant number of rounds, negligible error probability, and uses only very simple zero-knowledge proofs. The protocol that we describe in this work uses blindable encryption schemes (such as ElGamal or Goldwasser-Micali) as a basic building block. We note that such schemes seem to be a very useful general primitive for constructing efficient protocols. As an example, we show a simple 1-out-of-2 oblivious transfer protocol based on any such encryption scheme.

Key words. Game theory, Nash equilibria, Correlated equilibria, Element selection, Correlated coins, Coin Flipping, Blindable encryption, Oblivious transfer.

Lab. of Computer Science, Massachusetts Institute of Technology, 545 Tech Square, Cambridge, MA 02139, USA. Email: yevgen@theory.lcs.mit.edu. IBM T.J. Watson Research Center, P.O. Box 704, Yorktown Heights, New York 10598, USA. Email: {shaih,talr}@watson.ibm.com.

1 Introduction
The research areas of Game Theory and Cryptography are both extensively studied fields with many problems and solutions. Yet, the cross-over between them is surprisingly small: very rarely (if at all) are tools from one area borrowed to address problems in the other. In this paper we exhibit the benefits which arise in such a combined setting. Namely, we show how cryptographic tools can be used to address a natural problem in the Game Theory world. We hope that this work will encourage greater synergy between these classical fields.

1.1 Two Player Strategic Games


The game-theoretic problem that we consider in this work belongs to the general area of two-player strategic games, which is an important field in Game Theory (see [18, 30]). In the most basic notion of a two-player game, there are two players, each with a set of possible moves. The game itself consists of each player choosing a move from its set, and then both players executing their moves simultaneously. The rules of the game specify a payoff function for each player, which is computed on the two moves. Thus, the payoff of each player depends both on its move and on the move of the other player. A strategy for a player is a (possibly randomized) method for choosing its move. The fundamental assumption of game theory is that each player is selfish and rational, i.e. its sole objective is to maximize its (expected) payoff. A pair of players' strategies achieves an equilibrium when these strategies are self-enforcing, i.e. each player's strategy is an optimal response to the other player's strategy. In other words, once a player has chosen a move and believes that the other player will follow its strategy, its (expected) payoff will not increase by changing this move. This notion was introduced in the classical work of Nash [29].

In a Nash equilibrium, each player chooses its move independently of the other player. (Hence, the induced distribution over the pairs of moves is a product distribution.) Yet, Aumann [2] showed that in many games, the players can achieve much higher expected payoffs, while preserving the self-enforcement property, if their strategies are correlated (so the induced distribution over the pairs of moves is no longer a product distribution). To actually implement such a correlated equilibrium, the model of the game is modified and a trusted third party (called a mediator) is introduced. This mediator chooses the pair of moves according to the right distribution and privately tells each player what its designated move is.
Since the strategies are correlated, the move of one player typically carries some information on the move of the other player. In a Correlated equilibrium, no player has an incentive to deviate from its designated move, even knowing this extra information about the other players move.

1.2 Removing the Mediator


As the game was intended for two players, it is natural to ask if correlated equilibria can be implemented without the mediator. In the language of cryptography, we ask if we can design a two-party game to eliminate the trusted third party from the original game. It is well known that in the standard cryptographic models the answer is positive, provided that the two players can interact, that they are computationally bounded, and assuming some standard hardness assumptions ([21, 32]). We show that this positive answer carries over also to the Game Theory model. Specifically, we consider an extended game, in which the players first exchange messages, and then they choose their moves and execute them simultaneously. (The payoffs are still computed as a function of the moves, according to the same payoff function as in the original game.) Also, we define a computational Nash equilibrium as one where the strategies of both players are restricted to probabilistic polynomial time. Then, we prove the following:

Theorem 1 If secure two-party protocols exist for non-trivial functions, then for any Correlated equilibrium s of the original game G, there exists an extended game G' with a computational Nash equilibrium σ, such that the payoffs for both players are the same in (G, s) and (G', σ).

In other words, any Correlated equilibrium payoffs of G can be achieved using a computational Nash equilibrium of G'. Thus, the mediator can be avoided if the players are computationally bounded and can communicate prior to the game. We stress that although this theorem is quite natural from a cryptography point of view, the models of Game Theory and Cryptography are different, and thus proving it in the Game Theory framework requires some care. In particular, two-party cryptographic protocols always assume that at least one player is honest, while the other player could be arbitrarily malicious. In the game-theoretic setting, on the other hand, both players are selfish and rational: they (certainly) deviate from the protocol if they benefit from it, and (can be assumed to) follow their protocol otherwise. Also, it is important to realize that in this setting we cannot use cryptography to enforce honest behavior. The only thing that the players are able to do is to choose their moves and execute them at the end of the game. Hence, even the most elaborate protocol would be useless if a cheating player can simply ignore the fact that it was caught cheating during the protocol, and nonetheless choose a move that maximizes its profit. We discuss these issues further in Section 3.

1.3 Doing it Efciently


Although the assumption of Theorem 1 can be realized using tools of generic two-party computation [21, 32], it would be nice to obtain extended games (i.e. protocols) which are more efficient than the generic ones. In Section 4 we observe that in many cases, the underlying cryptographic problem reduces to a problem which we call Correlated Element Selection. We believe that this natural problem has other cryptographic applications and is of independent interest. In this problem, two players know a list of pairs (a_1, b_1), ..., (a_n, b_n) (maybe with repetitions), and they need to jointly choose a random index i, so that the first player only learns the value a_i and the second player only learns the value b_i.1 We therefore dedicate a part of the paper to presenting an efficient cryptographic solution to the Correlated Element Selection problem. Our final protocol is very intuitive, has a constant number of rounds, negligible error probability, and uses only very simple zero-knowledge proofs. We note that some of our techniques (in particular, the zero-knowledge proofs) are similar to those used for mix networks (see [1, 24] and the references therein), even though our usage and motivation are quite different. Our protocol for Correlated Element Selection uses blindable encryption as a tool (which can be viewed as a counterpart of blindable signatures [9]). Stated roughly, blindable encryption has the following property: given an encryption of an (unknown) message m and an additional message m', a random encryption of m + m' can be easily computed. This should be done without knowing m or the secret key. Examples of semantically secure blindable encryption schemes (under appropriate assumptions) include Goldwasser-Micali [22], ElGamal [16] and Benaloh [5]. (In fact, for our Correlated Element Selection protocol, it is sufficient to use a weaker notion of blindability, such as the one in [31].)
Aside from our main application, we also observe that blindable encryption appears to be a very convenient tool for devising efficient two-party protocols, and suggest that it might be used more often. To demonstrate this, we show in Appendix C a very simple protocol achieving 1-out-of-2 Oblivious Transfer from any secure blindable encryption scheme.2
1 A special case of Correlated Element Selection, when the two elements in each pair are equal, is just the standard coin-flipping problem [7]. However, this is a degenerate case of the problem, since it requires no secrecy. In particular, none of the previous coin-flipping protocols seem to extend to solve our problem. 2 It is known that oblivious transfer is complete for two-party secure computation [14, 26]; hence blindable encryption schemes are sufficient for secure two-party computation.

1.4 Related Work


Game Theory. Realizing the advantages of removing the mediator, various papers in the Game Theory community have tried to achieve this goal. Barany [3] substitutes the trusted mediator with four (potentially untrusted) players. These players (two of which are the actual players who need to play the game) distributively (and privately) compute the moves for the two active players. This protocol works in an information-theoretic setting (which explains the need for four players; see [6]). Of course, if one is willing to use a group of players to simulate the mediator, then the general multiparty computation tools (e.g. [6, 11]) can also be used, even though the solution of [3] is simpler and more efficient. The work of Lehrer and Sorin [25] describes protocols that reduce the role of the mediator (the mediator in their protocol computes some function on values which the players chose). Cryptography. We already mentioned the relation of our work to generic two-party secure computation [21, 32] and to mix networks [1, 24]. Additionally, encryption schemes with various blinding properties have been used for many different purposes, including, among others, secure storage [19] and secure circuit evaluation [31].

2 Background in Game Theory


Two-player Games. Although our results apply to a much larger class of two-player games3, we demonstrate them on the simplest possible class of finite strategic games (with perfect information). Such a game has two players, 1 and 2, each of whom has a finite set A_i of possible actions and a payoff function u_i : A_1 x A_2 -> R (for i = 1, 2), known to both players. The players move simultaneously, each choosing an action a_i in A_i. The payoff of player i is u_i(a_1, a_2). The (probabilistic) algorithm that tells player i which action to take is called its strategy, and a pair of strategies is called a strategy profile. In our case, a strategy s_i of player i is simply a probability distribution over its actions A_i, and a strategy profile s is a probability distribution over A_1 x A_2. Game Theory assumes that each player is selfish and rational, i.e. only cares about maximizing its (expected) payoff. As a result, we are interested in strategy profiles that are self-enforcing: even knowing the strategy of the other player, each player still has no incentive to deviate from its own strategy. Such a strategy profile is called an equilibrium.

Nash equilibrium. This is the best known notion of an equilibrium [29]. It corresponds to a strategy profile in which the players' strategies are independent. More precisely, the induced distribution over the pairs of actions must be a product distribution, s = s_1 x s_2. Deterministic (or pure) strategies are a special case of such strategies, where s_i assigns probability 1 to some action. For strategies s_1 and s_2, we denote by u_i(s_1, s_2) the expected payoff for player i when the players independently follow s_1 and s_2.

Definition 1 A Nash equilibrium of a game G is an independent strategy profile (s_1*, s_2*), such that for any a_1 in A_1 and a_2 in A_2, we have u_1(s_1*, s_2*) >= u_1(a_1, s_2*) and u_2(s_1*, s_2*) >= u_2(s_1*, a_2).

In other words, given that player 2 follows s_2*, s_1* is an optimal response of player 1, and vice versa.

Correlated equilibrium. While Nash equilibrium is quite a natural and appealing notion (since players can follow their strategies independently of each other), one can wonder if it is possible to achieve higher expected payoffs if one allows correlated strategies.
3 For example, extensive games with perfect information and simultaneous moves; see [30].

In a correlated strategy profile [2], the induced distribution over A_1 x A_2 can be an arbitrary distribution, not necessarily a product distribution. This can be implemented by having a trusted party (called a mediator) sample a pair of actions (a_1, a_2) according to some joint probability distribution s, and recommend the action a_i to player i. We stress that knowing a_1, player 1 now knows a conditional distribution over the actions of the other player (which can be different for different a_1's), but knows nothing more (and symmetrically for player 2). We denote these conditional distributions by s_2|a_1 and s_1|a_2. For any actions a_1 and a_1', let u_1(a_1', s_2|a_1) be the expected value of u_1(a_1', a_2) when a_2 is distributed according to s_2|a_1 (and similarly for player 2). In other words, u_1(a_1', s_2|a_1) measures the expected payoff of player 1 if its recommended action was a_1 (thus, a_2 is distributed according to s_2|a_1), but it decided to play a_1' instead. As before, we let u_i(s) be the expected value of u_i(a_1, a_2) when (a_1, a_2) are drawn according to s. Similarly to the notion of Nash equilibrium, we now define a Correlated equilibrium, which ensures that players have no incentive to deviate from the recommendation they got from the mediator.

Definition 2 A Correlated equilibrium is a strategy profile s, such that for any (a_1, a_2) in the support of s, and any a_1' in A_1 and a_2' in A_2, we have u_1(a_1', s_2|a_1) <= u_1(a_1, s_2|a_1) and u_2(s_1|a_2, a_2') <= u_2(s_1|a_2, a_2).

Given a Nash (resp. Correlated) equilibrium s, we say that s achieves Nash (resp. Correlated) equilibrium payoffs (u_1(s), u_2(s)). It is known that Correlated equilibria can give equilibrium payoffs outside of (and better than!) the convex hull of the Nash equilibria payoffs, as demonstrated in the following simple example, first observed by Aumann [2], who also defined the notion of Correlated equilibrium.

Game of Chicken. We consider a simple game, the so-called game of Chicken shown in the tables below (much more dramatic examples can be shown in larger games). Here each player can either "dare" (D) or "chicken out" (C). The combination (D, D) has a devastating effect on both players (payoffs (0, 0)), (C, C) is quite good (payoffs (4, 4)), while each player would ideally prefer to dare while the other chickens out (giving him 5 and the opponent 1).

          C     D                C     D                C     D
    C    4,4   1,5         C    1/4   1/4        C    1/3   1/3
    D    5,1   0,0         D    1/4   1/4        D    1/3    0
       Chicken               Mixed Nash             Correlated

While the wisest pair of actions is (C, C), this is not a Nash equilibrium, since both players are willing to deviate to D (believing that the other player will stay at C). The game is easily seen to have three Nash equilibria: (C, D), (D, C), and the mixed equilibrium in which each player plays C and D with probability 1/2 each (inducing the "Mixed Nash" distribution above). The respective Nash equilibrium payoffs are (1, 5), (5, 1) and (2 1/2, 2 1/2). We see that the first two pure-strategy Nash equilibria are unfair, while the mixed equilibrium has small payoffs, since the mutually undesirable outcome (D, D) happens with non-zero probability in the product distribution. The best fair equilibrium in the convex hull of the Nash equilibria is the combination 1/2 (C, D) + 1/2 (D, C), yielding payoffs (3, 3). On the other hand, the "Correlated" profile s above (probability 1/3 on each of (C, C), (C, D) and (D, C)) is a correlated equilibrium, yielding payoffs (3 1/3, 3 1/3), which is better than any convex combination of Nash equilibria. To briefly see it, consider the row player 1 (the same argument works for player 2). If it is recommended to play C, its expected payoff is 1/2 * 4 + 1/2 * 1 = 2 1/2 since, conditioned on a_1 = C, player 2 is recommended to play C and D with probability 1/2 each. If player 1 switched to D, its expected payoff would still be 1/2 * 5 + 1/2 * 0 = 2 1/2, making player 1 reluctant to switch. Similarly, if player 1 is recommended D, it knows that player 2 plays C (as (D, D) is never played in s), so its payoff is 5. Since this is the maximum payoff of the game, player 1 would not benefit by switching to C in this case. Thus, we indeed have a Correlated equilibrium, where each player's payoff is 3 1/3, as claimed.
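The arithmetic in the Chicken example can be checked mechanically. The following sketch (plain Python with exact rationals; the helper `expected` is ours, not from the paper) encodes the payoff table and verifies both the payoff claims and the self-enforcement condition for the correlated profile.

```python
# Verify the Game of Chicken numbers: payoffs under the mixed Nash
# equilibrium and under the correlated profile, plus self-enforcement.
from fractions import Fraction as F

# u[(row, col)] = (payoff to player 1, payoff to player 2)
u = {('C', 'C'): (4, 4), ('C', 'D'): (1, 5),
     ('D', 'C'): (5, 1), ('D', 'D'): (0, 0)}

def expected(dist, player):
    """Expected payoff of `player` (0 or 1) under a distribution over action pairs."""
    return sum(p * F(u[pair][player]) for pair, p in dist.items())

# Mixed Nash: each player plays C/D with probability 1/2 => product distribution.
mixed = {pair: F(1, 4) for pair in u}
assert expected(mixed, 0) == F(5, 2)          # payoff 2 1/2 for each player

# Correlated equilibrium: 1/3 on each of (C,C), (C,D), (D,C).
corr = {('C', 'C'): F(1, 3), ('C', 'D'): F(1, 3),
        ('D', 'C'): F(1, 3), ('D', 'D'): F(0)}
assert expected(corr, 0) == F(10, 3)          # payoff 3 1/3, beats any Nash mixture

# Self-enforcement for player 1: conditioned on its recommendation,
# no deviation increases its expected payoff.
for rec in ('C', 'D'):
    cond = {b: corr[(rec, b)] for b in ('C', 'D')}
    total = sum(cond.values())
    cond = {b: p / total for b, p in cond.items()}
    obey = sum(cond[b] * F(u[(rec, b)][0]) for b in ('C', 'D'))
    for dev in ('C', 'D'):
        assert sum(cond[b] * F(u[(dev, b)][0]) for b in ('C', 'D')) <= obey
```

By symmetry the same check passes for the column player.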

3 Removing the Mediator


In this section we show how to remove the mediator using cryptographic means. We assume the existence of generic secure two-party protocols and show how to achieve our goal by using such protocols in the game-theoretic (rather than their designated cryptographic) setting. In other words, the players remain selfish and rational, even when running the cryptographic protocol. In Section 4 we give a specific efficient implementation for the types of cryptographic protocols that we need.

Extended Games. To remove the mediator, we assume that the players are (1) computationally bounded and (2) can communicate prior to playing the original game, which we believe are quite natural and minimalistic assumptions. To formally define the computational power of the players, we introduce an external security parameter into the game, and require that the strategies of both players can be computed in probabilistic polynomial time in the security parameter.4 To incorporate communication into the game, we consider an extended game, which is composed of three parts: first the players are given the security parameter and they freely exchange messages (i.e., execute any two-party protocol), then each player locally selects its move, and finally both players execute their moves simultaneously.5 The final payoffs of the extended game are just the corresponding payoffs of the original game applied to the players' simultaneous moves at the last step. The notions of a strategy and a strategy profile are straightforwardly generalized from those of the basic game. Similarly to the notion of a Nash equilibrium, we define the notion of a computational Nash equilibrium of the extended game, where the strategies of both players are restricted to probabilistic polynomial time.
Also, since we are talking about a computational model, the definition must account for the fact that the players may break the underlying cryptographic scheme with negligible probability (e.g., by guessing the secret key), thus gaining some advantage in the game.

Definition 3 A computational Nash equilibrium of an extended game is an independent strategy profile (σ_1*, σ_2*), such that (a) both σ_1* and σ_2* are PPT computable; and (b) for any other PPT computable strategies σ_1 and σ_2, there exists a negligible function ε such that on security parameter k, we have u_1(σ_1, σ_2*) <= u_1(σ_1*, σ_2*) + ε(k) and u_2(σ_1*, σ_2) <= u_2(σ_1*, σ_2*) + ε(k).

The idea of getting rid of the mediator is now very simple. Consider a Correlated equilibrium s of the original game G. Recall that the job of the mediator is to sample a pair of actions (a_1, a_2) according to the distribution s, and to give a_i to player i. We can view the mediator as a trusted party who securely computes a probabilistic (polynomial-time) function. Thus, to remove it we can have the two players execute a cryptographic protocol P that securely computes this function. The strategy of each player would be to follow the protocol P, and then play the action that it got from P. Yet, several issues have to be addressed in order to make this idea work. First, the above description does not completely specify the strategies of the players. A full specification of a strategy must also indicate what a player should do if the other player deviates from its strategy (in our case, does not follow the protocol P). While cryptography does not address this question, it is crucial to resolve it in our setting, since the game must go on: no matter what happens inside P, both players eventually have to take simultaneous actions, and receive the corresponding payoffs (which they wish to maximize). Hence we must explain how to implement a punishment for deviation within the game-theoretic framework.
4 Note that the parameters of the original game (like the payoff functions, the correlated equilibrium distribution, etc.) are all independent of the security parameter, and thus can always be computed in constant time. 5 The notion of an extended game is a special case of the well-studied notion of an extensive game; see [30].

Punishment for Deviations. We employ the standard game-theoretic solution, which is to punish the cheating player to its minimax level. This is the smallest payoff that one player can force the other player to have. Namely, the minimax level of player 1 is v_1 = min_{s_2} max_{s_1} u_1(s_1, s_2), and similarly the minimax level of player 2 is v_2 = min_{s_1} max_{s_2} u_2(s_1, s_2). To complete the description of our proposed equilibrium, we let each player punish the other player to its minimax level if the other player deviates from P (and is caught). Namely, if player 2 cheats, player 1 will play in the last stage of the game the strategy achieving the minimax payoff v_2 for player 2, and vice versa. First, we observe the following simple fact:

Lemma 1 Let (u_1(s), u_2(s)) be the payoffs achieved by Correlated equilibrium s. Then u_1(s) >= v_1 and u_2(s) >= v_2.

Proof: Consider player 1. Let s_2 be the marginal strategy of player 2 in the Correlated equilibrium s, and let s_1' be the best (independent) response of player 1 to s_2. (The strategy s_1' can be thought of as what player 1 should do if it knows that player 2 plays according to s_2, but it did not get any recommendation from the mediator.) Since s is a Correlated equilibrium, it follows that u_1(s) >= u_1(s_1', s_2): a particular deviation of player 1 from the correlated equilibrium is to ignore its recommendation and always play s_1', and no such deviation can increase the payoff of player 1. Also, recall that s_1' is the best (independent) strategy in response to s_2, so we have u_1(s_1', s_2) = max_{s_1} u_1(s_1, s_2) >= min_{s_2} max_{s_1} u_1(s_1, s_2) = v_1. Hence we get u_1(s) >= v_1.

The same holds for player 2. We are now ready to prove Theorem 1, which we recall from the introduction:

Theorem 1 If secure two-party protocols exist for non-trivial functions, then for any Correlated equilibrium s of the original game G, there exists an extended game G' with a computational Nash equilibrium σ, such that the payoffs for both players are the same in (G, s) and (G', σ).

Proof: (sketch) The extended game G' is the game G, preceded by a cryptographic protocol P for securely computing the mediator's sampling function. (Such a protocol exists by our assumption.) The computational Nash equilibrium σ consists of both players following their part in the protocol P, executing the moves that they got from this protocol, and punishing a caught deviation to the minimax level as described above. Clearly, this strategy achieves the same payoffs as the original correlated equilibrium. We only need to show that it is a computational Nash equilibrium. Indeed, if a player believes that the other player follows its designated strategy (i.e., behaves honestly in the protocol P), then the correctness of P implies that the probability of this player cheating without getting caught is negligibly small. If we denote this probability by ε, and denote the highest payoff that player i can get in the original game by M, then the expected payoff of a strategy that includes cheating in P is at most ε M + (1 - ε) v_i <= ε M + u_i(s), where the inequality follows from Lemma 1. Since M is just some constant which does not depend on the security parameter, this is at most negligibly larger than u_i(s). (We notice that a particular type of cheating (in the cryptographic setting) is early stopping. Since the extended game must always result in players getting payoffs, stopping is not an issue in game theory: it will be punished by the minimax level as well.) Finally, if no player has cheated in P, the privacy of P implies that we achieve exactly the same effects as with the mediator: each player learns only its own move, and learns nothing about the other player's move except for what is implied by its own move (and possibly except for a negligible additional advantage). Since s is a Correlated equilibrium, both players will indeed take the action they output in P.

A few remarks are in order. First, since the extended game produces some distribution over the moves of the players, any Nash equilibrium payoffs achievable in the extended game are also achievable as Correlated equilibrium payoffs of the original game. Thus, Theorem 1 is the best possible result we could hope for.6 Also, one question that may come to mind is why a player would want to carry out a minimax punishment when it catches the other player cheating (since this punishment may also hurt the punishing player). However, the notion of Nash equilibrium only requires players' actions to be optimal provided the other player follows its strategy. Thus, it is acceptable to carry out the punishment even if this results in a loss for both players (the cheating player should have been rational and should not have cheated in the first place). We note that this oddity (known as an "empty threat" in the game-theoretic literature) is not just a feature of our solution, but rather an inherent weakness of the notion of Nash equilibrium.

4 The Correlated Element Selection Problem


In most common games, the joint strategy of the players is described by a short list of pairs (a_1, b_1), ..., (a_n, b_n), where the strategy is to choose one pair (a_i, b_i) at random from this list, and have Player 1 play a_i and Player 2 play b_i. (For example, in the game of Chicken the list consists of the three pairs (C, C), (C, D), (D, C).)7 Hence, to obtain an efficient solution for such games, we need an efficient cryptographic protocol for the following problem: two players know a list of pairs (a_1, b_1), ..., (a_n, b_n) (maybe with repetitions), and they need to jointly choose a random index i, with Player 1 learning only the value a_i and Player 2 learning only the value b_i. We call this problem the Correlated Element Selection problem. In this section we describe our efficient solution for this problem. We start by presenting some notations and tools that we use (in particular, blindable encryption schemes). We then show a simple protocol that solves this problem in the special case where the two players are honest but curious, and explain how to modify this protocol to handle the general case where the players can be malicious.
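Before turning to the cryptographic protocol, it may help to write down the ideal functionality that the mediator computes. The sketch below is a minimal trusted-party version (the function name `correlated_element_selection` is ours); the rest of this section is about computing the same outputs without the trusted party.

```python
# Ideal (trusted-party) version of Correlated Element Selection:
# sample a uniform index i and split the pair between the players.
import random

def correlated_element_selection(pairs, rng=random):
    """Sample i uniformly; a_i is delivered privately to Player 1,
    b_i privately to Player 2."""
    i = rng.randrange(len(pairs))
    a_i, b_i = pairs[i]
    return a_i, b_i

# Game of Chicken: one uniform pick from this list implements the
# correlated equilibrium (1/3 each on (C,C), (C,D), (D,C)).
chicken = [('C', 'C'), ('C', 'D'), ('D', 'C')]
move1, move2 = correlated_element_selection(chicken)
assert (move1, move2) in chicken
```

A non-uniform distribution is handled, as in footnote 7, by repeating high-probability pairs in the list.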

4.1 Notations and Tools


We denote by [n] the set {1, 2, ..., n}. For a randomized algorithm A and an input x, we denote by A(x) the output distribution of A on x, and by A(x; r) we denote the output string when using the randomness r. If one of the inputs to A is considered a key, then we write it as a subscript (e.g., A_k(x)). We use pk to denote public keys and sk to denote secret keys.

The main tool that we use in our protocol is blindable encryption schemes. Like all public-key encryption schemes, blindable encryption schemes include algorithms for key generation, encryption and decryption. In addition, they also have blinding and combining algorithms. We denote these algorithms by Gen, Enc, Dec, Blind, and Combine, respectively. Below we formally define the blinding and combining functions. In this definition we assume that the message space forms a group (which we denote as an additive group with identity 0).

Definition 4 (Blindable encryption) A public-key encryption scheme is blindable if there exist (PPT) algorithms Blind and Combine such that for every message m and every ciphertext c encrypting m:

For any message m' (also referred to as the blinding factor), Blind_pk(c, m') produces a random encryption of m + m'. Namely, the distribution of Blind_pk(c, m') should be equal to the distribution of a fresh encryption:

    Blind_pk(c, m') = Enc_pk(m + m')                                (1)

If r_1, r_2 are the random coins used by two successive blindings, then for any two blinding factors m_1, m_2:

    Blind_pk(Blind_pk(c, m_1; r_1), m_2; r_2) = Blind_pk(c, m_1 + m_2; Combine_pk(r_1, r_2))    (2)

Thus, in a blindable encryption scheme anyone can randomly translate the encryption of m into an encryption of m + m', without knowledge of m or the secret key, and there is an efficient way of combining several blindings into one operation. Both the ElGamal and the Goldwasser-Micali encryption schemes can be extended into blindable encryption schemes. We note that most of the components of our solution are independent of the specific underlying blindable encryption scheme, but there are some aspects that still have to be tailored to each scheme. (Specifically, proving that the key generation process was done correctly is handled differently for different schemes; see Section 4.4.) The reader is referred to Section B for further discussion of the blindable encryption primitive.

6 For example, there is no way to enforce the desirable outcome (C, C) in the Game of Chicken, no matter which cryptographic protocols we design. Indeed, both players will want to deviate to D before making their final moves. 7 Choosing from the list with a distribution other than uniform can be accommodated by having a list with repetitions, where a high-probability pair appears many times.
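To make Definition 4 concrete, here is a minimal sketch of ElGamal as a blindable scheme. Note that ElGamal's message space is a multiplicative group, so the "additive" notation above becomes multiplication and the identity is 1 rather than 0; the tiny hard-coded parameters are purely illustrative and offer no security.

```python
# Toy ElGamal as blindable encryption: Blind(c, m') multiplies in m'
# and re-randomizes the ciphertext, without sk or knowledge of m.
# Parameters are tiny and illustrative only -- NOT secure.
import random

p, g = 23, 5                      # 5 generates the multiplicative group mod 23

def keygen(rng=random):
    sk = rng.randrange(1, p - 1)
    return pow(g, sk, p), sk      # pk = g^sk mod p

def enc(pk, m, rng=random):
    r = rng.randrange(1, p - 1)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def dec(sk, c):
    c1, c2 = c
    return c2 * pow(c1, -sk, p) % p      # modular inverse via negative exponent

def blind(pk, c, m_prime, rng=random):
    """Translate an encryption of m into a fresh encryption of m * m'."""
    t = rng.randrange(1, p - 1)
    c1, c2 = c
    return (c1 * pow(g, t, p) % p, c2 * m_prime * pow(pk, t, p) % p)

pk, sk = keygen()
c = enc(pk, 4)
assert dec(sk, c) == 4
assert dec(sk, blind(pk, c, 6)) == 4 * 6 % p   # blinding shifts the plaintext
assert dec(sk, blind(pk, c, 1)) == 4           # blinding factor 1 = re-randomization
```

Property (2) also holds here: two successive blindings with factors m_1, m_2 equal one blinding with factor m_1 * m_2 and combined randomness t_1 + t_2.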

4.2 A Protocol for the Honest-but-Curious Case


Let us recall the Correlated Element Selection problem. Two players share a public list of pairs . For reasons that will soon become clear, we call the two players the Preparer ( ) and the Chooser ( ). The players wish to pick a random index such that only learns and only learns . Figure 1 describes the Correlated Element Selection protocol for the honest-but-curious players. We employ a semantically secure blindable encryption scheme and for simplicity, we assume that the keys for this scheme were chosen by a trusted party ahead of time and given to , and that the public key was also given to . We briey discuss some key generation issues in Section 4.4. The protocol is very simple: First the Preparer randomly permutes the list, encrypts it element-wise and sends the resulting list to the Chooser. (Since the encryption is semantically secure, the Chooser cannot extract any useful information about the permutation .) The Chooser picks a random pair of ciphertexts from the permuted list (so the nal output pair will be the decryption of these ciphertexts). It then blinds with (i.e. makes a random encryption of the same plaintext), blinds with a random blinding factor , and sends the resulting pair of ciphertexts back to the Preparer. Decryption of gives the Preparer its element (and nothing more, since is a random encryption of after the blinding with ), while the decryption of does not convey the value of the actual encrypted message since it was blinded with a random blinding factor. The Preparer sends to the Chooser, who recovers his element by subtracting the blinding factor . It is easy to show that if both players follow the protocol then their output is indeed a random pair from the known list. Moreover, at the end of the protocol the Preparer has no information about other than whats implied by its own output , and the Chooser gets computationally no information about other than whats implied by . 
Hence we have:

Theorem 2 Protocol CES-1 securely computes the (randomized) function of the Correlated Element Selection problem in the honest-but-curious model. (Proof omitted.)

Protocol CES-1

Common inputs: List of pairs (a_1, b_1), ..., (a_n, b_n), public key pk.
Preparer knows: secret key sk.

1. Permute and Encrypt. Pick a random permutation pi over {1, ..., n}. Let (c_i, d_i) = (E(a_pi(i)), E(b_pi(i))), for all i. Send the list (c_1, d_1), ..., (c_n, d_n) to C.

2. Choose and Blind. Pick a random index l, and a random blinding factor beta. Let (e, f) = (Blind(c_l, 0), Blind(d_l, beta)). Send (e, f) to P.

3. Decrypt and Output. Set a = D(e), b~ = D(f). Output a. Send b~ to C.

4. Unblind and Output. Set b = b~ - beta. Output b.

Figure 1: Protocol for Correlated Element Selection in the honest-but-curious model.
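The flow of Figure 1 can be sketched in code. The block below is a toy illustration only: it instantiates blindable encryption with an "exponent ElGamal" variant (messages in the exponent, so blinding factors add), over demo parameters p = 23, q = 11, g = 4; all variable names are illustrative, and a real instantiation needs a large safe prime.

```python
import random

# Toy exponent-ElGamal blindable encryption (demo parameters, NOT secure).
p, q, g = 23, 11, 4
x = random.randrange(1, q)                     # Preparer's secret key
h = pow(g, x, p)                               # public key

def enc(m):
    r = random.randrange(q)
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def blind(ct, beta):                           # re-randomize, add beta to plaintext
    r = random.randrange(q)
    return (ct[0] * pow(g, r, p) % p,
            ct[1] * pow(h, r, p) * pow(g, beta, p) % p)

def dec(ct):                                   # brute-force dlog; fine for tiny q
    gm = ct[1] * pow(ct[0], p - 1 - x, p) % p
    return next(m for m in range(q) if pow(g, m, p) == gm)

pairs = [(1, 5), (2, 7), (3, 5), (2, 7)]       # common input, values in Z_q
n = len(pairs)

# 1. Permute and Encrypt (Preparer)
pi = list(range(n)); random.shuffle(pi)
cd = [(enc(a), enc(b)) for a, b in (pairs[pi[i]] for i in range(n))]

# 2. Choose and Blind (Chooser)
l, beta = random.randrange(n), random.randrange(q)
e, f = blind(cd[l][0], 0), blind(cd[l][1], beta)

# 3. Decrypt and Output (Preparer outputs a, returns b~)
a, b_tilde = dec(e), dec(f)

# 4. Unblind and Output (Chooser)
b = (b_tilde - beta) % q
assert (a, b) in pairs
```

Note how the blinding with 0 in Step 2 hides which pair the Chooser picked, while the blinding factor beta hides b from the Preparer.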

4.3 Dealing with Dishonest Players


Generic transformation. Following the common practice in the design of secure protocols, one can modify the above protocol to deal with dishonest players by adding appropriate zero-knowledge proofs. That is, after each flow of the original protocol, the corresponding player proves in zero knowledge that it indeed followed its prescribed protocol: After Step 1, the Preparer proves that it knows the permutation that was used to permute the list. After Step 2, the Chooser proves that it knows the index l and the blinding factor beta that were used to produce the pair (e, f). Finally, after Step 3, the Preparer proves that the plaintext b~ is indeed the decryption of the ciphertext f. Given these zero-knowledge proofs, one can appeal to general theorems about secure two-party protocols, and prove that the resulting protocol is secure in the general case of potentially malicious players. We note that the zero-knowledge proofs that are involved in this protocol can be made very efficient, so even this generic protocol is quite efficient (these are essentially the same proofs that are used for mix-networks in [1]; see Appendix A). However, a closer look reveals that one does not need all the power of the generic transformation, and the protocol can be optimized in several ways.

Proof of proper decryption. To withstand malicious players, the Preparer must prove that the element b~ that it sent in Step 3 of CES-1 is a proper decryption of the ciphertext f. However, this can be done in a straightforward manner, without requiring zero-knowledge proofs. Indeed, the Preparer can reveal additional information (such as the randomness used in the encryption of b~), as long as this extra information does not compromise the semantic security of the ciphertext e. The problem is that the Preparer may not be able to compute the randomness of the blinded value f (for example, in ElGamal encryption this would require computation of a discrete log). Hence, we need to devise a different method to enable the proof.

The proof will go as follows: for each i, the Preparer sends the element b_pi(i) and the corresponding random string that was used to obtain the ciphertext d_i in the first step. The Chooser can then check that the element that it chose in Step 2 was encrypted correctly, and learn the corresponding plaintext. Clearly, in this protocol the Chooser gets more information than just the decryption of d_l (specifically, it gets the decryption of all the d_i's). However, this does not affect the security of the protocol, as the Chooser now sees a decryption of a permutation of a list that it knew at the onset of the protocol. This permutation of all the b's does not give any information about the output of the Preparer, other than what is implied by the Chooser's own output b. In particular, notice that if b appears more than once in the list, then the Chooser does not know which of these occurrences was encrypted by d_l. Next, we observe that after the above change there is no need for the Chooser to send f to the Preparer; it is sufficient if the Chooser sends only e in Step 2, since it can now compute the decryption of d_l by itself.

A weaker condition in the second proof-of-knowledge. Finally, we observe that since the security of the Chooser relies on an information-theoretic argument, the second proof-of-knowledge (in which the Chooser proves that it knows the index l) does not have to be fully zero-knowledge. In fact, tracing through the proof of security, one can verify that it is sufficient for this proof to be witness hiding in the sense of Feige and Shamir [17]. The resulting protocol is described in Figure 2.

Remark. Notice that for the modified protocol we did not use the full power of blindable encryption, since we only used blindings by zero. Namely, all that was used in these protocols is that we can transform any ciphertext into a random encryption of the same plaintext. (The zero-knowledge proofs also use only blindings by zero.) This is exactly the random self-reducibility property used by Sander et al. [31].

4.4 Key Generation Issues


In the protocols above we assumed that the encryption keys were chosen in an appropriate way ahead of time (say, by a trusted party). If we want to include the key generation as part of the protocols themselves (i.e., have the Preparer choose the keys and send the public key to the Chooser in Step 1), we must ensure that the Preparer doesn't choose bad keys. A generic way to solve this problem is to have the Preparer prove in zero knowledge that it followed the key generation algorithm. We note, however, that for our application this is overkill. Indeed, the security of the Chooser does not depend on the hiding properties of the encryption scheme. What the Chooser needs to verify is that this is a committing encryption (so that the Preparer cannot open the list of b's in more than one way), and that the output of the blinding operation is independent of which ciphertext of a given message was used as an input (i.e., that for any two ciphertexts c, c' of the same message and any blinding factor b, Blind(c, b) and Blind(c', b) are identically distributed). If we use ElGamal encryption to implement the blindable encryption, then checking these conditions is easy (assuming that the factorization of p - 1 is known, where p is the modulus used for the encryption). For the Goldwasser-Micali encryption, the first property requires proving that a given element (y = N - 1 in the case of a Blum integer N) is a quadratic non-residue mod N (which can be done quite efficiently [23]), while the second property is automatically satisfied.
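For the ElGamal case, the key checks described above are purely local. The sketch below shows what such a non-interactive validation could look like for a safe-prime group p = 2q + 1; the function name `valid_key`, the toy parameters, and the trial-division primality test are all illustrative assumptions, not from the paper.

```python
# Non-interactive key validation for a toy ElGamal instantiation over the
# order-q subgroup of Z_p*, p = 2q + 1 (illustrative sketch).

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def valid_key(p, q, g, h):
    return (is_prime(p) and is_prime(q) and p == 2 * q + 1
            and 1 < g < p and g != 1
            and pow(g, q, p) == 1          # g lies in the order-q subgroup
            and 1 < h < p
            and pow(h, q, p) == 1)         # h likewise

assert valid_key(23, 11, 4, pow(4, 7, 23))
assert not valid_key(23, 11, 5, 4)         # 5 has order 22 mod 23, not 11
```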

4.5 Putting It All Together


We have now finally described all the components of our solution. Let us quickly recap everything and consider the efficiency of the protocol.


Protocol CES-2

Common inputs: List of pairs (a_1, b_1), ..., (a_n, b_n), public key pk.
Preparer knows: secret key sk.

1. Permute and Encrypt. Pick a random permutation pi over {1, ..., n} and random strings r_1, s_1, ..., r_n, s_n. Let (c_i, d_i) = (E(a_pi(i); r_i), E(b_pi(i); s_i)), for all i. Send the list (c_1, d_1), ..., (c_n, d_n) to C.

Sub-protocol P1: P proves in zero-knowledge that it knows the randomness and permutation pi that were used to obtain the list (c_1, d_1), ..., (c_n, d_n).

2. Choose and Blind. Pick a random index l. Send to P the ciphertext e = Blind(c_l, 0).

Sub-protocol P2: C proves in a witness-hiding manner that it knows the randomness and index l that were used to obtain e.

3. Decrypt and Output. Set a = D(e). Output a. Send to C the list of pairs (b_pi(1), s_1), ..., (b_pi(n), s_n) (in this order).

4. Verify and Output. Denote by (b, s) the l-th entry in this list (i.e., b = b_pi(l) and s = s_l). If E(b; s) = d_l, then output b.

Figure 2: Protocol for Correlated Element Selection.
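The main flows of CES-2 (with the zero-knowledge sub-protocols elided) can be sketched as follows, again over a toy exponent-ElGamal blindable encryption; parameters and names are illustrative assumptions. The key difference from CES-1 is Step 4: the Chooser verifies the Preparer's opening of the b's by re-encrypting with the revealed coins.

```python
import random

# Toy exponent-ElGamal with explicit encryption coins (demo parameters).
p, q, g = 23, 11, 4
x = random.randrange(1, q); h = pow(g, x, p)

def enc(m, r): return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)
def blind0(ct, r): return (ct[0] * pow(g, r, p) % p, ct[1] * pow(h, r, p) % p)
def dec(ct):
    gm = ct[1] * pow(ct[0], p - 1 - x, p) % p
    return next(m for m in range(q) if pow(g, m, p) == gm)

pairs = [(1, 5), (2, 7), (3, 5), (2, 7)]          # common input
n = len(pairs)

# Step 1 (Preparer): permute and encrypt, remembering the coins s_i of the d's
pi = list(range(n)); random.shuffle(pi)
r = [random.randrange(q) for _ in range(n)]
s = [random.randrange(q) for _ in range(n)]
cd = [(enc(pairs[pi[i]][0], r[i]), enc(pairs[pi[i]][1], s[i]))
      for i in range(n)]

# Step 2 (Chooser): pick an index, send a blinding-with-zero of c_l
l = random.randrange(n)
e = blind0(cd[l][0], random.randrange(q))

# Step 3 (Preparer): decrypt e and output a; open all the b's with their coins
a = dec(e)
opened = [(pairs[pi[i]][1], s[i]) for i in range(n)]

# Step 4 (Chooser): verify the opening of d_l and output b
b, s_l = opened[l]
assert enc(b, s_l) == cd[l][1]
assert (a, b) in pairs
```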

0. Initially, the Preparer chooses the keys for the blindable encryption scheme, sends the public key to the Chooser, and proves in zero-knowledge that the encryption is committing and has the blinding property. As we said above, this proof must be tailored to the particular encryption scheme that is used. Also, this step can be carried out only once, and the resulting keys can be used for many instances of the protocol. (Alternatively, we can use a trusted third party for this step.)

1. The Preparer encrypts the known list in some canonical manner, blinds the list of ciphertexts with zero, and permutes it with a random permutation pi. It sends the resulting list to the Chooser, and uses the ELC protocol to prove in zero-knowledge that it knows the permutation that was used.

2. The Chooser blinds the list of c_i's with zeros, and re-permutes it with a random permutation phi. It sends the resulting list to the Preparer, and again uses the ELC protocol to prove that it knows the permutation that was used. Here we can optimize the proof somewhat, since we later only use the first ciphertext in the re-permuted list, and also because the proof only needs to be witness hiding.

3. The Preparer decrypts the first ciphertext in the list it received, and outputs the corresponding plaintext a. It also sends to the Chooser the list of the b's, permuted according to pi, together with the randomness that was used to blind their canonical encryptions to get the d_i's in Step 1.

4. The Chooser sets l = phi(1), and lets (b, s) denote the l-th element and randomness, respectively, in the last list that it got from the Preparer. It checks that blinding with zero (and randomness s) of the canonical encryption of b indeed yields the ciphertext d_l. If this is correct, it outputs b.

Although we can no longer use the general theorems about secure two-party protocols, the security proof is nonetheless quite standard. Specifically, we can prove:

Theorem 3 Protocol CES-2 securely computes the (randomized) function of the Correlated Element Selection problem. (Proof omitted.)

Efficiency. We note that all the protocols that are involved are quite simple. In terms of the number of communication flows, the key generation step (Step 0 above) takes at most five flows, Step 1 takes five flows, Step 2 takes three flows, and Step 3 consists of just one flow. Moreover, these flows can be piggybacked on each other. Hence, we can implement the protocol with only five flows of communication, which is equal to the five flows required by a single proof. In terms of the number of operations, the complexity of the protocol is dominated by the complexity of the proofs in Steps 1 and 2. The proof in Step 1 requires on the order of kn blinding operations (for a list of size n and security parameter k), and the proof of Step 2 can be optimized to about half as many blinding operations on average. Hence, the whole protocol requires on the order of kn blinding operations.8

5 Epilogue: Cryptography and Game Theory


An interesting aspect of our work is the synergy achieved between cryptographic solutions and the game-theory world. Notice that by implementing our cryptographic solution in the game-theory setting, we gain on the game-theory front (by eliminating the need for a mediator), but we also gain on the cryptography front (for example, in that we eliminate the problem of early stopping). In principle, it may be possible to make stronger use of the game-theory setting to achieve improved solutions. For example, maybe it is possible to prove that in the context of certain games, a player does not have an incentive to deviate from its protocol, and so in this context there is no point in asking this player to prove that it behaves honestly (so we can eliminate some zero-knowledge proofs that would otherwise be required). More generally, it may be the case that working in a model in which we know what the players are up to can simplify the design of secure protocols. It is a very interesting open problem to find examples that would demonstrate such phenomena.

References
[1] M. Abe. Universally Verifiable Mix-net with Verification Work Independent of the Number of Mix-centers. In Proceedings of EUROCRYPT '98, pages 437-447, 1998.
8 We note that the protocol includes just a single decryption operation, in Step 3. In schemes where encryption is much more efficient than decryption (such as the Goldwasser-Micali encryption) this may have a significant impact on the performance of the protocol.


[2] R. Aumann. Subjectivity and Correlation in Randomized Strategies. Journal of Mathematical Economics, 1, pages 67-95, 1974.

[3] I. Barany. Fair distribution protocols or how the players replace fortune. Mathematics of Operations Research, 17(2):327-340, May 1992.

[4] M. Bellare, R. Impagliazzo, and M. Naor. Does parallel repetition lower the error in computationally sound protocols? In 38th Annual Symposium on Foundations of Computer Science, pages 374-383. IEEE, 1997.

[5] J. Benaloh. Dense Probabilistic Encryption. In Proc. of the Workshop on Selected Areas in Cryptography, pages 120-128, 1994.

[6] M. Ben-Or, S. Goldwasser, and A. Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation. In Proceedings of the 20th Annual ACM Symposium on Theory of Computing, pages 1-10, 1988.

[7] M. Blum. Coin flipping by telephone: A protocol for solving impossible problems. In Advances in Cryptology - CRYPTO '81. ECE Report 82-04, ECE Dept., UCSB, 1982.

[8] G. Brassard, D. Chaum, and C. Crépeau. Minimum disclosure proofs of knowledge. JCSS, 37(2):156-189, 1988.

[9] D. Chaum. Blind signatures for untraceable payments. In Advances in Cryptology - CRYPTO '82, pages 199-203. Plenum Press, 1982.

[10] D. Chaum and H. van Antwerpen. Undeniable signatures. In Advances in Cryptology - CRYPTO '89, volume 435 of Lecture Notes in Computer Science, pages 212-217. Springer-Verlag, 1989.

[11] D. Chaum, C. Crépeau, and I. Damgård. Multiparty unconditionally secure protocols. In Advances in Cryptology - CRYPTO '87, volume 293 of Lecture Notes in Computer Science, page 462. Springer-Verlag, 1988.

[12] D. Chaum and T. Pedersen. Wallet databases with observers. In Advances in Cryptology - CRYPTO '92, volume 740 of Lecture Notes in Computer Science, pages 89-105. Springer-Verlag, 1992.

[13] R. Cramer, I. Damgård, and P. MacKenzie. Efficient zero-knowledge proofs of knowledge without intractability assumptions. To appear in 2000 International Workshop on Practice and Theory in Public Key Cryptography, January 2000, Melbourne, Australia.

[14] C. Crépeau and J. Kilian. Weakening security assumptions and oblivious transfer. In Advances in Cryptology - CRYPTO '88, volume 403 of Lecture Notes in Computer Science, pages 2-7. Springer-Verlag, 1990.

[15] C. Dwork, M. Naor, and A. Sahai. Concurrent zero knowledge. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing, pages 409-418. ACM Press, 1998.

[16] T. ElGamal. A public key cryptosystem and a signature scheme based on discrete logarithms. In Advances in Cryptology - CRYPTO '84, volume 196 of Lecture Notes in Computer Science, pages 10-18. Springer-Verlag, 1985.

[17] U. Feige and A. Shamir. Witness indistinguishable and witness hiding protocols. In Proceedings of the 22nd Annual ACM Symposium on Theory of Computing, pages 416-426. ACM Press, 1990.

[18] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1992.

[19] J. Garay, R. Gennaro, C. Jutla, and T. Rabin. Secure distributed storage and retrieval. In Proc. 11th International Workshop on Distributed Algorithms (WDAG '97), volume 1320 of Lecture Notes in Computer Science, pages 275-289. Springer-Verlag, 1997.

[20] O. Goldreich, S. Micali, and A. Wigderson. Proofs that yield nothing but their validity and a methodology of cryptographic protocol design. In 27th Annual Symposium on Foundations of Computer Science, pages 174-187. IEEE, 1986.

[21] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing, pages 218-229, 1987.

[22] S. Goldwasser and S. Micali. Probabilistic encryption. Journal of Computer and System Sciences, 28(2):270-299, April 1984.

[23] S. Goldwasser, S. Micali, and C. Rackoff. The knowledge complexity of interactive proof systems. SIAM Journal on Computing, 18(1):186-208, 1989.

[24] M. Jakobsson. A Practical Mix. In Proceedings of EUROCRYPT '98, pages 448-461, 1998.

[25] E. Lehrer and S. Sorin. One-shot public mediated talk. Discussion Paper 1108, Northwestern University, 1994.

[26] J. Kilian. Founding Cryptography on Oblivious Transfer. In Proceedings of the 20th Annual ACM Symposium on Theory of Computing, pages 20-31, 1988.

[27] P. MacKenzie. Efficient ZK Proofs of Knowledge. Unpublished manuscript, 1998.

[28] M. Naor and B. Pinkas. Oblivious transfer with adaptive queries. In Advances in Cryptology - CRYPTO '99, volume 1666 of Lecture Notes in Computer Science, pages 573-590. Springer-Verlag, 1999.

[29] J. F. Nash. Non-Cooperative Games. Annals of Mathematics, 54, pages 286-295, 1951.

[30] M. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, 1994.

[31] T. Sander, A. Young, and M. Yung. Non-interactive CryptoComputing for NC1. In 40th Annual Symposium on Foundations of Computer Science, pages 554-567. IEEE, 1999.

[32] A. C. Yao. Protocols for secure computations (extended abstract). In 23rd Annual Symposium on Foundations of Computer Science, pages 160-164. IEEE, November 1982.

A The Zero-Knowledge Proofs

In this section we provide efficient implementations for the sub-protocols P1 and P2. Recall that in P1, the Preparer needs to prove that the list of ciphertexts (c_1, d_1), ..., (c_n, d_n) is a permuted encryption of the known list (a_1, b_1), ..., (a_n, b_n), and in P2 the Chooser needs to prove that the ciphertext e was obtained by blinding-with-zero one of the ciphertexts c_1, ..., c_n. Both protocols P1 and P2 are derived from a simple zero-knowledge proof for a problem which we call Encrypted List Correspondence. In this problem, which was studied in the context of mix-networks,

Protocol ELC

Common inputs: ciphertext lists c_1, ..., c_n and c'_1, ..., c'_n (and the public key pk).
P knows: pi and r_1, ..., r_n s.t. c'_i = Blind(c_pi(i), 0; r_i), for all i.

1. P: Choose a random permutation phi over {1, ..., n}, and random strings s_1, ..., s_n. Set e_i = Blind(c_phi(i), 0; s_i), for all i. Send e_1, ..., e_n to V.

2. V: Choose a random bit sigma and send it to P.

3. P: If sigma = 0, reply with (phi, s_1, ..., s_n); else reply with (psi, t_1, ..., t_n), where psi = pi^{-1} o phi and t_i is the blinding randomness relating c'_psi(i) to e_i (computable from s_i and r_psi(i) via the blinding-composition property).

4. V: If sigma = 0, check e_i = Blind(c_phi(i), 0; s_i) for all i. If sigma = 1, check e_i = Blind(c'_psi(i), 0; t_i) for all i.

Figure 3: A zero-knowledge proof-of-knowledge for Encrypted List Correspondence.

a prover wants to prove in zero-knowledge that two lists of ciphertexts, c_1, ..., c_n and c'_1, ..., c'_n, are permuted encryptions of the same list. More precisely, the prover wants to prove that one list was obtained by blinding-with-zero and permuting the other, and that he knows a permutation pi and random coins r_1, ..., r_n such that c'_i = Blind(c_pi(i), 0; r_i), for all i.9 An efficient zero-knowledge proof for this problem was described by Abe [1]. (Although Abe's proof assumes ElGamal encryption, it is easy to see that any blindable encryption will do.) For self-containment, we describe this proof in Figure 3. The protocol in Figure 3 achieves knowledge error of 1/2. There are known transformations that can be used to derive a constant-round, negligible-error protocol from a three-round, constant-error one. (Specifically, we can get a 5-round, negligible-error zero-knowledge proof-of-knowledge for Encrypted List Correspondence.) We discuss this issue further in Appendix D.
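One round of the ELC proof (soundness error 1/2) can be sketched concretely. The block below uses a toy exponent-ElGamal where blinding-with-zero just re-randomizes; the names `blind0`, `Cp`, `psi` and the demo parameters are illustrative assumptions, not notation from the paper.

```python
import random

# Toy exponent-ElGamal; blinding-with-zero = re-randomization (demo params).
p, q, g = 23, 11, 4
x = random.randrange(1, q); h = pow(g, x, p)

def enc(m, r): return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)
def blind0(ct, r): return (ct[0] * pow(g, r, p) % p, ct[1] * pow(h, r, p) % p)

n = 4
C = [enc(random.randrange(q), random.randrange(q)) for _ in range(n)]
pi = list(range(n)); random.shuffle(pi)
r = [random.randrange(q) for _ in range(n)]
Cp = [blind0(C[pi[i]], r[i]) for i in range(n)]      # prover's witness: pi, r

# Round 1: prover commits to a fresh blinded permutation of C
phi = list(range(n)); random.shuffle(phi)
s = [random.randrange(q) for _ in range(n)]
E = [blind0(C[phi[i]], s[i]) for i in range(n)]

# Round 2: verifier's challenge bit
sigma = random.randrange(2)

# Round 3/4: prover opens one of the two correspondences, verifier checks
if sigma == 0:
    # open E as a blinding of C under phi
    assert all(E[i] == blind0(C[phi[i]], s[i]) for i in range(n))
else:
    # open E as a blinding of Cp under psi = pi^{-1} o phi; the combined
    # randomness is t_i = s_i - r_psi(i) (blinding composition in Z_q)
    inv_pi = {pi[j]: j for j in range(n)}
    psi = [inv_pi[phi[i]] for i in range(n)]
    t = [(s[i] - r[psi[i]]) % q for i in range(n)]
    assert all(E[i] == blind0(Cp[psi[i]], t[i]) for i in range(n))
```

Revealing only one of the two correspondences per round hides pi itself; repeating the round drives the knowledge error down exponentially.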

A.1 The protocol P1

In sub-protocol P1, the Preparer needs to prove that the list of ciphertexts (c_1, d_1), ..., (c_n, d_n) is a permuted encryption of the known list (a_1, b_1), ..., (a_n, b_n). This can be done by having the prover encrypt the list (a_1, b_1), ..., (a_n, b_n) in some canonical manner (say, using the all-zero random string) to generate a ciphertext list (u_1, v_1), ..., (u_n, v_n), and then prove using the ELC protocol that the two lists are encryptions of the same list. The only problem here is that the original list of ciphertexts (c_1, d_1), ..., (c_n, d_n) was not obtained by blinding the list (u_1, v_1), ..., (u_n, v_n), but rather by directly encrypting the plaintext list. Hence, to use this protocol we slightly change the protocol CES-2 itself, and have the Preparer encrypt the plaintext list by first encrypting it in a canonical manner to get (u_1, v_1), ..., (u_n, v_n), and then blind with zero and permute the latter list to get (c_1, d_1), ..., (c_n, d_n). We stress that due to Equation (2) (from the definition of blindable encryption), this change in the protocol does not change the distribution of the ciphertext list (c_1, d_1), ..., (c_n, d_n).

A.2 The protocol P2

At first glance, the problem in protocol P2 seems substantially different from the Encrypted List Correspondence problem. Furthermore, constructing a simple protocol showing that the ciphertext e was obtained as a blinding of some ciphertext c_l (without revealing l) seems rather hard. Nonetheless, we can pose the problem of protocol P2 as an instance of Encrypted List Correspondence. To this end, we again slightly modify the protocol CES-2, by having the Chooser blind with zero and re-permute the entire list (rather than just pick and blind one ciphertext), and then prove in zero-knowledge that it knows the corresponding permutation. Specifically, the Chooser selects a random permutation phi over {1, ..., n}, permutes the entire list of c_i's according to phi, and then blinds with zero each of the ciphertexts. The entire new list (denoted e_1, ..., e_n) is then sent to the Preparer. The Preparer only uses a single, agreed-upon ciphertext from the list (say, e_1) in the rest of the protocol, but it uses the whole list to verify the proof. (Therefore, the effective index that is chosen by the Chooser is l = phi(1).) To prove that it chose the list of e_i's according to the prescribed protocol, the Chooser now proves that it knows the permutation phi, which is exactly what is done in the ELC protocol.

Two optimizations. Since we only use the ciphertext e_1 in the CES-2 protocol (and we don't really care whether the other e_i's are obtained properly), we can somewhat simplify and optimize the proof in P2, as follows: The Chooser can send only e_1 (just as it is done in the protocol CES-2). Then, in the zero-knowledge proof, it prepares the full set of encryptions (as if it actually prepared and sent all the e_i's). Then, depending on the query bit, it either reveals the correspondence between the c_i's and all the e_i's, or the correspondence between e_1 and one of the c_i's. Another optimization takes advantage of the fact that in the context of the CES-2 protocol, we only need this proof to be witness hiding, rather than fully zero-knowledge. It is therefore possible to just repeat the protocol from Figure 3 many times in parallel to get a negligible error (and we do not need to worry about adding extra flows of commitment).

9 We remark that a similar proof can be shown even when the two lists were not generated as blindings of each other, and even if the two lists were obtained using two different blindable encryption schemes.

B Implementation of Blindable Encryption


As we said above, possible implementations of blindable encryption include the ElGamal and the Goldwasser-Micali encryption schemes. Below we briefly describe these schemes.

B.1 ElGamal Encryption

The generation algorithm picks a random k-bit prime p of the form p = 2q + 1, where q is prime, and a generator g for the subgroup Q_p of quadratic residues modulo p. Then it picks a random x in Z_q and sets h = g^x mod p. The secret key is x, the public key is (p, g, h) (and k). To encrypt a message m in Q_p, one picks a random r in Z_q and sets E(m; r) = (g^r, m h^r). The decryption of a pair (u, v) outputs v / u^x mod p. The encryption scheme is well known to be semantically secure under the decisional Diffie-Hellman assumption (DDH). To blind a ciphertext (u, v) with blinding factor b, we compute Blind((u, v), b; r) = (u g^r, v b h^r), where r is chosen at random from Z_q. We note that, indeed, if (u, v) = (g^s, m h^s) (for some unknown m and s), then the blinded pair equals (g^{r+s}, m b h^{r+s}), which is a random encryption of m b, since r + s is random when r is. The operation for combining blinding factors is just multiplication modulo p. Checking that the public key is kosher can be done by verifying that p, q are primes, p = 2q + 1, and g is an element of order q. Thus, no interaction is needed in the key generation phase. Proving that a ciphertext is an encryption of a message m requires proving equality of the discrete logs of two known elements with respect to two known bases, which can be done using several known simple protocols [10, 12].
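The blinding identity above can be checked mechanically. The sketch below uses a toy subgroup (p = 23, q = 11, g = 4); the function names and parameters are illustrative, and messages are taken to be quadratic residues mod p.

```python
import random

# Toy ElGamal over the order-q subgroup of Z_p*, p = 2q + 1 (demo only).
# Blinding (u, v) with factor b yields a fresh encryption of m*b.
p, q, g = 23, 11, 4
x = random.randrange(1, q); h = pow(g, x, p)

def enc(m):
    r = random.randrange(q)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def dec(u, v): return v * pow(u, p - 1 - x, p) % p

def blind(u, v, b):
    r = random.randrange(q)
    return (u * pow(g, r, p) % p, v * b % p * pow(h, r, p) % p)

m, b = pow(g, 5, p), pow(g, 7, p)      # two subgroup elements
u, v = enc(m)
assert dec(u, v) == m
assert dec(*blind(u, v, b)) == m * b % p
```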

B.2 Goldwasser-Micali Encryption

This is the original semantically secure scheme introduced by Goldwasser and Micali [22]. The message is encrypted bit by bit. The generation algorithm picks a random N = pq, a product of two k-bit primes, together with any quadratic non-residue y with Jacobi symbol +1 (in case N is a Blum integer, i.e. p = q = 3 (mod 4), one can fix y = N - 1). The public key is N and y, and the secret key is p and q. To encrypt a bit b, one picks a random r in Z_N* and returns E(b; r) = r^2 y^b mod N (i.e., a random square for b = 0 and a random non-square for b = 1). To decrypt, one simply determines if the ciphertext is a square modulo N, i.e., modulo both p and q. The scheme is semantically secure under the quadratic residuosity assumption. The scheme is clearly blindable, since if c_1 = E(b_1; r_1) and c_2 = E(b_2; r_2), then c_1 c_2 is an encryption of b_1 + b_2 (where the addition is done modulo 2, and all the other operations are done modulo N). Proving that a public key is committing requires proving that y is a quadratic non-residue mod N, which can be done efficiently. Proving that c is an encryption of b can be done by proving quadratic residuosity.
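A minimal sketch of the scheme, with toy Blum-integer parameters P = 19, Q = 23 (illustrative assumptions; real use requires large primes):

```python
import math
import random

# Toy Goldwasser-Micali over a Blum integer N = P*Q (P, Q = 3 mod 4),
# so the fixed non-residue y = N - 1 works. Demo primes only.
P, Q = 19, 23
N = P * Q
y = N - 1

def rand_unit():                        # random element of Z_N*
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            return r

def enc(bit): return pow(rand_unit(), 2, N) * pow(y, bit, N) % N

def is_qr(c, pr):                       # Euler criterion modulo a prime
    return pow(c % pr, (pr - 1) // 2, pr) == 1

def dec(c):                             # bit 0 iff c is a square mod both primes
    return 0 if (is_qr(c, P) and is_qr(c, Q)) else 1

def blind(c, bit):                      # XOR-homomorphic blinding
    return c * enc(bit) % N

for b in (0, 1):
    c = enc(b)
    assert dec(c) == b
    assert dec(blind(c, 1)) == b ^ 1    # blinding by 1 flips the bit
    assert dec(blind(c, 0)) == b        # blinding by 0 re-randomizes only
```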

B.3 Verifying Decryption in Blindable Encryption Schemes

It is often useful when an encryption scheme has an efficient zero-knowledge proof for claims of the form "c is an encryption of m". Although we do not know how to construct such an efficient protocol based only on the blindable encryption property (other than using generic constructions), we can show a simple and efficient proof for the special case where the decryption algorithm also reveals the random coins that were used in the encryption (as in Goldwasser-Micali). The following simple 3-round protocol is such a proof, with soundness error 1/2. For this protocol, we assume without loss of generality that

    Blind(E(m), b) is distributed identically to E(m + b).    (3)

Indeed, let c_0 be any fixed encryption of 0 (the identity of the message space). We can then redefine the encryption process as E'(m) = Blind(c_0, m). Equation (1) from the definition of blindable encryption shows that E'(m) is indeed a random encryption of m, while Equation (2) now immediately implies Equation (3).

Protocol for Proving D(c) = m

Common inputs: ciphertext c, plaintext m.
Prover knows: the secret key.

1. Prover: Pick a random plaintext m' and randomness r; let c' = Blind(c, m'; r). Send c' to the verifier.

2. Verifier: Choose a random bit sigma; send sigma to the prover.

3. Prover: If sigma = 0, send (m', r); else decrypt c' and send the recovered coins r' (by Equation (3), the plaintext of c' is m + m').

4. Verifier: If sigma = 0, check c' = Blind(c, m'; r); else check c' = E(m + m'; r').
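One round of this proof can be sketched with toy Goldwasser-Micali bits, where the decryptor really can recover the encryption coins (a square root mod N). The parameters and the helper names (`rand_unit`, `sqrt_blum`) are illustrative assumptions.

```python
import math
import random

# One round (soundness error 1/2) of the proof that D(c) = m, with toy GM
# bits over a Blum integer N = P*Q (P, Q = 3 mod 4). Demo primes only.
P, Q = 19, 23
N = P * Q
y = N - 1                                   # fixed non-residue for Blum N

def rand_unit():
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            return r

def enc(bit, r): return pow(r, 2, N) * pow(y, bit, N) % N
def blind(c, bit, r): return c * enc(bit, r) % N

def sqrt_blum(a):                           # root of a QR via CRT, P,Q = 3 mod 4
    sp, sq = pow(a, (P + 1) // 4, P), pow(a, (Q + 1) // 4, Q)
    return (sp * Q * pow(Q, -1, P) + sq * P * pow(P, -1, Q)) % N

m = 1
c = enc(m, rand_unit())                     # common input: (c, m)

b_t, r_t = random.randrange(2), rand_unit() # prover blinds c with a random bit
c_t = blind(c, b_t, r_t)                    # sent to the verifier

sigma = random.randrange(2)                 # verifier's challenge
if sigma == 0:
    assert c_t == blind(c, b_t, r_t)        # prover opens the blinding
else:
    bit = m ^ b_t                           # prover decrypts c_t and reveals
    r_p = sqrt_blum(c_t * pow(y, -bit, N) % N)   # ... the recovered coins
    assert enc(bit, r_p) == c_t
```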

C Blindable Encryption and Oblivious Transfer

We show below a very simple implementation of 1-out-of-n Oblivious Transfer using blindable encryption. Recall that a 1-out-of-n Oblivious Transfer protocol implements the following functionality. There are two players, a Sender S and a Receiver R, who have some private inputs: S has a set of strings x_1, ..., x_n, and R has an index l. At the end of the protocol R should learn x_l and nothing else, while S should learn nothing at all. There are other flavors of oblivious transfer, all known to be equivalent to the above. We let S commit to his input by encrypting each x_i, i.e., set c_i = E(x_i) (for all i) and send the list c_1, ..., c_n to R. Now we are in the situation where the Receiver wants the Sender to decrypt c_l without telling him l. A simple solution that works in the honest-but-curious model is as follows: R chooses a random blinding factor b, sets e = Blind(c_l, b), asks S to decrypt e, and subtracts b from the result to recover the correct x_l. Since e is the encryption of a random element x_l + b, S indeed does not learn any information about l. To adjust this protocol to work against malicious players, R needs to prove that he knows the index l and blinding factor b, and S needs to prove that it decrypted e correctly. The proof of R is essentially the same problem as in the sub-protocol P2 in our CES-2 protocol, with the only difference being that now we also have the blinding factor b. Accordingly, the protocol for solving it is nearly identical to the ELC protocol, with the only difference being that the prover blinds the encrypted lists with random elements rather than with zeros (and shows the blinding factors when he is asked to open the blinding). Due to space limitations, we omit further details. We note, though, that a small modification of the above protocol implements random 1-out-of-n oblivious transfer, where R should learn x_l for a random l. To implement that, S simply chooses a random permutation pi in the first step and sets c_i = E(x_pi(i)).
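The honest-but-curious version of this OT can be sketched with the same toy exponent-ElGamal blindable encryption used earlier (all names and parameters are illustrative assumptions):

```python
import random

# Honest-but-curious 1-out-of-n OT from blindable encryption, over a toy
# exponent-ElGamal (demo parameters, NOT secure).
p, q, g = 23, 11, 4
x = random.randrange(1, q); h = pow(g, x, p)     # Sender's keys

def enc(m):
    r = random.randrange(q)
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def blind(ct, beta):
    r = random.randrange(q)
    return (ct[0] * pow(g, r, p) % p,
            ct[1] * pow(h, r, p) * pow(g, beta, p) % p)

def dec(ct):
    gm = ct[1] * pow(ct[0], p - 1 - x, p) % p
    return next(m for m in range(q) if pow(g, m, p) == gm)

xs = [3, 1, 4, 1, 5]                  # Sender's strings (small numbers here)

# Sender commits by encrypting every x_i
cs = [enc(v) for v in xs]

# Receiver picks l, blinds c_l with a random factor, asks for a decryption
l, beta = 2, random.randrange(q)
e = blind(cs[l], beta)

# Sender decrypts e; the plaintext x_l + beta looks random to the Sender
masked = dec(e)

# Receiver unmasks
assert (masked - beta) % q == xs[l]
```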

D Reducing the Error in a Zero-knowledge Proof-of-knowledge

Below we describe a known transformation from any 3-round, constant-error zero-knowledge proof-of-knowledge into a 5-round, negligible-error zero-knowledge proof-of-knowledge, using trapdoor commitment schemes. We were not able to trace the origin of this transformation, although related ideas and techniques can be found in [15, 27, 13]. Assume that we have some 3-round, constant-error zero-knowledge proof-of-knowledge protocol, and consider the 3-round protocol that we get by running the constant-error protocol many times in parallel. Denote the first prover message in the resulting protocol by a, the verifier message by e, and the last prover message by z. Note that since the original protocol was 3-round, parallel repetition reduces the error exponentially (see proof in [4]). However, this protocol is no longer zero-knowledge. To get a zero-knowledge protocol, we use a trapdoor (or chameleon) commitment scheme [8]. Roughly, this is a commitment scheme which is computationally binding and unconditionally secret, with the extra property that there exists trapdoor information, knowledge of which enables one to open a commitment in any way one wants. In the zero-knowledge protocol, the prover sends to the verifier in the first round the public key of the trapdoor commitment scheme. The verifier then commits to e, the prover sends a, the verifier opens the commitment to e, and the prover sends z and also the trapdoor for the commitment. The zero-knowledge simulator follows the one for the standard 4-round protocol. The knowledge extractor, on the other hand, first runs one instance of the proof to get the trapdoor; it can then effectively ignore the commitment in the second round, and so one can use the extractor of the original 3-round protocol.
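The Pedersen commitment is a standard example of such a trapdoor commitment (not named in the text, so this is an illustrative choice): it is unconditionally hiding and computationally binding, and the trapdoor t = log_g(h) lets its holder open a commitment to any value. A minimal sketch over toy parameters:

```python
import random

# Pedersen commitment as a trapdoor ("chameleon") commitment.
# Toy group p = 2q + 1 (p = 23, q = 11); real use needs large parameters.
p, q, g = 23, 11, 4
t = random.randrange(1, q)          # trapdoor: discrete log of h base g
h = pow(g, t, p)

def commit(m, r): return pow(g, m, p) * pow(h, r, p) % p

m, r = 3, random.randrange(q)
c = commit(m, r)

# With the trapdoor, the commitment can be opened to a different value m2:
# solve m + t*r = m2 + t*r2 (mod q) for r2.
m2 = 7
r2 = (r + (m - m2) * pow(t, -1, q)) % q
assert commit(m2, r2) == c          # same commitment, different opening
```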

