1 Introduction

Secure multi-party computation [31, 75] allows mutually distrusting parties to compute securely over their private data. However, guaranteeing output delivery to honest parties when the adversarial parties may abort during the protocol execution has been a challenging objective. A long line of highly influential works has undertaken the task of defining security with guaranteed output delivery (i.e., fair computation) and fairly computing functionalities [1,2,3,4,5, 10, 11, 14, 33, 34, 39, 60]. This work considers the case when honest parties are not in the majority. In particular, as is standard in this line of research, the sequel relies on the representative task of two-party secure coin-tossing, an elegant functionality providing uncluttered access to the primary bottlenecks of achieving security in any specific adversarial model.

In the information-theoretic plain model, one of the parties can fix the coin-tossing protocol’s output (using attacks in two-player zero-sum games, or games against nature [65]). If the parties additionally have access to the commitment functionality (a.k.a., the information-theoretic commitment-hybrid), an adversary is forced to follow the protocol honestly (otherwise, the adversary risks being identified), or abort the protocol execution prematurely. Against such adversaries, referred to as fail-stop adversaries [20], there are coin-tossing protocols [6, 12, 13, 19] where a fail-stop adversary can change the honest party’s output distribution by at most \(O\left( 1/\sqrt{r}\right) \), where r is the round-complexity of the protocol. That is, these protocols are \(O\left( 1/\sqrt{r}\right) \)-insecure. In a ground-breaking result, Moran, Naor, and Segev [61] constructed the first secure coin-tossing protocol in the oblivious transfer-hybrid [24, 67, 68] that is \(O\left( 1/r\right) \)-insecure. No further security improvements are possible because Cleve [19] proved that \(\varOmega \left( 1/r\right) \)-insecurity is unavoidable; hence, the protocol by Moran, Naor, and Segev is optimal.
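
To build intuition for these bounds, consider the folklore “majority of r public coins” protocol, where a single fail-stop abort biases the output by \(\varTheta (1/\sqrt{r})\). The following Monte Carlo sketch is our toy stand-in, not the exact constructions of [6, 12, 13, 19]; the abort strategy and the defense rule are illustrative assumptions.

```python
import math
import random

def attacked_expectation(r, trials=200_000):
    """Toy r-coin majority protocol: one coin is revealed per round. A
    fail-stop attacker favoring output 0 aborts instead of revealing the
    first coin that comes up 1; the honest party then substitutes fresh
    random coins for every unrevealed round and outputs the majority."""
    assert r % 2 == 1
    total = 0
    for _ in range(trials):
        revealed_zeros = 0
        for _ in range(r):
            if random.randint(0, 1) == 1:   # attacker suppresses this coin
                break
            revealed_zeros += 1             # a 0-coin was revealed honestly
        # Honest defense: fresh coins replace all unrevealed rounds.
        ones = sum(random.randint(0, 1) for _ in range(r - revealed_zeros))
        total += int(ones > r // 2)
    return total / trials

for r in (11, 41, 161):
    bias = 0.5 - attacked_expectation(r)
    print(f"r={r:4d}  bias={bias:.4f}  bias*sqrt(r)={bias * math.sqrt(r):.3f}")
```

Empirically, bias \(\cdot \sqrt{r}\) stays roughly constant, matching the \(\varTheta (1/\sqrt{r})\) insecurity discussed above.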

Incidentally, all fair computation protocols (not just coin-tossing, see, for example, [1,2,3,4,5, 10, 11, 14, 33, 34, 39, 60]) rely on the oblivious transfer functionality to achieve \(O\left( 1/r\right) \)-insecurity. A fundamental principle in theoretical cryptography is to securely realize cryptographic primitives based on the minimal computational hardness assumptions. Consequently, the following question is natural.

Is oblivious transfer necessary for optimal fair computation?

Towards answering this fundamental research inquiry, recently, Maji and Wang [59] proved that any coin-tossing protocol that uses one-way functions in a black-box manner [7, 44, 69] must incur \(\varOmega \left( 1/\sqrt{r}\right) \)-insecurity. This result proves the qualitative optimality of the coin tossing protocols of [6, 12, 13, 19] in Minicrypt [42] because the commitment functionality is securely realizable by the black-box use of one-way functions [38, 62, 63]. Consequently, the minimal hardness of computation assumption enabling optimal fair coin-tossing must be outside Minicrypt.

Summary of our results. This work studies the insecurity of fair coin-tossing protocols outside Minicrypt, within (various levels of) Cryptomania [42]. Our contributions are two-fold.

  1. First, we generalize the (fully) black-box separation of Maji and Wang [59] to prove that any coin-tossing protocol using public-key encryption in a fully black-box manner must be \(\varOmega \left( 1/\sqrt{r}\right) \)-insecure.

  2. Second, we prove a dichotomy for two-party secure (possibly, randomized output) function evaluation functionalities. For any secure function evaluation functionality f, either (A) optimal fair coin-tossing exists in the information-theoretic f-hybrid, or (B) any coin-tossing protocol in the f-hybrid, even using public-key encryption algorithms in a black-box manner, is \(\varOmega \left( 1/\sqrt{r}\right) \)-insecure.

Remark 1

In the information-theoretic f-hybrid model, parties have access to a trusted party faithfully realizing the functionality f. However, this functionality is realized unfairly. That is, the trusted party delivers the output to the adversary first. If the adversary wants, it can abort the protocol and block the output delivery to the honest parties. Otherwise, it can permit the delivery of the output to the honest parties and continue with the protocol execution. We highlight that the fair f-hybrid (where the adversary cannot block output delivery to the honest parties), for any f where both parties influence the output, straightforwardly yields a perfectly or statistically secure fair coin-tossing protocol.

Fig. 1. The first column summarizes the most secure fair coin-tossing protocols in Impagliazzo’s worlds [42]. Corresponding to each of these worlds, the second column lists the best attacks on these fair coin-tossing protocols. All the adversarial attacks are fail-stop attacks, except for the general attack in Pessiland.

Our hardness of computation results hold even for a game-theoretic definition of fairness (and, therefore, extend to the stronger simulation-based security definition). Section 1.1 summarizes our contributions. As shown in Fig. 1, our results further reinforce the widely-held perception that oblivious transfer is necessary for optimal fair coin-tossing. Our work nearly squeezes out the entire remaining space left open in the state-of-the-art after the recent breakthrough of [59], which was the first improvement in the quality of attacks on fair coin-tossing protocols since [20], almost three decades ago. However, there are fascinating problems left open by our work; Sect. 6 discusses one.

Positioning the technical contributions. Information-theoretic lower-bounding techniques that work in the plain model and also extend to the f-hybrid are rare. Maji and Wang [59] proved that optimal coin-tossing is impossible in the information-theoretic model even if parties can access a random oracle. This work extends the potential-based approach of [59] to information-theoretic f-hybrid models for any f that does not yield oblivious transfer, even when the parties additionally have access to a public-key encryption oracle.

Fig. 2. The Kushilevitz function [51], where Alice holds input \(x\in \{0,1,2\}\) and Bob holds input \(y\in \{0,1,2\}\). For example, the output is \(z_0\) if \(x=0\) and \(y\in \{0,1\}\).

For the discussion below, consider f to be the Kushilevitz function [51] (see Fig. 2). One cannot realize this function securely in the information-theoretic plain model even against honest-but-curious adversaries [9, 49, 50, 57]. Furthermore, oblivious transfer is impossible in the f-hybrid [46, 47]. The characterization of the exact power of making ideal f-invocations is not entirely well-understood.

Invocations of the ideal f-functionality are non-trivially useful. For example, one can realize the commitment functionality in the f-hybrid model [58] (even with Universally Composable (UC) security [15, 16] against malicious adversaries). The f-functionality is also known to securely implement other secure function evaluation functionalities [71]. All these functionalities would otherwise be impossible to securely realize in the plain model [17, 52, 66]. Consequently, it is plausible that one could implement optimal fair coin-tossing in the f-hybrid model without implementing oblivious transfer.

Our technical contribution is an information-theoretic lower-bounding technique that precisely characterizes the power of any f-hybrid vis-à-vis its ability to implement optimal fair coin-tossing. The authors believe that these techniques shall be of independent interest to characterize the power of performing ideal f-invocations in general.

1.1 Our Contribution

This section provides an informal summary of our results and positions our contributions relative to the state-of-the-art. To facilitate this discussion, we introduce a minimalistic definition of coin-tossing protocols. An (r, X)-coin-tossing protocol is a two-party r-message interactive protocol where parties agree on the final output \(\in {\{0,1\}} \), and the expected output of an honest execution of the protocol is X. A coin-tossing protocol is \(\epsilon \)-unfair if one of the parties can change the honest party’s output distribution by \(\epsilon \) (in the statistical distance).

Maji and Wang [59] proved that the existence of optimal coin-tossing protocols is outside Minicrypt [42], where one-way functions and other private-key cryptographic primitives exist (for example, pseudorandom generators [40, 41, 43], pseudorandom functions [29, 30], pseudorandom permutations [55], statistically binding commitments [62], statistically hiding commitments [38, 63], zero-knowledge proofs [32], and digital signatures [64, 70]). Public-key cryptographic primitives like public-key encryption, (multi-message) key-agreement protocols, and secure oblivious transfer protocols are in Cryptomania [44] (outside Minicrypt). Although the existence of a secure oblivious transfer protocol suffices for optimal fair coin-tossing, it was unknown whether weaker hardness of computation assumptions (like public-key encryption and (multi-message) key-agreement protocols [27]) suffice for optimal fair coin-tossing. Previously, for any constant r, Haitner, Makriyannis, Nissim, Omri, Shaltiel, and Silbak [35, 36] proved that r-message coin-tossing protocols that are less than \(1/\sqrt{r}\)-insecure imply key-agreement protocols.

Result I. Towards this objective, we prove the following result.

Corollary 1

(Separation from Public-key Encryption). Any (r, X)-coin-tossing protocol that uses a public-key encryption scheme in a fully black-box manner is \(\varOmega \left( X(1-X)/\sqrt{r}\right) \)-unfair.

We emphasize that X may depend on the message complexity r of the protocol, which, in turn, depends on the security parameter. For example, consider an ensemble of fair coin-tossing protocols with round complexity r and expected output \(X=1/r\). Our result exhibits a fail-stop adversary that changes the honest party’s output distribution by \(\varOmega \left( 1/r^{3/2}\right) \) in the statistical distance.

This hardness of computation result extends to the fair computation of any multi-party functionality (possibly with inputs) whose output has some entropy, when honest parties are not in the majority (using a standard partition argument). At a high level, this result implies that relying on stronger hardness of computation assumptions, like the existence of public-key cryptography, provides no “fairness-gains” for coin-tossing protocols over using only one-way functions.

This result’s heart is the following relativized separation in the information-theoretic setting (refer to Theorem 5). There exists an oracle \(\text {PKE}_n\) [56] that enables the secure public-key encryption of n-bit messages. However, we prove that any (r, X)-coin-tossing protocol where parties have oracle access to the \(\text {PKE}_n\) oracle (with polynomial query complexity) is \(\varOmega \left( X(1-X)/\sqrt{r}\right) \)-unfair. This relativized separation translates into a fully black-box separation using by-now-standard techniques in this field [69]. Conceptually, this black-box separation indicates that optimal fair coin-tossing requires a hardness of computation assumption stronger than the existence of a secure public-key encryption scheme.

Gertner, Kannan, Malkin, Reingold, and Vishwanathan [27] showed that the existence of a public-key encryption scheme with additional (seemingly innocuous) properties (like the ability to efficiently sample a public-key without knowing the private-key) enables oblivious transfer. Consequently, our oracles realizing public-key encryption must avoid any property enabling oblivious transfer (even unforeseen ones). This observation highlights the subtlety underlying our technical contributions. For example, our set of oracles permits testing whether a public-key or ciphertext is valid. Without these test oracles, oblivious transfer and, in turn, optimal fair coin-tossing is possible. Surprisingly, these test oracles also suffice to rule out the possibility of oblivious transfer.

Since public-key encryption schemes imply key agreement protocols, our results prove that optimal fair coin-tossing is black-box separated from key agreement protocols as well.

Result II. Let \(f:X\times Y\rightarrow Z\) be a two-party secure symmetric function evaluation functionality, possibly with randomized output. The function takes private inputs x and y from the parties and samples an output \(z\in Z\) according to the probability distribution \(p_f(z|x,y)\). The information-theoretic f-hybrid is an information-theoretic model where parties have additional access to the (unfair) f-functionality.

Observe that if f is the (symmetrized) oblivious transfer functionality, then the Moran, Naor, and Segev protocol [61] is an optimal fair coin-tossing protocol in the (unfair) f-hybrid. More generally, if f is a functionality such that there is an oblivious transfer protocol in the f-hybrid, one can emulate the Moran, Naor, and Segev optimal coin-tossing protocol; consequently, optimal coin-tossing exists in the f-hybrid. Kilian [47] characterized all functions f such that there exists a secure oblivious transfer protocol in the f-hybrid, referred to as complete functions.

Our work explores whether a function f that is not complete may enhance the security of fair coin-tossing protocols.

Corollary 2

(Dichotomy of Functions). Let f be an arbitrary 2-party symmetric function evaluation functionality, possibly with randomized output. Then, exactly one of the following two statements holds.

  1. For all \(r\in \mathbb {N}\) and \(X\in [0,1]\), there exists an optimal (r, X)-coin-tossing protocol in the f-hybrid (a.k.a., an \(O\left( 1/r\right) \)-unfair protocol).

  2. Any (r, X)-coin-tossing protocol that uses public-key encryption protocols in a black-box manner in the f-hybrid is \(\varOmega \left( X(1-X)/\sqrt{r}\right) \)-unfair.

For example, Corollary 1 follows from this stronger result by choosing f to be a constant-valued (hence, trivial) function evaluation. For more details, refer to Theorem 6. We emphasize that, in our model, parties can perform an arbitrary number of f-invocations in parallel in every round.

Let us further elaborate on our results. Consider a function f that has a secure protocol in the information-theoretic plain model, referred to as a trivial function. For deterministic output, the full characterization of trivial functions is known [9, 49, 50, 57]. For randomized output, the characterization of trivial functions is currently unknown. Observe that trivial functions are definitely not complete; otherwise, a secure oblivious transfer protocol would exist in the information-theoretic plain model, which is impossible. For every \(t\in \mathbb {N}\), there are functions \(f_t\) such that any secure protocol for \(f_t\) requires t rounds of interactive communication in the information-theoretic plain model. For the randomized output case, the authors know of functions such that \(\left| {X} \right| =\left| {Y} \right| =2\) and \(\left| {Z} \right| = (t+1)\) that need t-round protocols for secure computation, which is part of ongoing independent research. Compiling out the \(f_t\)-hybrid using such a t-round secure computation protocol converts an r-message protocol into an \((r\cdot t)\)-message one, and, therefore, yields only an \(\varOmega \left( 1/\sqrt{r\cdot t}\right) \)-insecurity bound, which is useless for large t. Consequently, compiling out the trivial functions is inadequate.

It is also well-known that functions of intermediate complexity exist [9, 49, 50, 57], which are neither complete nor trivial (for example, the Kushilevitz function, refer to Fig. 2). In fact, there are randomized functions (refer to Fig. 3) of intermediate complexity such that \(\left| {X} \right| =\left| {Y} \right| =2\) and \(\left| {Z} \right| =3\) [23].

Fig. 3. A randomized functionality of intermediate complexity with \(X=Y=\{0,1\}\) and \(Z=\{0,1,2\}\). For instance, when \(x=0\) and \(y=0\), the distribution of the output over Z is (18/54, 18/54, 18/54), i.e., the uniform distribution over Z.

Our result shows that even an intermediate function f is useless for optimal fair coin-tossing; it is as useless as one-way functions or public-key encryption. Therefore, our results’ technical approach must treat each f-hybrid invocation as one step in the protocol. We highlight that intermediate functions are useful for securely realizing other non-trivial functionalities [58, 71]. However, for fair coin-tossing, they are useless.

1.2 Prior Works

Deterministic secure function evaluation. In this paper, we focus on two-party secure function evaluation functionalities that provide the same output to both parties. Consider a deterministic function \(f:X\times Y\rightarrow Z\). The unfair ideal functionality implementing f takes as input x and y from the two parties and delivers the output f(x, y) to the adversary. The adversary may choose to block the output delivery to the honest party, or permit the delivery of the output to the honest party.

In this document, we consider security against a semi-honest information-theoretic adversary, i.e., the adversary follows the protocol description honestly but is curious to find additional information about the other party’s private input. There are several natural characterization problems in this scenario. The functions that have perfectly secure protocols in the information-theoretic plain model, a.k.a., the trivial functions, are identical to the set of decomposable functions [9, 50]. For every \(t\in \mathbb {N}\), there are infinitely many functions that require t rounds for their secure evaluation. Interestingly, relaxing the security from perfect to statistical does not change this characterization [49, 57].

Next, Kilian [46] characterized all deterministic functions f that enable oblivious transfer in the f-hybrid, the complete functions. Any function that has an “embedded OR-minor” (refer to Definition 4) is complete. Such functions, intuitively, are the most powerful functions; they enable general secure computation of arbitrary functionalities.

The sets of trivial and complete functions are not exhaustive (for \(\left| {Z} \right| >3\) [18, 48]). There are functions of intermediate complexity, which are neither trivial nor complete (see, for example, Fig. 2). The power of the f-hybrid, for an intermediate f, was explored by [71] using restricted forms of protocols.

Randomized secure function evaluation. A two-party randomized function is a function that, upon receiving the inputs x and y, samples an output according to the distribution \(p_f(z|x,y)\) over the sample space Z. Kilian [47] characterized all complete randomized functions. Any function that has an “embedded generalized OR-minor” (refer to Definition 4) is complete. Recently, [23] characterized functions with 2-round protocols. Furthermore, even for \(\left| {X} \right| =\left| {Y} \right| =2\) and \(\left| {Z} \right| =3\), there are randomized function evaluations that are of intermediate complexity [23].

In the field of black-box separation, the seminal work of Impagliazzo and Rudich [44] first proposed the notion of black-box separations between cryptographic primitives. Since then, there have been many influential works [25,26,27,28, 69, 72, 74] in this line of research. Below, we elaborate on a few works that are most relevant to us.

Firstly, for fair coin-tossing in the random oracle model, the work of Dachman-Soled, Lindell, Mahmoody, and Malkin [21] showed that when the message complexity is small, the random oracle can be compiled away and, hence, is useless for fair coin-tossing. In another work, Dachman-Soled, Mahmoody, and Malkin [22] studied a restricted type of protocols that they called “function-oblivious” and showed that, for this particular type of protocols, random oracles cannot yield optimal fair coin-tossing. Recently, Maji and Wang [59] resolved this problem in full generality. They showed that any r-message coin-tossing protocol in the random oracle model must be \(\varOmega \left( X_0(1-X_0)/\sqrt{r}\right) \)-unfair.

In recent works, Haitner, Nissim, Omri, Shaltiel, and Silbak [36] and Haitner, Makriyannis, and Omri [35] proved that, for any constant r, the existence of an r-message fair coin-tossing protocol that is less than \(1/\sqrt{r}\)-insecure implies the existence of (infinitely often) key agreement protocols.

1.3 Technical Overview

In this section, we present a high-level overview of our proofs. We start by recalling the proofs of Maji and Wang [59].

Before we begin, we need to introduce the notion of Alice’s and Bob’s defense coins. At any instance of the protocol evolution, Alice has a private defense coin, referred to as the Alice defense coin, which she outputs if Bob aborts the protocol. Similarly, Bob has a Bob defense coin. When Alice prepares her next message of the protocol, she updates her defense coin. However, when Bob prepares his next message of the protocol, Alice’s defense coin remains unchanged. Analogously, Bob updates his defense coin when preparing his next messages in the protocol.

Abstraction of Maji and Wang [59] Technique. Consider an arbitrary fair coin-tossing protocol \(\pi ^{\mathcal O} \) where Alice and Bob have black-box access to some oracle \({\mathcal O} \). In their setting, \({\mathcal O} \) is a random oracle. Let r and X be the message complexity and the expected output of this protocol. They used an inductive approach to prove this protocol is \((c\cdot X (1-X)/\sqrt{r})\)-insecure as follows (c is a universal constant).

Fig. 4. An intuitive illustration of the approach of Maji and Wang [59].

For every possible first message of this protocol, they consider two attacks (refer to Fig. 4). Firstly, a party can attack by aborting immediately upon this first message. Secondly, a party can defer its attack to the remaining sub-protocol, which has only \(r-1\) messages. Suppose that, when the first message is \(m_i\), the remaining sub-protocol has expected output \(x_i\), and the expectations of the Alice and Bob defenses are \(a_i\) and \(b_i\), respectively. The effectiveness of the first attack is precisely

$$\begin{aligned} \left| {{x_i-a_i}} \right| +\left| {{x_i-b_i}} \right| , \end{aligned}$$

where \(\left| {{x_i-a_i}} \right| \) is the change of Alice’s output if Bob aborts, and analogously, \(\left| {{x_i-b_i}} \right| \) is the change of Bob’s output if Alice aborts. On the other hand, by the inductive hypothesis, we know the effectiveness of the second attack is at least

$$c\cdot x_i(1-x_i)/\sqrt{r-1}.$$

Now, they employed a key inequality from [45] (refer to Imported Lemma 1) and showed that the maximum of these two quantities is lower bounded by

$$\frac{c}{\sqrt{r}}\cdot \left( x_i(1-x_i)+(x_i-a_i)^2+(x_i-b_i)^2\right) .$$

Define the potential function \(\varPhi (x,a,b) :=x(1-x)+(x-a)^2+(x-b)^2\). Maji and Wang noted that if Jensen’s inequality holds, i.e.,

$$\mathop {\mathbb {E}}_{i}\left[ \varPhi (x_i,a_i,b_i)\right] \ \ge \ \varPhi \left( \mathop {\mathbb {E}}_i\left[ x_i\right] ,\mathop {\mathbb {E}}_i\left[ a_i\right] ,\mathop {\mathbb {E}}_i\left[ b_i\right] \right) ,$$
(1)

then the proof is complete. This is because the overall effectiveness of the attack is lower bounded by

$$\frac{c}{\sqrt{r}}\cdot \mathop {\mathbb {E}}_i\left[ \varPhi (x_i,a_i,b_i)\right] \ \ge \ \frac{c}{\sqrt{r}}\cdot \varPhi \left( X,A,B\right) \ \ge \ \frac{c}{\sqrt{r}}\cdot X(1-X),$$

where A and B denote the expectations of the Alice and Bob defenses at the beginning of the protocol, and the last inequality drops the (non-negative) squared terms.
To prove Eq. 1, they noted that \(\varPhi (x,a,b)\) could be rewritten as

$$\varPhi (x,a,b)=x+(x-a-b)^2-2ab.$$

Observe that x and \((x-a-b)^2\) are convex functions, and hence Jensen’s inequality holds for these terms. The only problematic term is ab. To resolve this, they noted that it suffices to have the following guarantee:

Conditioned on every partial transcript, Alice’s private view and Bob’s private view are (close to) independent.

Then we shall have \(\mathop {\mathbb {E}}_i\left[ a_i\cdot b_i\right] =\mathop {\mathbb {E}}_i\left[ a_i\right] \cdot \mathop {\mathbb {E}}_i\left[ b_i\right] \) (refer to Claim 1). Consequently, Eq. 1 shall hold, and the proof is done.
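
As a quick sanity check of this rewriting, the following snippet (ours) numerically verifies the algebraic identity on random inputs.

```python
import random

# Spot-check: x(1-x) + (x-a)^2 + (x-b)^2 == x + (x-a-b)^2 - 2ab.
for _ in range(10_000):
    x, a, b = (random.random() for _ in range(3))
    lhs = x * (1 - x) + (x - a) ** 2 + (x - b) ** 2
    rhs = x + (x - a - b) ** 2 - 2 * a * b
    assert abs(lhs - rhs) < 1e-12
print("identity holds on all sampled (x, a, b)")
```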

Note that the argument thus far is oblivious to the fact that the oracle in use is a random oracle. For any oracle \({\mathcal O} \), if we have the guarantee above, this proof will follow.

In particular, when the oracle in use is the random oracle, Maji and Wang observed that standard techniques (namely, the heavy querier [8]) ensure that Alice’s private view and Bob’s private view are (close to) independent. This completes their proof.

Extending to f-hybrid. When f is a complete function, one can build an oblivious transfer protocol in the f-hybrid model; consequently, by the MNS protocol [61], optimal fair coin-tossing does exist in the f-hybrid model.

On the other hand, if f is not complete, Kilian [47] showed that f must satisfy the cross product rule (refer to Definition 4). This implies that, conditioned on the partial transcript, which includes ideal calls to f, Alice’s and Bob’s private views are (perfectly) independent (refer to Lemma 3). Therefore, the proof strategy of Maji and Wang [59] is applicable.

Extending to Public-key Encryption. Our proof for public-key encryption follows the ideas of Mahmoody, Maji, and Prabhakaran [56]. First, we define a collection of oracles PKE\(_n\) (refer to Sect. 5.1), with respect to which public-key encryption exists. To prove that an optimal fair coin-tossing protocol does not exist, it suffices to ensure that Alice’s and Bob’s private views are (close to) independent. However, with the help of the PKE\(_n\) oracle, Alice and Bob can agree on a secret key such that a third party, Eve, who sees the transcript and may ask polynomially many queries to the oracle, cannot learn any information about the key. Consequently, it is impossible to ensure the independence of the private views by invoking only a public algorithm.

To resolve this, [56] showed that one could compile any protocol \(\pi \) in the PKE\(_n\) oracle model into a new protocol \(\pi '\) in the PKE\(_n\) oracle model where parties never query the decryption oracle (refer to Imported Theorem 1). This compiler guarantees that, given a local view of Alice (resp., Bob) in protocol \(\pi \), one could simulate the local view of Alice (resp., Bob) in protocol \(\pi '\), and vice versa. Therefore, instead of considering a fair coin-tossing protocol in the PKE\(_n\) oracle model, one could consider a fair coin-tossing protocol in the PKE\(_n\) oracle model where parties never query the decryption oracle. Furthermore, [56] showed that, when the parties do not call the decryption oracle, there does exist a public algorithm, namely the common information learner, that can find all the correlation between Alice and Bob (refer to Imported Theorem 2). Conditioned on the partial transcript together with the additional information from the common information learner, Alice’s and Bob’s private views are (close to) independent. Therefore, we can continue with the proof strategy of Maji and Wang [59].

2 Preliminaries

For a randomized function \(f:{\mathcal X} \rightarrow {\mathcal Y} \), we shall write \(f(x;s)\) for f evaluated with input x and randomness s.

We use uppercase letters for random variables, (corresponding) lowercase letters for their values, and calligraphic letters for sets. For a joint distribution (AB), A and B represent the marginal distributions, and \(A\times B\) represents the product distribution where one samples from the marginal distributions A and B independently. For two random variables A and B distributed over a (discrete) sample space \(\varOmega \), their statistical distance is defined as

$$\mathsf {SD} \left( {A}\;,\;{B}\right) :=\frac{1}{2}\sum _{\omega \in \varOmega }\left| \Pr \left[ A=\omega \right] -\Pr \left[ B=\omega \right] \right| .$$
For a sequence \((X_1,X_2,\ldots )\), we use \(X_{\le i}\) to denote the joint distribution \((X_1,X_2,\ldots ,X_i)\). Similarly, for any \((x_1,x_2,\dotsc )\in \varOmega _1\times \varOmega _2\times \cdots \), we define \(x_{\le i} :=(x_1,x_2,\ldots ,x_i)\). Let \((M_1,M_2,\ldots ,M_r)\) be a joint distribution over sample space \(\varOmega _1\times \varOmega _2\times \cdots \times \varOmega _r\), such that for any \(i\in \{1,2,\ldots ,r\}\), \(M_i\) is a random variable over \(\varOmega _i\). A (real-valued) random variable \(X_i\) is said to be \(M_{\le i}\) measurable if there exists a deterministic function \(f:\varOmega _1\times \cdots \times \varOmega _i\rightarrow {\mathbb R} \) such that \(X_i=f(M_1,\ldots ,M_i)\). A random variable \(\tau :\varOmega _1\times \cdots \times \varOmega _r\rightarrow \{1,2,\ldots ,r\}\) is called a stopping time if the random variable \(\mathbbm {1}_{\tau \le i}\) is \(M_{\le i}\) measurable, where \(\mathbbm {1}\) is the indicator function. For a more formal treatment of probability spaces, \(\sigma \)-algebras, filtrations, and martingales, refer to, for example, [73].
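
For concreteness, here is a direct transcription (ours) of the statistical distance definition for finite distributions, represented as dictionaries mapping outcomes to probabilities.

```python
def statistical_distance(p, q):
    """SD(A, B) = (1/2) * sum over the sample space of |Pr[A=w] - Pr[B=w]|.
    Missing keys are treated as probability 0."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in support)

# A fair coin and a 0.6-biased coin are at statistical distance 0.1.
print(statistical_distance({0: 0.5, 1: 0.5}, {0: 0.4, 1: 0.6}))  # 0.1
```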

The following inequality shall be helpful for our proof.

Theorem 1

(Jensen’s inequality). If f is a multivariate convex function, then \(\mathop {\mathbb {E}}\left[ f(\mathbf {X})\right] \ge f\left( \mathop {\mathbb {E}}\left[ \mathbf {X}\right] \right) \), for all probability distributions \(\mathbf {X}\) over the domain of f.

In particular, \(f(x,y,z)=(x-y-z)^2\) is a tri-variate convex function to which Jensen’s inequality applies.
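
A quick numeric spot-check (ours) of this instance of Jensen’s inequality on random two-point distributions:

```python
import random

f = lambda x, y, z: (x - y - z) ** 2  # a convex function of (x, y, z)

for _ in range(10_000):
    u = [random.random() for _ in range(3)]
    v = [random.random() for _ in range(3)]
    t = random.random()  # two-point distribution: u w.p. t, v w.p. 1 - t
    mean = [t * a + (1 - t) * b for a, b in zip(u, v)]
    assert t * f(*u) + (1 - t) * f(*v) >= f(*mean) - 1e-12
print("E[f(X)] >= f(E[X]) held on all sampled two-point distributions")
```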

3 Fair Coin-Tossing Protocol in the f-hybrid Model

Let \(f:{\mathcal X} \times {\mathcal Y} \rightarrow {\mathcal Z} \) be an arbitrary (possibly randomized) function. As is standard in the literature, we shall restrict to f such that the input domains \({\mathcal X} \) and \({\mathcal Y} \) and the range \({\mathcal Z} \) are of constant size. A two-party protocol in the f-hybrid model is defined as follows.

Definition 1

(f-hybrid Model [15, 53]). A protocol between Alice and Bob in the f-hybrid model is identical to a protocol in the plain model except that both parties have access to a trusted party realizing f. At any point during the execution, the protocol specifies which party is supposed to speak.

  • Alice/Bob message. If Alice is supposed to speak, she shall prepare her next message as a deterministic function of her private randomness and the partial transcript. If Bob is supposed to speak, his message is prepared in a similar manner.

  • Trusted party message. At some point during the execution, the protocol might specify that the trusted party shall speak next. In this case, the protocol shall also specify a natural number \(\ell \), which indicates how many instances of f the trusted party should compute. Alice (resp., Bob) will prepare her inputs \(\mathbf {x} = (x_1,\ldots ,x_\ell )\) (resp., his inputs \(\mathbf {y}=(y_1,\ldots ,y_\ell )\)) and send them privately to the trusted party. The trusted party shall compute \((f(x_1,y_1),\ldots ,f(x_\ell ,y_\ell ))\) and send it as the next message.

In this paper, we shall restrict to fail-stop adversarial behavior.

Definition 2

(Fail-stop Attacker in the f-hybrid Model). A fail-stop attacker follows the protocol honestly but might prematurely abort. She might decide to abort when it is her turn to speak. Furthermore, during a trusted party message, she shall always receive the trusted party message first and, based on this message, decide whether to abort or not. If she decides to abort, this action prevents the other party from receiving the trusted party message.

In particular, we shall focus on fair coin-tossing protocols in the f-hybrid model.

Definition 3

(Fair Coin-tossing in the f-hybrid Model). An \((X_0,r)\)-fair coin-tossing in the f-hybrid model is a two-party protocol between Alice and Bob in the f-hybrid model such that it satisfies the following.

  • \(X_0\) -Expected Output. At the end of the protocol, parties always agree on the output \(\in {\{0,1\}} \) of the protocol. The expectation of the output of an honest execution is \(X_0\in (0,1)\).

  • r -Message Complexity. The total number of messages of the protocol is (at most) r. This includes both the Alice/Bob message and the trusted party message.

  • Defense Preparation. Anytime a party speaks, she shall also prepare a defense coin based on her private randomness and the partial transcript. Her latest defense coin shall be her output when the other party decides to abort. To ensure that parties always have a defense to output, they shall prepare a defense before the protocol begins.

  • Insecurity. The insecurity is defined as the maximum change a fail-stop adversary can cause to the expectation of the other party’s output.

For any (randomized) functionality f, Kilian [47] proved that if f does not satisfy the following cross product rule, then f is complete for information-theoretic semi-honest adversaries. That is, for any functionality g, there is a protocol in the f-hybrid model that realizes g and is secure against information-theoretic semi-honest adversaries. In particular, this implies that there is a protocol in the f-hybrid model that realizes oblivious transfer.

Definition 4

(Cross Product Rule). A (randomized) functionality \(f:{\mathcal X} \times {\mathcal Y} \rightarrow {\mathcal Z} \) is said to satisfy the cross product rule if for all \(x_0,x_1\in {\mathcal X} \), \(y_0,y_1\in {\mathcal Y} \), and \(z\in {\mathcal Z} \) such that

$$p_f(z|x_0,y_0)\cdot p_f(z|x_0,y_1)\cdot p_f(z|x_1,y_0) \ > \ 0,$$

we have

$$p_f(z|x_0,y_0)\cdot p_f(z|x_1,y_1) \ = \ p_f(z|x_0,y_1)\cdot p_f(z|x_1,y_0).$$
We recall the MNS protocol by Moran, Naor, and Segev [61]. The MNS protocol makes black-box use of oblivious transfer as a subroutine to construct optimal-fair coin-tossing protocols. In particular, their protocol enjoys the property that any fail-stop attack during the oblivious transfer subroutine is entirely ineffective. Therefore, the MNS protocol, combined with the results of Kilian [47], gives us the following theorem.

Theorem 2

([47, 61]). Let f be a (randomized) functionality that is complete. For any \(X_0\in (0,1)\) and \(r\in \mathbb {N}\), there is an \((X_0,r)\)-fair coin-tossing protocol in the f-hybrid model that is (at most) \(O\left( 1/r\right) \)-insecure against fail-stop attackers.

Remark 2 (On the necessity of the unfairness of f)

We emphasize that it is necessary that, in the f-hybrid model, f is realized unfairly. That is, the adversary receives the output of f before the honest party does. If f were realized fairly, i.e., both parties receive the output simultaneously, it would be possible to construct perfectly-secure fair coin-tossing. For instance, let f be the \(\mathsf {XOR}\) function. Consider the protocol where Alice samples \(x\xleftarrow {\$} {\{0,1\}} \), Bob samples \(y\xleftarrow {\$} {\{0,1\}} \), and the trusted party broadcasts f(x, y), which is the final output of the protocol. One can trivially verify that this protocol is perfectly-secure.
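
A minimal simulation sketch (ours) of this fair-hybrid protocol; since the broadcast is atomic, aborting before it blocks nothing useful, and aborting after it cannot un-deliver the output.

```python
import random

def fair_xor_coin_toss():
    """Remark 2's protocol in a *fair* XOR-hybrid: the trusted party delivers
    f(x, y) = x XOR y to both parties simultaneously, so a fail-stop abort can
    neither block nor bias the honest party's (uniform) output."""
    x = random.randint(0, 1)  # Alice's private input to the trusted party
    y = random.randint(0, 1)  # Bob's private input to the trusted party
    return x ^ y              # broadcast atomically to both parties

print(sum(fair_xor_coin_toss() for _ in range(100_000)) / 100_000)  # ~0.5
```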

Intuitively, the results of Kilian [47] and Moran, Naor, and Segev [61] showed that when f is a functionality that does not satisfy the cross product rule, a secure protocol realizing f can be used to construct optimal-fair coin-tossing.

In this work, we complement the above results by showing that when f is a functionality that does satisfy the cross product rule, a fair coin-tossing protocol in the f-hybrid model is (qualitatively) as insecure as a fair coin-tossing protocol in the information-theoretic model. In other words, f is completely useless for fair coin-tossing. Our results are summarized as the following theorem.

Theorem 3

(Main Theorem for f-hybrid). Let f be a randomized functionality that is not complete. Any \((X_0,r)\)-fair coin-tossing protocol in the f-hybrid model is (at least) \(\varOmega \left( X_0(1-X_0)/\sqrt{r}\right) \)-insecure.

4 Proof of Theorem 3

4.1 Properties of Functionalities

Let f be a functionality that satisfies the cross product rule. We start by observing some properties of f. Firstly, let us recall the following definition.

Definition 5

(Function Isomorphism [57]). Let \(f:{\mathcal X} \times {\mathcal Y} \rightarrow {\mathcal Z} \) and \(g:{\mathcal X} \times {\mathcal Y} \rightarrow {\mathcal Z} '\) be any two (randomized) functionalities. We say \(f\le g\) if there exist deterministic mappings \(M_\mathsf {A} :{\mathcal X} \times {\mathcal Z} '\rightarrow {\mathcal Z} \) and \(M_\mathsf {B} :{\mathcal Y} \times {\mathcal Z} '\rightarrow {\mathcal Z} \) such that, for all \(x\in {\mathcal X} \), \(y\in {\mathcal Y} \), and randomness s,

$$M_\mathsf {A} \left( x,g(x,y;s)\right) =M_\mathsf {B} \left( y,g(x,y;s)\right) $$

and

$$\mathsf {SD} \left( {f(x,y)}\;,\;{M_\mathsf {A} \left( x,g(x,y)\right) }\right) =0.$$

We say f and g are isomorphic (i.e., \(f\cong g\)) if \(f\le g\) and \(g\le f\).

Intuitively, f and g are isomorphic if securely computing f can be realized by one ideal call to g without any further communication, and vice versa. As an example, the (deterministic) \(\mathsf {XOR}\) functionality \({\begin{bmatrix}0&{}1\\ 1&{}0\end{bmatrix}}\) is isomorphic to \({\begin{bmatrix}0&{}1\\ 2&{}3\end{bmatrix}}\).

Given two isomorphic functionalities f and g, it is easy to see that there is a natural bijection between protocols in the f-hybrid model and g-hybrid model.

Lemma 1

Let f and g be two functionalities such that \(f\cong g\). For every fair coin-tossing protocol \(\pi \) in the f-hybrid model, there is a fair coin-tossing protocol \(\pi '\) in the g-hybrid model such that

  • \(\pi \) and \(\pi '\) have the same message complexity r and expected output \(X_0\).

  • For every fail-stop attack strategy for \(\pi \), there exists a fail-stop attack strategy for \(\pi '\) such that the insecurities they cause are identical and vice versa.

Proof (Sketch)

Given any protocol \(\pi \) in the f-hybrid model between \(\mathsf {A}\) and \(\mathsf {B}\), consider the protocol \(\pi '\) in the g-hybrid model between \(\mathsf {A} '\) and \(\mathsf {B} '\). In \(\pi '\), \(\mathsf {A} '\) simply simulates \(\mathsf {A}\), except that, when the trusted party sends the output of g, \(\mathsf {A} '\) uses the mapping \(M_\mathsf {A} \) to recover the output of f and feeds it to \(\mathsf {A}\). \(\mathsf {B} '\) behaves similarly. One can easily verify that these two protocols have the same message complexity and expected output. Additionally, for every fail-stop adversary \(\mathsf {A} ^*\) for \(\pi \), there is a fail-stop adversary \(\left( \mathsf {A} ^*\right) '\) for \(\pi '\) that simulates \(\mathsf {A} ^*\) in the same manner and deviates the output of Bob by the same amount.

We are now ready to state our next lemma.

Lemma 2

(Maximally Renaming the Outputs of f). Let \(f:{\mathcal X} \times {\mathcal Y} \rightarrow {\mathcal Z} \) be a (randomized) functionality that is not complete. There exists a functionality \(f':{\mathcal X} \times {\mathcal Y} \rightarrow {\mathcal Z} '\) such that \(f\cong f'\) and \(f'\) satisfies the following strict cross product rule. That is, for all \(x_0,x_1\in {\mathcal X} \), \(y_0,y_1\in {\mathcal Y} \), and \(z'\in {\mathcal Z} '\), we have

$$p_{f'}(z'|x_0,y_0)\cdot p_{f'}(z'|x_1,y_1) \ = \ p_{f'}(z'|x_0,y_1)\cdot p_{f'}(z'|x_1,y_0).$$
The proof of this lemma follows from a standard argument. We refer the reader to the full version for a complete proof.

Following the example above, the \(\mathsf {XOR}\) functionality \(\begin{bmatrix}0&{}1\\ 1&{}0\end{bmatrix}\) satisfies the cross product rule, i.e., \(\mathsf {XOR}\) is not complete, but it does not satisfy the strict cross product rule since

$$p_{\mathsf {XOR}}(0|0,0)\cdot p_{\mathsf {XOR}}(0|1,1) \ = \ 1 \ \ne \ 0 \ = \ p_{\mathsf {XOR}}(0|0,1)\cdot p_{\mathsf {XOR}}(0|1,0).$$

On the other hand, the functionality \({\begin{bmatrix}0&{}1\\ 2&{}3\end{bmatrix}}\) is isomorphic to \(\mathsf {XOR}\) and does satisfy the strict cross product rule.

By Lemma 1, the insecurity of a fair coin-tossing protocol in the f-hybrid model is identical to that of the corresponding protocol in the \(f'\)-hybrid model when \(f\cong f'\). Therefore, in the rest of this section, without loss of generality, we shall always assume that f is maximally renamed according to Lemma 2 so that it satisfies the strict cross product rule.

4.2 Notations and the Technical Theorem

Let \(\pi \) be an \((X_0,r)\)-fair coin-tossing protocol in the f-hybrid model. We shall use \(R^\mathsf {A} \) and \(R^\mathsf {B} \) to denote the private randomness of Alice and Bob. We use random variable \(M_i\) to denote the \(i^{th}\) message of the protocol, which could be either an Alice/Bob message or a trusted party message. Let \(X_i\) be the expected output of the protocol conditioned on the first i messages of the protocol. In particular, this definition is consistent with the definition of \(X_0\).

For an arbitrary i, we consider both Alice aborting and Bob aborting at the \(i^{th}\) message. Suppose the \(i^{th}\) message is Alice’s message. An Alice abort means that she aborts without sending this message to Bob. Conversely, a Bob abort means he aborts in his next message, immediately after receiving this message. On the other hand, if this is a trusted party message, then both a fail-stop Alice and a fail-stop Bob can abort at this message, which prevents the other party from receiving the message. We refer to the defense output of Alice when Bob aborts the \(i^{th}\) message as Alice’s \(i^{th}\) defense. Similarly, we define the \(i^{th}\) defense of Bob. Let \(D^\mathsf {A} _i\) (resp., \(D^\mathsf {B} _i\)) be the expectation of Alice’s (resp., Bob’s) \(i^{th}\) defense conditioned on the first i messages.

Now, we are ready to define our score function.

Definition 6

Let \(\pi \) be a fair coin-tossing protocol in the f-hybrid model with message complexity r. Let \(\tau \) be a stopping time, and let \(\mathsf {P}\in \{\mathsf {A},\mathsf {B},\mathsf {T}\}\) be the party who sends the last message. We define the score function as follows.

$$\mathsf {Score}\left( {\pi ,\tau }\right) :=\mathop {\mathbb {E}}\left[ \left| X_\tau -D^\mathsf {B} _\tau \right| \cdot \mathbbm {1}_{\{\tau<r\}\vee \{\mathsf {P}\ne \mathsf {B}\}}\ +\ \left| X_\tau -D^\mathsf {A} _\tau \right| \cdot \mathbbm {1}_{\{\tau <r\}\vee \{\mathsf {P}\ne \mathsf {A}\}}\right] .$$
The following remarks, similar to [45, 59], provide additional perspectives.

Remark 3

  1. In the information-theoretic plain model, for every message of the protocol, one usually considers only the attack by the sender of this message. The attack by the receiver, who may abort immediately after receiving this message, is usually ineffective because the sender is not lagging behind in terms of the progress of the protocol. However, in the f-hybrid model, we have trusted party messages, which reveal information regarding both parties’ private randomness. Therefore, both parties’ defenses may lag behind, and both parties’ attacks could be effective. Hence, in our definition of the score function, for every message we pick in the stopping time, we consider the effectiveness of both parties’ attacks.

  2. The last message of the protocol is a boundary case of the above argument. Suppose Alice sends the last message of the protocol; then Bob does not have the opportunity to abort after receiving this message. Similarly, if this is a Bob message, Alice cannot attack this message. On the other hand, if the last message is a trusted party message, then both parties could potentially attack this message. This explains the indicator function in our definition.

  3. Finally, given a stopping time \(\tau ^*\) that witnesses a high score, we can always find a fail-stop attack strategy that deviates the expected output of the other party by \(\frac{1}{4}\cdot \mathsf {Score}\left( {\pi ,\tau ^*}\right) \) in the following way. For Alice, we partition the stopping time \(\tau ^*\) by considering whether \(X_\tau \ge D^\mathsf {B} _{\tau }\) or not. Similarly, we partition \(\tau ^*\) for Bob. These four attacks correspond to either Alice or Bob favoring either 0 or 1. The effectiveness of these four attacks sums up to \(\mathsf {Score}\left( {\pi ,\tau ^*}\right) \). Hence, one of these four fail-stop attacks must be at least \(\frac{1}{4}\cdot \mathsf {Score}\left( {\pi ,\tau ^*}\right) \)-effective.

The score function measures the effectiveness of the fail-stop attack corresponding to a stopping time \(\tau \). We are interested in the effectiveness of the most devastating fail-stop attacks. This motivates the following definition.

Definition 7

Let \(\pi \) be a fair coin-tossing protocol in the f-hybrid model. Define

$$\mathsf {Opt}\left( {\pi }\right) :=\max _{\text {stopping time } \tau }\ \mathsf {Score}\left( {\pi ,\tau }\right) .$$
Now, we are ready to state our main theorem, which shows that the most devastating fail-stop attack is guaranteed to achieve a high score. In light of the remarks above, Theorem 4 directly implies Theorem 3.

Theorem 4

For any \((X_0,r)\)-fair coin-tossing protocol \(\pi \) in the f-hybrid model, we have

$$\mathsf {Opt}\left( {\pi }\right) \ge \varGamma _r\cdot X_0\left( 1-X_0\right) ,$$

where \(\varGamma _r :=\frac{1}{2\sqrt{r}}\).

4.3 Inductive Proof of Theorem 4

In this section, we shall prove Theorem 4 by using mathematical induction on the message complexity r. Let us first state some useful lemmas.

Firstly, we note that in the f-hybrid model, where f is a (randomized) functionality satisfying the strict cross product rule, Alice’s view and Bob’s view are always independent conditioned on the partial transcript.

Lemma 3

(Independence of Alice’s and Bob’s Views). For any i and partial transcript \(m_{\le i}\), conditioned on this partial transcript, the joint distribution of Alice’s and Bob’s private randomness is identical to the product of the marginal distributions. That is,

$$\mathsf {SD} \left( {\left( R^\mathsf {A} ,R^\mathsf {B} \right) \big |\,m_{\le i}}\;,\;{\left( R^\mathsf {A} \big |\,m_{\le i}\right) \times \left( R^\mathsf {B} \big |\,m_{\le i}\right) }\right) =0.$$
In particular, this lemma implies the following claim.

Claim 1

Let \(\pi \) be an arbitrary fair coin-tossing protocol in the f-hybrid model. Suppose there are \(\ell \) possible first messages, namely, \(m_1^{(1)},m_1^{(2)},\ldots ,m_1^{(\ell )}\), which happen with probabilities \(p^{(1)},p^{(2)},\ldots ,p^{(\ell )}\), respectively. Suppose, conditioned on the first message being \(M_1=m_1^{(i)}\), the expected defenses of Alice and Bob are \(d_1^{\mathsf {A},(i)}\) and \(d_1^{\mathsf {B},(i)}\), respectively. Then we have

$$\sum _{i=1}^\ell p^{(i)}\cdot d_1^{\mathsf {A},(i)}d_1^{\mathsf {B},(i)} = D^\mathsf {A} _0\cdot D^\mathsf {B} _0.$$

Lemma 3 and Claim 1 can be proven in a straightforward manner. We omit the proofs due to space constraints; they can be found in the full version. Finally, the following lemma from [45] shall be helpful as well.

Imported Lemma 1

([45]). For all \(P\in [0,1]\) and \(Q \in [0,1/2]\), if P and Q satisfy that

$$Q\le \frac{P}{1+P^2},$$

then for all \(x,\alpha ,\beta \in [0,1]\), we have

$$\max \left( \left| x-\alpha \right| +\left| x-\beta \right| \;,\;P\cdot x(1-x)\right) \ \ge \ Q\cdot \left( x(1-x)+(x-\alpha )^2+(x-\beta )^2\right) .$$
In particular, for any integer \(r\ge 1\), the constraints are satisfied if we set \(P=\varGamma _{r}\) and \(Q=\varGamma _{r+1}\), where \(\varGamma _r :=\frac{1}{2\sqrt{r}}\).
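
One can mechanically confirm that this candidate sequence satisfies the lemma’s constraint; the following check (ours) does so numerically.

```python
import math

gamma = lambda r: 1 / (2 * math.sqrt(r))  # the sequence Γ_r from above

# Imported Lemma 1 requires Q <= P / (1 + P^2) with P = Γ_r, Q = Γ_{r+1};
# squaring both sides, this reduces to 1 <= 8r, which holds for all r >= 1.
for r in list(range(1, 1000)) + [10**6, 10**9]:
    p, q = gamma(r), gamma(r + 1)
    assert q <= p / (1 + p * p), r
print("Q <= P/(1+P^2) holds on all sampled r")
```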

Base case: \(r=1\). We are now ready to prove Theorem 4. Let us start with the base case, where the protocol consists of only one message. Recall that the last message of the protocol is a boundary case of our score function: it might not be the case that both parties can attack this message. Hence, we prove the base case by considering the three possible senders of this message.

Case 1: Alice message. Suppose this message is an Alice message. In this case, we shall only consider the attack by Alice. By definition, with probability \(X_0\), Alice will send a message conditioned on which the output shall be 1; and with probability \(1-X_0\), Alice will send a message conditioned on which the output shall be 0. On the other hand, the expectation of Bob’s defense remains \(D^\mathsf {B} _0\). Therefore, the maximum of the score shall be

$$X_0\cdot \left| {{1-D^\mathsf {B} _0}} \right| +\left( 1-X_0\right) \cdot \left| {{0-D^\mathsf {B} _0}} \right| ,$$

which is

$$\ge X_0\left( 1-X_0\right) .$$

In particular, this is

$$\ge \varGamma _1\cdot X_0\left( 1-X_0\right) .$$

Case 2: Bob message. Suppose this message is a Bob message. This case is entirely analogous to Case 1, except that we consider the attack by Bob.

Case 3: Trusted party message. In this case, we shall consider the effectiveness of the attacks by both parties. Suppose there are \(\ell \) possible first messages by the trusted party, namely, \(m_1^{(1)},m_1^{(2)},\ldots ,m_1^{(\ell )}\), which happen with probabilities \(p^{(1)},p^{(2)},\ldots ,p^{(\ell )}\), respectively. Conditioned on the first message being \(M_1 = m_1^{(i)}\), the output of the protocol is \(x_1^{(i)}\). We must have \(x_1^{(i)}\in {\{0,1\}} \) since the protocol has ended and parties agree on the output. Furthermore, let the expected defenses of Alice and Bob be \(d_1^{\mathsf {A},(i)}\) and \(d_1^{\mathsf {B},(i)}\). Therefore, the maximum of the score will be

$$\sum _{i=1}^\ell p^{(i)}\cdot \left( \left| {{x_1^{(i)}-d_1^{\mathsf {A},(i)}}} \right| + \left| {{x_1^{(i)}-d_1^{\mathsf {B},(i)}}} \right| \right) .$$

We have

$$\sum _{i=1}^\ell p^{(i)}\cdot \left( \left| {x_1^{(i)}-d_1^{\mathsf {A},(i)}} \right| + \left| {x_1^{(i)}-d_1^{\mathsf {B},(i)}} \right| \right) \ \ge \ \sum _{i=1}^\ell p^{(i)}\cdot \left( x_1^{(i)}\left( 1-x_1^{(i)}\right) +\left( x_1^{(i)}-d_1^{\mathsf {A},(i)}\right) ^2+\left( x_1^{(i)}-d_1^{\mathsf {B},(i)}\right) ^2\right) \ \ge \ X_0\left( 1-X_0\right) \ \ge \ \varGamma _1\cdot X_0\left( 1-X_0\right) ,$$

where the first inequality uses \(x_1^{(i)}\in {\{0,1\}} \) (so that \(x_1^{(i)}(1-x_1^{(i)})=0\) and \(\left| t \right| \ge t^2\) for \(t\in [-1,1]\)), and the second inequality follows from Jensen’s inequality together with Claim 1.
This completes the proof of the base case.

Inductive Step. Suppose the statement is true for message complexity r. Let \(\pi \) be an arbitrary protocol with message complexity \(r+1\). Suppose there are \(\ell \) possible first messages, namely, \(m_1^{(1)},m_1^{(2)},\ldots ,m_1^{(\ell )}\), which happen with probabilities \(p^{(1)},p^{(2)},\ldots ,p^{(\ell )}\), respectively. Conditioned on the first message being \(M_1 = m_1^{(i)}\), the expected output of the protocol is \(x_1^{(i)}\), and the expected defenses of Alice and Bob are \(d_1^{\mathsf {A},(i)}\) and \(d_1^{\mathsf {B},(i)}\), respectively. Note that, conditioned on the first message being \(M_1 = m_1^{(i)}\), the remaining protocol \(\pi ^{(i)}\) becomes a protocol with expected output \(x_1^{(i)}\) and message complexity r. By our inductive hypothesis, we have

$$\mathsf {Opt}\left( {\pi ^{(i)}}\right) \ge \varGamma _r\cdot x_1^{(i)}\left( 1- x_1^{(i)}\right) .$$

On the other hand, we could also pick the first message \(m_1^{(i)}\) as our stopping time, which yields a score of

$$ \left| {{ x_1^{(i)}-d_1^{\mathsf {A},(i)}}} \right| + \left| {{x_1^{(i)}-d_1^{\mathsf {B},(i)}}} \right| .$$

Therefore, the stopping time that witnesses the largest score yields (at least) a score of

$$\sum _{i=1}^\ell p^{(i)}\cdot \max \left( \left| {x_1^{(i)}-d_1^{\mathsf {A},(i)}} \right| + \left| {x_1^{(i)}-d_1^{\mathsf {B},(i)}} \right| \;,\;\varGamma _r\cdot x_1^{(i)}\left( 1-x_1^{(i)}\right) \right) .$$
Therefore, by Imported Lemma 1 (with \(P=\varGamma _r\) and \(Q=\varGamma _{r+1}\)), Jensen’s inequality, and Claim 1, \(\mathsf {Opt}\left( {\pi }\right) \) is lower bounded by

$$\varGamma _{r+1}\cdot \sum _{i=1}^\ell p^{(i)}\cdot \left( x_1^{(i)}\left( 1-x_1^{(i)}\right) +\left( x_1^{(i)}-d_1^{\mathsf {A},(i)}\right) ^2+\left( x_1^{(i)}-d_1^{\mathsf {B},(i)}\right) ^2\right) \ \ge \ \varGamma _{r+1}\cdot X_0\left( 1-X_0\right) .$$
This completes the proof of the inductive step.

5 Black-Box Use of Public-Key Encryption is Useless for Optimal Fair Coin-Tossing

In this section, we prove that public-key encryption used in a black-box manner does not enable optimal fair coin-tossing. Our objective is to prove the existence of an oracle with respect to which public-key encryption exists, but optimal fair coin-tossing does not.

5.1 Public-Key Encryption Oracles

Let n be the security parameter. We follow the work of [56] and define the following set of functions.

  • \(\mathsf {Gen}:{\{0,1\}} ^n\rightarrow {\{0,1\}} ^{3n}\). This function is a random injective function.

  • \(\mathsf {Enc}:{\{0,1\}} ^{3n}\times {\{0,1\}} ^n\rightarrow {\{0,1\}} ^{3n}\). This function is sampled uniformly at random among all functions that are injective with respect to the second input. That is, when the first input is fixed, this function is injective.

  • \(\mathsf {Dec}:{\{0,1\}} ^n\times {\{0,1\}} ^{3n}\rightarrow {\{0,1\}} ^n\cup \{\bot \}\). This function is uniquely determined by the functions \(\mathsf {Gen}\) and \(\mathsf {Enc}\) as follows. \(\mathsf {Dec}\) takes as inputs a secret key \(sk\in {\{0,1\}} ^n\) and a ciphertext \(c\in {\{0,1\}} ^{3n}\). If there exists a message \(m\in {\{0,1\}} ^n\) such that \(\mathsf {Enc}(\mathsf {Gen}(sk),m)=c\), define \(\mathsf {Dec}(sk,c) :=m\). Otherwise, define \(\mathsf {Dec}(sk,c) :=\bot \). Note that such a message m, if it exists, must be unique, because \(\mathsf {Enc}\) is injective with respect to the second input.

  • \(\mathsf {Test}_1:{\{0,1\}} ^{3n}\rightarrow {\{0,1\}} \). This function is uniquely determined by the function \(\mathsf {Gen}\). It takes as input a \(pk\in {\{0,1\}} ^{3n}\). If there exists an \(sk\in {\{0,1\}} ^n\) such that \(\mathsf {Gen}(sk)=pk\), define \(\mathsf {Test}_1(pk) :=1\). Otherwise, define \(\mathsf {Test}_1(pk) :=0\).

  • \(\mathsf {Test}_2:{\{0,1\}} ^{3n}\times {\{0,1\}} ^{3n}\rightarrow {\{0,1\}} \). This function is uniquely determined by the function \(\mathsf {Enc}\). It takes as inputs a \(pk\in {\{0,1\}} ^{3n}\) and a ciphertext \(c\in {\{0,1\}} ^{3n}\). If there exists a message m such that \(\mathsf {Enc}(pk,m)=c\), define \(\mathsf {Test}_2(pk,c) :=1\). Otherwise, define \(\mathsf {Test}_2(pk,c) :=0\).

We shall refer to this collection of oracles as the PKE oracle. Trivially, the PKE oracle enables public-key encryption. We shall prove that it does not enable optimally-fair coin-tossing.
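
The following lazily sampled toy implementation (ours; the real oracles are uniformly random functions, and this sketch’s test and decryption routines are only consistent with the queries made so far) illustrates how the five oracles fit together.

```python
import random

class PKEOracle:
    """Toy, lazily sampled sketch of the PKE oracle collection for small n."""
    def __init__(self, n, seed=0):
        self.n, self.rng = n, random.Random(seed)
        self.gen = {}    # sk (n bits) -> pk (3n bits), injective
        self.enc = {}    # pk -> {m (n bits) -> c (3n bits)}, injective in m

    def _fresh(self, used):
        while True:
            v = self.rng.getrandbits(3 * self.n)
            if v not in used:
                return v

    def Gen(self, sk):
        if sk not in self.gen:
            self.gen[sk] = self._fresh(set(self.gen.values()))
        return self.gen[sk]

    def Enc(self, pk, m):
        table = self.enc.setdefault(pk, {})
        if m not in table:
            table[m] = self._fresh(set(table.values()))
        return table[m]

    def Dec(self, sk, c):        # returns the message, or None standing for ⊥
        table = self.enc.get(self.Gen(sk), {})
        return next((m for m, c2 in table.items() if c2 == c), None)

    def Test1(self, pk):         # is pk a valid public key (seen so far)?
        return int(pk in self.gen.values())

    def Test2(self, pk, c):      # is c a valid ciphertext under pk (seen so far)?
        return int(c in self.enc.get(pk, {}).values())

O = PKEOracle(n=16)
pk = O.Gen(0xBEEF)
c = O.Enc(pk, 42)
assert O.Dec(0xBEEF, c) == 42 and O.Test1(pk) == 1 and O.Test2(pk, c) == 1
```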

Remark 4

We stress that it is necessary to include the test functions \(\mathsf {Test}_1\) and \(\mathsf {Test}_2\). As shown by [27, 54], public-key encryption with additional features could be used to construct oblivious transfer protocols, which, in turn, could be used to construct optimally-fair coin-tossing protocols [61]. [56] proved that, with the test functions \(\mathsf {Test}_1\) and \(\mathsf {Test}_2\), Alice’s and Bob’s private views can only be correlated as a disjoint union of independent views, which is not sufficient to realize oblivious transfer. We refer the readers to [56] for more details.

5.2 Our Results

We shall prove the following theorem.

Theorem 5

(Main theorem for PKE Oracle). There exists a universal polynomial \(p(\cdot ,\cdot ,\cdot ,\cdot )\) such that the following holds. Let \(\pi \) be any fair coin-tossing protocol in the PKE oracle model, where Alice and Bob make at most m queries. Let \(X_0\) be the expected output, and r be the message complexity of \(\pi \). There exists an (information-theoretic) fail-stop attacker that deviates the expected output of the other party by (at least)

$$\varOmega \left( \frac{X_0\left( 1-X_0\right) }{\sqrt{r}}\right) .$$
This attacker shall ask at most \(p\left( n,m,r,\frac{1}{X_0\left( 1-X_0\right) }\right) \) additional queries.

It is instructive to understand why Theorem 3 does not imply Theorem 5. One may be tempted to model the public-key encryption primitive as an idealized secure function evaluation functionality to prove this implication. The idealized functionality for public-key encryption delivers the sender’s message to the receiver while hiding it from the eavesdropper. So, the “idealized public-key encryption” functionality is a three-party functionality where the sender’s input is delivered to the receiver; the eavesdropper has no input or output. This idealized effect is easily achieved given secure point-to-point communication channels, which we assume in our work. The non-triviality here is that our result is with respect to an oracle that implements the public-key encryption functionality. An oracle for public-key encryption is not necessarily used just for secure message passing. Section 6 discusses the difference between an “ideal functionality” and an “oracle implementing the ideal functionality.”

Remark 5

As is usual in the literature [21, 22, 59], we shall only consider instant protocols. That is, once a party aborts, the other party shall not make any additional queries to defend; she directly outputs her current defense coin. We refer the reader to [21] for justification and more details on this assumption.

In fact, our proof technique is sufficient to prove the following stronger theorem.

Theorem 6

There exists a universal polynomial \(p(\cdot ,\cdot ,\cdot ,\cdot )\) such that the following holds. Let f be any (randomized) functionality that is not complete. Let \(\pi \) be any fair coin-tossing protocol in the f-hybrid model where parties additionally have access to the PKE oracle. Assume Alice and Bob make at most m queries. Let \(X_0\) be the expected output, and r be the message complexity of \(\pi \). There exists an (information-theoretic) fail-stop attacker that deviates the expected output of the other party by (at least)

$$\varOmega \left( \frac{X_0\left( 1-X_0\right) }{\sqrt{r}}\right) .$$
This attacker shall ask at most \(p\left( n,m,r,\frac{1}{X_0\left( 1-X_0\right) }\right) \) additional queries.

Our proof strategy consists of two steps, similar to that of [56].

  1. Given a protocol in the PKE oracle model, we first convert it into a protocol where parties do not make any decryption queries. By Imported Theorem 1, proven in [56], we can convert it in a way such that the insecurity of these two protocols in the presence of a semi-honest adversary is (almost) identical. In particular, this ensures that the insecurity of the fair coin-tossing protocol in the presence of a fail-stop adversary is (almost) identical.

  2. Next, we extend the results of [59], where they proved that a fair coin-tossing protocol in the random oracle model is highly insecure, to the setting of PKE oracles without the decryption oracle. Intuitively, the proof of [59] relied only on the fact that, in the random oracle model, there exists a public algorithm [8] that asks polynomially many queries and decorrelates the private views of Alice and Bob. Mahmoody, Maji, and Prabhakaran [56] proved (summarized as Imported Theorem 2) that the PKE oracles without the decryption oracle satisfy a similar property. Hence, the proof of [59] extends naturally to this setting.

Together, these two steps prove Theorem 5. The first step is summarized in Sect. 5.3. The second step is summarized in Sect. 5.4.

5.3 Reduction from PKE Oracle to Image Testable Random Oracle

A (keyed version of) image-testable random oracle is a collection of pairs of oracles \(\left( \mathsf {RO}_k,\mathsf {Test}_k\right) \), parameterized by a key k, such that the following holds.

  • \(\mathsf {RO}_k:{\{0,1\}} ^{n}\rightarrow {\{0,1\}} ^{3n}\) is a randomly sampled injective function.

  • \(\mathsf {Test}_k:{\{0,1\}} ^{3n}\rightarrow {\{0,1\}} \) is uniquely determined by the function \(\mathsf {RO}_k\) as follows. Define \(\mathsf {Test}_k(\beta ) :=1\) if there exists an \(\alpha \in {\{0,1\}} ^{n}\) such that \(\mathsf {RO}_k(\alpha )=\beta \). Otherwise, define \(\mathsf {Test}_k(\beta ) :=0\).

Observe that the PKE oracle without the decryption oracle \(\mathsf {Dec}\) is exactly a (keyed version of) image-testable random oracle with the keys drawn from \(\{\bot \}\cup {\{0,1\}} ^{3n}\). If the key is \(\bot \), it refers to the pair of oracles \(\left( \mathsf {Gen},\mathsf {Test}_1\right) \). If the key is \(pk\in {\{0,1\}} ^{3n}\), it refers to the pair of oracles \(\left( \mathsf {Enc}(pk,\cdot ),\mathsf {Test}_2(pk,\cdot )\right) \). We shall refer to the PKE oracle without the decryption oracle \(\mathsf {Dec}\) as ITRO. We shall use the following imported theorem, which is implicitly proven in [56].

Imported Theorem 1

([56]). There exists a universal polynomial \(p(\cdot ,\cdot )\) such that the following holds. Let \(\pi \) be a fair coin-tossing protocol in the PKE oracle model. Let \(X_0\) and r be the expected output and message complexity. Suppose Alice and Bob ask (at most) m queries. For any \(\epsilon >0\), there exists a fair coin-tossing protocol \(\pi '\) in the ITRO model such that the following holds.

  • Let \(X_0'\) and \(r'\) be the expected output and message complexity of \(\pi '\). Then, \(r'=r\) and \(\left| X_0'-X_0 \right| \le \epsilon \).

  • Parties ask at most \(p(m,1/\epsilon )\) queries in protocol \(\pi '\).

  • For any semi-honest adversary \({\mathcal A} '\) for protocol \(\pi '\), there exists a semi-honest adversary \({\mathcal A} \) for protocol \(\pi \) such that the view of \({\mathcal A} \) is \(\epsilon \)-close to the view of \({\mathcal A} '\), and vice versa. In particular, this implies that if \(\pi '\) is \(\alpha \)-insecure, then \(\pi \) is (at least) \((\alpha -\epsilon )\)-insecure.

The intuition behind this theorem is the following. To avoid the use of the decryption oracle, parties are going to help each other decrypt. In more detail, suppose Alice generates a ciphertext using Bob’s public key. Whenever the probability that Bob invokes the decryption oracle on this ciphertext is non-negligibly high, Alice will directly reveal the message to Bob. Hence, Bob does not need to use the decryption oracle. This does not harm security, as a semi-honest Bob could anyway recover the message by asking polynomially many additional queries. We refer the readers to [56] for more details.

Looking forward, we shall prove that any fair coin-tossing protocol in the ITRO model is \(\varOmega \left( X_0'\left( 1-X_0'\right) /\sqrt{r}\right) \)-insecure. By setting \(\epsilon \) to be \(\frac{X_0\left( 1-X_0\right) }{q\left( n,m,r\right) }\) for some sufficiently large polynomial q, we shall guarantee that

$$\varOmega \left( \frac{X_0'\left( 1-X_0'\right) }{\sqrt{r}}\right) -\epsilon \ = \ \varOmega \left( \frac{X_0\left( 1-X_0\right) }{\sqrt{r}}\right) .$$

This guarantees that the insecurity of the protocol in the PKE oracle model is (qualitatively) identical to the insecurity of the protocol in the ITRO model.

5.4 Extending the Proof of [59] to Image Testable Random Oracle

We first recall the following theorem from [56].

Imported Theorem 2

(Common Information Learner [56]). There exists a universal polynomial \(p(\cdot ,\cdot )\) such that the following holds. Let \(\pi \) be any two-party protocol in the ITRO model, in which both parties make at most m queries. For every threshold \(\epsilon \in (0,1)\), there exists a public algorithm, called the common information learner, that has access to the transcript between Alice and Bob. After receiving each message, the common information learner performs a sequence of queries and obtains the corresponding answers from the ITRO. Let \(M_i\) denote the \(i^{th}\) message of the protocol. Let \(H_i\) denote the sequence of query-answer pairs asked by the common information learner after receiving the message \(M_i\). Let \(T_i\) be the union of the \(i^{th}\) message \(M_i\) and the \(i^{th}\) common information learner message \(H_i\). Let \(V^\mathsf {A} _i\) (resp., \(V^\mathsf {B} _i\)) denote Alice’s (resp., Bob’s) private view immediately after message \(T_i\), which includes her (resp., his) private randomness, private queries, and the public partial transcript. The common information learner guarantees that the following conditions are simultaneously satisfied.

  • Cross-product Property. Fix any round i. Then,

    $$\mathop {\mathbb {E}}_{t_{\le i}}\left[ \mathsf {SD} \left( {\left( V^\mathsf {A} _i,V^\mathsf {B} _i\right) \big |\,t_{\le i}}\;,\;{\left( V^\mathsf {A} _i\big |\,t_{\le i}\right) \times \left( V^\mathsf {B} _i\big |\,t_{\le i}\right) }\right) \right] \ \le \ \epsilon .$$

    Intuitively, it states that, on average, the statistical distance between (1) the joint distribution of Alice’s and Bob’s private views, and (2) the product of the marginal distributions of Alice’s private view and Bob’s private view is small.

  • Efficient Property. The expected number of queries asked by the common information learner is bounded by \(p(m,1/\epsilon )\).

This theorem, combined with the proof of [59], gives the following theorem.

Theorem 7

There exists a universal polynomial \(p(\cdot ,\cdot ,\cdot ,\cdot )\) such that the following holds. Let \(\pi \) be a protocol in the ITRO model, where Alice and Bob make at most m queries. Let \(X_0\) and r be the expected output and message complexity. Then, there exists an (information-theoretic) fail-stop adversary that deviates the expected output of the other party by

$$\varOmega \left( \frac{X_0\left( 1-X_0\right) }{\sqrt{r}}\right) .$$
This attacker asks at most \(p\left( n,m,r,\frac{1}{X_0\left( 1-X_0\right) }\right) \) additional queries.

Below, we briefly discuss why Imported Theorem 2 is sufficient to prove this theorem. The full proof is analogous to [59] and to the proof of our results in the f-hybrid model; hence, we omit it here.

On a high level, the proof goes as follows. We prove Theorem 7 by induction. Conditioned on the first message, the remaining protocol becomes an \((r-1)\)-message protocol, and one can apply the inductive hypothesis. For every possible first message i, we consider whether to abort immediately or defer the attack to the remaining sub-protocol. By invoking Imported Lemma 1, we obtain a potential function that characterizes the insecurity of the protocol with the first message being i. This potential function is of the form

$$\varPhi (x_i,a_i,b_i)=x_i(1-x_i)+(x_i-a_i)^2+(x_i-b_i)^2,$$

where \(x_i\), \(a_i\), and \(b_i\) stand for the expected output, expected Alice defense, and expected Bob defense, respectively. To complete the proof, [59] showed that it suffices to prove the following Jensen’s inequality.

$$\mathop {\mathbb {E}}_{i}\left[ \varPhi (x_i,a_i,b_i)\right] \ \ge \ \varPhi \left( \mathop {\mathbb {E}}_i\left[ x_i\right] ,\mathop {\mathbb {E}}_i\left[ a_i\right] ,\mathop {\mathbb {E}}_i\left[ b_i\right] \right) .$$
To prove this, one can rewrite \(\varPhi (x,a,b)\) as

$$\varPhi (x,a,b)=x+(x-a-b)^2-2ab.$$

We note that x and \((x-a-b)^2\) are convex functions, and hence Jensen’s inequality holds for these terms. As for the term ab, we shall have

$$\mathop {\mathbb {E}}_i\left[ a_i\cdot b_i\right] \ \approx \ \mathop {\mathbb {E}}_i\left[ a_i\right] \cdot \mathop {\mathbb {E}}_i\left[ b_i\right] $$

as long as, conditioned on every possible first message i, Alice’s private view is (almost) independent of Bob’s private view. This is exactly what Imported Theorem 2 guarantees, up to a small error depending on \(\epsilon \), which we shall set to be sufficiently small. Therefore, the proof follows.

6 Open Problems

In this work, we proved that access to ideal invocations of secure function evaluation functionalities like the Kushilevitz function [51] (Fig. 2) does not enable optimal fair coin-tossing. However, we do not resolve the following stronger statement. Suppose there exists an oracle relative to which there exists a secure protocol for the Kushilevitz function. Is optimal fair coin-tossing impossible relative to this oracle?

To appreciate the distinction between these two statements, observe that there may be additional ways to use the “oracle implementing the Kushilevitz function” than merely facilitating the secure computation of the Kushilevitz function. More generally, there may be implicit consequences implied by the existence of such an oracle. For example, the existence of an efficient algorithm for \(\mathsf {SAT}\) not only allows solving \(\mathsf {SAT}\) instances, but also allows efficiently solving any problem in \(\mathsf {PH}\), because the entire \(\mathsf {PH}\) collapses to \(\mathsf {P}\).

This problem is incredibly challenging and is one of the major open problems in this field. The technical tools developed in this paper bring us closer to resolving it.