Abstract
In this work we revisit the SPDZ multiparty computation protocol by Damgård et al. for securely computing a function in the presence of an unbounded number of dishonest parties. The SPDZ protocol is distinguished by its fast performance. A downside of the SPDZ protocol is that a single dishonest party can force the computation to fail, meaning that the honest parties have to abort the computation without learning the outcome, whereas the cheating party may actually learn it. Furthermore, the dishonest party can launch such an attack without being identified as the cheater. This is a serious obstacle for practical deployment: there are various reasons why a party may want the computation to fail, and without cheater detection there is little incentive for such a party not to cheat. As such, in many cases, the protocol will actually fail to do its job.
In this work, we enhance the SPDZ protocol to allow for cheater detection: a dishonest party that forces the protocol to fail will be identified as the cheater. As a consequence, in typical real-life scenarios, parties will have little incentive to cheat, and if cheating still takes place, the cheater can be identified and discarded, and the computation can possibly be re-done until it succeeds.
The challenge lies in adding this cheater-detection feature to the original protocol without significantly increasing its complexity. If no cheating takes place, our new protocol is as efficient as the original SPDZ protocol, which has no cheater detection. If cheating does take place, there may be some additional overhead, though it remains reasonable in size; and since the cheater knows he will be caught, cheating is actually unlikely to occur in typical real-life scenarios.
G. Spini—Supported by the Algant-Doc doctoral program, www.algant.eu.
Notes
1. One has to be careful with this “solution” though: collaborating dishonest parties that remained passive during the first run may now adjust their inputs, given that they have learned the output from the first (failed) run.
2. But every player that is identified by an honest player to be a cheater is a cheater; thus, this case can only occur if there is more than one cheater.
3. We emphasize that, by definition, these private and public openings do not involve any checking of the correctness of z by means of its tag; this will have to be done on top.
4. Note that there is no issue of \(\varepsilon \) being incorrect since any \(\varepsilon \) corresponds to a possible input for \(P_i\).
5. Here and below, when we make information-theoretic statements, we understand v to not include the encryptions/commitments of the honest parties’ shares etc. that were produced during the preprocessing phase. Adding these elements to the adversary’s view of course renders the information-theoretic statements invalid, but has a negligible effect with respect to a computationally bounded adversary.
6. Recall that dishonest \(P_k\) may send different values for \(\tilde{z}^{(i)}\) to different players.
7. Note that we treat broadcast as a given primitive here; implementing it using point-to-point communication and, say, digital signatures causes some (communication and computation) overhead, but this overhead is independent of the circuit size.
8. The actual cost of these cryptographic operations depends on how the commitment scheme is implemented.
9. In addition, we have to perform real broadcasts, whereas in the original SPDZ protocol without cheater detection it is good enough to do a simple consistency check and abort as soon as there is an inconsistency.
10. For the same reason, we omit here the fact that the view also contains random sharings of \(x-a\), \(r-b\) and \(\pi \).
References
Baum, C., Orsini, E., Scholl, P.: Efficient secure multiparty computation with identifiable abort. IACR Cryptology ePrint Archive 2016:187 (2016)
Ben-Or, M., Goldwasser, S., Wigderson, A.: Completeness theorems for non-cryptographic fault-tolerant distributed computation. In: STOC 1988, pp. 1–10. ACM (1988)
Bendlin, R., Damgård, I., Orlandi, C., Zakarias, S.: Semi-homomorphic encryption and multiparty computation. In: Paterson, K.G. (ed.) EUROCRYPT 2011. LNCS, vol. 6632, pp. 169–188. Springer, Heidelberg (2011). doi:10.1007/978-3-642-20465-4_11
Canetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: FOCS 2001, pp. 136–145. IEEE Computer Society (2001)
Chaum, D., Crépeau, C., Damgård, I.: Multiparty unconditionally secure protocols. In: STOC 1988, pp. 11–19. ACM (1988)
Damgård, I., Keller, M., Larraia, E., Pastro, V., Scholl, P., Smart, N.P.: Practical covertly secure MPC for dishonest majority – or: breaking the SPDZ limits. In: Crampton, J., Jajodia, S., Mayes, K. (eds.) ESORICS 2013. LNCS, vol. 8134, pp. 1–18. Springer, Heidelberg (2013). doi:10.1007/978-3-642-40203-6_1
Damgård, I., Pastro, V., Smart, N., Zakarias, S.: Multiparty computation from somewhat homomorphic encryption. In: Safavi-Naini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417, pp. 643–662. Springer, Heidelberg (2012). doi:10.1007/978-3-642-32009-5_38
Goldreich, O., Micali, S., Wigderson, A.: How to play any mental game or a completeness theorem for protocols with honest majority. In: STOC 1987, pp. 218–229. ACM (1987)
Ishai, Y., Ostrovsky, R., Zikas, V.: Secure multi-party computation with identifiable abort. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014. LNCS, vol. 8617, pp. 369–386. Springer, Heidelberg (2014). doi:10.1007/978-3-662-44381-1_21
Yao, A.C.: Protocols for secure computations. In: FOCS, pp. 160–164. IEEE Computer Society (1982)
Appendices
A The Protocol BlockCheck in Detail
We now study the sub-protocol BlockCheck in detail. We first fix some notation that will be used throughout this section: t denotes a positive integer, and we assume that t multiplication opening values \(\langle z^{(1)}\rangle ,\cdots ,\langle z^{(t)}\rangle \) have been publicly opened via a king player \(P_k\). For each shared value \(\langle z^{(h)}\rangle \),
- each player \(P_j\) has sent \(z^{(h)}_j\) to \(P_k\);
- \(\tilde{z}^{(h)}_j\) denotes the value received by \(P_k\) from \(P_j\) (so if \(P_j\) is honest, \(\tilde{z}^{(h)}_j=z^{(h)}_j\));
- \(P_k\) has computed and sent to each \(P_j\) the value \(z^{(h)}\);
- \(\tilde{z}^{(h)}(j)\) denotes the value received by each \(P_j\) from \(P_k\) (so if \(P_k\) is honest, \(\tilde{z}^{(h)}(j)=z^{(h)}\)).
The goal of BlockCheck is to detect errors in this process; as we have seen, the first step of the check consists in computing a (quasi-) random linear combination of the values to be checked. This is performed by generating a seed via the subroutine Rand, and then using the powers of the seed as coefficients of the linear combination. We first define Rand, which assumes that players have access to a commitment scheme (as in standard SPDZ):
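As an informal illustration only, the following Python sketch shows the commit-then-open coin flip behind Rand and how the resulting seed is turned into the power coefficients of the linear combination; the hash-based commitment and the function names (commit, verify, rand, combine) are our own stand-ins, not the scheme used in SPDZ itself.

```python
import hashlib, secrets

q = 2**61 - 1  # example prime, playing the role of the field size q

def commit(value: int) -> tuple:
    """Hash-based commitment; a stand-in for the commitment scheme assumed by Rand."""
    opening = secrets.token_bytes(32)
    digest = hashlib.sha256(opening + value.to_bytes(16, "big")).digest()
    return digest, opening

def verify(digest: bytes, value: int, opening: bytes) -> bool:
    return digest == hashlib.sha256(opening + value.to_bytes(16, "big")).digest()

def rand(n_players: int) -> int:
    """Rand: every player commits to a random field element, all commitments are
    opened, and the seed is the sum of the contributions."""
    contributions = [secrets.randbelow(q) for _ in range(n_players)]
    commitments = [commit(c) for c in contributions]                  # broadcast
    assert all(verify(d, c, o) for (d, o), c in zip(commitments, contributions))
    return sum(contributions) % q

def combine(values: list, seed: int) -> int:
    """Linear combination with coefficients e, e^2, ..., e^t (powers of the seed)."""
    return sum(pow(seed, h, q) * z for h, z in enumerate(values, start=1)) % q

e = rand(n_players=5)
print(combine([3, 141, 59, 26], e))   # the combined value z to be checked
```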
We now show that any error that occurred during the opening of the values \(\langle z^{(1)}\rangle ,\cdots ,\langle z^{(t)}\rangle \) will affect their linear combination as well (with high probability); the proof is a standard argument, and is omitted here. We refer to the full version of the paper for the details.
Lemma 1
Let e be a seed generated by Rand; consider the following linear combination with coefficients given by the powers of e:
$$\begin{aligned} z := \sum _{h=1}^{t} e^h\, z^{(h)}, \qquad \tilde{z}(j) := \sum _{h=1}^{t} e^h\, \tilde{z}^{(h)}(j) \quad \text {for each player } P_j. \end{aligned}$$
Assume that for a given index \(h\in \{1,\cdots ,t\}\) the value received by a given player \(P_j\) is incorrect, i.e. \(\tilde{z}^{(h)}(j)\ne z^{(h)}\); then \(\tilde{z}(j)\ne z\) except with probability t / q.
Similarly, if the values received by two players \(P_j\) and \(P_i\) for an index h are different (i.e. \(\tilde{z}^{(h)}(j)\ne \tilde{z}^{(h)}(i)\)), then the same will hold for the corresponding linear combinations, i.e. \(\tilde{z}(j)\ne \tilde{z}(i)\) except with probability t / q.
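The t / q bound can also be checked by brute force in a toy field (an experiment of ours, not part of the protocol): the seeds for which a fixed, non-zero error vector cancels out in the combination are exactly the roots of a non-zero polynomial of degree at most t.

```python
q, t = 101, 4                       # tiny prime so all seeds can be enumerated
z       = [10, 20, 30, 40]          # correct opened values
z_wrong = [10, 27, 30, 34]          # values with errors in positions 2 and 4

def combine(vals, e):
    return sum(pow(e, h, q) * v for h, v in enumerate(vals, start=1)) % q

# a seed e is "bad" if the error vanishes in the linear combination
bad_seeds = [e for e in range(q) if combine(z, e) == combine(z_wrong, e)]
assert len(bad_seeds) <= t          # at most t roots, i.e. probability at most t/q
print(f"error undetected for {len(bad_seeds)} of {q} seeds (bound t/q = {t}/{q})")
```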
The next step of BlockCheck is the “public opening and conflict” phase; it has already been defined in previous sections, but we will re-write it here in order to make this chapter as self-contained as possible:
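Purely as an illustration of the conflict mechanism (a simplified model of ours, in which the king's announcement and the players' comparisons are collapsed into one function), the following sketch returns the triple \((\mathbf {b},L,\tilde{z})\) of Lemma 2: each player compares the value he privately received from the king with the value the king announces publicly, and accuses if the two differ.

```python
from dataclasses import dataclass

@dataclass
class OpeningResult:
    ok: bool              # b: True for ⊤, False for ⊥
    accusers: list        # L: players who raise a conflict against the king
    value: int            # z~: the value publicly announced by the king

def public_opening(received_from_king: dict, king_announcement: int) -> OpeningResult:
    """Conflict phase: player j accuses if the value z~(j) he received privately
    from P_k differs from the value P_k now announces to everyone."""
    accusers = [j for j, private_value in received_from_king.items()
                if private_value != king_announcement]
    return OpeningResult(ok=not accusers, accusers=accusers, value=king_announcement)

# toy run: the king sent 42 to everyone except player 2, who received 7;
# either the king or player 2 must be dishonest (cf. Lemma 2, soundness)
print(public_opening({0: 42, 1: 42, 2: 7, 3: 42}, king_announcement=42))
```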
The following lemma is a direct consequence of the definition of the algorithm, and states that the public opening routine is correct and sound:
Lemma 2
Let \(\left( \mathbf {b},L,\tilde{z} \right) \) be the output of \(\texttt {PublicOpening}\left( [z] \right) \) with king player \(P_k\); we then have the following properties:
- (correctness) if \(\langle z \rangle \) has been correctly reconstructed and players follow the instructions of the protocol, then \(\mathbf {b}=\top \) and \(L=\emptyset \).
- (soundness) if \(\tilde{z}(j)\ne \tilde{z}(i)\) for some honest players \(P_j\) and \(P_i\), then \(\mathbf {b}=\bot \) and \(L\ne \emptyset \). Furthermore, in this case either \(P_k\) or all players in \(L\) are dishonest.
The last step consists in checking the tags of the value \(\langle z \rangle =([z],[\gamma (z)])\); as we have previously discussed, this is performed by using the subroutine ZeroTest to check that the value \([\gamma (z)]-\tilde{z}[\alpha ]\) opens to 0.
As hinted in Sect. 3.2, we need to be careful when checking the tags via ZeroTest, as this can increase the adversary’s guessing probability of \(\alpha \). We introduce the following definition to model the information on \(\alpha \) possessed by the adversary:
Definition 1
Given a distribution p(x, v), we say that the distribution of x given v is a list of size m if there exists a (conditional) distribution \(p(\ell |v)\), where the range of \(\ell \) consists of lists of m elements in the range of x, such that the following two properties hold for the joint distribution \(p(x,v,\ell ):= p(x,v)\cdot p(\ell |v)\):
- (I) \(p(x\in \ell )\le \max _{\hat{\ell }\in \text {Im}(\ell )}p(x\in \hat{\ell })\);
- (II) \(p(x|v=\hat{v}, \ell =\hat{\ell }, x\notin \hat{\ell })=p(x|x\notin \hat{\ell })\) for every \(\hat{v}\), \(\hat{\ell }\) such that the formula is well-defined.
In a nutshell, we use the above definition to formalize the following situation: let v denote the adversary’s view and \(x:=\alpha \); assume that the distribution of \(\alpha \) given v is a list of size m. This means that the adversary has tried to guess the value of \(\alpha \) up to m times, learning after each guess whether it was correct or not.
We now state the basic properties of ZeroTest, which will in turn imply the desired properties of the tag check; we assume that ZeroTest outputs a boolean value \(\mathbf {b}\in \{\top ,\bot \}\), marking whether the input opens to zero or not, and some extra data that will be omitted in the following lemma.
Lemma 3
Let \(\mathbf {b}\) be the output of \(\texttt {ZeroTest}\left( [x] \right) \); we then have the following properties:
- (correctness): if \(x=0\) and players follow the instructions of the protocol, then \(\mathbf {b}=\top \) with probability 1.
- (soundness): consider the joint distribution \(p(x,v_0)\), where \(v_0\) denotes the adversary’s view before the execution of ZeroTest. Then
$$\begin{aligned} p(\mathbf {b}=\top ) \le 1/q+p_{\texttt {guess}}(x|v_0). \end{aligned}$$
Furthermore, if \(x=0\) but \(\mathbf {b}=\bot \), then a dishonest player has broadcast an incorrect version of a value to which he is committed by means of a linear combination of the commitments produced in the preprocessing phase.
- (privacy): Assume that x is uniformly distributed and that the distribution of x given \(v_0\) is a list of size \(m_0\). Then after the execution of \(\texttt {ZeroTest}([x])\), the distribution of x given v is a list of guesses of size at most \(m:=m_0+1\), where v denotes the adversary’s view after the execution of \(\texttt {ZeroTest}\).
Now that we have fixed the notation for the subroutines, we can state the definition of BlockCheck in a more formal way:
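As a bird's-eye view only, the following toy, non-distributed rendering (our own simplification: sharings, commitments and the distributed tag are collapsed into plain field elements held in the clear) shows how the pieces of BlockCheck fit together: seed generation, the power-based linear combination, the consistency comparison among players, and the tag check that in the real protocol is performed via ZeroTest.

```python
import secrets

q = 2**61 - 1
alpha = secrets.randbelow(q)                    # toy stand-in for the global MAC key

def block_check(true_values, announced):
    """announced[j][h] plays the role of z~^{(h)}(j), the h-th value player j
    received from the king; true_values[h] is the correct z^{(h)}."""
    e = secrets.randbelow(q)                                         # Rand
    def combine(vals):
        return sum(pow(e, h, q) * v for h, v in enumerate(vals, start=1)) % q
    z = combine(true_values)
    per_player = [combine(vals) for vals in announced]               # z~(j)
    if len(set(per_player)) > 1:                                     # conflict phase
        return "Fail with Conflict"
    z_tilde = per_player[0]
    # tag check: gamma(z) - z~ * alpha must open to 0 (done via ZeroTest in the protocol)
    if (alpha * z - z_tilde * alpha) % q != 0:
        return "Fail with Agreement"
    return "Success"

vals = [5, 8, 13]
print(block_check(vals, announced=[vals] * 4))                            # Success
print(block_check(vals, announced=[[5, 8, 13], [5, 9, 13], [5, 8, 13]]))  # Fail with Conflict (w.h.p.)
print(block_check(vals, announced=[[5, 9, 13]] * 3))                      # Fail with Agreement (w.h.p.)
```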
We can now prove the properties of BlockCheck claimed in Sect. 3.3; we omit the proof here, as it can be easily derived from the definition of BlockCheck.
Proposition 5
BlockCheck satisfies the following:
Correctness of BlockCheck: if all players behave honestly and hence all \(\tilde{z}^{(j)}\) are correct and consistently announced by \(P_k\), then BlockCheck outputs “Success” with probability 1.
Soundness of BlockCheck: if at least one of the \(\tilde{z}^{(j)}\) is incorrect, i.e. \(\ne z^{(j)}\), or inconsistently announced by \(P_k\), then the following holds except with probability at most
where v is the adversary’s view before the execution of BlockCheck: BlockCheck outputs “Fail”; furthermore, if it outputs “Fail with Conflict”, then either the king player \(P_k\) or all of the accusing players are dishonest (or both), and if it outputs “Fail with Agreement”, then all \(\tilde{z}^{(j)}\) have been consistently announced by \(P_k\), and a dishonest player has broadcast, as part of BlockCheck, an incorrect version of a value to which he is committed by means of a linear combination (depending on the \(\tilde{z}^{(j)}\)’s) of the commitments produced in the preprocessing phase.
Finally, we can now prove the bound on the adversary’s guessing probability of the global key \(\alpha \):
Proposition 6
Throughout the entire protocol, the adversary’s guessing probability of \(\alpha \) is bounded by
$$\begin{aligned} \frac{2n}{q}+\frac{1}{q-2n}. \end{aligned}$$
Proof
Clearly, the adversary can increase his guessing probability only during the execution of ZeroTest; this, by definition of BlockCheck, is executed only when the value \(\tilde{z}\) is consistent among players, so that its input is equal to \([x]:=[\gamma (z)]-\tilde{z}[\alpha ]=(z-\tilde{z})[\alpha ]\). Hence we can assume as a worst-case scenario that \(z\ne \tilde{z}\), so that the adversary’s guessing probabilities of \(\alpha \) and of x coincide.
Notice that at the beginning of the computation, the distribution of \(\alpha \) given the adversary’s view is a list of guesses of size 0; hence we can inductively apply Lemma 3, so that during the execution of the protocol the distribution of \(\alpha \) given the adversary’s view is a list of guesses of size at most 2n (recall that BlockCheck, and hence ZeroTest, is executed at most 2n times). Hence according to Definition 1, and given that \(\alpha \) is uniformly distributed, there exists a distribution \(p(\ell |v)\) with the following properties:
- (I) \(p(\alpha \in \ell )\le 2n/q\);
- (II) \(\max _{\hat{\alpha },\hat{\ell }}p(\alpha =\hat{\alpha }|v=\hat{v}, \ell =\hat{\ell }, \alpha \notin \hat{\ell })=1/(q-m)\).
Now from this we can deduce the claimed upper bound on the guessing probability: indeed, by using the law of total probability with the events \((\alpha \in \ell )\) and \((\alpha \notin \ell )\), we obtain
$$\begin{aligned} p(\alpha =\hat{\alpha }) \le p(\alpha \in \ell )+\max _{\hat{v},\hat{\alpha },\hat{\ell }}p(\alpha =\hat{\alpha }|v=\hat{v}, \ell =\hat{\ell }, \alpha \notin \hat{\ell }) \le \frac{2n}{q}+\frac{1}{q-m} \le \frac{2n}{q}+\frac{1}{q-2n}. \end{aligned}$$
\(\square \)
A.1 The Tag Checking in Detail
We discuss in this section the sub-routine ZeroTest, which checks whether a shared value [x] is equal to zero or not. The key point is that we cannot simply open [x]: indeed, in the actual scenario this value will be equal to \([\gamma (z)]-\tilde{z}[\alpha ]\) for some shared value \(\langle z\rangle \); now the adversary could select any value \(\varDelta z\) and let \(\tilde{z}=z+\varDelta z\), so opening \([\gamma (z)]-\tilde{z}[\alpha ]=-\varDelta z\cdot [\alpha ]\) would actually let the adversary learn the global key \(\alpha \). This is not a problem in the original SPDZ protocol, since it aborts if the value does not open to 0, but it is a problem for our protocol, which carries on even if the result is not zero. To avoid this, we will perform a multiplication of [x] with a random shared value:
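To make the idea concrete, here is a minimal single-process sketch of the masking step (our own simplification: MACs, commitments and the adversarial deviations \(\tilde{r},\tilde{x},\tilde{y}\) analysed below are omitted): [x] is multiplied by a fresh random [r] via a preprocessed Beaver triple, and only the product is opened, so that a non-zero x reveals nothing beyond the fact that it is non-zero.

```python
import secrets

q = 2**61 - 1

def share(value, n=4):
    """Additive sharing over F_q."""
    parts = [secrets.randbelow(q) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % q]

def open_shares(shares):
    return sum(shares) % q

def zero_test(x_shares, n=4):
    """Multiply [x] by a random [r] using a Beaver triple (a, b, ab) and open
    the product: x * r opens to 0 iff x = 0, except with probability 1/q (r = 0)."""
    r = secrets.randbelow(q)
    a, b = secrets.randbelow(q), secrets.randbelow(q)                 # preprocessed triple
    a_sh, b_sh, ab_sh = share(a, n), share(b, n), share((a * b) % q, n)
    eps = (open_shares(x_shares) - a) % q                             # opened x - a
    delta = (r - b) % q                                               # opened r - b
    # [x*r] = [ab] + eps*[b] + delta*[a] + eps*delta   (Beaver recombination)
    prod = [(ab_sh[i] + eps * b_sh[i] + delta * a_sh[i]) % q for i in range(n)]
    prod[0] = (prod[0] + eps * delta) % q
    return open_shares(prod) == 0

print(zero_test(share(0)))        # True
print(zero_test(share(12345)))    # False, except with probability 1/q
```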
From now on, we will adopt a slight abuse of notation by writing formulae such as \(\texttt {ZeroTest}([x])=\mathbf b \), i.e. considering only the boolean value among the outputs of the protocol. We first prove that the subprotocol is correct and sound:
Lemma 4
ZeroTest satisfies the following properties:
- Correctness: if players follow the instructions of the protocol, \(\texttt {ZeroTest}([0])=\top \) with probability 1.
- Soundness: consider the joint distribution \(p(x,v_0)\), where \(v_0\) denotes the adversary’s view before the execution of ZeroTest; then
$$\begin{aligned} p(\texttt {ZeroTest}([x])=\top )\le 1/q+p_{\texttt {guess}}(x|v_0). \end{aligned}$$
Furthermore, if \(x=0\) but \(\mathbf {b}=\bot \), then a dishonest player has broadcast an incorrect version of a value to which he is committed by means of a linear combination of the commitments produced in the preprocessing phase.
Proof
- Correctness: trivially, ZeroTest will open \([r\cdot 0]=[0]\).
- Soundness: by definition of the protocol, the output of \(\texttt {ZeroTest}([x])\) is equal to \(\top \) if and only if \(\mathbf {b}=0\), where
$$\begin{aligned} \mathbf {b}:= (r-\tilde{r})(x-\tilde{x})-\tilde{y}. \end{aligned}$$
Here, r is uniformly distributed and independent of \(x, \tilde{x},\tilde{r}\) and \(\tilde{y}\), while \(\tilde{r}, \tilde{x}\) and \(\tilde{y}\) are chosen by the adversary, and are thus determined by his current view (since we can assume without loss of generality that the adversary is deterministic).
Now notice that for any \(\hat{v}_0\) we have the following inequality:
$$\begin{aligned} p\left( (r-\tilde{r})(x-\tilde{x})-\tilde{y}=0 \mid v_0=\hat{v}_0\right) \le \frac{1}{q} + p\left( x=\tilde{x}(\hat{v}_0) \mid v_0=\hat{v}_0\right) ; \end{aligned}$$
indeed, conditioned on \(x\ne \tilde{x}(\hat{v}_0)\), the term \((r-\tilde{r})(x-\tilde{x})\) is uniformly distributed (r being uniform and independent of the other variables), and hence equals \(\tilde{y}\) with probability 1 / q.
In turn, by applying the law of total probability to \(p((r-\tilde{r})(x-\tilde{x})-\tilde{y}=0)\) with the events \((v_0=\hat{v}_0)\), we obtain the following inequality:
$$\begin{aligned} p(\texttt {ZeroTest}([x])=\top )\le 1/q+p_{\texttt {guess}}(x|v_0). \end{aligned}$$
Finally, if \(x=0\) but \(\mathbf {b}=\bot \), then necessarily a player has communicated some incorrect values during ZeroTest; since all communications are performed by broadcast, he is committed to the incorrect value, so that the claim is proved. \(\square \)
Finally, we need to discuss the privacy of ZeroTest; we first remark that Definition 1, formalizing our privacy notion, yields the following consequences:
Remark 1
Assume that x is uniformly distributed and that the distribution of x given v is a list of size m; we then have the following properties:
- (i) \(p(x\in \ell )\le m/q\) (immediate consequence of (I));
- (ii) \(p(x=y|x\notin \ell )\le 1/(q-m)\) for any \(y=y(v)\) (consequence of (II) via the law of total probability).
Furthermore, let r be a random variable independent of both v and x, and set \(v':=(v,r)\). Then it trivially holds that if p(x, v) satisfies the above definition, then so does \(p(x,v')\).
Lemma 5
Given a distribution \(p(x,v_0)\), where \(v_0\) denotes the adversary’s view, assume that x is uniformly distributed and that the distribution of x given \(v_0\) is a list of size \(m_0\). Then after the execution of \(\texttt {ZeroTest}([x])\), the distribution of x given v is a list of guesses of size at most \(m:=m_0+1\), where v denotes the adversary’s view after the execution of \(\texttt {ZeroTest}\).
Proof
By looking at the instructions to compute and open [xr] to \(P_i\), we see that the adversary can learn the following values (plus random sharings of them): \(\gamma :=x-a\), \(\delta :=r-b\) and \(\pi :=(r-\tilde{r})(x-\tilde{x})\), where a, b and r are jointly uniformly distributed and independent of each other and of \(v_0,x,\tilde{x},\tilde{r}\).
\(\tilde{x}\) and \(\tilde{r}\) are chosen by the adversary, and are thus determined by his view (since we assume without loss of generality that the adversary is deterministic).
Now given the adversary’s view \(v_0\) before the execution of ZeroTest, the adversary’s current view is equal to \((v_0, \gamma ,\delta , \pi )\); notice that a and b are (jointly) random and independent of x, r, \(v_0\) and \(\pi \), and thus so are \(\gamma =x-a\) and \(\delta =r-b\), so that we may restrict the view to \(v:=(v_0, \pi )\) (cf. Remark 1 and footnote 10).
Now by inductive hypothesis, there exists a conditional distribution \(p(\ell _0|v_0)\) such that properties I and II hold for \(p(x,v_0,\ell _0):=p(x,v_0)\cdot p(\ell _0|v_0)\); in a natural way, we define the new distribution by setting
$$\begin{aligned} \ell := \ell _0\cup \{\tilde{x}(v_0)\}, \end{aligned}$$
i.e. \(\ell \) is obtained by appending the adversary’s guess \(\tilde{x}(v_0)\) to the list \(\ell _0\).
We now prove that properties I and II hold for \(p(x,v,\ell )\): first of all, notice that \(p\left( x\in \ell \right) = p\left( x\in \ell _0\right) +p\left( x=\tilde{x}(v_0)| x\notin \ell _0\right) \cdot p\left( x\notin \ell _0\right) \). Hence thanks to Remark 1 we have that
Hence property I holds; we can thus focus on property II. As a first step, notice that
since if \(x\notin \hat{\ell }\), then in particular \(x\ne \tilde{x}(\hat{v}_0)\); hence we can re-write the condition \(\pi =\hat{\pi }\) as \(r=\tilde{r}(\hat{v}_0)+\hat{\pi }/(x-\tilde{x}(\hat{v}_0))\), which can be removed from the conditioning because r is independent of x, \(v_0\) and \(\ell _0\). We thus get the following equality:
which means that property II holds. \(\square \)
A.2 The Complexity of the Block Check
We briefly discuss in this section the complexity of BlockCheck, which was presented in Sect. 3.4. First notice that since each block contains at most |C| / n gates, there are at most 2|C| / n multiplication opening values to be checked in each block; we thus get the following complexity:
- 4n commitments need to be prepared, broadcast and opened (n to produce a random seed via Rand, and 3n during ZeroTest);
- the computational complexity of a block check is in \(O\left( |C|+n^2 \right) \) field operations (excluding computation on commitments), essentially given by the cost of computing the linear combination of the values to be checked;
- finally, the block check requires broadcasting 3n field elements for the dispute phase of PublicOpening. Notice that we do not use point-to-point communication.
B The Commitment Check
We now discuss how to authenticate shares of a value; as remarked in Sect. 2, for every value z that is \([\cdot ]\)-shared in the pre-processing phase, each player \(P_i\) holds randomness \(\rho _{z_i}\), and the value \(e_{z_i}:=\texttt {Enc}(z_i,\rho _{z_i})\) has been broadcast. We give here the details on how to use these encryptions as a commitment scheme:
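The following sketch illustrates the mechanism under an assumed hash-based stand-in for Enc (the real scheme is whatever the SPDZ preprocessing uses, and the function names are ours): to authenticate his share, \(P_i\) reveals the share together with the randomness, and everyone re-computes the encryption and compares it with the broadcast \(e_{z_i}\).

```python
import hashlib, secrets

def enc(share: int, randomness: bytes) -> bytes:
    """Stand-in for Enc; any binding commitment/encryption works for the check."""
    return hashlib.sha256(randomness + share.to_bytes(16, "big")).digest()

def encryption_check(e_zi: bytes, claimed_share: int, claimed_randomness: bytes) -> bool:
    """EncryptionCheck for P_i: verify the claimed opening against the broadcast e_{z_i}."""
    return enc(claimed_share, claimed_randomness) == e_zi

# preprocessing: P_i's share z_i is committed to by broadcasting e_{z_i}
z_i, rho_i = 271828, secrets.token_bytes(32)
e_zi = enc(z_i, rho_i)

print(encryption_check(e_zi, z_i, rho_i))        # True: honest opening
print(encryption_check(e_zi, z_i + 1, rho_i))    # False: a wrong share cannot verify
```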
Trivially, if \(P_i\) behaves honestly, EncryptionCheck will output \(\top \); on the other hand, if the share \(\tilde{z}_i\) he submitted is not correct, then the output will be \(\bot \) since \(\texttt {Enc}\left( z_i, \rho _i\right) \ne \texttt {Enc}\left( \tilde{z}_i, \tilde{\rho }_i\right) \) for any possible randomness \(\tilde{\rho }_i\).
We are now ready to define the protocol CommitCheck, that simply applies EncryptionCheck to all shares submitted during the multiplication of two values:
The following proposition summarizes the security property of CommitCheck; we omit the proof, as it can be easily deduced from the definition of the protocol.
Proposition 7
Under the binding property of the underlying commitment scheme, if a dishonest player has broadcast as part of BlockCheck an incorrect value, then this player will be publicly identified by CommitCheck. Furthermore, no honest player will incorrectly be identified as being dishonest.
C Checking the Input and Output of the Computation
We show in this section how to secure the input-sharing and output-reconstruction phases; we use the main ideas and techniques of the multiplication check.
We first describe in more detail how the input sharing is performed in the original SPDZ protocol: each shared value \(\langle r\rangle \) produced in the preprocessing phase comes with another type of sharing, denoted by
$$\begin{aligned} \llbracket r \rrbracket := \left( r_i,\ \beta _i,\ \gamma (r)^i_1,\cdots ,\gamma (r)^i_n \right) _{i=1,\cdots ,n}, \end{aligned}$$
where each player \(P_i\) holds \(r_i,\beta _i,\gamma (r)^i_1,\cdots ,\gamma (r)^i_n\) and \(r\beta _i=\sum _j \gamma (r)^j_i\) for any i. Now in classical SPDZ, whenever a player \(P_i\) holds input x, a random shared value \(\llbracket r \rrbracket \) is selected; then each player \(P_j\) communicates \(r_j\) and \(\gamma (r)^j_i\) to \(P_i\), who computes r and checks that \(r\beta _i=\sum _j \gamma (r)^j_i\); \(P_i\) can then broadcast either an error message or the value \(\varepsilon := x-r\). The input is then shared as \(\langle r \rangle + \varepsilon \).
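For reference, the classical input-sharing step just described can be summarised in a single-process Python sketch (our own simplification: the \(\langle \cdot \rangle \)-MACs under the global key \(\alpha \) are omitted, and all function names are ours):

```python
import secrets

q = 2**61 - 1
n = 4

def additive_share(value):
    parts = [secrets.randbelow(q) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % q]

def classical_input_share(x):
    """P_i holds input x. Preprocessing supplies [[r]]: shares r_j of a random r,
    P_i's personal key beta_i, and shares gamma_j of the MAC r * beta_i."""
    r = secrets.randbelow(q)
    r_shares = additive_share(r)
    beta_i = secrets.randbelow(q)
    gamma_shares = additive_share((r * beta_i) % q)
    # every P_j sends r_j and gamma_j to P_i, who reconstructs r and checks the MAC
    r_rec = sum(r_shares) % q
    assert (r_rec * beta_i) % q == sum(gamma_shares) % q, "P_i broadcasts an error message"
    eps = (x - r_rec) % q                         # broadcast by P_i
    # the sharing of x is <r> + eps: add the public eps to one share of r
    x_shares = list(r_shares)
    x_shares[0] = (x_shares[0] + eps) % q
    return x_shares

print(sum(classical_input_share(424242)) % q)     # 424242
```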
We add to this protocol our system of accusations and, as a last resort, the commitment checks:
The following proposition follows from the definition of InputShare and proves that the protocol is secure:
Proposition 8
Let x be an input held by player \(P_i\); InputShare satisfies the following properties:
- Correctness: if players behave honestly, \(\texttt {InputShare}(x)\) produces no accusations and players obtain a \(\langle \cdot \rangle \)-sharing of x.
- Soundness: if a player different from \(P_i\) behaves dishonestly during the execution of InputShare, then except with probability 1 / q he will be deemed suspect or dishonest.
- Privacy: if \(P_i\) is honest, the adversary’s guessing probability of x is equal to \(\max _{\hat{x}}p(x=\hat{x})\).
We now introduce an output-checking phase which makes use of the protocols introduced in the previous sections: it simply reconstructs the output, then checks its tag with ZeroTest and, if an error is detected, requires players to authenticate their shares via CommitCheck.
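In outline, and again in a single-process toy rendering of ours (the global key is held in the clear, and CommitCheck is a placeholder callback), OutputCheck proceeds as follows:

```python
import secrets

q = 2**61 - 1
alpha = secrets.randbelow(q)                      # global MAC key (toy: in the clear)

def share(value, n=4):
    parts = [secrets.randbelow(q) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % q]

def output_check(z_shares, tag_shares, commit_check):
    """Reconstruct the announced output z~, verify that gamma(z) - z~ * alpha
    opens to 0 (via ZeroTest in the real protocol), and fall back to CommitCheck
    on failure to identify who submitted bad shares."""
    z_tilde = sum(z_shares) % q
    if (sum(tag_shares) - z_tilde * alpha) % q == 0:
        return ("ok", z_tilde)
    return ("accuse", commit_check())

z = 99
result = output_check(share(z), share((alpha * z) % q), commit_check=lambda: ["cheaters"])
print(result)                                     # ('ok', 99)
```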
The following proposition proves that the protocol is correct and sound; we omit the proof here, as it can be easily obtained from the definition of OutputCheck, and refer to the full version of the paper for the details.
Proposition 9
OutputCheck satisfies the following properties:
- Correctness: if players submit the correct shares of [z] and behave honestly during ZeroTest, then OutputCheck will output the correct value z;
- Security: assume that \(\tilde{z}\ne z\) or that the adversary behaved dishonestly in the ZeroTest phase; then OutputCheck will produce an accusation against a dishonest player except with probability \(1/q+p_v\), where \(p_v\) is the adversary’s guessing probability of \(\alpha \) given his view v.
In the concrete setting, this error probability will be equal to \(1/q + 1/(q-2n)\).