
All papers in 1998 (26 results)

Last updated:  1998-12-24
Comparing Entropies in Statistical Zero-Knowledge with Applications to the Structure of SZK
Uncategorized
Oded Goldreich, Salil Vadhan
We consider the following (promise) problem, denoted ED (for Entropy Difference): The input is a pair of circuits, and YES instances (resp., NO instances) are such pairs in which the first (resp., second) circuit generates a distribution with noticeably higher entropy. On one hand, we show that any language having an (honest-verifier) statistical zero-knowledge proof is Karp-reducible to ED. On the other hand, we present a public-coin (honest-verifier) statistical zero-knowledge proof for ED. Thus, we obtain an alternative proof of Okamoto's result that HVSZK (i.e., Honest-Verifier Statistical Zero-Knowledge) equals public-coin HVSZK. The new proof is much simpler than the original one. The above also yields a trivial proof that HVSZK is closed under complementation (since ED easily reduces to its complement). Among the new results obtained is an equivalence of a weak notion of statistical zero-knowledge to the standard one.
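As a small illustration of the ED promise problem, here is a minimal Python sketch; it is an assumption for illustration only, not the paper's construction: a "circuit" is modelled as any function of a short uniformly random input, its output entropy is computed by exhaustive enumeration, and a one-bit gap stands in for "noticeably higher".

from collections import Counter
from itertools import product
from math import log2

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def output_distribution(circuit, n_inputs):
    # run the circuit on every n_inputs-bit string; inputs are uniform
    counts = Counter(circuit(bits) for bits in product((0, 1), repeat=n_inputs))
    return {y: c / 2 ** n_inputs for y, c in counts.items()}

# toy "circuits": C0 outputs its 4 input bits (entropy 4), C1 only their XOR (entropy 1)
C0 = lambda bits: bits
C1 = lambda bits: bits[0] ^ bits[1] ^ bits[2] ^ bits[3]

h0, h1 = entropy(output_distribution(C0, 4)), entropy(output_distribution(C1, 4))
print(h0, h1)   # 4.0 1.0
if h0 >= h1 + 1:
    print("YES instance of ED")   # first circuit has noticeably higher entropy
elif h1 >= h0 + 1:
    print("NO instance of ED")
else:
    print("outside the promise")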
Last updated:  1998-12-10
Secure Distributed Storage and Retrieval
Uncategorized
Juan A. Garay, Rosario Gennaro, Charanjit Jutla, Tal Rabin
In his well-known Information Dispersal Algorithm (IDA) paper, Rabin showed a way to distribute information in n pieces among n servers in such a way that recovery of the information is possible in the presence of up to t inactive servers. An enhanced mechanism to enable reconstruction in the presence of malicious servers, which can intentionally modify their pieces of the information, was later presented by Krawczyk. Yet, these methods assume that the malicious faults occur only at reconstruction time.

In this paper we address the more general problem of secure storage and retrieval of information (SSRI), and guarantee that the process of storing the information is also correct even when some of the servers fail. Our protocols achieve this while maintaining the (asymptotic) space optimality of the above methods.

We also consider SSRI with the added requirement of confidentiality, by which no party except for the rightful owner of the information is able to learn anything about it. This is achieved through novel applications of cryptographic techniques, such as the distributed generation of receipts, distributed key management via threshold cryptography, and ``blinding.''

An interesting byproduct of our scheme is the construction of a secret sharing scheme with shorter share size in the amortized sense. An immediate practical application of our work is a system for the secure deposit of sensitive data. We also extend SSRI to a ``proactive'' setting, where an adversary may corrupt all the servers during the lifetime of the system, but only a fraction during any given time interval.
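For intuition, here is a minimal dispersal sketch in the spirit of Rabin's IDA, under simplifying assumptions of my own (a toy prime field, Lagrange interpolation, and no integrity or confidentiality mechanisms, which are exactly what this paper adds): the message is split into n pieces, each about 1/m of its size, and any m = n - t of them reconstruct it.

P = 257  # toy prime field; real schemes use byte-oriented codes over larger fields

def lagrange_eval(points, x):
    # evaluate, at x, the unique polynomial of degree < len(points) through the given points (mod P)
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P   # pow(., -1, P): Python 3.8+ modular inverse
    return total

def disperse(msg, n):
    # piece x is the evaluation at x of the polynomial interpolating (0, msg[0]), ..., (m-1, msg[m-1])
    return [(x, lagrange_eval(list(enumerate(msg)), x)) for x in range(n)]

def reconstruct(pieces, m):
    return [lagrange_eval(pieces[:m], j) for j in range(m)]

msg = [72, 101, 108, 108, 111]                 # "Hello" as field elements; m = 5
n, t = 8, 3                                    # n servers, up to t of them inactive
pieces = disperse(msg, n)
assert reconstruct(pieces[t:], n - t) == msg   # any n - t = 5 pieces suffice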
Last updated:  1999-02-01
The Disparity between Work and Entropy in Cryptology
Uncategorized
John Pliam
A brief theory of work is developed. In it, the work-factor and guesswork of a random variable are linked to intuitive notions of time complexity in a brute-force attack. Bounds are given for a specific work-factor called the minimum majority. Tight bounds are given for the guesswork in terms of variation distance. Differences between work-factor, guesswork, and the entropy of a random variable are pointed out, calling into question the common misconception that entropy indicates work.
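The gap the paper studies can be seen in a toy calculation (a hypothetical example, not one of the paper's bounds): Shannon entropy and expected guesswork, the average number of guesses made by an optimal brute-force attacker, can move quite differently.

from math import log2

def shannon_entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

def expected_guesswork(probs):
    # an optimal attacker guesses candidates in order of decreasing probability
    return sum((i + 1) * p for i, p in enumerate(sorted(probs, reverse=True)))

N = 1024
uniform = [1 / N] * N
skewed = [0.5] + [0.5 / (N - 1)] * (N - 1)   # one very likely value, the rest equally unlikely

print(shannon_entropy(uniform), expected_guesswork(uniform))  # 10.0 bits, 512.5 guesses
print(shannon_entropy(skewed), expected_guesswork(skewed))    # about 6 bits, yet still 257 guesses
# the entropy drops by about 4 bits, but the expected number of guesses only halves,
# so entropy is a poor proxy for brute-force work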
Last updated:  1998-08-31
Security amplification by composition: The case of doubly-iterated, ideal ciphers
Uncategorized
William Aiello, Mihir Bellare, Giovanni Di Crescenzo, Ramarathnam Venkatesan
We investigate, in the Shannon model, the security of constructions corresponding to double and (two-key) triple DES. That is, we consider F<sub>k1</sub>(F<sub>k2</sub>(.)) and F<sub>k1</sub>(F<sub>k2</sub><sup>-1</sup>(F<sub>k1</sub>(.))) with the component functions being ideal ciphers. This models the resistance of these constructions to ``generic'' attacks like meet-in-the-middle attacks. We obtain the first proof that composition actually increases the security in some meaningful sense. We compute a bound on the probability of breaking the double cipher as a function of the number of computations of the base cipher made, and the number of examples of the composed cipher seen, and show that the success probability is the square of that for a single-key cipher. The same bound holds for the two-key triple cipher. The first bound is tight and shows that meet-in-the-middle is the best possible generic attack against the double cipher.
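The kind of generic attack the bound is measured against can be seen in a toy meet-in-the-middle attack on double encryption (an illustrative sketch only: seeded random permutations over an 8-bit block stand in for the ideal ciphers, and all parameters are deliberately tiny).

import random

BLOCK, NKEYS = 256, 4096   # 8-bit toy block, 12-bit toy key space

def perm(k):
    # a fixed "random-looking" permutation per key, standing in for an ideal cipher
    rng = random.Random(k)
    p = list(range(BLOCK))
    rng.shuffle(p)
    return p

def E(k, x): return perm(k)[x]
def D(k, y): return perm(k).index(y)

k1, k2, P = 1234, 987, 42
C = E(k2, E(k1, P))                      # one known plaintext/ciphertext pair of the double cipher

# meet in the middle: about 2 * NKEYS cipher evaluations instead of NKEYS ** 2
forward = {}
for ka in range(NKEYS):
    forward.setdefault(E(ka, P), []).append(ka)
candidates = [(ka, kb) for kb in range(NKEYS) for ka in forward.get(D(kb, C), [])]
print((k1, k2) in candidates, len(candidates))
# True; with such a tiny block many false candidates survive one pair and
# would be filtered out with a second plaintext/ciphertext pair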
Last updated:  1998-08-12
Insecurity of Quantum Computations
Uncategorized
Hoi-Kwong Lo
It had been widely claimed that quantum mechanics can protect private information during public decisions, for example in so-called two-party secure computation. If this were the case, quantum smart-cards could prevent fake teller machines from learning the PIN (Personal Identification Number) from the customers' input. Although such optimism has been challenged by the recent surprising discovery of the insecurity of the so-called quantum bit commitment, the security of quantum two-party computation itself remains unaddressed. Here I answer this question directly by showing that all <i>one-sided</i> two-party computations (which allow only one of the two parties to learn the result) are necessarily insecure. As corollaries to my results, quantum one-way oblivious password identification and the so-called quantum one-out-of-two oblivious transfer are impossible. I also construct a class of functions that cannot be computed securely in any <i>two-sided</i> two-party computation. Nevertheless, quantum cryptography remains useful in key distribution and can still provide partial security in ``quantum money'' proposed by Wiesner.
Last updated:  1998-06-17
Relations among Notions of Security for Public-Key Encryption Schemes
Uncategorized
Mihir Bellare, Anand Desai, David Pointcheval, Phillip Rogaway
We compare the relative strengths of popular notions of security for public key encryption schemes. We consider the goals of indistinguishability and non-malleability, each under chosen plaintext attack and two kinds of chosen ciphertext attack. For each of the resulting pairs of definitions we prove either an implication (every scheme meeting one notion must meet the other) or a separation (there is a scheme meeting one notion but not the other, assuming the first notion can be met at all). We similarly treat plaintext awareness, a notion of security in the random oracle model. An additional contribution of this paper is a new definition of non-malleability which we believe is simpler than the previous one.
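As a concrete, purely illustrative rendering of the indistinguishability goal, here is a left-or-right game harness in Python with a deliberately weak toy "scheme"; the game structure is standard, but the function names and the toy cipher are assumptions of this sketch, not the paper's formal definitions.

import os, random

def keygen():
    return os.urandom(16)

def encrypt(k, m):
    # toy deterministic one-block "scheme" (XOR with the key) -- intentionally weak
    return bytes(a ^ b for a, b in zip(k, m))

def ind_cpa_advantage(adversary, trials=1000):
    wins = 0
    for _ in range(trials):
        k, b = keygen(), random.randrange(2)
        lr = lambda m0, m1, k=k, b=b: encrypt(k, (m0, m1)[b])   # left-or-right encryption oracle
        wins += adversary(lr) == b
    return wins / trials

def adversary(lr):
    c0 = lr(b"A" * 16, b"A" * 16)   # same ciphertext regardless of the hidden bit
    c1 = lr(b"A" * 16, b"B" * 16)   # equals c0 exactly when the oracle encrypts the left message
    return 0 if c1 == c0 else 1

print(ind_cpa_advantage(adversary))   # 1.0: a deterministic scheme cannot meet indistinguishability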
Last updated:  1998-06-06
Almost All Discrete Log Bits Are Simultaneously Secure
Uncategorized
Claus P. Schnorr
Let G be a finite cyclic group with generator \alpha and with an encoding so that multiplication is computable in polynomial time. We study the security of bits of the discrete log x when given exp<sub>\alpha</sub>(x), assuming that the exponentiation function exp<sub>\alpha</sub>(x) = \alpha<sup>x</sup> is one-way. We reduce the general problem to the case that G has odd order q. If G has odd order q, the security of the least-significant bits of x and of the most-significant bits of the rational number x/q \in [0,1) follows from the work of Peralta [P85] and Long and Wigderson [LW88]. We generalize these bits and study the security of consecutive <i>shift bits</i> lsb(2<sup>-i</sup>x mod q) for i=k+1,...,k+j. When we restrict exp<sub>\alpha</sub> to arguments x such that some sequence of j consecutive shift bits of x is constant (i.e., not depending on x), we call it a 2<sup>-j</sup>-<i>fraction</i> of exp<sub>\alpha</sub>. For groups of odd order q we show that every two 2<sup>-j</sup>-fractions of exp<sub>\alpha</sub> are equally one-way by a polynomial-time transformation: either all of them are one-way or none is. Our <i>key theorem</i> shows that arbitrary j consecutive shift bits of x are simultaneously secure when given exp<sub>\alpha</sub>(x) iff the 2<sup>-j</sup>-fractions of exp<sub>\alpha</sub> are one-way. In particular, this applies to the j least-significant bits of x and to the j most-significant bits of x/q \in [0,1). For one-way exp<sub>\alpha</sub> the individual bits of x are secure when given exp<sub>\alpha</sub>(x) by the method of Hastad and Naslund [HN98]. For groups of even order 2<sup>s</sup>q we show that the j least-significant bits of \lfloor x/2<sup>s</sup>\rfloor, as well as the j most-significant bits of x/q \in [0,1), are simultaneously secure iff the 2<sup>-j</sup>-fractions of exp<sub>\alpha'</sub> are one-way for \alpha' := \alpha<sup>2<sup>s</sup></sup>. We use and extend the models of generic algorithms of Nechaev (1994) and Shoup (1997). We determine the generic complexity of inverting fractions of exp<sub>\alpha</sub> for the case that \alpha has prime order q. As a consequence, arbitrary segments of (1-\varepsilon)\lg q consecutive shift bits of random x are simultaneously secure against generic attacks for any constant \varepsilon > 0. Every generic algorithm using t generic steps (group operations) for distinguishing bit strings of j consecutive shift bits of x from random bit strings has advantage at most O((\lg q)j\sqrt{t} (2<sup>j</sup>/q)<sup>1/4</sup>).
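A tiny numeric illustration of the shift bits (the modulus below is just an arbitrary odd example of mine): since q is odd, 2 is invertible mod q, and lsb(2^{-i} x mod q) is computed by repeated multiplication with the inverse of 2.

q = 2 ** 127 - 1            # any odd modulus works; this one happens to be a (Mersenne) prime
x = 123456789123456789 % q

def shift_bit(x, i, q):
    # lsb(2^{-i} x mod q); "dividing by 2" mod an odd q is multiplication by pow(2, -1, q)
    return (x * pow(pow(2, -1, q), i, q)) % q & 1

print([shift_bit(x, i, q) for i in range(1, 9)])   # the consecutive shift bits for i = 1, ..., 8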
Last updated:  1998-06-14
Many-to-one Trapdoor Functions and their Relation to Public-key Cryptosystems
Uncategorized
Mihir Bellare, Shai Halevi, Amit Sahai, Salil Vadhan
The heart of the task of building public key cryptosystems is viewed as that of ``making trapdoors;'' in fact, public key cryptosystems and trapdoor functions are often discussed as synonymous. How accurate is this view? In this paper we endeavor to get a better understanding of the nature of ``trapdoorness'' and its relation to public key cryptosystems, by broadening the scope of the investigation: we look at general trapdoor functions; that is, functions that are not necessarily injective (i.e., one-to-one). Our first result is somewhat surprising: we show that non-injective trapdoor functions (with super-polynomial pre-image size) can be constructed from any one-way function (and hence it is unlikely that they suffice for public key encryption). On the other hand, we show that trapdoor functions with polynomial pre-image size are sufficient for public key encryption. Together, these two results indicate that the pre-image size is a fundamental parameter of trapdoor functions. We then turn our attention to the converse, asking what kinds of trapdoor functions can be constructed from public key cryptosystems. We take a first step by showing that in the random-oracle model one can construct injective trapdoor functions from any public key cryptosystem.
Last updated:  1999-08-01
Security and Composition of Multi-party Cryptographic Protocols
Uncategorized
Ran Canetti
We present general definitions of security for multiparty cryptographic protocols and show that, using these definitions, security is preserved under a natural composition method. The definitions follow the general paradigm of known definitions; yet some substantial modifications and simplifications are introduced. In particular, `black-box simulation' is no longer required. The composition method is essentially the natural `subroutine substitution' method suggested by Micali and Rogaway. We first present the general definitional approach. Next we consider several settings for multiparty protocols. These include the cases of non-adaptive and adaptive adversaries, as well as the information-theoretic and the computational models.
Last updated:  1998-05-28
Making An Empty Promise With A Quantum Computer (Or, A Brief Review on the Impossibility of Quantum Bit Commitment)
Uncategorized
H. F. Chau, H. -K. Lo
Alice has made a decision in her mind. While she does not want to reveal it to Bob at this moment, she would like to convince Bob that she is committed to this particular decision and that she cannot change it at a later time. Is there a way for Alice to get Bob's trust? Until recently, researchers had believed that the above task can be performed with the help of quantum mechanics, with the security of the quantum scheme resting on the uncertainty principle. Nevertheless, such optimism was recently shattered by Mayers and by us, who found that Alice can always change her mind if she has a quantum computer. Here, we survey this dramatic development and its implications for the security of other quantum cryptographic schemes.
Last updated:  1999-01-01
Quantum Computers Render Quantum Key Distribution Unconditionally Secure Over Arbitrarily Long Distances
Uncategorized
Hoi-Kwong Lo, H. F. Chau
Quantum cryptography has long been claimed to be useful for achieving many tasks that are impossible from the perspective of conventional cryptography. Arguably, the most important problem in quantum cryptography has been a rigorous proof of the security of quantum key distribution, its best-known application. This notoriously hard problem has eluded researchers for years and has become even more important after the recent surprising demonstration of the insecurity of many other quantum cryptographic schemes, including quantum bit commitment. Here, we solve this long-standing problem by showing that, given quantum computers, quantum key distribution over an arbitrarily long distance of a realistic noisy channel can be made unconditionally secure. The novel technique we use is reduction from a quantum scheme to a classical scheme. The security in realistic noisy environments is then proven by using the recent theory of fault-tolerant quantum computation.
Last updated:  1998-05-04
More on Proofs of Knowledge
Uncategorized
Shai Halevi, Silvio Micali
The notion of proofs of knowledge is central to cryptographic protocols, and many definitions for it have been proposed. In this work we explore a different facet of this notion, not addressed by prior definitions. Specifically, prior definitions concentrate on capturing the properties of the verifier, and do not pay much attention to the properties of the prover. Our new definition is strictly stronger than previous ones, and captures new and desirable properties. In particular, it guarantees prover feasibility, that is, it guarantees that the time spent by the prover in a proof of knowledge is comparable to the time it spends in an "extraction" of this knowledge. Our definition also enables one to consider meaningfully the case of a single, specific prover.
Last updated:  1998-04-30
Randomness versus Fault-Tolerance
Uncategorized
Ran Canetti, Eyal Kushilevitz, Rafail Ostrovsky, Adi Rosen
We investigate the relations between two major requirements of multiparty protocols: {\em fault tolerance} (or {\em resilience}) and {\em randomness}. Fault-tolerance is measured in terms of the maximum number of colluding faulty parties, t, that a protocol can withstand and still maintain the privacy of the inputs and the correctness of the outputs (of the honest parties). Randomness is measured in terms of the total number of random bits needed by the parties in order to execute the protocol. Previously, the upper bound on the amount of randomness required by general constructions for securely computing any non-trivial function f was polynomial both in $n$, the total number of parties, and the circuit-size C(f). This was the state of knowledge even for the special case t=1 (i.e., when there is at most one faulty party). In this paper, we show that for any linear-size circuit, and for any number t < n/3 of faulty parties, O(poly(t) * log n) randomness is sufficient. More generally, we show that for any function f with circuit-size C(f), we need only O(poly(t) * log n + poly(t) * C(f)/n) randomness in order to withstand any coalition of size at most t. Furthermore, in our protocol only t+1 parties flip coins and the rest of the parties are deterministic. Our results generalize to the case of adaptive adversaries as well.
Last updated:  1998-04-30
A Random Server Model for Private Information Retrieval (or How to Achieve Information Theoretic PIR Avoiding Data Replication)
Uncategorized
Yael Gertner, Shafi Goldwasser, Tal Malkin
Private information retrieval (PIR) schemes enable users to obtain information from databases while keeping their queries secret from the database managers. We propose a new model for PIR, utilizing auxiliary random servers to provide privacy services for database access. In this model, prior to any on-line communication where users request queries, the database engages in an initial preprocessing setup stage with the random servers. Using this model we achieve the first information-theoretic PIR solution in which the database does not need to give away its data to be replicated, and with minimal on-line computation cost for the database. This solves privacy and efficiency problems inherent to all previous solutions. In particular, all previous information-theoretic PIR schemes required multiple replications of the database into separate entities which are not allowed to communicate with each other; and in all previous schemes (including ones which do not achieve information-theoretic security), the amount of computation performed by the database on-line for every query is at least linear in the size of the database. In contrast, in our solutions the database does not give away its contents to any other entity; and after the initial setup stage, which costs at most O(n log n) in computation, the database needs to perform only O(1) computation to answer users' queries on-line. All the extra on-line computation is done by the auxiliary random servers.
Last updated:  1998-04-22
Maintaining Authenticated Communication in the Presence of Break-ins
Uncategorized
Ran Canetti, Shai Halevi, Amir Herzberg
We study the problem of maintaining authenticated communication over untrusted communication channels, in a scenario where the communicating parties may be occasionally and repeatedly broken into for transient periods of time. Once a party is broken into, its cryptographic keys are exposed and perhaps modified. Yet, we want parties whose security is thus compromised to regain their ability to communicate in an authenticated way aided by other parties. In this work we present a mathematical model for this highly adversarial setting, exhibiting salient properties and parameters, and then describe a practically-appealing protocol for the task of maintaining authenticated communication in this model. A key element in our solution is devising {\em proactive distributed signature (PDS) schemes} in our model. Although PDS schemes are known in the literature, they are all designed for a model where authenticated communication and broadcast primitives are available. We therefore show how these schemes can be modified to work in our model, where no such primitives are available a-priori. In the process of devising the above schemes, we also present a new definition of PDS schemes (and of distributed signature schemes in general). This definition may be of independent interest.
Last updated:  2003-08-04
The Random Oracle Methodology, Revisited
Ran Canetti, Oded Goldreich, Shai Halevi
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
Last updated:  1998-03-17
Chameleon Hashing and Signatures
Uncategorized
Hugo Krawczyk, Tal Rabin
We introduce CHAMELEON SIGNATURES that provide an undeniable commitment of the signer to the contents of the signed document (as regular digital signatures do) but, at the same time, do not allow the recipient of the signature to disclose the contents of the signed information to any third party without the signer's consent. These signatures are closely related to Chaum's "undeniable signatures", but chameleon signatures allow for simpler and more efficient realizations than the latter. In particular, they are essentially non-interactive and do not involve the design and complexity of zero-knowledge proofs on which traditional undeniable signatures are based. Instead, chameleon signatures are generated under the standard method of hash-then-sign. Yet, the hash functions which are used are CHAMELEON HASH FUNCTIONS. These hash functions are characterized by the non-standard property of being collision-resistant for the signer but collision-tractable for the recipient. We present simple and efficient constructions of chameleon hashing and chameleon signatures. The former can be constructed based on standard cryptographic assumptions (such as the hardness of factoring or discrete logarithms) and have efficient realizations based on these assumptions. For the signature part we can use any digital signature (such as RSA or DSS) and prove the unforgeability property of the resultant chameleon signatures solely based on the unforgeability of the underlying digital signature in use.
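For concreteness, here is a discrete-log-based chameleon hash of the kind the paper describes, sketched in a toy group (the parameters and variable names are mine and deliberately tiny): the recipient's trapdoor x makes collisions easy to compute, while without x finding them amounts to computing a discrete log.

import random

# toy group: p = 2q + 1 is a safe prime and g generates the subgroup of order q in Z_p^*
# (discrete logs are trivial at this size; real parameters would be 2048-bit or elliptic-curve groups)
p, q, g = 23, 11, 2

x = random.randrange(1, q)   # recipient's trapdoor
y = pow(g, x, p)             # public chameleon-hash key

def chameleon_hash(m, r):
    # CH(m, r) = g^m * y^r mod p
    return pow(g, m, p) * pow(y, r, p) % p

def find_collision(m, r, m_new):
    # with the trapdoor, solve m + x*r = m_new + x*r' (mod q) for r'
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 7, random.randrange(q)
m_new = 3
r_new = find_collision(m, r, m_new)
assert chameleon_hash(m, r) == chameleon_hash(m_new, r_new)   # the recipient has found a collision

Because the recipient could have produced such a colliding pair itself, a signature on the chameleon hash does not let the recipient convince a third party of the message's contents, which is the non-transferability the abstract describes.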
Last updated:  1998-03-13
A Modular Approach to the Design and Analysis of Authentication and Key Exchange Protocols
Uncategorized
Mihir Bellare, Ran Canetti, Hugo Krawczyk
We present a general framework for constructing and analyzing authentication protocols in realistic models of communication networks. This framework provides a sound formalization for the authentication problem and suggests simple and attractive design principles for general authentication and key exchange protocols. The key element in our approach is a modular treatment of the authentication problem in cryptographic protocols; this applies to the definition of security, to the design of the protocols, and to their analysis. In particular, following this modular approach, we show how to systematically transform solutions that work in a model of idealized authenticated communications into solutions that are secure in the realistic setting of communication channels controlled by an active adversary. Using these principles we construct and prove the security of simple and practical authentication and key-exchange protocols. In particular, we provide a security analysis of some well-known key exchange protocols (e.g. authenticated Diffie-Hellman key exchange), and of some of the techniques underlying the design of several authentication protocols that are currently being deployed on a large scale for the Internet Protocol and other applications.
Last updated:  1998-03-10
An Efficient Non-Interactive Statistical Zero-Knowledge Proof System for Quasi-Safe Prime Products
Uncategorized
Rosario Gennaro, Daniele Micciancio, Tal Rabin
We present efficient zero-knowledge proof systems for quasi-safe prime products and other related languages. Quasi-safe primes are a relaxation of safe primes, a class of prime numbers useful in many cryptographic applications. Our proof systems achieve higher security and better efficiency than all previously known ones. In particular, all our proof systems are perfect or statistical zero-knowledge, meaning that even a computationally unbounded adversary cannot extract any information from the proofs. Moreover, our proof systems are extremely efficient because they do not use general reductions to NP-complete problems, can be easily parallelized preserving zero-knowledge, and are non-interactive for computationally unbounded provers. The prover can also be efficiently implemented given some trapdoor information and using very little interaction. We demonstrate the applicability of quasi-safe primes by showing how they can be effectively used in the context of RSA based undeniable signatures to enforce the use of ``good'' public keys, i.e., keys such that if a signer can convince a recipient of the validity of a signature, then he won't be able to subsequently deny the same signature in case of a dispute.
Last updated:  1998-06-16
Fast Batch Verification for Modular Exponentiation and Digital Signatures
Uncategorized
Mihir Bellare, Juan A. Garay, Tal Rabin
Many tasks in cryptography (e.g., digital signature verification) call for verification of a basic operation like modular exponentiation in some group: given (g,x,y) check that g<sup>x</sup>=y. This is typically done by re-computing g<sup>x</sup> and checking that we get y. We would like to do it differently, and faster. The approach we use is batching. Focusing first on the basic modular exponentiation operation, we provide some probabilistic batch verifiers, or tests, that verify a sequence of modular exponentiations significantly faster than the naive re-computation method. This yields speedups for several verification tasks that involve modular exponentiations. Focusing specifically on digital signatures, we then suggest a weaker notion of (batch) verification which we call ``screening.'' It seems useful for many uses of signatures, and has the advantage that it can be done very fast; in particular, we show how to screen a sequence of RSA signatures at the cost of one RSA verification plus hashing.
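The flavor of such a test can be seen in a small-exponents-style batch verifier sketched in a toy prime-order group (the toy parameters and names are mine; the paper's actual tests and their analysis are what matters).

import random

p, q, g = 23, 11, 2    # toy subgroup of prime order q; real use needs a large group

def naive_verify(batch):
    return all(pow(g, x, p) == y for x, y in batch)

def batch_verify(batch, sec_bits=8):
    # check g^(sum s_i x_i) == prod y_i^s_i for short random s_i:
    # one full exponentiation plus cheap short-exponent work, instead of len(batch) exponentiations
    s = [random.getrandbits(sec_bits) for _ in batch]
    lhs = pow(g, sum(si * x for si, (x, _) in zip(s, batch)) % q, p)
    rhs = 1
    for si, (_, y) in zip(s, batch):
        rhs = rhs * pow(y, si, p) % p
    return lhs == rhs

batch = [(x, pow(g, x, p)) for x in random.sample(range(q), 5)]
print(naive_verify(batch), batch_verify(batch))             # True True
bad = batch[:-1] + [(batch[-1][0], batch[-1][1] * g % p)]   # corrupt the last pair
print(naive_verify(bad), batch_verify(bad))
# False, and the batch test rejects with high probability; in a large group the error
# is roughly 2^-sec_bits, while in this toy group it is limited to about 1/q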
Last updated:  1998-03-04
A Practical Public Key Cryptosystem Provably Secure against Adaptive Chosen Ciphertext Attack
Uncategorized
Ronald Cramer, Victor Shoup
A new public key cryptosystem is presented that is provably secure against adaptive chosen ciphertext attack. The scheme is quite practical, and the proof of security relies only on standard intractability assumptions.
Last updated:  1998-02-26
On the possibility of basing Cryptography on the assumption that $P \neq NP$
Uncategorized
Oded Goldreich, Shafi Goldwasser
Recent works by Ajtai and by Ajtai and Dwork bring to light the old (general) question of whether it is at all possible to base the security of cryptosystems on the assumption that $P \neq NP$. We discuss this question and in particular review and extend a two-decade-old result of Brassard regarding this question. Our conclusion is that the question remains open.
Last updated:  1998-02-22
Universal Service Providers for Database Private Information Retrieval
Uncategorized
Giovanni Di-Crescenzo, Yuval Ishai, Rafail Ostrovsky
We consider the question of private information retrieval in the so-called ``commodity-based'' model. This model was recently proposed by Beaver for practically-oriented service-provider internet applications. In this paper, we show the following, somewhat surprising, results regarding this model for the problem of private-information retrieval: (1) the service-provider model allows one to dramatically reduce the overall communication involving the user, using off-line pre-processing messages from ``service-providers'' to databases, where the service-providers do not need to know the database contents, nor the users' future requests; (2) our service-provider solutions are resilient against more than a majority (in fact, all-but-one) coalitions of service-providers; and (3) these results hold for {\em both} the computational and the information-theoretic setting. More specifically, we exhibit a service-provider algorithm which can ``sell'' (i.e., generate and send) ``commodities'' to users and databases, that subsequently allow users to retrieve database contents in a way which hides from the database which particular item the user retrieves. The service-providers need not know anything about the contents of the databases nor the nature of the user's requests in order to generate commodities. Our commodity-based solution significantly improves the communication complexity of the users (i.e., counting both the size of commodities bought by the user from the service-providers and the subsequent communication with the databases) compared to all previously known on-line private information retrieval protocols (i.e., without the help of the service-providers). Moreover, we show how commodities from different service-providers can be {\em combined} in such a way that even if ``all-but-one'' of the service-providers collude with the database, the user's privacy remains intact. Finally, we show how to re-use commodities in case of multiple requests (i.e., in the amortized sense), how to ``check'' commodity-correctness, and how some of the solutions can be extended to the related problem of {\em Private Information Storage}.
Last updated:  1998-02-03
Private Information Retrieval by Keywords
Uncategorized
Benny Chor, Niv Gilboa, Moni Naor
Private information retrieval (PIR) schemes enable a user to access one or more servers that hold copies of a database and {\em privately} retrieve parts of the $n$ bits of data stored in the database. This means that the queries give each individual database no partial information (in the information theoretic or computational sense) on the identity of the item retrieved by the user. All known PIR schemes assume that the user knows the {\em physical address} of the sought item. This is usually not the case when accessing a public database that is not managed by the user. Such databases are typically presented with keywords, which are then internally translated (at the database end) to physical addresses, using an appropriate search structure (for example, a hash table or a binary tree). In this note we describe a simple, modular way to privately access data by keywords. It combines {\em any} conventional search structure with {\em any} underlying PIR scheme (including single server schemes). The transformation requires no modification in the way that the search structure is maintained. Therefore the same database will support both private and regular (non private) searches.
Last updated:  1998-01-27
The Graph Clustering Problem has a Perfect Zero-Knowledge Proof
Uncategorized
A. De Santis, G. Di Crescenzo, O. Goldreich, G. Persiano
The input to the Graph Clustering Problem consists of a sequence of integers $m_1,...,m_t$ and a sequence of $\sum_{i=1}^{t}m_i$ graphs. The question is whether the equivalence classes, under the graph isomorphism relation, of the input graphs have sizes which match the input sequence of integers. In this note we show that this problem has a (perfect) zero-knowledge interactive proof system. This result improves over <a href="http:../1996/96-14.html">record 96-14</a>, where a parametrized (by the sequence of integers) version of the problem was studied.
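The decision being proved is easy to state in code; here is a brute-force sketch of my own (exhaustive isomorphism testing, suitable only for toy graphs given as undirected edge lists on vertices 0..n-1).

from collections import Counter
from itertools import permutations

def norm(g):
    return {(min(u, v), max(u, v)) for u, v in g}

def isomorphic(g1, g2, n):
    # brute force over all vertex relabellings -- exponential, toy-sized graphs only
    e1, e2 = norm(g1), norm(g2)
    if len(e1) != len(e2):
        return False
    return any(all((min(p[u], p[v]), max(p[u], p[v])) in e2 for u, v in e1)
               for p in permutations(range(n)))

def graph_clustering(ms, graphs, n):
    # YES iff the multiset of isomorphism-class sizes equals the multiset m_1, ..., m_t
    classes = []
    for g in graphs:
        for cls in classes:
            if isomorphic(cls[0], g, n):
                cls.append(g)
                break
        else:
            classes.append([g])
    return Counter(len(c) for c in classes) == Counter(ms)

path1 = [(0, 1), (1, 2), (2, 3)]   # a path on 4 vertices
path2 = [(3, 2), (2, 0), (0, 1)]   # the same path, relabelled
tri   = [(0, 1), (1, 2), (0, 2)]   # a triangle plus an isolated vertex
print(graph_clustering([2, 1], [path1, path2, tri], 4))   # True
print(graph_clustering([3], [path1, path2, tri], 4))      # False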
Last updated:  1998-01-08
On Protocol Divertibility
Uncategorized
Gerrit Bleumer
In this paper, we establish the notion of divertibility as a protocol property as opposed to the existing notion of it as a language property (see Okamoto and Ohta). We give a definition of protocol divertibility that applies to arbitrary 2-party protocols and is compatible with Okamoto and Ohta's definition in the case of interactive zero-knowledge proofs. Other important examples falling under the new definition are blind signature protocols. A sufficient criterion for divertibility is presented and found to be satisfied by many examples of protocols in the literature. The generality of the definition is further demonstrated by examples from protocol classes that have not been considered for divertibility before. We show diverted El-Gamal encryption and diverted Diffie-Hellman key exchange.
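As one simple illustration of diversion (a hedged sketch only: this is a natural mediated variant of Diffie-Hellman in a toy group, not necessarily the paper's exact protocol), a warden re-randomizes both flows with an exponent w; the two endpoints still agree on a common key g^{abw}, which the warden itself cannot compute without solving Diffie-Hellman.

import random

p, q, g = 23, 11, 2                 # toy prime-order subgroup; real use needs large parameters

a = random.randrange(1, q)          # Alice's secret
b = random.randrange(1, q)          # Bob's secret
w = random.randrange(1, q)          # the warden's diverting exponent

A = pow(g, a, p)                    # Alice  -> warden
B = pow(g, b, p)                    # Bob    -> warden
A_div = pow(A, w, p)                # warden -> Bob   (re-randomized version of Alice's message)
B_div = pow(B, w, p)                # warden -> Alice (re-randomized version of Bob's message)

k_alice = pow(B_div, a, p)          # = g^{abw}
k_bob   = pow(A_div, b, p)          # = g^{abw}
assert k_alice == k_bob             # both ends share a key the warden, lacking a and b, cannot compute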