All papers in 2021 (1705 results)
GoUncle: A Blockchain Of, By, For Modest Computers
GoUncle is a blockchain for permissionless participation by modest computers. As in GHOST (Greedy Heaviest Observed SubTree, successfully implemented and used in the Proof-of-Work version of the Ethereum blockchain), GoUncle blocks also record the public-key identities of the finders of some forking blocks, who are dearly called ``uncles'' (poorly named ``orphans'' in Bitcoin). While GHOST uses uncles only to save PoW mining electricity, GoUncle assigns jobs for uncles to do. In a so-called {\em payload screening} job, uncles select from earlier payloads only those data items that comply with the blockchain database (DB) policy and announce them for the blockchain's gossip protocol to diffuse. Because the blockchain can readily append blocks containing incorrect payloads, each block's height, as a globally known address, becomes {\em deterministic} immediately upon appending the block. These deterministic blockchain addresses can index and partition the distributed blockchain DB into small files stored in today's low-cost, over-provisioned external storage, enabling fast input, output, lookup, insert, update, management, etc., exactly as a standard DB management system (DBMS) is operated. The blockchain DB thus becomes a standard DBMS that is fast to operate even on a modest computer, and it is secured by a {\em hop-by-hop firewall} among a vast number of {\em semantics gossipers}: upon receiving a gossip of the uncles' screening, each gossiper looks it up in its local DBMS and judges either to deposit it and gossip it on, or to discard it. This hop-by-hop firewall works exactly as {\em correctness probability amplification} by repeated execution of a {\em randomized probabilistic} (RP) algorithm, and thus the following becomes newly known:
$$\mbox{Blockchains} \subset \mbox{RP}.$$
A more general and methodological use of uncles as a No-Spam and No-Single-Point-of-Failure (No-SPOF) set of blockchain servers is also manifested.
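To illustrate the amplification claim, under the (illustrative, not stated in the abstract) assumption that each gossip hop independently rejects a policy-violating item with probability at least $1/2$, as for a one-sided-error RP algorithm, the probability that such an item survives $h$ hops of the firewall decays exponentially:
$$\Pr[\mbox{a policy-violating item survives } h \mbox{ hops}] \le 2^{-h}.$$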
Verifiable Encryption from MPC-in-the-Head
Verifiable encryption (VE) is a protocol where one can provide assurance that an encrypted plaintext satisfies certain properties, or relations.
It is an important building block in cryptography with many useful applications, such as key escrow, group signatures, optimistic fair exchange, and others. However, the majority of previous VE schemes are restricted to instantiation with specific public-key encryption schemes or relations.
In this work, we propose a novel framework that realizes VE protocols using zero-knowledge proof systems based on the MPC-in-the-head paradigm (Ishai et al. STOC 2007). Our generic compiler can turn a large class of zero-knowledge proofs into secure VE protocols for any secure public-key encryption scheme with the undeniability property, a notion that essentially guarantees binding of encryption when used as a commitment scheme.
Our framework is versatile: because the circuit proven by the MPC-in-the-head prover is decoupled from a complex encryption function, the work of the prover is focused on proving the encrypted data satisfies the relation, not the proof of plaintext knowledge. Hence, our approach allows for instantiation with various combinations of properties about the encrypted data and encryption functions. We then consider concrete applications, to demonstrate the efficiency of our framework, by first giving a new approach and implementation to verifiably encrypt discrete logarithms in any prime order group more efficiently than was previously known. Then we give the first practical verifiable encryption scheme for AES keys with post-quantum security, along with an implementation and benchmarks.
The Maiorana-McFarland structure based cryptanalysis of Simon
In this paper we propose a linear hull construction for block ciphers whose round functions have a quadratic Maiorana-McFarland structure. In our Maiorana-McFarland structure based constructive linear cryptanalysis, the search for linear trails with high squared correlations is linear-algebraic; this linear-algebraic structure of the space of all linear trails allows good linear hulls to be constructed. Then, for Simon2n and its variants, we prove the lower bound $\frac{1}{2^n}$ on the potential of the linear hull with fixed input and output masks over arbitrarily many rounds, under independence assumptions. We argue that for Simon2n with its linear key schedule, the potential of a realistic linear hull should be bigger than $\frac{1}{2^{2n}}$.
On the other hand, we prove that the expected differential probability (EDP) is at least $\frac{1}{2^n}$ under the independence assumptions, and we argue that the EDP of realistic differential trails of Simon2n is bigger than $\frac{1}{2^{2n}}$. It therefore seems that, at least theoretically, Simon2n is insecure against key-recovery attacks based on our newly constructed linear hulls and on our constructed differential trails.
CheckShake: Passively Detecting Anomaly in Wi-Fi Security Handshake using Gradient Boosting based Ensemble Learning
Recently, a number of attacks (such as the key reinstallation attack, KRACK) have been demonstrated on the WPA2 protocol suite in Wi-Fi WLANs. As the firmware of WLAN devices in the context of IoT, industrial systems, and medical devices is often not patched, detecting and preventing such attacks is challenging. In this paper, we design and implement a system, called CheckShake, to passively detect anomalies in the handshake of Wi-Fi security protocols, in particular WPA2, between a client and an access point using COTS radios. Our proposed system works without decrypting any traffic. It passively monitors multiple wireless channels in parallel in the neighborhood and uses a state machine model to characterize and detect the attacks. In particular, we develop a state machine model for grouping Wi-Fi handshake packets and then perform deep packet inspection to identify the symptoms of the anomaly in specific stages of a handshake session. Our implementation of CheckShake does not require any modification to the firmware of the client, the access point, or the COTS devices; it only needs to be physically placed within the range of the access point and its clients. We use both a publicly available dataset and our own dataset for the performance analysis of CheckShake. Using gradient boosting based supervised machine learning models, we show that an accuracy of around 93.39% and a false positive rate of 5.08% can be achieved with CheckShake.
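As a rough illustration of the learning component only (not CheckShake's actual pipeline; the synthetic features and labels below are hypothetical placeholders for handshake-derived features), a gradient boosting classifier can be trained and evaluated as follows:

# Hypothetical sketch: gradient boosting over synthetic "handshake feature" vectors.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))                       # 4 placeholder per-handshake features
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)   # synthetic "anomalous handshake" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))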
Cryptanalysis of the Cryptosystems Based on the Generalized Hidden Discrete Logarithm Problem
In this paper, we show that the hidden discrete logarithm problem (HDLP) and its generalized form (GHDLP) over finite non-commutative associative algebras (FNAAs) can be reduced to the discrete logarithm problem (DLP) in a finite field by analyzing the eigenvalues of the representation matrix. Through an analysis of computational complexity, we show that HDLP and GHDLP are not good improvements over DLP. With all the instruments in hand, we show how some schemes based on GHDLP can be broken. Thus we conclude that all ideas of constructing cryptographic schemes based on these two problems are of no practical significance.
A Unified Framework for Non-Universal SNARKs
We propose a general framework for non-universal SNARKs. It contains (1) knowledge-sound and non-black-box any-simulation-extractable (ASE) SNARKs, and (2) zero-knowledge and subversion-zero-knowledge SNARKs, for the well-known QAP, SAP, SSP, and QSP constraint languages, all of which by design have \emph{relatively} simple security proofs. The knowledge-sound zero-knowledge SNARK is similar to Groth's SNARK from EUROCRYPT 2016, except it has fewer trapdoors, while the ASE subversion-zero-knowledge SNARK relies on a few additional conditions. We prove security in a weaker, more realistic version of the algebraic group model. We characterize SAP, SSP, and QSP in terms of QAP; this allows one to use a SNARK for QAP directly for the other languages. Our results allow us to construct a family of SNARKs for different languages and with different security properties following the same proof template. Some of the new SNARKs are more efficient than prior ones. In other cases, the new SNARKs cover gaps in the landscape, e.g., there was no previous ASE or Sub-ZK SNARK for SSP or QSP.
A Compact Digital Signature Scheme Based on the Module-LWR Problem
We propose a lattice-based digital signature scheme MLWRSign by modifying Dilithium, which is one of the third-round finalists of NIST's call for post-quantum cryptographic standards. To the best of our knowledge, our scheme MLWRSign is the first signature scheme whose security is based on the (module) learning with rounding (LWR) problem. Due to the simplicity of LWR, the secret key size is reduced by approximately 30% in our scheme compared to Dilithium, while achieving the same level of security. Moreover, we implemented MLWRSign and observed that its running time is comparable to that of Dilithium.
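For intuition, here is a minimal sketch of a single (plain, not module) LWR sample, the rounding-based analogue of LWE on which MLWRSign's security rests; the moduli and dimension below are illustrative toy values, not the scheme's parameters:

# Toy LWR sample: b = round((p/q) * <a, s> mod q) mod p; rounding replaces explicit noise.
import secrets

q, p, n = 8192, 1024, 8                          # illustrative toy moduli and dimension
s = [secrets.randbelow(q) for _ in range(n)]     # secret vector
a = [secrets.randbelow(q) for _ in range(n)]     # public uniformly random vector

inner = sum(ai * si for ai, si in zip(a, s)) % q
b = ((inner * p + q // 2) // q) % p              # deterministic rounding from mod q to mod p
print("LWR sample:", a, b)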
Efficient Random Beacons with Adaptive Security for Ungrindable Blockchains
We describe and analyze a simple protocol for $n$ parties that
implements a randomness beacon: a sequence of high entropy
values, continuously emitted at regular intervals, with sub-linear
communication per value. The algorithm can tolerate a
$(1 - \epsilon)/2$ fraction of the $n$ players to be controlled by an
adaptive adversary that may deviate arbitrarily from the
protocol. The randomness mechanism relies on verifiable random
functions (VRF), modeled as random functions, and effectively
stretches an initial $\lambda$-bit seed to an arbitrarily long public
sequence so that (i) with overwhelming probability in $k$--the
security parameter--each beacon value has high min-entropy
conditioned on the full history of the algorithm, and (ii) the total
work and communication required per value is $O(k)$ cryptographic
operations.
The protocol can be directly applied to provide a qualitative
improvement in the security of several proof-of-stake blockchain
algorithms, rendering them safe from ``grinding'' attacks.
Where Star Wars Meets Star Trek: SABER and Dilithium on the Same Polynomial Multiplier
Secure communication often requires both encryption and digital signatures to guarantee the confidentiality of the message and the authenticity of the parties. However, post-quantum cryptographic protocols are often studied independently. In this work, we identify a powerful synergy between two finalist protocols in the NIST standardization process. In particular, we propose a technique that enables SABER and Dilithium to share the exact same polynomial multiplier. Since polynomial multiplication plays a key role in each protocol, this has a significant impact on hardware implementations that support both SABER and Dilithium. We estimate that existing Dilithium implementations can add support for SABER with only a 4% increase in LUT count. A minor trade-off of the proposed multiplier is that it can produce inexact results for a limited set of inputs. We thus carry out a thorough analysis of such cases, where we prove that the probability of these events occurring is near zero, and we show that this characteristic does not affect the security of the implementation.
We then implement the proposed multiplier in hardware to obtain a design that offers competitive performance/area trade-offs. Our NTT implementation achieves a latency of 519 cycles while consuming 2,012 LUTs and only 331 flip-flops when implemented on an Artix-7 FPGA. We also propose a shuffling-based method to provide side-channel protection with low overhead during polynomial multiplication. Finally, we evaluate the side-channel security of the proposed design on a Sakura-X FPGA board.
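For reference, the sketch below shows the operation such a shared multiplier ultimately computes, multiplication in $\mathbb{Z}_q[x]/(x^n+1)$, in plain schoolbook form; the actual designs use an NTT-based datapath and scheme-specific moduli, and the parameters here are toy values:

# Schoolbook (non-NTT) multiplication in Z_q[x]/(x^n + 1), for reference only.
def polymul_negacyclic(a, b, q):
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:                              # x^n = -1: wrap around with a sign flip
                c[k - n] = (c[k - n] - a[i] * b[j]) % q
    return c

print(polymul_negacyclic([1, 2, 0, 0], [0, 1, 0, 0], 17))   # (1 + 2x) * x = x + 2x^2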
Categorization of Faulty Nonce Misuse Resistant Message Authentication
A growing number of lightweight block ciphers are proposed for environments such as the Internet of Things. An important contribution to the reduced implementation cost is a block length n of 64 or 96 bits rather than 128 bits. As a consequence, encryption modes and message authentication code (MAC) algorithms require security beyond the 2^{n/2} birthday bound. This paper provides an extensive treatment of MAC algorithms that offer beyond birthday bound PRF security for both nonce-respecting and nonce-misusing adversaries. We study constructions that use two block cipher calls, one universal hash function call and an arbitrary number of XOR operations.
We start with the separate problem of generically identifying all possible secure n-to-n-bit pseudorandom functions (PRFs) based on two block cipher calls. The analysis shows that the existing constructions EDM, SoP, and EDMD are the only constructions of this kind that achieve beyond birthday bound security.
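As a reminder of these three constructions (a toy sketch; the keyed map E below is a simple stand-in permutation, not a real block cipher, and key handling is simplified), they combine two block cipher calls as follows:

# Toy sketch of SoP, EDM, and EDMD with a placeholder 64-bit keyed permutation E.
N = 2**64
def E(k, x):                    # invertible toy map (odd multiplier + key add); NOT secure
    return (x * 0x9E3779B97F4A7C15 + k) % N

def SoP(k1, k2, x):             # Sum of Permutations: E_{k1}(x) xor E_{k2}(x)
    return E(k1, x) ^ E(k2, x)

def EDM(k1, k2, x):             # Encrypted Davies-Meyer: E_{k2}(E_{k1}(x) xor x)
    return E(k2, E(k1, x) ^ x)

def EDMD(k1, k2, x):            # EDM Dual: E_{k2}(E_{k1}(x)) xor E_{k1}(x)
    y = E(k1, x)
    return E(k2, y) ^ y

print(hex(SoP(1, 2, 3)), hex(EDM(1, 2, 3)), hex(EDMD(1, 2, 3)))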
Subsequently, we deliver an exhaustive treatment of MAC algorithms, where the outcome of a universal hash function evaluation on the message may be entered at any point in the computation of the PRF. We conclude that a total of nine schemes achieve beyond birthday bound security, and a tenth construction cannot be proven secure using currently known proof techniques. Of these nine MAC algorithms, three constructions achieve optimal n-bit security in the nonce-respecting setting, but are completely insecure if the nonce is reused. The remaining six constructions have 3n/4-bit security in the nonce-respecting setting, and only four of these six constructions still achieve beyond birthday bound security in the case of nonce misuse.
Invertible Quadratic Non-Linear Layers for MPC-/FHE-/ZK-Friendly Schemes over $\mathbb F_p^n$
Motivated by new applications such as secure Multi-Party Computation (MPC), Fully Homomorphic Encryption (FHE), and Zero-Knowledge proofs (ZK), many MPC-, FHE- and ZK-friendly symmetric-key primitives that minimize the number of multiplications over $\mathbb{F}_p$ for a large prime $p$ have been recently proposed in the literature. This goal is often achieved by instantiating the non-linear layer via power maps $x\mapsto x^d$.
In this paper, we start an analysis of new non-linear permutation functions over $\mathbb{F}_p^n$ that can be used as building blocks in such symmetric-key primitives. Given a local map $F:\mathbb{F}_p^m \rightarrow \mathbb{F}_p$, we focus on S-Boxes over $\mathbb{F}_p^n$ for $n\ge m$ defined as $\mathcal{S}_F(x_0, x_1, \ldots, x_{n-1}) = y_0\| y_1\| \ldots \| y_{n-1}$ where $y_i := F(x_i, x_{i+1}, \ldots, x_{i+m-1} )$. As main results, we prove that
- given any quadratic function $F:\mathbb{F}_p^2 \rightarrow \mathbb{F}_p$, the corresponding S-Box $\mathcal{S}_F$ over $\mathbb{F}_p^n$ for $n\ge 3$ is never invertible;
- similarly, given any quadratic function $F:\mathbb{F}_p^3 \rightarrow \mathbb{F}_p$, the corresponding S-Box $\mathcal{S}_F$ over $\mathbb{F}_p^n$ for $n\ge 5$ is never invertible.
Moreover, for each $p\ge 3$, we present (1st) generalizations of the Lai-Massey construction over $\mathbb{F}_p^n$ defined as before via functions $F:\mathbb{F}_p^m \rightarrow \mathbb{F}_p$ for each $n=m\ge 2$ and (2nd) (non-trivial) quadratic functions $F:\mathbb{F}_p^3 \rightarrow \mathbb{F}_p$ such that $\mathcal{S}_F$ over $\mathbb{F}_p^n$ for $n\in \{3,4\}$ is invertible. As an open problem for future work, we conjecture that for each $m\ge 1$ there exists a \textit{finite} integer $n_{\text{max}}(m)$ such that $\mathcal{S}_F$ over $\mathbb{F}_p^n$ defined as before via a quadratic function $F:\mathbb{F}_p^m \rightarrow \mathbb{F}_p$ is \textit{not} invertible for each $n\ge n_{\text{max}}(m)$.
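To make the construction concrete, the following small brute-force sketch builds $\mathcal{S}_F$ from a local map and tests invertibility by enumeration for small $p$ and $n$; the particular quadratic map and the cyclic index convention $x_{(i+j) \bmod n}$ are illustrative assumptions, not taken from the paper:

# Brute-force invertibility check of S_F over F_p^n for a local map F (indices taken cyclically).
from itertools import product

def make_SF(F, p, n, m):
    def S(x):
        return tuple(F(*[x[(i + j) % n] for j in range(m)]) % p for i in range(n))
    return S

p = 3
F = lambda a, b: a * a + b          # an illustrative quadratic local map F: F_p^2 -> F_p
for n in (2, 3, 4):
    S = make_SF(F, p, n, 2)
    images = {S(x) for x in product(range(p), repeat=n)}
    print(f"p={p}, n={n}: S_F invertible = {len(images) == p ** n}")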
Finally, as a concrete application, we propose Neptune, a variant of the sponge hash function Poseidon, whose non-linear layer is designed by taking into account the results presented in this paper.
We show that this variant leads to a concrete reduction in the number of multiplications with respect to Poseidon.
RLWE-based distributed key generation and threshold decryption
Ever since the appearance of quantum computers, cryptography based on prime factoring and discrete logarithms has been put in question, giving birth to so-called post-quantum cryptography. The most prominent field in post-quantum cryptography is lattice-based cryptography, whose protocols are proven to be as difficult to break as certain hard lattice problems such as Learning With Errors (LWE) or Ring Learning With Errors (RLWE). Furthermore, the application of cryptographic techniques to different areas, like electronic voting, has also generated great interest in distributed cryptography. In this work we give two original threshold protocols based on the lattice problem RLWE: one for key generation and one for decryption. We prove both of them correct and secure under the assumption of hardness of some well-known lattice problems, and we give a rough implementation of the protocols in C to provide some tentative results about their viability.
Verifiable Decryption for BGV
In this work we present a direct construction for verifiable decryption for the BGV encryption scheme by combining existing zero-knowledge proofs for linear relations and bounded values. This is one of the first constructions of verifiable decryption protocols for lattice-based cryptography, and we give a protocol that is simpler and at least as efficient as the state of the art when amortizing over many ciphertexts.
To prove its practicality we provide concrete parameters, resulting in a proof size of less than $44 \tau$ KB for $\tau$ ciphertexts with a message space of $2048$ bits. Furthermore, we provide an open source implementation showing that the amortized cost of the verifiable decryption protocol is only $76$ ms per message when batching over $\tau = 2048$ ciphertexts.
Private Lives Matter: A Differential Private Functional Encryption Scheme (extended version)
The use of data combined with tailored statistical analysis has presented a unique opportunity to organizations in diverse fields to observe users' behaviors and needs, and accordingly adapt and fine-tune their services.
However, in order to offer utilizable, plausible, and personalized alternatives to users, this process usually also entails a breach of their privacy.
The use of statistical databases for releasing data analytics is growing exponentially, and while many cryptographic methods are utilized to protect the confidentiality of the data -- a task that has been ably carried out by many authors over the years -- only a few works focus on the problem of privatizing the actual databases.
Believing that securing and privatizing databases are two equally important problems, in this paper, we propose a hybrid approach by combining Functional Encryption with the principles of Differential Privacy.
Our main goal is not only to design a scheme for processing statistical data and releasing statistics in a privacy-preserving way but also to provide a richer, more balanced, and comprehensive approach in which data analytics and cryptography go hand in hand with a shift towards increased privacy.
Quantum commitments and signatures without one-way functions
In the classical world, the existence of commitments is equivalent to the existence of one-way functions. In the quantum setting, on the other hand, commitments are not known to imply one-way functions, but all known constructions of quantum commitments use at least one-way functions. Are one-way functions really necessary for commitments in the quantum world?
In this work, we show that non-interactive quantum commitments (for classical messages) with computational hiding and statistical binding exist if pseudorandom quantum states exist. Pseudorandom quantum states are sets of quantum states that are efficiently generated but their polynomially many copies are computationally indistinguishable from the same number of copies of Haar random states [Ji, Liu, and Song, CRYPTO 2018]. It is known that pseudorandom quantum states exist even if BQP=QMA (relative to a quantum oracle) [Kretschmer, TQC 2021], which means that pseudorandom quantum states can exist even if no quantum-secure classical cryptographic primitive exists. Our result therefore shows that quantum commitments can exist even if no quantum-secure classical cryptographic primitive exists. In particular, quantum commitments can exist even if
no quantum-secure one-way function exists. In this work, we also consider digital signatures, which are other fundamental primitives in cryptography. We show that one-time secure digital signatures with quantum public keys exist if pseudorandom quantum states exist. In the classical setting, the existence of digital signatures is equivalent to the existence of one-way functions. Our result, on the other hand, shows that quantum signatures can exist even if no quantum-secure classical cryptographic primitive (including quantum-secure one-way functions) exists.
Rotational-Linear Attack: A New Framework of Cryptanalysis on ARX ciphers with Applications to Chaskey
In this paper, we formulate a new framework of cryptanalysis, called the rotational-linear attack, on ARX ciphers. We first build an efficient distinguisher for a cipher $E$ by combining the rotational attack and the linear attack together with some intermediate variables. Then a key-recovery technique is introduced with which we can recover some bits of the last whitening key in the related-key scenario. To decrease the data complexity of our attack, we also apply a new method, called bit flipping, in rotational cryptanalysis for the first time, and we apply the effective partitioning technique to the key-recovery part.
Applying the new framework of attack to the MAC algorithm Chaskey, we build a full-round distinguisher over it. Besides, we recover $21$ bits of key information in the related-key scenario, for keys belonging to a large weak-key class, based on a 6-round distinguisher. The data complexity is $2^{38.8}$ and the time complexity is $2^{46.8}$. Before our work, the rotational distinguisher could only be used to reveal key information by checking weak-key conditions; this is the first time it is applied in a last-rounds key-recovery attack. We also build a 17-round rotational-linear distinguisher for the ChaCha permutation, an improvement over single rotational cryptanalysis of it.
Proof of a conjecture on a special class of matrices over commutative rings of characteristic 2
In this note, we prove the conjecture posed by Keller and Rosemarin at Eurocrypt 2021 on the nullity of a matrix polynomial of a block matrix with Hadamard type blocks over commutative rings of characteristic 2. Therefore, it confirms the conjectural optimal bound on the dimension of invariant subspace of the Starkad cipher using the HADES design strategy. We also give characterizations of the algebraic structure formed by Hadamard matrices over commutative rings.
Low-Complexity Deep Convolutional Neural Networks on Fully Homomorphic Encryption Using Multiplexed Parallel Convolutions
Recently, the standard ResNet-20 network was successfully implemented on residue number system variant Cheon-Kim-Kim-Song (RNS-CKKS) scheme using bootstrapping, but the implementation lacks practicality due to high latency and low security level. To improve the performance, we first minimize total bootstrapping runtime using multiplexed parallel convolution that collects sparse output data for multiple channels compactly. We also propose the \emph{imaginary-removing bootstrapping} to prevent the deep neural networks from catastrophic divergence during approximate ReLU operations. In addition, we optimize level consumptions and use lighter and tighter parameters. Simulation results show that we have 4.67$\times$ lower inference latency and 134$\times$ less amortized runtime (runtime per image) for ResNet-20 compared to the state-of-the-art previous work, and we achieve standard 128-bit security. Furthermore, we successfully implement ResNet-110 with high accuracy on the RNS-CKKS scheme for the first time.
Computational Irrelevancy: Bridging the Gap between Pseudo- and Real Randomness in MPC Protocols
Because classical computers cannot efficiently obtain random numbers, it is common practice to design cryptosystems in terms of real random numbers and then replace them with (cryptographically secure) pseudorandom ones for concrete implementations. However, as pointed out by [Nuida, PKC 2021], this technique may lead to a compromise of security in secure multiparty computation (MPC) protocols. Although this work suggests using information-theoretically secure protocols and pseudorandom generators (PRGs) with high min-entropy to alleviate the problem, it is preferable to base the security on computational assumptions rather than the stronger information-theoretic ones. Observing that the contrived constructions in the aforementioned work use MPC protocols and PRGs that are closely related to each other, we notice that it may help to alleviate the problem by using protocols and PRGs that are "unrelated" to each other. In this paper, we propose a notion called "computational irrelevancy" to formalise the term "unrelated" and, under this condition, provide a security guarantee under computational assumptions.
Hecate: Abuse Reporting in Secure Messengers with Sealed Sender
End-to-end encryption provides strong privacy protections to billions of people, but it also complicates efforts to moderate content that can seriously harm people. To address this concern, Tyagi et al. [CRYPTO 2019] introduced the concept of asymmetric message franking (AMF), which allows people to report abusive content to a moderator, while otherwise retaining end-to-end privacy by default and even compatibility with anonymous communication systems like Signal’s sealed sender.
In this work, we provide a new construction for asymmetric message franking called Hecate that is faster, more secure, and introduces additional functionality compared to Tyagi et al. First, our construction uses fewer invocations of standardized crypto primitives and operates in the plain model. Second, on top of AMF’s accountability and deniability requirements, we also add forward and backward secrecy. Third, we combine AMF with source tracing, another approach to content moderation that has previously been considered only in the setting of non-anonymous networks. Source tracing allows for messages to be forwarded, and a report only identifies the original source who created a message. To provide anonymity for senders and forwarders, we introduce a model of "AMF with preprocessing" whereby every client authenticates with the moderator out-of-band to receive a token that they later consume when sending a message anonymously.
Divide and Funnel: a Scaling Technique for Mix-Networks
While many anonymous communication (AC) protocols have been proposed to provide anonymity over the internet, scaling to a large number of users while remaining provably secure is challenging. We tackle this challenge by proposing a new scaling technique to improve the scalability/anonymity of AC protocols that distributes the computational load over many nodes without completely disconnecting the paths different messages take through the network. We demonstrate that our scaling technique is useful and practical through a core sample anonymous broadcast protocol, Streams, that offers provable security guarantees and scales for a million messages. The scaling technique ensures that each node in the system does the computation-heavy public key operation only for a tiny fraction of the total messages routed through the Streams network while maximizing the mixing/shuffling in every round. Our experimental results show that Streams can scale well even if the system has a load of one million messages at any point in time, with a latency of 16 seconds while offering provable ``one-in-a-billion'' unlinkability, and can be leveraged for applications such as anonymous microblogging and network-level anonymity for blockchains. We also illustrate by examples that our scaling technique can be useful to other AC protocols to improve their scalability and privacy, and can be interesting to protocol developers.
Cryptanalysis of Candidate Obfuscators for Affine Determinant Programs
At ITCS 2020, Bartusek et al. proposed a candidate indistinguishability obfuscator (iO) for affine determinant programs (ADPs). The candidate is special since it directly applies specific randomization techniques to the underlying ADP, without relying on the hardness of traditional cryptographic assumptions like discrete-log or learning with errors. It is relatively efficient compared to the rest of the iO candidates. However, the obfuscation scheme requires further cryptanalysis since it was not known to be based on any well-formed mathematical assumptions.
In this paper, we show cryptanalytic attacks on the iO candidate provided by Bartusek et al. Our attack exploits the weakness of one of the randomization steps in the candidate. The attack applies to a fairly general class of programs. At the end of the paper we discuss plausible countermeasures to defend against our attacks.
PUBA: Privacy-Preserving User-Data Bookkeeping and Analytics
In this paper we propose Privacy-preserving User-data Bookkeeping & Analytics (PUBA), a building block destined to enable the implementation of business models (e.g., targeted advertising) and regulations (e.g., fraud detection) requiring user-data analysis in a privacy-preserving way.
In PUBA, users keep an unlinkable but authenticated cryptographic logbook containing their historic data on their device.
This logbook can only be updated by the operator while its content is not revealed.
Users can take part in a privacy-preserving analytics computation, where it is ensured that their logbook is up-to-date and authentic while the potentially secret analytics function is verified to be privacy-friendly.
Taking constrained devices into account, users may also outsource analytic computations (to a potentially malicious proxy not colluding with the operator).
We model our novel building block in the Universal Composability framework and provide a practical protocol instantiation.
To demonstrate the flexibility of PUBA, we sketch instantiations of privacy-preserving fraud detection and targeted advertising, although it could be used in many more scenarios, e.g. data analytics for multi-modal transportation systems.
We implemented our bookkeeping protocols and an exemplary outsourced analytics computation based on logistic regression using the MP-SPDZ MPC framework.
Performance evaluations using a smartphone as user device and more powerful hardware for operator and proxy suggest that PUBA for smaller logbooks can indeed be practical.
Making Private Function Evaluation Safer, Faster, and Simpler
In the problem of two-party \emph{private function evaluation} (PFE), one party $P_A$ holds a \emph{private function} $f$ and (optionally) a private input $x_A$, while the other party $P_B$ possesses a private input $x_B$. Their goal is to evaluate $f$ on $x_A$ and $x_B$, and one or both parties may obtain the evaluation result $f(x_A, x_B)$ while no other information beyond $f(x_A, x_B)$ is revealed.
In this paper, we revisit the two-party PFE problem and provide several enhancements. We propose the \emph{first} constant-round actively secure PFE protocol with linear complexity. Based on this result, we further provide the \emph{first} constant-round publicly verifiable covertly (PVC) secure PFE protocol with linear complexity to gain better efficiency. For instance, when the deterrence factor is $\epsilon = 1/2$, compared to the passively secure protocol, its communication cost is very close and its computation cost is around $2.6\times$. In our constructions, as a by-product, we design a specific protocol for proving that a list of ElGamal ciphertexts is derived from an \emph{extended permutation} performed on a given list of elements. It should be noted that this protocol greatly improves the previous result and may be of independent interest. In addition, a reusability property is added to our two PFE protocols. Namely, if the same function $f$ is involved in multiple executions of the protocol between $P_A$ and $P_B$, then the protocol could be executed more efficiently from the second execution. Moreover, we further extend this property to be \emph{global}, such that it supports multiple executions for the same $f$ in a reusable fashion between $P_A$ and \emph{arbitrary} parties playing the role of $P_B$.
On the security of OSIDH
The Oriented Supersingular Isogeny Diffie-Hellman is a post-quantum key exchange scheme recently introduced by Colò and Kohel. It is based on the group action of an ideal class group of a quadratic imaginary order on a subset of supersingular elliptic curves, and in this sense it can be viewed as a generalization of the popular isogeny based key exchange CSIDH. From an algorithmic standpoint, however, OSIDH is quite different from CSIDH. In a sense, OSIDH uses class groups which are more structured than in CSIDH, creating a potential weakness that was already recognized by Colò and Kohel. To circumvent the weakness, they proposed an ingenious way to realize a key exchange by exchanging partial information on how the class group acts in the neighborhood of the public curves, and conjectured that this additional information would not impact security.
In this work we revisit the security of OSIDH by presenting a new attack, building upon previous work of Onuki. Our attack has exponential complexity, but it practically breaks Colò and Kohel's parameters unlike Onuki's attack. We also discuss countermeasures to our attack, and analyze their impact on OSIDH, both from an efficiency and a functionality point of view.
Improved Constructions of Anonymous Credentials From Structure-Preserving Signatures on Equivalence Classes
Anonymous attribute-based credentials (ABCs) are a powerful tool allowing users to authenticate while maintaining privacy. When instantiated from structure-preserving signatures on equivalence classes (SPS-EQ) we obtain a controlled form of malleability, and hence increased functionality and privacy for the user.
Existing constructions consider equivalence classes on the message space, allowing the joint randomization of credentials and the corresponding signatures on them. In this work, we additionally consider equivalence classes on the signing-key space. In this regard, we obtain a signer-hiding notion, where the issuing organization is not revealed when a user
shows a credential. To achieve this, we instantiate the ABC framework of Fuchsbauer, Hanser, and Slamanig (FHS19, Journal of Cryptology '19) with a recent SPS-EQ scheme (ASIACRYPT '19) modified to support a fully adaptive NIZK from the framework of Couteau and Hartmann (CRYPTO '20). We also show how to obtain mercurial signatures (CT-RSA '19), extending the application of our construction to anonymous delegatable credentials.
To further increase functionality and efficiency, we augment the set-commitment scheme of FHS19 to support openings on attribute sets disjoint from those possessed by the user, while integrating a proof of exponentiation to allow for a more efficient verifier. Instantiating in the CRS model, we obtain an efficient credential system, anonymous under malicious organization keys, with increased expressiveness and privacy, proven secure in the standard model.
Incompressible Cryptography
Incompressible encryption allows us to make the ciphertext size flexibly large and ensures that an adversary learns nothing about the encrypted data, even if the decryption key later leaks, unless she stores essentially the entire ciphertext. Incompressible signatures can be made arbitrarily large and ensure that an adversary cannot produce a signature on any message, even one she has seen signed before, unless she stores one of the signatures essentially in its entirety.
In this work, we give simple constructions of both incompressible public-key encryption and signatures under minimal assumptions. Furthermore, large incompressible ciphertexts (resp. signatures) can be decrypted (resp. verified) in a streaming manner with low storage. In particular, these notions strengthen the related concepts of disappearing encryption and signatures, recently introduced by Guan and Zhandry (TCC 2021), whose previous constructions relied on sophisticated techniques and strong, non-standard assumptions. We extend our constructions to achieve an optimal "rate", meaning the large ciphertexts (resp. signatures) can contain almost equally large messages, at the cost of stronger assumptions.
Zero-Knowledge for Homomorphic Key-Value Commitments with Applications to Privacy-Preserving Ledgers
Commitments to key-value maps (or, authenticated dictionaries) are an important building block in cryptographic applications, including cryptocurrencies and distributed file systems.
In this work we study short commitments to key-value maps with two additional properties: double-hiding (both keys and values should be hidden) and homomorphism (we should be able to combine two commitments to obtain one that is the ``sum'' of their key-value openings). Furthermore, we require these commitments to be short and to support efficient transparent zero-knowledge arguments (i.e., without a trusted setup).
As our main contribution, we show how to construct commitments with the properties above as well as efficient zero-knowledge arguments over them.
We additionally discuss a range of practical optimizations that can be carried out depending on the application domain.
Finally, we formally describe a specific application of commitments to key-value maps to scalable anonymous ledgers. We show how to extend QuisQuis (Fauzi et al., ASIACRYPT 2019). This results in an efficient, confidential multi-type system with a state whose size is independent of the number of transactions.
Improving Support-Minors rank attacks: applications to GeMSS and Rainbow
The Support-Minors (SM) method has opened new routes to attack multivariate schemes with rank properties that were previously impossible to exploit, as shown by the recent attacks of Tao et al. (CRYPTO 2021) and Beullens (EUROCRYPT 2021) on the Round 3 NIST candidates GeMSS and Rainbow, respectively.
In this paper, we study this SM approach in more depth and propose a greatly improved attack on GeMSS based on the Support-Minors method. Even though GeMSS was already affected by Tao's attack, our attack affects it even more and makes it completely infeasible to repair the scheme by simply increasing the size of its parameters or even applying the recent projection technique from Øygarden et al. (PQCrypto 2021), whose purpose was to make GeMSS immune to Tao's attack. For instance, our attack on the GeMSS128 parameter set has estimated time complexity $2^{72}$, and repairing the scheme by applying projection would result in a signature with slower signing time by an impractical factor of $2^{14}$.
Another contribution is to suggest optimizations that can reduce memory access costs for an XL strategy on a large SM system using the Block-Wiedemann algorithm as a subroutine, when these costs are a concern. In a memory cost model based on the one provided by Bernstein et al. (https://ntruprime.cr.yp.to/nist/ntruprime-20201007.pdf), we show that the rectangular MinRank attack of Beullens may indeed reduce the security of all Round 3 Rainbow parameter sets below their targeted security strengths, contradicting the lower bound claimed by the Rainbow team using the same memory cost model (https://troll.iis.sinica.edu.tw/by-publ/recent/response-ward.pdf).
Cryptographic Symmetric Structures Based on Quasigroups
In our paper we study the effect of changing the commutative group operation used in Feistel and Lai-Massey symmetric structures into a quasigroup operation. We prove that if the quasigroup operation is isotopic with a group $\mathbb G$, the complexity of mounting a differential attack against our generalization of the Feistel structure is the same as attacking the unkeyed version of the general Feistel iteration based on $\mathbb G$. Also, when $\mathbb G$ is non-commutative we show that both versions of the Feistel structure are equivalent from a differential point of view. For the Lai-Massey structure we introduce four non-commutative versions, we argue for the necessity of working over a group and we provide some necessary conditions for the differential equivalency of the four notions.
Traceable PRFs: Full Collusion Resistance and Active Security
The main goal of traceable cryptography is to protect against unauthorized redistribution of cryptographic functionalities. Such schemes provide a way to embed identities (i.e., a "mark") within cryptographic objects (e.g., decryption keys in an encryption scheme, signing keys in a signature scheme). In turn, the tracing guarantee ensures that any "pirate device" that successfully replicates the underlying functionality can be successfully traced to the set of identities used to build the device.
In this work, we study traceable pseudorandom functions (PRFs). As PRFs are the workhorses of symmetric cryptography, traceable PRFs are useful for augmenting symmetric cryptographic primitives with strong traceable security guarantees. However, existing constructions of traceable PRFs either rely on strong notions like indistinguishability obfuscation or satisfy weak security guarantees like single-key security (i.e., tracing only works against adversaries that possess a single marked key).
In this work, we show how to use fingerprinting codes to upgrade a single-key traceable PRF into a fully collusion resistant traceable PRF, where security holds regardless of how many keys the adversary possesses. We additionally introduce a stronger notion of security where tracing security holds even against active adversaries that have oracle access to the tracing algorithm. In conjunction with known constructions of single-key traceable PRFs, we obtain the first fully collusion resistant traceable PRF from standard lattice assumptions. Our traceable PRFs directly imply new lattice-based secret-key traitor tracing schemes that are CCA-secure and where tracing security holds against active adversaries that have access to the tracing oracle.
Efficient and Post-Quantum Zero-Knowledge Proofs for Blockchain Confidential Transaction Protocols
We propose new zero-knowledge proofs for efficient and post-quantum ring confidential transaction (RingCT) protocols based on lattice assumptions in blockchain systems. First, we introduce an inner-product based linear equation satisfiability approach for balance proofs with a wide range (e.g., 64-bit precision). Unlike existing balance proofs that require additional proofs for some ''corrector values'' [CCS'19], our approach avoids the corrector values for better efficiency. Furthermore, we design a ring signature scheme to efficiently hide a user's identity in large anonymity sets. Different from existing approaches that adopt a one-out-of-many proof [CCS'19, Crypto'19], we show that a linear sum proof suffices for ring signatures, which avoids the costly binary proof part. We further use the idea of ''unbalanced'' relations to build a logarithmic-size ring signature scheme. Finally, we show how to adopt these techniques in RingCT protocols and implement a prototype to compare the performance with existing approaches. The results show that our solutions can reduce the proof size of Crypto'19 by about 25%, and the proof size, proving time, and verification time of CCS'19 by up to 70%, 30%, and 20%, respectively. We also believe our techniques are of independent interest for other privacy-preserving applications such as secure e-voting, and are applicable in a generic setting.
Proving as Fast as Computing: Succinct Arguments with Constant Prover Overhead
Succinct arguments are proof systems that allow a powerful, but untrusted, prover to convince a weak verifier that an input $x$ belongs to a language $L \in NP$, with communication that is much shorter than the $NP$ witness. Such arguments, which grew out of the theory literature, are now drawing immense interest also in practice, where a key bottleneck that has arisen is the high computational cost of \emph{proving} correctness.
In this work we address this problem by constructing succinct arguments for general computations, expressed as Boolean circuits (of bounded fan-in), with a \emph{strictly linear} size prover. The soundness error of the protocol is an arbitrarily small constant. Prior to this work, succinct arguments were known with a \emph{quasi-}linear size prover for general Boolean circuits or with linear-size only for arithmetic circuits, defined over large finite fields.
In more detail, for every Boolean circuit $C=C(x,w)$, we construct an $O(\log |C|)$-round argument-system in which the prover can be implemented by a size $O(|C|)$ Boolean circuit (given as input both the instance $x$ and the witness $w$), with arbitrarily small constant soundness error and using $poly(\lambda,\log |C|)$ communication, where $\lambda$ denotes the security parameter. The verifier can be implemented by a size $O(|x|) + poly(\lambda, \log |C|)$ circuit following a size $O(|C|)$ private pre-processing step, or, alternatively, by using a purely public-coin protocol (with no pre-processing) with a size $O(|C|)$ verifier. The protocol can be made zero-knowledge using standard techniques (and with similar parameters). The soundness of our protocol is computational and relies on the existence of collision resistant hash functions that can be computed by linear-size circuits, such as those proposed by Applebaum et al. (ITCS, 2017).
At the heart of our construction is a new information-theoretic \emph{interactive oracle proof} (IOP), an interactive analog of a PCP, for circuit satisfiability, with constant prover overhead. The improved efficiency of our IOP is obtained by bypassing a barrier faced by prior IOP constructions, which needed to (either explicitly or implicitly) encode the entire computation using a multiplication code.
Succinct Zero-Knowledge Batch Proofs for Set Accumulators
Cryptographic accumulators are a common solution to proving information about a large set $S$.
They allow one to compute a short digest of $S$ and short certificates of some of its basic properties, notably membership of an element. Accumulators also allow one to track set updates: a new accumulator is obtained by inserting/deleting a given element.
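As background, here is a toy sketch of an RSA-style accumulator with a membership witness; the modulus below is a small prime and therefore insecure, and the mapping of elements to primes (e.g., hashing-to-prime) used in real schemes is omitted:

# Toy RSA-style accumulator: digest of a set, membership witness, and verification.
from math import prod

N = 10**9 + 7                 # small prime demo modulus; real schemes use an RSA modulus
g = 3                         # accumulator base
S = [5, 11, 17, 23]           # set elements, already represented as primes

acc = pow(g, prod(S), N)                          # short digest of the whole set
x = 17
wit = pow(g, prod(e for e in S if e != x), N)     # witness: exponentiate by all other elements
print("member:", pow(wit, x, N) == acc)           # verification: wit^x == acc (mod N)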
In this work we consider the problem of generating membership and update proofs for {\em batches} of elements so that we can succinctly
prove additional properties of the elements (i.e., proofs are of constant size regardless of the batch size), and we can preserve privacy.
Solving this problem would allow obtaining blockchain systems with improved privacy and scalability.
The state-of-the-art approach to achieve this goal is to combine accumulators (typically Merkle trees) with zkSNARKs. This solution is however expensive for provers and does not scale for large batches of elements. In particular, there is no scalable solution for proving batch membership proofs when we require zero-knowledge (a standard definition of privacy-preserving protocols).
In this work we propose new techniques to efficiently use zkSNARKs with RSA accumulators. We design and implement two main schemes: 1) \harisa, which proves batch membership in zero-knowledge;
2) \insarisa, which proves batch updates.
For batch membership, the prover in \harisa is orders of magnitude faster than existing approaches based on Merkle trees (depending on the hash function). For batch updates we get similar cost savings compared to approaches based on Merkle trees; we also improve
over the recent solution of Ozdemir et al. [USENIX'20].
IronMask: Versatile Verification of Masking Security
This paper introduces IronMask, a new versatile verification tool for masking security. IronMask is the first to offer the verification of standard simulation-based security notions in the probing model as well as recent composition and expandability notions in the random probing model. It supports any masking gadgets with linear randomness (e.g. addition, copy and refresh gadgets) as well as quadratic gadgets (e.g. multiplication gadgets) that might include non-linear randomness (e.g. by refreshing their inputs), while providing complete verification results for both types of gadgets. We achieve this complete verifiability by introducing a new algebraic characterization for such quadratic gadgets and exhibiting a complete method to determine the sets of input shares which are necessary and sufficient to perform a perfect simulation of any set of probes. We report various benchmarks which show that IronMask is competitive with state-of-the-art verification tools in the probing model (maskVerif, scVerif, SILVER, matverif). IronMask is also several orders of magnitude faster than VRAPS --the only previous tool verifying random probing composability and expandability-- as well as SILVER --the only previous tool providing complete verification for quadratic gadgets with non-linear randomness. Thanks to this completeness and increased performance, we obtain better bounds for the tolerated leakage probability of state-of-the-art random probing secure compilers.
The complexity of solving Weil restriction systems
The solving degree of a system of multivariate polynomial equations provides an upper bound for the complexity of computing the solutions of the system via Groebner basis methods. In this paper, we consider polynomial systems that are obtained via Weil restriction of scalars. The latter is an arithmetic construction which, given a finite Galois field extension $k\hookrightarrow K$, associates to a system $\mathcal{F}$ defined over $K$ a system $\mathrm{Weil}(\mathcal{F})$ defined over $k$, in such a way that the solutions of $\mathcal{F}$ over $K$ and those of $\mathrm{Weil}(\mathcal{F})$ over $k$ are in natural bijection.
In this paper, we find upper bounds for the complexity of solving a polynomial system $\mathrm{Weil}(\mathcal{F})$ obtained via Weil restriction in terms of algebraic invariants of the system $\mathcal{F}$.
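As a small illustration (a standard textbook example, not taken from the paper): let $K=\mathbb{F}_{q^2}=\mathbb{F}_q(\alpha)$ with $\alpha^2=d\in\mathbb{F}_q$ and $q$ odd, and let $\mathcal{F}=\{x^2-c\}$ over $K$ with $c=c_0+c_1\alpha$. Writing $x=x_0+x_1\alpha$ with $x_0,x_1\in\mathbb{F}_q$ and collecting coordinates yields
$$\mathrm{Weil}(\mathcal{F}) = \{\, x_0^2 + d\,x_1^2 - c_0,\;\; 2\,x_0x_1 - c_1 \,\}$$
over $\mathbb{F}_q$, whose solutions $(x_0,x_1)$ are in natural bijection with the $K$-solutions of $\mathcal{F}$.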
Multi-Issuer Anonymous Credentials Without a Root Authority
The rise of blockchain technology has boosted interest in privacy-enhancing technologies, in particular, anonymous transaction authentication. Permissionless blockchains realize transaction anonymity through one-time pseudonyms, whereas permissioned blockchains leverage anonymous credentials.
Earlier anonymous credential solutions assume a single issuer; as a result, they hide the identity of users but still reveal the identity of the issuer. A countermeasure is delegatable credentials, which support multiple issuers as long as a root authority exists. Assuming a root authority, however, is unsuitable for blockchain technology and decentralized applications. This paper introduces a solution for anonymous credentials that guarantees user anonymity even without a root authority. The proposed solution is secure in the universal composability framework and allows users to produce anonymous signatures that are logarithmic in the number of issuers and constant in the number of user attributes.
Secure Publish-Process-Subscribe System for Dispersed Computing
Publish-subscribe protocols enable real-time multi-point-to-multi-point communications for many dispersed computing systems like Internet of Things (IoT) applications. Recent interest has focused on adding processing to such publish-subscribe protocols to enable computation over real-time streams, so that the protocols can provide functionalities such as sensor fusion, compression, and other statistical analysis on raw sensor data. However, unlike pure publish-subscribe protocols, which can be easily deployed with end-to-end transport layer encryption, it is challenging to ensure security in such publish-process-subscribe protocols when the processing is carried out by an untrusted third party. In this work, we present XYZ, a secure publish-process-subscribe system that can preserve the confidentiality of computations and support multi-publisher-multi-subscriber settings. Within XYZ, we design two distinct schemes: the first using Yao's garbled circuits (the GC-Based Scheme) and the second using homomorphic encryption with proxy re-encryption (the Proxy-HE Scheme). We build implementations of the two schemes as an integrated publish-process-subscribe system. We evaluate our system on several functions and also demonstrate real-world applications. The evaluation shows that the GC-Based Scheme can finish most tasks two orders of magnitude faster than the Proxy-HE Scheme, while Proxy-HE can still securely complete tasks within an acceptable time for most functions, with a different security assumption and a simpler system structure.
Using data compression and randomization to build an unconditionally secure short key cipher
We consider the problem of constructing an unconditionally secure cipher for the case when the key length is less than the length of the encrypted message. (Unconditional security means that a computationally unbounded adversary cannot obtain information about the encrypted message without the key.)
In this article, we propose data compression and randomization techniques combined with entropically-secure encryption. The resulting cipher can be used for encryption in such a way that the key length does not depend on the entropy or the length of the encrypted message; instead, it is determined by the required security level.
Approximate Distance-Comparison-Preserving Symmetric Encryption
We introduce distance-comparison-preserving symmetric encryption (DCPE), a new type of property-preserving encryption (PPE) that preserves relative distance between plaintext vectors. DCPE is naturally suited for nearest-neighbor search on encrypted data. To achieve meaningful security, we divert from prior work on PPE and ask for approximate correctness, which is natural given the prevalence of approximate nearest neighbor (ANN) search. We conduct a thorough study of what security approximate DCPE can provide and how to construct it.
Based on a relation we prove between approximate DCP and approximate distance-preserving functions, we design our core approximate DCPE scheme, which we call Scale-And-Perturb ($\mathsf{SAP}$). The encryption algorithm of $\mathsf{SAP}$ processes data on-the-fly. To boost security, we also introduce two preprocessing techniques: (1) normalizing the plaintext distribution, and (2) shuffling, wherein the component-wise encrypted dataset is randomly permuted. We prove (under suitable restrictions) that $\mathsf{SAP}$ achieves an indistinguishability-based security notion we call Real-or-Replaced ($\mathsf{RoR}$). In particular, our $\mathsf{RoR}$ result implies that our scheme prevents the membership inference attacks of Yeom et al. (CSF 2018). Moreover, we show that for i.i.d. multivariate normal plaintexts, we get security against approximate frequency-finding attacks, the main line of attacks against property-preserving encryption. This follows from a one-wayness $(\mathsf{OW})$ analysis. Finally, by carefully combining our $\mathsf{OW}$ and $\mathsf{RoR}$ results, we are able to characterize the bit-security of $\mathsf{SAP}$.
Our overall finding is that our scheme not only has superior bit-security to OPE but also resists specific attacks that even ideal order-revealing encryption (Boneh et al., EUROCRYPT 2015) does not. This suggests it could be sufficient for certain ANN applications, a subject on which we encourage further study.
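The toy sketch below illustrates the general idea of a scale-and-perturb style encoding (it is an illustration of approximate distance-comparison preservation only, not the paper's $\mathsf{SAP}$ construction): scaling every vector by a common secret factor and adding small bounded noise preserves distance comparisons whenever the compared distances are not too close.

# Illustrative only (not the SAP scheme): secret common scaling plus small bounded noise.
import numpy as np

rng = np.random.default_rng(1)
scale, noise = 7.3, 0.01                       # toy secret scale and perturbation bound
def enc(v):
    return scale * v + rng.uniform(-noise, noise, size=v.shape)

x, y, z = rng.normal(size=(3, 16))             # three plaintext vectors
dist = lambda a, b: np.linalg.norm(a - b)
print(dist(x, y) < dist(x, z), dist(enc(x), enc(y)) < dist(enc(x), enc(z)))  # same ordering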
Leakage-Resilient IBE/ABE with Optimal Leakage Rates from Lattices
We derive the first adaptively secure IBE and ABE for t-CNF, and selectively secure ABE for general circuits, from lattices, with $1-o(1)$ leakage rates, in both the relative leakage model and the bounded retrieval model (BRM).
To achieve this, we first identify a new fine-grained security notion for ABE -- partially adaptive/selective security -- and instantiate this notion from LWE. Then, using this notion, we design a new key compressing mechanism for identity-based/attribute-based weak hash proof systems (IB/AB-wHPS) for various policy classes, achieving (1) succinct secret keys and (2) adaptive/selective security matching the existing non-leakage-resilient lattice-based designs. Using the existing connection between weak hash proof systems and leakage-resilient encryption, the succinct-key IB/AB-wHPS yields the desired leakage-resilient IBE/ABE schemes with optimal leakage rates in the relative leakage model. Finally, by further improving the prior analysis of the compatible locally computable extractors, we achieve the optimal leakage rates in the BRM.
Towards a Simpler Lattice Gadget Toolkit
As a building block, gadgets and associated algorithms are widely used in advanced lattice cryptosystems. The gadget algorithms for power-of-base moduli are very efficient and simple; however, the current algorithms for arbitrary moduli are still complicated and practically more costly despite several efforts. Considering the necessity of arbitrary moduli, developing simpler and more practical gadget algorithms for arbitrary moduli is crucial to improving the practical performance of lattice-based applications.
In this work, we propose two new gadget sampling algorithms for arbitrary moduli. Our first algorithm is for gadget Gaussian sampling. It is simple and efficient. One distinguishing feature of our Gaussian sampler is that it does not need floating-point arithmetic, which makes it better suited to constrained environments. Our second algorithm is for gadget subgaussian sampling. Compared with the existing algorithm, it is simpler, faster, and requires asymptotically less randomness. In addition, our subgaussian sampler achieves an almost equal quality for different practical parameters. Overall these two algorithms provide simpler options for gadget algorithms and enhance the practicality of the gadget toolkit.
Cryptography from Pseudorandom Quantum States
Pseudorandom states, introduced by Ji, Liu and Song (Crypto'18), are efficiently-computable quantum states that are computationally indistinguishable from Haar-random states.
One-way functions imply the existence of pseudorandom states, but Kretschmer (TQC'20) recently constructed an oracle relative to which there are no one-way functions but pseudorandom states still exist.
Motivated by this, we study the intriguing possibility of basing interesting cryptographic tasks on pseudorandom states.
We construct, assuming the existence of pseudorandom state generators that map a $\lambda$-bit seed to an $\omega(\log\lambda)$-qubit state, (a) statistically binding and computationally hiding commitments and (b) pseudo one-time encryption schemes. A consequence of (a) is that pseudorandom states are sufficient to construct maliciously secure multiparty computation protocols in the dishonest majority setting.
Our constructions are derived via a new notion called {\em pseudorandom function-like states} (PRFS), a generalization of pseudorandom states that parallels the classical notion of pseudorandom functions. Beyond the above two applications, we believe our notion can effectively replace pseudorandom functions in many other cryptographic applications.
Information Security in the Quantum Era. Threats to modern cryptography: Grover’s algorithm
Information security plays a major role in the dynamics of today’s interconnected world. Despite the successful implementation and effectiveness of modern cryptographic techniques, their inherent limitations can be exploited by quantum computers. In this article we discuss Grover’s quantum searching algorithm and its impact on the security of modern symmetric ciphers. More specifically, we present its formal description and give an implementation of the algorithm using IBM’s Qiskit framework, which allows us to simulate and run the program on a real device.
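The paper implements Grover's algorithm in Qiskit; as a rough, hedged illustration of the building blocks it describes (an assumed two-qubit toy instance, not the paper's code), the following searches for the marked state |11> with one oracle call and one diffusion step:

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    n = 2
    qc = QuantumCircuit(n)
    qc.h(range(n))                      # uniform superposition over all 2^n states
    qc.cz(0, 1)                         # oracle: phase-flip the marked state |11>
    # Diffuser (inversion about the mean), up to a global phase
    qc.h(range(n)); qc.x(range(n)); qc.cz(0, 1); qc.x(range(n)); qc.h(range(n))
    print(Statevector.from_instruction(qc).probabilities_dict())  # '11' has probability ~1

For a k-bit key search, roughly $(\pi/4)\cdot 2^{k/2}$ such iterations are needed, which is the quadratic speedup over classical exhaustive search that motivates the discussion of symmetric cipher security in the article.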
Waldo: A Private Time-Series Database from Function Secret Sharing
Applications today rely on cloud databases for storing and querying time-series data. While outsourcing storage is convenient, this data is often sensitive, making data breaches a serious concern. We present Waldo, a time-series database with rich functionality and strong security guarantees: Waldo supports multi-predicate filtering, protects data contents as well as query filter values and search access patterns, and provides malicious security in the 3-party honest-majority setting. In contrast, prior systems such as Timecrypt and Zeph have limited functionality and security: (1) these systems can only filter on time, and (2) they reveal the queried time interval to the server. Oblivious RAM (ORAM) and generic multiparty computation (MPC) are natural choices for eliminating leakage from prior work, but both of these are prohibitively expensive in our setting due to the number of roundtrips and bandwidth overhead, respectively. To minimize both, Waldo builds on top of function secret sharing, enabling Waldo to evaluate predicates without client interaction. We develop new techniques for applying function secret sharing to the encrypted database setting where there are malicious servers, secret inputs, and chained predicates. With 32-core machines, Waldo runs a query with 8 range predicates over $2^{18}$ records in 3.03s, compared to 12.88s for an MPC baseline and 16.56s for an ORAM baseline. Compared to Waldo, the MPC baseline uses 9 − 82× more bandwidth between servers (for different numbers of records), while the ORAM baseline uses 20 − 152× more bandwidth between the client and server(s) (for different numbers of predicates).
Identity-Based Matchmaking Encryption without Random Oracles
Identity-based matchmaking encryption (IB-ME) is a generalization of identity-based encryption where the sender and the receiver can both specify a target identity: if both the chosen target identities match the one of the other party, the plaintext is revealed, and otherwise the sender’s identity, the target identity, and the plaintext remain hidden. Previous work showed how to construct IB-ME in the random oracle model. We give the first construction in the plain model, based on standard assumptions over bilinear groups.
XTR and Tori
At the turn of the century, 80-bit security was the standard. When considering discrete-log based cryptosystems, it could be achieved using either subgroups of 1024-bit finite fields or using (hyper)elliptic curves. The latter would allow more compact and efficient arithmetic, until Lenstra and Verheul invented XTR. Here XTR stands for 'ECSTR', itself an abbreviation for Efficient and Compact Subgroup Trace Representation. XTR exploits algebraic properties of the cyclotomic subgroup of sixth degree extension fields, allowing elements to be represented using only a third of their regular size, making finite field DLP-based systems competitive with elliptic curve ones.
Subsequent developments, such as the move to 128-bit security and improvements in finite field DLP, rendered the original XTR and closely related torus-based cryptosystems no longer competitive with elliptic curves. Yet, some of the techniques related to XTR are still relevant for certain pairing-based cryptosystems. This chapter describes the past and the present of XTR and other methods for efficient and compact subgroup arithmetic.
Identifiable Cheating Entity Flexible Round-Optimized Schnorr Threshold (ICE FROST) Signature Protocol
This paper presents an Identifiable Cheating Entity (ICE) FROST signature protocol that is an improvement over the FROST signature scheme (Komlo and Goldberg, SAC 2020) since it can identify cheating participants in its Key Generation protocol. The proposed protocol achieves robustness in its Key Generation phase by introducing a cheating identification mechanism and then excluding cheating participants from the protocol. By enabling the cheating identification mechanism, we remove the need to abort the Key Generation protocol every time cheating activity is suspected. Our cheating identification mechanism allows every participant to individually check the validity of complaints issued against possibly cheating participants. Then, after all of the cheating participants are eliminated, the Key Generation protocol is guaranteed to finish successfully. On the other hand, the signing process only achieves a weak form of robustness, as in the original FROST.
We then introduce a static public key variant of ICE FROST. Our work is the first to consider static private/public keys for a round-optimized Schnorr-based signature scheme. With static public keys, the group's established public and private keys remain constant for the lifetime of signers, while the signing shares of each participant, as well as the set of group members, are updated over time. This ensures the long-term security of the static keys and facilitates verification of the generated threshold signature, because a group of signers communicates its public key to the verifier only once during the group's lifetime. Our implementation benchmarks demonstrate that the runtime of the protocol is feasible for real-world applications.
SoK: Blockchain Light Clients
Blockchain systems, as append-only ledgers, are typically associated with linearly growing participation costs. Therefore, for a blockchain client to interact with the system (query or submit a transaction), it can either pay these costs by downloading, storing and verifying the blockchain history, or forfeit blockchain security guarantees and place its trust in third-party intermediary servers.
With this problem becoming apparent from early works in the blockchain space, the concept of a light client has been proposed, where a resource-constrained client such as a browser or mobile device can participate in the system by querying and/or submitting transactions without holding the full blockchain but while still inheriting the blockchain's security guarantees. A plethora of blockchain systems with different light client frameworks and implementations have been proposed, each with different functionalities, assumptions and efficiencies. In this work we provide a systematization of such light client designs. We unify the space by providing a set of definitions on their properties in terms of provided functionality, efficiency and security, and provide future research directions based on our findings.
Efficient Set Membership Proofs using MPC-in-the-Head
Set membership proofs are an invaluable part of privacy preserving systems. These proofs allow a prover to demonstrate knowledge of a witness $w$ corresponding to a secret element $x$ of a public set, such that they jointly satisfy a given NP relation, {\em i.e.} $\mathcal{R}(w,x)=1$ and $x$ is a member of a public set $\{x_1, \ldots, x_\ell\}$. This allows the identity of the prover to remain hidden, e.g., in ring signatures and confidential transactions in cryptocurrencies.
In this work, we develop a new technique for efficiently adding logarithmic-sized set membership proofs to any MPC-in-the-head based zero-knowledge protocol (Ishai et al. [STOC'07]). We integrate our technique into an open source implementation of the state-of-the-art, post-quantum secure zero-knowledge protocol of Katz et al. [CCS'18]. We find that using our techniques to construct ring signatures results in signatures (based only on symmetric key primitives) that are between 5 and 10 times smaller than state-of-the-art techniques based on the same assumptions. We also show that our techniques can be used to efficiently construct post-quantum secure RingCT from only symmetric key primitives.
Grover on Present: Quantum Resource Estimation
In this work, we present a cost analysis for mounting Grover's key search on the Present block cipher. Reversible quantum circuits for Present are designed taking into consideration several decompositions of the Toffoli gate. These designs are then used to produce Grover oracles for Present, and their implementation costs are compared using several metrics. Resource estimation for Grover's search is conducted by employing these Grover oracles. Finally, gate costs for these designs are estimated considering NIST's depth restrictions.
Quantifiable Assurance: From IPs to Platforms
Hardware vulnerabilities are generally considered more difficult to fix than software ones because of their persistent nature after fabrication. Thus, it is crucial to assess the security and fix the potential vulnerabilities in the earlier design phases, such as Register Transfer Level (RTL), gate-level or physical layout. The focus of the existing security assessment techniques is mainly twofold. First, they check the security of Intellectual Property (IP) blocks separately (they can be applied on a single module). Second, they aim to assess the security against individual threats considering the threats are orthogonal. We argue that IP-level security assessment is not sufficient. Eventually, the IPs are placed in a platform, such as a system-on-chip (SoC), where each IP is surrounded by other IPs connected through glue logic and shared/private buses. This has a substantial impact on the platform's security. Hence, we must develop a methodology to assess the platform-level security by considering both the IP-level security and the impact of the additional parameters introduced during the transition from IP to the platform. Another important factor to consider is that the threats are not always orthogonal. Improving security against one threat may affect the security against other threats. Hence, to build a secure platform, we must first fully understand the impact of IP communications on security while considering the following questions: What type of additional parameters are introduced during the platform integration? How to define and characterize the impact of these parameters on security? How do the mitigation techniques of one threat impact others? This paper aims to answer these important questions and proposes techniques for quantifiable assurance by quantitatively estimating and measuring the security of a platform at pre-silicon stages. We also touch upon the term security optimization and present the challenges towards future research directions.
A Note on Non-Interactive Key Exchange from Code Equivalence
A recent paper by Zhang and Zhang claims to construct the first code-based non-interactive key exchange protocol, using a modified version of the Code Equivalence problem. We explain why this approach is flawed, and consequently debunk this claim. A simple Magma script confirms our results.
Financially Backed Covert Security
The security notion of covert security introduced by Aumann and Lindell (TCC'07) allows the adversary to successfully cheat and break security with a fixed probability $1-\epsilon$, while with probability $\epsilon$, honest parties detect the cheating attempt. Asharov and Orlandi (ASIACRYPT'12) extend covert security to enable parties to create publicly verifiable evidence about misbehavior that can be transferred to any third party. This notion is called publicly verifiable covert security (PVC) and has been investigated by multiple works. While these two notions work well in settings with known identities in which parties care about their reputation, they fall short in Internet-like settings where there are only digital identities that can provide some form of anonymity.
In this work, we propose the notion of financially backed covert security (FBC), which ensures that the adversary is financially punished if cheating is detected. Next, we present three transformations that turn PVC protocols into FBC protocols. Our protocols provide highly efficient judging, thereby enabling practical judge implementations via smart contracts deployed on a blockchain. In particular, the judge only needs to non-interactively validate a single protocol message while previous PVC protocols required the judge to emulate the whole protocol. Furthermore, by allowing an interactive punishment procedure, we can reduce the amount of validation to a single program instruction, e.g., a gate in a circuit. An interactive punishment, additionally, enables us to create financially backed covert secure protocols without any form of common public transcript, a property that has not been achieved by prior PVC protocols.
A compiler for multi-key homomorphic signatures for Turing machines
At SCN 2018, Fiore and Pagnin proposed a generic compiler (called ``Matrioska'') that transforms sufficiently expressive single-key homomorphic signatures (SKHSs) into multi-key homomorphic signatures (MKHSs) under falsifiable assumptions in the standard model. Matrioska is designed for homomorphic signatures that support programs represented as circuits. The MKHS schemes obtained through Matrioska support the evaluation and verification of arbitrary circuits over data signed from multiple users, but they require the underlying SKHS scheme to work with circuits whose size is exponential in the number of users, and thus can only support a constant number of users.
In this work, we propose a new generic compiler to convert an SKHS scheme into an MKHS scheme. Our compiler is a generalization of Matrioska for homomorphic signatures that support programs in any model of computation. When instantiated with SKHS for circuits, we recover the Matrioska compiler of Fiore and Pagnin. As an additional contribution, we show how to instantiate our generic compiler in the Turing Machines (TM) model and argue that this instantiation allows us to overcome some limitations of Matrioska:
- First, the MKHS we obtain require the underlying SKHS to support TMs whose size depends only linearly in the number of users.
- Second, when instantiated with an SKHS with succinctness $poly(\lambda)$ and fast enough verification time, e.g., $S \cdot \log T + n \cdot poly(\lambda)$ or $T +n \cdot poly(\lambda)$ (where $T$, $S$, and $n$ are the running time, description size, and input length of the program to verify, respectively), our compiler yields an MKHS in which the time complexity of both the prover and the verifier remains $poly(\lambda)$ even if executed on programs with inputs from $poly(\lambda)$ users.
While we leave constructing an SKHS with these efficiency properties as an open problem, we make one step towards this goal by proposing an SKHS scheme with verification time $poly(\lambda) \cdot T$ under falsifiable assumptions in the standard model.
“They’re not that hard to mitigate”: What Cryptographic Library Developers Think About Timing Attacks
Timing attacks are among the most devastating side-channel attacks, allowing remote attackers to retrieve secret material, including cryptographic keys, with relative ease. In principle, “these attacks are not that hard to mitigate”: the basic intuition, captured by the constant-time criterion, is that control-flow and memory accesses should be independent from secrets. Furthermore, there is a broad range of tools for automatically checking adherence to this intuition. Yet, these attacks still plague popular cryptographic libraries twenty-five years after their discovery, reflecting a dangerous gap between academic research and cryptographic engineering. This gap can potentially undermine the emerging shift towards high-assurance, formally verified cryptographic libraries. However, the causes for this gap remain uninvestigated.
To understand the causes of this gap, we conducted a survey with 44 developers of 27 prominent open-source cryptographic libraries. The goal of the survey was to analyze if and how the developers ensure that their code executes in constant time. Our main findings are that developers are aware of timing attacks and of their potentially dramatic consequences, and yet often prioritize other issues over the perceived huge investment of time and resources currently needed to make their code resistant to timing attacks. Based on the survey, we identify several shortcomings in existing analysis tools for constant-time code, and issue recommendations that can make writing constant-time libraries less difficult. Our recommendations can inform future development of analysis tools, security-aware compilers, and cryptographic libraries, not only for constant-timeness, but in the broader context of side-channel attacks, in particular micro-architectural side-channel attacks, which are a more recent topic and thus not the focus of this survey.
A New Security Notion for PKC in the Standard Model: Weaker, Simpler, and Still Realizing Secure Channels
Encryption satisfying CCA2 security is commonly known to be unnecessarily strong for realizing secure channels. Moreover, CCA2 constructions in the standard model are far from being competitive practical alternatives to constructions via random oracle. A promising research area to alleviate this problem is weaker security notions—like IND-RCCA secure encryption or IND-atag-wCCA secure tag-based encryption—which are still able to facilitate secure message transfer (SMT) via authenticated channels.
In this paper we introduce the concept of sender-binding encryption (SBE), unifying prior approaches of SMT construction in the universal composability (UC) model. We furthermore develop the corresponding non-trivial security notion of IND-SB-CPA and formally prove that it suffices for realizing SMT in conjunction with authenticated channels. Our notion is the weakest so far in the sense that it generically implies the weakest prior notions—RCCA and atag-wCCA—without additional assumptions, while the reverse is not true. A direct consequence is that IND-stag-wCCA, which is strictly weaker than IND-atag-wCCA but stronger than our IND-SB-CPA, can be used to construct a secure channel. Finally, we give an efficient IND-SB-CPA secure construction in the standard model from IND-CPA secure double receiver encryption (DRE) based on McEliece. This shows that IND-SB-CPA security yields simpler and more efficient constructions in the standard model than the weakest prior notions, i.e., IND-atag-wCCA and IND-stag-wCCA.
A Scalable SIMD RISC-V based Processor with Customized Vector Extensions for CRYSTALS-Kyber
SHA-3 is considered to be one of the most secure standardized hash functions. It relies on the Keccak-f[1,600] permutation, which operates on an internal state of 1,600 bits, mostly represented as a $5\times 5\times 64$-bit matrix. While existing implementations process the state sequentially in chunks of typically 32 or 64 bits, the Keccak-f[1,600] permutation can benefit greatly from parallelization. This paper is the first to explore the full potential of parallelizing Keccak-f[1,600] in RISC-V based processors through custom vector extensions on 32-bit and 64-bit architectures.
We analyze the Keccak-f[1,600] permutation, composed of five different step mappings, and propose ten custom vector instructions to speed up the computation. We realize these extensions in a SIMD processor described in SystemVerilog. We compare the performance of our designs to existing architectures based on vectorized application-specific instruction set processors (ASIP). We show that our designs outperform all related work thanks to our carefully selected custom vector instructions.
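The five step mappings mentioned above operate lane-wise on a 5x5 array of 64-bit words; as a hedged illustration of the kind of computation such vector instructions accelerate (a plain reference-style sketch, not the paper's vectorized design), here is the theta step in Python:

    MASK = (1 << 64) - 1

    def rol64(v, r):
        """Rotate a 64-bit lane left by r bits."""
        return ((v << r) | (v >> (64 - r))) & MASK

    def theta(A):
        """Keccak-f[1600] theta step on a 5x5 array A of 64-bit lanes, A[x][y]."""
        C = [A[x][0] ^ A[x][1] ^ A[x][2] ^ A[x][3] ^ A[x][4] for x in range(5)]
        D = [C[(x - 1) % 5] ^ rol64(C[(x + 1) % 5], 1) for x in range(5)]
        return [[A[x][y] ^ D[x] for y in range(5)] for x in range(5)]

Each of the five column parities C[x] and the resulting D[x] values can be computed independently per lane, which is exactly the data parallelism a SIMD vector extension can exploit.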
Privacy-Preserving Authenticated Key Exchange for Constrained Devices
In this paper we investigate the field of privacy-preserving authenticated key exchange protocols (PPAKE). First we make a cryptographic analysis of a previous PPAKE protocol. We show that most of its security properties, including privacy, are broken, despite the security proofs that are provided. Then we describe a strong security model which captures the security properties of a PPAKE: entity authentication, key indistinguishability, forward secrecy, and privacy. Finally, we present a PPAKE protocol in the symmetric-key setting which is suitable for constrained devices. We formally prove the security of this protocol in our model.
A Method for Securely Comparing Integers using Binary Trees
In this paper, we propose a new protocol for secure integer comparison in which each party holds a private integer. The goal of the computation is to compare both integers securely and reveal to the parties a single bit that tells which integer is larger; nothing more should be revealed. To achieve a low communication overhead, this can be done by using homomorphic encryption (HE). Our protocol relies on binary decision trees, a special case of branching programs, and can be implemented using HE. We assume a client-server setting where each party holds one of the integers, the client also holds the private key of a homomorphic encryption scheme, and the evaluation is done by the server. In this setting, our protocol outperforms the original DGK protocol of Damgård et al. and reduces the running time by at least 45%. In the case where both inputs are encrypted, our scheme reduces the running time of a variant of DGK by 63%.
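For reference, the plaintext logic that a DGK-style comparison evaluates is short; in the actual protocol the per-bit values c_i are computed over additively homomorphic ciphertexts and multiplicatively blinded before being returned, so only the "some c_i is zero" bit leaks. The sketch below is a hedged illustration of that logic only (function name and bit-length parameter are illustrative, not the paper's construction):

    def dgk_compare(x, y, ell):
        """Return True iff y > x, using the DGK per-bit values on ell-bit inputs."""
        xb = [(x >> i) & 1 for i in range(ell)]
        yb = [(y >> i) & 1 for i in range(ell)]
        result = False
        for i in range(ell):
            w = sum(xb[j] ^ yb[j] for j in range(i + 1, ell))  # do higher bits differ?
            c = xb[i] - yb[i] + 1 + 3 * w
            if c == 0:            # happens exactly when x_i=0, y_i=1, higher bits equal
                result = True
        return result

    assert dgk_compare(5, 6, 3) is True and dgk_compare(6, 5, 3) is False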
Sequential Indifferentiability of Confusion-Diffusion Networks
A large proportion of modern symmetric cryptographic building blocks are designed using the Substitution-Permutation Networks (SPNs), or more generally, Shannon's confusion-diffusion paradigm. To justify its theoretical soundness, Dodis et al. (EUROCRYPT 2016) recently introduced the theoretical model of confusion-diffusion networks, which may be viewed as keyless SPNs using random permutations as S-boxes and combinatorial primitives as permutation layers, and established provable security in the plain indifferentiability framework of Maurer, Renner, and Holenstein (TCC 2004).
We extend this work and consider Non-Linear Confusion-Diffusion Networks (NLCDNs), i.e., networks using non-linear permutation layers, in weaker indifferentiability settings. As the main result, we prove that 3-round NLCDNs achieve the notion of sequential indifferentiability of Mandal et al. (TCC 2012). We also exhibit an attack against 2-round NLCDNs, which shows the tightness of our positive result on 3 rounds. It implies correlation intractability of 3-round NLCDNs, a notion strongly related to known-key security of block ciphers and secure hash functions. Our results provide additional insights on understanding the complexity for known-key security, as well as using confusion-diffusion paradigm for designing cryptographic hash functions.
Pushing the Limits: Searching for Implementations with the Smallest Area for Lightweight S-Boxes
The area is one of the most important criteria for an S-box in hardware implementation when designing lightweight cryptography primitives. The area can be well estimated by the number of gate equivalents (GE). However, to the best of our knowledge, there is no efficient method to search for an S-box implementation with the least GE. Previous approaches can be classified into two categories: one is heuristic, aiming to find an implementation with a satisfying but not necessarily smallest GE count; the other is SAT-based, focusing only on the smallest number of gates while ignoring that the areas of different gates vary. An implementation with the fewest gates does not usually lead to the smallest GE count.
In this paper, we propose an improved SAT-based tool targeting optimization of the GE count of an S-box implementation. Given an S-box, our tool can return the implementation of this S-box with the smallest number of GE. We speed up the search process of the tool with a bit-sliced technique. Additionally, our tool supports 2-, 3-, and 4-input gates, while the previous tools cover only 2-input gates. To highlight the strength of our tool, we apply it to some 4-bit and 5-bit S-boxes of famous ciphers. We obtain a better implementation of RECTANGLE's S-box with an area of 18.00 GE. Moreover, we prove that the implementations of the S-boxes of PICCOLO, SKINNY, and LBLOCK in the current literature are already optimal. When using the DC synthesizer on the circuits produced by our tool, the areas are much better than those of the circuits converted by DC synthesizers from the lookup tables (LUT). Finally, we use our tool to find implementations of 5-bit S-boxes, such as those used in KECCAK and ASCON.
STROBE: Stake-based Threshold Random Beacons
We revisit decentralized random beacons with a focus on practical distributed applications. Decentralized random beacons (Beaver and So, Eurocrypt 1993) provide the functionality for $n$ parties to generate an unpredictable sequence of bits in a way that cannot be biased, which is useful for any decentralized protocol requiring trusted randomness.
Existing beacon constructions are highly inefficient in practical settings where protocol parties need to rejoin after crashes or disconnections, and more significantly where smart contracts may rely on arbitrary index points in high-volume streams. For this, we introduce a new notion of history-generating decentralized random beacons (HGDRBs).
Roughly, the history-generation property of HGDRBs allows for previous beacon outputs to be efficiently generated knowing only the current value and the public key. At application layers, history-generation supports registering a sparser set of on-chain values if desired, so that apps like lotteries can utilize on-chain values without incurring high-frequency costs, enjoying all the benefits of DRBs implemented off-chain or with decoupled, special-purpose chains. Unlike rollups, HG is tailored specifically to recovering and verifying pseudorandom bit sequences and thus enjoys unique optimizations investigated in this work.
We introduce STROBE: an efficient HGDRB construction which generalizes the original squaring-based RSA approach of Beaver and So. STROBE enjoys several useful properties that make it suited for practical applications that use beacons:
- history-generating: it can regenerate and verify high-throughput beacon streams, supporting sparse (thus cost-effective) ledger entries;
- concisely self-verifying: NIZK-free, with state and validation employing a single ring element;
- eco-friendly: stake-based rather than work-based;
- unbounded: refresh-free, addressing limitations of Beaver and So;
- delay-free: results are immediately available.
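The history-generation property listed above can be illustrated with a hedged toy squaring chain in the spirit of the Beaver-So RSA approach that STROBE generalizes (this is not the STROBE construction itself, and the modulus, seed, and chain length below are illustrative, insecure parameters):

    # Precompute a squaring chain and release it in reverse order as beacon outputs.
    N = 3233          # toy RSA-like modulus (not a secure parameter)
    L = 1000          # chain length

    chain = [123456 % N]
    for _ in range(L):
        chain.append(chain[-1] * chain[-1] % N)

    def output(t):
        """Beacon output at round t (outputs are released in reverse chain order)."""
        return chain[L - t]

    def regenerate(current, steps):
        """History generation: recover an output released `steps` rounds earlier
        from the current output alone, by repeated squaring mod N."""
        v = current
        for _ in range(steps):
            v = v * v % N
        return v

    assert regenerate(output(10), 3) == output(7)

Because any earlier output is just a few squarings of the current one, a verifier can regenerate and check a sparse on-chain record against a high-throughput off-chain stream, which is the cost saving the abstract describes.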
SecNDP: Secure Near-Data Processing with Untrusted Memory
Today's data-intensive applications increasingly suffer from significant performance bottlenecks due to the limited memory bandwidth of the classical von Neumann architecture. Near-Data Processing (NDP) has been proposed to perform computation near memory or data storage to reduce data movement for improving performance and energy consumption. However, the untrusted NDP processing units (PUs) bring in new threats to workloads that are private and sensitive, such as private database queries and private machine learning inferences. Meanwhile, most existing secure hardware designs do not consider off-chip components trustworthy. Once data leaves the processor, it must be protected, e.g., via block cipher encryption. Unfortunately, current encryption schemes do not support computation over encrypted data stored in memory or storage, hindering the adoption of NDP techniques for sensitive workloads.
In this paper, we propose SecNDP, a lightweight encryption and verification scheme for untrusted NDP devices to perform computation over ciphertext and verify the correctness of linear operations. Our encryption scheme leverages arithmetic secret sharing in secure Multi-Party Computation (MPC) to support operations over ciphertext, and uses counter-mode encryption to reduce the decryption latency. The security of the scheme is formally proven. Compared with a non-NDP baseline, secure computation with SecNDP significantly reduces the memory bandwidth usage while providing security guarantees. We evaluate SecNDP for two workloads of distinct memory access patterns. In the setting of eight NDP units, we show a speedup of up to 7.46x and energy savings of 18% over an unprotected non-NDP baseline, approaching the performance gain attained by native NDP without protection. Furthermore, SecNDP does not require any security assumption on NDP to hold, and thus uses the same threat model as existing secure processors. SecNDP can be implemented without changing the NDP protocols and their inherent hardware design.
Differential Cryptanalysis of WARP
WARP is an energy-efficient lightweight block cipher that is currently the smallest 128-bit block cipher in terms of hardware. It was proposed by Banik et al. in SAC 2020 as a lightweight replacement for AES-128 without changing the mode of operation. This paper proposes key-recovery attacks on WARP based on differential cryptanalysis in single and related-key settings. We searched for differential trails for up to 20 rounds of WARP, with the first 19 having optimal differential probabilities. We also found that the cipher has a strong differential effect, whereby 16 to 20-round differentials have substantially higher probabilities than their corresponding individual trails. A 23-round key-recovery attack was then realized using an 18-round differential distinguisher. Next, we formulated an automatic boomerang search using SMT that relies on the Feistel Boomerang Connectivity Table to identify valid switches. We designed the search as an add-on to the CryptoSMT tool, making it applicable to other Feistel-like ciphers such as TWINE and LBlock-s. For WARP, we found a 21-round boomerang distinguisher which was used in a 24-round rectangle attack. In the related-key setting, we describe a family of 2-round iterative differential trails, which we used in a practical related-key attack on the full 41-round WARP.
New Differential Cryptanalysis Results for the Lightweight Block Cipher BORON
BORON is a 64-bit lightweight block cipher based on the substitution-permutation network that supports an 80-bit (BORON-80) and a 128-bit (BORON-128) secret key. In this paper, we revisit the use of differential cryptanalysis on BORON in the single-key model. Using a SAT/SMT approach, we look for differentials that consist of multiple differential characteristics with the same input and output differences. Each characteristic that conforms to a given differential improves its overall probability. We also implemented the same search using Matsui's algorithm for verification and performance comparison purposes. We identified high-probability differentials which were then used in key recovery attacks against BORON-80/128. We first show that the previous differential cryptanalysis attack against 9 rounds of BORON was at most an 8.5-round attack due to the omission of the final block XOR layer. Then, we used 8-round differentials with probabilities of $2^{-58.156}$ and $2^{-62.415}$ in key recovery attacks against 9 and 10 rounds of BORON-80 and BORON-128 with time/data/memory complexities of $2^{63.63}/2^{62}/2^{55}$ and $2^{100.28}/2^{64}/2^{71}$, respectively. Our key recovery framework provides a more accurate estimate of the attack complexity as compared to previous work. The attacks proposed in this paper are the best differential attacks against BORON-80/128 in the single-key model to date.
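The elementary building block behind such differential searches is the difference distribution table (DDT) of the S-box; as a hedged illustration (using an arbitrary toy 4-bit permutation, not BORON's actual S-box), the DDT and the best one-round differential probability can be computed as follows:

    def ddt(sbox):
        """Difference distribution table: ddt[a][b] counts inputs x with
        S(x) ^ S(x ^ a) == b."""
        n = len(sbox)
        table = [[0] * n for _ in range(n)]
        for a in range(n):
            for x in range(n):
                table[a][sbox[x] ^ sbox[x ^ a]] += 1
        return table

    # Illustrative 4-bit S-box (a toy permutation, not BORON's actual S-box).
    SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

    T = ddt(SBOX)
    best = max(T[a][b] for a in range(1, 16) for b in range(16))
    print(best / 16)     # best non-trivial one-round differential probability

Chaining such one-round transitions gives a characteristic, and summing the probabilities of many characteristics with the same input and output differences gives the differential probabilities used in key-recovery attacks, which is exactly the clustering effect the SAT/SMT search above exploits.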
A Simple Deterministic Algorithm for Systems of Quadratic Polynomials over $\mathbb{F}_2$
This article discusses a simple deterministic algorithm for solving quadratic Boolean systems which is essentially a special case of more sophisticated methods. The main idea fits in a single sentence: guess enough variables so that the remaining quadratic equations can be solved by linearization (i.e. by considering each remaining monomial as an independent variable and solving the resulting linear system) and restart until the solution is found. Under strong heuristic assumptions, this finds all the solutions of $m$ quadratic polynomials in $n$ variables with $\mathcal{\tilde O}({2^{n-\sqrt{2m}}})$ operations. Although the best known algorithms require exponentially less time, the present technique has the advantage of being simpler to describe and easy to implement. In strong contrast with the state-of-the-art, it is also quite efficient in practice.
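As a hedged sketch of the guess-then-linearize idea (a toy re-implementation, not the authors' code; the polynomial representation as collections of monomial index tuples and all function names are illustrative), the routine below guesses k variables, linearizes the remaining quadratic monomials, solves the resulting linear system over GF(2), checks the candidate against the original system, and restarts otherwise:

    import itertools, random

    def eval_poly(poly, x):
        # A monomial is a tuple of variable indices; () denotes the constant 1.
        return sum(all(x[i] for i in mon) for mon in poly) & 1

    def solve_gf2(rows, ncols):
        # Gaussian elimination over GF(2); bit `ncols` of each row is the RHS.
        rows, pivots = rows[:], []
        for col in range(ncols):
            piv = next((r for r in range(len(pivots), len(rows))
                        if (rows[r] >> col) & 1), None)
            if piv is None:
                continue
            rows[len(pivots)], rows[piv] = rows[piv], rows[len(pivots)]
            for r in range(len(rows)):
                if r != len(pivots) and (rows[r] >> col) & 1:
                    rows[r] ^= rows[len(pivots)]
            pivots.append(col)
        if any(row == 1 << ncols for row in rows):      # 0 = 1: inconsistent guess
            return None
        sol = [0] * ncols
        for r, col in enumerate(pivots):
            sol[col] = (rows[r] >> ncols) & 1           # free variables set to 0
        return sol

    def guess_and_linearize(polys, n, k, trials=100000):
        """Guess k variables at random, linearize the rest, solve, check, restart."""
        for _ in range(trials):
            guessed = {i: random.randrange(2) for i in random.sample(range(n), k)}
            free = [i for i in range(n) if i not in guessed]
            monos = [(i,) for i in free] + list(itertools.combinations(free, 2))
            col = {m: j for j, m in enumerate(monos)}
            rows = []
            for poly in polys:
                row = 0
                for mon in poly:
                    if any(guessed.get(i) == 0 for i in mon):
                        continue                        # monomial vanishes
                    rem = tuple(sorted(i for i in mon if i not in guessed))
                    row ^= 1 << (col[rem] if rem else len(monos))
                rows.append(row)
            sol = solve_gf2(rows, len(monos))
            if sol is None:
                continue
            x = [0] * n
            for i, v in guessed.items():
                x[i] = v
            for i in free:
                x[i] = sol[col[(i,)]]
            if all(eval_poly(p, x) == 0 for p in polys):
                return x
        return None

    # Toy system: x0*x1 + x2 = 0, x0 + x1 + 1 = 0, x1*x2 + x0 = 0; solution (0,1,0).
    polys = [{(0, 1), (2,)}, {(0,), (1,), ()}, {(1, 2), (0,)}]
    print(guess_and_linearize(polys, n=3, k=1))

For the linearization to (generically) determine the remaining variables, the number of equations m should be at least roughly (n-k)^2/2, which is where the $2^{n-\sqrt{2m}}$ cost estimate in the abstract comes from.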
What is the funniest number in cryptography (Episode 2)? 0. The reason is that ∀x, x ∗ 0 = 0, i.e., the equation is always satisfied no matter what x is. We’ll use zero to attack zero-knowledge proof (ZKP). In particular, we’ll discuss a critical issue in a cutting-edge ZKP PLONK C++ implementation which allows an attacker to create a forged proof that all verifiers will accept. We’ll show how theory guides the attack’s direction. In practice, the attack works like a charm and we’ll show how the attack falls through a chain of perfectly aligned software cracks.
In the same codebase, there is an independent critical ECDSA bug where (r, s) = (0, 0) is a valid signature for arbitrary keys and messages, but we won’t discuss it further because it’s a known ECDSA attack vector in the Google Wycheproof project that I worked on a few years ago.
All bugs have been responsibly disclosed through the vendor’s bug bounty program with total reward ~ $15,000 (thank you).
Internet Security and Quantum Computing
The cryptographic algorithms that we rely on for Internet trust and security are based on the computational difficulty of solving particular mathematical problems. Sufficiently powerful quantum computers could solve those problems in a day or less, rendering their protection largely useless. When this hypothetical future is on the horizon, Internet software suppliers should undertake the massive project of changing the fundamental cryptographic algorithms to completely different kinds of computations. This paper calls attention to how today's algorithms could be vulnerable, to the factors that impede the realization of quantum computing, and to how technologists might measure the distance to the quantum horizon.
Does Fully Homomorphic Encryption Need Compute Acceleration?
The emergence of cloud-computing has raised important privacy questions about the data that users share with remote servers. While data in transit is protected using standard techniques like Transport Layer Security (TLS), most cloud providers have unrestricted plaintext access to user data at the endpoint. Fully Homomorphic Encryption (FHE) offers one solution to this problem by allowing for arbitrarily complex computations on encrypted data without ever needing to decrypt it. Unfortunately, all known implementations of FHE require the addition of noise during encryption which grows during computation. As a result, sustaining deep computations requires a periodic noise reduction step known as bootstrapping. The cost of the bootstrapping operation is one of the primary barriers to the widespread adoption of FHE. In this paper, we present an in-depth architectural analysis of the bootstrapping step in FHE. First, we observe that secure implementations of bootstrapping exhibit a low arithmetic intensity (<1 Op/byte), require large caches (>100 MB) and, as such, are heavily bound by the main memory bandwidth. Consequently, we demonstrate that existing workloads observe marginal performance gains from the design of bespoke high-throughput arithmetic units tailored to FHE. Secondly, we propose several cache-friendly algorithmic optimizations that improve the throughput in FHE bootstrapping by enabling up to 3.2× higher arithmetic intensity and 4.6× lower memory bandwidth. Our optimizations apply to a wide range of structurally similar computations such as private evaluation and training of machine learning models. Finally, we incorporate these optimizations into an architectural tool which, given a cache size, memory subsystem, the number of functional units and a desired security level, selects optimal cryptosystem parameters to maximize the bootstrapping throughput. Our optimized bootstrapping implementation represents a best-case scenario for compute acceleration of FHE. We show that despite these optimizations, bootstrapping (as well as other applications with similar computational structure) continues to remain bottlenecked by main memory bandwidth. We thus conclude that secure FHE implementations need to look beyond accelerated compute for further performance improvements and to that end, we propose new research directions to address the underlying memory bottleneck. In summary, our answer to the titular question is: yes, but only after addressing the memory bottleneck!
Synchronous Distributed Key Generation without Broadcasts
Distributed key generation (DKG) is an important building block in designing many efficient distributed protocols. In this work, we initiate the study of communication complexity and latency of distributed key generation protocols under a synchronous network in a point-to-point network. Our key result is the first synchronous DKG protocol for discrete log-based cryptosystems with $O(\kappa n^3)$ communication complexity ($\kappa$ denotes a security parameter) that tolerates $t < n/2$ Byzantine faults among $n$ parties. We show two variants of the protocol: a deterministic protocol with $O(t\Delta)$ latency and a randomized protocol with $O(\Delta)$ latency in expectation, where $\Delta$ denotes the bounded synchronous delay. In the process of achieving our results, we design (1) a gradecast protocol with optimal communication complexity of $O(\kappa n^2)$ for linear-sized inputs and latency of $O(\Delta)$, (2) a primitive called ``recoverable set of shares'' for ensuring recovery of shared secrets, (3) an oblivious leader election protocol with $O(\kappa n^3)$ communication and $O(\Delta)$ latency, and (4) a multi-valued validated Byzantine agreement (MVBA) protocol with $O(\kappa n^3)$ communication complexity for linear-sized inputs and $O(\Delta)$ latency in expectation. Each of these primitives may be of independent interest.
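As hedged background on the sharing step that discrete-log DKGs build on (illustrative parameters and function names; the paper's protocol additionally makes the sharing verifiable and avoids broadcasts), here is a minimal Shamir secret-sharing sketch over a toy prime field:

    import random

    P = 2**127 - 1   # toy prime field (illustrative, not a recommended parameter)

    def share(secret, t, n):
        """Shamir-share `secret` with threshold t among n parties (degree t-1 polynomial)."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at 0 over the prime field."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if j != i:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    s = 42
    assert reconstruct(share(s, t=3, n=5)[:3]) == s

In a Pedersen-style DKG, each party deals such a sharing of its own random value and the joint secret key is the sum of the dealt secrets; the communication and latency costs analyzed in the paper come from doing this verifiably, for all n parties, without a broadcast channel.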
McEliece needs a Break -- Solving McEliece-1284 and Quasi-Cyclic-2918 with Modern ISD
With the recent shift to post-quantum algorithms it becomes increasingly important to provide precise bit-security estimates for code-based cryptography such as McEliece and quasi-cyclic schemes like BIKE and HQC. While there has been significant progress on information set decoding (ISD) algorithms within the last decade, it is still unclear to what extent this affects current cryptographic security estimates.
We provide the first concrete implementations for representation-based ISD, such as May-Meurer-Thomae (MMT) or Becker-Joux-May-Meurer (BJMM), that are parameter-optimized for the McEliece and quasi-cyclic setting. Although MMT and BJMM consume more memory than naive ISD algorithms like Prange, we demonstrate that these algorithms lead to significant speedups for practical cryptanalysis on medium-sized instances (around 60 bits). More concretely, we provide data for the record computations of McEliece-1223 and McEliece-1284 (old record: 1161), and for the quasi-cyclic setting up to code length 2918 (before: 1938).
Based on our record computations we extrapolate to the bit-security level of the proposed BIKE, HQC and McEliece parameters in NIST's standardization process.
For BIKE/HQC, we also show how to transfer the Decoding-One-Out-of-Many (DOOM) technique to MMT/BJMM. Although we achieve significant DOOM speedups, our estimates confirm the bit-security levels of BIKE and HQC.
For the proposed McEliece round-3 parameter sets of 192 and 256 bit, however, our extrapolation indicates a security level overestimate by roughly 20 and 10 bits, respectively, i.e., the high-security McEliece instantiations may be a bit less secure than desired.
Zero Knowledge Proofs towards Verifiable Decentralized AI Pipelines
We are witnessing the emergence of decentralized AI pipelines wherein different organisations are involved in the different steps of the pipeline. In this paper, we introduce a comprehensive framework for verifiable provenance for decentralized AI pipelines with support for confidentiality concerns of the owners of data and model assets. Although some of the past works address different aspects of provenance, verifiability, and confidentiality, none of them address all the aspects under one uniform framework. We present an efficient and scalable approach for verifiable provenance for decentralized AI pipelines with support for confidentiality based on zero-knowledge proofs (ZKPs). Our work is of independent interest to the fields of verifiable computation (VC) and verifiable model inference. We present methods for basic computation primitives like read only memory access and operations on datasets that are an order of magnitude better than the state of the art. In the case of verifiable model inference, we again improve the state of the art for decision tree inference by an order of magnitude. We present an extensive experimental evaluation of our system.
Cryptanalysis of RSA Variants with Primes Sharing Most Significant Bits
We consider four variants of the RSA cryptosystem with an RSA modulus $N=pq$ where the public exponent $e$ and the private exponent $d$ satisfy an equation of the form $ed-k\left(p^2-1\right)\left(q^2-1\right)=1$. We show that, if the prime numbers $p$ and $q$ share most significant bits, that is, if the prime difference $|p-q|$ is sufficiently small, then one can solve the equation for larger values of $d$, and factor the RSA modulus, which makes the systems insecure.
Secure Sampling of Constant-Weight Words – Application to BIKE
The pseudorandom sampling of constant weight words, as it is currently implemented in cryptographic schemes like BIKE or HQC, is prone to the leakage of information on the seed being used for the pseudorandom number generation. This creates a vulnerability when the semantic security conversion requires a deterministic re-encryption. This observation was first made in [HLS21] about HQC and a timing attack was presented to recover the secret key. As suggested in [HLS21] a similar attack applies to BIKE and instances of such an attack were presented in an earlier version of this work [Sen21] and independently in [GHJ+22].
The timing attack stems from the variation of the amount of pseudorandom data to draw and process for sampling uniformly a constant weight word. We give here the exact distribution of this amount for BIKE. This will allow us to estimate precisely the cost of the natural countermeasure, which consists of always drawing the same (large enough) amount of randomness for the sampler to terminate with probability overwhelmingly close to one.
The main contribution of this work is to suggest a new approach for fixing the issue. It is first remarked that, contrary to what is done currently, the sampling of constant weight words does not need to produce a uniformly distributed output. If the distribution is close to uniform in the appropriate metric, the impact on security is negligible. Also, a new variant of the Fisher-Yates shuffle is proposed which is (1) very well suited for secure implementations against timing and cache attacks, and (2) produces constant weight words with a distribution close enough to uniform.
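As a hedged illustration of the kind of sampler under discussion (a plain partial Fisher-Yates shuffle, not the paper's hardened variant; the function name and use of Python's secrets module are illustrative), a constant-weight word can be drawn as follows:

    import secrets

    def constant_weight_word(n, w):
        """Sample a uniformly random length-n binary word of Hamming weight w
        via a partial Fisher-Yates shuffle of the index set."""
        idx = list(range(n))
        for i in range(w):
            j = i + secrets.randbelow(n - i)      # uniform in [i, n-1]
            idx[i], idx[j] = idx[j], idx[i]
        word = [0] * n
        for i in idx[:w]:
            word[i] = 1
        return word

The paper's variant modifies this shuffle so that it can be implemented with data-independent timing and memory access, at the cost of an output distribution that is only close to uniform, which, as argued above, has negligible impact on security.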
Exponential Increment of RSA Attack Range via Lattice Based Cryptanalysis
The RSA cryptosystem comprises two important features that are needed for the encryption process, namely the public parameter $e$ and the modulus $N$. In 1999, a cryptanalysis of RSA described by Boneh and Durfee focused on the key equation $ed-k\phi(N)=1$ with $e$ of the same magnitude as $N$. Their method was applicable for the case of $d<N^{0.292}$ via Coppersmith's technique. In 2012, Kumar et al. presented an improved Boneh-Durfee attack using the same equation which is valid for any $e$ of arbitrary size. In this paper, we present an exponential increment of the two former attacks using the variant equation $ea-\phi(N)b=c$. The new attack breaks the RSA system when $a$ and $|c|$ are suitably small integers. Moreover, the new attack shows that the Boneh-Durfee attack and the attack of Kumar et al. can be derived using a single attack. We also show that our bound improves the bounds of Ariffin et al. and Bunder and Tonien.
Increment of Insecure RSA Private Exponent Bound Through Perfect Square RSA Diophantine Parameters Cryptanalysis
The public parameters of the RSA cryptosystem are represented by the pair of integers $N$ and $e$. In this work, we first show that if $e$ satisfies the Diophantine equation of the form $ex^2-\phi(N)y^2=z$ for appropriate values of $x, y$ and $z$ under certain specified conditions, then one is able to factor $N$. That is, the unknown $\frac{y}{x}$ can be found amongst the convergents of $\frac{\sqrt{e}}{\sqrt{N}}$ via the continued fractions algorithm. Consequently, Coppersmith's theorem is applied to solve for the prime factors $p$ and $q$ in polynomial time. We also report a second weakness that enables us to factor $k$ instances of RSA moduli simultaneously from the given $(N_i,e_i)$ for $i=1,2,\cdots,k$ and a fixed $x$ that fulfills the Diophantine equation $e_{i}x^{2}-y_{i}^{2}\phi(N_{i})=z_{i}$. This weakness is identified by solving the simultaneous Diophantine approximations using the lattice basis reduction technique. We note that this work extends the bound of insecure RSA decryption exponents.
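Wiener-style attacks of this kind scan the continued-fraction convergents of a public quantity (here a sufficiently precise rational approximation of $\sqrt{e}/\sqrt{N}$) for the unknown ratio $y/x$. As a hedged illustration, the following helper enumerates convergents of a rational number (the function name and example are illustrative, not taken from the paper):

    from fractions import Fraction

    def convergents(x, depth=30):
        """Yield successive continued-fraction convergents p/q of a rational x > 0."""
        a, frac = [], Fraction(x)
        for _ in range(depth):
            q = frac.numerator // frac.denominator
            a.append(q)
            num, den = 1, 0
            for ai in reversed(a):          # rebuild p_k/q_k from the partial quotients
                num, den = ai * num + den, num
            yield Fraction(num, den)
            frac -= q
            if frac == 0:
                break
            frac = 1 / frac

    # Example: the convergents of 649/200 are 3, 13/4, 159/49, 649/200.
    print(list(convergents(Fraction(649, 200))))

In the attack, each convergent is tested as a candidate for $y/x$; once the correct one is found, Coppersmith's theorem recovers the prime factors as described above.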
SoK: Mitigation of Front-running in Decentralized Finance
Front-running is the malicious, and often illegal, act of both manipulating the order of pending trades and injecting additional trades to make a profit at the cost of other users. In decentralized finance (DeFi), front-running strategies exploit both public knowledge of user trades from transactions pending on the network and the miner's ability to determine the final transaction order. Given the financial loss and increased transaction load resulting from adversarial front-running in decentralized finance, novel cryptographic protocols have been proposed to mitigate such attacks in the permission-less blockchain setting. We systematize and discuss the state-of-the-art of front-running mitigation in decentralized finance, and illustrate remaining attacks and open challenges.
A PKI-based Framework for Establishing Efficient MPC Channels
The Transport Layer Security (TLS) protocol is a fundamental building block for ensuring security on the Internet. It provides an easy-to-use framework for establishing an authenticated and secure channel between two parties that have never physically met. Nevertheless, TLS only provides a simple cryptographic functionality compared to more advanced protocols such as protocols for secure multiparty computation (MPC).
In this work, we provide a framework for efficiently establishing channels for MPC over the Internet. We focus on MPC protocols in the oblivious transfer (OT) hybrid model such that it is sufficient to establish OT correlations for such a channel. We revisit and combine different notions of UC security proposed in both the MPC and authenticated key exchange settings. Through this work, we show how an OT protocol can be composed with a secure authenticator to ensure the authenticity of messages sent during the OT.
In addition, we adapt and analyse non-interactive OTs based on dense key encapsulation mechanisms (KEMs) in the random oracle model, where the first message, i.e. public key, can be reused. These KEMs can be instantiated based on CDH, RSA and LWE and after a performance and security evaluation, it turns out that the resulting OT protocols are very competitive with the state of the art and are able to leverage existing PKIs.
A Successful Subfield Lattice Attack on a Fully Homomorphic Encryption Scheme
We present the application of a known subfield lattice attack on a fully homomorphic encryption scheme based on NTRU. We show that the scheme is vulnerable to the attack due to a particular parameter having to satisfy a derived lower bound. We also show that, due to the structure of the scheme, the attack is successful in all practical instantiations of the scheme.
A note on the QFT randomness spectral test a new approach of DST
Quantum computers provide a new way of solving problems, even in cryptography, in which digital signatures play an important role. In this paper, we describe a comparison between the spectral test in classical mode and quantum mode through the Fourier transform, and compare the results in the two cases. Applications of the proposed techniques are from the field of statistical testing of pseudorandom bit generators used for cryptographic applications. The proposed statistical test is an extension of the Discrete Fourier Transform statistical test proposed in NIST SP 800-22.
On the IND-CCA1 Security of FHE Schemes
Fully homomorphic encryption (FHE) is a powerful tool in cryptography that allows one to perform arbitrary computations on encrypted material without having to decrypt it first. There are numerous FHE schemes, all of which are expanded from somewhat homomorphic encryption (SHE) schemes, and some of which are considered viable in practice. However, while these FHE schemes are semantically (IND-CPA) secure, the question of their IND-CCA1 security is much less studied. In this paper, we group SHE schemes into broad categories based on their similarities and underlying hardness problems. For each category, we show that the SHE schemes are susceptible to either known adaptive key recovery attacks, a natural extension of known attacks, or our proposed attacks. Finally, we discuss the known techniques to achieve IND-CCA1 secure FHE and SHE schemes.
On the Short Principal Ideal Problem over some real Kummer fields
Several cryptosystems using structured lattices have been believed to be quantum resistant. Their
security can be linked to the hardness of solving the Shortest Vector Problem over module or ideal lattices. During
the past few years it has been shown that the related problem of finding a short generator of a principal ideal
can be solved in quantum polynomial time over cyclotomic fields, and classical polynomial time over a range of
multiquadratic and multicubic fields. Hence, it is important to study as many as possible other number fields, to
improve our knowledge of the aformentioned problems. In this paper we generalise the work done over multiquadratic
and multicubic fields to a larger range of real Kummer extensions of prime exponent p. Moreover, we extend the
analysis by studying the Log-unit lattice over these number fields, in comparison to already studied fields.
Roulette: A Diverse Family of Feasible Fault Attacks on Masked Kyber
At Indocrypt 2021, Hermelink, Pessl, and Pöppelmann presented a fault attack against Kyber in which a system of linear inequalities over the private key is generated and solved. The attack requires a laser and is, understandably, demonstrated with simulations—not actual equipment. We facilitate and diversify the attack in four ways, thereby admitting cheaper and more forgiving fault-injection setups. Firstly, the attack surface is enlarged: originally, the two input operands of the ciphertext comparison are covered, and we additionally cover re-encryption modules such as binomial sampling and butterflies in the last layer of the inverse number-theoretic transform (INTT). This extra surface also allows an attacker to bypass the custom countermeasure that was proposed in the Indocrypt paper. Secondly, the fault model is relaxed: originally, precise bit flips are required, and we additionally support set-to-0 faults, random faults, arbitrary bit flips, and instruction skips. Thirdly, masking and blinding methods that randomize intermediate variables kindly help our attack, whereas the IndoCrypt attack is like most other fault attacks either hindered or unaltered by countermeasures against passive side-channel analysis (SCA). Randomization helps because we randomly fault intermediate prime-field elements until a desired set of values is hit. If these prime-field elements are represented on a circle, which is a common visualization, our attack is analogous to spinning a roulette wheel until the ball lands in a desired set of pockets. Hence, the nickname. Fourthly, we accelerate and improve the error tolerance of solving the system of linear inequalities: run times of roughly 100 minutes are reduced to roughly one minute, and inequality error rates of roughly 1% are relaxed to roughly 25%. Benefiting from the four advances above, we use a reasonably priced ChipWhisperer board to break a masked implementation of Kyber running on an ARM Cortex-M4 through clock glitching.
Alpha-Rays: Key Extraction Attacks on Threshold ECDSA Implementations
In this paper we provide technical details on two new attack vectors, relevant to implementations of [GG18] and [GG20] threshold ECDSA protocols. Both attacks lead to a complete secret key extraction by exploiting different parts of the Multiplicative-to-Additive (MtA) sub-protocol the parties run during signing. Our first attack applies to the setting of ”fast” MtA, which runs the protocol with no range proofs. We leverage a powerful oracle, much stronger than originally anticipated in [GG18], to reveal a part of the secret key with each signature we run. The number of required signatures depends on the implementation under attack and the number of parties controlled by the attacker. Our proof of concept demonstrates a full key extraction by a single malicious party using eight signatures. Our second attack deals with the more common setting of “full” MtA, that is, including ZK proofs. The only requirement for mounting a successful attack is to use a small Paillier encryption key. The key size check was not specified in the protocol and therefore missing from most existing threshold ECDSA implementations, making them vulnerable. As we show, choosing a small key completely eliminates a specific hiding property in one of the values sent from the victim to the attacker during one of ZK proofs. This allows a single malicious party to extract the full secret key after a single valid signature. We provide a proof of concept for this attack as well.
Unicity distance of the Zodiac-340 cipher
In December 2020, David Oranchak, Jarl Van Eycke, and Sam Blake solved a 51-year-old mystery: the Zodiac cipher of 340 symbols. Blake (2021) explains their solution. The correctness of their solution has not been seriously doubted, and here we give a further argument in its favor: the unicity distance of the cipher's system is at most 152.
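As a rough orientation, Shannon's unicity distance for a cipher system is commonly estimated as
$$ U \;\approx\; \frac{H(K)}{D}, \qquad D \;=\; \log_2 |\Sigma| \;-\; H_{\mathrm{lang}}, $$
where $H(K)$ is the entropy of the key space, $|\Sigma|$ is the plaintext alphabet size, and $H_{\mathrm{lang}}$ is the per-character entropy of the plaintext language, so that $D$ is the language's per-character redundancy. A bound such as 152 symbols would follow from instantiating $H(K)$ and $D$ for the Z340's homophonic system; the particular values depend on how the homophonic key space is modelled.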
Last updated: 2021-12-28
Code-Based Non-Interactive Key Exchange Can Be Made
Code-based cryptography plays an important role in post-quantum cryptography. While many primitives such as public-key encryption and digital signatures have been constructed from codes, there is no non-interactive code-based key exchange protocol. In this work we solve this open problem of constructing a non-interactive key exchange protocol from coding theory. To prove the security of the protocol, we propose a new hard problem called the sub-LE problem, which is a sub-problem of the code equivalence problem. We prove its hardness by reducing the well-known linear code equivalence problem to the sub-LE problem. The new hard problem enjoys several useful properties, notably partial commutativity, which enables not only the construction of a key exchange protocol but also other primitives, such as a new public-key encryption scheme.
Succinct Publicly-Certifiable Proofs (or: Can a Blockchain Verify a Designated-Verifier Proof?)
We study zero-knowledge arguments where proofs are: of knowledge, short, publicly-verifiable and produced without interaction. While zkSNARKs satisfy these requirements, we build such proofs in a constrained theoretical setting: in the standard model---i.e., without a random oracle---and without assuming publicly-verifiable SNARKs (or even NIZKs, for some of our constructions) or primitives currently known to imply them.
We model and construct a new primitive, SPuC (Succinct Publicly-Certifiable System), where: a party can prove knowledge of a witness $w$ by publishing a proof $\pi_0$; the latter can then be certified non-interactively by a committee sharing a secret; any party in the system can now verify the proof through its certificates; the total communication complexity should be sublinear in $|w|$. We construct SPuCs generally from (leveled) Threshold FHE, homomorphic signatures and linear-only encryption, all instantiatable from lattices and thus plausibly quantum-resistant. We also construct them in the two-party case replacing TFHE with the simpler primitive of homomorphic secret-sharing.
Our model has practical applications in blockchains and in other protocols where there exist committees sharing a secret and it is necessary for parties to prove knowledge of a solution to some puzzle.
We show that one can construct a version of SPuCs with robust proactive security from similar assumptions. In a proactively secure model the committee reshares its secret from time to time. Such a model is robust if the committee members can prove they performed this resharing step correctly. Along the way to our goal we define and build Proactive Universal Thresholdizers, a proactive version of the Universal Thresholdizer defined in Boneh et al. [Crypto 2018].
Richelot Isogenies, Pairings on Squared Kummer Surfaces and Applications
Isogeny-based cryptosystems from elliptic curves have been well studied for several years, but there are fewer works to date about isogenies on hyperelliptic curves. In this work, we take a first step towards exploring isogenies and pairings on generic squared Kummer surfaces, which are believed to be a preferable type of Kummer surface. The core of our work is the Richelot isogeny onto a squared Kummer surface, for which we give two kernels together with their duals; a chain of Richelot isogenies can then be constructed in a simple way. Besides, using the coordinate system on the Kummer surface, we modify the squared pairings to obtain a self-contained pairing, named the squared symmetric pairing, which can be evaluated with arithmetic on the same squared Kummer surface. Finally, as applications, we present a Verifiable Delay Function and a Delay Encryption scheme on squared Kummer surfaces.
A Note on the Post-Quantum Security of (Ring) Signatures
This work revisits the security of classical signatures and ring signatures in a quantum world. For (ordinary) signatures, we focus on the arguably preferable security notion of blind-unforgeability recently proposed by Alagic et al. (Eurocrypt'20). We present two short signature schemes achieving this notion: one is in the quantum random oracle model, assuming quantum hardness of SIS; and the other is in the plain model, assuming quantum hardness of LWE with super-polynomial modulus. Prior to this work, the only known blind-unforgeable schemes are Lamport's one-time signature and the Winternitz one-time signature, and both of them are in the quantum random oracle model.
For ring signatures, the recent work by Chatterjee et al. (Crypto'21) proposes a definition that tries to capture adversaries with quantum access to the signer. However, it is unclear whether their definition, when restricted to the classical world, is as strong as the standard security notion for ring signatures. They also present a construction that only partially achieves (even) this seemingly weak definition, in the sense that the adversary can only conduct superposition attacks over the messages, but not the rings. We propose a new definition that does not suffer from the above issue. Our definition is an analog of blind-unforgeability in the ring signature setting. Moreover, assuming the quantum hardness of LWE, we construct a compiler converting any blind-unforgeable (ordinary) signature scheme into a ring signature scheme satisfying our definition.
High-order Polynomial Comparison and Masking Lattice-based Encryption
The main protection against side-channel attacks consists in computing every function with multiple shares via the masking countermeasure. For IND-CCA secure lattice-based encryption schemes, the masking of the decryption algorithm requires the high-order computation of a polynomial comparison. In this paper, we describe and evaluate a number of different techniques for such high-order comparison, always with a security proof in the ISW probing model. As an application, we describe the full high-order masking of the NIST finalists Kyber and Saber, with a concrete implementation.
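As background for the countermeasure that the paper masks at high order, the Python sketch below shows plain arithmetic masking modulo $q$: a secret value is split into $n$ shares, and linear operations are computed share-wise without ever recombining. The modulus Q and the share count are illustrative choices; the paper's actual contribution, securely comparing masked polynomials with proofs in the ISW probing model, requires dedicated gadgets that are not shown here.

import random

Q = 3329  # modulus of the underlying lattice scheme (Kyber's prime, for concreteness)

def share(x, n):
    """Split x into n arithmetic shares modulo Q (the masking countermeasure)."""
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def unshare(shares):
    return sum(shares) % Q

def add_shared(a_shares, b_shares):
    """Linear operations can be computed share-wise, never touching the secret."""
    return [(a + b) % Q for a, b in zip(a_shares, b_shares)]

x, y = 1234, 2718
xs, ys = share(x, 3), share(y, 3)          # second-order masking: 3 shares
assert unshare(add_shared(xs, ys)) == (x + y) % Q

Non-linear steps, such as the comparison of two masked polynomials during IND-CCA decryption, are exactly what cannot be done share-wise this naively and what the paper's high-order comparison techniques address.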
Last updated: 2021-12-15
PEPFL: A Framework for a Practical and Efficient Privacy-Preserving Federated Learning
As an emerging joint learning model, federated deep learning is a promising way to combine model parameters of different users for training and inference without collecting the users' original data. However, a practical and efficient solution has not been established in previous work, due to the absence of efficient matrix computation and cryptography schemes in the privacy-preserving federated learning model, especially in partially homomorphic cryptosystems. In this paper, we propose a practical and efficient privacy-preserving federated learning framework (PEPFL). First, we present a lifted distributed ElGamal cryptosystem that can be applied to federated learning and solves the multi-key problem in federated learning. Secondly, we develop a practical partially single instruction multiple data (PSIMD) parallelism scheme that can encode a plaintext matrix into a single plaintext for encryption, improving effectiveness and reducing communication cost in partially homomorphic cryptosystems. In addition, a novel privacy-preserving federated learning framework is designed by using momentum gradient descent (MGD) with a convolutional neural network (CNN) and the designed cryptosystem. Finally, we evaluate the security and performance of PEPFL. The experimental results demonstrate that the scheme is practicable, effective, and secure, with low communication and computational costs.
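The idea of packing many matrix entries into a single plaintext can be illustrated with generic fixed-width slot packing, as in the Python sketch below. The slot width SLOT_BITS and the use of plain integer addition in place of a homomorphic addition on ciphertexts are assumptions made only for illustration, since the exact PSIMD encoding is not spelled out in the abstract.

SLOT_BITS = 32   # width reserved per matrix entry (must exceed any intermediate sum)

def pack(values):
    """Encode a list of small non-negative integers into one big-integer plaintext."""
    out = 0
    for i, v in enumerate(values):
        assert 0 <= v < (1 << SLOT_BITS)
        out |= v << (i * SLOT_BITS)
    return out

def unpack(packed, count):
    mask = (1 << SLOT_BITS) - 1
    return [(packed >> (i * SLOT_BITS)) & mask for i in range(count)]

a = [3, 1, 4, 1, 5]
b = [2, 7, 1, 8, 2]
# adding the packed integers adds every slot at once, as long as no slot overflows;
# under an additively homomorphic scheme the same addition can be done on ciphertexts
assert unpack(pack(a) + pack(b), 5) == [x + y for x, y in zip(a, b)]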
Last updated: 2021-12-15
Privacy-preserving Federated Learning with Lightweight and Heterogeneity in IoT
Federated learning (FL), as an emerging distributed learning framework, can combine training from different users without collecting the users' original data, protecting privacy to a certain extent. However, there are no efficient privacy-protection technologies applicable to IoT. One challenge in IoT is to reduce the client-server communication cost and to handle communication failures. Another challenge is how to utilize high-quality data to guarantee training performance. To address these challenges, we present a privacy-preserving and optimal-fraction FL framework based on the elliptic curve cryptosystem (ECC) and the k-nearest neighbor (KNN) method in an ad-hoc network. Firstly, we propose an improved multiple-key EC-ElGamal cryptosystem (MEEC), which reduces computation overhead and improves encryption efficiency owing to the lightweight EC-ElGamal cryptosystem with shorter keys and ciphertexts. Secondly, we propose the first ad-hoc FL framework, with an ad-hoc quit-and-join algorithm that handles communication failures and guarantees the optimal computation power. Thirdly, we propose a Euclidean fraction scheme based on an improved KNN method, which can quickly select the optimal training data from heterogeneous data, preventing low-quality or malicious data from joining the training. Finally, security analysis and performance evaluation have been performed. Compared with existing solutions, our scheme is secure, practicable, and efficient, with low communication and computational costs in IoT.
Universal Atomic Swaps: Secure Exchange of Coins Across All Blockchains
Trading goods lies at the backbone of the modern economy, and the recent advent of cryptocurrencies has opened the door for trading decentralized (digital) assets: a large fraction of the value of cryptocurrencies comes from inter-currency exchange and trading, which has arguably been the most successful application of decentralized money. The security issues observed with centralized, custodial cryptocurrency exchanges have motivated the design of atomic swaps, a protocol for coin exchanges between any two users. Yet, somewhat surprisingly, no atomic swap protocol exists that simultaneously satisfies the following simple but desired properties: (i) non-custodial, departing from a trusted third party that holds the users' coins during the exchange; (ii) universal, that is, compatible with all (current and future) cryptocurrencies; (iii) multi-asset, supporting the exchange of multiple coins in a single atomic swap.
From a theoretical standpoint, in this work we show a generic protocol to securely swap $n$ coins from any (possibly multiple) currencies for $\tilde{n}$ coins of any other currencies, for any $n$ and $\tilde{n}$. We do not require any custom scripting language supported by the corresponding blockchains, besides the bare minimum ability to verify signatures on transactions. For the special case when the blockchains use ECDSA or Schnorr signatures, we design a practically efficient protocol based on adaptor signatures and time-lock puzzles. As a byproduct of our approach, atomic swap transactions no longer include custom scripts and are identical to standard one-to-one transactions. We also show that our protocol naturally generalizes to any cycle of users, i.e., atomic swaps with more than two participants. To demonstrate the practicality of our approach, we have evaluated a prototype implementation of our protocol for Schnorr/ECDSA signatures and observed that an atomic swap completes in under one second on commodity machines. Even on blockchains with expressive smart contract support (e.g., Ethereum), our approach reduces the on-chain cost both in terms of transaction size and gas cost.
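The Python sketch below shows only the adaptor-signature mechanics that such a protocol can build on: pre-signing with respect to an adaptor point, completing the signature with the adaptor secret, and extracting that secret from the two signatures. The Schnorr group parameters (p, q, g) are deliberately tiny and insecure, the hash is a truncated SHA-256, and none of the time-lock or multi-asset machinery of the actual protocol is modelled.

import hashlib, random

# Toy Schnorr group: p = 2q + 1 with p, q prime; g generates the order-q subgroup.
p, q, g = 2879, 1439, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)

def pre_sign(x, P, m, T):
    """Adaptor (pre-)signature with respect to the adaptor point T = g^t."""
    k = random.randrange(1, q)
    R = pow(g, k, p)
    c = H((R * T) % p, P, m)
    return R, (k + c * x) % q

def pre_verify(P, m, T, R, s_pre):
    c = H((R * T) % p, P, m)
    return pow(g, s_pre, p) == (R * pow(P, c, p)) % p

def adapt(s_pre, t):
    return (s_pre + t) % q             # completing the signature uses knowledge of t

def verify(P, m, R_full, s):           # ordinary Schnorr verification
    c = H(R_full, P, m)
    return pow(g, s, p) == (R_full * pow(P, c, p)) % p

def extract(s, s_pre):
    return (s - s_pre) % q             # anyone seeing both signatures learns t

x, P = keygen()
t = random.randrange(1, q); T = pow(g, t, p)
m = "swap tx"
R, s_pre = pre_sign(x, P, m, T)
assert pre_verify(P, m, T, R, s_pre)
s = adapt(s_pre, t)
assert verify(P, m, (R * T) % p, s)
assert extract(s, s_pre) == t

Atomicity comes from exactly this extraction step: publishing a completed signature on one chain necessarily reveals the secret t needed to complete the counterparty's signature on the other chain.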
Solving degree, last fall degree, and related invariants
In this paper we study and relate several invariants connected to the solving degree of a polynomial system. This provides a rigorous framework for estimating the complexity of solving a system of polynomial equations via Groebner basis methods. Our main results include a connection between the solving degree and the last fall degree, and one between the degree of regularity and the Castelnuovo-Mumford regularity.
Factoring Primes to Factor Moduli: Backdooring and Distributed Generation of Semiprimes
We describe a technique to backdoor a prime factor of a composite odd integer $N$, so that an attacker who knows a possibly secret factor base $\mathcal{B}$ can efficiently retrieve it from $N$. The method builds upon Complex Multiplication theory for elliptic curves, generating primes $p$ associated with elliptic curves over $\mathbb{F}_p$ of $\mathcal{B}$-smooth order. When such a prime $p$ divides an integer $N$, the latter can be efficiently factored using a generalization of Lenstra's Factorization Method over rings larger than $\mathbb{Z}_N$, with no knowledge other than $N$ and $\mathcal{B}$.
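A closely related, but much simpler, multiplicative-group analogue of this idea can be sketched in a few lines of Python: instead of curves with $\mathcal{B}$-smooth order, the backdoored prime $p$ is chosen so that $p-1$ is $\mathcal{B}$-smooth, and the factor is then recovered from $N$ and $\mathcal{B}$ alone with Pollard's $p-1$ method. The factor base, bit lengths, and exponent bound below are arbitrary toy choices; this is not the paper's CM-based construction.

import math, random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % sp == 0:
            return n == sp
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

FACTOR_BASE = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def backdoored_prime(bits=48):
    # sample p so that p - 1 factors entirely over FACTOR_BASE ("smooth order")
    while True:
        m = 2
        while m.bit_length() < bits:
            m *= random.choice(FACTOR_BASE)
        if is_probable_prime(m + 1):
            return m + 1

def honest_prime(bits=48):
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n

def recover_backdoored_factor(N, exponent_bound=64):
    # knowing only N and FACTOR_BASE, raise a base to a huge smooth exponent;
    # the result is 1 modulo the backdoored prime, so the gcd exposes it
    a = 2
    for sp in FACTOR_BASE:
        a = pow(a, sp ** exponent_bound, N)
    return math.gcd(a - 1, N)

p, q = backdoored_prime(), honest_prime()
N = p * q
print("recovered backdoored prime from N alone:", recover_backdoored_factor(N) == p)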
We then formalize semiprimality certificates that, based on a result by Goldwasser and Kilian, allow one to prove the semiprimality of an integer without revealing any of its factors. We show how our prime generation procedure can be used to efficiently produce semiprimality certificates, ultimately allowing us to sketch a multi-party distributed protocol to generate semiprimes with unknown factorisation, which is particularly relevant in the setting of distributed RSA modulus generation.
We provide and discuss implementations of all proposed protocols, and we address the security of semiprimality certificates by showing that semiprimes generated with our methods are at least as secure as random semiprimes of the same size.
Polynomial XL: A Variant of the XL Algorithm Using Macaulay Matrices over Polynomial Rings
Solving a system of $m$ multivariate quadratic equations in $n$ variables over finite fields (the MQ problem) is one of the important problems in theoretical computer science. The XL algorithm (XL for short) is a major approach for solving the MQ problem by linearization over a coefficient field. Furthermore, the hybrid approach with XL (h-XL) is a variant of XL that guesses some variables beforehand. In this paper, we present a variant of h-XL, which we call the polynomial XL (PXL). In PXL, the $n$ variables are divided into $k$ variables to be fixed and the remaining $n-k$ ``main variables'', and we generate a Macaulay matrix with respect to the $n-k$ main variables over a polynomial ring in the $k$ (sub-)variables. By eliminating some columns of the Macaulay matrix over the polynomial ring before guessing the $k$ variables, the number of operations required for each guessed value can be reduced compared with h-XL. Our complexity analysis of PXL (under some practical assumptions and heuristics) gives a new theoretical bound, and it indicates that PXL could in theory be more efficient than other algorithms on random systems with $n=m$, which is the case for general multivariate signatures. For example, for systems over the finite field with $2^8$ elements with $n=m=80$, the numbers of operations deduced from the theoretical bounds of the hybrid approaches with XL and Wiedemann XL, Crossbred, and PXL with optimal $k$ are estimated as $2^{252}$, $2^{234}$, $2^{237}$, and $2^{220}$, respectively.
An Optimized Quantum Implementation of ISD on Scalable Quantum Resources
The security of code-based constructions is usually assessed by Information Set Decoding (ISD) algorithms. In the quantum setting, amplitude amplification yields an asymptotic square-root gain over the classical analogue. However, it is still unclear whether a real quantum circuit would yield actual improvements or suffer an enormous overhead due to its implementation. This leads to different treatments of these quantum attacks in the security analysis of code-based proposals. In this work we clarify this doubt by giving the first quantum circuit design of the fully-fledged ISD procedure, an implementation in the quantum simulation library Qibo, and precise estimates of its complexities. We show that, contrary to common belief, Prange's ISD algorithm can be implemented rather efficiently on a quantum computer, namely with only a logarithmic overhead in circuit depth compared to a classical implementation.
As another major contribution, we leverage the idea of classical co-processors to design hybrid classical-quantum trade-offs that allow tailoring the number of qubits to any available amount while still providing quantum speedups. Interestingly, when constraining the width of the circuit instead of its depth, we are able to overcome previous optimality results on constrained quantum search.
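For reference, the classical procedure whose quantum circuit the paper designs is short enough to state in full. The Python sketch below implements textbook Prange ISD on a toy random code, repeatedly guessing a set of r positions that contains the error's support; the parameters n, r, and w are chosen only for illustration and have nothing to do with cryptographic sizes.

import random

def solve_gf2(A, b):
    """Solve A x = b over GF(2) (A square, lists of 0/1); return x, or None if singular."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def prange(H, s, w, max_iters=100000):
    """Classical Prange ISD: find e with H e = s and weight(e) <= w."""
    r, n = len(H), len(H[0])
    for _ in range(max_iters):
        J = random.sample(range(n), r)           # guess: error is zero outside J
        A = [[H[i][j] for j in J] for i in range(r)]
        x = solve_gf2(A, s)
        if x is None or sum(x) > w:
            continue
        e = [0] * n
        for val, j in zip(x, J):
            e[j] = val
        return e
    return None

# toy instance: random parity-check matrix, planted error of weight w
n, r, w = 24, 12, 3
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(r)]
e_true = [0] * n
for j in random.sample(range(n), w):
    e_true[j] = 1
s = [sum(H[i][j] & e_true[j] for j in range(n)) % 2 for i in range(r)]

e = prange(H, s, w)
assert e is not None
assert all(sum(H[i][j] & e[j] for j in range(n)) % 2 == s[i] for i in range(r))
print("recovered an error of weight", sum(e))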
Efficient and Extensive Search for Precise Linear Approximations with High Correlations of Full SNOW-V
SNOW-V is a stream cipher recently designed for the 5G communication system. In this paper, we propose two efficient algorithms to evaluate the precise correlation of SNOW-V's two main nonlinear components with linear hull effects fully considered. Based on these algorithms, we can efficiently and extensively search many more linear masks than before, and the ideas behind them can be generalized to similar nonlinear components in other symmetric ciphers. We apply our algorithms to full SNOW-V to search for different types of linear approximations with high correlations. Our results yield more linear approximations with higher correlations than those recently proposed for full SNOW-V and SNOW-$\text{V}_{\boxplus_{32},\boxplus_8}$. The best linear approximation we found has absolute correlation $2^{-47.567}$. There are at least 8, 135, and 1092 linear approximations with absolute correlation greater than $2^{-47.851}$, $2^{-49}$, and $2^{-50}$, respectively, which yield a fast correlation attack with time/memory/data complexities $2^{240.86}$, $2^{240.37}$, and $2^{236.87}$; this is better than all previous fast correlation attacks against full SNOW-V. Moreover, we propose some properties of linear trails with 3 active S-boxes, which give a theoretical explanation that automatic search methods lack. Our work provides a more comprehensive description of the linear approximation properties of full SNOW-V.
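For readers less familiar with the terminology, the correlation of a linear approximation can be illustrated on a single small component. The Python sketch below exhaustively computes the correlation of every mask pair (a, b) for a fixed 4-bit permutation chosen only for illustration; this is the basic quantity that the paper's algorithms evaluate, with linear hull effects, for SNOW-V's much larger nonlinear components.

def parity(x):
    return bin(x).count("1") & 1

def correlation(sbox, a, b):
    """Correlation of the linear approximation  a.x XOR b.S(x)  over all inputs."""
    n = len(sbox)
    agree = sum(1 for x in range(n) if parity(a & x) == parity(b & sbox[x]))
    return 2 * agree / n - 1   # value in [-1, 1]

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # a 4-bit permutation, for illustration only

best = max(((a, b, correlation(SBOX, a, b))
            for a in range(1, 16) for b in range(1, 16)),
           key=lambda t: abs(t[2]))
print("strongest nontrivial approximation (a, b, correlation):", best)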
An Enhanced Long-term Blockchain Scheme Against Compromise of Cryptography
Blockchain is a decentralized ledger that combines a peer-to-peer (P2P) network, cryptography, and a consensus mechanism over a distributed network. In particular, the underlying cryptographic algorithms protect the blockchain's integrity and data authenticity. However, it is well known that every single algorithm has a limited lifespan due to the increasing computational power of attackers, and the compromise of an algorithm directly leads to the compromise of blockchain validity. There are two existing long-term blockchain schemes dealing with this problem, but we observe that in these schemes: 1) the calculation of block hash values is not compatible with existing blockchains; 2) the hash transition procedure is only specified from the first algorithm to the second one, and among the multiple possible ways to implement the scheme over a longer period, some may lead to the failure of the scheme; 3) the security of these schemes is not formally analyzed and proved. In this paper, we propose an enhanced long-term blockchain scheme as a solution to issues 1 and 2, and we formally prove that our scheme is secure without the lifespan limitation of cryptographic algorithms. Besides, we implement our scheme; the results show that our hash transition procedure can be completed in between 20 minutes (best case) and several hours (worst case) for the current Bitcoin and Ethereum blockchains, which is very efficient.