Mahdi Cheraghchi is an Assistant Professor of EECS at the University of Michigan, Ann Arbor. Before joining U of M in 2019, he was on the faculty of Imperial College London, UK, and prior to that held post-doctoral appointments at UC Berkeley, MIT, CMU, UT Austin, as well as a visiting engineer position at Qualcomm. He obtained his Ph.D. in 2010 from EPFL, where he received the Patrick Denantes Memorial Prize for outstanding doctoral thesis. Mahdi is broadly interested in all theoretical aspects of computer science, especially the role of information and coding theory in cryptography, complexity, algorithms, and high-dimensional geometry. He is a senior member of the ACM and IEEE.
We derive improved and easily computable upper bounds on the capacity of the discrete-time Poisson channel under an average-power constraint and an arbitrary constant dark current term. This is accomplished by combining a general convex duality framework with a modified version of the digamma distribution considered in previous work of the authors (Cheraghchi, J. ACM 2019; Cheraghchi, Ribeiro, IEEE Trans. Inf. Theory 2019). For most choices of parameters, our upper bounds improve upon previous results even when an additional peak-power constraint is imposed on the input.
The basic goal of threshold group testing is to identify up to d defective items among a population of n items, where d is usually much smaller than n. The outcome of a test on a subset of items is positive if the subset contains at least u defective items, negative if it contains at most ℓ defective items, where 0 ≤ ℓ < u, and arbitrary otherwise. This is called threshold group testing with a gap. There are a few reported studies on test designs and decoding algorithms for identifying defective items. Most of the previous studies are impractical, because their problem settings impose numerous constraints or because the decoding complexities of their proposed schemes are relatively large. It is therefore essential to reduce both the number of tests and the decoding complexity, i.e., the time for identifying the defective items, in order to obtain practical schemes. The work presented here makes five contributions. The first is a corrected theorem for a non-adaptive algorithm for threshold ...
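The gapped test model described in this abstract can be made concrete with a short sketch. The function below is an illustrative toy model of the test outcome only (it is not the paper's test design or decoding algorithm); the names `threshold_test`, `u`, and `ell` are chosen here for exposition.

```python
import random

def threshold_test(pool, defectives, u, ell, rng=random):
    """Outcome of one gapped threshold test: positive (1) if the pool
    contains at least u defectives, negative (0) if at most ell, and
    arbitrary (modeled here as a coin flip) in the gap ell < count < u."""
    count = len(set(pool) & set(defectives))
    if count >= u:
        return 1
    if count <= ell:
        return 0
    return rng.randint(0, 1)  # gap region: outcome is arbitrary

# Example: with u = 2 and ell = 0, a pool holding both defectives is
# positive, and a pool holding none is negative.
print(threshold_test([1, 2, 3], [1, 2], 2, 0))  # 1
print(threshold_test([4, 5], [1, 2], 2, 0))     # 0
```

Classical group testing is the special case u = 1, ℓ = 0, where the gap is empty and every outcome is determined.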
We show that all non-negative submodular functions have high noise-stability. As a consequence, we obtain a polynomial-time learning algorithm for this class with respect to any product distribution on {−1, 1}^n (for any constant accuracy parameter ε). Our algorithm also succeeds in the agnostic setting. Previous work on learning submodular functions required either query access or strong assumptions about the types of submodular functions to be learned (and did not hold in the agnostic setting). Additionally, we give simple algorithms that efficiently release differentially private answers to all Boolean conjunctions and to all halfspaces with constant average error, subsuming and improving the recent work due to Gupta, Hardt, Roth and Ullman (STOC 2011).
AC0 ◦ MOD2 circuits are AC0 circuits augmented with a layer of parity gates just above the input layer. We study AC0 ◦ MOD2 circuit lower bounds for computing the Boolean Inner Product function. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC0 ◦ MOD2 circuits of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an Ω̃(n²) lower bound for the special case of depth-4 AC0 ◦ MOD2. Our proof of the depth-4 lower bound employs a new “moment-matching” inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we ...
Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool. In a classical noiseless group testing setup, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in the sense that the existence of a defective member in a pool results in a positive test outcome for that pool. However, this may not always be a valid assumption in some cases of interest. In particular, we consider the case where the defective items in a pool can become independently inactive with a certain probability. Hence, one may obtain a negative test result for a pool even though it contains some defective items. As a result, any sampling and reconstruction method should be able to cope with two different types of uncertainty, i.e., the unknown set of d...
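The "independently inactive" noise model above is easy to simulate. The sketch below is a toy model of a single noisy pooled test under this assumption (the name `dilution_test` and the parameter `q` are illustrative, not from the paper):

```python
import random

def dilution_test(pool, defectives, q, rng=random):
    """Pooled test in which each defective item in the pool independently
    becomes inactive with probability q. The test is positive (1) iff at
    least one defective in the pool remains active."""
    active = [item for item in pool
              if item in defectives and rng.random() >= q]
    return 1 if active else 0

# With q = 0 the model reduces to classical noiseless group testing;
# with q = 1 every defective is silenced and the test is always negative.
print(dilution_test([1, 2, 3], {2}, 0.0))  # 1
print(dilution_test([1, 2, 3], {2}, 1.0))  # 0
```

A reconstruction algorithm in this setting must tolerate false-negative pools of exactly this kind, in addition to not knowing the defective set.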
We study constraint satisfaction problems on the domain {−1, 1}, where the given constraints are homogeneous linear threshold predicates, that is, predicates of the form sgn(w_1 x_1 + · · · + w_n x_n) for some positive integer weights w_1, ..., w_n. Despite their simplicity, current techniques fall short of providing a classification of these predicates in terms of approximability. In fact, it is not easy to guess whether there exists a homogeneous linear threshold predicate that is approximation resistant. The focus of this paper is to identify and study the approximation curve of a class of threshold predicates that allow for non-trivial approximation. Arguably the simplest such predicate is the majority predicate sgn(x_1 + · · · + x_n), for which we obtain an almost complete understanding of the asymptotic approximation curve, assuming the Unique Games Conjecture. Our techniques extend to a more general class of “majority-like” predicates, and we obtain parallel results for them. In ...
For every fixed constant α > 0, we design an algorithm for computing the k-sparse Walsh-Hadamard transform of an N-dimensional vector x ∈ R^N in time k^(1+α) (log N)^O(1). Specifically, the algorithm is given query access to x and computes a k-sparse x̃ ∈ R^N satisfying ‖x̃ − x̂‖₁ ≤ c‖x̂ − H_k(x̂)‖₁ for an absolute constant c > 0, where x̂ is the transform of x and H_k(x̂) is its best k-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to x (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive ℓ1/ℓ1 compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time k^(1+α) (log N)^O(1) (for the GUV-based...
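To unpack the guarantee: x̂ is the (dense) Walsh-Hadamard transform of x, and H_k(x̂) keeps only its k largest-magnitude coefficients. The sketch below computes both objects naively, in time Θ(N log N) rather than the paper's sublinear k^(1+α) (log N)^O(1); it illustrates the quantities in the ℓ1/ℓ1 bound, not the paper's algorithm.

```python
def hadamard_transform(x):
    """Iterative fast Walsh-Hadamard transform (unnormalized) of a list
    whose length is a power of two; runs in Theta(N log N)."""
    h = list(x)
    n = len(h)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                a, b = h[j], h[j + step]
                h[j], h[j + step] = a + b, a - b
        step *= 2
    return h

def best_k_sparse(v, k):
    """H_k(v): keep the k largest-magnitude entries, zero out the rest."""
    keep = set(sorted(range(len(v)), key=lambda i: -abs(v[i]))[:k])
    return [v[i] if i in keep else 0 for i in range(len(v))]

x_hat = hadamard_transform([1, 1, 1, 1])   # [4, 0, 0, 0]: exactly 1-sparse
tail = sum(abs(a - b) for a, b in zip(x_hat, best_k_sparse(x_hat, 1)))
print(tail)  # 0 -- so any valid output x~ must equal x_hat exactly
```

When x̂ is exactly k-sparse the right-hand side ‖x̂ − H_k(x̂)‖₁ vanishes, so the guarantee forces exact recovery; otherwise the recovery error degrades gracefully with the ℓ1 mass of the tail.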
We give a general framework for the construction of small ensembles of capacity-achieving linear codes for a wide range of (not necessarily memoryless) discrete symmetric channels, and in particular the binary erasure and symmetric channels. The main tools used in our constructions are the notions of randomness extractors and lossless condensers, which are regarded as central tools in theoretical computer science. As with random codes, the resulting ensembles preserve their capacity-achieving properties under any change of basis. Our methods can potentially lead to polynomial-sized ensembles; however, using known explicit constructions of randomness conductors we obtain specific ensembles whose size is as small as quasipolynomial in the block length. By applying our construction to Justesen’s concatenation scheme (Justesen, 1972) we obtain explicit capacity-achieving codes for the BEC (resp., BSC) with almost linear time encoding and almost linear time (resp., quadratic time) decoding ...
For a size parameter s : N → N, the Minimum Circuit Size Problem (denoted by MCSP[s(n)]) is the problem of deciding whether the minimum circuit size of a given function f : {0, 1}^n → {0, 1} (represented by a string of length N := 2^n) is at most a threshold s(n). A recent line of work exhibited “hardness magnification” phenomena for MCSP: a very weak lower bound for MCSP implies a breakthrough result in complexity theory. For example, McKay, Murray, and Williams (STOC 2019) implicitly showed that, for some constant μ1 > 0, if MCSP[2^(μ1·n)] cannot be computed by a one-tape Turing machine (with an additional one-way read-only input tape) running in time N^1.01, then P ≠ NP. In this paper, we present the following new lower bounds against one-tape Turing machines and branching programs: 1. A randomized two-sided error one-tape Turing machine (with an additional one-way read-only input tape) cannot compute MCSP[2^(μ2·n)] in time N^1.99, for some constant μ2 > μ1. 2. A non-deterministic (or...
2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), 2020
In the long-studied problem of combinatorial group testing, one is asked to detect a set of $k$ defective items out of a population of size $n$, using $m\ll n$ disjunctive measurements. In the non-adaptive setting, the most widely used combinatorial objects are disjunct and list-disjunct matrices, which define incidence matrices of test schemes. Disjunct matrices allow the identification of the exact set of defectives, whereas list-disjunct matrices identify a small superset of the defectives. Apart from the combinatorial guarantees, it is often of key interest to equip measurement designs with efficient decoding algorithms. The most efficient decoders should run in sublinear time in $n$, and ideally near-linear in the number of measurements $m$. In this work, we give several constructions with an optimal number of measurements and near-optimal decoding time for the most fundamental group testing tasks, as well as for central tasks in the compressed sensing and heavy hitters literat...
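For intuition, the standard "naive" decoder for a disjunct design runs in time O(mn): an item is declared defective iff it appears in no negative test. The sublinear-time decoders constructed in this work are far more involved; the sketch below (with illustrative names, tests given as lists of item indices) only shows the disjunctive measurement model and the naive baseline.

```python
def or_outcomes(tests, defectives):
    """Disjunctive (OR) measurements: a test is positive (1) iff its
    pool intersects the defective set."""
    return [1 if set(t) & set(defectives) else 0 for t in tests]

def naive_decode(tests, outcomes, n):
    """Declare an item defective iff every test containing it is
    positive. With a d-disjunct design and at most d defectives this
    returns the defective set exactly; it runs in O(m n) time."""
    cleared = set()
    for pool, outcome in zip(tests, outcomes):
        if outcome == 0:
            cleared |= set(pool)  # items in a negative test are clean
    return sorted(set(range(n)) - cleared)

tests = [[0, 1], [0, 2], [1, 2]]         # pairwise pools over 3 items
outcomes = or_outcomes(tests, {0})        # [1, 1, 0]
print(naive_decode(tests, outcomes, 3))   # [0]
```

A list-disjunct design used with the same decoder would instead guarantee only a small superset of the defectives, trading exactness for fewer measurements.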
Non-malleable secret sharing was recently studied by Goyal and Kumar in the independent tampering and joint tampering models, for threshold schemes (STOC 2018) and for secret sharing with general access structures (CRYPTO 2018). We study non-malleable secret sharing in a natural adaptive tampering model, where the share vector is tampered using a function, from a given tampering family, chosen adaptively according to any unauthorised set of shares. Intuitively, the passive privacy adversary of secret sharing and the active adversary characterized by the given tampering family collude. We then focus on the tampering family of affine functions and construct non-malleable secret sharing in the adaptive tampering model. The constructions are modular, combining an erasure code with an extractor that together provide both privacy and non-malleability. We make use of randomness extractors of various flavours, including seeded and seedless non-malleable extractors. We discuss our results and open problems.
Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret of arbitrary length $\ell$ among any $N \leq 2^\ell$ players such that for a threshold parameter $t$, (i) knowledge of any $t$ shares does not reveal any information about the secret and (ii) any choice of $t+1$ shares fully reveals the secret. It is known that any such threshold secret sharing scheme necessarily requires shares of length $\ell$, and in this sense Shamir's scheme is optimal. The more general notion of ramp schemes requires reconstruction of the secret from any $t+g$ shares, for a positive integer gap parameter $g$. Ramp secret sharing schemes necessarily require shares of length $\ell/g$. Beyond the bound related to the secret length $\ell$, the share lengths of ramp schemes cannot go below a quantity that depends only on the gap ratio $g/N$. In this work, we study secret sharing in the extremal case of bit-long shares and arbitrarily small gap ratio $g/N$, whe...
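Shamir's scheme itself is short enough to sketch: choose a random degree-t polynomial over a prime field with constant term equal to the secret, hand out evaluations, and reconstruct by Lagrange interpolation at zero. This is the classical scheme, shown here over a toy field (the small prime `P` is for illustration; real deployments pick a field sized to the secret).

```python
import random

P = 2**13 - 1  # small Mersenne prime; field size is illustrative only

def share(secret, t, n, rng=random):
    """(t+1)-out-of-n Shamir sharing: evaluate a random degree-t
    polynomial with constant term `secret` at the points 1..n."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t+1 distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(1234, t=2, n=5)
print(reconstruct(shares[:3]))  # 1234: any 3 shares suffice
```

Here each share is as long as the secret, matching the $\ell$-bit lower bound quoted above; a ramp scheme relaxes reconstruction to $t+g$ shares precisely in order to shrink shares toward $\ell/g$ bits.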
Mean-based reconstruction is a fundamental, natural approach to worst-case trace reconstruction over channels with synchronization errors. It is known that exp(Θ(n^(1/3))) traces are necessary and sufficient for mean-based worst-case trace reconstruction over the deletion channel, and this result was also extended to certain channels combining deletions and geometric insertions of uniformly random bits. In this work, we use a simple extension of the original complex-analytic approach to show that these results are examples of a much more general phenomenon. We introduce oblivious synchronization channels, which map each input bit to an arbitrarily distributed sequence of replications and insertions of random bits. This general class captures all previously considered synchronization channels. We show that for any oblivious synchronization channel whose output length follows a sub-exponential distribution, either mean-based trace reconstruction is impossible or exp(O(n^(1/3))) traces suffice for ...
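The deletion channel and the statistic used by mean-based reconstruction can be simulated in a few lines. This toy sketch (names `deletion_trace`, `mean_trace`, and parameter `delta` are illustrative) produces traces and their coordinate-wise average over zero-padded traces; the paper's analysis concerns how many such traces are needed to distinguish worst-case inputs.

```python
import random

def deletion_trace(x, delta, rng=random):
    """One trace of the bit string x through the deletion channel:
    each bit is deleted independently with probability delta."""
    return [b for b in x if rng.random() >= delta]

def mean_trace(x, delta, num_traces, rng=random):
    """Coordinate-wise mean of zero-padded traces: the statistic that
    mean-based reconstruction works from."""
    sums = [0.0] * len(x)
    for _ in range(num_traces):
        for i, b in enumerate(deletion_trace(x, delta, rng)):
            sums[i] += b  # positions past the trace length count as 0
    return [s / num_traces for s in sums]

# Sanity check: with delta = 0 nothing is deleted, so the mean trace
# equals the input exactly.
print(mean_trace([1, 0, 1], 0.0, 4))  # [1.0, 0.0, 1.0]
```

An oblivious synchronization channel generalizes this by replacing each input bit with an arbitrarily distributed sequence of replications and random insertions, rather than just keeping or deleting it.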
The first part of the paper presents a review of the gold-standard testing protocol for Covid-19, real-time reverse transcriptase PCR, and its properties and associated measurement data, such as amplification curves, that can guide the development of appropriate and accurate adaptive group testing protocols. The second part of the paper examines various off-the-shelf group testing methods for Covid-19 and identifies their strengths and weaknesses for the application at hand. The third part of the paper contains a collection of new analytical results for adaptive semiquantitative group testing with probabilistic and combinatorial priors, including performance bounds, algorithmic solutions, and noisy testing protocols. The probabilistic setting is of special importance as it is designed to be simple to implement by non-experts and to handle heavy hitters. The worst-case paradigm extends and improves upon prior work on semiquantitative group testing with and without spec...
Locally decodable codes (LDCs) are error-correcting codes with the extra property that it is sufficient to read just a small number of positions of a possibly corrupted codeword in order to recover any one position of the input. To achieve this, it is necessary to use randomness in the decoding procedures. We refer to the probability of returning the correct answer as the correctness of the decoding algorithm. Thus far, the study of LDCs has focused on the tradeoff between their length and the query complexity of the decoders. Another natural question is what the largest possible correctness is, as a function of the amount of codeword corruption and the number of queries, regardless of the length of the codewords. Goldreich et al. (Computational Complexity 15(3), 2006) observed that for a given number of queries and fraction of errors, the correctness probability cannot be arbitrarily close to 1. However, the quantitative dependence between the largest possible correctnes...
2021 IEEE International Symposium on Information Theory (ISIT), 2021
Mean-based reconstruction is a fundamental, natural approach to worst-case trace reconstruction over channels with synchronization errors. It is known that $\exp(O(n^{1/3}))$ traces are necessary and sufficient for mean-based worst-case trace reconstruction over the deletion channel, and this result was also extended to certain channels combining deletions and geometric insertions of uniformly random bits. In this work, we use a simple extension of the original complex-analytic approach to show that these results are examples of a much more general phenomenon: $\exp(O(n^{1/3}))$ traces suffice for mean-based worst-case trace reconstruction over any memoryless channel that maps each input bit to an arbitrarily distributed sequence of replications and insertions of random bits, provided the length of this sequence follows a sub-exponential distribution.
ACM Transactions on Computation Theory (TOCT), 2020
The Minimum Circuit Size Problem (MCSP) asks if a given truth table of a Boolean function f can be computed by a Boolean circuit of size at most θ, for a given parameter θ. We improve several circuit lower bounds for MCSP, using pseudorandom generators (PRGs) that are local; a PRG is called local if its output bit strings, when viewed as the truth table of a Boolean function, can be computed by a Boolean circuit of small size. We get new and improved lower bounds for MCSP that almost match the best-known lower bounds against several circuit models. Specifically, we show that computing MCSP, on functions with a truth table of length N, requires • N^(3−o(1))-size de Morgan formulas, improving the recent N^(2−o(1)) lower bound by Hirahara and Santhanam (CCC, 2017), • N^(2−o(1))-size formulas over an arbitrary basis or general branching programs (no non-trivial lower bound was known for MCSP against these models), and • 2^(Ω(N^(1/(d+1.01))))-size depth-d AC0 circuits, improving the (implicit, in their...
The goal of combinatorial group testing is to efficiently identify up to $d$ defective items in a large population of $n$ items, where $d \ll n$. Defective items satisfy certain properties while the remaining items in the population do not. To efficiently identify defective items, a subset of items is pooled and then tested. In this work, we consider complex group testing (CmplxGT) in which a set of defective items consists of subsets of positive items (called \textit{positive complexes}). CmplxGT is classified into two categories: classical CmplxGT (CCmplxGT) and generalized CmplxGT (GCmplxGT). In CCmplxGT, the outcome of a test on a subset of items is positive if the subset contains at least one positive complex, and negative otherwise. In GCmplxGT, the outcome of a test on a subset of items is positive if the subset has a certain number of items of some positive complex, and negative otherwise. For CCmplxGT, we present a scheme that efficiently identifies all positive complexes i...
We derive improved and easily computable upper bounds on the capacity of the discrete-time Poisso... more We derive improved and easily computable upper bounds on the capacity of the discrete-time Poisson channel under an average-power constraint and an arbitrary constant dark current term. This is accomplished by combining a general convex duality framework with a modified version of the digamma distribution considered in previous work of the authors (Cheraghchi, J. ACM 2019; Cheraghchi, Ribeiro, IEEE Trans. Inf. Theory 2019). For most choices of parameters, our upper bounds improve upon previous results even when an additional peak-power constraint is imposed on the input.
The basic goal of threshold group testing is to identify up to d defective items among a populati... more The basic goal of threshold group testing is to identify up to d defective items among a population of n items, where d is usually much smaller than n. The outcome of a test on a subset of items is positive if the subset has at least u defective items, negative if it has up to ℓ defective items, where 0 ≤ℓ < u, and arbitrary otherwise. This is called threshold group testing with a gap. There are a few reported studies on test designs and decoding algorithms for identifying defective items. Most of the previous studies have not been feasible because there are numerous constraints on their problem settings or the decoding complexities of their proposed schemes are relatively large. Therefore, it is compulsory to reduce the number of tests as well as the decoding complexity, i.e., the time for identifying the defective items, for achieving practical schemes. The work presented here makes five contributions. The first is a corrected theorem for a non-adaptive algorithm for threshold ...
We show that all non-negative submodular functions have high noise-stability. As a con-sequence, ... more We show that all non-negative submodular functions have high noise-stability. As a con-sequence, we obtain a polynomial-time learning algorithm for this class with respect to any product distribution on {−1, 1}n (for any constant accuracy parameter ). Our algorithm also succeeds in the agnostic setting. Previous work on learning submodular functions required ei-ther query access or strong assumptions about the types of submodular functions to be learned (and did not hold in the agnostic setting). Additionally we give simple algorithms that efficiently release differentially private answers to all Boolean conjunctions and to all halfspaces with constant average error, subsuming and improving the recent work due to Gupta, Hardt, Roth and Ullman (STOC 2011). 1
AC0 ◦ MOD2 circuits are AC0 circuits augmented with a layer of parity gates just above the input ... more AC0 ◦ MOD2 circuits are AC0 circuits augmented with a layer of parity gates just above the input layer. We study AC0 ◦MOD2 circuit lower bounds for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC0 ◦MOD2 of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an Ω̃(n2) lower bound for the special case of depth-4 AC0 ◦MOD2. Our proof of the depth-4 lower bound employs a new “moment-matching ” inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we ...
Identification of defective members of large populations has been widely studied in the statistic... more Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool. In a classical noiseless group testing setup, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in the sense that the existence of a defective member in a pool results in the test outcome of that pool to be positive. However, this may not be always a valid assumption in some cases of interest. In particular, we consider the case where the defective items in a pool can become independently inactive with a certain probability. Hence, one may obtain a negative test result in a pool despite containing some defective items. As a result, any sampling and reconstruction method should be able to cope with two different types of uncertainty, i.e., the unknown set of d...
We study constraint satisfaction problems on the domain {−1, 1}, where the given constraints are ... more We study constraint satisfaction problems on the domain {−1, 1}, where the given constraints are homogeneous linear threshold predicates. That is, predicates of the form sgn(w1x1 + · · · + wnxn) for some positive integer weights w1,..., wn. Despite their simplicity, current techniques fall short of providing a classification of these predicates in terms of approximability. In fact, it is not easy to guess whether there exists a homogeneous linear threshold predicate that is approximation resistant or not. The focus of this paper is to identify and study the approximation curve of a class of threshold predicates that allow for non-trivial approximation. Arguably the simplest such predicate is the majority predicate sgn(x1 + · · · + xn), for which we obtain an almost complete understanding of the asymptotic approximation curve, assuming the Unique Games Conjecture. Our techniques extend to a more general class of “majority-like ” predicates and we obtain parallel results for them. In ...
For every fixed constant α> 0, we design an algorithm for computing the k-sparse Walsh-Hadamar... more For every fixed constant α> 0, we design an algorithm for computing the k-sparse Walsh-Hadamard transform of an N-dimensional vector x ∈ RN in time k1+α(logN)O(1). Specifically, the algorithm is given query access to x and computes a k-sparse x ̃ ∈ RN satisfying ‖x̃ − x̂‖1 ≤ c‖x̂−Hk(x̂)‖1, for an absolute constant c> 0, where x ̂ is the transform of x and Hk(x̂) is its best k-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to x (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive `1/`1 compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time k1+α(logN)O(1) (for the GUV-based...
Abstract—We give a general framework for construction of small ensembles of capacity achieving li... more Abstract—We give a general framework for construction of small ensembles of capacity achieving linear codes for a wide range of (not necessarily memoryless) discrete symmetric channels, and in particular, the binary erasure and symmetric channels. The main tool used in our constructions is the notion of randomness extractors and lossless condensers that are regarded as central tools in theoretical computer science. Same as random codes, the resulting ensembles preserve their capacity achieving properties under any change of basis. Our methods can potentially lead to polynomial-sized ensembles; however, using known explicit constructions of randomness conductors we obtain specific ensembles whose size is as small as quasipolynomial in the block length. By applying our construction to Justesen’s concatenation scheme (Justesen, 1972) we obtain explicit capacity achieving codes for BEC (resp., BSC) with almost linear time encoding and almost linear time (resp., quadratic time) decoding ...
For a size parameter s : N → N, the Minimum Circuit Size Problem (denoted by MCSP[s(n)]) is the p... more For a size parameter s : N → N, the Minimum Circuit Size Problem (denoted by MCSP[s(n)]) is the problem of deciding whether the minimum circuit size of a given function f : {0, 1} → {0, 1} (represented by a string of length N := 2) is at most a threshold s(n). A recent line of work exhibited “hardness magnification” phenomena for MCSP: A very weak lower bound for MCSP implies a breakthrough result in complexity theory. For example, McKay, Murray, and Williams (STOC 2019) implicitly showed that, for some constant μ1 > 0, if MCSP[2μ1·n] cannot be computed by a one-tape Turing machine (with an additional one-way read-only input tape) running in time N1.01, then P 6= NP. In this paper, we present the following new lower bounds against one-tape Turing machines and branching programs: 1. A randomized two-sided error one-tape Turing machine (with an additional one-way read-only input tape) cannot compute MCSP[2μ2·n] in time N1.99, for some constant μ2 > μ1. 2. A non-deterministic (or...
2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), 2020
In the long-studied problem of combinatorial group testing, one is asked to detect a set of $k$ d... more In the long-studied problem of combinatorial group testing, one is asked to detect a set of $k$ defective items out of a population of size $n$, using $m\ll n$ disjunctive measurements. In the non-adaptive setting, the most widely used combinatorial objects are disjunct and list-disjunct matrices, which define incidence matrices of test schemes. Disjunct matrices allow the identification of the exact set of defectives, whereas list disjunct matrices identify a small superset of the defectives. Apart from the combinatorial guarantees, it is often of key interest to equip measurement designs with efficient decoding algorithms. The most efficient decoders should run in sublinear time in $n$, and ideally near-linear in the number of measurements $m$. In this work, we give several constructions with an optimal number of measurements and near-optimal decoding time for the most fundamental group testing tasks, as well as for central tasks in the compressed sensing and heavy hitters literat...
Non-malleable secret sharing was recently studied by Goyal and Kumar in independent tampering and... more Non-malleable secret sharing was recently studied by Goyal and Kumar in independent tampering and joint tampering models for threshold scheme (STOC18) and secret sharing with general access structure (CRYPTO18). We study non-malleable secret sharing in a natural adaptive tampering model, where the share vector is tampered using a function, in a given tampering family, chosen adaptively according to any unauthorised set of shares. Intuitively, the passive privacy adversary of secret sharing and the active adversary characterized by the given tampering family collude. We then focus on the tampering family of affine functions and construct non-malleable secret sharing in the adaptive tampering model. The constructions are modular with an erasure code and an extractor that provides both privacy and non-malleability. We make use of randomness extractors of various flavours, including the seeded/seedless non-malleable extractors. We discuss our results and open problems.
Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret ... more Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret of arbitrary length $\ell$ among any $N \leq 2^\ell$ players such that for a threshold parameter $t$, (i) the knowledge of any $t$ shares does not reveal any information about the secret and, (ii) any choice of $t+1$ shares fully reveals the secret. It is known that any such threshold secret sharing scheme necessarily requires shares of length $\ell$, and in this sense Shamir's scheme is optimal. The more general notion of ramp schemes requires the reconstruction of secret from any $t+g$ shares, for a positive integer gap parameter $g$. Ramp secret sharing scheme necessarily requires shares of length $\ell/g$. Other than the bound related to secret length $\ell$, the share lengths of ramp schemes can not go below a quantity that depends only on the gap ratio $g/N$. In this work, we study secret sharing in the extremal case of bit-long shares and arbitrarily small gap ratio $g/N$, whe...
Mean-based reconstruction is a fundamental, natural approach to worst-case trace reconstruction over channels with synchronization errors. It is known that $\exp(\Theta(n^{1/3}))$ traces are necessary and sufficient for mean-based worst-case trace reconstruction over the deletion channel, and this result was also extended to certain channels combining deletions and geometric insertions of uniformly random bits. In this work, we use a simple extension of the original complex-analytic approach to show that these results are examples of a much more general phenomenon. We introduce oblivious synchronization channels, which map each input bit to an arbitrarily distributed sequence of replications and insertions of random bits. This general class captures all previously considered synchronization channels. We show that for any oblivious synchronization channel whose output length follows a sub-exponential distribution, either mean-based trace reconstruction is impossible or $\exp(O(n^{1/3}))$ traces suffice for ...
The first part of the paper presents a review of the gold-standard testing protocol for Covid-19, real-time reverse transcriptase PCR, and its properties and associated measurement data, such as amplification curves, that can guide the development of appropriate and accurate adaptive group testing protocols. The second part of the paper examines various off-the-shelf group testing methods for Covid-19 and identifies their strengths and weaknesses for the application at hand. The third part of the paper contains a collection of new analytical results for adaptive semiquantitative group testing with probabilistic and combinatorial priors, including performance bounds, algorithmic solutions, and noisy testing protocols. The probabilistic setting is of special importance, as it is designed to be simple to implement by non-experts and to handle heavy hitters. The worst-case paradigm extends and improves upon prior work on semiquantitative group testing with and without spec...
Locally decodable codes (LDCs) are error-correcting codes with the extra property that it is sufficient to read just a small number of positions of a possibly corrupted codeword in order to recover any one position of the input. To achieve this, it is necessary to use randomness in the decoding procedures. We refer to the probability of returning the correct answer as the correctness of the decoding algorithm. Thus far, the study of LDCs has focused on the tradeoff between their length and the query complexity of the decoders. Another natural question is: what is the largest possible correctness, as a function of the amount of codeword corruption and the number of queries, regardless of the length of the codewords? Goldreich et al. (Computational Complexity 15(3), 2006) observed that for a given number of queries and fraction of errors, the correctness probability cannot be arbitrarily close to 1. However, the quantitative dependence between the largest possible correctnes...
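A concrete example of the local decoding described above is the classic 2-query decoder for the Hadamard code (an illustrative textbook construction, not one from this paper): to recover input bit $x_i$, query two random codeword positions whose XOR equals $x_i$.

```python
# Sketch of 2-query local decoding for the Hadamard code.
import random

def hadamard_encode(x_bits):
    """Codeword indexed by all a in {0,1}^k: C[a] = <x, a> mod 2."""
    k = len(x_bits)
    def ip(a):
        return sum(x_bits[i] for i in range(k) if (a >> i) & 1) % 2
    return [ip(a) for a in range(2 ** k)]

def local_decode_bit(codeword, k, i, rng=random):
    """Recover x_i with 2 queries: C[a] xor C[a ^ e_i] = <x, e_i> = x_i.
    On a delta-corrupted codeword, each query hits a corruption with
    probability at most delta, so correctness is at least 1 - 2*delta."""
    a = rng.randrange(2 ** k)
    return codeword[a] ^ codeword[a ^ (1 << i)]

x = [1, 0, 1, 1]
C = hadamard_encode(x)
# On an uncorrupted codeword the decoder is always correct.
assert all(local_decode_bit(C, 4, i) == x[i] for i in range(4))
```

The $1 - 2\delta$ correctness bound here is exactly the kind of quantity whose optimal dependence on the corruption level and query count the paper investigates.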
2021 IEEE International Symposium on Information Theory (ISIT), 2021
Mean-based reconstruction is a fundamental, natural approach to worst-case trace reconstruction over channels with synchronization errors. It is known that $\exp(O(n^{1/3}))$ traces are necessary and sufficient for mean-based worst-case trace reconstruction over the deletion channel, and this result was also extended to certain channels combining deletions and geometric insertions of uniformly random bits. In this work, we use a simple extension of the original complex-analytic approach to show that these results are examples of a much more general phenomenon: $\exp(O(n^{1/3}))$ traces suffice for mean-based worst-case trace reconstruction over any memoryless channel that maps each input bit to an arbitrarily distributed sequence of replications and insertions of random bits, provided the length of this sequence follows a sub-exponential distribution.
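The mean-trace statistic that drives this approach can be sketched for the i.i.d. deletion channel, the simplest channel covered by the result (the input string and deletion probability below are arbitrary illustrative choices):

```python
# Illustrative sketch of the mean trace over the i.i.d. deletion channel.
import random

def trace(x, q, rng):
    """One trace: delete each bit of x independently with probability q."""
    return [b for b in x if rng.random() > q]

def mean_trace(x, q, num_traces, rng):
    """Empirical mean of zero-padded traces; mean-based reconstruction
    distinguishes inputs using only this length-n vector."""
    sums = [0.0] * len(x)
    for _ in range(num_traces):
        for j, b in enumerate(trace(x, q, rng)):
            sums[j] += b
    return [s / num_traces for s in sums]

rng = random.Random(0)
x = [1, 0, 1, 1, 0, 0, 1, 0]
m = mean_trace(x, q=0.3, num_traces=20000, rng=rng)
# E[m[0]] = sum_i x_i * q**i * (1 - q): bit i lands in trace position 0
# exactly when it survives and all earlier bits are deleted, so the mean
# trace is a known linear function of the input string.
```

The complex-analytic argument bounds how well-separated these mean vectors are for distinct inputs, which determines how many traces the averaging needs.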
ACM Transactions on Computation Theory (TOCT), 2020
The Minimum Circuit Size Problem (MCSP) asks if a given truth table of a Boolean function $f$ can be computed by a Boolean circuit of size at most $\theta$, for a given parameter $\theta$. We improve several circuit lower bounds for MCSP, using pseudorandom generators (PRGs) that are local; a PRG is called local if its output bit strings, when viewed as the truth table of a Boolean function, can be computed by a Boolean circuit of small size. We get new and improved lower bounds for MCSP that almost match the best-known lower bounds against several circuit models. Specifically, we show that computing MCSP, on functions with a truth table of length $N$, requires • $N^{3-o(1)}$-size de Morgan formulas, improving the recent $N^{2-o(1)}$ lower bound by Hirahara and Santhanam (CCC, 2017), • $N^{2-o(1)}$-size formulas over an arbitrary basis or general branching programs (no non-trivial lower bound was known for MCSP against these models), and • $2^{\Omega(N^{1/(d+1.01)})}$-size depth-$d$ AC$^0$ circuits, improving the (implicit, in their...
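To make the problem statement concrete, here is an exhaustive-search sketch of MCSP itself for tiny parameters (the paper proves lower bounds; this only illustrates the decision problem, with gate counts over an assumed {AND, OR, NOT} basis):

```python
# Brute-force MCSP for tiny truth tables: truth tables are bitmasks over
# the 2**k input rows, and circuit size is counted in gates.

def mcsp_bruteforce(tt, k, theta):
    """Is truth table `tt` computed by some circuit with at most `theta`
    gates over the basis {AND, OR, NOT}? Exhaustive search."""
    n_rows = 2 ** k
    mask = (1 << n_rows) - 1
    # Truth tables of the k input variables, as bitmasks.
    inputs = []
    for i in range(k):
        t = 0
        for row in range(n_rows):
            if (row >> i) & 1:
                t |= 1 << row
        inputs.append(t)

    def search(wires, gates_left):
        if tt in wires:
            return True
        if gates_left == 0:
            return False
        candidates = set()
        for a in wires:
            candidates.add(mask & ~a)      # NOT
            for b in wires:
                candidates.add(a & b)      # AND
                candidates.add(a | b)      # OR
        return any(search(wires + [c], gates_left - 1)
                   for c in candidates if c not in wires)

    return search(list(inputs), theta)

# XOR of two variables has truth table 0b0110 (rows 1 and 2 true).
assert mcsp_bruteforce(0b0110, 2, 1) is False  # no single gate computes XOR
assert mcsp_bruteforce(0b0110, 2, 4) is True   # (x OR y) AND NOT (x AND y)
```

This brute force runs in time exponential in the table length $N = 2^k$, which is precisely why circuit lower bounds for MCSP, as in the paper, are interesting.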
The goal of combinatorial group testing is to efficiently identify up to $d$ defective items in a large population of $n$ items, where $d \ll n$. Defective items satisfy certain properties while the remaining items in the population do not. To efficiently identify defective items, a subset of items is pooled and then tested. In this work, we consider complex group testing (CmplxGT) in which a set of defective items consists of subsets of positive items (called \textit{positive complexes}). CmplxGT is classified into two categories: classical CmplxGT (CCmplxGT) and generalized CmplxGT (GCmplxGT). In CCmplxGT, the outcome of a test on a subset of items is positive if the subset contains at least one positive complex, and negative otherwise. In GCmplxGT, the outcome of a test on a subset of items is positive if the subset has a certain number of items of some positive complex, and negative otherwise. For CCmplxGT, we present a scheme that efficiently identifies all positive complexes i...
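The pooling-and-testing step described above can be sketched in the classical OR model with the simple COMP decoder (an illustrative textbook baseline, not the complex-testing scheme of the paper; pool density and test counts are arbitrary choices):

```python
# Non-adaptive group testing with random pools and the COMP decoder.
import random

def run_tests(pools, defectives):
    """OR model: a pool tests positive iff it contains a defective item."""
    return [any(i in defectives for i in pool) for pool in pools]

def comp_decode(pools, outcomes, n):
    """COMP: every item in a negative pool is definitely non-defective;
    declare everything never cleared to be defective."""
    cleared = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            cleared.update(pool)
    return sorted(set(range(n)) - cleared)

rng = random.Random(1)
n, num_tests = 100, 60
defectives = {7, 42}
pools = [[i for i in range(n) if rng.random() < 0.3]
         for _ in range(num_tests)]
outcomes = run_tests(pools, outcomes := None) if False else \
    run_tests(pools, defectives)
estimate = comp_decode(pools, outcomes, n)
# COMP never misses a defective; with enough random tests, false
# positives are unlikely.
assert set(defectives) <= set(estimate)
```

Complex group testing generalizes this: a test fires only when a whole positive complex (or enough of one, in GCmplxGT) falls inside the pool, so the decoder must recover subsets rather than single items.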
Papers by Mahdi Cheraghchi