Frequency estimation plays a critical role in many applications involving personal and private categorical data. Such data are often collected sequentially over time, making it valuable to estimate their distribution online while preserving privacy. We propose AdOBEst-LDP, a new algorithm for adaptive, online Bayesian estimation of categorical distributions under local differential privacy (LDP). The key idea behind AdOBEst-LDP is to enhance the utility of future privatized categorical data by leveraging inference from previously collected privatized data. To achieve this, AdOBEst-LDP uses a new adaptive LDP mechanism to collect privatized data. This LDP mechanism constrains its output to a subset of categories that “predicts” the next user’s data. By adapting the subset selection process to the past privatized data via Bayesian estimation, the algorithm improves the utility of future privatized data. To quantify utility, we explore various well-known information metrics, including (but not limited to) the Fisher information matrix, total variation distance, and information entropy. For Bayesian estimation, we utilize posterior sampling through stochastic gradient Langevin dynamics, a computationally efficient approximate Markov chain Monte Carlo (MCMC) method.
We provide a theoretical analysis showing that (i) the posterior distribution of the category probabilities targeted with Bayesian estimation converges to the true probabilities even for approximate posterior sampling, and (ii) AdOBEst-LDP eventually selects the optimal subset for its LDP mechanism with high probability if posterior sampling is performed exactly. We also present numerical results to validate the estimation accuracy of AdOBEst-LDP. Our comparisons show its superior performance against non-adaptive and semi-adaptive competitors across different privacy levels and distributional parameters.
1 Introduction
Frequency estimation is the focus of many applications that involve personal and private categorical data. Suppose a type of sensitive information is represented as a random variable \(X\) with a categorical distribution denoted by \(\text{Cat}(\theta)\), where \(\theta\) is a \(K\)-dimensional probability vector. As real-life examples, this could be the distribution of the types of a product bought by the customers of an online shopping company, responses to a poll question like “Which party will you vote for in the next elections?”, occupational affiliations of the people who visit the Web site of a governmental agency, and so on.
In this article, we propose an adaptive and online algorithm to estimate \(\theta\) in a Local Differential Privacy (LDP) framework where \(X\) is unobserved and, instead, we have access to a randomized response \(Y\) derived from \(X\). In the LDP framework, a central aggregator receives each user’s randomized (privatized) data to be used for inferential tasks. In that sense, LDP differs from global DP [7], where the aggregator collects the sensitive data without noise and then privatizes the operations performed on the sensitive dataset. Hence, LDP can be said to provide a stricter form of privacy and is used in cases where the aggregator may not be trusted [14]. Below, we give a more formal definition of \(\epsilon\)-LDP as a property that concerns a randomized mechanism.
Definition 1 (\(\epsilon\)-LDP). A randomized mechanism \(\mathcal{M}\) taking inputs in \(\mathcal{X}\) satisfies \(\epsilon\)-LDP if, for all \(x,x^{\prime}\in\mathcal{X}\) and all possible outputs \(y\),
\begin{align*}\mathbb{P}(\mathcal{M}(x)=y)\leq e^{\epsilon}\,\mathbb{P}(\mathcal{M}(x^{\prime})=y).\end{align*}
The definition of LDP is almost the same as that of global DP. The main difference is that, in global DP, the inputs \(x,x^{\prime}\) are two datasets that differ in only one individual’s record, whereas in LDP, \(x,x^{\prime}\) are two different data points from \(\mathcal{X}\).
In Definition 1, \(\epsilon\geq 0\) is the privacy parameter. A smaller \(\epsilon\) value provides stronger privacy. One main challenge in most differential privacy settings is to decide on the randomized mechanism. In the case of LDP, this is how an individual data point \(X\) should be randomized. For a given randomized algorithm, too little randomization may not guarantee the privacy of individuals, whereas too severe randomization deteriorates the utility of the output of the randomized algorithm. Balancing these conflicting objectives (privacy vs. utility) is the main goal of the research on estimation under privacy constraints.
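As a concrete illustration of Definition 1, the \(\epsilon\)-LDP property of a mechanism on a finite set can be checked directly from its transition matrix; the sketch below (ours, not from the article) verifies \(\max_{x,x^{\prime},y}\mathbb{P}(y|x)/\mathbb{P}(y|x^{\prime})\leq e^{\epsilon}\):

```python
import numpy as np

def is_ldp(P, eps, tol=1e-12):
    """Check whether a K x K transition matrix P, with P[x, y] = Pr(Y = y | X = x),
    satisfies eps-LDP: P[x, y] <= exp(eps) * P[x2, y] for all x, x2, y."""
    for y in range(P.shape[1]):
        col = P[:, y]
        if col.max() > np.exp(eps) * col.min() + tol:
            return False
    return True

# Example: standard randomized response on K categories.
K, eps = 5, 1.0
P = np.full((K, K), 1.0 / (np.exp(eps) + K - 1))
np.fill_diagonal(P, np.exp(eps) / (np.exp(eps) + K - 1))
print(is_ldp(P, eps))  # True
```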
In many cases, individuals’ data points are collected sequentially. A basic example is opinion polling, where data are collected typically in time intervals of lengths in the order of hours or days. Personal data entered during registration are another example. For example, a hospital can collect patients’ categorical data as they visit the hospital for the first time.
While sequential collection of individual data may make the estimation task under the LDP constraint harder, it may also offer an opportunity to adapt the randomized mechanism in time to improve the estimation quality. Motivated by that, in this article, we address the problem of online Bayesian estimation of a categorical distribution (\(\theta\)) under \(\epsilon\)-LDP, while at the same time choosing the randomization mechanism adaptively so that the utility is improved continually in time.
Contribution:
This article presents Adaptive Online Bayesian Frequency Estimation with LDP (AdOBEst-LDP), a new methodological framework. A flowchart of AdOBEst-LDP is given in Figure 1 to expose the reader to the main idea of the framework: to collect future privatized categorical data with high estimation utility based on the knowledge extracted from the previously collected privatized categorical data. To achieve this goal, AdOBEst-LDP continually adapts its randomized response mechanism to the estimation of \(\theta\).
Fig. 1.
The development of AdOBEst-LDP offers three main contributions to the LDP literature.
– A New Randomized Response Mechanism: AdOBEst-LDP uses a new adaptive Randomly Restricted Randomized Response (RRRR) mechanism to produce randomized responses under \(\epsilon\)-LDP. RRRR is a generalization of the Standard Randomized Response (SRR) mechanism in that it restricts the response to a subset of categories. This subset is selected such that the sensitive information \(X\) of the next individual is likely contained in that subset. To ensure this, the subset selection step uses two inputs: (i) a sample for \(\theta\) drawn from the posterior distribution of \(\theta\) conditional on the past data, and (ii) a utility function that scores the informativeness of the randomized response obtained from RRRR when it is run with a given subset. To that end, we propose several utility functions to score the informativeness of the randomized response. The utility functions are based on well-known tools and metrics from probability and statistics, such as Fisher information [2, 18, 22, 32], entropy, Total Variation (TV) distance, expected squared error, and the probability of an honest response, i.e., \(Y=X\). We provide some insight into those utility functions both theoretically and numerically. Moreover, we also provide a computational complexity analysis for the proposed utility functions.
– Posterior Sampling: We equip AdOBEst-LDP with a scalable posterior sampling method for parameter estimation. Bayesian estimation is a natural choice for inference when the data is corrupted or censored [15–17] and such modification can be statistically modeled. In differential privacy settings, too, Bayesian inference is widely employed [2, 8, 13, 31] when the input data are shared with privacy-preserving noise. Standard Markov chain Monte Carlo (MCMC) methods, such as Gibbs sampling, have a computational complexity quadratic in the number of individuals whose data have been collected. As a remedy, similar to Mazumdar et al. [19], we propose a Stochastic Gradient Langevin Dynamics (SGLD)-based algorithm to obtain approximate posterior samples [30]. By working on subsets of data, SGLD scales in time.
– The numerical experiments show that AdOBEst-LDP outperforms its non-adaptive counterpart when run with SGLD for posterior sampling. The results also suggest that the utility functions considered in this article are robust and perform well. The MATLAB code at https://github.com/soneraydin/AdOBEst_LDP can be used to reproduce the results obtained in this article.
– Convergence Results: Finally, we provide a theoretical analysis of AdOBEst-LDP. We prove two main results:
(i) The targeted posterior distribution conditional on the observations generated by the adaptive scheme converges to the true parameter in probability as the number of observations, \(n\), grows. This convergence result owes mainly to the smoothness and a special form of concavity of the marginal log-likelihood function of the randomized responses. Another key factor is that the second moment of the sum, up to time \(n\), of the gradient of this log-marginal likelihood increases linearly with \(n\).
(ii) If posterior sampling is performed exactly, the expected frequency with which the algorithm chooses the best subset (according to the utility function) converges to \(1\) as \(n\) goes to \(\infty\).
The theoretical results require fairly weak, realistic, and verifiable assumptions.
Outline:
In Section 2, we discuss the earlier work related to ours. Section 3 presents LDP and the frequency estimation problem and introduces AdOBEst-LDP as a general framework. In Section 4, we delve deeper into the details of AdOBEst-LDP by first presenting RRRR, the proposed randomized response mechanism, then explaining how it chooses an “optimal” subset of categories adaptively at each iteration. Section 4 also presents the utility metrics considered for choosing these subsets in this article. In Section 5, we provide the details of the posterior sampling methods considered in this article, particularly SGLD. The theoretical analysis of AdOBEst-LDP is provided in Section 6. Section 7 contains the numerical experiments. Finally, Section 8 provides some concluding remarks. All the proofs of the theoretical results are given in the appendices.
2 Related Literature
Frequency estimation under the LDP setting has been an increasingly popular research area in recent years. Along with its basic application (estimation of discrete probabilities from locally privatized data), it is also used for a wide range of other estimation and learning purposes, such as estimation of confidence intervals and confidence sets for a population mean [28], estimation or identification of heavy hitters [10, 25, 34], estimation of quantiles [5], frequent itemset mining [33], estimation of degree distributions in social networks [23], and distributed training of graph neural networks with categorical features and labels [4]. The methods that are proposed for \(\epsilon\)-LDP frequency estimation also form the basis of more complex inferential tasks (with some modifications on these methods), such as the release of “marginals” (contingency tables) between multiple categorical features and their correlations, as in the work of Cormode et al. [6].
AdOBEst-LDP employs RRRR as its randomized mechanism to produce randomized responses. RRRR is a modified version of the SRR mechanism (also known as generalized randomized response, \(k\)-randomized response, and direct encoding in the literature). Given \(X\) as its input, SRR outputs \(X\) with probability \(\frac{e^{\epsilon}}{e^{\epsilon}+K-1}\); otherwise, it outputs one of the other categories at random. This is a well-studied mechanism in the DP literature, and the statistical properties of its basic version (such as its estimation variance) can be found in [26] and [27]. When \(K\) is large, the utility of SRR can be too low. RRRR in AdOBEst-LDP is designed to circumvent this problem by constraining its output to a subset of categories. Unlike SRR, the perturbation probability of responses in our algorithm changes adaptively, depending on the cardinality of the selected subset of categories (which we explain in detail in Section 4) for the privatization of \(X\), and the cardinality of its complementary set.
The use of information metrics as utility functions in LDP protocols has been an active line of research in recent years. In the work of Kairouz et al. [12], information metrics like \(f\)-divergence and mutual information are used for selecting optimal LDP protocols. In the same vein, Steinberger [22] uses Fisher information as the utility metric for finding a nearly optimal LDP protocol for the frequency estimation problem, and Lopuhaä-Zwakenberg et al. [18] use it for comparing the utility of various LDP protocols for frequency estimation and finding the optimal one. In these works, the mentioned information metrics are used statically, i.e., a protocol is chosen once and for all for a given estimation task. The approaches in these works suffer from high computational complexity for large values of \(K\), because the search space for optimal protocols grows on the order of \(2^{K}\). In some other works, such as Wang et al. [24], a randomly sampled subset of size \(k\leq K\) is used to improve the efficiency of this task, where the optimal \(k\) is determined by maximizing the mutual information between the real data and the privatized data. However, this approach is also static, as the optimal subset size \(k\) is selected only once, and the optimization procedure only determines \(k\) and not the subset itself. Unlike those static approaches, AdOBEst-LDP dynamically uses the information metric (such as the Fisher Information Matrix (FIM) and the other alternatives in Section 4.3) to select the optimal subset at each timestep. In addition, in the subset selection step of AdOBEst-LDP, only \(K\) candidate subsets are compared in terms of their utilities at each iteration, enabling computational tractability. This way of tackling the problem requires computing the given information metric only \(K\) times per iteration. We will provide further details of this approach in Section 4.3 and a computational complexity analysis in Section 4.4.
Another use of the Fisher Information in the LDP literature is for bounding the estimation error for a given LDP protocol. For example, Barnes et al. [3] use Fisher Information inside van Trees inequality, the Bayesian version of the Cramér-Rao bound [9], for bounding the estimation error of various LDP protocols for Gaussian mean estimation and frequency estimation. Again, their work provides rules for choosing optimal protocols for a given \(\epsilon\) in a static way. As a similar example, Acharya et al. [1] derive a general information contraction bound for parameter estimation problems under LDP and show its relation to van Trees inequality as its special case. To our knowledge, our approach is the first one that adaptively uses a utility metric to dynamically update the inner workings of an LDP protocol for estimating categorical distributions.
The idea of building adaptive mechanisms for improved estimation under LDP has been studied in the literature, although the focus and methodology of those works differ from ours. For example, Joseph et al. [11] proposed a two-step adaptive method to estimate the unknown mean parameter of data from a Gaussian distribution. In this method, the users are split into two groups: an initial mean estimate is obtained from the perturbed data of the first group, and the data from the second group are transformed adaptively according to that initial estimate. Similarly, Wei et al. [29] proposed another two-step adaptive method for the mean estimation problem, in which the aggregator first computes a rough distribution estimate from the noisy data of a small sample of users, which is then used for adjusting the amount of perturbation for the data of the remaining users. While Joseph et al. [11] and Wei et al. [29] consider a two-stage method, AdOBEst-LDP seeks to adapt continually by updating its LDP mechanism each time an individual’s information is collected. Similar to our work, Yıldırım [32] has recently proposed an adaptive LDP mechanism for online parameter estimation for continuous distributions. The LDP mechanism of Yıldırım [32] contains a truncation step with boundaries adapted to the estimate from the past data according to a utility function based on the Fisher information. Unfortunately, the parameter estimation step of Yıldırım [32] does not scale in time. Differently from Yıldırım [32], AdOBEst-LDP focuses on categorical distributions, considers several other utility functions to update its LDP mechanism, employs a scalable parameter estimation step, and its performance is backed up with theoretical results.
3 Problem Definition and General Framework
Suppose we are interested in a discrete probability distribution \(\mathcal{P}\) of a certain form of sensitive categorical information \(X\in[K]:=\{1,\ldots,K\}\) of individuals in a population. Hence, \(\mathcal{P}\) is a categorical distribution \(\text{Cat}(\theta^{\ast})\) with a probability vector
\begin{align*}\theta^{\ast}=(\theta^{\ast}_{1},\ldots,\theta^{\ast}_{K})\in\Delta,\end{align*}
where \(\Delta\) is the \((K-1)\)-dimensional probability simplex,
\begin{align*}\Delta:=\left\{\theta\in\mathbb{R}^{K}:\sum_{k=1}^{K}\theta_{k}=1\text { and }\theta_{k}\geq 0\text{ for }k\in[K]\right\}.\end{align*}
We assume a setting where individuals’ sensitive data are collected privately and sequentially in time. The privatization is performed via a randomized algorithm that, upon taking a category index in \([K]\) as an input, returns a random category index in \([K]\) such that the whole data collection process is \(\epsilon\)-LDP (see Definition 1). Let \(X_{t}\) and \(Y_{t}\) be the private information and the randomized response of individual \(t\), respectively. According to Definition 1 for LDP, the following inequality must be satisfied for all triples \((x,x^{\prime},y)\in[K]^{3}\) for the randomized mechanism to be \(\epsilon\)-LDP:
\begin{align*}\mathbb{P}(Y_{t}=y\,|\,X_{t}=x)\leq e^{\epsilon}\,\mathbb{P}(Y_{t}=y\,|\,X_{t}=x^{\prime}).\tag{1}\end{align*}
The inferential goal is to estimate \(\theta^{\ast}\) sequentially based on the responses \(Y_{1},Y_{2},\ldots\), and the mechanisms \(\mathcal{M}_{1},\mathcal{M}_{2},\ldots\) that are used to generate those responses. Specifically, Bayesian estimation is considered, whereby the target is the posterior distribution, denoted by \(\Pi(\mathrm{d}\theta|Y_{1:n},\mathcal{M}_{1:n})\), given a prior probability distribution with pdf \(\eta(\theta)\) on \(\Delta\).
This article concerns the Bayesian estimation of \(\theta\) while adapting the randomized mechanism to improve the estimation utility continually. We propose a general framework called AdOBEst-LDP, in which the randomized mechanism at time \(t\) is adapted to the data collected until time \(t-1\). AdOBEst-LDP is outlined in Algorithm 1.
Algorithm 1 is fairly general, and it does not describe how to choose the \(\epsilon\)-LDP mechanism \(\mathcal{M}_{t}\) at time \(t\), nor does it provide the details of the posterior sampling. However, it is still worth making some critical observations about the nature of the algorithm. First, at time \(t\) the selection of the \(\epsilon\)-LDP mechanism in Step 1 relies on the posterior sample \(\Theta_{t-1}\), which serves as an estimator of the true parameter \(\theta^{\ast}\) based on the past observations. As we shall see in Section 4, at Step 1 the “best” \(\epsilon\)-LDP mechanism is chosen from a set of candidate LDP mechanisms according to a utility function. This step is relevant only when \(\Theta_{t-1}\) is a reliable estimator of \(\theta^{\ast}\). In other words, Step 1 “exploits” the estimator \(\Theta_{t-1}\). Moreover, the random nature of posterior sampling prevents having too much confidence in the current estimator \(\Theta_{t-1}\) and enables a certain degree of “exploration.” In conclusion, Algorithm 1 utilizes an “exploration-exploitation” approach reminiscent of reinforcement learning. In particular, posterior sampling in Step 3 suggests a strong parallelism between AdOBEst-LDP and the well-known exploration-exploitation approach called Thompson sampling [21].
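To fix ideas, here is a Python skeleton of Algorithm 1 (a sketch with placeholder callables for Steps 1–3, not the article’s implementation):

```python
def adobest_ldp(stream, T, select_mechanism, posterior_sample, prior_sample):
    """Skeleton of Algorithm 1. The three callables stand in for Steps 1-3:
    select_mechanism(theta) returns an eps-LDP mechanism M with M(x) -> y, and
    posterior_sample(history) draws (approximately) from the posterior of theta."""
    theta = prior_sample()                 # Theta_0
    history = []                           # (Y_t, M_t) pairs; X_t is never stored
    for _ in range(T):
        M = select_mechanism(theta)        # Step 1: adapt the mechanism to Theta_{t-1}
        y = M(next(stream))                # Step 2: privatized response Y_t
        history.append((y, M))
        theta = posterior_sample(history)  # Step 3: Theta_t via posterior sampling
    return theta
```

The randomness of `posterior_sample` is what supplies the exploration discussed above; replacing it with a point estimate would make the loop purely exploitative.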
4 Adaptive Randomized Response Mechanism
In this section, we describe Steps 1–2 of AdOBEst-LDP in Algorithm 1, where the \(\epsilon\)-LDP mechanism \(\mathcal{M}_{t}\) is selected at time \(t\) based on the posterior sample \(\Theta_{t-1}\) and a randomized response is generated using \(\mathcal{M}_{t}\). For ease of exposition, we will drop the time index \(t\) throughout the section and let \(\Theta_{t-1}=\theta\).
Recall from Definition 1 that an \(\epsilon\)-LDP randomized mechanism is associated with a conditional probability distribution that satisfies (1). An \(\epsilon\)-LDP mechanism is not unique. One such mechanism is the SRR mechanism. For subsequent use, it is convenient to define SRR generally: we let \(\texttt{SRR}(X;\Omega,\epsilon)\) denote the output of SRR operating on the set \(\Omega\) with LDP parameter \(\epsilon\) when the input is \(X\in\Omega\). Then, we have
\begin{align*}\mathbb{P}\big(\texttt{SRR}(x;\Omega,\epsilon)=y\big)=\begin{cases}\frac{e^{\epsilon}}{e^{\epsilon}+|\Omega|-1},&y=x,\\ \frac{1}{e^{\epsilon}+|\Omega|-1},&y\in\Omega\setminus\{x\}.\end{cases}\end{align*}
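For concreteness, a minimal NumPy sampler consistent with these transition probabilities (the function name is ours):

```python
import numpy as np

def srr(x, omega, eps, rng=np.random.default_rng()):
    """Standard randomized response on the finite set omega (which contains x):
    return x with probability e^eps / (e^eps + |omega| - 1), otherwise a
    uniformly chosen other element of omega."""
    omega = list(omega)
    p_honest = np.exp(eps) / (np.exp(eps) + len(omega) - 1)
    if rng.random() < p_honest:
        return x
    others = [z for z in omega if z != x]
    return others[rng.integers(len(others))]
```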
We aim to develop an alternative randomized mechanism whose response \(Y\) is more informative about \(\theta^{\ast}\) than the one generated as \(Y=\text{SRR}(X;[K],\epsilon)\). The main idea is as follows. Supposing that the posterior sample \(\Theta_{t-1}=\theta\) is an accurate estimate of \(\theta^{\ast}\), it is reasonable to aim for the “best” \(\epsilon\)-LDP mechanism (among a range of candidates) which would maximize the (estimation) utility of \(Y\) if the true parameter were \(\theta^{\ast}=\theta\). We follow this main idea to develop the proposed \(\epsilon\)-LDP mechanism.
4.1 The RRRR Mechanism
Given \(\Theta_{t-1}=\theta\in\Delta\), an informative randomized response mechanism can be constructed by considering a high-probability set \(S\subset[K]\) and a low-probability set \(S^{c}=[K]\setminus S\) for \(X\) (according to \(\theta\)). Then, a sensible alternative to \(\text{SRR}(X;[K],\epsilon)\) would be to confine the randomized response to the set \(S\) (together with a random element from \(S^{c}\), to remain LDP). The expected benefit of this approach is due to (i) using a smaller amount of randomization since \(|S|<K\), and thus (ii) having an informative response when \(X\in S\), which happens with high probability. Based on this approach, we propose RRRR, whose precise steps are given in Algorithm 2.
RRRR has three algorithmic parameters: a subset \(S\) of \([K]\) and two privacy parameters, \(\epsilon_{1}\) and \(\epsilon_{2}\), which operate on \(S\) and \(S^{c}\), respectively. Theorem 1 states the conditions on \(\epsilon_{1}\) and \(\epsilon_{2}\) that are necessary for RRRR to be \(\epsilon\)-LDP. A proof of Theorem 1 is given in Appendix A.1.
Note that when \(S=\emptyset\) and \(\epsilon_{2}=\epsilon\), RRRR reduces to SRR.
4.2 Choosing the Privacy Parameters \(\boldsymbol{\epsilon}_{\textbf{1}}\), \(\boldsymbol{\epsilon}_{\textbf{2}}\)
We elaborate on the choice of \(\epsilon_{1}\) and \(\epsilon_{2}\) in the light of Theorem 1. In RRRR, the probability of an honest response, i.e., \(X=Y\), given \(X\in S\), is
which should be contrasted to \(e^{\epsilon}/(e^{\epsilon}+K-1)\), which would be the probability if \(Y=\text{SRR}(X;[K],\epsilon)\). Anticipating that \(\{X\in S\}\) is likely, one should at least aim for \(\epsilon_{1}\) that satisfies \(\mathbb{P}(X=Y|X\in S)\geq e^{\epsilon}/(e^{\epsilon}+K-1)\) for RRRR to be relevant. This is equivalent to
Taking into account also the constraint that \(\epsilon_{1}\leq\epsilon\) (by Theorem 1), we suggest \(\epsilon_{1}=\kappa\epsilon\), where \(\kappa\in(0,1)\) is a number close to \(1\), such as \(0.9\), to ensure (4) with a significant margin. (It is possible to choose \(\kappa=1\); however, again by Theorem 1, this requires that \(\epsilon_{2}=0\), which renders \(Y\) completely uninformative when \(X\notin S\).) In Section 7, we discuss the choice of \(\kappa\) in more detail.
For the next section, we assume a fixed \(\kappa\in(0,1)\), and set \(\epsilon_{1}=\kappa\epsilon\); and we focus on the selection of \(S\).
4.3 Subset Selection for RRRR
Let \(\texttt{RRRR}(X;S,\epsilon)\) be the random output of RRRR that achieves \(\epsilon\)-LDP by using the subset \(S\) and the privacy parameters \(\epsilon_{1}=\kappa\epsilon\) and \(\epsilon_{2}\) as in (3) when the input is \(X\). Furthermore, let \(U(\theta,S,\epsilon)\) be the (inferential) “utility” of \(Y=\texttt{RRRR}(X;S,\epsilon)\) when \(X\sim\text{Cat}(\theta)\). One would like to choose \(S\) that maximizes \(U(\theta,S,\epsilon)\). (One could also seek to optimize \(\kappa\) in \(\epsilon_{1}=\kappa\epsilon\), too, albeit at the expense of additional computation.)
However, since there are \(2^{K}-1\) feasible choices for \(S\), one must confine the search space for \(S\) in practice. As discussed above, RRRR becomes most relevant when the set \(S\) is a high-probability set. Therefore, for a given \(\theta\), we confine the choices for \(S\) to
\begin{align*}S\in\{S_{\theta,0},S_{\theta,1},\ldots,S_{\theta,K-1}\},\quad\text{where }S_{\theta,k}:=\{\sigma_{\theta}(1),\ldots,\sigma_{\theta}(k)\},\tag{5}\end{align*}
where \(\sigma_{\theta}:=(\sigma_{\theta}(1),\ldots,\sigma_{\theta}(K))\) is the permutation vector for \(\theta\) such that \(\theta_{\sigma_{\theta}(1)}\geq\cdots\geq\theta_{\sigma_{\theta}(K)}\).
Then the subset selection problem can be formulated as finding
\begin{align*}S^{\ast}_{\theta}=\operatorname*{arg\,max}_{S\in\{S_{\theta,0},\ldots,S_{\theta,K-1}\}}U(\theta,S,\epsilon).\tag{6}\end{align*}
The alternatives in (5) can be justified. Since \(S_{\theta,k}\) contains the indices of the \(k\) highest-valued components of \(\theta\), it is expected to cover a large portion of the total probability for \(X\) when \(\theta\) is close to \(\theta^{\ast}\). This can be the case even for a small value of \(k\) relative to \(K\) when the components of \(\theta^{\ast}\) are not evenly distributed. Also, the alternatives cover the basic SRR, which is obtained with \(k=0\) (leading to \(S=\emptyset\) and \(\epsilon_{2}=\epsilon\)).
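A minimal sketch of the restricted search in (5)–(6): sort \(\theta\), form the \(K\) nested prefix subsets, and keep the maximizer of a supplied utility function (the helper names are ours):

```python
import numpy as np

def candidate_subsets(theta):
    """Return the K nested subsets S_{theta,0} = {} up to S_{theta,K-1},
    containing the indices of the largest components of theta, per (5)."""
    order = np.argsort(theta)[::-1]   # sigma_theta: indices by decreasing theta
    return [list(order[:k]) for k in range(len(theta))]

def select_subset(theta, eps, utility):
    """Solve (6) over the K candidates for a given utility(theta, S, eps)."""
    cands = candidate_subsets(theta)
    scores = [utility(theta, S, eps) for S in cands]
    return cands[int(np.argmax(scores))]
```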
In the subsequent sections, we present six different utility functions \(U(\theta,S,\epsilon)\) and justify their relevance to estimation; the usefulness of the proposed functions is also demonstrated in the numerical experiments.
4.3.1 FIM.
The first utility function under consideration is based on the FIM at \(\theta\) according to the distribution of \(Y\) given \(\theta\). It is well known that the inverse of the FIM sets the Cramér-Rao lower bound for the variance of an unbiased estimator. Hence, the Fisher information can be regarded as a reasonable metric to quantify the information contained in \(Y\) about \(\theta\). This approach is adopted in Lopuhaä-Zwakenberg et al. [18] and Steinberger [22] for LDP applications for estimating discrete distributions, and in Alparslan and Yıldırım [2] and Yıldırım [32] for similar problems involving parametric continuous distributions.
For a given \(\theta\in\Delta\), let \(F(\theta;S,\epsilon)\) be the FIM evaluated at \(\theta\) when \(X\sim\text{Cat}(\theta)\) and \(Y=\texttt{RRRR}(X;S,\epsilon)\). Let
\begin{align*}g_{S,\epsilon}(y|x):=\mathbb{P}(Y=y\,|\,X=x),\quad x,y\in[K],\end{align*}
when \(Y=\texttt{RRRR}(X;S,\epsilon)\). The following result states \(F(\theta;S,\epsilon)\) in terms of \(g_{S,\epsilon}\) and \(\theta\). The result is derived in Lopuhaä-Zwakenberg et al. [18]; we also give a simple proof in Appendix A.2. Note that \(F(\theta;S,\epsilon)\) is \((K-1)\times(K-1)\) since \(\theta\) has \(K-1\) free components and \(\theta_{K}=1-\sum_{i=1}^{K-1}\theta_{i}\).
We define the following utility function based on the Fisher information:
\begin{align*}U_{1}(\theta,S,\epsilon)=-\text{Tr}\left[F^{-1}(\theta;S,\epsilon)\right].\tag{8}\end{align*}
This utility function depends on the Fisher information differently from Lopuhaä-Zwakenberg et al. [18] and Steinberger [22], who considered the determinant of the FIM as the utility function. The rationale behind (8) is that, for an unbiased estimator \(\hat{\theta}(Y)\) of \(\theta^{\ast}\) based on \(Y=\texttt{RRRR}(X;S,\epsilon)\), the expected MSE is bounded from below as \(E_{\theta^{\ast}}[\|\hat{\theta}(Y)-\theta^{\ast}\|^{2}]\geq\text{Tr}\left[F^{-1}(\theta^{\ast};S,\epsilon)\right]\), so maximizing \(U_{1}\) minimizes this lower bound. For the utility function in (8) to be well-defined, the FIM needs to be invertible. Proposition 2, proven in Appendix A.2, states that this is indeed the case.
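As an illustration, the sketch below evaluates \(U_{1}\) numerically from the definition of the Fisher information, taking as input the \(K\times K\) conditional matrix \(G\) with \(G[x,y]=g_{S,\epsilon}(y|x)\); the helper name and the direct computation are ours and may differ in form from the closed-form expression in [18]:

```python
import numpy as np

def fim_utility(theta, G):
    """U_1 = -trace(F^{-1}), where F is the (K-1)x(K-1) Fisher information of the
    marginal model h(y|theta) = sum_x theta_x G[x, y], with theta_K eliminated
    via theta_K = 1 - sum_{i<K} theta_i. Assumes h(y|theta) > 0 for all y."""
    K = len(theta)
    h = theta @ G                     # marginal pmf of Y, length K
    D = G[:K - 1, :] - G[K - 1, :]    # dh/dtheta_i = G[i, y] - G[K-1, y]
    F = (D / h) @ D.T                 # F_ij = sum_y D_i(y) D_j(y) / h(y)
    return -np.trace(np.linalg.inv(F))
```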
4.3.2 Entropy of Randomized Response.
For discrete distributions, entropy measures uniformity. Hence, in the LDP framework, a lower entropy for the randomized response \(Y\) implies a more informative \(Y\). Based on that observation, a utility function can be defined as the negative entropy of the marginal distribution of \(Y\),
\begin{align*}U_{2}(\theta,S,\epsilon)=\sum_{y\in[K]}h_{S,\epsilon}(y|\theta)\ln h_{S,\epsilon}(y|\theta),\end{align*}
where \(h_{S,\epsilon}(y|\theta):=\sum_{x\in[K]}g_{S,\epsilon}(y|x)\,\theta_{x}\) is the marginal pmf of \(Y\) given \(\theta\).
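A direct evaluation of \(U_{2}\) from \(G\) and \(\theta\), under the same conventions as the sketch above:

```python
import numpy as np

def entropy_utility(theta, G):
    """U_2: negative Shannon entropy of the marginal pmf of Y,
    h(y|theta) = sum_x theta_x G[x, y]; lower entropy = more informative Y."""
    h = theta @ G
    h = h[h > 0]                        # drop zero-probability responses
    return float(np.sum(h * np.log(h))) # equals minus the entropy of h
```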
4.3.3 TV Distance.
We consider two utility functions based on the TV distance. The first arises from the observation that a more informative response \(Y\) generally leads to a larger change in the posterior distribution of \(X\) given \(Y,\theta\),
\begin{align*}p_{S,\epsilon}(x|y,\theta):=\frac{g_{S,\epsilon}(y|x)\,\theta_{x}}{h_{S,\epsilon}(y|\theta)},\quad x\in[K],\tag{9}\end{align*}
relative to its prior \(\text{Cat}(\theta)\). The expected amount of change can be formulated as the expectation of the TV distance between the prior and posterior distributions with respect to the marginal distribution of \(Y\) given \(\theta\). Then, a utility function can be defined as
\begin{align*}U_{3}(\theta,S,\epsilon)=\mathbb{E}_{\theta}\left[d_{\text{TV}}\big(\text{Cat}(\theta),\,p_{S,\epsilon}(\cdot|Y,\theta)\big)\right],\end{align*}
where the expectation is with respect to \(Y\sim h_{S,\epsilon}(\cdot|\theta)\).
Another utility function is related to the TV distance between the marginal probability distributions of \(X\) given \(\theta\) and of \(Y\) given \(\theta\). Since \(X\) is more informative about \(\theta\) than the randomized response \(Y\), this TV distance is desired to be as small as possible. Hence, a utility function may be formulated as
\begin{align*}U_{4}(\theta,S,\epsilon)=-d_{\text{TV}}\big(\text{Cat}(\theta),\,h_{S,\epsilon}(\cdot|\theta)\big).\end{align*}
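Both TV-based utilities admit a direct \(\mathcal{O}(K^{2})\) evaluation from \(G\) and \(\theta\); a sketch with our helper names (assuming \(h(y|\theta)>0\) for all \(y\)):

```python
import numpy as np

def tv1_utility(theta, G):
    """U_3: expected TV distance between the prior Cat(theta) and the posterior
    p(x|y, theta) of X given Y, averaged over the marginal of Y."""
    h = theta @ G                               # marginal pmf of Y
    post = (G * theta[:, None]) / h[None, :]    # post[x, y] = p(x|y, theta), per (9)
    tv_per_y = 0.5 * np.abs(post - theta[:, None]).sum(axis=0)
    return float(h @ tv_per_y)

def tv2_utility(theta, G):
    """U_4: negative TV distance between the marginal laws of X and Y."""
    h = theta @ G
    return -0.5 * float(np.abs(theta - h).sum())
```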
4.3.4 Expected Squared Error.
One can also wish to choose \(S\) such that the Bayesian estimator of \(X\) given \(Y\) has the lowest expected squared error. Specifically, given \(k\in[K]\), let \(e_{k}\) be a \(K\times 1\) vector of \(0\)s except that its \(k\)th component is \(1\). A utility function can be defined based on that as
\begin{align*}U_{5}(\theta,S,\epsilon)=-\mathbb{E}_{\theta}\left[\|e_{X}-\widehat{e_{X}}(Y)\|^{2}\right],\end{align*}
where \(\mathbb{E}_{\theta}\left[\|e_{X}-\widehat{e_{X}}(Y)\|^{2}\right]\) is the MSE for the estimator \(\widehat{e_{X}}\) of \(e_{X}\) given \(Y\) when \(X\sim\text{Cat}(\theta)\) and \(Y=\texttt{RRRR}(X;S,\epsilon)\), which is known to be minimized when \(\widehat{e_{X}}\) is the Bayesian estimator of \(e_{X}\). Proposition 3 provides an explicit formula for this utility function. A proof is given in Appendix A.2.
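Although Proposition 3’s closed form is not reproduced here, \(U_{5}\) can also be evaluated directly: since the Bayes estimator of \(e_{X}\) is the posterior pmf \(p_{S,\epsilon}(\cdot|y,\theta)\), a short calculation gives \(\mathbb{E}_{\theta}[\|e_{X}-\widehat{e_{X}}(Y)\|^{2}]=\sum_{y}h_{S,\epsilon}(y|\theta)\big(1-\sum_{x}p_{S,\epsilon}(x|y,\theta)^{2}\big)\). A sketch under the same conventions (our derivation, which may differ in form from Proposition 3):

```python
import numpy as np

def mse_utility(theta, G):
    """U_5: negative expected squared error of the Bayes estimator of e_X,
    using E||e_X - p(.|Y)||^2 = sum_y h(y) * (1 - sum_x p(x|y)^2)."""
    h = theta @ G
    post = (G * theta[:, None]) / h[None, :]    # posterior pmf of X per response y
    return -float(h @ (1.0 - (post ** 2).sum(axis=0)))
```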
4.3.5 Probability of Honest Response.
Our last alternative for the utility function is a simple yet intuitive one: the probability of an honest response, i.e.,
\begin{align*}U_{6}(\theta,S,\epsilon)=\mathbb{P}_{\theta}(Y=X),\quad\text{where }X\sim\text{Cat}(\theta)\text{ and }Y=\texttt{RRRR}(X;S,\epsilon).\end{align*}
Recall that, for computational tractability, we confined the possible sets for \(S\) to the subsets \(S_{\theta,k}=\{\sigma_{\theta}(1),\ldots,\sigma_{\theta}(k)\}\), \(k=0,\ldots,K-1\), and selected \(S\) by solving the maximization problem in (6). Remarkably, if \(U_{6}(\theta,S,\epsilon)\) is used as the utility function, the restricted maximization in (6) is equivalent to global maximization, i.e., finding the best \(S\) among all \(2^{K}\) possible subsets. We state this as a theorem and prove it in Appendix A.2.
4.3.6 Semi-Adaptive Approach.
We also consider a semi-adaptive approach, which uses a fixed parameter \(\alpha\in(0,1)\) to select the smallest \(S_{\theta,k}\) in (5) such that \(\mathbb{P}_{\theta}(X\in S_{\theta,k})\geq\alpha\); that is, \(S=\{\sigma_{\theta}(1),\ldots,\sigma_{\theta}(k^{\ast})\}\) is taken such that
\begin{align*}\mathbb{P}_{\theta}(X\in\{\sigma_{\theta}(1),\ldots,\sigma_{\theta}(k^ {\ast}-1)\}) < \alpha\text{ and }\mathbb{P}_{\theta}(X\in\{ \sigma_{\theta}(1),\ldots,\sigma_{\theta}(k^{\ast})\})\geq\alpha.\end{align*}
Again, the idea is to randomize the most likely values of \(X\) with high accuracy. The approach forms the subset \(S\) by including values for \(X\) in descending order of their probabilities (given by \(\theta\)) until the cumulative probability exceeds \(\alpha\). In that way, one expects to obtain a set \(S\) that is small (especially when \(\theta\) is unbalanced) and captures the most likely values of \(X\). The resulting \(S\) has varying cardinality depending on the sampled \(\theta\) at the current timestep.
We call this approach “semi-adaptive” because, while it still adapts to \(\theta\), it uses the fixed parameter \(\alpha\). As we will see in Section 7, the best \(\alpha\) depends on various parameters such as \(\epsilon\), \(K\), and the degree of evenness in \(\theta\).
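A minimal sketch of this selection rule (the function name is ours):

```python
import numpy as np

def semi_adaptive_subset(theta, alpha):
    """Smallest prefix subset of categories (in decreasing order of theta)
    whose cumulative probability reaches alpha."""
    order = np.argsort(theta)[::-1]
    k_star = int(np.searchsorted(np.cumsum(theta[order]), alpha)) + 1
    return list(order[:k_star])
```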
4.4 Computational Complexity of Utility Functions
We now provide the computational complexity analysis of the utility metrics presented in Sections 4.3.1–4.3.5, and that of the semi-adaptive approach in Section 4.3.6, as a function of \(K\). The first row of Table 1 shows the computational complexities of calculating the utility function for a fixed \(S\), and the second row shows the complexities of choosing the best \(S\) according to (6). To solve (6), the utility function generally needs to be calculated \(K\) times, which explains the additional factor of \(K\) in the second row.
Table 1. Computational Complexity of Utility Functions and Choosing \(S\)
\begin{tabular}{lccccccc}
 & Fisher & Entropy & \(\text{TV}_{1}\) & \(\text{TV}_{2}\) & MSE & \(\mathbb{P}_{\theta}(Y=X)\) & Semi-Adaptive \\
Computing utility & \(\mathcal{O}(K^{3})\) & \(\mathcal{O}(K^{2})\) & \(\mathcal{O}(K^{2})\) & \(\mathcal{O}(K^{2})\) & \(\mathcal{O}(K^{2})\) & \(\mathcal{O}(K)\) & NA \\
Choosing \(S\) & \(\mathcal{O}(K^{4})\) & \(\mathcal{O}(K^{3})\) & \(\mathcal{O}(K^{3})\) & \(\mathcal{O}(K^{3})\) & \(\mathcal{O}(K^{3})\) & \(\mathcal{O}(K)\) & \(\mathcal{O}(K)\) \\
\end{tabular}
The least demanding utility function is \(U_{6}\), which is based on \(\mathbb{P}_{\theta}(Y=X)\) and has complexity \(\mathcal{O}(K)\). Moreover, finding the best \(S\) can also be done in \(\mathcal{O}(K)\) time, because one can compute this utility metric for all \(k=0,\ldots,K-1\) by starting with \(S=\emptyset\) and expanding it incrementally. Also note that the semi-adaptive approach does not use a utility metric, and finding \(k^{\ast}\) can be done in \(\mathcal{O}(K)\) time by summing the components of \(\theta\) from largest to smallest until the cumulative sum exceeds the given \(\alpha\) parameter. So, its complexity is \(\mathcal{O}(K)\).
For all these approaches, it is additionally required to sort \(\theta\) beforehand, which is an \(\mathcal{O}(K\ln K)\) operation with an efficient sorting algorithm like merge sort.
In practice, one can choose among these utility functions depending on the nature of the application. When the number of categories \(K\) or the arrival rate of sensitive data is large, we suggest using \(U_{6}\) or a semi-adaptive approach. When \(K\) and the arrival rate of the personal data are both small, the more computationally demanding utility functions can also be used.
Figure 2 shows, for \(K=20\) and various values of \(k\) and \(\epsilon\), the probability of the randomized response being equal to the sensitive information, i.e., \(\mathbb{P}_{\theta}(Y=X)\), vs. \(\theta_{i}/\theta_{i+1}\) when \(S=\{1,\ldots,k\}\) in RRRR. (Recall that this probability corresponds to \(U_{6}(\theta,S,\epsilon)\).) Comparing this probability with \(e^{\epsilon}/(e^{\epsilon}+K-1)\), the probability obtained with \(Y=\text{SRR}(X;[K],\epsilon)\), it can be observed that RRRR can do significantly better than SRR if \(k\) is chosen suitably. The plots demonstrate that the “suitable” \(k\) depends on \(\theta\): while the best \(k\) tends to be larger for a more even \(\theta\), a small \(k\) becomes the better choice for an uneven \(\theta\) (large \(\theta_{i}/\theta_{i+1}\)). This is because, when \(\theta_{i}/\theta_{i+1}\) is large, the probability is concentrated on just a few components, and \(S\) with a small \(k\) captures most of the probability. Moreover, the plots for \(\epsilon=1\) and \(\epsilon=5\) also show the effect of the level of privacy. In more challenging scenarios where \(\epsilon\) is smaller, the gain obtained by RRRR over SRR is bigger.
Fig. 2.
5 Posterior Sampling
Steps 1–2 of AdOBEst-LDP in Algorithm 1 were detailed in the previous section. In this section, we provide the details of Step 3.
Step 3 of AdOBEst-LDP requires sampling from the posterior distribution \(\Pi(\cdot|Y_{1:n},S_{1:n})\) of \(\theta\) given \(Y_{1:n}\) and \(S_{1:n}\) for \(n\geq 1\), where \(S_{t}\) is the subset selected at time \(t\) to generate \(Y_{t}\) from \(X_{t}\). Let \(\pi(\theta|Y_{1:n},S_{1:n})\) denote the pdf of \(\Pi(\cdot|Y_{1:n},S_{1:n})\). Given \(Y_{1:n}=y_{1:n}\) and \(S_{1:n}=s_{1:n}\), the posterior density can be written as
\begin{align*}\pi(\theta|y_{1:n},s_{1:n})\propto\eta(\theta)\prod_{t=1}^{n}h_{s_{t},\epsilon}(y_{t}|\theta).\tag{12}\end{align*}
Note that the right-hand side does not include a transition probability for \(S_{t}\)’s because the sampling procedure of \(S_{t}\) given \(Y_{1:t-1}\) and \(S_{1:t-1}\) does not depend on \(\theta^{\ast}\). Furthermore, we assume that the prior distribution \(\eta(\theta)\) is a Dirichlet distribution \(\theta\sim\text{Dir}(\rho_{1},\ldots,\rho_{K})\) with prior hyper-parameters \(\rho_{k}>0\), for \(k=1,\ldots,K\).
Unfortunately, the posterior distribution in (12) is intractable. Therefore, we resort to approximate sampling approaches using MCMC. Below, we present two MCMC methods, namely SGLD and Gibbs sampling.
5.1 SGLD
SGLD is an asymptotically exact gradient-based MCMC sampling approach that enables the use of subsamples of size \(m\ll t\). A direct application of SGLD to generate samples for \(\theta\) from the posterior distribution in (12) is difficult, because \(\theta\) lives in the probability simplex \(\Delta\), which makes keeping the iterates for \(\theta\) inside \(\Delta\) challenging. We overcome this problem by defining the surrogate variables \(\phi_{1},\ldots,\phi_{K}\) with
\begin{align*}\phi_{k}\sim\text{Gamma}(\rho_{k},1)\text{ independently for }k\in[K],\quad\text{and}\quad\theta_{k}=\frac{\phi_{k}}{\sum_{i=1}^{K}\phi_{i}}.\tag{13}\end{align*}
It is well known that the resulting \((\theta_{1},\ldots,\theta_{K})\) has a Dirichlet distribution \(\text{Dir}(\rho_{1},\ldots,\rho_{K})\), which is exactly the prior distribution \(\eta(\theta)\). Therefore, this change of variables preserves the originally constructed probabilistic model. Moreover, since \(\phi=(\phi_{1},\ldots,\phi_{K})\) takes values in \([0,\infty)^{K}\), we run SGLD for \(\phi\), where the \(j\)th update is
\begin{align*}\phi^{(j+1)}=\left|\phi^{(j)}+\frac{a_{j}}{2}\left(\nabla_{\phi}\ln p(\phi^{(j)})+\frac{n}{m}\sum_{t\in\mathcal{I}_{j}}\nabla_{\phi}\ln h_{s_{t},\epsilon}\big(y_{t}|\theta(\phi^{(j)})\big)\right)+\xi_{j}\right|,\quad\xi_{j}\sim\mathcal{N}(0,a_{j}I_{K}),\tag{14}\end{align*}
where \(p(\phi)\) is the prior density of \(\phi\) induced by (13), \(\mathcal{I}_{j}\subset[n]\) is a random subsample of size \(m\), and \(a_{j}>0\) is the step size.
The reflection in (14) via taking the component-wise absolute value is necessary because each \(\phi_{k}^{(j)}\) must be positive. Step 3 of Algorithm 1 can be approximated by running SGLD for some \(M>0\) iterations. To exploit the SGLD updates from the previous time, one should start the updates at time \(n\) by setting the initial value for \(\phi\) to the last SGLD iterate at time \(n-1\).
The next proposition provides the explicit formulae for the gradients of the log-prior and the log-likelihood of \(\phi\) in (14). A proof is given in Appendix A.3.
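For illustration, the following NumPy sketch performs one update of the form (14). The gradient expressions are our own derivation under the reparametrization (13) — \(\nabla_{\phi_{k}}\ln p(\phi)=(\rho_{k}-1)/\phi_{k}-1\) and \(\nabla_{\phi_{k}}\ln h_{S,\epsilon}(y|\theta(\phi))=\big(g_{S,\epsilon}(y|k)/h_{S,\epsilon}(y|\theta)-1\big)/\sum_{i}\phi_{i}\) — and should be checked against the proposition’s exact formulae; all function and argument names are ours:

```python
import numpy as np

def sgld_step(phi, g_sub, rho, n, a, rng=np.random.default_rng()):
    """One SGLD update of the form (14) for the Gamma surrogates phi (length K).
    g_sub is a list of m vectors: g_sub[i][x] = g_{S_t,eps}(y_t | x) for the i-th
    subsampled response; n is the total number of responses collected so far."""
    s = phi.sum()
    theta = phi / s
    grad_prior = (rho - 1.0) / phi - 1.0     # gradient of ln p(phi) for Gamma(rho_k, 1)
    grad_lik = np.zeros_like(phi)
    for g in g_sub:
        h = float(g @ theta)                 # marginal likelihood h_{S,eps}(y | theta)
        grad_lik += (g / h - 1.0) / s        # chain rule through theta = phi / sum(phi)
    m = len(g_sub)
    noise = rng.normal(scale=np.sqrt(a), size=phi.shape)
    return np.abs(phi + 0.5 * a * (grad_prior + (n / m) * grad_lik) + noise)
```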
5.2 Gibbs Sampling
An alternative to SGLD is the Gibbs sampler, which operates on the joint posterior distribution of \(\theta\) and \(X_{1:n}\) given \(Y_{1:n}=y_{1:n}\) and \(S_{1:n}=s_{1:n}\),
\begin{align*}\pi(\theta,x_{1:n}|y_{1:n},s_{1:n})\propto\eta(\theta)\prod_{t=1}^{n}g_{s_{t},\epsilon}(y_{t}|x_{t})\,\theta_{x_{t}}.\end{align*}
The Gibbs sampler alternates between sampling \(X_{1:n}\) and \(\theta\) from their full conditional distributions. The full conditional distribution of \(X_{1:n}\) factorizes as
\begin{align*}p(x_{1:n}|\theta,y_{1:n},s_{1:n})=\prod_{t=1}^{n}p_{s_{t},\epsilon}(x_{t}|y_{t},\theta),\tag{16}\end{align*}
where \(p_{s_{t},\epsilon}(x_{t}|y_{t},\theta)\) is defined in (9). Therefore, (16) is a product of \(n\) categorical distributions, each with support \([K]\). Furthermore, the full conditional distribution of \(\theta\) is a Dirichlet distribution due to the conjugacy between the categorical and the Dirichlet distributions. Specifically,
\begin{align*}\theta\,|\,x_{1:n},y_{1:n},s_{1:n}\sim\text{Dir}(\rho^{\text{post}}_{1},\ldots,\rho^{\text{post}}_{K}),\end{align*}
where the hyper-parameters of the posterior distribution are given by \(\rho^{\text{post}}_{k}:=\rho_{k}+\sum_{t=1}^{n}\mathbb{I}(x_{t}=k)\) for \(k=1,\ldots,K\).
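A single sweep of this Gibbs sampler, sketched under the same conventions as before (helper names ours):

```python
import numpy as np

def gibbs_sweep(theta, g_list, rho, rng=np.random.default_rng()):
    """One Gibbs sweep: sample X_t ~ p(x | y_t, theta) for every past response
    (g_list[t] is the K-vector g_{S_t,eps}(y_t | x)), then sample theta from its
    Dirichlet full conditional via the categorical-Dirichlet conjugacy."""
    K = len(theta)
    counts = np.zeros(K)
    for g in g_list:                  # O(nK) per sweep: the cost grows with n
        p = g * theta
        p /= p.sum()                  # posterior of X_t given y_t and theta, per (9)
        counts[rng.choice(K, p=p)] += 1
    return rng.dirichlet(rho + counts)
```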
The computational load at time \(t\) of sampling from the \(t\) distributions in (16) is proportional to \(tK\), which renders the computational complexity of Gibbs sampling \(\mathcal{O}(n^{2}K)\) after \(n\) timesteps. This can be computationally prohibitive when \(n\) gets large.
6 Theoretical Analysis
We address two questions concerning AdOBEst-LDP in Algorithm 1 when it is run with RRRR whose subset is selected as described in Section 4.3. (i) Does the targeted posterior distribution based on the observations generated by Algorithm 1 converge to the true value \(\theta^{\ast}\)? (ii) How frequently does Algorithm 1 with RRRR select the optimum subset \(S\) according to the chosen utility function?
6.1 Convergence of the Posterior Distribution
We begin by developing the joint probability distribution of the random variables involved in AdOBEst-LDP.
– Given \(Y_{1:n}\) and \(S_{1:n}\), the posterior distribution \(\Pi(\cdot|Y_{1:n},S_{1:n})\) is defined such that, for any measurable set \(A\subseteq\Delta\), the posterior probability of \(\{\theta\in A\}\) is given by
\begin{align*}\Pi(A|Y_{1:n},S_{1:n})=\frac{\int_{A}\eta(\theta)\prod_{t=1}^{n}h_{S_{t},\epsilon}(Y_{t}|\theta)\,\mathrm{d}\theta}{\int_{\Delta}\eta(\theta)\prod_{t=1}^{n}h_{S_{t},\epsilon}(Y_{t}|\theta)\,\mathrm{d}\theta}.\tag{17}\end{align*}
Let \(Q(\cdot|Y_{1:n},S_{1:n},\Theta_{n-1})\) be the probability distribution corresponding to the posterior sampling process for \(\Theta_{n}\). Note that if exact posterior sampling was used, we would have \(Q(A|Y_{1:n},S_{1:n},\)\(\Theta_{n-1})=\Pi(A|Y_{1:n},S_{1:n})\); however, when approximate sampling techniques are used to target \(\Pi\), such as SGLD or Gibbs sampling, the equality does not hold in general.
Let \(S^{\ast}_{\theta}:=\operatorname*{arg\,max}_{S\in\{S_{\theta,0},\ldots,S_{\theta,K-1}\}}U(\theta,S,\epsilon)\) be the best subset according to \(\theta\), where \(S_{\theta,k}=\{\sigma_{\theta}(1),\ldots,\sigma_{\theta}(k)\}\) is defined in (5). Given \(\Theta_{1:t-1}\) and \(Y_{1:t}\), \(S_{t}\) depends only on \(\Theta_{t-1}\), and it is given by \(S_{t}=S^{\ast}_{\Theta_{t-1}}\).
Combining all, the joint law of \(S_{1:n},Y_{1:n}\) can be expressed as
where we use the convention that \(Q(\mathrm{d}\theta_{0}|Y_{1:0},S_{1:0},\theta_{-1})=\delta_{\theta_{\text{init }}}(\mathrm{d}\theta_{0})\) for an initial value \(\theta_{\text{init}}\in\Delta\).
The posterior probability in (17) is a random variable with respect to \(P_{\theta^{\ast}}\) defined in (18). Theorem 3 establishes that, under the fairly mild Assumption 1 on the prior, \(\Pi(\cdot|Y_{1:n},S_{1:n})\) converges to \(\theta^{\ast}\) regardless of the choice of \(Q\) for posterior sampling.
A proof is given in Appendix A.4.2, where the constant \(c\) in the sets \(\Omega_{n}\) is explicitly given.
6.2 Selecting the Best Subset
Let \(S^{\ast}:=S^{\ast}_{\theta^{\ast}}\) be the best subset at \(\theta^{\ast}\). In this part, we prove that if posterior sampling is performed exactly, the best subset is chosen with an expected long-run frequency of \(1\). Our result relies on some mild assumptions.
The result in (20) can be likened to sublinear regret in reinforcement learning theory.
7 Numerical Results
We tested the performance of AdOBEst-LDP when the subset \(S\) in RRRR is determined according to a utility function from Section 4.3. We compared AdOBEst-LDP, when combined with each of the utility functions defined in Sections 4.3.1–4.3.5, with its non-adaptive counterpart, where SRR is used to generate \(Y_{t}\) at all steps. We also included the semi-adaptive subset selection method of Section 4.3.6 in the comparison. For the semi-adaptive approach, we obtained results for five different values of its \(\alpha\) parameter, namely \(\alpha\in\{0.2,0.6,0.8,0.9,0.95\}\).
We ran each method for 50 Monte Carlo runs. Each run contained \(T=500K\) timesteps. For each run, the sensitive information was generated as \(X_{t}\overset{\text{i.i.d.}}{\sim}\text{Cat}(\theta^{\ast})\), where \(\theta^{\ast}\) itself was randomly drawn from \(\text{Dirichlet}(\rho,\ldots,\rho)\). Here, the parameter \(\rho\) was used to control the unevenness among the components of \(\theta^{\ast}\) (smaller \(\rho\) leads to more uneven components in general). At each timestep, Step 3 of Algorithm 1 was performed by running \(M=20\) updates of an SGLD-based MCMC kernel as described in Section 5.1. In SGLD, we took the subsample size \(m=50\) and the step-size parameter \(a=\frac{0.5}{t}\) at timestep \(t\). The prior hyper-parameters for the Gamma surrogate variables were taken as \(\rho_{0}=1_{K}\). The posterior sample \(\Theta_{t}\) was taken as the last iterate of those SGLD updates. Only for the last timestep, \(t=T\), the number of MCMC iterations was taken as \(2{,}000\) to reliably calculate the final estimate \(\hat{\theta}\) of \(\theta\) by averaging the last \(1{,}000\) of those \(2{,}000\) iterates. (This average is the MCMC approximation of the posterior mean of \(\theta\) given \(Y_{1:T}\) and \(S_{1:T}\).) We compared the posterior mean estimate with the true value, taking as the performance measure the TV distance between \(\text{Cat}(\theta^{\ast})\) and \(\text{Cat}(\hat{\theta})\), that is,
\begin{align*}d_{\text{TV}}\big(\text{Cat}(\theta^{\ast}),\text{Cat}(\hat{\theta})\big)=\frac{1}{2}\sum_{k=1}^{K}\big|\theta^{\ast}_{k}-\hat{\theta}_{k}\big|.\tag{21}\end{align*}
Finally, the comparison among the methods was repeated for all the combinations (\(K,\epsilon,\kappa,\rho\)) of \(K\in\{10,20\}\), \(\epsilon\in\{0.5,1,5\}\), \(\kappa\in\{0.8,0.9\}\), and \(\rho\in\{0.01,0.1,1\}\).
The accuracy results for the methods in comparison are summarized in Figures 3 and 4 in terms of the error given in (21). The box plots are centered at the error median, and the whiskers stretch from the minimum to the maximum over the 50 MC runs, excluding the outliers. When the medians are compared, the fully adaptive algorithms, which use a utility function to select \(S_{t}\), yield results comparable to the best semi-adaptive approach in both figures. As one may expect, the non-adaptive approach yielded the worst results in general, especially in the high-privacy regimes (smaller \(\epsilon\)) and for uneven \(\theta^{\ast}\) (smaller \(\rho\)). We also observe that, while most utility metrics are generally robust, the one based on the FIM seems sensitive to the choice of the \(\epsilon_{1}\) parameter. This can be attributed to the fact that the FIM approaches singularity when \(\epsilon_{2}\) is too small, which is the case if \(\epsilon_{1}\) is chosen too close to \(\epsilon\). Supporting this, we see that when \(\epsilon_{1}=0.8\epsilon\), the FIM-based utility metric becomes more robust. Another remarkable observation is that the utility function based on the probability of honest response, \(U_{6}\), is competitive despite being the computationally lightest utility metric. Finally, while the semi-adaptive approach is computationally less demanding than most fully adaptive versions, the results show that it can fail dramatically if its \(\alpha\) hyper-parameter is not tuned properly. In contrast, the fully adaptive approaches adapt well to \(\epsilon\) and \(\rho\) and do not need additional tuning.
Fig. 3.
Fig. 4.
In addition to the error graphs, the heat maps in Figures 5 and 6 show the effect of the parameters \(\rho\) and \(\epsilon\) on the average cardinality of the subsets \(S\) chosen by each algorithm (again, averaged over 50 Monte Carlo runs). According to these figures, increasing the value of \(\rho\) increases the cardinalities of the subsets chosen by each algorithm (except the non-adaptive one, since it uses all \(K\) categories rather than a smaller subset). This is expected, since higher \(\rho\) values cause \(\text{Cat}(\theta^{\ast})\) to be closer to the uniform distribution, thus causing \(X\) to be more evenly distributed among the categories. Moreover, for small \(\rho\), increasing the value of \(\epsilon\) decreases the cardinalities of these subsets, which can be attributed to a higher \(\epsilon\) leading to more accurate estimation. When we compare the utility functions for the adaptive approach among themselves, we observe that for \(\epsilon_{1}=0.8\epsilon\), the TV\(_{1}\)-based utility function uses the subsets with the largest cardinality (on average). However, when we increase \(\epsilon_{1}\) to \(0.9\epsilon\), the FIM-based utility function uses the subsets with the largest cardinality. This might be due to the previously mentioned sensitivity of the FIM-based utility function to the choice of the \(\epsilon_{1}\) parameter, which affects the invertibility of the FIM when \(\epsilon_{1}\) is too close to \(\epsilon\).
Fig. 5.
Fig. 6.
8 Conclusion
In this article, we proposed a new adaptive framework, AdOBEst-LDP, for online estimation of the distribution of categorical data under the \(\epsilon\)-LDP constraint. AdOBEst-LDP, run with RRRR for randomization, encompasses both privatization of the sensitive data and accurate Bayesian estimation of population parameters from privatized data in a dynamic way. Our privatization mechanism (RRRR) is distinguished from the baseline approach (SRR) in that it operates on a smaller subset of the sample space rather than the entire sample space. We employed an adaptive approach to dynamically adjust the subset at each iteration, based on the knowledge about \(\theta^{\ast}\) obtained from the past data. The selection of these subsets was guided by the various alternative utility functions used throughout the article. For the posterior sampling of \(\theta\) at each iteration, we employed an efficient SGLD-based sampling scheme on a constrained region, namely the \((K-1)\)-dimensional probability simplex. We distinguished this scheme from Gibbs sampling, which uses all of the historical data and is not scalable to large datasets.
In the numerical experiments, we demonstrated that AdOBEst-LDP can estimate the population distribution more accurately than the non-adaptive approach under experimental settings with various privacy levels \(\epsilon\) and degrees of evenness among the components of \(\theta^{\ast}\). While the performance of AdOBEst-LDP is generally robust for all the utility functions considered in the article, the utility function based on the probability of honest response can be preferred due to its much lower computational complexity than the other utility functions. Our experiments also showed that the accuracy of the adaptive approach is comparable to that of the semi-adaptive approach. However, the semi-adaptive approach requires adjusting its parameter \(\alpha\) carefully, which makes it challenging to use.
In a theoretical analysis, we showed that, regardless of whether the posterior sampling is conducted exactly or approximately, the posterior distribution targeted in AdOBEst-LDP converges to the true population parameter \(\theta^{\ast}\). We also showed that, under exact posterior sampling, the best subset for a given utility function is selected with an expected long-run frequency of \(1\).
It is important to note that the observations \(\{Y_{t}\}_{t\geq 1}\) generated by AdOBEst-LDP are dependent. Therefore, the theoretical analysis presented in Section 6 can also be seen as a contribution to the literature on the convergence of posterior distributions with dependent data. Additionally, we have already highlighted an analogy between AdOBEst-LDP and Thompson sampling [21]. Both methods involve posterior sampling, and the subset selection step in AdOBEst-LDP can be viewed as analogous to the action selection step in reinforcement learning schemes. In this regard, we believe that the theoretical results may also inspire future research on the convergence of dynamic reinforcement learning algorithms, especially those based on Thompson sampling.
Categorical distributions serve as useful nonparametric discrete approximations of continuous distributions. As a potential future direction, AdOBEst-LDP could be adapted for nonparametric density estimation. A key challenge in this context would be determining how to partition the support domain of the data.
RRRR is a practical LDP mechanism with a subset parameter that adapts based on past data. It has been shown to outperform SRR when leveraging the knowledge of \(\theta^{\ast}\). However, in this work, it is not proven that RRRR is the optimal \(\epsilon\)-LDP mechanism with respect to the utility functions considered. While the optimal \(\epsilon\)-LDP mechanism could be identified numerically by solving a constrained optimization problem—where the utility function is maximized under the LDP constraint—it may not have a closed-form solution for complex utility functions. A promising direction for future research would be to compare the optimal \(\epsilon\)-LDP mechanism with the \(\epsilon\)-LDP RRRR mechanism by analyzing their transition probability matrices and assessing the suboptimality of RRRR. Additionally, insights from the optimal \(\epsilon\)-LDP mechanism could inspire the development of new, tractable, and approximately optimal \(\epsilon\)-LDP mechanisms.
Acknowledgments
We thank our colleague Prof. Berrin Yanıkoğlu for reviewing the draft of the article and providing insightful comments.
Note that \(W_{t}(\theta,\theta^{\prime})\) is bounded as \(0\leq W_{t}(\theta,\theta^{\prime})\leq 1\). We now quote a critical theorem from Pelekis and Ramon [20, Theorem 3.2] regarding the sum of dependent and bounded random variables, which will be useful for bounding \(\sum_{t=1}^{n}V_{t}(\theta,\theta^{\prime})\).
In the following, we apply Theorem 5 for \(\sum_{t=1}^{n}V_{t}(\theta,\theta^{\prime})\).
Smoothness of \(\ln h_{S,\epsilon}(y|\theta)\):
Next, we establish the \(L\)-smoothness of \(\ln h_{S,\epsilon}(y|\theta)\) as a function of \(\theta\) for any \(y\in[K]\) and \(S\subset[K]\). Some technical lemmas are needed first.
Second Moment of the Gradient at\(\theta^{\ast}\):
Let the average log-marginal likelihoods be defined as
where \(V_{t}(\theta,\theta^{\prime})\) was defined in (37) and \(c_{u}>0\) was defined in the proof of Lemma 9, respectively. The proof of Theorem 3 requires the following lemma concerning \(\mathcal{E}^{\mu}_{n}(\theta,\theta^{\prime})\).
A.4.3 Convergence of the Expected Frequency.
References
[1] Jayadev Acharya, Clément L. Canonne, Ziteng Sun, and Himanshu Tyagi. 2023. Unified lower bounds for interactive high-dimensional estimation under information constraints. In Advances in Neural Information Processing Systems. A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.), Vol. 36, Curran Associates, Inc., New Orleans, US, 51133–51165. Retrieved from https://proceedings.neurips.cc/paper_files/paper/2023/file/a07e87ecfa8a651d62257571669b0150-Paper-Conference.pdf
Barış Alparslan and Sinan Yıldırım. 2022. Statistic selection and MCMC for differentially private Bayesian estimation. Statistics and Computing 32, 5 (2022), 66.
Leighton Pate Barnes, Wei-Ning Chen, and Ayfer Özgür. 2020. Fisher information under local differential privacy. IEEE Journal on Selected Areas in Information Theory 1, 3 (2020), 645–659.
Karuna Bhaila, Wen Huang, Yongkai Wu, and Xintao Wu. 2024. Local differential privacy in graph neural networks: A reconstruction approach. In Proceedings of the 2024 SIAM International Conference on Data Mining (SDM ’24). SIAM, 1–9.
Graham Cormode and Akash Bharadwaj. 2022. Sample-and-threshold differential privacy: Histograms and applications. In Proceedings of the International Conference on Artificial Intelligence and Statistics. PMLR, 1420–1431.
Graham Cormode, Tejas Kulkarni, and Divesh Srivastava. 2018. Marginal release under local differential privacy. In Proceedings of the 2018 International Conference on Management of Data, 131–146.
James Foulds, Joseph Geumlek, Max Welling, and Kamalika Chaudhuri. 2016. On the theory and practice of privacy-preserving Bayesian data analysis. arXiv:1603.07294. Retrieved from https://arxiv.org/abs/1603.07294
Richard D. Gill and Boris Y. Levit. 1995. Applications of the van Trees inequality: A Bayesian Cramér-Rao bound. Bernoulli 1, 1/2 (1995), 59–79. Retrieved from http://www.jstor.org/stable/3318681
Jinyuan Jia and Neil Zhenqiang Gong. 2019. Calibrate: Frequency estimation and heavy hitter identification with local differential privacy via incorporating prior knowledge. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications. IEEE, 2008–2016.
Peter Kairouz, Sewoong Oh, and Pramod Viswanath. 2016. Extremal mechanisms for local differential privacy. The Journal of Machine Learning Research 17, 1 (2016), 492–542.
Vishesh Karwa, Aleksandra B. Slavković, and Pavel Krivitsky. 2014. Differentially private exponential random graphs. In Privacy in Statistical Databases. Josep Domingo-Ferrer (Ed.). Springer International Publishing, Cham, 143–155.
Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. 2011. What can we learn privately? SIAM Journal on Computing 40, 3 (2011), 793–826. DOI:
Chansoo Kim, Jinhyouk Jung, and Younshik Chung. 2011. Bayesian estimation for the exponentiated Weibull model under type II progressive censoring. Statistical Papers 52, 1 (2011), 53–70. DOI:
Tianyu Liu, Lulu Zhang, Guang Jin, and Zhengqiang Pan. 2022. Reliability assessment of heavily censored data based on E-Bayesian estimation. Mathematics 10, 22 (2022). DOI:
Showkat Ahmad Lone, Hanieh Panahi, Sadia Anwar, and Sana Shahab. 2024. Inference of reliability model with burr type XII distribution under two sample balanced progressive censored samples. Physica Scripta 99, 2 (Jan. 2024), 025019. DOI:
Milan Lopuhaä-Zwakenberg, Boris Škorić, and Ninghui Li. 2022. Fisher information as a utility metric for frequency estimation under local differential privacy. In Proceedings of the 21st Workshop on Privacy in the Electronic Society, 41–53.
Eric Mazumdar, Aldo Pacchiano, Yi-An Ma, Peter L. Bartlett, and Michael I. Jordan. 2020. On approximate Thompson sampling with Langevin algorithms. In Proceedings of the 37th International Conference on Machine Learning (ICML’20). JMLR.org, Article 631, 11 pages.
Christos Pelekis and Jan Ramon. 2017. Hoeffding’s inequality for sums of dependent random variables. Mediterranean Journal of Mathematics 14, 6 (2017), 243. DOI:
M. Wang, H. Jiang, P. Peng, and Y. Li. 2024. Accurately estimating frequencies of relations with relation privacy preserving in decentralized networks. IEEE Transactions on Mobile Computing 23, 5 (May 2024), 6408–6422. DOI:
Shaowei Wang, Liusheng Huang, Pengzhan Wang, Yiwen Nie, Hongli Xu, Wei Yang, Xiang-Yang Li, and Chunming Qiao. 2016. Mutual information optimally local private discrete distribution estimation. arXiv:1607.08025. Retrieved from https://arxiv.org/abs/1607.08025
S. Wang, Y. Li, Y. Zhong, K. Chen, X. Wang, Z. Zhou, F. Peng, Y. Qian, J. Du, and W. Yang. 2024. Locally private set-valued data analyses: Distribution and heavy hitters estimation. IEEE Transactions on Mobile Computing 23, 8 (Aug 2024), 8050–8065. DOI:
Tianhao Wang, Jeremiah Blocki, Ninghui Li, and Somesh Jha. 2017. Locally differentially private protocols for frequency estimation. In Proceedings of the 26th USENIX Security Symposium (USENIX Security ’17), 729–745.
Tianhao Wang, Milan Lopuhaä-Zwakenberg, Zitao Li, Boris Skoric, and Ninghui Li. 2020. Locally differentially private frequency estimation with consistency. In Proceedings of the 27th Annual Network and Distributed System Security Symposium (NDSS ’20). 16 pages. DOI:
Ian Waudby-Smith, Steven Wu, and Aaditya Ramdas. 2023. Nonparametric extensions of randomized response for private confidence sets. In Proceedings of the International Conference on Machine Learning. PMLR, 36748–36789.
Max Welling and Yee Whye Teh. 2011. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on International Conference on Machine Learning (ICML’11). Omnipress, Madison, WI, 681–688.
Oliver Williams and Frank Mcsherry. 2010. Probabilistic inference and differential privacy. In Advances in Neural Information Processing Systems. J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta (Eds.), Vol. 23, Curran Associates, Inc. Retrieved from https://proceedings.neurips.cc/paper/2010/file/fb60d411a5c5b72b2e7d3527cfc84fd0-Paper.pdf
Dan Zhao, Su-Yun Zhao, Hong Chen, Rui-Xuan Liu, Cui-Ping Li, and Xiao-Ying Zhang. 2023. Hadamard encoding based frequent itemset mining under local differential privacy. Journal of Computer Science and Technology 38, 6 (2023), 1403–1422.
Youwen Zhu, Yiran Cao, Qiao Xue, Qihui Wu, and Yushu Zhang. 2024. Heavy hitter identification over large-domain set-valued data with local differential privacy. IEEE Transactions on Information Forensics and Security 19 (2024), 414–426. DOI: