Abstract
We introduce a formal model for the information leakage of probability distributions and define a notion called distribution privacy as the local differential privacy for probability distributions. Roughly, the distribution privacy of a local obfuscation mechanism means that the attacker cannot significantly gain any information on the distribution of the mechanism’s input by observing its output. Then we show that existing local mechanisms can hide input distributions in terms of distribution privacy, while deteriorating the utility by adding too much noise. For example, we prove that the Laplace mechanism needs to add a large amount of noise proportionally to the infinite Wasserstein distance between the two distributions we want to make indistinguishable. To improve the tradeoff between distribution privacy and utility, we introduce a local obfuscation mechanism, called a tupling mechanism, that adds random dummy data to the output. Then we apply this mechanism to the protection of user attributes in location-based services. By experiments, we demonstrate that the tupling mechanism outperforms popular local mechanisms in terms of attribute obfuscation and service quality.
This work was partially supported by JSPS KAKENHI Grant JP17K12667, JP19H04113, and Inria LOGIS project.
Keywords
- Local differential privacy
- Obfuscation mechanism
- Location privacy
- Attribute privacy
- Wasserstein metric
- Compositionality
1 Introduction
Differential privacy [1] is a quantitative notion of privacy that has been applied to a wide range of areas, including databases, geo-locations, and social networks. The protection of differential privacy can be achieved by adding controlled noise to given data that we wish to hide or obfuscate. In particular, a number of recent studies have proposed local obfuscation mechanisms [2,3,4], namely, randomized algorithms that perturb each single “point” datum (e.g., a geo-location point) by adding certain probabilistic noise before sending it out to a data collector. However, the obfuscation of a probability distribution of points (e.g., a distribution of locations of users at home/outside home) still remains to be investigated in terms of differential privacy.
For example, a location-based service (LBS) provider collects each user’s geo-location data to provide a service (e.g., navigation or point-of-interest search), and such services have been widely studied in terms of the privacy of user locations. As shown in [3, 5], users can hide their accurate locations by sending to the LBS provider only approximate location information calculated by an obfuscation mechanism.
Nevertheless, a user’s location information can be used by an attacker to infer the user’s attributes (e.g., age, gender, social status, and residence area) or activities (e.g., working, sleeping, and shopping) [6,7,8,9]. For example, when an attacker knows the distribution of residence locations, he may detect whether given users are at home or outside home after observing their obfuscated locations. For another example, an attacker may learn whether users are rich or poor by observing their obfuscated behaviors. These attributes can be exploited by robbers, hence should be protected from them. Privacy issues of such attribute inference are also known in other applications, including recommender systems [10, 11] and online social networks [12, 13]. However, to our knowledge, no literature has addressed the protection of attributes in terms of local differential privacy.
To illustrate the privacy of attributes in an LBS, let us consider a running example where users try to prevent an attacker from inferring whether they are at home or not. Let \(\lambda _{{ home}}\) and \(\lambda _{{ out}}\) be the probability distributions of locations of the users at home and outside home, respectively. Then the privacy of this attribute means that the attacker cannot learn from an obfuscated location whether the actual location follows the distribution \(\lambda _{{ home}}\) or \(\lambda _{{ out}}\).
This can be formalized using differential privacy. For each \(t\in \{ { home}, { out}\}\), we denote by \(p( y \,|\, \lambda _{t})\) the probability of observing an obfuscated location y when an actual location is distributed over \(\lambda _{t}\). Then the privacy of \(t\) is defined by: for all \(y\in \mathcal {Y}\),
\( p( y \,|\, \lambda _{{ home}}) \le e^{\varepsilon }\, p( y \,|\, \lambda _{{ out}}) ~~\text{ and }~~ p( y \,|\, \lambda _{{ out}}) \le e^{\varepsilon }\, p( y \,|\, \lambda _{{ home}}) {,} \)
which represents that the attacker cannot distinguish whether the users follow the distribution \(\lambda _{{ home}}\) or \(\lambda _{{ out}}\) with degree of \(\varepsilon \).
To generalize this, we define a notion, called distribution privacy (DistP), as the differential privacy for probability distributions. Roughly, we say that a mechanism \( A \) provides DistP w.r.t. \(\lambda _{{ home}}\) and \(\lambda _{{ out}}\) if no attacker can detect whether the actual location (input to \( A \)) is sampled from \(\lambda _{{ home}}\) or \(\lambda _{{ out}}\) after observing an obfuscated location y (output by \( A \)) (see Footnote 1). Here we note that each user applies the mechanism \( A \) locally by herself, hence can customize the amount of noise added to y according to the attributes she wants to hide.
Although existing local differential privacy mechanisms are designed to protect point data, they also hide the distribution that the point data follow. However, we demonstrate that they need to add a large amount of noise to obfuscate distributions, and thus deteriorate the utility of the mechanisms.
To achieve both high utility and strong privacy of attributes, we introduce a mechanism, called the tupling mechanism, that not only perturbs an actual input, but also adds random dummy data to the output. Then we prove that this mechanism provides DistP. Since the random dummy data obfuscate the shape of the distribution, users can instead reduce the amount of noise added to the actual input, hence they get better utility (e.g., quality of a POI service).
This implies that DistP is a relaxation of differential privacy that guarantees the privacy of attributes while achieving higher utility by weakening the differentially private protection of point data. For example, suppose that users do not mind revealing their actual locations outside home, but want to hide (e.g., from robbers) the fact that they are outside home. When the users employ the tupling mechanism, they output both their (slightly perturbed) actual locations and random dummy locations. Since their outputs include their (roughly) actual locations, they obtain high utility (e.g., learning shops near their locations), while their actual location points are protected only weakly by differential privacy. However, their attributes at home/outside home are hidden among the dummy locations, hence protected by DistP. By experiments, we demonstrate that the tupling mechanism is useful to protect the privacy of attributes, and outperforms popular existing mechanisms (the randomized response [14], the planar Laplace [3] and Gaussian mechanisms) in terms of DistP and service quality.
Our Contributions. The main contributions of this work are given as follows:
-
We propose a formal model for the privacy of probability distributions in terms of differential privacy. Specifically, we define the notion of distribution privacy (DistP) to represent that the attacker cannot significantly gain information on the distribution of a mechanism’s input by observing its output.
-
We provide theoretical foundation of DistP, including its useful properties (e.g., compositionality) and its interpretation (e.g., in terms of Bayes factor).
-
We quantify the effect of distribution obfuscation by existing local mechanisms. In particular, we show that (extended) differential privacy mechanisms are able to make any two distributions less distinguishable, while deteriorating the utility by adding too much noise to protect all point data.
-
For instance, we prove that extended differential privacy mechanisms (e.g., the Laplace mechanism) need to add a large amount of noise proportionally to the \(\infty \)-Wasserstein distance \( W _{\infty , d }(\lambda _0, \lambda _1)\) between the two distributions \(\lambda _0\) and \(\lambda _1\) that we want to make indistinguishable.
-
We show that DistP is a useful relaxation of differential privacy when users want to hide their attributes, but not necessarily to protect all point data.
-
To improve the tradeoff between DistP and utility, we introduce the tupling mechanism, which locally adds random dummies to the output. Then we show that this mechanism provides DistP and high utility for users.
-
We apply local mechanisms to the obfuscation of attributes in location-based services (LBSs). Then we show that the tupling mechanism outperforms popular existing mechanisms in terms of DistP and service quality.
All proofs of technical results can be found in [15].
2 Preliminaries
In this section we recall some notions of privacy and metrics used in this paper. Let \(\mathbb {N}^{>0}\) be the set of positive integers, and \(\mathbb {R}^{>0}\) (resp. \(\mathbb {R}^{\ge 0}\)) be the set of positive (resp. non-negative) real numbers. Let [0, 1] be the set of non-negative real numbers not greater than 1. Let \(\varepsilon , \varepsilon _0, \varepsilon _1 \in \mathbb {R}^{\ge 0}\) and \(\delta , \delta _0, \delta _1 \in [0, 1]\).
2.1 Notations for Probability Distributions
We denote by \(\mathbb {D}\mathcal {X}\) the set of all probability distributions over a set \(\mathcal {X}\), and by \(|\mathcal {X}|\) the number of elements in a finite set \(\mathcal {X}\).
Given a finite set \(\mathcal {X}\) and a distribution \(\lambda \in \mathbb {D}\mathcal {X}\), the probability of drawing a value x from \(\lambda \) is denoted by \(\lambda [x]\). For a finite subset \(\mathcal {X}'\subseteq \mathcal {X}\) we define \(\lambda [\mathcal {X}']\) by: \(\lambda [\mathcal {X}'] = \sum _{x'\in \mathcal {X}'} \lambda [x']\). For a distribution \(\lambda \) over a finite set \(\mathcal {X}\), its support \(\mathsf {supp}(\lambda )\) is defined by \(\mathsf {supp}(\lambda ) = \{ x \in \mathcal {X}:\lambda [x] > 0 \}\). Given a \(\lambda \in \mathbb {D}\mathcal {X}\) and a \(f:\mathcal {X}\rightarrow \mathbb {R}\), the expected value of f over \(\lambda \) is: \({\mathbb {E}}_{x\sim \lambda }[f(x)] {\mathop {=}\limits ^{\mathrm {def}}}\sum _{x\in \mathcal {X}} \lambda [x] f(x)\).
For a randomized algorithm \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) and a set \(R\subseteq \mathcal {Y}\) we denote by \( A (x)[R]\) the probability that given input x, \( A \) outputs one of the elements of R. Given a randomized algorithm \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) and a distribution \(\lambda \) over \(\mathcal {X}\), we define \({ A }^{\#}(\lambda )\) as the distribution of the output of \( A \). Formally, for a finite set \(\mathcal {X}\), the lifting of \( A \) w.r.t. \(\mathcal {X}\) is the function \({ A }^{\#}: \mathbb {D}\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) such that \( { A }^{\#}(\lambda )[R] {\mathop {=}\limits ^{\mathrm {def}}}\sum _{x\in \mathcal {X}}\lambda [x] A (x)[R] \).
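For concreteness, the lifting can be computed directly on finite domains. The following sketch (our illustrative Python code; the toy mechanism and distribution are not from the paper) evaluates \({ A }^{\#}(\lambda )[R]\) by the sum above.

```python
# Minimal sketch of the lifting A#: DX -> DY for finite domains.
# The example mechanism and distribution below are illustrative only.

def lift(A, lam, R):
    """Return A#(lam)[R] = sum_x lam[x] * A(x)[R].

    A   : dict mapping each input x to a dict {y: probability}
    lam : dict mapping each input x to its probability lam[x]
    R   : set of outputs
    """
    return sum(p_x * sum(p_y for y, p_y in A[x].items() if y in R)
               for x, p_x in lam.items())

# Toy example: two inputs, a noisy identity mechanism.
A = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}
lam = {0: 0.3, 1: 0.7}
print(lift(A, lam, {1}))  # 0.3*0.2 + 0.7*0.8 = 0.62
```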
2.2 Differential Privacy (DP)
Differential privacy [1] captures the idea that given two “adjacent” inputs x and \(x'\) (from a set \(\mathcal {X}\) of data with an adjacency relation \(\varPhi \)), a randomized algorithm \( A \) cannot distinguish x from \(x'\) (with degree of \(\varepsilon \) and up to exceptions \(\delta \)).
Definition 1
(Differential privacy). Let e be the base of the natural logarithm. A randomized algorithm \( A : \mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon ,\delta )\)-differential privacy (DP) w.r.t. an adjacency relation \(\varPhi \subseteq \mathcal {X}\times \mathcal {X}\) if for any \((x, x')\in \varPhi \) and any \(R\subseteq \mathcal {Y}\),
\( A (x)[R] \le e^{\varepsilon } A (x')[R] + \delta {,} \)
where the probability is taken over the random choices in \( A \).
2.3 Differential Privacy Mechanisms and Sensitivity
Differential privacy can be achieved by a privacy mechanism, namely a randomized algorithm that adds probabilistic noise to a given input that we want to protect. The amount of noise added by some popular mechanisms (e.g., the exponential mechanism) depends on a utility function \( u :\mathcal {X}\times \mathcal {Y}\rightarrow \mathbb {R}\) that maps a pair of input and output to a utility score. More precisely, the noise is added according to the “sensitivity” of \( u \), which we define as follows.
Definition 2
(Utility distance). The utility distance w.r.t. a utility function \( u :(\mathcal {X}\times \mathcal {Y})\rightarrow \mathbb {R}\) is the function \( d \) given by: \( d (x,x') {\mathop {=}\limits ^{\mathrm {def}}}\max _{y\in \mathcal {Y}} \bigl | u (x, y) - u (x', y) \bigr |\).
Note that \( d \) is a pseudometric. Hereafter we assume that for all x, y, \( u (x,y)=0\) is logically equivalent to \(x=y\). Then the utility distance \( d \) is a metric.
Definition 3
(Sensitivity w.r.t. an adjacency relation). The sensitivity of a utility function \( u \) w.r.t. an adjacency relation \(\varPhi \subseteq \mathcal {X}\times \mathcal {X}\) is defined as:
\( \varDelta _{\varPhi , d } {\mathop {=}\limits ^{\mathrm {def}}}\max _{(x, x')\in \varPhi } d (x, x') {,} \)
where \( d \) is the utility distance w.r.t. \( u \).
2.4 Extended Differential Privacy (XDP)
We review the notion of extended differential privacy [16], which relaxes DP by incorporating a metric d. Intuitively, this notion guarantees that when two inputs x and \(x'\) are closer in terms of d, the output distributions are less distinguishable.
Definition 4
(Extended differential privacy). For a metric \(d: \mathcal {X}\times \mathcal {X}\rightarrow \mathbb {R}\), we say that a randomized algorithm \( A : \mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon ,\delta ,d)\)-extended differential privacy (XDP) if for all \(x, x'\in \mathcal {X}\) and for any \(R\subseteq \mathcal {Y}\),
\( A (x)[R] \le e^{\varepsilon d (x, x')} A (x')[R] + \delta {.} \)
2.5 Wasserstein Metric
We recall the notion of probability coupling as follows.
Definition 5
(Coupling). Given \(\lambda _0\in \mathbb {D}\mathcal {X}_0\) and \(\lambda _1\in \mathbb {D}\mathcal {X}_1\), a coupling of \(\lambda _0\) and \(\lambda _1\) is a \(\gamma \in \mathbb {D}(\mathcal {X}_0\times \mathcal {X}_1)\) such that \(\lambda _0\) and \(\lambda _1\) are \(\gamma \)’s marginal distributions, i.e., for each \(x_0\in \mathcal {X}_0\), \(\lambda _0[x_0] =\!\sum _{x'_1\in \mathcal {X}_1}\!\gamma [x_0, x'_1]\) and for each \(x_1\in \mathcal {X}_1\), \(\lambda _1[x_1] =\!\sum _{x'_0\in \mathcal {X}_0}\!\gamma [x'_0, x_1]\). We denote by \(\mathsf {cp}(\lambda _0, \lambda _1)\) the set of all couplings of \(\lambda _0\) and \(\lambda _1\).
Example 1
(Coupling as transformation of distributions). Let us consider two distributions \(\lambda _0\) and \(\lambda _1\) shown in Fig. 1. A coupling \(\gamma \) of \(\lambda _0\) and \(\lambda _1\) shows a way of transforming \(\lambda _0\) to \(\lambda _1\). For example, \(\gamma [2, 1] = 0.1\) means that the probability mass 0.1 is moved from \(\lambda _0[2]\) to \(\lambda _1[1]\).
We then recall the \(\infty \)-Wasserstein metric [17] between two distributions.
Definition 6
(\(\infty \)-Wasserstein metric). Let \( d \) be a metric over \(\mathcal {X}\). The \(\infty \)-Wasserstein metric \( W _{\infty , d }\) w.r.t. \( d \) is defined by: for any \(\lambda _0, \lambda _1\in \mathbb {D}\mathcal {X}\),
\( W _{\infty , d }(\lambda _0, \lambda _1) {\mathop {=}\limits ^{\mathrm {def}}}\min _{\gamma \in \mathsf {cp}(\lambda _0, \lambda _1)} \max _{(x_0, x_1)\in \mathsf {supp}(\gamma )} d (x_0, x_1) {.} \)
The \(\infty \)-Wasserstein metric \( W _{\infty , d }(\lambda _0, \lambda _1)\) represents the minimum largest move between points in a transportation from \(\lambda _0\) to \(\lambda _1\). Specifically, in a transportation \(\gamma \), \(\max _{(x_0, x_1)\in \mathsf {supp}(\gamma )} d (x_0, x_1)\) represents the largest move from a point in \(\lambda _0\) to another in \(\lambda _1\). For instance, in the coupling \(\gamma \) in Example 1, the largest move is 1 (from \(\lambda _0[2]\) to \(\lambda _1[1]\), and from \(\lambda _0[2]\) to \(\lambda _1[3]\)). Such a largest move is minimized by a coupling that achieves the \(\infty \)-Wasserstein metric. We denote by \({\varGamma _{\!{ \infty , d }}}\) the set of all couplings that achieve the \(\infty \)-Wasserstein metric.
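On small finite supports, \( W _{\infty , d }\) can be computed directly from Definition 6. The sketch below is an illustrative Python computation (not the paper’s code): it scans the candidate thresholds t in increasing order and uses a linear-programming feasibility check (via scipy) to test whether some coupling moves mass only over pairs with \( d (x_0, x_1) \le t\); the smallest feasible t is \( W _{\infty , d }(\lambda _0, \lambda _1)\).

```python
# Illustrative sketch: compute the infinity-Wasserstein distance between two
# finite distributions by testing, for each candidate threshold t, whether a
# coupling exists that only moves mass over pairs with d(x0, x1) <= t.
import numpy as np
from scipy.optimize import linprog

def w_inf(lam0, lam1, d):
    """lam0, lam1: 1-D arrays summing to 1; d: matrix of distances d[i, j]."""
    n, m = len(lam0), len(lam1)
    # Marginal constraints: row sums equal lam0, column sums equal lam1.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([lam0, lam1])
    for t in sorted(set(d.flatten())):
        # Forbid moves longer than t by fixing those coupling entries to zero.
        bounds = [(0, None) if d[i, j] <= t else (0, 0)
                  for i in range(n) for j in range(m)]
        res = linprog(c=np.zeros(n * m), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        if res.success:
            return t
    return None

# Toy example on the line {1, 2, 3} with d(x, y) = |x - y|.
pts = np.array([1, 2, 3])
d = np.abs(pts[:, None] - pts[None, :]).astype(float)
lam0 = np.array([0.2, 0.6, 0.2])
lam1 = np.array([0.3, 0.4, 0.3])
print(w_inf(lam0, lam1, d))  # 1.0: mass 0.1 moves from 2 to 1 and from 2 to 3
```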
Finally, we recall the notion of the lifting of relations.
Definition 7
(Lifting of relations). Given a relation \(\varPhi \subseteq \mathcal {X}\times \mathcal {X}\), the lifting of \(\varPhi \) is the maximum relation \({\varPhi }^{\#}\subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\) such that for any \((\lambda _0, \lambda _1)\in {\varPhi }^{\#}\), there exists a coupling \(\gamma \in \mathsf {cp}(\lambda _0, \lambda _1)\) satisfying \(\mathsf {supp}(\gamma )\subseteq \varPhi \).
Note that by Definition 5, the coupling \(\gamma \) is a probability distribution over \(\varPhi \) whose marginal distributions are \(\lambda _0\) and \(\lambda _1\). If \(\varPhi = \mathcal {X}\times \mathcal {X}\), then \({\varPhi }^{\#} = \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\).
3 Privacy Notions for Probability Distributions
In this section we introduce a formal model for the privacy of user attributes, which is motivated in Sect. 1.
3.1 Modeling the Privacy of User Attributes in Terms of DP
As a running example, we consider an LBS (location-based service) in which each user queries an LBS provider for a list of shops nearby. To hide a user’s exact location x from the provider, the user applies a randomized algorithm \( A : \mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\), called a local obfuscation mechanism, to her location x, and obtains approximate information y with the probability \( A (x)[y]\).
To illustrate the privacy of attributes, let us consider an example in which users try to prevent an attacker from inferring whether they are \({ male}\) or \({ female}\) by obfuscating their own exact locations using a mechanism \( A \). For each \(t\in \{ { male}, { female}\}\), let \(\lambda _{t}\in \mathbb {D}\mathcal {X}\) be the prior distribution of the location of the users who have the attribute \(t\). Intuitively, \(\lambda _{{ male}}\) (resp. \(\lambda _{{ female}}\)) represents an attacker’s belief on the location of the male (resp. female) users before the attacker observes an output of the mechanism \( A \). Then the privacy of \(t\) can be modeled as a property that the attacker has no idea on whether the actual location x follows the distribution \(\lambda _{{ male}}\) or \(\lambda _{{ female}}\) after observing an output y of \( A \).
This can be formalized in terms of \(\varepsilon \)-local DP. For each \(t\in \{ { male}, { female}\}\), we denote by \(p( y \,|\, \lambda _{t})\) the probability of observing an obfuscated location y when an actual location x is distributed over \(\lambda _{t}\), i.e., \(p( y \,|\, \lambda _{t}) = \sum _{x\in \mathcal {X}} \lambda _{t}[x] A (x)[y]\). Then we can define the privacy of \(t\) by: for all \(y\in \mathcal {Y}\),
\( p( y \,|\, \lambda _{{ male}}) \le e^{\varepsilon }\, p( y \,|\, \lambda _{{ female}}) ~~\text{ and }~~ p( y \,|\, \lambda _{{ female}}) \le e^{\varepsilon }\, p( y \,|\, \lambda _{{ male}}) {.} \)
3.2 Distribution Privacy and Extended Distribution Privacy
We generalize the privacy of attributes (in Sect. 3.1) and define the notion of distribution privacy (DistP) as the differential privacy where the input is a probability distribution of data rather than a value of data. This notion models a level of obfuscation that hides which distribution a data value is drawn from. Intuitively, we say a randomized algorithm \( A \) provides DistP if, by observing an output of \( A \), we cannot detect from which distribution an input to \( A \) is generated.
Definition 8
(Distribution privacy). Let \(\varepsilon \in \mathbb {R}^{\ge 0}\) and \(\delta \in [0,1]\). We say that a randomized algorithm \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon ,\delta )\)-distribution privacy (DistP) w.r.t. an adjacency relation \(\varPsi \subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\) if its lifting \({ A }^{\#}:\mathbb {D}\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon ,\delta )\)-DP w.r.t. \(\varPsi \), i.e., for all pairs \((\lambda , \lambda ')\in \varPsi \) and all \(R \subseteq \mathcal {Y}\), we have:
\( { A }^{\#}(\lambda )[R] \le e^{\varepsilon } { A }^{\#}(\lambda ')[R] + \delta {.} \)
We say \( A \) provides \((\varepsilon ,\delta )\)-DistP w.r.t. \(\varLambda \subseteq \mathbb {D}\mathcal {X}\) if it provides \((\varepsilon ,\delta )\)-DistP w.r.t. \(\varLambda ^2\).
For example, the privacy of a user attribute \(t\in \{ { male}{}, { female}{} \}\) described in Sect. 3.1 can be formalized as \((\varepsilon , 0)\)-DistP w.r.t. \(\{\lambda _{{ male}}, \lambda _{{ female}}\}\).
Mathematically, DistP is not a new notion but the DP for distributions. To contrast with DistP, we refer to the DP for data values as point privacy.
Next we introduce an extended form of distribution privacy to a metric. Intuitively, extended distribution privacy guarantees that when two input distributions are closer, then the output distributions must be less distinguishable.
Definition 9
(Extended distribution privacy). Let \(d: (\mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X})\rightarrow \mathbb {R}\) be a utility distance, and \(\varPsi \subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\). We say that a mechanism \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon ,d,\delta )\)-extended distribution privacy (XDistP) w.r.t. \(\varPsi \) if the lifting \({ A }^{\#}\) provides \((\varepsilon ,d,\delta )\)-XDP w.r.t. \(\varPsi \), i.e., for all \((\lambda , \lambda ')\in \varPsi \) and all \(R\subseteq \mathcal {Y}\), we have:
\( { A }^{\#}(\lambda )[R] \le e^{\varepsilon d (\lambda , \lambda ')} { A }^{\#}(\lambda ')[R] + \delta {.} \)
3.3 Interpretation by Bayes Factor
The interpretation of DP has been explored in previous work [16, 18] using the notion of Bayes factor. Similarly, the meaning of DistP can also be explained in terms of Bayes factor, which compares the attacker’s prior and posterior beliefs.
Assume that an attacker has some belief on the input distribution before observing the output values of an obfuscater \( A \). We denote by \(p(\lambda )\) the prior probability that a distribution \(\lambda \) is chosen as the input distribution. By observing an output y of \( A \), the attacker updates his belief on the input distribution. We denote by \(p(\lambda | y)\) the posterior probability of \(\lambda \) being chosen, given an output y.
For two distributions \(\lambda _0, \lambda _1\), the Bayes factor \(K(\lambda _0, \lambda _1, y)\) is defined as the ratio of the two posteriors divided by that of the two priors: \(K(\lambda _0, \lambda _1, y) = \frac{p(\lambda _0|y)}{p(\lambda _1|y)} \big / \frac{p(\lambda _0)}{p(\lambda _1)}\). If the Bayes factor is far from 1, the attacker significantly updates his belief on the distribution by observing a perturbed output y of \( A \).
Assume that \( A \) provides \((\varepsilon ,0)\)-DistP. By Bayes’ theorem, we obtain:
\( K(\lambda _0, \lambda _1, y) = \frac{p(y \,|\, \lambda _0)}{p(y \,|\, \lambda _1)} = \frac{{ A }^{\#}(\lambda _0)[y]}{{ A }^{\#}(\lambda _1)[y]} \le e^{\varepsilon } {.} \)
Intuitively, if the attacker believes that \(\lambda _0\) is k times more likely than \(\lambda _1\) before the observation, then he believes that \(\lambda _0\) is \(k\cdot e^\varepsilon \) times more likely than \(\lambda _1\) after the observation. This means that for a small value of \(\varepsilon \), DistP guarantees that the attacker does not gain information on the distribution by observing y.
In the case of XDistP, the Bayes factor \(K(\lambda _0, \lambda _1, y)\) is bounded above by \(e^{\varepsilon d(\lambda _0, \lambda _1)}\). Hence the attacker gains more information for a larger distance \(d(\lambda _0, \lambda _1)\).
3.4 Privacy Guarantee for Attackers with Close Beliefs
In the previous sections, we assumed that we know the distance between the two actual input distributions, and can thus determine the amount of noise required for distribution obfuscation. However, an attacker may have beliefs that differ from, but are close to, the actual distributions, e.g., more accurate distributions obtained from more observations or specific to particular situations (e.g., daytime/nighttime).
To see this, for each \(\lambda \in \mathbb {D}\mathcal {X}\), let \(\tilde{\lambda }\) be an attacker’s belief on \(\lambda \). We say that an attacker has \((c, d )\)-close beliefs if each distribution \(\lambda \) satisfies \( d (\lambda , \tilde{\lambda }) \le c\). Then extended distribution privacy in the presence of an attacker is given by:
Proposition 1
(XDistP with close beliefs). Let \( A : \mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provide \((\varepsilon , d , 0)\)-XDistP w.r.t. some \(\varPsi \subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\). If an attacker has \((c, d )\)-close beliefs, then for all \((\lambda _0, \lambda _1)\in \varPsi \) and all \(R \subseteq \mathcal {Y}\), we have \( { A }^{\#}(\tilde{\lambda _0})[R] \le e^{\varepsilon \left( d (\lambda _0, \lambda _1) + 2c \right) } \cdot { A }^{\#}(\tilde{\lambda _1})[R] {.} \)
When the attacker’s beliefs are closer to ours, c is smaller, hence a stronger distribution privacy is guaranteed. See [15] for a corresponding proposition for DistP. Note that assumptions on the attacker’s beliefs are also inevitable in many previous studies, e.g., when we want to protect the privacy of correlated data [19,20,21].
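The \(2c\) slack in Proposition 1 can be seen from the triangle inequality for \( d \), under the additional assumption that the pair of beliefs \((\tilde{\lambda }_0, \tilde{\lambda }_1)\) is also covered by the XDistP guarantee (the full proof in [15] makes the assumptions precise):
\( { A }^{\#}(\tilde{\lambda }_0)[R] \le e^{\varepsilon d (\tilde{\lambda }_0, \tilde{\lambda }_1)} { A }^{\#}(\tilde{\lambda }_1)[R] \le e^{\varepsilon \left( d (\tilde{\lambda }_0, \lambda _0) + d (\lambda _0, \lambda _1) + d (\lambda _1, \tilde{\lambda }_1) \right) } { A }^{\#}(\tilde{\lambda }_1)[R] \le e^{\varepsilon \left( d (\lambda _0, \lambda _1) + 2c \right) } { A }^{\#}(\tilde{\lambda }_1)[R] {.} \)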
3.5 Difference from the Histogram Privacy
Finally, we present a brief remark on the difference between DistP and the differential privacy of histogram publication (e.g., [22]). Roughly, a histogram publication mechanism is a central mechanism that aims at hiding a single record \(x\in \mathcal {X}\) and outputs an obfuscated histogram, e.g., a distribution \(\mu \in \mathbb {D}\mathcal {Y}\), whereas a DistP mechanism is a local mechanism that aims at hiding an input distribution \(\lambda \in \mathbb {D}\mathcal {X}\) and outputs a single perturbed value \(y\in \mathcal {Y}\).
Note that neither of these implies the other. The \(\varepsilon \)-DP of a histogram publication mechanism means that for any two adjacent inputs \(x, x' \in \mathcal {X}\) and any histogram \(\mu \in \mathbb {D}\mathcal {Y}\), \( \frac{p(\mu | x)}{p(\mu | x')} \le e^\varepsilon . \) However, this does not imply \(\varepsilon \)-DistP, i.e., that for any adjacent input distributions \(\lambda , \lambda ' \in \mathbb {D}\mathcal {X}\) and any output \(y \in \mathcal {Y}\), \( \frac{p(y | \lambda )}{p(y | \lambda ')} \le e^\varepsilon \).
4 Basic Properties of Distribution Privacy
In Table 1, we show basic properties of DistP. (See the arXiv version [15] for the full table with XDistP and their detailed proofs.)
The composition \(A_1 \mathbin {\odot }A_0\) means that an identical input x is given to two DistP mechanisms \(A_0\) and \(A_1\), whereas the composition \(A_1 \mathbin {\bullet }A_0\) means that independent inputs \(x_b\) are provided to mechanisms \( A _b\) [23]. The compositionality can be used to quantify the attribute privacy against an attacker who obtains multiple released data each obfuscated for the purpose of protecting a different attribute. For example, let \(\varPsi = \{ (\lambda _{{ male}}, \lambda _{{ female}}), (\lambda _{{ home}}, \lambda _{{ out}}) \}\), and \( A _0\) (resp. \( A _1\)) be a mechanism providing \(\varepsilon _0\)-DistP (resp. \(\varepsilon _1\)-DistP) w.r.t. \(\varPsi \). When \( A _0\) (resp. \( A _1\)) obfuscates a location \(x_0\) for the sake of protecting male/female (resp. home/out), then both male/female and home/out are protected with \((\varepsilon _0+\varepsilon _1)\)-DistP.
As for pre-processing, the stability notion is different from that for DP:
Definition 10
(Stability). Let \(c\in \mathbb {N}^{>0}\), \(\varPsi \subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\), and \( W \) be a metric over \(\mathbb {D}\mathcal {X}\). A transformation \(T:\mathbb {D}\mathcal {X}\rightarrow \mathbb {D}\mathcal {X}\) is \((c, \varPsi )\)-stable if for any \((\lambda _0,\lambda _1)\in \varPsi \), \(T(\lambda _0)\) can be reached from \(T(\lambda _1)\) in at most c steps over \(\varPsi \). Analogously, \(T:\mathbb {D}\mathcal {X}\rightarrow \mathbb {D}\mathcal {X}\) is \((c, W )\)-stable if for any \(\lambda _0,\lambda _1\in \mathbb {D}\mathcal {X}\), \( W (T(\lambda _0),T(\lambda _1)) \le c W (\lambda _0,\lambda _1)\).
We present relationships among privacy notions in [15]. An important property is that when the relation \(\varPsi \subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\) includes pairs of point distributions (i.e., distributions having single points with probability 1), \(\textsf {DistP}{}\) (resp. \(\textsf {XDistP}{}\)) implies \(\textsf {DP}{}\) (resp. \(\textsf {XDP}{}\)). In contrast, if \(\varPsi \) does not include pairs of point distributions, DistP (resp. \(\textsf {XDistP}{}\)) may not imply DP (resp. \(\textsf {XDP}{}\)), as in Sect. 6.
5 Distribution Obfuscation by Point Obfuscation
In this section we present how the point obfuscation mechanisms (including DP and XDP mechanisms) contribute to the obfuscation of probability distributions.
5.1 Distribution Obfuscation by DP Mechanisms
We first show every DP mechanism provides DistP. (See Definition 7 for \({\varPhi }^{\#}\).)
Theorem 1
(\((\varepsilon , \delta )\)-DP \(\Rightarrow \) \((\varepsilon ,\, \delta \cdot |\varPhi |)\)-DistP). Let \(\varPhi \subseteq \mathcal {X}\times \mathcal {X}\). If \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon , \delta )\)-DP w.r.t. \(\varPhi \), then it provides \((\varepsilon , \delta \cdot |\varPhi |)\)-DistP w.r.t. \({\varPhi }^{\#}\).
This means that the mechanism \( A \) makes any pair \((\lambda _0, \lambda _1)\in {\varPhi }^{\#}\) indistinguishable up to the threshold \(\varepsilon \) and with exceptions \(\delta \cdot |\varPhi |\). Intuitively, when \(\lambda _0\) and \(\lambda _1\) are adjacent w.r.t. the relation \({\varPhi }^{\#}\), we can construct \(\lambda _1\) from \(\lambda _0\) only by moving mass from \(\lambda _0[x_0]\) to \(\lambda _1[x_1]\) where \((x_0, x_1)\in \varPhi \) (i.e., \(x_0\) is adjacent to \(x_1\)).
Example 2
(Randomized response). By Theorem 1, the \((\varepsilon , 0)\)-DP randomized response [14] and RAPPOR [4] provide \((\varepsilon , 0)\)-DistP. When we use these mechanisms, the estimation of the input distribution is harder for a smaller \(\varepsilon \). However, these DP mechanisms tend to have small utility, because they add much noise to hide not only the input distributions, but also everything else about the inputs.
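As a concrete instance, here is an illustrative Python sketch of the \(\varepsilon \)-DP k-ary randomized response (a simplified re-implementation, not the code of [14]); by Theorem 1 it also provides \((\varepsilon , 0)\)-DistP.

```python
import math, random

def randomized_response(x, domain, eps):
    """k-ary randomized response: report x with probability e^eps/(e^eps + k - 1),
    otherwise report a uniformly random other value; this gives eps-DP."""
    k = len(domain)
    if random.random() < math.exp(eps) / (math.exp(eps) + k - 1):
        return x
    return random.choice([v for v in domain if v != x])

print(randomized_response(3, list(range(10)), eps=1.0))
```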
5.2 Distribution Obfuscation by XDP Mechanisms
Compared to DP mechanisms, XDP mechanisms are known to provide better utility. Alvim et al. [24] show that the planar Laplace mechanism [3] adds less noise than the randomized response, since XDP hides only closer locations. However, we show that XDP mechanisms still need to add a large amount of noise proportionally to the \(\infty \)-Wasserstein distance between the distributions we want to make indistinguishable.
The \(\infty \)-Wasserstein Distance \( W _{\infty , d }\) as Utility Distance. We first observe how much \(\varepsilon '\) is sufficient for an \(\varepsilon '\)-XDP mechanism (e.g., the Laplace mechanism) to make two distributions \(\lambda _0\) and \(\lambda _1\) indistinguishable in terms of \(\varepsilon \)-DistP.
Suppose that \(\lambda _0\) and \(\lambda _1\) are point distributions such that \(\lambda _0[x_0] = \lambda _1[x_1] = 1\) for some \(x_0,x_1\in \mathcal {X}\). Then an \(\varepsilon '\)-XDP mechanism \( A \) satisfies:
\( { A }^{\#}(\lambda _0)[R] = A (x_0)[R] \le e^{\varepsilon ' d (x_0, x_1)} A (x_1)[R] = e^{\varepsilon ' d (x_0, x_1)} { A }^{\#}(\lambda _1)[R] {.} \)
In order for \( A \) to provide \(\varepsilon \)-DistP, \(\varepsilon '\) should be defined as \(\frac{\varepsilon }{ d (x_0,x_1)}\). That is, the noise added by \( A \) should be proportional to the distance between \(x_0\) and \(x_1\).
To extend this to arbitrary distributions, we need to define a utility metric between distributions. A natural possible definition would be the largest distance between values of \(\lambda _0\) and \(\lambda _1\), i.e., the diameter over the supports defined by:
\( \mathsf {diam}(\lambda _0, \lambda _1) {\mathop {=}\limits ^{\mathrm {def}}}\max _{(x_0, x_1)\in \mathsf {supp}(\lambda _0)\times \mathsf {supp}(\lambda _1)} d (x_0, x_1) {.} \)
However, when there is an outlier in \(\lambda _0\) or \(\lambda _1\) that is far from other values in the supports, then the diameter \(\mathsf {diam}(\lambda _0, \lambda _1)\) is large. Hence the mechanisms that add noise proportionally to the diameter would lose utility too much.
To have better utility, we employ the \(\infty \)-Wasserstein metric \( W _{\infty , d }\). The idea is that given two distributions \(\lambda _0\) and \(\lambda _1\) over \(\mathcal {X}\), we consider the cost of a transportation of weights from \(\lambda _0\) to \(\lambda _1\). The transportation is formalized as a coupling \(\gamma \) of \(\lambda _0\) and \(\lambda _1\) (see Definition 5), and the cost of the largest move is \( \displaystyle \varDelta _{\mathsf {supp}(\gamma ), d } = \max _{(x_0, x_1)\in \mathsf {supp}(\gamma )} d (x_0,x_1), \) i.e., the sensitivity w.r.t. the adjacency relation \(\mathsf {supp}(\gamma )\subseteq \mathcal {X}\times \mathcal {X}\) (Definition 3). The minimum cost of the largest move is given by the \(\infty \)-Wasserstein metric: \( W _{\infty , d }(\lambda _0, \lambda _1)= \displaystyle \min _{\gamma \in \mathsf {cp}(\lambda _0, \lambda _1)} \varDelta _{\mathsf {supp}(\gamma ), d } {.} \)
XDP Implies XDistP. We show that every XDP mechanism provides XDistP with the metric \( W _{\infty , d }\). To formalize this, we define a lifted relation \({\varPhi }_{ W _{\infty }}^{\#}\) as the maximum relation over \(\mathbb {D}\mathcal {X}\) s.t. for any \((\lambda _0, \lambda _1)\in {\varPhi }_{ W _{\infty }}^{\#}\), there is a coupling \(\gamma \in \mathsf {cp}(\lambda _0, \lambda _1)\) satisfying \(\mathsf {supp}(\gamma )\subseteq \varPhi \) and \(\gamma \in {\varGamma _{\!{ \infty , d }}}(\lambda _0, \lambda _1)\). Then \({\varPhi }_{ W _{\infty }}^{\#}\subseteq {\varPhi }^{\#}\) holds.
Theorem 2
(\((\varepsilon , d , \delta )\)-XDP\(\,\Rightarrow \,(\varepsilon , W _{\infty , d }, \delta \!\cdot \!|\varPhi |)\)-XDistP). If \( A \!:\!\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon , d , \delta )\)-XDP w.r.t. \(\varPhi \subseteq \mathcal {X}\times \mathcal {X}\), it provides \((\varepsilon , W _{\infty , d }, {\delta \!\cdot \!|\varPhi |})\)-XDistP w.r.t. \({\varPhi }_{ W _{\infty }}^{\#}\).
By Theorem 2, when \(\delta > 0\), the noise required for obfuscation is proportional to \(|\varPhi |\), which is at most the domain size squared \(|\mathcal {X}|^2\). This implies that for a larger domain \(\mathcal {X}\), the Gaussian mechanism is not suited for distribution obfuscation. We will demonstrate this by experiments in Sect. 7.4.
In contrast, the Laplace/exponential mechanisms provide \((\varepsilon , W _{\infty , d }, 0)\)-XDistP. Since \( W _{\infty , d }(\lambda _0, \lambda _1) \le \mathsf {diam}(\lambda _0, \lambda _1)\), the noise added proportionally to \( W _{\infty , d }\) can be smaller than that added proportionally to \(\mathsf {diam}\). This implies that obfuscating a distribution requires less noise than obfuscating a set of data. However, the required noise can still be very large when we want to make two distant distributions indistinguishable.
6 Distribution Obfuscation by Random Dummies
In this section we introduce a local mechanism called a tupling mechanism to improve the tradeoff between DistP and utility, as motivated in Sect. 1.
6.1 Tupling Mechanism
We first define the tupling mechanism as a local mechanism that obfuscates a given input x by using a point perturbation mechanism \( A \) (not necessarily in terms of DP or XDP), and that also adds k random dummies \(r_1, r_2, \ldots , r_k\) to the output to obfuscate the input distribution (Algorithm 1). The probability that given an input x, the mechanism \( Q ^\mathsf{tp}_{k,\nu , A }\) outputs \(\bar{y}\) is given by \( Q ^\mathsf{tp}_{k,\nu , A }(x)[\bar{y}]\).
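Algorithm 1 is not reproduced above, so the following is only an illustrative Python sketch of the tupling mechanism as described in the text; in particular, placing the perturbed value at a uniformly random position in the tuple is our assumption.

```python
import random

def tupling(x, A, k, nu):
    """(k, nu, A)-tupling mechanism: perturb x with A and mix in k random dummies.

    A  : function mapping an input x to one sampled output (point perturbation)
    k  : number of random dummies
    nu : function returning one sample from the dummy distribution over Y
    """
    y = A(x)                          # obfuscate the actual input
    bar_y = [nu() for _ in range(k)] + [y]
    random.shuffle(bar_y)             # hide which of the k+1 elements is real
    return tuple(bar_y)

# Toy usage on 276 region indices, with the identity as the point mechanism A.
Y = list(range(276))
print(tupling(42, A=lambda x: x, k=5, nu=lambda: random.choice(Y)))
```

Using the identity for \( A \) here is only for brevity; as noted in Sect. 6.2, Theorem 3 does not require \( A \) to provide DP, and Sect. 7 instantiates \( A \) with an RL mechanism.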
6.2 Privacy of the Tupling Mechanism
Next we show that the tupling mechanism provides DistP w.r.t. the following class of distributions. Given \(\beta , \eta \in [0, 1]\) and \( A : \mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\), we define \(\varLambda _{\beta ,\eta , A }\) by:
For instance, a distribution \(\lambda \) satisfying \(\max _{x} \lambda [x] \le \beta \) belongs to \(\varLambda _{\beta ,0, A }\).
Theorem 3
(DistP of the tupling mechanism). Let \(k\in \mathbb {N}^{>0}\), \(\nu \) be the uniform distribution over \(\mathcal {Y}\), \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\), and \(\beta , \eta \in [0, 1]\). Given \(0< \alpha < \frac{k}{|\mathcal {Y}|}\), let \(\varepsilon _{\alpha } = \ln {\textstyle \frac{ k + (\alpha + \beta )\cdot |\mathcal {Y}| }{ k - \alpha \cdot |\mathcal {Y}| }}\) and \(\delta _{\alpha } = 2 e^{-\frac{2\alpha ^2}{k\beta ^2}} + \eta \). Then the \((k,\nu , A )\)-tupling mechanism provides \((\varepsilon _{\!\alpha }, \delta _{\!\alpha })\)-DistP w.r.t. \(\varLambda _{\beta ,\eta , A }^2\).
This claim states that just adding random dummies achieves DistP without any assumption on \( A \) (e.g., \( A \) does not have to provide DP). For a smaller range size \(|\mathcal {Y}|\) and a larger number k of dummies, we obtain a stronger DistP.
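To get a feel for the bound, the following illustrative Python snippet evaluates \(\varepsilon _{\alpha }\) and \(\delta _{\alpha }\) of Theorem 3 for hypothetical parameter values of our choosing (they are not the parameters used in Sect. 7).

```python
import math

def distp_bound(k, size_Y, alpha, beta, eta):
    """Theorem 3: (eps_alpha, delta_alpha)-DistP of the (k, nu, A)-tupling mechanism."""
    assert 0 < alpha < k / size_Y
    eps = math.log((k + (alpha + beta) * size_Y) / (k - alpha * size_Y))
    delta = 2 * math.exp(-2 * alpha**2 / (k * beta**2)) + eta
    return eps, delta

# Hypothetical setting: |Y| = 276 regions, k = 10 dummies, beta = 0.002, eta = 0.
print(distp_bound(k=10, size_Y=276, alpha=0.02, beta=0.002, eta=0.0))
# roughly (1.28, 4.1e-9): a loose upper bound, cf. the remark on Table 3 in the appendix
```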
Note that the distributions protected by \( Q ^\mathsf{tp}_{k,\nu , A }\) belong to the set \(\varLambda _{\beta ,\eta , A }\).
-
When \(\beta = 1\), \(\varLambda _{\beta ,\eta , A }\) is the set of all distributions (i.e., \(\varLambda _{1,\eta , A } = \mathbb {D}\mathcal {X}\)) while \(\varepsilon _{\alpha }\) and \(\delta _{\alpha }\) tend to be large.
-
For a smaller \(\beta \), the set \(\varLambda _{\beta ,\eta , A }\) is smaller while \(\varepsilon _{\alpha }\) and \(\delta _{\alpha }\) are smaller; that is, the mechanism provides a stronger DistP for a smaller set of distributions.
-
If \( A \) provides \(\varepsilon _{ A }\)-DP, \(\varLambda _{\beta ,\eta , A }\) goes to \(\mathbb {D}\mathcal {X}\) for \(\varepsilon _{ A } \rightarrow 0\). More generally, \(\varLambda _{\beta ,\eta , A }\) is larger when the maximum output probability \(\max _{y} { A }^{\#}(\lambda )[y]\) is smaller.
In practice, even when \(\varepsilon _{ A }\) is relatively large, a small number of dummies enables us to provide a strong DistP, as shown by experiments in Sect. 7.
We note that Theorem 3 may not imply DP of the tupling mechanism, depending on \( A \). For example, suppose that \( A \) is the identity function. For small \(\varepsilon _{\alpha }\) and \(\delta _{\alpha }\), we have \(\beta \ll 1\), hence no point distribution \(\lambda \) (where \(\lambda [x] = 1\) for some x) belongs to \(\varLambda _{\beta ,\eta , A }\), namely, the tupling mechanism does not provide \((\varepsilon _{\alpha }, \delta _{\alpha })\)-DP.
6.3 Service Quality Loss and Cost of the Tupling Mechanism
When a mechanism outputs a value y closer to the original input x, the user obtains a larger utility, or equivalently, a smaller service quality loss \( d (x, y)\). For example, in an LBS (location-based service), if a user located at x submits an obfuscated location y, the LBS provider returns the shops near y, hence the service quality loss can be expressed as the Euclidean distance \( d (x, y) {\mathop {=}\limits ^{\mathrm {def}}}\Vert x - y \Vert \).
Since each output of the tupling mechanism consists of \(k+1\) elements, the quality loss of submitting a tuple \(\bar{y} = (y_1, y_2, \ldots , y_{k+1})\) amounts to \( d (x, \bar{y}) \mathbin {:=} \min _{i} d (x, y_i)\). Then the expected quality loss of the mechanism is defined as follows.
Definition 11
(Expected quality loss of the tupling mechanism). For a \(\lambda \in \mathbb {D}\mathcal {X}\) and a metric \( d : \mathcal {X}\times \mathcal {Y}\rightarrow \mathbb {R}\), the expected quality loss of \( Q ^\mathsf{tp}_{k,\nu , A }\) is:
\( L\bigl ( Q ^\mathsf{tp}_{k,\nu , A }\bigr ) {\mathop {=}\limits ^{\mathrm {def}}}{\mathbb {E}}_{x\sim \lambda ,\, \bar{y}\sim Q ^\mathsf{tp}_{k,\nu , A }(x)}\bigl [\, \min _{i} d (x, y_i) \,\bigr ] {.} \)
For a larger number k of random dummies, \(\min _{i} d (x, y_i)\) is smaller on average, hence \(L\bigl ( Q ^\mathsf{tp}_{k,\nu , A }\bigr )\) is also smaller. Furthermore, thanks to the distribution obfuscation by random dummies, we can instead reduce the perturbation noise added to the actual input x to obtain the same level of DistP. Therefore, the service quality is much higher than existing mechanisms, as shown in Sect. 7.
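The expected quality loss can be estimated by straightforward Monte Carlo sampling; the sketch below is our illustrative Python code with a toy one-dimensional metric and mechanism, not the evaluation code of Sect. 7.

```python
import random

def expected_loss(lam, Q, d, n=10000):
    """Monte Carlo estimate of L(Q) = E_{x~lam, ybar~Q(x)}[ min_i d(x, y_i) ]."""
    xs, ps = zip(*lam.items())
    total = 0.0
    for _ in range(n):
        x = random.choices(xs, weights=ps)[0]   # sample an input from lam
        ybar = Q(x)                             # a tuple of k+1 reported values
        total += min(d(x, y) for y in ybar)
    return total / n

# Toy 1-D example: 20 regions; Q slightly perturbs x and appends 3 uniform dummies.
Y = list(range(20))
Q = lambda x: tuple([min(max(x + random.choice([-1, 0, 1]), 0), 19)]
                    + [random.choice(Y) for _ in range(3)])
lam = {5: 0.5, 10: 0.5}
print(expected_loss(lam, Q, d=lambda x, y: abs(x - y)))
```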
6.4 Improving the Worst-Case Quality Loss
As a point obfuscation mechanism \( A \) used in the tupling mechanism \( Q ^\mathsf{tp}_{k,\nu , A }{}\), we define the restricted Laplace (RL) mechanism below. Intuitively, the \((\varepsilon _{\! A }, r)\)-RL mechanism adds \(\varepsilon _{\! A }\)-XDP Laplace noise only within a radius r of the original location x. This ensures that the worst-case quality loss of the tupling mechanism is bounded above by the radius r, whereas the standard Laplace mechanism may report, with small probability, a location y that is arbitrarily distant from x.
Definition 12
(RL mechanism). Let \(\mathcal {Y}_{x,r} = \{ y'\in \mathcal {Y}\,|\, d (x, y') \le r \}\). We define the \((\varepsilon _{\! A },r)\)-restricted Laplace (RL) mechanism as the \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) defined by: \( A (x)[y] = \frac{ e^{-\varepsilon _{\! A } d (x, y)} }{ \sum _{y'\in \mathcal {Y}_{x,r}} e^{-\varepsilon _{\! A } d (x, y')} }\) if \(y \in \mathcal {Y}_{x,r}\), and \( A (x)[y] = 0\) otherwise.
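An illustrative Python sketch of sampling from the RL mechanism on a finite output domain follows (the one-dimensional toy geometry is ours).

```python
import math, random

def rl_mechanism(x, Y, d, eps_A, r):
    """Sample from the (eps_A, r)-restricted Laplace mechanism of Definition 12."""
    candidates = [y for y in Y if d(x, y) <= r]               # the set Y_{x,r}
    weights = [math.exp(-eps_A * d(x, y)) for y in candidates]
    return random.choices(candidates, weights=weights)[0]

# Toy 1-D usage: regions 0..19, absolute-difference metric, radius 2.
Y = list(range(20))
print(rl_mechanism(7, Y, d=lambda x, y: abs(x - y), eps_A=1.0, r=2))
```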
Since the support of \( A \) is limited to \(\mathcal {Y}_{x,r}\), \( A \) provides better service quality but does not provide DP. Nevertheless, as shown in Theorem 3, \( Q ^\mathsf{tp}_{k,\nu , A }{}\) provides DistP, due to dummies in \(\mathcal {Y}\setminus \mathcal {Y}_{x,r}\). This implies that DistP is a relaxation of DP that guarantees the privacy of attributes while achieving higher utility by weakening the DP protection of point data. In other words, DistP mechanisms are useful when users want both to keep high utility and to protect the attribute privacy more strongly than what a DP mechanism can guarantee (e.g., when users do not mind revealing their actual locations outside home, but want to hide from robbers the fact that they are outside home, as motivated in Sect. 1).
7 Application to Attribute Privacy in LBSs
In this section we apply local mechanisms to the protection of attribute privacy in location-based services (LBSs), where each user submits her own location x to an LBS provider to obtain information relevant to x (e.g., shops near x).
7.1 Experimental Setup
We perform experiments on location privacy in Manhattan by using the Foursquare dataset (Global-scale Check-in Dataset) [25]. We first divide Manhattan into \(11 \times 10\) regions with \(1.0\,\mathrm {km}\) intervals. To provide more useful information to users in crowded regions, we further re-divide these regions into 276 regions by recursively partitioning each crowded region into four until each resulting region has roughly similar population density (see Footnote 2). Let \(\mathcal {Y}\) be the set of those 276 regions, and \(\mathcal {X}\) be the set of the 228 regions inside the central \(10\,\mathrm {km} \times 9\,\mathrm {km}\) area in \(\mathcal {Y}\).
As an obfuscation mechanism Q, we use the tupling mechanism \( Q ^\mathsf{tp}_{k,\nu , A }{}\) that uses an \((\varepsilon _{\! A }, r)\)-RL mechanism \( A \) and the uniform distribution \(\nu \) over \(\mathcal {Y}\) to generate dummy locations. Note that \(\nu \) is close to the population density distribution over \(\mathcal {Y}\), because each region in \(\mathcal {Y}\) is constructed to have roughly similar population density. In the definitions of the RL mechanism and the quality loss, we use the Euclidean distance \(\Vert \cdot \Vert \) between the central points of the regions.
In the experiments, we measure the privacy of user attributes, formalized as DistP. For example, let us consider the attribute \({ male}/{ female}\). For each \(t\in \{ { male}, { female}\}\), let \(\lambda _{t}\in \mathbb {D}\mathcal {X}\) be the prior distribution of the location of the users having the attribute \(t\). Then, \(\lambda _{{ male}}\) (resp. \(\lambda _{{ female}}\)) represents an attacker’s belief on the location of the male (resp. female) users. We define these as the empirical distributions that the attacker can calculate from the above Foursquare dataset.
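For a mechanism whose channel matrix over a finite output range is available, the smallest \(\varepsilon \) such that \((\varepsilon , 0)\)-DistP holds w.r.t. \(\{\lambda _{{ male}}, \lambda _{{ female}}\}\) is the maximum absolute log-ratio of the two lifted output distributions. The snippet below is a generic illustrative computation (the experiments in this section may estimate \(\varepsilon \) differently, e.g., by sampling and with a nonzero \(\delta \)).

```python
import numpy as np

def empirical_eps(C, lam0, lam1):
    """Smallest eps such that (eps, 0)-DistP holds w.r.t. {lam0, lam1}.

    C[i, j] is the probability that the mechanism outputs y_j on input x_i.
    Assumes both lifted output distributions have the same support; otherwise
    no finite eps works for delta = 0.
    """
    p0 = lam0 @ C          # lifted output distribution A#(lam0)
    p1 = lam1 @ C          # lifted output distribution A#(lam1)
    mask = (p0 > 0) & (p1 > 0)
    return float(np.max(np.abs(np.log(p0[mask] / p1[mask]))))

# Toy channel with 3 inputs and 3 outputs, and two illustrative priors.
C = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
lam_male   = np.array([0.5, 0.3, 0.2])
lam_female = np.array([0.2, 0.3, 0.5])
print(empirical_eps(C, lam_male, lam_female))   # about 0.63
```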
7.2 Evaluation of the Tupling Mechanism
Distribution Privacy. We demonstrate by experiments that, in terms of DistP, an attacker cannot recognize whether users are male or female. In Fig. 2, we show the experimental results on the DistP of the tupling mechanism \( Q ^\mathsf{tp}_{k,\nu , A }{}\). For a larger number k of dummy locations, we have a stronger DistP (Fig. 2a). For a larger \(\varepsilon _{\! A }\), the \((\varepsilon _{\! A }, 0.020)\)-RL mechanism \( A \) adds less noise, hence the tupling mechanism provides a weaker DistP (Fig. 2b; see Footnote 3). For a larger radius r, the RL mechanism \( A \) spreads the original distribution \(\lambda _{{ male}}\) and thus provides a stronger DistP (Fig. 2c). We also show the relationship between k and DistP in eastern/western Tokyo and London, which have different levels of privacy (Fig. 3).
These results imply that if we add more dummies, we can decrease the noise level/radius of \( A \) to have better utility, while keeping the same level \(\varepsilon \) of DistP. Conversely, if \( A \) adds more noise, we can decrease the number k of dummies.
Expected Quality Loss. In Fig. 2d, we show the experimental results on the expected quality loss of the tupling mechanism. For a larger \(\varepsilon _{\! A }\), \( A \) adds less noise, hence the loss is smaller. We confirm that for more dummy data, the expected quality loss is smaller. Unlike the planar Laplace mechanism (\(\mathrm {PL}{}\)), \( A \) ensures that the worst quality loss is bounded above by the radius r. Furthermore, for a smaller radius r, the expected loss is also smaller as shown in Fig. 2d.
7.3 Appropriate Parameters
We define the attack success rate (ASR) as the ratio of cases in which the attacker succeeds in inferring that a user has an attribute when she actually does. We use an inference algorithm based on the Bayes decision rule [26], which minimizes the identification error probability when the estimated posterior probability is accurate.
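The attack can be sketched as follows (an illustrative re-implementation of a Bayes-decision-rule attack, not the exact inference algorithm used in the experiments): given an observed output, guess the attribute \(t\) maximizing \(p(t)\cdot p(y \,|\, \lambda _{t})\), and measure the ASR as the probability of a correct guess among users who actually have the attribute.

```python
import numpy as np

def bayes_guess(y_index, channels, priors):
    """Guess the attribute t maximizing priors[t] * p(y | lambda_t).

    channels : dict attribute -> lifted output distribution (1-D array over Y)
    priors   : dict attribute -> prior probability of the attribute
    """
    return max(channels, key=lambda t: priors[t] * channels[t][y_index])

# Toy example: distinguishing 'home' from 'out' on a 3-cell output space.
channels = {'home': np.array([0.7, 0.2, 0.1]),
            'out':  np.array([0.2, 0.3, 0.5])}
priors = {'home': 0.5, 'out': 0.5}
# ASR for users who are actually at home: probability that the guess is 'home'.
asr_home = sum(p for i, p in enumerate(channels['home'])
               if bayes_guess(i, channels, priors) == 'home')
print(asr_home)  # 0.7 in this toy setting
```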
In Fig. 4, we show the relationships between DistP and ASR in Manhattan for the attribute home, meaning the users located at their home. In theory, \(\mathrm {ASR} = 0.5\) represents the attacker learns nothing about the attribute, whereas the empirical ASR in our experiments fluctuates around 0.5. This seems to be caused by the fact that the dataset and the number of locations are finite. From Fig. 4, we conclude that \(\varepsilon = 1\) is an appropriate parameter for \((\varepsilon , 0.001)\)-DistP to achieve \(\mathrm {ASR} = 0.5\) in our setting, and we confirm this for other attributes. However, we note that this is an empirical criterion possibly depending on our setting, and the choice of \(\varepsilon \) for DistP can be as controversial as that for DP and should also be investigated using approaches for DP (e.g., [27]) in future work.
7.4 Comparison of Obfuscation Mechanisms
We demonstrate that the tupling mechanism (TM) outperforms the popular mechanisms: the randomized response (RR), the planar Laplace (PL), and the planar Gaussian (PG). In Fig. 5 we compare these concerning the relationship between \(\varepsilon \)-DistP and expected quality loss. Since PG always has some \(\delta \), it provides a weaker DistP than PL for the same quality loss. We also confirm that PL has smaller loss than RR, since it adds noise proportionally to the distance.
Finally, we briefly discuss the computational cost of the tupling mechanism \( Q ^\mathsf{tp}_{k,\nu , A }{}\), compared to \(\mathrm {PL}{}\). In the implementation, for a larger domain \(\mathcal {X}\), \(\mathrm {PL}{}\) deals with a larger size \(|\mathcal {X}|\times |\mathcal {Y}|\) of the mechanism’s matrix, since it outputs each region with a non-zero probability. In contrast, since the RL mechanism \( A \) used in \( Q ^\mathsf{tp}_{k,\nu , A }{}\) maps each location x to a region within a radius r of x, the size of \( A \)’s matrix is \(|\mathcal {X}|\times |\mathcal {Y}_{x,r}|\), requiring much smaller memory space than \(\mathrm {PL}{}\).
Furthermore, the users of \(\mathrm {TM}{}\) can simply ignore the responses to dummy queries, whereas the users of \(\mathrm {PL}{}\) need to select relevant POIs (points of interest) from a large radius of x, which could be computationally costly when there are many POIs. Therefore, \(\mathrm {TM}{}\) is more suited to mobile environments than \(\mathrm {PL}{}\).
8 Related Work
Differential Privacy. Since the seminal work of Dwork [1] on DP, a number of its variants have been studied to provide different privacy guarantees; e.g., f-divergence privacy [28], d-privacy [16], Pufferfish privacy [20], local DP [2], and utility-optimized local DP [29]. All of these are intended to protect the input data rather than the input distributions. Note that distributional privacy [30] is different from DistP and does not aim at protecting the privacy of distributions.
To our knowledge, this is the first work that investigates the differential privacy of the probability distributions lying behind the input. However, a few studies have proposed related notions. Jelasity et al. [31] propose distributional differential privacy w.r.t. parameters \(\theta \) and \(\theta '\) of two distributions, which aims at protecting the privacy of the distribution parameters but is defined in a Bayesian style (unlike DP and DistP) to satisfy that for any output sequence y, \(p(\theta | y) \le e^{\varepsilon } p(\theta ' | y)\). After a preliminary version of this paper appeared on arXiv [15], a notion generalizing DistP, called profile-based privacy, was proposed in [32].
Some studies are technically related to our work. Song et al. [21] propose the Wasserstein mechanism to provide Pufferfish privacy, which protects correlated inputs. Fernandes et al. [33] introduce Earth mover’s privacy, which is technically different from DistP in that their mechanism obfuscates a vector (a bag-of-words) instead of a distribution, and perturbs each element of the vector. Sei et al. [34] propose a variant of the randomized response to protect individual data and provide high utility for databases. However, we emphasize again that our work differs from these studies in that we aim at protecting input distributions.
Location Privacy. Location privacy has been widely studied in the literature, and a survey can be found in [35]. A number of location obfuscation methods have been proposed so far, and they can be broadly divided into the following four types: perturbation (adding noise) [3, 5, 36], location generalization (merging regions) [37, 38], location hiding (deleting) [37, 39], and adding dummy locations [40,41,42]. Location obfuscation based on DP (or its variants) has also been widely studied, and such methods can be categorized into those in the centralized model [43, 44] and those in the local model [3, 5]. However, these methods aim at protecting locations, and neither at protecting users’ attributes (e.g., age, gender) nor activities (e.g., working, shopping) in a DP manner. Despite the fact that users’ attributes and activities can be inferred from their locations [6,7,8], to our knowledge, no studies have proposed obfuscation mechanisms to provide a rigorous DP guarantee for such attributes and activities.
9 Conclusion
We have proposed a formal model for the privacy of probability distributions and introduced the notion of distribution privacy (DistP). Then we have shown that existing local mechanisms deteriorate the utility by adding too much noise to provide DistP. To improve the tradeoff between DistP and utility, we have introduced the tupling mechanism and applied it to the protection of user attributes in LBSs. Then we have demonstrated that the tupling mechanism outperforms popular local mechanisms in terms of attribute obfuscation and service quality.
Notes
- 1.
In our setting, the attacker observes only a sampled output of \( A \), and not the exact histogram of \( A \)’s output distribution. See Sect. 3.5 for more details.
- 2.
This partition may be useful to achieve smaller values \((\varepsilon , \delta )\) of DistP, because \(\beta \) tends to be smaller when the population density is closer to the uniform distribution.
- 3.
In Fig. 2b, for \(\varepsilon _{\! A } \rightarrow 0\), \(\varepsilon \) does not converge to 0, since the radius \(r = 0.020\) of RL does not cover the whole \(\mathcal {Y}\). However, if \(r \ge \max _{x,y} \Vert x - y \Vert \), \(\varepsilon \) converges to 0.
References
Dwork, C.: Differential privacy. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 1–12. Springer, Heidelberg (2006). https://doi.org/10.1007/11787006_1
Duchi, J.C., Jordan, M.I., Wainwright, M.J.: Local privacy and statistical minimax rates. In: Proceedings of FOCS, pp. 429–438 (2013)
Andrés, M.E., Bordenabe, N.E., Chatzikokolakis, K., Palamidessi, C.: Geo-indistinguishability: differential privacy for location-based systems. In: Proceedings of CCS, pp. 901–914. ACM (2013)
Erlingsson, Ú., Pihur, V., Korolova, A.: RAPPOR: randomized aggregatable privacy-preserving ordinal response. In: Proceedings of CCS, pp. 1054–1067 (2014)
Bordenabe, N.E., Chatzikokolakis, K., Palamidessi, C.: Optimal geo-indistinguishable mechanisms for location privacy. In: Proceedings of CCS, pp. 251–262 (2014)
Liao, L., Fox, D., Kautz, H.: Extracting places and activities from GPS traces using hierarchical conditional random fields. Int. J. Robot. Res. 1(26), 119–134 (2007)
Zheng, V.W., Zheng, Y., Yang, Q.: Joint learning user’s activities and profiles from GPS data. In: Proceedings of LBSN, pp. 17–20 (2009)
Matsuo, Y., Okazaki, N., Izumi, K., Nakamura, Y., Nishimura, T., Hasida, K.: Inferring long-term user properties based on users’ location history. In: Proceedings of IJCAI, pp. 2159–2165 (2007)
Yang, D., Qu, B., Cudré-Mauroux, P.: Privacy-preserving social media data publishing for personalized ranking-based recommendation. IEEE Trans. Knowl. Data Eng. 31(3), 507–520 (2019)
Otterbacher, J.: Inferring gender of movie reviewers: exploiting writing style, content and metadata. In: Proceedings of CIKM, pp. 369–378 (2010)
Weinsberg, U., Bhagat, S., Ioannidis, S., Taft, N.: BlurMe: inferring and obfuscating user gender based on ratings. In: Proceedings of RecSys, pp. 195–202 (2012)
Gong, N.Z., Liu, B.: Attribute inference attacks in online social networks. ACM Trans. Priv. Secur. 21(1), 3:1–3:30 (2018)
Mislove, A., Viswanath, B., Gummadi, P.K., Druschel, P.: You are who you know: inferring user profiles in online social networks. In: Proceedings of WSDM, pp. 251–260 (2010)
Kairouz, P., Bonawitz, K., Ramage, D.: Discrete distribution estimation under local privacy. In: Proceedings of ICML, pp. 2436–2444 (2016)
Kawamoto, Y., Murakami, T.: Local obfuscation mechanisms for hiding probability distributions, CoRR, vol. abs/1812.00939 (2018). arXiv:1812.00939
Chatzikokolakis, K., Andrés, M.E., Bordenabe, N.E., Palamidessi, C.: Broadening the scope of differential privacy using metrics. In: De Cristofaro, E., Wright, M. (eds.) PETS 2013. LNCS, vol. 7981, pp. 82–102. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39077-7_5
Vaserstein, L.: Markovian processes on countable space product describing large systems of automata. Probl. Peredachi Inf. 5(3), 64–72 (1969)
Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 265–284. Springer, Heidelberg (2006). https://doi.org/10.1007/11681878_14
Kifer, D., Machanavajjhala, A.: No free lunch in data privacy. In: Proceedings of SIGMOD, pp. 193–204 (2011)
Kifer, D., Machanavajjhala, A.: A rigorous and customizable framework for privacy. In: Proceedings of PODS, pp. 77–88 (2012)
Song, S., Wang, Y., Chaudhuri, K.: Pufferfish privacy mechanisms for correlated data. In: Proceedings of SIGMOD, pp. 1291–1306 (2017)
Xu, J., Zhang, Z., Xiao, X., Yang, Y., Yu, G., Winslett, M.: Differentially private histogram publication. VLDB J. 22(6), 797–822 (2013)
Kawamoto, Y., Chatzikokolakis, K., Palamidessi, C.: On the compositionality of quantitative information flow. Log. Methods Comput. Sci. 13(3) (2017)
Alvim, M.S., Chatzikokolakis, K., Palamidessi, C., Pazii, A.: Invited paper: local differential privacy on metric spaces: optimizing the trade-off with utility. In: Proceedings of CSF, pp. 262–267 (2018)
Yang, D., Zhang, D., Qu, B.: Participatory cultural mapping based on collective behavior data in location based social networks. ACM Trans. Intell. Syst. Technol. 7(3), 30:1–30:23 (2015)
Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification. Wiley, Hoboken (2000)
Hsu, J., et al.: Differential privacy: an economic method for choosing epsilon. In: Proceedings of CSF, pp. 398–410 (2014)
Barthe, G., Olmedo, F.: Beyond differential privacy: composition theorems and relational logic for f-divergences between probabilistic programs. In: Fomin, F.V., Freivalds, R., Kwiatkowska, M., Peleg, D. (eds.) ICALP 2013. LNCS, vol. 7966, pp. 49–60. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39212-2_8
Murakami, T., Kawamoto, Y.: Utility-optimized local differential privacy mechanisms for distribution estimation. In: Proceedings of USENIX Security (2019, to appear)
Blum, A., Ligett, K., Roth, A.: A learning theory approach to noninteractive database privacy. J. ACM 60(2), 12:1–12:25 (2013)
Jelasity, M., Birman, K.P.: Distributional differential privacy for large-scale smart metering. In: Proceedings of IH&MMSec, pp. 141–146 (2014)
Geumlek, J., Chaudhuri, K.: Profile-based privacy for locally private computations, CoRR, vol. abs/1903.09084 (2019)
Fernandes, N., Dras, M., McIver, A.: Generalised differential privacy for text document processing. In: Nielson, F., Sands, D. (eds.) POST 2019. LNCS, vol. 11426, pp. 123–148. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-17138-4_6
Sei, Y., Ohsuga, A.: Differential private data collection and analysis based on randomized multiple dummies for untrusted mobile crowdsensing. IEEE Trans. Inf. Forensics Secur. 12(4), 926–939 (2017)
Chatzikokolakis, K., ElSalamouny, E., Palamidessi, C., Pazii, A.: Methods for location privacy: a comparative overview. Found. Trends® Priv. Secur. 1(4), 199–257 (2017)
Shokri, R., Theodorakopoulos, G., Troncoso, C., Hubaux, J.-P., Boudec, J.-Y.L.: Protecting location privacy: optimal strategy against localization attacks. In: Proceedings of CCS, pp. 617–627. ACM (2012)
Shokri, R., Theodorakopoulos, G., Boudec, J.-Y.L., Hubaux, J.-P.: Quantifying location privacy. In: Proceedings of S&P, pp. 247–262. IEEE (2011)
Xue, M., Kalnis, P., Pung, H.K.: Location diversity: enhanced privacy protection in location based services. In: Choudhury, T., Quigley, A., Strang, T., Suginuma, K. (eds.) LoCA 2009. LNCS, vol. 5561, pp. 70–87. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-01721-6_5
Hoh, B., Gruteser, M., Xiong, H., Alrabady, A.: Preserving privacy in GPS traces via uncertainty-aware path cloaking. In: Proceedings of CCS, pp. 161–171. ACM (2007)
Bindschaedler, V., Shokri, R.: Synthesizing plausible privacy-preserving location traces. In: Proceedings of S&P, pp. 546–563 (2016)
Chow, R., Golle, P.: Faking contextual data for fun, profit, and privacy. In: Proceedings of PES, pp. 105–108. ACM (2009)
Kido, H., Yanagisawa, Y., Satoh, T.: Protection of location privacy using dummies for location-based services. In: Proceedings of ICDE Workshops, p. 1248 (2005)
Machanavajjhala, A., Kifer, D., Abowd, J.M., Gehrke, J., Vilhuber, L.: Privacy: theory meets practice on the map. In: Proceedings of ICDE, pp. 277–286. IEEE (2008)
Ho, S.-S., Ruan, S.: Differential privacy for location pattern mining. In: Proceedings of SPRINGL, pp. 17–24. ACM (2011)
Cheng, Z., Caverlee, J., Lee, K., Sui, D.Z.: Exploring millions of footprints in location sharing services. In: Proceedings of ICWSM (2011)
Acknowledgment
We thank the reviewers, Catuscia Palamidessi, Gilles Barthe, and Frank D. Valencia for their helpful comments on preliminary drafts.
A Experimental Results
In this section we present some of the experimental results on the following four attributes. See [15] for further experimental results.
-
social/less-social represent whether a user’s social status [45] (the number of followers divided by the number of followings) is greater than 5 or not.
-
workplace/non-workplace represent whether a user is at the office or not. This attribute can be thought of as sensitive when it implies users are unemployed.
-
home/out represent whether a user is at home or not.
-
north/south represent whether a user’s home is located in the northern or southern Manhattan. This attribute needs to be protected from stalkers.
First, we compare different obfuscation mechanisms for various attributes in Figs. 5, 6a, and b. We also compare different time periods: 00 h–05 h, 06 h–11 h, 12 h–17 h, 18 h–23 h in Manhattan in Fig. 7.
Next, we compare the experimental results on five cities: Manhattan, eastern Tokyo, western Tokyo, London, and Paris. In Table 2 we show examples of parameters that achieve the same levels of DistP in different cities. More details can be found in Fig. 8 (male/female).
Finally, we compare theoretical/empirical values of \(\varepsilon \)-DistP as follows. In Table 3, we show the theoretical values of \(\varepsilon \) calculated by Theorem 3 for \(\delta = 0.001, 0.01, 0.1\). Compared to experiments, those values can only give loose upper bounds on \(\varepsilon \), because of the concentration inequality used to derive Theorem 3.