1 Introduction

Differential privacy [1] is a quantitative notion of privacy that has been applied to a wide range of areas, including databases, geo-locations, and social networks. Differential privacy can be achieved by adding controlled noise to the data that we wish to hide or obfuscate. In particular, a number of recent studies have proposed local obfuscation mechanisms [2,3,4], namely, randomized algorithms that perturb each single “point” datum (e.g., a geo-location point) by adding certain probabilistic noise before sending it out to a data collector. However, the obfuscation of a probability distribution of points (e.g., a distribution of locations of users at home/outside home) still remains to be investigated in terms of differential privacy.

For example, a location-based service (LBS) provider collects each user’s geo-location data to provide a service (e.g., navigation or point-of-interest search), and the privacy of user locations in such services has been widely studied. As shown in [3, 5], users can hide their accurate locations by sending to the LBS provider only approximate location information calculated by an obfuscation mechanism.

Nevertheless, a user’s location information can be used by an attacker to infer the user’s attributes (e.g., age, gender, social status, and residence area) or activities (e.g., working, sleeping, and shopping) [6,7,8,9]. For example, when an attacker knows the distribution of residence locations, he may detect whether given users are at home or outside home after observing their obfuscated locations. For another example, an attacker may learn whether users are rich or poor by observing their obfuscated behaviors. Such attributes can be exploited, e.g., by robbers, and hence should be protected from them. Privacy issues of such attribute inference are also known in other applications, including recommender systems [10, 11] and online social networks [12, 13]. However, to our knowledge, no literature has addressed the protection of attributes in terms of local differential privacy.

To illustrate the privacy of attributes in an LBS, let us consider a running example where users try to prevent an attacker from inferring whether they are at home or not. Let \(\lambda _{{ home}}\) and \(\lambda _{{ out}}\) be the probability distributions of locations of the users at home and outside home, respectively. Then the privacy of this attribute means that the attacker cannot learn from an obfuscated location whether the actual location follows the distribution \(\lambda _{{ home}}\) or \(\lambda _{{ out}}\).

This can be formalized using differential privacy. For each \(t\in \{ { home}, { out}\}\),  we denote by \(p( y \,|\, \lambda _{t})\) the probability of observing an obfuscated location y when an actual location is distributed over \(\lambda _{t}\). Then the privacy of \(t\) is defined by:

$$\begin{aligned} \frac{ p( y \,|\, \lambda _{{ home}}) }{ p( y \,|\, \lambda _{{ out}}) } \le e^{\varepsilon } {,} \end{aligned}$$

which represents that the attacker cannot distinguish whether the users follow the distribution \(\lambda _{{ home}}\) or \(\lambda _{{ out}}\), up to the degree \(\varepsilon \).

To generalize this, we define a notion, called distribution privacy (DistP), as the differential privacy for probability distributions. Roughly, we say that a mechanism \( A \) provides DistP w.r.t. \(\lambda _{{ home}}\) and \(\lambda _{{ out}}\) if no attacker can detect whether the actual location (input to \( A \)) is sampled from \(\lambda _{{ home}}\) or \(\lambda _{{ out}}\) after observing an obfuscated location y (output by \( A \)). Here we note that each user applies the mechanism \( A \) locally by herself, hence can customize the amount of noise added to y according to the attributes she wants to hide.

Although existing local differential privacy mechanisms are designed to protect point data, they also hide the distribution that the point data follow. However, we demonstrate that they need to add a large amount of noise to obfuscate distributions, and thus deteriorate the utility of the mechanisms.

To achieve both high utility and strong privacy of attributes, we introduce a mechanism, called the tupling mechanism, that not only perturbs an actual input, but also adds random dummy data to the output. Then we prove that this mechanism provides DistP. Since the random dummy data obfuscate the shape of the distribution, users can instead reduce the amount of noise added to the actual input, hence they get better utility (e.g., quality of a POI service).

This implies that DistP is a relaxation of differential privacy that guarantees the privacy of attributes while achieving higher utility by weakening the differentially private protection of point data. For example, suppose that users do not mind revealing their actual locations outside home, but want to hide (e.g., from robbers) the fact that they are outside home. When the users employ the tupling mechanism, they output both their (slightly perturbed) actual locations and random dummy locations. Since their outputs include their (roughly) actual locations, they obtain high utility (e.g., learning shops near their locations), while their actual location points are protected only weakly by differential privacy. However, their attributes at home/outside home are hidden among the dummy locations, hence protected by DistP. By experiments, we demonstrate that the tupling mechanism is useful to protect the privacy of attributes, and outperforms popular existing mechanisms (the randomized response [14], the planar Laplace [3] and Gaussian mechanisms) in terms of DistP and service quality.

Our Contributions. The main contributions of this work are given as follows:

  • We propose a formal model for the privacy of probability distributions in terms of differential privacy. Specifically, we define the notion of distribution privacy (DistP) to represent that the attacker cannot significantly gain information on the distribution of a mechanism’s input by observing its output.

  • We provide a theoretical foundation of DistP, including its useful properties (e.g., compositionality) and its interpretation (e.g., in terms of the Bayes factor).

  • We quantify the effect of distribution obfuscation by existing local mechanisms. In particular, we show that (extended) differential privacy mechanisms are able to make any two distributions less distinguishable, while deteriorating the utility by adding too much noise to protect all point data.

  • For instance, we prove that extended differential privacy mechanisms (e.g., the Laplace mechanism) need to add a large amount of noise proportionally to the \(\infty \)-Wasserstein distance \( W _{\infty , d }(\lambda _0, \lambda _1)\) between the two distributions \(\lambda _0\) and \(\lambda _1\) that we want to make indistinguishable.

  • We show that DistP is a useful relaxation of differential privacy when users want to hide their attributes, but not necessarily to protect all point data.

  • To improve the tradeoff between DistP and utility, we introduce the tupling mechanism, which locally adds random dummies to the output. Then we show that this mechanism provides DistP and high utility for users.

  • We apply local mechanisms to the obfuscation of attributes in location based services (LBSs). Then we show that the tupling mechanism outperforms popular existing mechanisms in terms of DistP and service quality.

All proofs of technical results can be found in [15].

2 Preliminaries

In this section we recall some notions of privacy and metrics used in this paper. Let \(\mathbb {N}^{>0}\) be the set of positive integers, and \(\mathbb {R}^{>0}\) (resp. \(\mathbb {R}^{\ge 0}\)) be the set of positive (resp. non-negative) real numbers. Let [0, 1] be the set of non-negative real numbers not greater than 1. Let \(\varepsilon , \varepsilon _0, \varepsilon _1 \in \mathbb {R}^{\ge 0}\) and \(\delta , \delta _0, \delta _1 \in [0, 1]\).

Fig. 1. Coupling \(\gamma \).

2.1 Notations for Probability Distributions

We denote by \(\mathbb {D}\mathcal {X}\) the set of all probability distributions over a set \(\mathcal {X}\), and by \(|\mathcal {X}|\) the number of elements in a finite set \(\mathcal {X}\).

Given a finite set \(\mathcal {X}\) and a distribution \(\lambda \in \mathbb {D}\mathcal {X}\), the probability of drawing a value x from \(\lambda \) is denoted by \(\lambda [x]\). For a finite subset \(\mathcal {X}'\subseteq \mathcal {X}\) we define \(\lambda [\mathcal {X}']\) by: \(\lambda [\mathcal {X}'] = \sum _{x'\in \mathcal {X}'} \lambda [x']\). For a distribution \(\lambda \) over a finite set \(\mathcal {X}\), its support \(\mathsf {supp}(\lambda )\) is defined by \(\mathsf {supp}(\lambda ) = \{ x \in \mathcal {X}:\lambda [x] > 0 \}\). Given a \(\lambda \in \mathbb {D}\mathcal {X}\) and a \(f:\mathcal {X}\rightarrow \mathbb {R}\), the expected value of f over \(\lambda \) is: \({\mathbb {E}}_{x\sim \lambda }[f(x)] {\mathop {=}\limits ^{\mathrm {def}}}\sum _{x\in \mathcal {X}} \lambda [x] f(x)\).

For a randomized algorithm \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) and a set \(R\subseteq \mathcal {Y}\) we denote by \( A (x)[R]\) the probability that given input x, \( A \) outputs one of the elements of R. Given a randomized algorithm \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) and a distribution \(\lambda \) over \(\mathcal {X}\), we define \({ A }^{\#}(\lambda )\) as the distribution of the output of \( A \) when its input is drawn from \(\lambda \). Formally, for a finite set \(\mathcal {X}\), the lifting of \( A \) w.r.t. \(\mathcal {X}\) is the function \({ A }^{\#}: \mathbb {D}\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) such that \( { A }^{\#}(\lambda )[R] {\mathop {=}\limits ^{\mathrm {def}}}\sum _{x\in \mathcal {X}}\lambda [x] A (x)[R] \).
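To make the lifting concrete, the following Python sketch (with hypothetical toy values) computes \({ A }^{\#}(\lambda )[R]\) for a finite mechanism represented as a table of output distributions; the names `A`, `lam`, and `lift` are ours, for illustration only.

```python
# A minimal sketch of the lifting A^#, assuming a finite mechanism is given as
# a dict mapping each input x to a dict of output probabilities.

def lift(A, lam, R):
    """A^#(lam)[R] = sum_x lam[x] * A(x)[R]."""
    return sum(lam[x] * sum(A[x].get(y, 0.0) for y in R) for x in lam)

# A hypothetical toy mechanism over X = Y = {0, 1} and an input distribution.
A = {0: {0: 0.75, 1: 0.25},
     1: {0: 0.25, 1: 0.75}}
lam = {0: 0.6, 1: 0.4}
print(lift(A, lam, {0}))   # 0.6 * 0.75 + 0.4 * 0.25 = 0.55
```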

2.2 Differential Privacy (DP)

Differential privacy [1] captures the idea that given two “adjacent” inputs x and \(x'\) (from a set \(\mathcal {X}\) of data with an adjacency relation \(\varPhi \)), a randomized algorithm \( A \) cannot distinguish x from \(x'\) (with degree of \(\varepsilon \) and up to exceptions \(\delta \)).

Definition 1

(Differential privacy). Let e be the base of natural logarithm. A randomized algorithm \( A : \mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon ,\delta )\)-differential privacy \({\textit{(}}{{{\textsf {\textit{DP}}}}}{\textit{)}}\) w.r.t. an adjacency relation \(\varPhi \subseteq \mathcal {X}\times \mathcal {X}\) if for any \((x, x')\in \varPhi \) and any \(R\subseteq \mathcal {Y}\),

$$\begin{aligned} \mathrm {Pr}[ A (x)\in R ] \le e^\varepsilon \,\mathrm {Pr}[ A (x')\in R ] + \delta \end{aligned}$$

where the probability is taken over the random choices in \( A \).
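For a finite output range, the smallest \(\delta \) for which \( A \) satisfies \((\varepsilon ,\delta )\)-DP w.r.t. \(\varPhi \) can be computed directly, since for each adjacent pair the worst-case set R consists of the outputs y with \( A (x)[y] > e^{\varepsilon } A (x')[y]\). The sketch below assumes the same finite dictionary representation of \( A \) as above and is an illustration, not code from the paper.

```python
import math

def smallest_delta(A, Phi, eps):
    """Smallest delta such that A is (eps, delta)-DP w.r.t. Phi (finite range).
    For each adjacent pair (x, x'), the worst-case set R in Definition 1 is
    {y : A(x)[y] > e^eps * A(x')[y]}, which gives
    delta = max_{(x,x')} sum_y max(0, A(x)[y] - e^eps * A(x')[y])."""
    delta = 0.0
    for x, x2 in Phi:
        ys = set(A[x]) | set(A[x2])
        gap = sum(max(0.0, A[x].get(y, 0.0) - math.exp(eps) * A[x2].get(y, 0.0))
                  for y in ys)
        delta = max(delta, gap)
    return delta
```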

2.3 Differential Privacy Mechanisms and Sensitivity

Differential privacy can be achieved by a privacy mechanism, namely a randomized algorithm that adds probabilistic noise to a given input that we want to protect. The amount of noise added by some popular mechanisms (e.g., the exponential mechanism) depends on a utility function \( u :\mathcal {X}\times \mathcal {Y}\rightarrow \mathbb {R}\) that maps a pair of input and output to a utility score. More precisely, the noise is added according to the “sensitivity” of \( u \), which we define as follows.

Definition 2

(Utility distance). The utility distance w.r.t a utility function \( u :(\mathcal {X}\times \mathcal {Y})\rightarrow \mathbb {R}\) is the function \( d \) given by: \( d (x,x') {\mathop {=}\limits ^{\mathrm {def}}}\max _{y\in \mathcal {Y}} \bigl | u (x, y) - u (x', y) \bigr |\).

Note that \( d \) is a pseudometric. Hereafter we assume that for all \(x, y\), \( u (x,y)=0\) is logically equivalent to \(x=y\). Then the utility distance \( d \) is a metric.

Definition 3

(Sensitivity w.r.t. an adjacency relation). The sensitivity of a utility function \( u \) w.r.t. an adjacency relation \(\varPhi \subseteq \mathcal {X}\times \mathcal {X}\) is defined as:

$$\begin{aligned} \varDelta _{\varPhi , d }{\mathop {=}\limits ^{\mathrm {def}}}\max _{\begin{array}{c} (x,x')\in \varPhi \end{array}} d (x,x') = \max _{\begin{array}{c} (x,x')\in \varPhi \end{array}} \max _{y\in \mathcal {Y}} \bigl | u (x, y) - u (x', y) \bigr | {.} \end{aligned}$$
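Definitions 2 and 3 can be evaluated by brute force for finite domains, as in the following sketch; the dictionary-based representation of \( u \) is an assumption made here for illustration.

```python
# A small sketch of Definitions 2 and 3 for finite domains, assuming the
# utility function u is given as a dict mapping (x, y) pairs to scores.

def utility_distance(u, Y, x, x2):
    """d(x, x') = max_y |u(x, y) - u(x', y)|."""
    return max(abs(u[(x, y)] - u[(x2, y)]) for y in Y)

def sensitivity(u, Y, Phi):
    """Delta_{Phi, d} = max over adjacent pairs (x, x') of d(x, x')."""
    return max(utility_distance(u, Y, x, x2) for (x, x2) in Phi)
```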

2.4 Extended Differential Privacy (XDP)

We review the notion of extended differential privacy [16], which relaxes DP by incorporating a metric d. Intuitively, this notion guarantees that when two inputs x and \(x'\) are closer in terms of d, the output distributions are less distinguishable.

Definition 4

(Extended differential privacy). For a metric \(d: \mathcal {X}\times \mathcal {X}\rightarrow \mathbb {R}\), we say that a randomized algorithm \( A : \mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon ,\delta ,d)\)-extended differential privacy \({\textit{(}}{{{\textsf {\textit{XDP}}}}}{\textit{)}}\) if for all \(x, x'\in \mathcal {X}\) and for any \(R\subseteq \mathcal {Y}\),

$$\begin{aligned} \mathrm {Pr}[ A (x)\in R ] \le e^{\varepsilon d(x,x')} \,\mathrm {Pr}[ A (x')\in R ] + \delta {.} \end{aligned}$$

2.5 Wasserstein Metric

We recall the notion of probability coupling as follows.

Definition 5

(Coupling). Given \(\lambda _0\in \mathbb {D}\mathcal {X}_0\) and \(\lambda _1\in \mathbb {D}\mathcal {X}_1\), a coupling of \(\lambda _0\) and \(\lambda _1\) is a \(\gamma \in \mathbb {D}(\mathcal {X}_0\times \mathcal {X}_1)\) such that \(\lambda _0\) and \(\lambda _1\) are \(\gamma \)’s marginal distributions, i.e., for each \(x_0\in \mathcal {X}_0\), \(\lambda _0[x_0] =\!\sum _{x'_1\in \mathcal {X}_1}\!\gamma [x_0, x'_1]\) and for each \(x_1\in \mathcal {X}_1\), \(\lambda _1[x_1] =\!\sum _{x'_0\in \mathcal {X}_0}\!\gamma [x'_0, x_1]\). We denote by \(\mathsf {cp}(\lambda _0, \lambda _1)\) the set of all couplings of \(\lambda _0\) and \(\lambda _1\).

Example 1

(Coupling as transformation of distributions). Let us consider two distributions \(\lambda _0\) and \(\lambda _1\) shown in Fig. 1. A coupling \(\gamma \) of \(\lambda _0\) and \(\lambda _1\) shows a way of transforming \(\lambda _0\) to \(\lambda _1\). For example, \(\gamma [2, 1] = 0.1\) moves from \(\lambda _0[2]\) to \(\lambda _1[1]\).
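Since Fig. 1 is not reproduced here, the following sketch uses hypothetical values that are consistent with the example (in particular \(\gamma [2, 1] = 0.1\)) and checks the marginal conditions of Definition 5.

```python
# Hypothetical values for lambda_0, lambda_1 and a coupling gamma, consistent
# with Example 1 (gamma[2, 1] = 0.1); Fig. 1 itself is not reproduced here.
lam0 = {1: 0.2, 2: 0.6, 3: 0.2}
lam1 = {1: 0.3, 2: 0.4, 3: 0.3}
gamma = {(1, 1): 0.2, (2, 1): 0.1, (2, 2): 0.4, (2, 3): 0.1, (3, 3): 0.2}

# Check the marginal conditions of Definition 5.
for x0 in lam0:
    assert abs(sum(p for (a, _), p in gamma.items() if a == x0) - lam0[x0]) < 1e-9
for x1 in lam1:
    assert abs(sum(p for (_, b), p in gamma.items() if b == x1) - lam1[x1]) < 1e-9
```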

We then recall the \(\infty \)-Wasserstein metric [17] between two distributions.

Definition 6

(\(\infty \)-Wasserstein metric). Let \( d \) be a metric over \(\mathcal {X}\). The \(\infty \)-Wasserstein metric \( W _{\infty , d }\) w.r.t. \( d \) is defined by: for any \(\lambda _0, \lambda _1\in \mathbb {D}\mathcal {X}\),

$$\begin{aligned} W _{\infty , d }(\lambda _0, \lambda _1) = \min _{\gamma \in \mathsf {cp}(\lambda _0, \lambda _1)} \max _{(x_0, x_1)\in \mathsf {supp}(\gamma )} d (x_0, x_1) {.} \end{aligned}$$

The \(\infty \)-Wasserstein metric \( W _{\infty , d }(\lambda _0, \lambda _1)\) represents the minimum largest move between points in a transportation from \(\lambda _0\) to \(\lambda _1\). Specifically, in a transportation \(\gamma \), \(\max _{(x_0, x_1)\in \mathsf {supp}(\gamma )} d (x_0, x_1)\) represents the largest move from a point in \(\lambda _0\) to another in \(\lambda _1\). For instance, in the coupling \(\gamma \) in Example 1, the largest move is 1 (from \(\lambda _0[2]\) to \(\lambda _1[1]\), and from \(\lambda _0[2]\) to \(\lambda _1[3]\)). Such a largest move is minimized by a coupling that achieves the \(\infty \)-Wasserstein metric. We denote by \({\varGamma _{\!{ \infty , d }}}\) the set of all couplings that achieve the \(\infty \)-Wasserstein metric.
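For small finite supports, \( W _{\infty , d }\) can be computed by trying candidate thresholds in increasing order and checking, via a linear program, whether a coupling exists whose support only uses pairs at distance at most the threshold. The sketch below assumes scipy is available and is meant only as an illustration of Definition 6.

```python
import numpy as np
from scipy.optimize import linprog

def w_inf(lam0, lam1, d):
    """W_{infty,d}(lam0, lam1) for finite distributions given as dicts."""
    xs0, xs1 = sorted(lam0), sorted(lam1)
    cands = sorted({d(a, b) for a in xs0 for b in xs1})
    for t in cands:
        # Allowed pairs: those whose distance does not exceed the threshold t.
        pairs = [(i, j) for i, a in enumerate(xs0) for j, b in enumerate(xs1)
                 if d(a, b) <= t]
        # Feasibility LP: is there a coupling supported on `pairs` whose
        # marginals are lam0 and lam1?
        A_eq = np.zeros((len(xs0) + len(xs1), len(pairs)))
        for k, (i, j) in enumerate(pairs):
            A_eq[i, k] = 1.0
            A_eq[len(xs0) + j, k] = 1.0
        b_eq = np.array([lam0[a] for a in xs0] + [lam1[b] for b in xs1])
        res = linprog(np.zeros(len(pairs)), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * len(pairs), method="highs")
        if res.success:
            return t
    return float("inf")

# With the hypothetical values of Example 1 above and d(a, b) = |a - b|,
# w_inf returns 1, matching the largest move discussed in the text.
```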

Finally, we recall the notion of the lifting of relations.

Definition 7

(Lifting of relations). Given a relation \(\varPhi \subseteq \mathcal {X}\times \mathcal {X}\), the lifting of \(\varPhi \) is the maximum relation \({\varPhi }^{\#}\subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\) such that for any \((\lambda _0, \lambda _1)\in {\varPhi }^{\#}\), there exists a coupling \(\gamma \in \mathsf {cp}(\lambda _0, \lambda _1)\) satisfying \(\mathsf {supp}(\gamma )\subseteq \varPhi \).

Note that by Definition 5, the coupling \(\gamma \) is a probability distribution over \(\varPhi \) whose marginal distributions are \(\lambda _0\) and \(\lambda _1\). If \(\varPhi = \mathcal {X}\times \mathcal {X}\), then \({\varPhi }^{\#} = \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\).

3 Privacy Notions for Probability Distributions

In this section we introduce a formal model for the privacy of user attributes, which is motivated in Sect. 1.

3.1 Modeling the Privacy of User Attributes in Terms of DP

As a running example, we consider an LBS (location based service) in which each user queries an LBS provider for a list of shops nearby. To hide a user’s exact location x from the provider, the user applies a randomized algorithm \( A : \mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\), called a local obfuscation mechanism, to her location x, and obtains an approximate information y with the probability \( A (x)[y]\).

To illustrate the privacy of attributes, let us consider an example in which users try to prevent an attacker from inferring whether they are \({ male}\) or \({ female}\) by obfuscating their own exact locations using a mechanism \( A \). For each \(t\in \{ { male}, { female}\}\), let \(\lambda _{t}\in \mathbb {D}\mathcal {X}\) be the prior distribution of the location of the users who have the attribute \(t\). Intuitively, \(\lambda _{{ male}}\) (resp. \(\lambda _{{ female}}\)) represents an attacker’s belief on the location of the male (resp. female) users before the attacker observes an output of the mechanism \( A \). Then the privacy of \(t\) can be modeled as the property that the attacker cannot tell whether the actual location x follows the distribution \(\lambda _{{ male}}\) or \(\lambda _{{ female}}\) after observing an output y of \( A \).

This can be formalized in terms of \(\varepsilon \)-local DP. For each \(t\in \{ { male}, { female}\}\),  we denote by \(p( y \,|\, \lambda _{t})\) the probability of observing an obfuscated location y when an actual location x is distributed over \(\lambda _{t}\), i.e., \(p( y \,|\, \lambda _{t}) = \sum _{x\in \mathcal {X}} \lambda _{t}[x] A (x)[y]\). Then we can define the privacy of \(t\) by:

$$\begin{aligned} \textstyle \frac{ p( y \,|\, \lambda _{{ male}}) }{ p( y \,|\, \lambda _{{ female}}) } \le e^{\varepsilon } {.} \end{aligned}$$
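Concretely, for a finite mechanism the tightest \(\varepsilon \) for which this ratio bound holds over all outputs can be computed as in the following sketch; the helper names are ours, and in the paper the priors would be the empirical distributions \(\lambda _{{ male}}\) and \(\lambda _{{ female}}\).

```python
import math

def output_prob(A, lam, y):
    """p(y | lambda) = sum_x lambda[x] * A(x)[y] for a finite mechanism A."""
    return sum(lam[x] * A[x].get(y, 0.0) for x in lam)

def worst_log_ratio(A, lam_a, lam_b, Y):
    """Smallest eps such that p(y|lam_a)/p(y|lam_b) <= e^eps (and symmetrically)
    for all y; returns inf if exactly one of the probabilities is zero."""
    worst = 0.0
    for y in Y:
        pa, pb = output_prob(A, lam_a, y), output_prob(A, lam_b, y)
        if (pa > 0) != (pb > 0):
            return float("inf")
        if pa > 0 and pb > 0:
            worst = max(worst, abs(math.log(pa / pb)))
    return worst
```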

3.2 Distribution Privacy and Extended Distribution Privacy

We generalize the privacy of attributes (in Sect. 3.1) and define the notion of distribution privacy (DistP) as the differential privacy where the input is a probability distribution of data rather than a value of data. This notion models a level of obfuscation that hides which distribution a data value is drawn from. Intuitively, we say a randomized algorithm \( A \) provides DistP if, by observing an output of \( A \), we cannot detect from which distribution an input to \( A \) is generated.

Definition 8

(Distribution privacy). Let \(\varepsilon \in \mathbb {R}^{\ge 0}\) and \(\delta \in [0,1]\). We say that a randomized algorithm \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon ,\delta )\)-distribution privacy \({\textit{(}}{{{\textsf {\textit{DistP}}}}}{\textit{)}}\) w.r.t. an adjacency relation \(\varPsi \subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\) if its lifting \({ A }^{\#}:\mathbb {D}\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon ,\delta )\)-DP w.r.t. \(\varPsi \), i.e., for all pairs \((\lambda , \lambda ')\in \varPsi \) and all \(R \subseteq \mathcal {Y}\),  we have:

$$\begin{aligned} { A }^{\#}(\lambda )[R] \le e^{\varepsilon }\cdot { A }^{\#}(\lambda ')[R] + \delta {.} \end{aligned}$$

We say \( A \) provides \((\varepsilon ,\delta )\)-DistP w.r.t. \(\varLambda \subseteq \mathbb {D}\mathcal {X}\) if it provides \((\varepsilon ,\delta )\)-DistP w.r.t. \(\varLambda ^2\).

For example, the privacy of a user attribute \(t\in \{ { male}{}, { female}{} \}\) described in Sect. 3.1 can be formalized as \((\varepsilon , 0)\)-DistP w.r.t. \(\{\lambda _{{ male}}, \lambda _{{ female}}\}\).

Mathematically, DistP is not a new notion but the DP for distributions. To contrast with DistP, we refer to the DP for data values as point privacy.

Next we introduce an extended form of distribution privacy that incorporates a metric. Intuitively, extended distribution privacy guarantees that when two input distributions are closer, the output distributions must be less distinguishable.

Definition 9

(Extended distribution privacy). Let \(d: (\mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X})\rightarrow \mathbb {R}\) be a utility distance, and \(\varPsi \subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\). We say that a mechanism \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon ,d,\delta )\)-extended distribution privacy \({\textit{(}}{{{\textsf {\textit{XDistP}}}}}{\textit{)}}\) w.r.t. \(\varPsi \) if the lifting \({ A }^{\#}\) provides \((\varepsilon ,d,\delta )\)-XDP w.r.t. \(\varPsi \), i.e., for all \((\lambda , \lambda ')\in \varPsi \) and all \(R\subseteq \mathcal {Y}\), we have:

$$\begin{aligned} { A }^{\#}(\lambda )[R] \le e^{\varepsilon d(\lambda ,\lambda ')}\cdot { A }^{\#}(\lambda ')[R]+ \delta {.} \end{aligned}$$

3.3 Interpretation by Bayes Factor

The interpretation of DP has been explored in previous work [16, 18] using the notion of Bayes factor. Similarly, the meaning of DistP can also be explained in terms of Bayes factor, which compares the attacker’s prior and posterior beliefs.

Assume that an attacker has some belief on the input distribution before observing the output values of an obfuscator \( A \). We denote by \(p(\lambda )\) the prior probability that a distribution \(\lambda \) is chosen as the input distribution. By observing an output y of \( A \), the attacker updates his belief on the input distribution. We denote by \(p(\lambda | y)\) the posterior probability of \(\lambda \) being chosen, given an output y.

For two distributions \(\lambda _0, \lambda _1\), the Bayes factor \(K(\lambda _0, \lambda _1, y)\) is defined as the ratio of the two posteriors divided by that of the two priors: \(K(\lambda _0, \lambda _1, y) = \frac{p(\lambda _0|y)}{p(\lambda _1|y)} \big / \frac{p(\lambda _0)}{p(\lambda _1)}\). If the Bayes factor is far from 1, the attacker significantly updates his belief on the distribution by observing a perturbed output y of \( A \).

Assume that \( A \) provides \((\varepsilon ,0)\)-DistP. By Bayes’ theorem, we obtain:

$$\begin{aligned} K(\lambda _0, \lambda _1, y) = \textstyle \frac{p(\lambda _0|y)}{p(\lambda _1|y)}\cdot \frac{p(\lambda _1)}{p(\lambda _0)} = \frac{p(y|\lambda _0)}{p(y|\lambda _1)} = \frac{{ A }^{\#}(\lambda _0)[y]}{{ A }^{\#}(\lambda _1)[y]} \le e^\varepsilon {.} \end{aligned}$$

Intuitively, if the attacker believes that \(\lambda _0\) is k times more likely than \(\lambda _1\) before the observation, then he believes that \(\lambda _0\) is \(k\cdot e^\varepsilon \) times more likely than \(\lambda _1\) after the observation. This means that for a small value of \(\varepsilon \), DistP guarantees that the attacker does not gain information on the distribution by observing y.

In the case of XDistP, the Bayes factor \(K(\lambda _0, \lambda _1, y)\) is bounded above by \(e^{\varepsilon d(\lambda _0, \lambda _1)}\). Hence the attacker gains more information for a larger distance \(d(\lambda _0, \lambda _1)\).

Table 1. Summary of basic properties of DistP.

3.4 Privacy Guarantee for Attackers with Close Beliefs

In the previous sections, we assumed that we know the distance between the two actual input distributions, and can thus determine the amount of noise required for distribution obfuscation. However, an attacker may have somewhat different beliefs about the distributions, which may even be closer to the actual ones, e.g., more accurate distributions obtained from more observations or from specific situations (e.g., daytime/nighttime).

To formalize this, for each \(\lambda \in \mathbb {D}\mathcal {X}\), let \(\tilde{\lambda }\) be an attacker’s belief on \(\lambda \). We say that an attacker has \((c, d )\)-close beliefs if each distribution \(\lambda \) satisfies \( d (\lambda , \tilde{\lambda }) \le c\). Then extended distribution privacy against such attackers is guaranteed as follows:

Proposition 1

(XDistP with close beliefs). Let \( A : \mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provide \((\varepsilon , d , 0)\)-XDistP w.r.t. some \(\varPsi \subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\). If an attacker has \((c, d )\)-close beliefs, then for all \((\lambda _0, \lambda _1)\in \varPsi \) and all \(R \subseteq \mathcal {Y}\), we have \( { A }^{\#}(\tilde{\lambda }_0)[R] \le e^{\varepsilon \left( d (\lambda _0, \lambda _1) + 2c \right) } \cdot { A }^{\#}(\tilde{\lambda }_1)[R] {.} \)

When the attacker’s beliefs are closer to ours, c is smaller, hence a stronger distribution privacy is guaranteed. See [15] for the corresponding proposition for DistP. Note that assuming certain attacker beliefs is also inevitable in many previous studies, e.g., when we want to protect the privacy of correlated data [19,20,21].

3.5 Difference from the Histogram Privacy

Finally, we present a brief remark on the difference between DistP and the differential privacy of histogram publication (e.g., [22]). Roughly, a histogram publication mechanism is a central mechanism that aims at hiding a single record \(x\in \mathcal {X}\) and outputs an obfuscated histogram, e.g., a distribution \(\mu \in \mathbb {D}\mathcal {Y}\), whereas a DistP mechanism is a local mechanism that aims at hiding an input distribution \(\lambda \in \mathbb {D}\mathcal {X}\) and outputs a single perturbed value \(y\in \mathcal {Y}\).

Note that neither of these implies the other. The \(\varepsilon \)-DP of a histogram publication mechanism means that for any two adjacent inputs \(x, x' \in \mathcal {X}\) and any histogram \(\mu \in \mathbb {D}\mathcal {Y}\), \( \frac{p(\mu | x)}{p(\mu | x')} \le e^\varepsilon . \) However, this does not imply \(\varepsilon \)-DistP, i.e., that for any adjacent input distributions \(\lambda , \lambda ' \in \mathbb {D}\mathcal {X}\) and any output \(y \in \mathcal {Y}\), \( \frac{p(y | \lambda )}{p(y | \lambda ')} \le e^\varepsilon \).

4 Basic Properties of Distribution Privacy

In Table 1, we show basic properties of DistP. (See the arXiv version [15] for the full table with XDistP and their detailed proofs.)

The composition \(A_1 \mathbin {\odot }A_0\) means that an identical input x is given to two DistP mechanisms \(A_0\) and \(A_1\), whereas the composition \(A_1 \mathbin {\bullet }A_0\) means that independent inputs \(x_b\) are provided to mechanisms \( A _b\) [23]. The compositionality can be used to quantify the attribute privacy against an attacker who obtains multiple released data each obfuscated for the purpose of protecting a different attribute. For example, let \(\varPsi = \{ (\lambda _{{ male}}, \lambda _{{ female}}), (\lambda _{{ home}}, \lambda _{{ out}}) \}\), and \( A _0\) (resp. \( A _1\)) be a mechanism providing \(\varepsilon _0\)-DistP (resp. \(\varepsilon _1\)-DistP) w.r.t. \(\varPsi \). When \( A _0\) (resp. \( A _1\)) obfuscates a location \(x_0\) for the sake of protecting male/female (resp. home/out), then both male/female and home/out are protected with \((\varepsilon _0+\varepsilon _1)\)-DistP.

As for pre-processing, the stability notion is different from that for DP:

Definition 10

(Stability). Let \(c\in \mathbb {N}^{>0}\), \(\varPsi \subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\), and \( W \) be a metric over \(\mathbb {D}\mathcal {X}\). A transformation \(T:\mathbb {D}\mathcal {X}\rightarrow \mathbb {D}\mathcal {X}\) is \((c, \varPsi )\)-stable if for any \((\lambda _0,\lambda _1)\in \varPsi \), \(T(\lambda _0)\) can be reached from \(T(\lambda _1)\) in at most c steps over \(\varPsi \). Analogously, \(T:\mathbb {D}\mathcal {X}\rightarrow \mathbb {D}\mathcal {X}\) is \((c, W )\)-stable if for any \(\lambda _0,\lambda _1\in \mathbb {D}\mathcal {X}\), \( W (T(\lambda _0),T(\lambda _1)) \le c W (\lambda _0,\lambda _1)\).

We present relationships among privacy notions in [15]. An important property is that when the relation \(\varPsi \subseteq \mathbb {D}\mathcal {X}\times \mathbb {D}\mathcal {X}\) includes pairs of point distributions (i.e., distributions having single points with probability 1), \(\textsf {DistP}{}\) (resp. \(\textsf {XDistP}{}\)) implies \(\textsf {DP}{}\) (resp. \(\textsf {XDP}{}\)). In contrast, if \(\varPsi \) does not include pairs of point distributions, DistP (resp. \(\textsf {XDistP}{}\)) may not imply DP (resp. \(\textsf {XDP}{}\)), as in Sect. 6.

5 Distribution Obfuscation by Point Obfuscation

In this section we present how the point obfuscation mechanisms (including DP and XDP mechanisms) contribute to the obfuscation of probability distributions.

5.1 Distribution Obfuscation by DP Mechanisms

We first show that every DP mechanism provides DistP. (See Definition 7 for \({\varPhi }^{\#}\).)

Theorem 1

(\((\varepsilon , \delta )\)-DP \(\Rightarrow \) \((\varepsilon ,\, \delta \cdot |\varPhi |)\)-DistP). Let \(\varPhi \subseteq \mathcal {X}\times \mathcal {X}\). If \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon , \delta )\)-DP w.r.t. \(\varPhi \), then it provides \((\varepsilon , \delta \cdot |\varPhi |)\)-DistP w.r.t. \({\varPhi }^{\#}\).

This means that the mechanism \( A \) makes any pair \((\lambda _0, \lambda _1)\in {\varPhi }^{\#}\) indistinguishable up to the threshold \(\varepsilon \) and with exceptions \(\delta \cdot |\varPhi |\). Intuitively, when \(\lambda _0\) and \(\lambda _1\) are adjacent w.r.t. the relation \({\varPhi }^{\#}\), we can construct \(\lambda _1\) from \(\lambda _0\) only by moving mass from \(\lambda _0[x_0]\) to \(\lambda _1[x_1]\) where \((x_0, x_1)\in \varPhi \) (i.e., \(x_0\) is adjacent to \(x_1\)).

Example 2

(Randomized response). By Theorem 1, the \((\varepsilon , 0)\)-DP randomized response [14] and RAPPOR [4] provide \((\varepsilon , 0)\)-DistP. When we use these mechanisms, the estimation of the input distribution is harder for a smaller \(\varepsilon \). However, these DP mechanisms tend to have small utility, because they add a large amount of noise to hide not only the input distributions but also everything else about the inputs.
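As a concrete instance, here is a hedged sketch of one standard (k-ary) form of the randomized response over a finite domain; the exact mechanisms of [14] and RAPPOR may differ in details, but this form provides \(\varepsilon \)-DP and hence, by Theorem 1, \((\varepsilon , 0)\)-DistP.

```python
import math, random

def randomized_response(x, domain, eps):
    """k-ary randomized response: report the true value x with probability
    e^eps / (e^eps + |domain| - 1), and a uniformly random other value
    otherwise.  The ratio of output probabilities for any two inputs is
    at most e^eps, so this provides eps-DP."""
    k = len(domain)
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p_true:
        return x
    return random.choice([v for v in domain if v != x])
```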

5.2 Distribution Obfuscation by XDP Mechanisms

Compared to DP mechanisms, XDP mechanisms are known to provide better utility. Alvim et al. [24] show that the planar Laplace mechanism [3] adds less noise than the randomized response, since XDP hides only closer locations. However, we show that XDP mechanisms still need to add a large amount of noise, proportional to the \(\infty \)-Wasserstein distance between the distributions that we want to make indistinguishable.

The \(\infty \)-Wasserstein Distance \( W _{\infty , d }\) as Utility Distance. We first observe what value of \(\varepsilon '\) suffices for an \(\varepsilon '\)-XDP mechanism (e.g., the Laplace mechanism) to make two distributions \(\lambda _0\) and \(\lambda _1\) indistinguishable in terms of \(\varepsilon \)-DistP.

Suppose that \(\lambda _0\) and \(\lambda _1\) are point distributions such that \(\lambda _0[x_0] = \lambda _1[x_1] = 1\) for some \(x_0,x_1\in \mathcal {X}\). Then an \(\varepsilon '\)-XDP mechanism \( A \) satisfies:

$$\begin{aligned} D _{\infty }({ A }^{\#}(\lambda _0)\! \parallel \!{ A }^{\#}(\lambda _1))&= D _{\infty }( A (x_0)\! \parallel \! A (x_1)) \le \varepsilon ' d (x_0,x_1) {.} \end{aligned}$$

In order for \( A \) to provide \(\varepsilon \)-DistP, \(\varepsilon '\) should be defined as \(\frac{\varepsilon }{ d (x_0,x_1)}\). That is, the noise added by \( A \) should be proportional to the distance between \(x_0\) and \(x_1\).

To extend this to arbitrary distributions, we need to define a utility metric between distributions. A natural possible definition would be the largest distance between values of \(\lambda _0\) and \(\lambda _1\), i.e., the diameter over the supports defined by:

$$\begin{aligned} \mathsf {diam}(\lambda _0, \lambda _1) = \max _{x_0\in \mathsf {supp}(\lambda _0), x_1\in \mathsf {supp}(\lambda _1)} d (x_0, x_1) {.} \end{aligned}$$

However, when there is an outlier in \(\lambda _0\) or \(\lambda _1\) that is far from other values in the supports, then the diameter \(\mathsf {diam}(\lambda _0, \lambda _1)\) is large. Hence the mechanisms that add noise proportionally to the diameter would lose utility too much.

To have better utility, we employ the \(\infty \)-Wasserstein metric \( W _{\infty , d }\). The idea is that given two distributions \(\lambda _0\) and \(\lambda _1\) over \(\mathcal {X}\), we consider the cost of a transportation of weights from \(\lambda _0\) to \(\lambda _1\). The transportation is formalized as a coupling \(\gamma \) of \(\lambda _0\) and \(\lambda _1\) (see Definition 5), and the cost of the largest move is \( \displaystyle \varDelta _{\mathsf {supp}(\gamma ), d } = \max _{(x_0, x_1)\in \mathsf {supp}(\gamma )} d (x_0,x_1), \) i.e., the sensitivity w.r.t. the adjacency relation \(\mathsf {supp}(\gamma )\subseteq \mathcal {X}\times \mathcal {X}\) (Definition 3). The minimum cost of the largest move is given by the \(\infty \)-Wasserstein metric: \( W _{\infty , d }(\lambda _0, \lambda _1)= \displaystyle \min _{\gamma \in \mathsf {cp}(\lambda _0, \lambda _1)} \varDelta _{\mathsf {supp}(\gamma ), d } {.} \)

XDP Implies XDistP. We show that every XDP mechanism provides XDistP with the metric \( W _{\infty , d }\). To formalize this, we define a lifted relation \({\varPhi }_{ W _{\infty }}^{\#}\) as the maximum relation over \(\mathbb {D}\mathcal {X}\) s.t. for any \((\lambda _0, \lambda _1)\in {\varPhi }_{ W _{\infty }}^{\#}\), there is a coupling \(\gamma \in \mathsf {cp}(\lambda _0, \lambda _1)\) satisfying \(\mathsf {supp}(\gamma )\subseteq \varPhi \) and \(\gamma \in {\varGamma _{\!{ \infty , d }}}(\lambda _0, \lambda _1)\). Then \({\varPhi }_{ W _{\infty }}^{\#}\subseteq {\varPhi }^{\#}\) holds.

Theorem 2

(\((\varepsilon , d , \delta )\)-XDP\(\,\Rightarrow \,(\varepsilon , W _{\infty , d }, \delta \!\cdot \!|\varPhi |)\)-XDistP). If \( A \!:\!\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) provides \((\varepsilon , d , \delta )\)-XDP w.r.t. \(\varPhi \subseteq \mathcal {X}\times \mathcal {X}\), it provides \((\varepsilon , W _{\infty , d }, {\delta \!\cdot \!|\varPhi |})\)-XDistP w.r.t. \({\varPhi }_{ W _{\infty }}^{\#}\).

Algorithm 1. The tupling mechanism \( Q ^\mathsf{tp}_{k,\nu , A }\).

By Theorem 2, when \(\delta > 0\), the noise required for obfuscation is proportional to \(|\varPhi |\), which is at most the domain size squared \(|\mathcal {X}|^2\). This implies that for a larger domain \(\mathcal {X}\), the Gaussian mechanism is not suited for distribution obfuscation. We will demonstrate this by experiments in Sect. 7.4.

In contrast, the Laplace/exponential mechanisms provide \((\varepsilon , W _{\infty , d }, 0)\)-XDistP. Since \( W _{\infty , d }(\lambda _0, \lambda _1) \le \mathsf {diam}(\lambda _0, \lambda _1)\), the noise added proportionally to \( W _{\infty , d }\) can be smaller than that added proportionally to \(\mathsf {diam}\). This implies that obfuscating a distribution requires less noise than obfuscating a set of data. However, the required noise can still be very large when we want to make two distant distributions indistinguishable.

6 Distribution Obfuscation by Random Dummies

In this section we introduce a local mechanism called a tupling mechanism to improve the tradeoff between DistP and utility, as motivated in Sect. 1.

6.1 Tupling Mechanism

We first define the tupling mechanism as a local mechanism that obfuscates a given input x by using a point perturbation mechanism \( A \) (not necessarily in terms of DP or XDP), and that also adds k random dummies \(r_1, r_2, \ldots , r_k\) to the output to obfuscate the input distribution (Algorithm 1). The probability that given an input x, the mechanism \( Q ^\mathsf{tp}_{k,\nu , A }\) outputs \(\bar{y}\) is given by \( Q ^\mathsf{tp}_{k,\nu , A }(x)[\bar{y}]\).
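Since Algorithm 1 is not reproduced here, the following is a minimal sketch of the tupling mechanism as described above: draw k random dummies from \(\nu \), perturb the actual input with \( A \), and output all \(k+1\) values. The random shuffle and the sampling helpers `sample_nu` and `sample_A` are our assumptions for illustration.

```python
import random

def tupling(x, k, sample_nu, sample_A):
    """A sketch of Q^tp_{k,nu,A}: draw k dummies from nu, perturb the actual
    input x with A, and output all k+1 values.  The shuffle reflects our
    assumption about the (unspecified) ordering of the output tuple;
    `sample_nu()` and `sample_A(x)` are hypothetical sampling helpers."""
    ys = [sample_nu() for _ in range(k)] + [sample_A(x)]
    random.shuffle(ys)
    return tuple(ys)
```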

6.2 Privacy of the Tupling Mechanism

Next we show that the tupling mechanism provides DistP w.r.t. the following class of distributions. Given \(\beta , \eta \in [0, 1]\) and \( A : \mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\), we define \(\varLambda _{\beta ,\eta , A }\) by:

$$\begin{aligned} \varLambda _{\beta ,\eta , A }= \bigl \{ \lambda \in \mathbb {D}\mathcal {X}\mid \Pr \bigl [ y {\mathop {\leftarrow }\limits ^{\mathrm {\$}}}\mathcal {Y}: { A }^{\#}(\lambda )[y] \le \beta \bigr ] \ge 1 - \eta \bigr \} {.} \end{aligned}$$

For instance, a distribution \(\lambda \) satisfying \(\max _{x} \lambda [x] \le \beta \) belongs to \(\varLambda _{\beta ,0, A }\).

Theorem 3

(DistP of the tupling mechanism). Let \(k\in \mathbb {N}^{>0}\), \(\nu \) be the uniform distribution over \(\mathcal {Y}\), \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\), and \(\beta , \eta \in [0, 1]\). Given \(\alpha \) with \(0< \alpha < \frac{k}{|\mathcal {Y}|}\), let \(\varepsilon _{\alpha } = \ln {\textstyle \frac{ k + (\alpha + \beta )\cdot |\mathcal {Y}| }{ k - \alpha \cdot |\mathcal {Y}| }}\) and \(\delta _{\alpha } = 2 e^{-\frac{2\alpha ^2}{k\beta ^2}} + \eta \). Then the \((k,\nu , A )\)-tupling mechanism provides \((\varepsilon _{\!\alpha }, \delta _{\!\alpha })\)-DistP w.r.t. \(\varLambda _{\beta ,\eta , A }^2\).

This claim states that just adding random dummies achieves DistP without any assumption on \( A \) (e.g., \( A \) does not have to provide DP). For a smaller range size \(|\mathcal {Y}|\) and a larger number k of dummies, we obtain a stronger DistP.
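The parameters \(\varepsilon _{\alpha }\) and \(\delta _{\alpha }\) of Theorem 3 are easy to evaluate numerically, as in the sketch below; the example parameter values in the comment are purely illustrative and not taken from the experiments.

```python
import math

def tupling_distp(k, size_Y, beta, eta, alpha):
    """Compute (eps_alpha, delta_alpha) of Theorem 3, assuming 0 < alpha < k/|Y|."""
    assert 0 < alpha < k / size_Y
    eps = math.log((k + (alpha + beta) * size_Y) / (k - alpha * size_Y))
    delta = 2 * math.exp(-2 * alpha ** 2 / (k * beta ** 2)) + eta
    return eps, delta

# e.g., tupling_distp(k=10, size_Y=276, beta=0.001, eta=0.0, alpha=0.03)
# (illustrative values only; |Y| = 276 matches the regions used in Sect. 7).
```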

Note that the distributions protected by \( Q ^\mathsf{tp}_{k,\nu , A }\) belong to the set \(\varLambda _{\beta ,\eta , A }\).

  • When \(\beta = 1\), \(\varLambda _{\beta ,\eta , A }\) is the set of all distributions (i.e., \(\varLambda _{1,\eta , A } = \mathbb {D}\mathcal {X}\)) while \(\varepsilon _{\alpha }\) and \(\delta _{\alpha }\) tend to be large.

  • For a smaller \(\beta \), the set \(\varLambda _{\beta ,\eta , A }\) is smaller while \(\varepsilon _{\alpha }\) and \(\delta _{\alpha }\) are smaller; that is, the mechanism provides a stronger DistP for a smaller set of distributions.

  • If \( A \) provides \(\varepsilon _{ A }\)-DP, then \(\varLambda _{\beta ,\eta , A }\) goes to \(\mathbb {D}\mathcal {X}\) as \(\varepsilon _{ A } \rightarrow 0\). More generally, \(\varLambda _{\beta ,\eta , A }\) is larger when the maximum output probability \(\max _{y} { A }^{\#}(\lambda )[y]\) is smaller.

In practice, even when \(\varepsilon _{ A }\) is relatively large, a small number of dummies enables us to provide a strong DistP, as shown by experiments in Sect. 7.

We note that Theorem 3 may not imply DP of the tupling mechanism, depending on \( A \). For example, suppose that \( A \) is the identity function. For small \(\varepsilon _{\alpha }\) and \(\delta _{\alpha }\), we have \(\beta \ll 1\), hence no point distribution \(\lambda \) (where \(\lambda [x] = 1\) for some x) belongs to \(\varLambda _{\beta ,\eta , A }\), namely, the tupling mechanism does not provide \((\varepsilon _{\alpha }, \delta _{\alpha })\)-DP.

6.3 Service Quality Loss and Cost of the Tupling Mechanism

When a mechanism outputs a value y closer to the original input x, the user obtains a larger utility, or equivalently, a smaller service quality loss \( d (x, y)\). For example, in an LBS (location based service), if a user located at x submits an obfuscated location y, the LBS provider returns the shops near y, hence the service quality loss can be expressed as the Euclidean distance \( d (x, y) {\mathop {=}\limits ^{\mathrm {def}}}\Vert x - y \Vert \).

Since each output of the tupling mechanism consists of \(k+1\) elements, the quality loss of submitting a tuple \(\bar{y} = (y_1, y_2, \ldots , y_{k+1})\) amounts to \( d (x, \bar{y}) \mathbin {:=} \min _{i} d (x, y_i)\). Then the expected quality loss of the mechanism is defined as follows.

Definition 11

(Expected quality loss of the tupling mechanism). For a \(\lambda \in \mathbb {D}\mathcal {X}\) and a metric \( d : \mathcal {X}\times \mathcal {Y}\rightarrow \mathbb {R}\), the expected quality loss of \( Q ^\mathsf{tp}_{k,\nu , A }\) is:

$$\begin{aligned} L\bigl ( Q ^\mathsf{tp}_{k,\nu , A }\bigr ) =\textstyle \!\sum _{x\in \mathcal {X}} \sum _{\bar{y}\in \mathcal {Y}^{k+1}} \lambda [x]\, Q ^\mathsf{tp}_{k,\nu , A }(x)[\bar{y}]\, \min _{i} d (x, y_i) {.} \end{aligned}$$

For a larger number k of random dummies, \(\min _{i} d (x, y_i)\) is smaller on average, hence \(L\bigl ( Q ^\mathsf{tp}_{k,\nu , A }\bigr )\) is also smaller. Furthermore, thanks to the distribution obfuscation by random dummies, we can instead reduce the perturbation noise added to the actual input x to obtain the same level of DistP. Therefore, the service quality is much higher than existing mechanisms, as shown in Sect. 7.
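The expected quality loss of Definition 11 can be estimated by Monte Carlo sampling, reusing the `tupling` sketch above; `sample_lambda`, `sample_nu`, `sample_A`, and `dist` are hypothetical helpers for the input distribution, the dummy distribution, the mechanism \( A \), and the metric \( d \).

```python
def expected_loss(n_samples, k, sample_lambda, sample_nu, sample_A, dist):
    """Monte-Carlo estimate of L(Q^tp) in Definition 11."""
    total = 0.0
    for _ in range(n_samples):
        x = sample_lambda()                          # draw x from lambda
        bar_y = tupling(x, k, sample_nu, sample_A)   # output tuple of k+1 values
        total += min(dist(x, y) for y in bar_y)      # quality loss of the tuple
    return total / n_samples
```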

6.4 Improving the Worst-Case Quality Loss

As the point obfuscation mechanism \( A \) used in the tupling mechanism \( Q ^\mathsf{tp}_{k,\nu , A }{}\), we define the restricted Laplace (RL) mechanism below. Intuitively, the \((\varepsilon _{\! A }, r)\)-RL mechanism adds \(\varepsilon _{\! A }\)-XDP Laplace noise only within a radius r of the original location x. This ensures that the worst-case quality loss of the tupling mechanism is bounded above by the radius r, whereas the standard Laplace mechanism reports a location y that is arbitrarily distant from x with a small probability.

Fig. 2. Empirical DistP and quality loss of \( Q ^\mathsf{tp}_{k,\nu , A }{}\) for the attribute male/female.

Definition 12

(RL mechanism). Let \(\mathcal {Y}_{x,r} = \{ y'\in \mathcal {Y}\,|\, d (x, y') \le r \}\). We define the \((\varepsilon _{\! A },r)\)-restricted Laplace (RL) mechanism as the mechanism \( A :\mathcal {X}\rightarrow \mathbb {D}\mathcal {Y}\) defined by: \( A (x)[y] = \frac{ e^{-\varepsilon _{\! A } d (x, y)} }{ \sum _{y'\in \mathcal {Y}_{x,r}} e^{-\varepsilon _{\! A } d (x, y')} }\) if \(y \in \mathcal {Y}_{x,r}\), and \( A (x)[y] = 0\) otherwise.
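A minimal sketch of the RL mechanism over a finite output set is given below, assuming a distance function `dist` (e.g., the Euclidean distance between region centers used in Sect. 7); it is an illustration of Definition 12, not the implementation used in the experiments.

```python
import math, random

def rl_mechanism(x, Y, dist, eps_A, r):
    """(eps_A, r)-restricted Laplace mechanism over a finite output set Y:
    sample y within radius r of x with probability proportional to
    exp(-eps_A * d(x, y)); locations outside the radius are never output."""
    Y_xr = [y for y in Y if dist(x, y) <= r]
    weights = [math.exp(-eps_A * dist(x, y)) for y in Y_xr]
    return random.choices(Y_xr, weights=weights, k=1)[0]
```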

Since the support of \( A \) is limited to \(\mathcal {Y}_{x,r}\), \( A \) provides better service quality but does not provide DP. Nevertheless, as shown in Theorem 3, \( Q ^\mathsf{tp}_{k,\nu , A }{}\) provides DistP, due to the dummies in \(\mathcal {Y}\setminus \mathcal {Y}_{x,r}\). This implies that DistP is a relaxation of DP that guarantees the privacy of attributes while achieving higher utility by weakening the DP protection of point data. In other words, DistP mechanisms are useful when users want both to keep high utility and to protect the attribute privacy more strongly than what a DP mechanism can guarantee (e.g., when users do not mind revealing their actual locations outside home, but want to hide from robbers the fact that they are outside home, as motivated in Sect. 1).

7 Application to Attribute Privacy in LBSs

In this section we apply local mechanisms to the protection of the attribute privacy in location based services (LBSs) where each user submits her own location x to an LBS provider to obtain information relevant to x (e.g., shops near x).

7.1 Experimental Setup

We perform experiments on location privacy in Manhattan by using the Foursquare dataset (Global-scale Check-in Dataset) [25]. We first divide Manhattan into \(11 \times 10\) regions with \(1.0\,\mathrm {km}\) intervals. To provide more useful information to users in crowded regions, we further re-divide these regions into 276 regions by recursively partitioning each crowded region into four until each resulting region has roughly similar population density. Let \(\mathcal {Y}\) be the set of those 276 regions, and \(\mathcal {X}\) be the set of the 228 regions inside the central \(10\,\mathrm {km} \times 9\,\mathrm {km}\) area in \(\mathcal {Y}\).

As an obfuscation mechanism Q, we use the tupling mechanism \( Q ^\mathsf{tp}_{k,\nu , A }{}\) that uses an \((\varepsilon _{\! A }, r)\)-RL mechanism \( A \) and the uniform distribution \(\nu \) over \(\mathcal {Y}\) to generate dummy locations. Note that \(\nu \) is close to the population density distribution over \(\mathcal {Y}\), because each region in \(\mathcal {Y}\) is constructed to have roughly similar population density. In the definitions of the RL mechanism and the quality loss, we use the Euclidean distance \(\Vert \cdot \Vert \) between the central points of the regions.

In the experiments, we measure the privacy of user attributes, formalized as DistP. For example, let us consider the attribute \({ male}/{ female}\). For each \(t\in \{ { male}, { female}\}\), let \(\lambda _{t}\in \mathbb {D}\mathcal {X}\) be the prior distribution of the location of the users having the attribute \(t\). Then, \(\lambda _{{ male}}\) (resp. \(\lambda _{{ female}}\)) represents an attacker’s belief on the location of the male (resp. female) users. We define these as the empirical distributions that the attacker can calculate from the above Foursquare dataset.

7.2 Evaluation of the Tupling Mechanism

Distribution Privacy. We demonstrate by experiments that the male users cannot be recognized as being male or female in terms of DistP. In Fig. 2, we show the experimental results on the DistP of the tupling mechanism \( Q ^\mathsf{tp}_{k,\nu , A }{}\). For a larger number k of dummy locations, we have a stronger DistP (Fig. 2a). For a larger \(\varepsilon _{\! A }\), the \((\varepsilon _{\! A }, 0.020)\)-RL mechanism \( A \) adds less noise, hence the tupling mechanism provides a weaker DistP (Fig. 2b). For a larger radius r, the RL mechanism \( A \) spreads the original distribution \(\lambda _{{ male}}\) more and thus provides a stronger DistP (Fig. 2c). We also show the relationship between k and DistP in eastern/western Tokyo and London, which have different levels of privacy (Fig. 3).

These results imply that if we add more dummies, we can decrease the noise level/radius of \( A \) to have better utility, while keeping the same level \(\varepsilon \) of DistP. Conversely, if \( A \) adds more noise, we can decrease the number k of dummies.

Expected Quality Loss. In Fig. 2d, we show the experimental results on the expected quality loss of the tupling mechanism. For a larger \(\varepsilon _{\! A }\), \( A \) adds less noise, hence the loss is smaller. We confirm that for more dummy data, the expected quality loss is smaller. Unlike the planar Laplace mechanism (\(\mathrm {PL}{}\)), \( A \) ensures that the worst quality loss is bounded above by the radius r. Furthermore, for a smaller radius r, the expected loss is also smaller as shown in Fig. 2d.

Fig. 3. k and DistP for male/female in different cities.

Fig. 4. DistP and ASR of the tupling (\(k = 10\), \(r=0.020\)).

Fig. 5. \((\varepsilon , 0.001)\)-DistP and expected loss for male/female and TM using \(k = 10\), \(r = 0.020\).

7.3 Appropriate Parameters

We define the attack success rate (ASR) as the rate at which the attacker correctly infers that a user has an attribute when she actually does. We use an inference algorithm based on the Bayes decision rule [26], which minimizes the identification error probability when the estimated posterior probability is accurate.

In Fig. 4, we show the relationships between DistP and ASR in Manhattan for the attribute home, meaning the users located at their home. In theory, \(\mathrm {ASR} = 0.5\) represents the attacker learns nothing about the attribute, whereas the empirical ASR in our experiments fluctuates around 0.5. This seems to be caused by the fact that the dataset and the number of locations are finite. From Fig. 4, we conclude that \(\varepsilon = 1\) is an appropriate parameter for \((\varepsilon , 0.001)\)-DistP to achieve \(\mathrm {ASR} = 0.5\) in our setting, and we confirm this for other attributes. However, we note that this is an empirical criterion possibly depending on our setting, and the choice of \(\varepsilon \) for DistP can be as controversial as that for DP and should also be investigated using approaches for DP (e.g., [27]) in future work.

7.4 Comparison of Obfuscation Mechanisms

We demonstrate that the tupling mechanism (TM) outperforms the popular mechanisms: the randomized response (RR), the planar Laplace (PL), and the planar Gaussian (PG). In Fig. 5, we compare these mechanisms with respect to the relationship between \(\varepsilon \)-DistP and the expected quality loss. Since PG always has some \(\delta \), it provides a weaker DistP than PL for the same quality loss. We also confirm that PL has a smaller loss than RR, since it adds noise proportionally to the distance.

Finally, we briefly discuss the computational cost of the tupling mechanism \( Q ^\mathsf{tp}_{k,\nu , A }{}\) compared to \(\mathrm {PL}{}\). In the implementation, for a larger domain \(\mathcal {X}\), \(\mathrm {PL}{}\) deals with a larger size \(|\mathcal {X}|\times |\mathcal {Y}|\) of the mechanism’s matrix, since it outputs each region with a non-zero probability. In contrast, since the RL mechanism \( A \) used in \( Q ^\mathsf{tp}_{k,\nu , A }{}\) maps each location x to a region within a radius r of x, the size of \( A \)’s matrix is \(|\mathcal {X}|\times |\mathcal {Y}_{x,r}|\), requiring much smaller memory space than \(\mathrm {PL}{}\).

Furthermore, the users of \(\mathrm {TM}{}\) can simply ignore the responses to dummy queries, whereas the users of \(\mathrm {PL}{}\) need to select relevant POIs (points of interest) within a large radius of x, which can be computationally costly when there are many POIs. Therefore, \(\mathrm {TM}{}\) is more suited to mobile environments than \(\mathrm {PL}{}\).

8 Related Work

Differential Privacy. Since the seminal work of Dwork [1] on DP, a number of its variants have been studied to provide different privacy guarantees; e.g., f-divergence privacy [28], d-privacy [16], Pufferfish privacy [20], local DP [2], and utility-optimized local DP [29]. All of these are intended to protect the input data rather than the input distributions. Note that distributional privacy [30] is different from DistP and does not aim at protecting the privacy of distributions.

To our knowledge, this is the first work that investigates the differential privacy of probability distributions lying behind the input. However, a few studies have proposed related notions. Jelasity et al. [31] propose distributional differential privacy w.r.t. parameters \(\theta \) and \(\theta '\) of two distributions, which aims at protecting the privacy of the distribution parameters but is defined in a Bayesian style (unlike DP and DistP) to satisfy that for any output sequence y, \(p(\theta | y) \le e^{\varepsilon } p(\theta ' | y)\). After a preliminary version of this paper appeared on arXiv [15], a notion generalizing DistP, called profile-based privacy, was proposed in [32].

Some studies are technically related to our work. Song et al. [21] propose the Wasserstein mechanism to provide Pufferfish privacy, which protects correlated inputs. Fernandes et al. [33] introduce Earth mover’s privacy, which is technically different from DistP in that their mechanism obfuscates a vector (a bag-of-words) instead of a distribution, and perturbs each element of the vector. Sei et al. [34] propose a variant of the randomized response to protect individual data and provide high utility of databases. However, we emphasize again that our work differs from these studies in that we aim at protecting input distributions.

Location Privacy. Location privacy has been widely studied in the literature, and a survey can be found in [35]. A number of location obfuscation methods have been proposed so far, and they can be broadly divided into the following four types: perturbation (adding noise) [3, 5, 36], location generalization (merging regions) [37, 38], location hiding (deleting) [37, 39], and adding dummy locations [40,41,42]. Location obfuscation based on DP (or its variants) has also been widely studied, and such methods can be categorized into those in the centralized model [43, 44] and those in the local model [3, 5]. However, these methods aim at protecting locations, and not at protecting users’ attributes (e.g., age, gender) or activities (e.g., working, shopping) in a DP manner. Despite the fact that users’ attributes and activities can be inferred from their locations [6,7,8], to our knowledge, no studies have proposed obfuscation mechanisms that provide a rigorous DP guarantee for such attributes and activities.

9 Conclusion

We have proposed a formal model for the privacy of probability distributions and introduced the notion of distribution privacy (DistP). Then we have shown that existing local mechanisms deteriorate the utility by adding too much noise to provide DistP. To improve the tradeoff between DistP and utility, we have introduced the tupling mechanism and applied it to the protection of user attributes in LBSs. Then we have demonstrated that the tupling mechanism outperforms popular local mechanisms in terms of attribute obfuscation and service quality.