2020 IEEE Symposium on Security and Privacy
Private resource allocators and their applications
Sebastian Angel
University of Pennsylvania
Sampath Kannan
University of Pennsylvania
Zachary Ratliff
Raytheon BBN Technologies
Abstract—This paper introduces a new cryptographic primitive called a private resource allocator (PRA) that can be used to allocate resources (e.g., network bandwidth, CPUs) to a set of clients without revealing to the clients whether any other clients received resources. We give several constructions of PRAs that provide guarantees ranging from information-theoretic to differential privacy. PRAs are useful in preventing a new class of attacks that we call allocation-based side-channel attacks. These attacks can be used, for example, to break the privacy guarantees of anonymous messaging systems that were designed specifically to defend against side-channel and traffic analysis attacks. Our implementation of PRAs in Alpenhorn, which is a recent anonymous messaging system, shows that PRAs increase the network resources required to start a conversation by up to 16× (can be made as low as 4× in some cases), but add no overhead once the conversation has been established.

© 2020, Sebastian Angel. Under license to IEEE.
DOI 10.1109/SP40000.2020.00065

I. INTRODUCTION

Building systems that avoid unintentional information leakage is challenging since every action or operation—innocuous as it may be—can reveal sensitive information. This is especially true in the wake of numerous side-channel attacks that exploit unexpected properties of a system's design, implementation, or hardware. These attacks can be based on analog signals such as the machine's power consumption [50], sound produced [36], photonic emissions from switching transistors [72], temperature [43], and electromagnetic radiation emanated [4, 82], that arise as a result of the system performing some sensitive operation. Or they may be digital and monitor the timing of operations [51], memory access patterns [38], the contention arising from shared resources (e.g., caches [47], execution ports in simultaneous multithreading [19]), and the variability of network traffic [70].

In the above cases, information is exposed as a result of a process in the system consuming a resource (e.g., sending a network packet, populating the cache, executing a conditional branch instruction). We can think of these side channels as consumption-based. In this paper, we are concerned with side channels that exist during the allocation of the resource to a process, and that are observable regardless of whether the process ultimately consumes the resource. As a result, these allocation-based side channels can sometimes be exploited by attackers in systems that have been explicitly designed to avoid consumption-based side channels (systems that pad all requests, regularize network traffic and memory accesses, have constant-time implementations, clear caches after every operation, etc.). To prevent allocation-based side channels we propose a new primitive called a private resource allocator (PRA) that guarantees that the mechanism by which the system allocates resources to processes leaks no information.

At a high level, allocation-based side channels exist because a system's resource allocator—which includes cluster managers [1], network rate limiters [58], storage controllers [76], data center resource managers [7], flow coordinators [67], lock managers [42], etc.—can leak information about how many (and which) other processes are requesting service through the allocation itself. As a simple example, a process that receives only a fraction of the resources available from an allocator that is work conserving (i.e., that allocates as many resources as possible) can infer that other processes must have requested the same resources concurrently. These observations can be made even if the other processes do not use their allocated resources at all.

While the information disclosed by allocations might seem harmless at first glance, these allocation-based side channels can be used as building blocks for more serious attacks. As a motivating example, we show that allocation-based side channels can be combined with traffic analysis attacks [5, 26, 27, 48, 49, 59, 66, 70, 77] to violate the guarantees of existing bidirectional anonymous messaging systems (often called metadata-private messengers or MPMs) [6, 10, 52, 53, 55, 56, 78, 81]. This is significant because MPMs are designed precisely to avoid side-channel attacks. In particular, Angel et al. [9] show that these systems are secure only if none of the contacts with whom a user communicates are compromised by an adversary; otherwise, compromised contacts can learn information about the user's other conversations. We expand on Angel et al.'s observation in Section II, and show that it is an instance of an allocation-based side-channel attack.

To prevent allocation-based side channels, we introduce private variants of resource allocators (PRAs) that can assign resources to processes without leaking to any processes which or how many other processes received any units of the resource. We formalize the properties of PRAs (§III), and propose several constructions that guarantee information-theoretic, computational, and differential privacy under different settings (§IV-A–IV-C). We also discuss how privacy interacts with classic properties of resource allocation. For example, we show that privacy implies population monotonicity (§V). Finally, we prove an impossibility result (§III-B): there does not exist a PRA when the number of concurrent requesting processes is not bounded ahead of time. As a result, PRAs must assume a polynomial bound on the number of requesting processes (and this bound might leak).

To showcase the benefits and costs of using PRAs, we integrate our constructions into Alpenhorn [57], which is a system that manages conversations in MPMs. The result is the first MPM system that is secure in the presence of compromised friends.
Interestingly, our implementation efforts reveal that naively introducing PRAs into MPMs would cripple these systems' functionality. For example, it would force clients to abruptly end ongoing conversations, and would prevent honest clients from ever starting conversations. To mitigate these issues, we propose several techniques tailored to MPMs (§VI).

Our evaluation of Alpenhorn shows that PRAs lead to conversations taking 16× longer to get started (or alternatively consuming 16× more network resources), though this number can be reduced to 4× by prioritizing certain users. However, once conversations have started, PRAs incur no additional overhead. While we admit that such delayed start (or bandwidth increase) further hinders the usability of MPMs, compromised friends are enough of a real threat to justify our proposal.

In summary, the contributions of this work are:
• The notion of Private Resource Allocators (PRAs) that assign resources to processes without leaking how many or to which processes resources are allocated.
• An impossibility theorem that precisely captures under what circumstances privacy cannot be achieved.
• Several PRA constructions under varying assumptions.
• A study of how privacy impacts other allocation properties.
• The integration of PRAs into an MPM to avoid leaking information to compromised friends, and the corresponding experimental evaluation.

Finally, we believe that PRAs have applications beyond MPMs, and open up exciting theoretical and practical questions (§IX). We hope that the framework we present in the following sections serves as a good basis.

II. CASE STUDY: METADATA-PRIVATE MESSENGERS

In the past few years, there has been a flurry of work on messaging systems that hide not just the content of messages but also the metadata that is associated with those messages [6, 8, 10, 24, 53, 55, 56, 78, 81, 83]. These systems guarantee some variant of relationship (or third-party) unobservability [68], in which all information (including the sender, recipient, time of day, frequency of communication, etc.) is kept hidden from anyone not directly involved in the communication. A key driver for these systems is the observation that metadata is itself sensitive and can be used—and in fact has been used [22, 71]—to infer the content or at least the context of conversations for a variety of purposes [73]. For example, a service provider could infer that a user has some health condition if the user often communicates with health professionals. Other inferable information typically considered sensitive includes religion, race, sexual orientation, and employment status [61].

In these metadata-private messengers (MPMs), a pair of users are considered friends only if they have a shared secret. Users can determine which of their acquaintances are part of the system using a contact discovery protocol [16, 20, 60], and can then exchange the secret needed to become friends with these acquaintances through an out-of-band channel (e.g., in person at a conference or coffee shop), or with an in-band add-friend protocol [57]. A pair of friends can then initiate a session. This is done with a dialing protocol [6, 52, 57] whereby one user "cold calls" another user and notifies them of their intention to start a conversation. The analogous situation in the non-private setting is a dialing call on a VoIP or video chat service like Skype. Creating a session boils down to agreeing on a time or round to start the conversation, and generating a key that will be used to encrypt all messages in the session (derived from the shared secret and the chosen round).

Once a session between two friends has been established, the participants can exchange messages using a conversation protocol (this is the protocol that actually differentiates most MPM systems). In all proposed conversation protocols, communication occurs in discrete rounds—which is why part of creating a session involves identifying the round on which to start the conversation—during which a user sends and receives up to k messages. One can think of each of these k messages as being placed in a different channel. To guarantee no metadata leaks, users are forced to send and receive a message on each channel in every round, even when the user is idle and has nothing to send or receive (otherwise observers could determine when a user is not communicating). We summarize these protocols in Figure 1.

Protocol                       | Objective
Discover friends               | Learn identifier or public key
Add friend to contact list     | Establish a shared secret
Dial a friend in contact list  | Agree on session key and round r
Converse with friend           | Send message starting on round r

FIG. 1—MPM systems consist of four protocols: friend discovery, add-friend, dialing, and conversation. Users can only converse once they are in an active session (agree on a session key and round).

The above highlights a tension between performance and network costs experienced by all MPM systems. Longer rounds increase the delay between two consecutive messages but reduce the network overhead when a user is idle (due to fewer dummy messages). Having more channels improves throughput (more concurrent conversations per round or more messages per conversation) but at the cost of higher network overhead when the user is idle. Given that users are idle a large fraction of the time, most MPMs choose long round durations (tens of seconds) and a small number of channels (typically k = 1).

While these tradeoffs have long been understood, the impact of the number of communication channels on privacy has received less attention. We discuss this next.

A. Channel allocation can leak information

Prior works on MPMs have shown that the proposed contact discovery, add-friend, dialing, and conversation protocols are secure and leak little information (negligible or bounded) on their own, but surprisingly, none had carefully looked at their composition. Indeed, recent work by Angel et al. [9] shows that existing dialing and communication protocols do not actually compose in the presence of compromised friends. The reason is that the number of communication channels (k) is usually smaller than the number of friends that could dial the user at any one time. As a result, when a user is dialed by n friends asking to start a conversation at the same time, the user must determine an allocation of the n friends to the k channels.
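To make the pigeonhole situation concrete, here is a minimal sketch (not from the paper) of how current MPMs assign n simultaneous dialers to k channels in first-come-first-served order; the function name and process labels are our own illustration.

```python
# Illustrative sketch (not from the paper): a user with k channels
# receives n simultaneous dial requests and, like existing MPMs,
# accepts the first k of them. Callers left out observe the analog of
# a busy signal, from which they can infer the callee is occupied.

def fifo_allocate(dialers, k):
    """Assign the first k dialers to channels; the rest are turned away."""
    accepted = dialers[:k]
    busy = dialers[k:]
    return accepted, busy

# n = 3 friends dial a user who has k = 1 channel (as in Skype).
accepted, busy = fifo_allocate(["alice", "bob", "carol"], k=1)
print(accepted)  # ['alice']
print(busy)      # ['bob', 'carol'] -- each learns the callee is busy
```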
As one would expect, when n > k, not all of the n dialing requests can be allocated onto the k available channels since each channel can only support one conversation (for example, a user in Skype can only accept one incoming call at a time since k = 1). If this allocation is not done carefully—defining what "carefully" means formally is the subject of Section III—a user's friends can learn information through dialing. In particular, a caller who dials and receives a busy signal or no response at all for a round r can infer that the callee has agreed to chat with other users during round r.¹ For the more general case of k > 1, an attacker controlling k callers can dial the user and observe whether all calls are answered or not; an attacker may even conduct a binary search over multiple rounds to learn the exact number of ongoing conversations.

    ¹A lack of response does not always mean that a user is busy with others; the user could be asleep. However, existing MPMs accept requests automatically. Even if the user were involved, information would still leak, and predicating correctness on behavior that is hard to characterize is undesirable.

The saving grace is that the information that leaks is observed only by a user's dialing friends, as opposed to all users in the system or third-party observers (since friendship is a precondition for dialing). However, friends' accounts can be compromised by an adversary, and users could be tricked into befriending malicious parties. In fact, not only is this possible, it is actually a common occurrence: prior surveys of user behavior on online social networks show that users are very willing to accept friend requests from strangers [69]. Furthermore, given recent massive leaks of personal data—3 billion accounts by Yahoo in 2013 [54]; 143 million accounts by Equifax in 2017 [34]; 87 million users by Facebook in 2018 [74] and an additional 549 million records in 2019 [80]—there is significant material for attackers to conduct social engineering and other attacks. Worse yet, many of these attacks can easily be automated [15].

B. Traffic analysis makes things worse

The previous section describes how an attacker, via compromised friends, can learn whether a user is busy or not in some round r (or gain some confidence on this) by conducting an allocation-based side-channel attack. While such leakage is minor on its own, it can be composed with traffic analysis techniques such as intersection [70] and disclosure [5] attacks (and their statistical variants [25]).

As a very simple example, imagine an adversary that can compromise the friends of multiple users and can use those compromised friends to determine which users are (likely) active in a given round r. The adversary can then reduce the set of possible sender-recipient pairs by ignoring all the idle users (more sophisticated observations can also be made by targeting particular users). The adversary can then repeat the attack for other rounds r′, r″, etc. With each additional round, the adversary can construct intersections of active users and shrink the set of possible sender-recipient pairs under the assumption that conversations span multiple rounds.

In short, the described allocation-based side-channel attack makes existing MPM systems vulnerable to traffic analysis. In the next section we formally model the leakage of information that results from allocating dialing friends to a limited number of channels. In Sections IV-A–IV-C we then give several constructions of allocators that can be used by MPM systems to establish sessions without leaking information.

III. PRIVATE RESOURCE ALLOCATORS (PRAs)

The allocation-based side-channel attack described in the prior section essentially follows a pigeonhole-type argument whereby there are more friends than there are channels. The same idea applies to other situations. For example, whenever there is a limited number of CPU cores and many threads, the way in which threads are scheduled onto cores leaks information to the threads. Specifically, a thread that was not scheduled could infer that other threads were, even if the scheduled threads perform no operations and consume no resources. In this section we formalize this problem more generally and describe desirable security definitions.

We begin with the notion of a private resource allocator, which is an algorithm that assigns a limited number of resources to a set of processes that wish to use those resources. Privacy means that the outcome of the allocator does not reveal to any process whether there were other processes concurrently requesting the same resource. Note that private allocators are concerned only with the information that leaks from the allocation itself; information that leaks from the use of the resource is an orthogonal concern.

In more detail, a resource allocator RA is an algorithm that takes as input a resource of capacity k, and a set of processes P from a universe of processes M (P ⊆ M). RA outputs the set of processes U ⊆ P that should be given a unit of the resource, such that |U| ≤ k. There are two desirable properties for an RA, informally given below.

• Privacy: it is hard for an adversary controlling a set of processes Pmal ⊆ P to determine whether there are other processes (i.e., Pmal = P or Pmal ⊂ P) from observing the allocations of processes in Pmal.
• Liveness: for all sets of processes P, occasionally at least one process in P receives a unit of the resource.

The liveness property is the weakest definition of progress needed for RAs to be useful, and helps to rule out an RA that achieves privacy by never allocating resources.

A. Formal definition

Notation. We use poly(λ) and negl(λ) to mean a polynomial and negligible function² of λ's unary representation (1^λ).

    ²A function f : N → R is negligible if for all positive polynomials poly, there exists an integer c such that for all integers x greater than c, |f(x)| < 1/poly(x).
We use βx to mean a poly(λ) bound on variable x. Upper case letters denote sets of processes. Figure 2 summarizes all terms.

symbol   | description
C and A  | Challenger and adversary in the security game, resp.
b and b′ | Challenger's coin flip and adversary's guess, resp.
k        | Amount of available resource
M        | Universe of processes
P        | Processes requesting service concurrently (⊆ M)
Phon     | Honest processes in P (not controlled by A)
Pmal     | Malicious processes in P (controlled by A)
U        | Allocation (⊆ P) of size at most k
λ        | Security parameter
βx       | poly(λ) bound on variable x

FIG. 2—Summary of terms used in the security game, lemmas, and proofs, and their corresponding meaning.

Security game. We define privacy with a game played between an adversary A and a challenger C. The game is parameterized by a resource allocator RA and a security parameter λ. RA takes as input a set of processes P from the universe of all processes M, a resource capacity k that is poly(λ), and λ. RA outputs a set of processes U ⊆ P, such that |U| ≤ k.

1) A is given oracle access to RA, and can issue an arbitrary number of queries to RA with arbitrary inputs P and k. For each query, A can observe the result U ← RA(P, k, λ).
2) A picks a positive integer k and two disjoint sets of processes Phon, Pmal ⊆ M and sends them to C. Here Phon represents the set of processes requesting a resource that are honest and are not compromised by the adversary. Pmal represents the set of processes requesting a resource that are compromised by the adversary.
3) C samples a random bit b uniformly in {0, 1}.
4) C sets P ← Pmal if b = 0 and P ← Pmal ∪ Phon if b = 1.
5) C calls RA(P, k, λ) to obtain U ⊆ P where |U| ≤ k.
6) C returns Umal = U ∩ Pmal to A.
7) A outputs its guess b′, and wins the game if b = b′.

In summary, the adversary's goal is to determine if the challenger requested resources for the honest processes or not.

Definition 1 (Information-theoretic privacy). An allocator RA is IT-private if in the security game, for all algorithms A, Pr[b = b′] = 1/2, where the probability is over the random coins of C and RA.

Definition 2 (Computational privacy). An allocator RA is C-private if in the security game given parameter λ, for all probabilistic polynomial-time algorithms A, the advantage of A is negligible: |Pr[b = b′] − 1/2| ≤ negl(λ), where the probability is over the random coins of C and RA.

Definition 3 (Liveness). An allocator RA guarantees liveness if given parameter λ, any non-empty set of processes P, and positive resource capacity k, Pr[RA(P, k, λ) ≠ ∅] ≥ 1/poly(λ).

The proposed liveness definition (Def. 3) is very weak. It simply states that the allocator must occasionally output at least one process. Notably, it says nothing about processes being allocated resources with equal likelihood, or that every process is eventually serviced (it allows starvation). Nevertheless, this weak definition is sufficient to separate trivial from non-trivial allocators; we discuss several other properties such as fairness and resource monotonicity in Section V. To compare the efficiency of non-trivial allocators, however, we need a stronger notion that we call the allocator's utilization.

Definition 4 (Utilization). The utilization of a resource allocator RA is the fraction of requests serviced by RA compared to the number of requests that would have been serviced by a non-private allocator. Formally, given a set of processes P, capacity k, and parameter λ, RA's utilization is E(U)/min(|P|, k), where E(U) is the expected number of output processes of RA(P, k, λ).

B. Prior allocators fail

Before describing our constructions we discuss why straightforward resource allocators fail to achieve privacy.

FIFO allocator. A FIFO allocator simply allocates resources to the first k processes. This is the type of allocator currently used by MPM systems to assign dialing friends to channels (§II-A), and is also commonly found in cluster job schedulers (e.g., Spark [84]). This allocator provides no privacy. To see why, suppose that both Phon and Pmal are ordered sets, where the order stems from the identity of the process. The adversary can interleave the identities of processes in Phon and Pmal so that the FIFO allocator's output is k processes in Pmal when b = 0, and k/2 processes in Pmal when b = 1.

Uniform allocator. Another common allocator is one that picks k of the processes at random. At first glance this might appear to provide privacy since processes are being chosen uniformly. Nevertheless, this allocator leaks a lot of information. In particular, when b = 0 the adversary expects k of its processes to be allocated (since P = Pmal), whereas when b = 1, fewer than k of the malicious processes are likely to be allocated. More formally, let X be the random variable describing the cardinality of the set returned to A, namely |U ∩ Pmal|. Suppose |Pmal| = |Phon| = k. Then Pr[X < k | b = 0] = 0 and Pr[X < k | b = 1] = 1 − (k! · k!)/(2k)! ≥ 1/2. As a result, A can distinguish between b = 0 and b = 1 with non-negligible advantage by simply counting the elements in U ∩ Pmal.

Uniform allocator with variable-sized output. One of the issues with the prior allocator is that the size of the output reveals too much. We could consider a simple fix that selects an output size s uniformly from the range [0, k], and allocates s processes at random. But this is also not secure.

Let |Pmal| = |Phon| = k, and let X be the random variable representing the cardinality of the set returned to A. We show that the probability that X = k is lower when b = 1. Observe that Pr[X = k | b = 0] = 1/(k+1), whereas Pr[X = k | b = 1] = (k! · k!)/((k+1)(2k)!) < 1/(k+1) for all k ≥ 1. Furthermore, when k ≥ 1, (k! · k!)/(2k)! ≤ 1/2. Therefore, 1/(k+1) − (k! · k!)/((k+1)(2k)!) ≥ (1/(k+1)) · [1 − 1/2] = 1/(2(k+1)), which is non-negligible. As a result, A can distinguish b = 0 and b = 1 with non-negligible advantage.
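The counting attack on the uniform allocator can be simulated directly in the security game. The sketch below is our own illustration (function names and process labels are assumptions, not from the paper): the adversary picks |Pmal| = |Phon| = k and guesses b = 1 whenever fewer than k of its own processes are allocated.

```python
# Illustrative sketch (not from the paper): the counting attack on the
# uniform allocator, phrased in terms of the security game above.
import random

def uniform_allocator(P, k):
    """Non-private allocator: pick min(k, |P|) processes at random."""
    return set(random.sample(sorted(P), min(k, len(P))))

def security_game(allocator, P_hon, P_mal, k):
    """One round of the game; returns (b, Umal) as in steps 3-6."""
    b = random.randrange(2)
    P = P_mal | (P_hon if b == 1 else set())
    U = allocator(P, k)
    return b, U & P_mal  # the adversary observes only its own processes

def adversary_guess(U_mal, k):
    return 0 if len(U_mal) == k else 1

k = 4
P_hon = {f"hon{i}" for i in range(k)}
P_mal = {f"mal{i}" for i in range(k)}
trials = 20000
wins = 0
for _ in range(trials):
    b, U_mal = security_game(uniform_allocator, P_hon, P_mal, k)
    wins += (adversary_guess(U_mal, k) == b)
print(wins / trials)  # close to 1 - 1/(2*C(2k,k)); about 0.99 for k = 4
```

When b = 0 the guess is always right; when b = 1 it fails only if all k sampled processes happen to be malicious, which matches the 1/(2k choose k) term in the analysis above.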
Allocator from a secret distribution. The drawback of the prior allocator is that the adversary knows the expected distribution under b = 0 and b = 1 for its choice of Phon, Pmal, and k. Suppose instead that the allocator has access to a secret distribution not known to the adversary. The allocator then uses the approach above (allocator with variable-sized output) with the secret distribution instead of a uniform distribution. This is also not secure; the proof is in Appendix A.

The intuition for the above result is that the perturbation introduced by steps 4 and 6 of the security game cannot be masked without additional assumptions. To formalize this, we present the following impossibility result, which states that without a bound on the number of processes, an allocator cannot simultaneously achieve privacy and our weak definition of liveness. We focus on IT-privacy since C-privacy considers a PPT adversary; by definition, the size of the sets of processes that such an adversary can create is bounded by a polynomial.

Theorem 1 (Impossibility result). There does not exist a resource allocator RA that achieves IT-privacy (Def. 1) and Liveness (Def. 3) when k is poly(λ) and |P| is not poly(λ).

The proof is given in Appendix B.

IV. ALLOCATOR CONSTRUCTIONS

Given the impossibility result in the prior section, we propose several allocators that guarantee liveness and some variant of privacy under different assumptions. As a bare minimum, all constructions assume a poly(λ) bound, βhon, on |Phon|. In the context of MPM systems, this basically means that a user never receives more than a polynomial number of dial requests by honest users asking to start a conversation in the same round—which is an assumption that is easy to satisfy in practice. We note that none of our allocators can hide βhon from an adversary, so it is best thought of as a public parameter. We summarize the properties of our constructions in Figure 3.

allocator     | leakage | utilization            | assumptions
SRA (§IV-A)   | None    | |P|/βM                 | • setup phase  • |M| ≤ βM  • p ∈ M identifiable
RRA (§IV-B)   | None    | |P|/βP                 | • |P| ≤ βP
DPRA (§IV-C)  | 1/g(λ)  | |P|/(|P| + h(λ)·βhon)  | • |Phon| ≤ βhon

FIG. 3—Comparison of privacy guarantees, utilization, and assumptions of different PRAs. DPRA makes the weakest assumptions since Phon ⊆ P ⊆ M and is the only one that tolerates an arbitrary number of malicious processes. g and h are polynomial functions that control the tradeoff between utilization and privacy (§IV-C).

A. Slot-based resource allocator

We now discuss a simple slot-based resource allocator. It guarantees information-theoretic privacy and liveness under the assumption that the size of the universe of processes (|M|) has a bound βM that is poly(λ). The key idea is to map each process p ∈ M to a unique "allocation slot" (so there are at most βM total slots), and grant resources to processes only if they request them during their allocated slots. The chosen slots are determined by a random λ-bit integer r.

Slot-based resource allocator SRA:
• Pre-condition (setup): ∀p ∈ M, slot(p) ∈ [0, |M|)
• Inputs: P, k, λ
• r ←R [0, 2^λ)
• U ← ∅
• ∀p ∈ P, i ∈ [0, k): if slot(p) ≡ r + i mod |M|, add p to U
• Output: U

Lemma 1. SRA guarantees IT-privacy (Def. 1).

Proof. Observe that a process p ∈ P is added to U when r ≤ slot(p) < (r + k) mod |M|, which occurs independently of b. In particular, if we let Ep be the event that a process p ∈ P is added to U, then Pr[Ep | b = 0] = Pr[Ep | b = 1] = k/|M|. Since an adversary cannot observe differences in Pr[Ep] when P = Pmal versus P = Pmal ∪ Phon, privacy is preserved.

Lemma 2. SRA guarantees Liveness (Def. 3) if |M| ≤ βM.

Proof. SRA outputs at least one process when there is a p ∈ P such that r ≤ slot(p) < (r + k) mod |M|. For a given r, this occurs with probability ≥ k/|M|.

SRA achieves our desired goals. It guarantees privacy and liveness, and achieves a utilization (Def. 4) of |P|/|M| whenever k ≤ |P|. But it also has several limitations. First, it assumes that the cardinality of the universe of processes (|M|) is known in advance, and that it can be bounded by βM. Second, it assumes a preprocessing phase in which each process in M is assigned a slot. Finally, it assumes that each individual process is identifiable since SRA must be able to compute slot(p) for every process p ∈ P.

Unfortunately, these limitations are problematic for many applications. For instance, consider an MPM system (§II). M represents the set of friends for a user (not just the ones dialing), so it could be large. Furthermore, users cannot add new friends without leaking information since this would change M (and therefore the periodicity of allocations), which the adversary can detect. As a result, users must bound the maximum set of friends that they will ever have (βM), use this bound in the allocator (instead of |M|), and achieve a utilization of |P|/βM.

B. Randomized resource allocator

In this section we show how to relax most of the assumptions that SRA makes while achieving better utilization. In particular, we construct a randomized resource allocator RRA that guarantees privacy and liveness under the assumption that there is a poly(λ) bound, βP, on the number of simultaneous processes requesting a resource (|P|). RRA does not need a setup phase, and does not require uniquely identifying processes in M. More importantly, RRA achieves both requirements even when the universe of processes (M) is unbounded. These relaxations are crucial since they make RRA applicable to situations in which processes are created dynamically.
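As a point of reference for these relaxations, here is a minimal executable sketch of the preceding slot-based SRA (our own illustration, not the paper's code; the slot map, universe size, and λ default are assumptions).

```python
# Illustrative sketch of the slot-based allocator (SRA) from Section IV-A.
# slot is an assumed pre-assigned map from each process in M to a unique
# slot in [0, |M|).
import secrets

def sra(P, k, slot, M_size, lam=128):
    """Grant the resource to processes whose slot falls in the k-slot
    window starting at a fresh random offset r (reduced mod |M|)."""
    r = secrets.randbits(lam)  # random lambda-bit integer
    window = {(r + i) % M_size for i in range(k)}
    return {p for p in P if slot[p] in window}

# Universe of 8 processes, each mapped to a unique slot.
slot = {f"p{i}": i for i in range(8)}
U = sra({"p0", "p3", "p5"}, k=2, slot=slot, M_size=8)
assert len(U) <= 2  # each process is allocated w.p. k/|M|, independent of P
```

Because the window depends only on r, whether p0 is allocated is independent of which other processes requested service, matching the proof of Lemma 1.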
At a high level, RRA works by padding the set of processes (P) with enough dummy processes to reach the upper bound (βP). RRA then randomly permutes the padded set and outputs the first k entries (removing any dummies from the allocation). If the permutation is truly random, this allocator guarantees information-theoretic privacy since P is always padded to βP elements regardless of the challenger's coin flip (b). However, it requires a source of more than βP random bits, which might be too much in some scenarios. One way to address this is to generate the random permutations on the fly [18], which requires only O(k log(βP)) random bits. Alternatively, we can simply assume that the adversary is computationally bounded and allow a negligible leakage of information by making the permutation pseudorandom instead.

Randomized resource allocator RRA:
• Inputs: P, k, λ
• Q ← set of dummy processes of size βP − |P|
• π ← random or pseudorandom permutation of P ∪ Q
• U ← first k entries in π
• Output: U ∩ P

Lemma 3. RRA guarantees IT-privacy (Def. 1) if |P| ≤ βP and the permutation is truly random.

Proof. Let Ep be the event that a process p is added to U. Then, for all p ∈ P, Pr[Ep] = k/βP. Since Pr[Ep] remains constant for all sets of processes P, an adversary has no advantage to distinguish between P = Pmal and P = Pmal ∪ Phon.

Lemma 4. RRA guarantees C-privacy (Def. 2) against all probabilistic polynomial-time (PPT) adversaries if |P| ≤ βP.

Proof. We use a simple hybrid argument. Consider the variant of RRA that uses a random permutation instead of a PRP. Lemma 3 shows the adversary has no advantage to distinguish between b = 0 and b = 1. A PPT adversary distinguishes between the above RRA variant and one that uses a PRP (with security parameter λ) with negl(λ) advantage.

Lemma 5. RRA guarantees Liveness (Def. 3) if |P| ≤ βP.

Proof. RRA outputs at least one process if there exists a p ∈ P in the first k elements of π. This follows a hypergeometric

RRA achieves privacy, liveness, and a utilization (Def. 4) of |P|/βP when k ≤ |P|, which is a factor of βM/βP improvement over SRA. However, it still requires a bound on the number of concurrent processes (|P|). In the context of an MPM system, this requirement essentially asks the user to pick a bound (e.g., βP = 20), and assume that the adversary will not compromise more than, say, 18 of their friends, while simultaneously receiving fewer than 3 calls from honest friends. Otherwise, the adversary could simply flood the user with malicious calls and infer, via an allocation-based side channel, that the user is talking to at least one honest friend (§II). Although one could come up with values of βP that are large enough to hold in practice (e.g., users in social media have on average hundreds of friends [79], so βP = 100 might suffice), this only works in applications where the adversary cannot commandeer an arbitrary number of processes via a sybil attack [30]. In such cases, there might not be a useful bound (e.g., βP = 2^80 certainly holds in practice, but results in essentially 0 utilization).

The above limitation is fundamental and follows from our impossibility result. In the next section, however, we show that if one can tolerate a weaker privacy guarantee, there exist allocators that require only a poly(λ) bound, βhon, on |Phon|. The number of malicious processes (|Pmal|), and therefore the number of total concurrent processes (|P|), can be unbounded.

C. Differentially private resource allocator

In this section we relax the privacy guarantees of PRAs and require only that the leakage be at most inverse polynomial in λ, rather than negligible. We define this guarantee in terms of (ε, δ)-differential privacy [31].

Definition 5 (Differential privacy). An allocator RA is (ε, δ)-differentially private [31] if in the security game of Section III, given parameter λ, for all algorithms A and for all Umal:

    Pr[C(b) returns Umal] ≤ e^ε · Pr[C(b̄) returns Umal] + δ,

where Umal is the set of processes returned from C to A in Step 6 of the security game, and C(b) means an instance of C where the random bit is b; similarly for C(b̄) where b̄ = 1 − b. The probability is over the random coins of C and RA.
distribution since we sample k out of βP processes without
replacement, and processes in P are considered a “success”.
The probability of at least one success is therefore:
|P| |Q|
k
i
≥ 1/βP
βPk−i
i=1
which is non-negligible.
k
Pr[C(b) returns Umal ] ≤ eε · Pr[C(b̄) returns Umal ] + δ
We show that if there is a poly(λ) bound, βhon , for the
number of honest processes (|Phon |), then there is an RA
that achieves (ε, δ)-differential privacy and Liveness (Def. 3).
Before introducing our construction, we discuss a subtle
property of allocators that we have ignored thus far: symmetry.
Definition 6 (Symmetry). An allocator is symmetric if it does
not take into account the features, identities, or ordering of
processes when allocating resources. This is an adaptation of
symmetry in games [21, 35], in which the payoff of a player
depends only on the strategy it uses, and not on the player’s
identity. Concretely, given an ordered set of processes P where
the only difference between processes is their position in P, RA
is symmetric if Pr[RA(P, k, λ) = p] = Pr[RA(π(P), k, λ) = p],
for all p and all permutations π. This argument extends to
other identifying features (process id, permissions, time that a
process is created, how many times a process has retried, etc.).
For example, the (non-private) uniform allocator of Section III-B and the private RRA (§IV-B) are symmetric: they
allocate resources without inspecting processes. On the other
hand, the (non-private) FIFO allocator of Section III-B and
the private SRA (§IV-A) are not symmetric; FIFO takes into
account the ordering of processes, and SRA requires computing
the function slot on each process. While symmetry places some
limits on what an allocator can do, in Section V-A we show
that many features (e.g., heterogeneous demands, priorities)
can still be implemented.
Construction. Recall from Section III that RA receives one of
two requests from C depending on the bit b that C samples. The
request is either Pmal or Pmal ∪ Phon . We can think of these sets
as two neighboring databases. Our concern is that the processes
in Pmal that are allocated the resource might convey too much
information about which of these two databases was given to
RA, and in turn reveal b. To characterize this leakage, we derive
the sensitivity of an RA that allocates resources uniformly.
Our key observation is that if RA is symmetric, then the only
useful information that the adversary gets is the number of
processes in Pmal that are allocated (i.e., |Umal |); the allocation
is independent of the particular processes in Pmal . If RA adds
no dummy processes and allocates resources uniformly, then
|Umal| = min(|Pmal|, k·|Pmal|/|Pmal|) when b = 0 and, in expectation, min(|Pmal|, k·|Pmal|/(|Pmal| + |Phon|)) when b = 1. By observing |Umal|,
the adversary learns the denominator in these fractions; the
sensitivity of this denominator—and of RA—is |Phon | ≤ βhon .
To limit the leakage, we design an allocator that samples
noise from an appropriate distribution and adds dummies based
on the sampled noise. We discuss the Laplace distribution here,
but other distributions (e.g., Poisson) would also work. The
Laplace distribution (Lap) with location parameter µ and scale
parameter s has the probability density function:
  Lap(x | µ, s) = (1/(2s)) · exp(−|x − µ|/s)
Let g(λ) and h(λ) be polynomial functions of the allocator’s
security parameter λ. These functions will control the tradeoff
between privacy and utilization: ε = 1/g(λ) bounds how much
information leaks (a larger value of g(λ) leads to better privacy
but worse utilization), and the ratio h(λ)/g(λ) (which impacts
δ) determines how often the bound holds (a larger ratio provides
a stronger guarantee, but leads to worse utilization). Given these
two functions, the allocator works as follows.
(ε, δ)-differentially private resource allocator DPRA:
• Inputs: P, k, λ
• µ ← βhon · h(λ)
• s ← βhon · g(λ)
• n ← ⌈max(0, Lap(µ, s))⌉
• t ← |P| + n
• Q ← set of dummy processes of size n
• π ← random permutation of P ∪ Q
• U ← first min(t, k) processes in π
• Output: U ∩ P
In short, the allocator receives a number of requests that is
either |Pmal | or |Pmal ∪Phon |. It samples noise n from the Laplace
distribution, computes the noisy total number of processes
t = |P| + n, and allocates min(t, k) uniformly at random.
Lemma 6. DPRA is (ε, δ)-differentially private (Def. 5) for ε = 1/g(λ) and δ = (1/2) · exp((1 − h(λ))/g(λ)) if |Phon| ≤ βhon.
Proof strategy. The proof that DPRA is differentially private
uses some of the ideas from the proof for the Laplace
mechanism by Dwork et al. [31]. A learns the total number
of processes in Pmal that are allocated, call it tmal . We show
that when the noise (n) is sufficiently large, for all ℓ ∈ [0, k],
Pr[tmal = ℓ|b = 0] is within a factor eε of Pr[tmal = ℓ|b = 1].
We then show that the noise fails to be sufficiently large with
probability ≤ δ. We give the full proof in Appendix C.
Corollary 7. If |Phon | ≤ βhon , the leakage or privacy loss that
results from observing the output of DPRA is bounded by
1/g(λ) with probability at least 1 − δ [32, Lemma 3.17].
In some cases, an adversary might interact with an allocator
multiple times, adapting Pmal in an attempt to learn more
information. We can reason about the leakage after i interactions
through differential privacy’s adaptive composition [33].
Lemma 8. DPRA is (ε′, iδ + δ′)-differentially private over i interactions for δ′ > 0 and ε′ = ε·√(2i·ln(1/δ′)) + iε(e^ε − 1).
Proof. The proof follows from [33, Theorem III.3]. An optimal,
albeit more complex, bound also exists [44, Theorem 3.3].
Lemma 9. DPRA provides liveness (Def. 3) if |Phon | ≤ βhon .
Proof. The expected value of Lap is βhon · h(λ), which is polynomial in λ.
As a result, the number of dummy processes added by DPRA
is polynomial on average; at least one process in P is allocated
a resource with inverse polynomial probability.
DPRA is efficient in expectation since with high probability,
n does not exceed a small multiple of βhon ·h(λ) (Lemma 9). To
bound DPRA’s worst-case time and space complexity, we can
truncate the Laplace distribution and bound n by exp(λ) without
much additional leakage. However, even if |P| ∈ poly(λ), the
noise (n), and thus the total number of processes (t) can all
be exp(λ). This would require DPRA to have access to exp(λ)
random bits to sample the dummy processes and to perform
the permutation; the running time and space complexity would
also be exponential. Fortunately, the generation of dummy
processes, the set union, and the permutation can all be avoided
(we introduced them only for simplicity). DPRA can compute
U directly from P, k, and t as follows.
1: function RandomAllocation(P, k, t)
2:   U ← ∅
3:   for i = 0 to min(t, k) − 1 do
4:     r ←R [0, 1]
5:     if r < |P|/(t − i) then
6:       p ← Sample uniformly from P without replacement
7:       U ← U ∪ {p}
8:   return U
Finally, sampling m elements from P without replacement is equivalent to generating the first m elements of a
random permutation of P on the fly, which can be done
with O(m log |P|) random bits in O(m log |P|) time and O(m)
space [18]. The same optimization (avoiding dummy processes
and permutations) applies to RRA (§IV-B) as well.
V. E XTENSIONS AND OTHER ALLOCATOR PROPERTIES
In addition to privacy and liveness, we ask whether PRAs
satisfy other properties that are often considered in resource
allocation settings. We study a few of them, listed below:
• Resource monotonicity. If the capacity of the allocator increases, the probability of any of the requesting processes receiving service should not decrease.
• Population monotonicity. When a process stops requesting service, the probability of any of the remaining processes receiving service should not decrease.
• Envy-freeness. A process should not prefer the allocation probability of another process. This is our working definition of fairness, though the notion of preference is quite subtle, as we explain later.
• Strategy-proofness. A process should not benefit by lying about how many units of a resource it needs.
Before stating which allocators meet which properties, we
first describe a few generalizations to PRAs.
A. Weighted allocators

Our resource allocators are egalitarian and select which processes to allocate uniformly from all requesting processes. However, they can be extended to prioritize some processes over others with the use of weights. Briefly, each process is associated with a weight, and allocation is done in proportion to that weight: a request from a process with half of the weight of a different process is picked up half as often. To implement weighted allocators, the poly(λ) bound on the number of processes (e.g., βP in RRA) now represents the bound on the sum of weights across all concurrent processes (normalized by the lowest weight of any of the processes), rather than the number of processes; padding is done by adding dummy processes until the normalized sum of their weights adds to the bound.

All of our privacy and liveness arguments carry over straightforwardly to this setting. The only caveat is that processes can infer their own assigned weight over time; just like the bounds, none of our allocators can keep this information private. However, processes cannot infer the weight of other processes beyond the trivial upper bound (i.e., the sum of the weights of any potential set of concurrent processes is βP).

B. Non-binary demands

Thus far we have considered only allocators for processes that demand a single unit of a resource. A natural extension is to consider non-binary demands. For example, a client of a cloud service might request 5 machines to run a task. These demands could be indivisible (i.e., the process derives positive utility only if it receives all of its demand), or divisible (i.e., the process derives positive utility even if it receives a fraction of its demand). We describe two potential modifications to PRAs that handle the divisible demands case and achieve different notions of fairness; we leave a construction of PRAs for the indivisible demands case to future work.

Probability in proportion to demands. In the non-binary setting, the input to the allocator is no longer just the set of processes P, but also their corresponding demands D. A desirable notion of fairness might be to allocate resources in proportion to processes' demands. For example, if process p1 demands 100 units, and p2 demands 2 units, an allocation of 50 units to p1 and 1 unit to p2 may be fair. Our PRAs can achieve this type of fairness for integral units by treating each process as a set of processes of binary demand (the cardinality of each set is given by the corresponding non-binary demand). The bounds are therefore based on the sum of processes' demands rather than the number of processes.

Probability independent of demands. Another possibility is to allocate each unit of a resource to processes independently of how many units they demand. For example, if p1 demands 100 units and p2 demands 1 unit, both processes are equally likely to receive the first unit of the resource. If p2 does not receive the first unit, both processes have an equal chance to get the second unit, etc.

To achieve this definition with PRAs, we propose to change the way that RRA and DPRA sample processes (i.e., Line 6 of the RandomAllocation function given in Section IV-C). Instead of sampling processes uniformly without replacement and giving the chosen processes all of their demanded resources, the allocator samples processes from P uniformly with infinite replacement, and gives each sampled process one unit of the resource on every iteration. The allocator then assigns to each process pi the number of units sampled for pi at the end of the algorithm or pi's demand, whichever is lower. This mechanism preserves the privacy of the allocation since it is equivalent to hypothetically running a PRA with a resource of capacity 1 and the same set of binary-demand processes k times in a row.

A property of this definition is that the bounds on the number of processes—βP in RRA (§IV-B) and βhon in DPRA (§IV-C)—remain the same as in the binary-demand case (i.e., independent of processes' demands) since the allocator does not expose the results of the intermediate k hypothetical runs. However, the allocator assumes that processes have infinite demand (and discards excess allocations at the end), which ensures privacy but leads to worse utilization (based on the imbalance of demands). A potentially less wasteful alternative is to do the sampling with a bounded number of replacements (i.e., a sampled process is not replaced if its demand has been met), but we have not yet analyzed this case since it requires stateful reasoning (it is a Markov process); to our knowledge sampling with bounded replacement has not been previously studied.
C. Additional properties met by PRAs
All of our PRAs meet the first three properties listed earlier,
and SRA and RRA also meet strategy-proofness; our proofs are
in Appendix D, but we highlight the most interesting results.
We observe that privacy is intimately related to population
monotonicity. This is most evident in DPRA, since its differential privacy definition states that changes in the set of
processes have a bounded effect on the allocation. Indeed, we
prove in Appendix D that our strongest definition of privacy,
IT-privacy (Def. 1), implies population monotonicity.
SRA and RRA are trivially strategy-proof for binary demands
since processes have only two choices—to request or not
request the resource—and they derive positive utility only
if: (a) they receive the resource; or (b) they deny some other
process the resource (in some applications). Condition (b) is
nullified by IT-Privacy: the existence of other processes has no
impact on whether a process receives a resource (if it did, an
adversary could exploit it to win the security game with nonzero advantage). Furthermore, if the resource cannot be traded
(i.e., a process cannot give its resource to another process)
and demands are binary, IT-privacy implies group strategy-proofness [12], which captures the notion of collusion between
processes (as otherwise a set of processes controlled by the
adversary could impact the allocation and violate privacy).
For non-binary demands, PRAs that meet our definition of
allocation probabilities being in proportion to demands are not
strategy-proof: processes have an incentive to request as many
units of a resource as possible regardless of how many units
they actually need. On the other hand, allocators that meet
the definition of allocation probability being independent of
demands are strategy-proof since the allocator assumes that all
processes have infinite demand anyway.
VI. B UILDING PRIVATE DIALING PROTOCOLS
In Section II we show that the composition of existing dialing
protocols with conversation protocols in MPM systems leaks
information. In this section we show how to incorporate the
PRAs from Section III into dialing protocols [6, 52, 57]. As an
example, we pick Alpenhorn [57] since it has a simple dialing
scheme, and describe the modifications that we make.
A. Alpenhorn’s dialing protocol
As we mention in Section II, a precondition for dialing is
that both parties, caller and callee, have a shared secret. We
do not discuss the specifics of how the secret is exchanged
since they are orthogonal (for simplicity, assume the secret is
exchanged out of band). Alpenhorn’s dialing protocol achieves
three goals. First, it synchronizes the state of users with the
current state of the system so that clients can dial their friends.
Second, it establishes an ephemeral key for a session so that
all data and metadata corresponding to that session enjoys
forward secrecy: if the key is compromised, the adversary
does not learn the content or metadata of prior sessions. Last,
it sets a round on which to start communication. The actual
communication happens via an MPM’s conversation protocol.
[Figure 4 diagram: clients (1) synchronize, (2) send dial tokens to, and (3) get dial tokens from the dialing service; each client keeps a keywheel per friend (S1 → S2 → S3 → S4), applying a hash each round.]
FIG. 4—Overview of Alpenhorn's dialing protocol [57]. Clients deposit dial tokens for their friends into an untrusted dialing service in rounds, and download all dial tokens sent at the end of a round. Clients then locally determine which tokens were meant for them. To derive dial tokens for a particular friend and round, clients use a per-friend data structure called a keywheel (see text for details).
We discuss how Alpenhorn achieves these goals, and
summarize the steps in Figure 4.
Synchronizing state. Similarly to how conversation protocols
operate in rounds (as we briefly discuss in Section II),
dialing protocols also operate in rounds. However, the two
types of rounds are quantitatively and qualitatively different.
Quantitatively, dialing happens less frequently (e.g., once per
minute) whereas conversations happen often (e.g., every ten
seconds). Qualitatively, a round of dialing precedes several
rounds of conversation, and compromised friends can only
make observations at the granularity of dialing rounds.
To be able to dial other users, clients need to know the
current dialing round. Clients can do this by asking the dialing
service (which is typically an untrusted server or a network of
mix servers) for the current round. While the dialing service
could lie, it would only result in denial of service which none
of these systems aims to prevent anyway.
In addition to the current dialing round, clients in Alpenhorn
maintain a keywheel for each of their friends. A keywheel is
a hash chain where the first node in the chain corresponds to
the initial secret shared between a pair of users (we depict this
as “S1” in Figure 4) anchored to some round. Once a dialing
round advances, the client hashes the current node to obtain
the next node, which gives the shared secret to be used in the
new round. The client discards prior nodes to ensure forward
secrecy in case of a device compromise.
Generating a dial request. To dial a friend, a client synchronizes their keywheel to obtain the shared secret for the current
dialing round, and then applies a second hash function to the
shared secret. This yields a dialing token, which the client
sends to the dialing service. This token leaks no information
about who is being dialed except to a recipient who knows
the corresponding shared secret. To prevent traffic analysis
attacks, the client sends a dialing token every dialing round,
even when it has no intention to dial anyone (in such case the
client creates a dummy dial token by hashing random data).
Receiving calls. A client fetches from the dialing service all
of the tokens sent in a given dialing round by all users (this
leads to quadratic communication costs for the server which
is why dialing rounds happen infrequently).³ For each friend
f in a client’s list, the client synchronizes the keywheel for f ,
uses the second hash function to compute the expected dial
token, and looks to see if the corresponding value is one of the
tokens downloaded from the dialing service. If there is a match,
this signifies that f is interested in starting a conversation in
the next conversation round. To derive the session key for a
conversation with f , the client computes a third hash function
(different from the prior two) on the round secret.
Responding to a call. Observe that it is possible for a client to receive many dial requests in the same dialing round. In fact, a client can receive a dial request from every one of their friends. The client is then responsible for picking which of the calls to answer. A typical choice is to pick the first k friends whose tokens matched, where k is the number of channels of the conversation protocol (typically 1, though some systems [8, 10] use larger values). Once the client chooses which calls to answer, the client derives the appropriate session keys and exchanges messages using the conversation protocol.

B. Incorporating private resource allocators

The allocation mechanism used by Alpenhorn to select which calls to answer leaks information (it is the FIFO strawman of Section III-B). We can instead replace it with a PRA like RRA (§IV-B) to select which of the matching tokens (processes) to allocate to the k channels of the conversation protocol (resource). There is, however, one key issue with this proposal. We are using the resource allocator only for the incoming calls. But what about outgoing calls? Observe that each outgoing call also consumes a communication channel. Specifically, when a user dials another user, the caller commits to use the conversation protocol for the next few conversation rounds (until a new dial round). In contrast, the callee may choose not to accept the caller's call. In other words, the caller uses up a communication channel even if the recipient rejects the call. Given the above, we study how outgoing calls impact the allocation of channels for incoming calls.

Process outgoing calls first. We first consider an implementation in which the client subtracts each outgoing call from the available channels (k) and then runs the PRA with the remaining channels to select which incoming calls to answer. This approach leaks information. The security game (§III) chooses between two cases, one in which the adversary is the only one dialing a user (P = Pmal), and one in which honest users are also dialing the user (P = Pmal ∪ Phon). All of our definitions of privacy require that the adversary cannot distinguish between these two cases. However, with outgoing calls there is another parameter that varies, namely the capacity k; this variation is not captured by the security game.

To account for this additional variable, we ask whether an adversary can distinguish the output of a resource allocator on inputs P, k, λ (representing a universe in which the user is not making any outgoing calls) and the output of the allocator on inputs P, k′, λ, where k′ < k (representing a universe in which the user is making at least one outgoing call). The answer is yes. As a simple example, consider RRA (§IV-B). The output from RRA(P, k = 1, λ) is very different from RRA(P, k = 0, λ) when |Pmal| = βP and Phon = ∅. The former always outputs one malicious process (since no padding is added and there are no honest processes), whereas the latter never outputs anything.

Process incoming calls first. Another approach is to reverse the order in which channels are allocated. To do so, one can first run the resource allocator on the incoming calls, and then use any remaining capacity for the outgoing calls. Since none of our allocators achieve perfect utilization (Def. 4) anyway, there is leftover capacity for outgoing calls. This keeps k constant, preventing the above attack.

While this approach preserves privacy and might be applicable in other contexts, it cannot be applied to Alpenhorn. Recall that users in Alpenhorn must send all of their dial tokens before they receive a single incoming call (see Figure 4). Consequently, the allocator cannot possibly execute before the user decides which or how many outgoing dial requests to send.

Process calls independently. The above suggests that to securely compose Alpenhorn with a conversation protocol that operates in rounds (which is the case for existing MPM systems), users should have dedicated channels. An implication of this is that the conversation protocol must, at a bare minimum, support two concurrent communication channels. We give a concrete proposal below.

We assume that each user has k = in + out available channels for the conversation protocol, for some in, out ≥ 1. The in channels are dedicated for incoming calls; the out channels are for outgoing calls. When a user receives a set of incoming dial requests, it uses a PRA and passes in as the capacity. Independently, the user can send up to out outgoing dial requests each round (of course the user always sends out dialing tokens to preserve privacy, using dummies if necessary). This simple scheme preserves privacy since the capacity used in the PRA is independent of outgoing calls.

³ Alpenhorn reduces the constant terms using bloom filters [57].

C. Improving the fit
The previous section discusses how to incorporate a PRA into
an existing dialing protocol. However, it introduces usability
issues (beyond the ones that commonly plague this space).
Conversations breaking up. Conversations often exhibit
inertia: when two users are actively exchanging messages, they
are more likely to continue to exchange messages in the near
future. Meanwhile, our modifications to Alpenhorn (§VI-B)
force clients to break up their existing conversations at the
start of every dialing round, which is abrupt.
The rationale for ending existing conversations for each new
dialing round is that our PRAs expect the capacity to remain
constant across rounds (so users need to free those channels).
Below we discuss ways to partially address this issue.
First, clients could use an allocator that has inertia built in.
For example, our slot-based resource allocator SRA (§IV-A)
does not need the integer r to be random or secret to guarantee
privacy. Consequently, if one sets r to be the current round, SRA would assign k consecutive dialing rounds to the same caller. This allows conversations to continue smoothly across rounds. The drawback is that if a conversation ends quickly (prior to the k rounds), the user is unable to allocate someone else's call to that channel for the remaining rounds.

Second, clients could transition a conversation that is consuming an incoming channel during one dial round to a conversation that consumes an outgoing channel the next dial round. Intuitively, this is the moral equivalent of both clients calling each other during the new round. Mechanistically, clients simply send dummy dial requests (they do not dial each other) which forces an outgoing channel to be committed to a dummy conversation. Clients then synchronize their keywheels to the new dialing round, derive the session key, and hijack the channel allocated to the dummy conversation.

[Figure 5 plot: mean utilization (0–1) of RRA, DPRA, and SRA as βhon varies from 0 to 25.]
FIG. 5—Mean utilization of PRAs over 1M rounds as we vary βhon. The error bars represent the standard deviation. We fix βM = 2,000 and make βP = 10βhon (the assumption modeled here is that 10% of the potential concurrent processes are honest). The number of concurrent processes that request service in a given round follows a Poisson distribution with a rate of 50 requests/round (but we bound this by βP). SRA and RRA guarantee IT-Privacy, and DPRA ensures (ε, δ)-differential privacy for ε = ln(2) and δ = 10⁻⁴.
Note that this transition can leak information. A compromised friend who is engaged in a long-term conversation with a target user could learn if the target has transitioned other conversations from incoming to outgoing channels (or is dialing other users) by observing whether a conversation ended abruptly across dialing rounds. Ultimately, outgoing channels are a finite resource and transitioning calls makes this resource observable to an attacker. Nevertheless, this is not quite rearranging the deck chairs on the Titanic; the requirements to conduct this attack are high: the attacker needs to be in a conversation with the target that spans multiple dialing rounds, and convince the target to transition the conversation into an outgoing channel.

Lack of priorities. In many cases, users may want to prioritize the calls of certain friends (e.g., close acquaintances over someone the user met briefly during their travel abroad). This is possible with the use of our weighted allocators (§V-A). Users can give their close friends higher weights, and these friends' calls will be more likely to be accepted. A drawback of this proposal is that callers can infer their assigned weight based on how often their calls get through, which could lead to awkward situations (e.g., a user's parents may be sad to learn that their child has assigned them a low priority!).

Lack of classes. Taking the idea of priorities a step further, mobile carriers used to offer free text messaging within certain groups ("family members" or "top friends"). We can generalize the idea of incoming and outgoing channels to dedicate channels to particular sets of users. For example, there could be a family-incoming channel with its corresponding PRA. This channel is used to chat with only family members, and hence one can make strong assumptions about the bound on the number of concurrent callers—allowing for better utilization.

VII. IMPLEMENTATION AND EVALUATION

We have implemented our allocators (including the weighted variants of Section V-A) on top of Alpenhorn's codebase [2] in about 600 lines of Go, and also in a standalone library written in Rust. In Alpenhorn, we modify the scanBloomFilter function, which downloads a bloom filter representing the dialing tokens from the dialing service. This function then tests, for each of a user's friends, whether the friend sent a dialing token. If so, it executes the client's ReceivedCall handler (a client-specific callback function that acts on the call) with the appropriate session key. Our modification instead collects all of the matching tokens, runs the PRA to select at most k of these tokens, and then calls the ReceivedCall handler with the corresponding session keys.

A. Evaluation questions

None of our allocators are expensive in terms of memory or computation. Even when allocating resources to 1M processes, their 95-percentile runtimes are 4.2µs, 10.8µs, and 6.9µs for SRA, RRA, and DPRA, respectively. The real impact of these allocators is the reduction in utilization (compared to a non-private variant). We therefore focus on three main questions:
1) How does the utilization of different allocators compare as their corresponding bounds vary?
2) What is the concrete tradeoff between utilization and leakage for the differentially private allocator?
3) How much latency do allocators introduce before friends can start a conversation in Alpenhorn?

We answer these questions in the context of the following experimental setup. We perform all of our measurements on Azure D3v2 instances (2.4 GHz Intel Xeon E5-2673 v3, 14 GB RAM) running Ubuntu Linux 18.04-LTS. We use Rust version 1.41 with the criterion benchmarking library [3], and Go version 1.12.5 for compiling and running Alpenhorn.

B. Utilization of different allocators

We start by asking how different allocators compare in terms of utilization. Since the parameter space here is vast and utilization depends on the particular choice of parameters, we mostly highlight the general trends. We set the maximum number of processes to βM = 2,000, and assume that 10% of processes requesting service at any given time are honest (i.e., βP = 10βhon). This setting is not unreasonable if we assume that sybil attacks [30] are not possible. If, however, sybil attacks are possible in the target application, then comparing the utilization of our allocators is a moot point: only DPRA
can guarantee privacy in the presence of an unbounded number of malicious processes.

To measure utilization (Definition 4), we have processes request resources following a Poisson distribution with a rate of 50 requests/round; this determines the value of |P|, which we truncate at βP. We then depict the mean utilization over 1M rounds as we vary βhon (which impacts the value of βP as explained above) in Figure 5.

Results. SRA achieves low utilization across the board since it is inversely proportional to βM and does not depend on βhon (the utilization is much lower at βhon = 1 only because of the truncation of |P| to ≤ 10). RRA, on the other hand, achieves perfect utilization when βhon is small. This is simply because |P| = βP with high probability (again, due to the way we set and truncate |P|); in that case RRA adds no dummy processes. For larger values of βhon, the difference between βP and |P| increases, leading to a reduction in utilization.

As expected, DPRA's utilization is inversely proportional to βhon. What is somewhat surprising about this experiment is that DPRA achieves worse utilization than RRA, even though it provides weaker guarantees. However, this is explained by DPRA making a weaker assumption. One could view this difference as the cost of working in a "permissionless" setting.
FIG. 6—Mean utilization of DPRA with 1M rounds as we vary the bounds (βhon) and the security parameter (λ) for a resource of capacity k = 10. Here, g(λ) = λ and h(λ) = 3λ. The number of processes requesting service (|P|) is fixed to 100.
C. Utilization versus privacy for DPRA
In the previous section we compare the utilization of DPRA to other allocators for a particular value of ε, δ, and βhon. Here we examine how λ can impact utilization for a variety of bounds by conducting the same experiment but varying λ and βhon. We arbitrarily set g(λ) = λ and h(λ) = 3λ, which yields ε = 1/λ and δ = (1/2)·exp((1 − 3λ)/λ). The results are in Figure 6.

We find that for high values of λ, the utilization is well below 10% regardless of βhon, which is too high a price to stomach—especially since RRA leaks no information and achieves better utilization. As a result, DPRA appears useful only in cases where moderate leakage is acceptable (high values of ε and δ), or when there is no other choice (when there are sybils, or when the application is new and a bound cannot be predicted).

To answer whether a given ε and δ are a good choice in terms of privacy and utilization, we can reason about it analytically using the expressions in the last row of Figure 3. However, it is also useful to visualize how DPRA works. To do this, we run 100K iterations of the security game (§III) and measure how the resulting allocations differ based on the challenger's choice of b and the values of ε and δ. We also conduct this experiment with RRA (with βP = 100) for comparison. The results are depicted in Figure 7.

FIG. 7—Histogram of malicious processes allocated by DPRA for different values of ε and δ (panels a–c: (a) ε = 0.20, δ = 0.030; (b) ε = 0.10, δ = 0.027; (c) ε = ln(2), δ = 10−4) and by RRA (panel d) after 100K iterations. In b = 0, the allocators are called with Pmal; in b = 1, the allocators are given Pmal ∪ Phon. Differences between the two lines represent the leakage. Here βhon = 10, βP = 100, k = 10, |Phon| ∈R [0, 10], and |Pmal| = 100 − |Phon|. The parameters in panel (c) are those used by Vuvuzela [81] and Alpenhorn [57]. To achieve ε = ln(2) and δ = 10−4, we set g(λ) = λ/(10 ln 2) and h(λ) = 1.328λ for λ = 10.

If an allocator has negligible leakage, the two lines (b = 0 and b = 1) should be roughly equivalent (this is indeed the
FIG. 8—Average number of rounds required to establish a session in Alpenhorn when the recipient is using a PRA with a varying number of incoming channels ("in" in the terminology of Section VI-B). βP = 100, βhon = 10, ε = ln(2), δ = 10−4.
FIG. 9—Average number of rounds required to establish a session in Alpenhorn when the recipient is using a weighted PRA (§VI-C) and the caller has a priority 5× higher than all other users. βP = 100, βhon = 10, ε = ln(2), and δ = 10−4.
case with RRA). Since DPRA is leaky, there are observable differences, even to the naked eye (e.g., Figures 7a and 7c). We also observe a few trends. If we fix g(λ) = λ and h(λ) = 3λ in DPRA (Figures 7a and 7b), as λ doubles (from panel a to panel b), the frequency of values concentrates more around the mean, and the mean shifts closer to 0. Indeed, for λ = 1000 (not depicted), the majority of the mass is clustered around 0 and 1. RRA is heavily concentrated around tmal = 10 because our setting of |P| = βP guarantees perfect utilization (cf. §VII-B), and roughly 90% of the chosen processes are malicious (so they count towards tmal). For other values of βP, the lines would concentrate around k·|Pmal|/βP.
D. Conversation start latency in Alpenhorn

To evaluate our modified version of Alpenhorn, we choose privacy parameters that are at least as good as those in the original Alpenhorn evaluation [57] (ε = ln(2) and δ = 10−4; see Figure 7 for details on the polynomial functions that we use), and pick bounds based on a previous study of Facebook's social graph [79].⁴ We set the maximum number of friends (βM) to 5,000, the maximum number of concurrent dialing friends (βP) to 100, and the maximum number of concurrent honest dialing friends (βhon) to 20. We think these numbers are reasonable for MPMs: if dialing rounds are on the order of a minute, the likelihood of a user receiving a call from 21 different uncompromised friends while the adversary simultaneously compromises at least 80 of the user's friends is relatively low. Of course, the adversary could exploit software or hardware vulnerabilities in clients' end devices to invalidate this assumption, but crucially, MPM systems are at least not vulnerable to sybils (dialing requires a pre-shared secret).

We quantify the disruption of PRAs in Alpenhorn by measuring how many dialing rounds it takes a particular caller to establish a session with a friend as a function of the allocator's capacity (in). The baseline for comparison is the original Alpenhorn system, which uses the FIFO allocator described in Section III-B. Our experiment first samples a number of concurrent callers following a Poisson distribution with an average rate of in processes/round. We set the average rate to in because we expect that as the system becomes more popular and users start demanding more concurrent conversations, the default per-round capacity of the system will be increased. We emphasize that this choice only helps the baseline: the number of callers (|P|) has no impact on the probability of a particular process being chosen in SRA or RRA, and has only a bounded impact in DPRA. In contrast, the value of |P| has a significant impact on when a process (e.g., the last process) is chosen in the FIFO allocator (lower is better). We then label one caller c ∈ P at random as a distinguished caller, and have all callers dial the callee; whenever a caller's call is picked up, we remove that caller from P. Finally, we measure how many rounds it takes for c's call to be answered, and repeat this experiment 100 times. The results for the baseline, RRA, and DPRA are given in Figure 8. We do not depict SRA since it requires over 10× more rounds.

When there is a single incoming channel available (in = 1), it takes c on average 102 rounds to establish a connection with RRA and 271 rounds with DPRA; it takes the baseline roughly 1.5 rounds since the number of processes is very small. For in = 5, which is reasonable in a setting in which rounds are infrequent, c must wait for about 20 and 52 rounds for RRA and DPRA, respectively.

Given this high delay, we ask whether prioritization (§VI-C) can provide some relief. We perform the same experiment but assume that the caller c is classified as a high-priority friend (5× higher weight). Indeed, prioritization cuts down the average session start time proportionally to the caller's weight. For in = 5, the average session start is 4.4 rounds in RRA versus 1.2 rounds in the baseline (a 3.6× latency hit).

Alternate tradeoffs. It takes callers in our modified Alpenhorn 16× longer than the baseline to establish a connection with their friends (when in = 5 and there is no prioritization). If rounds are long (minutes or tens of minutes), this dramatically hinders usability. An alternative is to trade other resources for latency: clients can increase the number of conversations they can handle by 16× (paying a corresponding network cost due to dummy messages) to regain the lower latency. Equivalently, the system can decrease the dialing round duration (again, at a CPU and network cost increase for all clients and the service).

⁴While Facebook is different from a messaging app, Facebook Messenger relies on users' Facebook contacts and has over 1.3 billion monthly users [23].
VIII. RELATED WORK

Several prior works study privacy in resource allocation mechanisms, including matchings and auctions [11, 13, 17, 40, 63, 65, 75, 85], but the definition of privacy, the setting, and the guarantees are different from those studied in this work; the proposed solutions would not prevent allocation-based side channels. Beaude et al. [13] allow clients to jointly compute an allocation without revealing their demands to each other via secure multiparty computation. Zhang and Li [85] design a type of searchable encryption that allows an IoT gateway to forward tasks coming from IoT devices (e.g., smart fridges) to the appropriate fog or cloud computing node without learning anything about the tasks. Similarly, other works [17, 40, 63, 65, 75] study how to compute auctions while hiding clients' bids. Unlike PRAs, the goal of all of these works is to hide the inputs from the allocator or some auditor (or to replace the allocator with a multi-party protocol), and not to hide the existence of clients.

The work of Hsu et al. [41] is related to DPRA (§IV-C). They show how to compute matchings and allocations that guarantee joint-differential privacy [46] and hide the preferences of an agent from other agents. However, their setting, techniques, and assumptions are different. We highlight a few of these differences: (1) their mechanism has a notion of price, and converges when agents stop bidding for additional resources because the price is too high for them to derive utility; in our setting, processes do not have a budget and there is no notion of prices. (2) Their scheme assumes that the allocator's capacity is at least logarithmic in the number of agents. (3) Their setting does not distinguish between honest and malicious agents, so the sensitivity is based on all agents' demands. In the presence of sybils (which, as we show in Section VII, is the only setting that makes sense for DPRA), assumption (2) cannot be met, and (3) leads to unbounded sensitivity.

IX. DISCUSSION AND FUTURE WORK

We introduce private resource allocators (PRA) to deal with allocation-based side-channel attacks, and evaluate them on an existing metadata-private messenger. While PRAs might be useful in other contexts, we emphasize that their guarantees are limited to hiding which processes received resources from the allocation itself. Processes could learn this information through other means (this is not an issue in MPM systems since by design they hide all other metadata). For example, even if one uses a PRA to allocate threads to a fixed set of CPUs, the allocated threads could learn whether other CPUs were allocated by observing cache contention, changes to the filesystem state, etc.

Other applications in which allocation-based side channels could play a role are those in which processes ask for permission to consume a resource before doing so. One example is FastPass [67], which is a low-latency data center architecture in which VMs first ask a centralized arbiter for permission and instructions on how to send a packet to ensure that their packets will not contribute to queue build-up in the network. Similarly, Pulsar [7] works in two phases: cloud hypervisors ask for resources (network, storage, middleboxes) for their VMs to a centralized controller via a small dedicated channel before the VMs can use the shared data center resources. While the use of the shared resources is vulnerable to consumption-based side channels, the request for resources and the corresponding allocation might be vulnerable to allocation-based side channels. Indeed, we believe that systems that make a distinction between the data plane and the control plane are good targets to study for potential allocation-based side channels.

Enhancements to PRAs. Note that PRAs naturally use resources to compute allocations: they execute CPU instructions, sample randomness, access memory, etc. As a result, even though the allocation itself might reveal no information, the way in which PRAs compute that allocation is subject to standard consumption-based side-channel attacks (e.g., timing attacks). For example, a process might infer how many other processes there are based on how long it took the PRA to compute the allocation. It is therefore desirable to ensure that PRA implementations are constant time and take into account the details of the hardware on which they run. To illustrate how critical this is, observe that DPRA (§IV-C) samples noise from the Laplace distribution assuming infinite precision. However, real hardware has finite precision and rounding effects for floating-point numbers that violate differential privacy unless additional safeguards are used [28, 62]. Beyond these enhancements, we consider two other future directions.

Private multi-resource allocators. In some settings there is a need to allocate multiple types of resources to clients with heterogeneous demands. For example, suppose there are three resources R1, R2, R3 (each with its own capacity). Client c1 wants two units of R1 and one unit of R2, and client c2 wants one unit of R1 and three units of R3. How can we allocate resources to clients without leaking information while ensuring different definitions of fairness [14, 29, 37, 39, 45, 64]? A naive approach of using a PRA for each resource independently is neither fair (for any of the proposed fairness definitions) nor optimal in terms of utilization.

Private distributed resource allocators. Many allocators operate in a distributed setting. For example, the transmission control protocol (TCP) allocates network capacity fairly on a per-flow basis without a central allocator. Can we distribute the logic of our PRAs while still guaranteeing privacy and liveness with minimal or no coordination?

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their thoughtful feedback, which significantly improved this paper. We also thank Aaron Roth for pointing us to related work, and Andrew Beams for his comments on an earlier draft of this paper.

DISCLAIMER

This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations.
REFERENCES
[1] Apache Hadoop. https://hadoop.apache.org.
[2] Alpenhorn: Bootstrapping secure communication without
leaking metadata. https://github.com/vuvuzela/alpenhorn,
Nov. 2018. commit 3284950.
[3] Criterion: Statistics-driven microbenchmarking in Rust.
https://github.com/japaric/criterion.rs, Apr. 2019.
[4] D. Agrawal, B. Archambeault, J. R. Rao, and P. Rohatgi.
The EM side-channel(s). In Proceedings of the
Workshop on Cryptographic Hardware and Embedded
Systems (CHES), Aug. 2002.
[5] D. Agrawal and D. Kesdogan. Measuring anonymity:
The disclosure attack. IEEE Security & Privacy, 1(6),
Nov. 2003.
[6] N. Alexopoulos, A. Kiayias, R. Talviste, and
T. Zacharias. MCMix: Anonymous messaging via secure
multiparty computation. In Proceedings of the USENIX
Security Symposium, Aug. 2017.
[7] S. Angel, H. Ballani, T. Karagiannis, G. O’Shea, and
E. Thereska. End-to-end performance isolation through
virtual datacenters. In Proceedings of the USENIX
Symposium on Operating Systems Design and
Implementation (OSDI), Oct. 2014.
[8] S. Angel, H. Chen, K. Laine, and S. Setty. PIR with
compressed queries and amortized query processing. In
Proceedings of the IEEE Symposium on Security and
Privacy (S&P), May 2018.
[9] S. Angel, D. Lazar, and I. Tzialla. What’s a little leakage
between friends? In Proceedings of the ACM Workshop
on Privacy in the Electronic Society (WPES), Oct. 2018.
[10] S. Angel and S. Setty. Unobservable communication
over fully untrusted infrastructure. In Proceedings of the
USENIX Symposium on Operating Systems Design and
Implementation (OSDI), Nov. 2016.
[11] S. Angel and M. Walfish. Verifiable auctions for online
ad exchanges. In Proceedings of the ACM SIGCOMM
Conference, Aug. 2013.
[12] S. Barbera. A note on group strategy-proof decision
schemes. Econometrica, 47(3), May 1979.
[13] O. Beaude, P. Benchimol, S. Gauberts, P. Jacquot, and
A. Oudjane. A privacy-preserving method to optimize
distributed resource allocation. arXiv:1908.03080, Aug.
2019. http://arxiv.org/abs/1908.03080.
[14] A. A. Bhattacharya, D. Culler, E. Friedman, A. Ghodsi,
S. Shenker, and I. Stoica. Hierarchical scheduling for
diverse datacenter workloads. In Proceedings of the
ACM Symposium on Cloud Computing (SOCC), Oct.
2013.
[15] L. Bilge, T. Strufe, D. Balzarotti, and E. Kirda. All your
contacts are belong to us: Automated identity theft
attacks on social networks. In International World Wide
Web Conference (WWW), Apr. 2009.
[16] N. Borisov, G. Danezis, and I. Goldberg. DP5: A private
presence service. In Proceedings of the Privacy
Enhancing Technologies Symposium (PETS), June 2015.
[17] F. Brandt. How to obtain full privacy in auctions.
International Journal of Information Security, 5(4), Oct.
2006.
[18] G. Brassard and S. Kannan. The generation of random
permutations on the fly. Information Processing Letters,
28(4), July 1988.
[19] A. Cabrera Aldaya, B. B. Brumley, S. ul Hassan,
C. Pereida García, and N. Tuveri. Port contention for
fun and profit. In Proceedings of the IEEE Symposium
on Security and Privacy (S&P), May 2019.
[20] H. Chen, Z. Huang, K. Laine, and P. Rindal. Labeled
PSI from fully homomorphic encryption with malicious
security. In Proceedings of the ACM Conference on
Computer and Communications Security (CCS), Oct.
2018.
[21] S.-F. Cheng, D. M. Reeves, Y. Vorobeychik, and M. P.
Wellman. Notes on equilibria in symmetric games. In
Proceedings of the International Workshop on Game
Theoretic and Decision Theoretic Agents (GTDT), 2004.
[22] D. Cole. We kill people based on metadata.
http://goo.gl/LWKQLx, May 2014. The New York
Review of Books.
[23] J. Constine. Facebook messenger will get desktop apps,
co-watching, emoji status. https://techcrunch.com/2019/
04/30/facebook-messenger-desktop-app/.
[24] H. Corrigan-Gibbs and B. Ford. Dissent: Accountable
anonymous group messaging. In Proceedings of the
ACM Conference on Computer and Communications
Security (CCS), Oct. 2010.
[25] G. Danezis. Statistical disclosure attacks. In Proceedings
of the IFIP Information Security Conference, May 2003.
[26] G. Danezis, C. Diaz, and C. Troncoso. Two-sided
statistical disclosure attack. In Proceedings of the
Workshop on Privacy Enhancing Technologies (PET),
June 2007.
[27] G. Danezis and A. Serjantov. Statistical disclosure or
intersection attacks on anonymity systems. In
Proceedings of the International Workshop on
Information Hiding, May 2004.
[28] Y. Dodis, A. López-Alt, I. Mironov, and S. Vadhan.
Differential privacy with imperfect randomness. In
Proceedings of the International Cryptology Conference
(CRYPTO), Aug. 2012.
[29] D. Dolev, D. G. Feitelson, J. Y. Halpern, R. Kupferman,
and N. Linial. No justified complaints: On fair sharing of
multiple resources. In Proceedings of the Innovations in
Theoretical Computer Science (ITCS) Conference, Aug.
2012.
[30] J. R. Douceur. The sybil attack. In Proceedings of the
International Workshop on Peer-to-Peer Systems, Mar.
2002.
[31] C. Dwork, F. McSherry, K. Nissim, and A. Smith.
Calibrating noise to sensitivity in private data analysis.
In Proceedings of the Theory of Cryptography
Conference (TCC), Mar. 2006.
[32] C. Dwork and A. Roth. The Algorithmic Foundations of
Differential Privacy. Foundations and Trends in Theoretical Computer Science. Now Publishers Inc, 2014.
[33] C. Dwork, G. N. Rothblum, and S. Vadhan. Boosting and differential privacy. In Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), Oct. 2010.
[34] Equifax. 2017 cybersecurity incident & important consumer information. https://www.equifaxsecurity2017.com, Sept. 2017.
[35] D. Gale, H. W. Kuhn, and A. W. Tucker. On symmetric games. In Contributions to the Theory of Games, Annals of Mathematics Studies. Princeton University Press, 1952.
[36] D. Genkin, A. Shamir, and E. Tromer. RSA key extraction via low-bandwidth acoustic cryptanalysis. In Proceedings of the International Cryptology Conference (CRYPTO), Aug. 2014.
[37] A. Ghodsi, M. Zaharia, B. Hindman, A. Konwinski, S. Shenker, and I. Stoica. Dominant resource fairness: Fair allocation of multiple resource types. In Proceedings of the USENIX Symposium on Networked Systems Design and Implementation (NSDI), Mar. 2011.
[38] O. Goldreich and R. Ostrovsky. Software protection and simulation on oblivious RAMs. Journal of the ACM, 43(3), May 1996.
[39] A. Gutman and N. Nisan. Fair allocation without trade. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), June 2012.
[40] M. Harkavy, J. D. Tygar, and H. Kikuchi. Electronic auctions with private bids. In Proceedings of the USENIX Workshop on Electronic Commerce, Aug. 1998.
[41] J. Hsu, Z. Huang, A. Roth, T. Roughgarden, and Z. S. Wu. Private matchings and allocations. In Proceedings of the ACM Symposium on Theory of Computing (STOC), May 2014.
[42] P. Hunt, M. Konar, F. P. Junqueira, and B. Reed. ZooKeeper: Wait-free coordination for internet-scale systems. In Proceedings of the USENIX Annual Technical Conference (ATC), June 2010.
[43] M. Hutter and J.-M. Schmidt. The temperature side channel and heating fault attacks. In Proceedings of the International Conference on Smart Card Research and Advanced Applications (CARDIS), Nov. 2013.
[44] P. Kairouz, S. Oh, and P. Viswanath. The composition theorem for differential privacy. In Proceedings of the International Conference on Machine Learning (ICML), June 2014.
[45] I. Kash, A. D. Procaccia, and N. Shah. No agent left behind: Dynamic fair division of multiple resources. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2013.
[46] M. Kearns, M. Pai, A. Roth, and J. Ullman. Mechanism design in large games: Incentives and privacy. In Proceedings of the Innovations in Theoretical Computer Science (ITCS) Conference, 2014.
[47] J. Kelsey, B. Schneier, D. Wagner, and C. Hall. Side
channel cryptanalysis of product ciphers. In Proceedings
of the European Symposium on Research in Computer
Security (ESORICS), Sept. 1998.
[48] D. Kesdogan, D. Mölle, S. Ritchter, and P. Rossmanith.
Breaking anonymity by learning a unique minimum
hitting set. In Proceedings of the International Computer
Science Symposium in Russia (CSR), Aug. 2009.
[49] D. Kesdogan and L. Pimenidis. The hitting set attack on
anonymity protocols. In Proceedings of the International
Workshop on Information Hiding, May 2004.
[50] P. Kocher, J. Jaffe, and B. Jun. Differential power
analysis. In Proceedings of the International Cryptology
Conference (CRYPTO), Aug. 1999.
[51] P. C. Kocher. Timing attacks on implementations of
Diffie-Hellman, RSA, DSS, and other systems. In
Proceedings of the International Cryptology Conference
(CRYPTO), Aug. 1996.
[52] A. Kwon, H. Corrigan-Gibbs, S. Devadas, and B. Ford.
Atom: Horizontally scaling strong anonymity. In
Proceedings of the ACM Symposium on Operating
Systems Principles (SOSP), Oct. 2017.
[53] A. Kwon, D. Lu, and S. Devadas. XRD: Scalable
messaging system with cryptographic privacy.
arXiv:1901.04368, Jan. 2019.
http://arxiv.org/abs/1901.04368.
[54] S. Larson. Every single yahoo account was hacked—3
billion in all.
https://money.cnn.com/2017/10/03/technology/business/
yahoo-breach-3-billion-accounts/index.html, Oct. 2017.
[55] D. Lazar, Y. Gilad, and N. Zeldovich. Karaoke:
Distributed private messaging immune to passive traffic
analysis. In Proceedings of the USENIX Symposium on
Operating Systems Design and Implementation (OSDI),
Oct. 2018.
[56] D. Lazar, Y. Gilad, and N. Zeldovich. Yodel: Strong
metadata security for voice calls. In Proceedings of the
ACM Symposium on Operating Systems Principles
(SOSP), 2019.
[57] D. Lazar and N. Zeldovich. Alpenhorn: Bootstrapping
secure communication without leaking metadata. In
Proceedings of the USENIX Symposium on Operating
Systems Design and Implementation (OSDI), Nov. 2016.
[58] R. Mahajan, S. M. Bellovin, S. Floyd, J. Ioannidis,
V. Paxson, and S. Shenker. Controlling high bandwidth
aggregates in the network. In ACM SIGCOMM CCR,
2002.
[59] N. Mallesh and M. Wright. The reverse statistical
disclosure attack. In Proceedings of the International
Workshop on Information Hiding, June 2010.
[60] M. Marlinspike. Technology preview: Private contact
discovery for Signal.
https://signal.org/blog/private-contact-discovery/, Sept.
2017.
[61] J. Mayer, P. Mutchler, and J. C. Mitchell. Evaluating the
privacy properties of telephone metadata. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 113(20), May 2016.
[62] I. Mironov. On significance of the least significant bits for differential privacy. In Proceedings of the ACM Conference on Computer and Communications Security (CCS), Oct. 2012.
[63] M. Naor, B. Pinkas, and R. Sumner. Privacy preserving auctions and mechanism design. In Proceedings of the ACM Conference on Electronic Commerce (EC), Nov. 1999.
[64] D. C. Parkes, A. D. Procaccia, and N. Shah. Beyond dominant resource fairness: Extensions, limitations, and indivisibilities. In Proceedings of the ACM Conference on Electronic Commerce (EC), June 2012.
[65] D. C. Parkes, M. O. Rabin, S. M. Shieber, and C. A. Thorpe. Practical secrecy-preserving, verifiably correct and trustworthy auctions. In Proceedings of the ACM Conference on Electronic Commerce (EC), Aug. 2006.
[66] F. Pérez-González and C. Troncoso. Understanding statistical disclosure: A least squares approach. In Proceedings of the Privacy Enhancing Technologies Symposium (PETS), July 2012.
[67] J. Perry, A. Ousterhout, H. Balakrishnan, D. Shah, and H. Fugal. Fastpass: A centralized "zero-queue" datacenter network. In Proceedings of the ACM SIGCOMM Conference, 2014.
[68] A. Pfitzmann and M. Hansen. A terminology for talking about privacy by data minimization: Anonymity, unlinkability, undetectability, unobservability, pseudonymity, and identity management. http://dud.inf.tu-dresden.de/literatur/Anon_Terminology_v0.34.pdf, Aug. 2010.
[69] H. Rashtian, Y. Boshmaf, P. Jaferian, and K. Beznosov. To befriend or not? A model of friend request acceptance on Facebook. In Proceedings of the USENIX Symposium on Usable Privacy and Security (SOUPS), July 2014.
[70] J.-F. Raymond. Traffic analysis: Protocols, attacks, design issues, and open problems. In Proceedings of the International Workshop on Design Issues in Anonymity and Unobservability, July 2000.
[71] A. Rusbridger. The Snowden leaks and the public. http://goo.gl/VOQL86, Nov. 2013. The New York Review of Books.
[72] A. Schlösser, D. Nedospasov, J. Krämer, S. Orlic, and J.-P. Seifert. Simple photonic emission analysis of AES. In Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems (CHES), Aug. 2012.
[73] B. Schneier. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. W.W. Norton & Company, Mar. 2015.
[74] M. Schroepfer. An update on our plans to restrict data access on Facebook. https://newsroom.fb.com/news/2018/04/restricting-data-access/, Apr. 2018.
[75] Y.-E. Sun, H. Huang, X.-Y. Li, Y. Du, M. Tian, H. Xu, and M. Xiao. Privacy-preserving strategyproof auction
mechanisms for resource allocation in wireless communications. In International Conference on Big Data Computing and Communications (BIGCOM), 2016.
[76] E. Thereska, H. Ballani, G. O'Shea, T. Karagiannis, A. Rowstron, T. Talpey, R. Black, and T. Zhu. IOFlow: A software-defined storage architecture. In Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), Nov. 2013.
[77] C. Troncoso, B. Gierlichs, B. Preneel, and I. Verbauwhede. Perfect matching disclosure attack. In Proceedings of the Privacy Enhancing Technologies Symposium (PETS), July 2008.
[78] N. Tyagi, Y. Gilad, D. Leung, M. Zaharia, and N. Zeldovich. Stadium: A distributed metadata-private messaging system. In Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), Nov. 2017.
[79] J. Ugander, B. Karrer, L. Backstrom, and C. Marlow. The anatomy of the Facebook social graph. arXiv:1111.4503, Nov. 2011. http://arxiv.org/abs/1111.4503.
[80] UpGuard. Losing face: Two more cases of third-party Facebook app data exposure. https://www.upguard.com/breaches/facebook-user-data-leak, Apr. 2019.
[81] J. van den Hooff, D. Lazar, M. Zaharia, and N. Zeldovich. Vuvuzela: Scalable private messaging resistant to traffic analysis. In Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), Oct. 2015.
[82] W. van Eck. Electromagnetic radiation from video display units: An eavesdropping risk? Computers & Security, 4, 1985.
[83] D. I. Wolinsky, H. Corrigan-Gibbs, B. Ford, and A. Johnson. Dissent in numbers: Making strong anonymity scale. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation (OSDI), Oct. 2012.
[84] M. Zaharia, M. Chowdhury, T. Das, A. Dave, J. Ma, M. McCauley, M. J. Franklin, S. Shenker, and I. Stoica. Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing. In Proceedings of the USENIX Symposium on Networked Systems Design and Implementation (NSDI), Apr. 2012.
[85] L. Zhang and J. Li. Enabling robust and privacy-preserving resource allocation in fog computing. IEEE Access, 6, Sept. 2018.
APPENDIX

A. Secret distribution allocator is insecure

Section III-B describes an allocator that chooses which processes to service based on a secret distribution that is not known to the adversary. We prove that this allocator does not guarantee C-privacy (Definition 2) and Liveness (Definition 3).

Proof. Suppose that A sets Pmal = {pi} and |Phon| ≥ k. The allocator then picks s ∈ [0, k] and U ← RA(P, k, λ) from a secret distribution not known to A. Assume for purposes of contradiction that RA satisfies C-Privacy and Liveness.

A key observation is that Pr[pi ∈ U | b = 0 ∧ s > 0] = 1, since s processes in Pmal will be allocated a resource when b = 0. However, Pr[pi ∈ U | b = 1 ∧ s > 0] = xi. Since RA guarantees C-Privacy, Pr[s > 0] · (1 − xi) is negligible for any choice of Pmal = {pi}.

Claim 10. Pr[s > 0] = 1/poly(λ).

This follows from the fact that RA guarantees Liveness (Definition 3), and so it must allocate resources to s > 0 processes with non-negligible probability.

By Claim 10 and C-Privacy, (1 − xi) must then be negligible since Pr[s > 0] · (1 − xi) = (1 − xi)/poly(λ) = negl(λ).

Observe that since (1 − xi) is negligible for every choice of Pmal = {pi}, this implies that every process is allocated a resource with probability close to 1 when b = 1, which contradicts the capacity of RA since |P| > k. Therefore, the difference in conditional probabilities Pr[s > 0] · (1 − xi) must be non-negligible for some choice of Pmal = {pi}, which contradicts that RA satisfies C-Privacy. Furthermore, finding one such pi is efficient as there are only |P| = poly(λ) elements.

B. Proof of impossibility result

Theorem 1 (Impossibility result). There does not exist a resource allocator RA that achieves both IT-privacy (Def. 1) and Liveness (Def. 3) when k is poly(λ) and |P| is superpoly(λ) (i.e., |P| ∈ ω(λ^c) for all constants c).

Proof. We start with two simple claims.

Claim 11. If |P| = superpoly(λ), then an overwhelming fraction of the p ∈ P have Pr[p ∈ RA(P, k, λ)] ≤ negl(λ).

This follows from a simple pigeonhole argument: there are a super-polynomial number of processes requesting service, and at most a polynomial number of them can have non-negligible probability mass.

Claim 12. If RA guarantees liveness, then in the security game, the conditional probability Pr[p ∈ U | b = 0 ∧ Pmal = {p}] must be non-negligible. This follows since for all RA, if P = {p}, RA(P, k, λ) must output U = {p} with non-negligible probability (recall that liveness holds for any set P, including the set with only one process). Therefore, in the security game when b = 0, the call to RA(Pmal, k, λ) must return {p} with non-negligible probability.

We now prove Theorem 1 by contradiction. Suppose that an allocator RA achieves both liveness and privacy when |P| = superpoly(λ). Let X be a set with a super-polynomial number of processes. By Claim 11, RA allocates an overwhelming fraction of the p ∈ X with negligible probability, so the adversary can identify one such process. The adversary can then set Pmal = {p} and Phon = X − {p}. This satisfies the condition of Claim 12, so we have that Pr[p ∈ U | b = 0] ≥ 1/poly(λ). Finally, when b = 1, the challenger passes P = Pmal ∪ Phon as input to RA. Notice that P = X, so by Claim 11, Pr[p ∈ U | b = 1] ≤ negl(λ). As a result, the advantage of the adversary is inversely polynomial, which contradicts the claim that RA guarantees IT-privacy.

C. DPRA guarantees differential privacy

We prove that DPRA (§IV-C) is (ε, δ)-differentially private (Def. 5) for ε = 1/g(λ) and δ = (1/2) · exp((1 − h(λ))/g(λ)) if |Phon| ≤ βhon.

To simplify the notation, let β = βhon. Let f(S) = |S| be a function that computes the cardinality of a set. Let P be the set of processes as a function of the challenger's bit b:

  P(b) = Pmal             if b = 0
  P(b) = Pmal ∪ Phon      if b = 1

It is helpful to think of P(0) as a database with one row and entry Pmal, and P(1) as a database with two rows and entries Pmal and Phon. Accordingly, f(P(b)) is a function that sums the entries in all rows of the databases. Since we are given that |Phon| ≤ β, the ℓ1-sensitivity of f(·) is bounded by β.

To begin, let us analyze the first half of DPRA. Assume the algorithm finishes after sampling the noise n, and the output is t (i.e., we are ignoring choosing the processes for now). Also, we will ignore the ceiling operator when computing n, since post-processing a differentially private function by rounding it up keeps it differentially private [32, Proposition 2.1]. So in what follows, n and therefore t are both real numbers. Call this algorithm M:

Algorithm M:
• Inputs: P(b), k
• n ← max(0, Lap(µ, s))
• Output: t ← |P(b)| + n

where µ and s are the location and scale parameters of the Laplace distribution (in DPRA they are functions of λ and β, but we will keep them abstract for now).

Theorem 2. M is (ε, δ)-differentially private for ε = β/s and δ = ∫_{−∞}^{β} Lap(w | µ, β/ε) dw. Specifically, for any subset of values L in the range [f(P(0)), ∞) of M:

  Pr[M(P(0), k) ∈ L] ≤ e^ε · Pr[M(P(1), k) ∈ L] + δ
and
  Pr[M(P(1), k) ∈ L] ≤ e^ε · Pr[M(P(0), k) ∈ L]
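Before turning to the proof, the mechanism M can be sketched as a short, self-contained Python program. The inverse-CDF Laplace sampler below is our own stdlib-only stand-in (the paper does not specify an implementation), and the parameter values are arbitrary illustrations:

```python
import math
import random

def sample_laplace(mu: float, scale: float, rng: random.Random) -> float:
    # Inverse-CDF sampling from Lap(mu, scale).
    u = rng.random()
    while u == 0.0:  # avoid log(0); occurs with probability ~2**-53
        u = rng.random()
    if u < 0.5:
        return mu + scale * math.log(2.0 * u)
    return mu - scale * math.log(2.0 * (1.0 - u))

def mechanism_m(num_processes: int, mu: float, scale: float,
                rng: random.Random) -> float:
    # Algorithm M: n <- max(0, Lap(mu, scale)); output t <- |P(b)| + n.
    # (The ceiling applied by the full DPRA is post-processing and is
    # omitted here, as in the proof.)
    n = max(0.0, sample_laplace(mu, scale, rng))
    return num_processes + n

rng = random.Random(2020)
t = mechanism_m(10, mu=5.0, scale=2.0, rng=rng)
assert t >= 10  # truncating the noise at 0 means t never undercounts |P(b)|
```

Note how the truncation max(0, ·) ensures that the noised count t only ever over-reports the number of requesting processes, which is why the theorem restricts attention to values in [f(P(0)), ∞).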
Proof. Let x = f (P(0)), y = f (P(1)).
We partition L into two sets: L1 = L ∩ [y, ∞) and L2 =
L − L1 = L ∩ [x, y).
Let d_P(b),k(·) be the probability density function for M's output when the sampled bit is b. For ease of notation, we will denote this function by d_b(·).

Values in L1 are easy to handle because M can produce these values regardless of whether the bit b is 0 or 1, and we are able to bound pointwise the ratio of the probability densities of producing each of these values because we choose a large enough scale parameter. Values in L2 can only be output by M if b = 0, and if such values are output, information about bit b would be leaked. Because we choose a large enough location parameter, we can show that Pr[M(P(0), k) ∈ [x, y)] ≤ δ. The theorem follows by combining these two cases. We first deal with L1.

Lemma 13. For any set L1 ⊆ [y, ∞): Pr[M(P(b), k) ∈ L1] ≤ e^ε · Pr[M(P(b̄), k) ∈ L1].

Proof. Recall the Laplace distribution with s = β/ε:

  Lap(ℓ | µ, β/ε) = (ε/2β) · exp(−ε|ℓ − µ| / β)

For all ℓ ∈ L1 we have that:

  d0(ℓ) = Lap(ℓ | µ + x, β/ε) = (ε/2β) · exp(−ε|ℓ − (µ + x)| / β)
  d1(ℓ) = Lap(ℓ | µ + y, β/ε) = (ε/2β) · exp(−ε|ℓ − (µ + y)| / β)

It follows that for all ℓ ∈ L1:

  d0(ℓ)/d1(ℓ) = exp(−ε|ℓ − (µ + x)| / β) / exp(−ε|ℓ − (µ + y)| / β)
              = exp((−ε|ℓ − (µ + x)| + ε|ℓ − (µ + y)|) / β)
              = exp(ε(|ℓ − (µ + y)| − |ℓ − (µ + x)|) / β)
              ≤ exp(ε|x − y| / β)    by triangle ineq.
              ≤ exp(ε)               by def. of ℓ1 sensitivity

A similar calculation bounds the ratio d1(ℓ)/d0(ℓ). For any particular value ℓ ∈ L1, we have thus shown that d0(ℓ) ≤ e^ε · d1(ℓ) and d1(ℓ) ≤ e^ε · d0(ℓ). Integrating each of these inequalities over the values in L1, we get

  Pr[M(P(0), k) ∈ L1] ≤ e^ε · Pr[M(P(1), k) ∈ L1]
and
  Pr[M(P(1), k) ∈ L1] ≤ e^ε · Pr[M(P(0), k) ∈ L1]

We now prove that Pr[M(P(0), k) ∈ [x, y)] ≤ δ.

Lemma 14. Pr[M(P(0), k) ≤ y] ≤ δ.

Proof.

  Pr[M(P(0), k) ≤ y] = Pr[x + n ≤ y]
                     ≤ Pr[x + n ≤ x + β]    since y − x ≤ β
                     = Pr[n ≤ β]
                     = Pr[Lap(µ, s) ≤ β]
                     = ∫_{−∞}^{β} Lap(w | µ, s) dw
                     = ∫_{−∞}^{β} Lap(w | µ, β/ε) dw
                     = δ

Finally, we can prove the theorem. For any set L:

  Pr[M(P(b), k) ∈ L] = Pr[M(P(b), k) ∈ L2] + Pr[M(P(b), k) ∈ L1]
                     ≤ δ + Pr[M(P(b), k) ∈ L1]
                     ≤ δ + e^ε · Pr[M(P(b̄), k) ∈ L1]
                     ≤ δ + e^ε · Pr[M(P(b̄), k) ∈ L]

(For the first inequality: when b = 1 the L2 term is zero since M(P(1), k) ≥ y, and when b = 0 it is at most δ by Lemma 14.)

The above shows that M is (ε, δ)-differentially private. Note that:

  δ = ∫_{−∞}^{β} Lap(w | µ, β/ε) dw
    = (1/2) · exp(ε(β − µ)/β)        if β < µ
    = 1 − (1/2) · exp(ε(µ − β)/β)    if β ≥ µ

If we set µ = β · h(λ) for h(λ) > 1, and s = β · g(λ), this gives us the desired values of ε = 1/g(λ) and δ = (1/2) · exp((1 − h(λ))/g(λ)).
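This closed form for δ can be sanity-checked numerically: with µ = β · h(λ) and s = β · g(λ) (so β < µ whenever h(λ) > 1), the Laplace CDF evaluated at β should equal (1/2) · exp((1 − h(λ))/g(λ)). A small self-contained check, using arbitrary illustrative values for β, h, and g:

```python
import math

def laplace_cdf(x: float, mu: float, scale: float) -> float:
    # CDF of Lap(mu, scale): the integral of its density from -inf to x.
    if x < mu:
        return 0.5 * math.exp((x - mu) / scale)
    return 1.0 - 0.5 * math.exp(-(x - mu) / scale)

beta, h, g = 50.0, 3.0, 4.0       # illustrative values; h > 1 as required
mu, scale = beta * h, beta * g    # location and scale used by DPRA

delta_integral = laplace_cdf(beta, mu, scale)   # integral form of delta
delta_closed = 0.5 * math.exp((1.0 - h) / g)    # closed form from the proof
assert math.isclose(delta_integral, delta_closed)
```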
We now show that the rest of DPRA (the uniform selection of processes) remains (ε, δ)-differentially private.

Let X be a random variable denoting the number of processes in Pmal that get the allocation. Since the adversary only learns which processes in Pmal were allocated the resource, from his point of view dummy processes and processes in Phon are indistinguishable. Thus, for each value ℓ ∈ [0, k]:

  Pr[X = ℓ | M(P(b), k) = t ∧ b = 0] = Pr[X = ℓ | M(P(b), k) = t ∧ b = 1]

Combined with the inequalities governing the probabilities that M outputs each value of t for b = 0 and b = 1 respectively, we have that Pr[X = ℓ | b = 0] ≤ e^ε · Pr[X = ℓ | b = 1] + δ, and similarly with the values of b exchanged. Thus the distribution of the number of malicious processes allocated is very close for b = 0 and b = 1.
Finally, since our allocator is symmetric (§IV-C), the actual
identity of the malicious processes allocated does not reveal
any more information about b than this number.
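To illustrate the selection step described above, here is a hypothetical sketch in which the t − |P| padding slots play the role of dummy processes and every slot is treated symmetrically. Representing dummies as None is our illustration, not the paper's implementation:

```python
import math
import random

def dpra_select(processes: list, t: float, k: int,
                rng: random.Random) -> list:
    # Pad the request set with dummies up to ceil(t) slots, then choose k
    # slots uniformly at random. Dummies (None) that win are simply
    # discarded, so an observer in Pmal cannot tell whether a non-malicious
    # winner was an honest process or a dummy.
    slots = list(processes) + [None] * max(0, math.ceil(t) - len(processes))
    winners = rng.sample(slots, min(k, len(slots)))
    return [p for p in winners if p is not None]

rng = random.Random(7)
allocated = dpra_select(["p1", "p2", "p3"], t=6.4, k=4, rng=rng)
assert set(allocated) <= {"p1", "p2", "p3"} and len(allocated) <= 4
```

Because the k winning slots are drawn uniformly, each real process wins with the same probability, which is the symmetry the last paragraph relies on.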
D. Proofs of other allocation properties
Lemma 15. Any resource allocator that achieves IT-Privacy
satisfies population monotonicity.
Proof. We prove the contrapositive. If RA fails to achieve population monotonicity, then there exist two processes pi and pj such that when pj stops requesting allocation, the probability that RA allocates pi a resource decreases. An adversary A can thus construct P in the security game such that pi ∈ Pmal and Phon = {pj}. As a result, Pr[pi ∈ Umal | b = 0] < Pr[pi ∈ Umal | b = 1], and RA fails to satisfy IT-Privacy.
Lemma 16. SRA and RRA satisfy population monotonicity.

This follows from the fact that SRA and RRA are IT-Private.

Lemma 17. DPRA satisfies population monotonicity.

Proof. Observe that the probability that a given process pi is allocated a resource is Pr[pi ∈ U] = min(t, k)/t, where t is drawn from |P| + n and n is the noise sampled from the Laplace distribution. When a process pj stops requesting service, we have t = (|P| − 1) + n. Since min(t, k)/t ≤ min(t − 1, k)/(t − 1), this implies that Pr[pi ∈ U] does not decrease as |P| decreases.

Lemma 18. SRA satisfies resource monotonicity.

Proof. When SRA's capacity increases from k to k + c (with c positive), the allocator can accommodate c more processes. With |M| fixed, this implies that the probability of a process getting allocated a resource increases by c/|M|.

Lemma 19. RRA satisfies resource monotonicity.

Proof. When RRA's capacity increases from k to k + c (with c positive), the allocator can accommodate at most c more processes. With βP fixed, this implies that the probability of a process getting allocated a resource increases by c/βP.

Lemma 20. DPRA satisfies resource monotonicity.

Proof. When DPRA's capacity increases from k to k + c (with c positive), the allocator is able to accommodate at most c more processes. Although the capacity of the allocator is increasing, the distribution Lap(βhon/ε) remains constant. Recall that the probability that a given process pi is allocated a resource is Pr[pi ∈ U] = min(t, k)/t, where t is drawn from |P| + n and n is the noise sampled from the Laplace distribution. Since min(t, k) ≤ min(t, k + c) for all t and k, we have that Pr[pi ∈ U] does not decrease as k increases.

Lemma 21. SRA satisfies envy-freeness.

Proof. SRA assigns every process pi ∈ M a unique allocation slot in [0, |M|), giving each process a constant k/|M| probability of being allocated a resource for a uniformly random r ∈ N≥0. Since Pr[pi ∈ U] = Pr[pj ∈ U] for all pi, pj ∈ M, it follows that SRA satisfies envy-freeness.

Lemma 22. RRA satisfies envy-freeness.

Proof. RRA pads P up to βP each round with dummy processes, so that each process pi ∈ P has probability 1/βP of being allocated a resource. Since Pr[pi ∈ U] = Pr[pj ∈ U] for all pi, pj ∈ M, it follows that RRA satisfies envy-freeness.

Lemma 23. DPRA satisfies envy-freeness.

Proof. DPRA samples k random processes from P ∪ U, where U consists of dummy processes added by sampling the distribution Lap(βhon, ε). Assuming |P| > k, for all p ∈ P it follows that Pr[p ∈ U] = k/(|P| + |U|). Since Pr[pi ∈ U] = Pr[pj ∈ U] for all pi, pj ∈ M, it follows that DPRA satisfies envy-freeness.
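The elementary inequalities used in the proofs of Lemmas 17 and 20 — that min(t, k)/t does not decrease when t shrinks by one, and does not decrease when k grows — can be checked exhaustively over a small grid. This is a sanity check of the arithmetic, not part of the proofs:

```python
def alloc_prob(t: int, k: int) -> float:
    # Probability that a fixed process is allocated: min(t, k)/t.
    return min(t, k) / t

for k in range(1, 25):
    for t in range(2, 200):
        # Lemma 17 (population monotonicity): one fewer requester never hurts.
        assert alloc_prob(t, k) <= alloc_prob(t - 1, k)

for k in range(1, 25):
    for c in range(0, 10):
        for t in range(1, 200):
            # Lemma 20 (resource monotonicity): extra capacity never hurts.
            assert alloc_prob(t, k) <= alloc_prob(t, k + c)
```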