While the smart card and quota manager designs are focused on enforcing quotas, an alternative approach is to require nodes to maintain their own records and publish them, such that other nodes can audit those records. Of course, nodes have no inherent reason to publish their records accurately.
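This publish-and-audit idea can be made concrete with a small sketch. The structures below are our own simplification, not the paper's actual data structures; the names UsageRecord, local_list, remote_list, and audit are hypothetical. An auditor fetches a node's published remote list (files it claims to have stored elsewhere) and cross-checks each entry against the local list published by the host it names.

```python
from dataclasses import dataclass, field

@dataclass
class UsageRecord:
    """Published accounting records for one node (hypothetical structure).

    local_list:  files this node stores on behalf of others, as (owner_id, file_id).
    remote_list: files this node has published elsewhere, as (host_id, file_id).
    """
    node_id: str
    local_list: set = field(default_factory=set)
    remote_list: set = field(default_factory=set)

def audit(target: UsageRecord, records: dict) -> list:
    """Cross-check a target node's remote list against the hosts' local lists.

    `records` maps node_id -> UsageRecord as published by each node.
    Returns the remote-list entries that no host corroborates, which would
    suggest the target is misreporting its usage.
    """
    discrepancies = []
    for host_id, file_id in target.remote_list:
        host = records.get(host_id)
        if host is None or (target.node_id, file_id) not in host.local_list:
            discrepancies.append((host_id, file_id))
    return discrepancies
```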
In our simulations, we measure only the bandwidth consumed due to storage accounting. In particular, we exclude the cost of p2p overlay maintenance and storing/fetching of files, since it is not relevant to our comparison. Unless otherwise specified, all simulations are done with 10,000 nodes, 285 files stored per node, and an average node lifetime of 14 days.
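For reference, the default parameters from the previous paragraph can be collected into a single configuration object along these lines (a minimal sketch; the field names are our own, only the values come from the text).

```python
from dataclasses import dataclass

@dataclass
class SimulationConfig:
    """Default simulation parameters, used unless otherwise specified."""
    num_nodes: int = 10_000              # size of the overlay network
    files_per_node: int = 285            # average files stored per node
    avg_node_lifetime_days: float = 14.0 # average node lifetime
```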
4.1 Results
Figure 3 shows the average upstream bandwidth required per node, as a function of the number of nodes (the average required downstream bandwidth is identical). The per-node bandwidth requirement is almost constant, and thus all systems scale well with the size of the overlay network.

Figure 3: Overhead with different number of nodes.
Figure 4 shows the bandwidth requirement as a function of the number of files stored per node. The overheads grow linearly with the number of files, but for auditing without caching the overhead grows nearly twice as fast as in the other two designs. Since p2p storage systems are typically used to store large files, this overhead is not a concern. Also, the system could charge for an appropriate minimum file size to give users an incentive to combine small files into larger archives prior to storing them.

Figure 4: Overhead with different number of files stored per node.
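The minimum file-size charge suggested above amounts to a one-line accounting rule, sketched below. The 4 MB floor and the name charged_size are arbitrary illustrative choices, not values from the paper.

```python
MIN_CHARGED_SIZE = 4 * 1024 * 1024  # hypothetical 4 MB floor; the paper fixes no value

def charged_size(actual_size: int, minimum: int = MIN_CHARGED_SIZE) -> int:
    """Charge quota per file as at least `minimum` bytes, so that publishing many
    tiny files costs at least as much as one archive holding the same data."""
    return max(actual_size, minimum)

# Example: 100 files of 10 KB each are charged 100 * 4 MB = 400 MB of quota,
# while a single archive of the same data is charged only 4 MB.
```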
Figure 5 shows the overhead versus average node lifetime. The overhead for quota managers grows rapidly when the node lifetime gets shorter, mostly from the cost of joining and leaving manager sets and from voting on file insertions for new nodes.

Figure 5: Overhead with different average node lifetime.

Our simulations have also shown that quota managers are more affected by the file turnover rate, due to the higher cost of voting. Also, the size of the manager sets determines the vulnerability of the quota manager design: to tolerate more malicious nodes, we need to increase the size of the manager sets, which would result in a higher cost.
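To make this trade-off concrete, the sketch below estimates the probability that a randomly chosen manager set of size m contains a malicious majority, given a fraction f of malicious nodes. This is our own illustrative model (it assumes managers are sampled independently), not an analysis from the paper; larger sets drive this probability down, but raise the joining, leaving, and voting costs discussed above.

```python
from math import comb

def p_compromised(m: int, f: float) -> float:
    """Probability that at least a majority of m independently chosen managers
    are malicious, when a fraction f of all nodes is malicious (illustrative model)."""
    threshold = m // 2 + 1  # smallest majority of the set
    return sum(comb(m, k) * f**k * (1 - f)**(m - k) for k in range(threshold, m + 1))

if __name__ == "__main__":
    # With 10% malicious nodes, growing the manager set sharply reduces the risk.
    for m in (5, 10, 20):
        print(f"set size {m}: {p_compromised(m, 0.1):.6f}")
```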
In summary, auditing with caching has performance comparable to quota managers, but is not subject to bribery attacks and is less sensitive to the fraction of malicious nodes. Furthermore, in a variety of conditions, the auditing overhead is quite low: only a fraction of a typical p2p node's bandwidth.
5 Related Work

Tangler [15] is designed to provide censorship-resistant publication over a small number of servers (i.e., 30), exchanging data frequently with one another. To maintain fairness, Tangler requires servers to obtain "certificates" from other servers, which can be redeemed to publish files for a limited time. A new server can only obtain these certificates by providing storage for the use of other servers and is not allowed to publish anything for its first month online. As such, new servers must have demonstrated good service to the p2p network before being allowed to consume any network services.

The Eternity Service [2] includes an explicit notion of electronic cash, with which users can purchase storage space. Once published, a document cannot be deleted, even if requested by the publisher.

Fehr and Gachter's study considered an economic game where selfishness was feasible but could easily be detected [8]. When their human test subjects were given the opportunity to spend their money to punish selfish peers, they did so, resulting in a system with less selfish behavior. This result helps justify our assumption that users will be willing to pay the costs of random audits.

6 Conclusions

This paper has presented two architectures for achieving fair sharing of resources in p2p networks. Experimental results indicate small overheads and scalability to large numbers of files and nodes. In practice, auditing provides the necessary incentives and allows us to benefit from its increased resistance to collusion and bribery attacks.

Acknowledgments

We thank Moez A. Abdel-Gawad, Shu Du, and Khaled Elmeleegy for their work on an earlier version of the quota managers design. We also thank Andrew Fuqua and Hervé Moulin for helpful discussions on economic incentives. This research was supported by NSF grant CCR-9985332, Texas ATP grants 003604-0053-2001 and 003604-0079-2001, and a gift from Microsoft Research.

References

[1] E. Adar and B. Huberman. Free riding on Gnutella. First Monday, 5(10), Oct. 2000.

[2] R. Anderson. The Eternity Service. In Proc. 1st Int'l Conf. on the Theory and Applications of Cryptology, pages 242-252, Prague, Czech Republic, Oct. 1996.

[3] M. Castro, P. Druschel, A. Ganesh, A. Rowstron, and D. S. Wallach. Security for structured peer-to-peer overlay networks. In Proc. OSDI'02, Boston, MA, Dec. 2002.

[4] M. Castro and B. Liskov. Practical Byzantine fault tolerance. In Proc. OSDI'99, New Orleans, LA, Feb. 1999.

[5] B. F. Cooper and H. Garcia-Molina. Bidding for storage space in a peer-to-peer data preservation system. In Proc. 22nd Int'l Conf. on Distributed Computing Systems, Vienna, Austria, July 2002.

[6] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, and I. Stoica. Wide-area cooperative storage with CFS. In Proc. SOSP'01, Chateau Lake Louise, Banff, Canada, Oct. 2001.

[7] P. Druschel and A. Rowstron. PAST: A large-scale, persistent peer-to-peer storage utility. In Proc. 8th Workshop on Hot Topics in Operating Systems, Schloss Elmau, Germany, May 2001.

[8] E. Fehr and S. Gachter. Altruistic punishment in humans. Nature, 415:137-140, Jan. 2002.

[9] J. Feigenbaum and S. Shenker. Distributed algorithmic mechanism design: Recent results and future directions. In Proc. 6th Int'l Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, pages 1-13, Atlanta, GA, Sept. 2002.

[10] P. Maymounkov and D. Mazières. Kademlia: A peer-to-peer information system based on the XOR metric. In Proc. IPTPS'02, Cambridge, MA, Mar. 2002.

[11] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A scalable content-addressable network. In Proc. SIGCOMM'01, pages 161-172, San Diego, CA, Aug. 2001.

[12] M. K. Reiter and A. D. Rubin. Crowds: Anonymity for web transactions. ACM Transactions on Information and System Security, 1(1):66-92, 1998.

[13] A. Rowstron and P. Druschel. Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems. In Proc. IFIP/ACM Int'l Conf. on Distributed Systems Platforms, pages 329-350, Heidelberg, Germany, Nov. 2001.

[14] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan. Chord: A scalable peer-to-peer lookup service for Internet applications. In Proc. SIGCOMM'01, San Diego, CA, Aug. 2001.

[15] M. Waldman and D. Mazières. Tangler: A censorship-resistant publishing system based on document entanglements. In Proc. 8th ACM Conf. on Computer and Communications Security, Nov. 2001.

[16] B. Y. Zhao, J. D. Kubiatowicz, and A. D. Joseph. Tapestry: An infrastructure for fault-tolerant wide-area location and routing. Technical Report UCB//CSD-01-1141, U. C. Berkeley, Apr. 2001.