Efficient URL caching for world wide web crawling

AZ Broder, M Najork, JL Wiener - Proceedings of the 12th international conference on World Wide Web, 2003 - dl.acm.org
Crawling the web is deceptively simple: the basic algorithm is (a) Fetch a page (b) Parse it to extract all linked URLs (c) For all the URLs not seen before, repeat (a)-(c). However, the size of the web (estimated at over 4 billion pages) and its rate of change (estimated at 7% per week) move this plan from a trivial programming exercise to a serious algorithmic and system design challenge. Indeed, these two factors alone imply that for a reasonably fresh and complete crawl of the web, step (a) must be executed about a thousand times per second, and thus the membership test (c) must be done well over ten thousand times per second against a set too large to store in main memory. This requires a distributed architecture, which further complicates the membership test.

A crucial way to speed up the test is to cache, that is, to store in main memory a (dynamic) subset of the "seen" URLs. The main goal of this paper is to carefully investigate several URL caching techniques for web crawling. We consider both practical algorithms: random replacement, static cache, LRU, and CLOCK, and theoretical limits: clairvoyant caching and infinite cache. We performed about 1,800 simulations using these algorithms with various cache sizes, using actual log data extracted from a massive 33 day web crawl that issued over one billion HTTP requests.

Our main conclusion is that caching is very effective - in our setup, a cache of roughly 50,000 entries can achieve a hit rate of almost 80%. Interestingly, this cache size falls at a critical point: a substantially smaller cache is much less effective while a substantially larger cache brings little additional benefit. We conjecture that such critical points are inherent to our problem and venture an explanation for this phenomenon.
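The crawl loop and the cached membership test described in the abstract can be illustrated with a short sketch. The Python below is not the authors' implementation: it pairs a small LRU cache (one of the practical policies the paper studies) with a stand-in for the large, authoritative "seen" set that would normally live outside main memory. The toy link graph, cache size, seed URL, and helper names are illustrative assumptions, not values or code from the paper.

```python
# A minimal sketch, assuming an LRU policy for the in-memory cache of "seen"
# URLs and an in-memory set standing in for the slow, complete "seen" store.
from collections import OrderedDict, deque


class LRUSeenCache:
    """Fixed-size LRU cache over URLs that have already been enqueued."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # URL -> None, ordered by recency

    def contains(self, url):
        if url in self.entries:
            self.entries.move_to_end(url)  # refresh recency on a hit
            return True
        return False

    def add(self, url):
        self.entries[url] = None
        self.entries.move_to_end(url)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used


def crawl(seed, fetch_links, cache_capacity=8):
    """Basic crawl: fetch, parse, and enqueue URLs not seen before."""
    cache = LRUSeenCache(cache_capacity)
    seen_store = set()  # stand-in for the disk-resident authoritative set
    frontier = deque([seed])
    seen_store.add(seed)
    cache.add(seed)

    while frontier:
        url = frontier.popleft()
        for link in fetch_links(url):      # steps (a) and (b)
            if cache.contains(link):       # fast in-memory membership test
                continue
            cache.add(link)
            if link not in seen_store:     # slow, authoritative test on a miss
                seen_store.add(link)
                frontier.append(link)      # step (c): crawl it later
    return seen_store


if __name__ == "__main__":
    # Toy "web" used only to make the sketch runnable.
    toy_web = {
        "a": ["b", "c"], "b": ["a", "d"], "c": ["d", "e"],
        "d": ["a"], "e": ["b", "f"], "f": [],
    }
    pages = crawl("a", lambda u: toy_web.get(u, []))
    print(sorted(pages))  # -> ['a', 'b', 'c', 'd', 'e', 'f']
```

The point of the cache is that the expensive authoritative lookup only runs on a cache miss; the paper's finding that a modest cache already absorbs most membership tests is what makes this layering worthwhile.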