DOI: 10.1145/1378533.1378593

Tight competitive ratios for parallel disk prefetching and caching

Published: 14 June 2008

Abstract

We consider the natural extension of the well-known single-disk caching problem to the parallel disk I/O model (PDM) [17]. The main challenge is to achieve as much parallelism as possible and avoid I/O bottlenecks. We are given a fast memory (cache) of M blocks along with a request sequence Σ = (b1, b2, ..., bn), where each block bi resides on one of D disks. In each parallel I/O step, at most one block can be fetched from each disk. The task is to serve Σ in the minimum number of parallel I/Os. Thus, each I/O is analogous to a page fault. The difference here is that during each page fault, up to D blocks can be brought into memory, as long as all of the new blocks entering the memory reside on different disks. The problem has a long history [18, 12, 13, 26]. Note that this problem is non-trivial even if all requests in Σ are unique; this restricted version is called read-once. Despite progress on the offline version [13, 15] and the read-once version [12], the general online problem has remained open. Here, we provide comprehensive results with a full general solution for the problem with asymptotically tight competitive ratios.
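To make the cost model concrete, the following sketch (our illustration, not code from the paper) serves a request sequence with pure demand fetching and LRU eviction, charging one parallel I/O step per fault. The function name and the example sequence are hypothetical; it only shows how, without prefetching, each step carries a single block even though the model allows one block from every disk.

    # Minimal sketch of the PDM cost model described above (assumption: our own
    # illustration, not the paper's code): a cache of M blocks, requests to
    # blocks living on D disks, one parallel I/O step charged per fault.
    from collections import OrderedDict

    def demand_lru_steps(sigma, M):
        """Count parallel I/O steps for demand fetching with LRU eviction.

        sigma: request sequence of (block_id, disk_id) pairs.
        M:     cache capacity in blocks.
        """
        cache = OrderedDict()              # block_id -> disk_id, in LRU order
        steps = 0
        for block, disk in sigma:
            if block in cache:
                cache.move_to_end(block)   # hit: refresh LRU position
                continue
            steps += 1                     # fault: one parallel I/O step
            if len(cache) >= M:
                cache.popitem(last=False)  # evict the least recently used block
            cache[block] = disk
        return steps

    # A read-once sequence striped over D = 3 disks (hypothetical example data).
    sigma = [("a", 0), ("b", 1), ("c", 2), ("d", 0), ("e", 1), ("f", 2)]
    print(demand_lru_steps(sigma, M=2))    # 6 steps; a prefetching schedule needs only 3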
To exploit parallelism, any parallel disk algorithm needs a certain amount of lookahead into future requests. To provide effective caching, an online algorithm must achieve an o(D) competitive ratio. We show a lower bound stating that, for lookahead L ≤ M, any online algorithm must be Ω(D)-competitive. For lookahead L greater than M(1+1/ε), where ε is a constant, the tight upper bound of O(√(MD/L)) on the competitive ratio is achieved by our algorithm SKEW. The previous algorithm tLRU [26] was O((MD/L)^(2/3))-competitive, and this was also shown to be tight [26] for an LRU-based strategy. We achieve the tight ratio using a fairly different strategy than LRU. We also show tight results for randomized algorithms against an oblivious adversary, and give an algorithm achieving better bounds in the resource augmentation model.
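The role of lookahead can be illustrated with a simple greedy prefetcher: on a fault it also fetches, from the next L requests, the earliest missing block on each other disk, so a single parallel I/O step carries up to D blocks. This is only a hedged sketch of why lookahead enables parallelism; it is not the paper's SKEW algorithm, the eviction rule and example data are our own, and it carries no competitive-ratio guarantee.

    # Hedged sketch of lookahead-driven prefetching in the PDM (assumption: our
    # own greedy rule, NOT the SKEW algorithm from the paper).  On a fault it
    # also prefetches, from the lookahead window, the earliest missing block on
    # every other disk, so one parallel I/O step carries up to D blocks.

    def greedy_prefetch_steps(sigma, M, D, L):
        """Count parallel I/O steps with a lookahead window of L requests.

        sigma: request sequence of (block_id, disk_id) pairs.
        M: cache size in blocks, D: number of disks, L: lookahead window.
        """
        cache = set()
        steps = 0
        for i, (block, disk) in enumerate(sigma):
            if block in cache:
                continue
            window = sigma[i:i + L]
            # Build one parallel batch: at most one block per disk, and never
            # more blocks than the cache can hold.
            batch = {disk: block}
            for b, d in window:
                if len(batch) == min(D, M):
                    break
                if d not in batch and b not in cache:
                    batch[d] = b               # earliest missing block on disk d
            # Evict cached blocks whose next use within the window is furthest away.
            upcoming = [b for b, _ in window]
            def next_use(b):
                return upcoming.index(b) if b in upcoming else len(upcoming)
            while len(cache) + len(batch) > M:
                cache.remove(max(cache, key=next_use))
            cache.update(batch.values())
            steps += 1                         # one parallel I/O step per batch
        return steps

    # Same hypothetical read-once sequence: lookahead lets each step span disks.
    sigma = [("a", 0), ("b", 1), ("c", 2), ("d", 0), ("e", 1), ("f", 2)]
    print(greedy_prefetch_steps(sigma, M=3, D=3, L=6))   # 2 steps instead of 6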

References

[1] D. D. Sleator and R. E. Tarjan. Amortized efficiency of the list update and paging rules. Communications of the ACM, 28:202--208, 1985.
[2] N. Young. Competitive paging and dual-guided on-line weighted caching and matching algorithms. Ph.D. thesis, Princeton University, 1991. Available as CS Tech. Report CS-TR-348-91.
[3] L. A. Belady. A study of replacement algorithms for virtual storage computers. IBM Systems Journal, 5:78--101, 1966.
[4] A. R. Karlin, M. S. Manasse, L. Rudolph and D. D. Sleator. Competitive snoopy caching. Algorithmica, 3(1):79--119, 1988.
[5] P. Cao, E. W. Felten, A. R. Karlin and K. Li. A study of integrated prefetching and caching strategies. In Proc. of the Joint Intl. Conf. on Measurement and Modeling of Computer Systems, 188--197, ACM Press, May 1995.
[6] S. Albers. On the influence of lookahead in competitive paging algorithms. Algorithmica, 18(3):283--305, 1997.
[7] A. Fiat, R. Karp, M. Luby, L. McGeoch, D. D. Sleator and N. E. Young. Competitive paging algorithms. Journal of Algorithms, 12(4):685--699, Dec. 1991.
[8] A. Aggarwal and J. S. Vitter. The input/output complexity of sorting and related problems. Communications of the ACM, 31(9):1116--1127, Sept. 1988.
[9] S. Albers, N. Garg and S. Leonardi. Minimizing stall time in single and parallel disk systems. In Proc. of 30th Annual ACM Symp. on Theory of Computing (STOC 98), 454--462, 1998.
[10] J. S. Vitter and E. A. M. Shriver. Optimal algorithms for parallel memory I: Two-level memories. Algorithmica, 12(2--3):110--147, 1994.
[11] P. J. Varman and R. M. Verma. Tight bounds for prefetching and buffer management algorithms for parallel I/O systems. IEEE Trans. on Parallel and Distributed Systems, 10:1262--1275, Dec. 1999.
[12] M. Kallahalla and P. J. Varman. Optimal read-once parallel disk scheduling. In Proc. of Sixth ACM Workshop on I/O in Parallel and Distributed Systems, 68--77, 1999.
[13] M. Kallahalla and P. J. Varman. Optimal prefetching and caching for parallel I/O systems. In Symposium on Parallelism in Algorithms and Architectures (SPAA), 2001.
[14] R. D. Barve, E. F. Grove and J. S. Vitter. Simple randomized mergesort on parallel disks. Parallel Computing, 23(4):601--631, June 1996.
[15] D. A. Hutchinson, P. Sanders and J. S. Vitter. Duality between prefetching and queued writing with application to integrated caching and prefetching and to external sorting. In European Symposium on Algorithms (ESA 2001), LNCS 2161, 2001.
[16] J. S. Vitter and D. A. Hutchinson. Distribution sort with randomized cycling. In Symposium on Discrete Algorithms (SODA), 77--86, 2001.
[17] J. S. Vitter. External memory algorithms and data structures: Dealing with massive data. ACM Computing Surveys, 33(2):209--271, June 2001.
[18] R. Barve, M. Kallahalla, P. J. Varman and J. S. Vitter. Competitive parallel disk prefetching and buffer management. In Proc. of Fifth Workshop on I/O in Parallel and Distributed Systems, 47--56, November 1997.
[19] A. Roy. Prefetching and caching with lookahead. Manuscript, 2001.
[20] M. Kallahalla and P. J. Varman. Red-black prefetching: An approximation algorithm for parallel disk scheduling. In Foundations of Software Technology and Theoretical Computer Science, LNCS 1530, 66--77, December 1998.
[21] D. Breslauer. On competitive online paging with lookahead. Theoretical Computer Science, 209(1--2):365--375, 1998.
[22] T. Kimbrel and A. R. Karlin. Near optimal parallel prefetching and caching. In Foundations of Computer Science (FOCS), 540--549, 1996.
[23] S. Albers and M. Buttner. Integrated prefetching and caching in single and parallel disk systems. In Symposium on Parallelism in Algorithms and Architectures (SPAA), 109--117, 2003.
[24] S. Albers and C. Witt. Minimizing stall time in single and parallel disk systems using multicommodity network flows. In RANDOM-APPROX, 2001.
[25] T. Kimbrel, P. Cao, E. W. Felten, A. R. Karlin and K. Li. Integrated parallel prefetching and caching. In Conference on Measurement and Modeling of Computer Systems (SIGMETRICS), 1996.
[26] R. Shah, P. J. Varman and J. S. Vitter. Online algorithms for caching and prefetching on parallel disks. In Symposium on Parallelism in Algorithms and Architectures (SPAA), 2004.
[27] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.

Cited By

• (2008) Algorithms and data structures for external memory. Foundations and Trends in Theoretical Computer Science, 2(4):305--474. https://doi.org/10.1561/0400000014. Online publication date: 1-Jan-2008.


    Published In

    SPAA '08: Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures
    June 2008
    380 pages
    ISBN:9781595939739
    DOI:10.1145/1378533
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 14 June 2008


    Author Tags

    1. competitive analysis
    2. online algorithms
    3. parallel disk model

    Qualifiers

    • Research-article

    Conference

    SPAA08

    Acceptance Rates

    Overall Acceptance Rate 447 of 1,461 submissions, 31%
