
On-the-fly pipeline parallelism

Published: 23 July 2013

Abstract

Pipeline parallelism organizes a parallel program as a linear sequence of s stages. Each stage processes elements of a data stream, passing each processed data element to the next stage, and then taking on a new element before the subsequent stages have necessarily completed their processing. Pipeline parallelism is used especially in streaming applications that perform video, audio, and digital signal processing. Three out of 13 benchmarks in PARSEC, a popular software benchmark suite designed for shared-memory multiprocessors, can be expressed using pipeline parallelism.
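To make the pattern concrete, here is a minimal sketch in C++ (illustrative only, not the paper's Cilk-P code) of a static three-stage pipeline: a serial producer, a transform stage, and a serial consumer run as three threads connected by thread-safe queues, so stages of different iterations overlap in time. The Channel class and the squaring "transform" are invented for the example.

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <optional>
    #include <queue>
    #include <thread>

    // An unbounded thread-safe queue serving as the channel between stages.
    template <typename T>
    class Channel {
        std::queue<T> q_;
        std::mutex m_;
        std::condition_variable cv_;
        bool closed_ = false;
    public:
        void push(T v) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
            cv_.notify_one();
        }
        void close() {  // signal that no more elements will arrive
            { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
            cv_.notify_all();
        }
        std::optional<T> pop() {  // blocks; empty optional means end of stream
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return !q_.empty() || closed_; });
            if (q_.empty()) return std::nullopt;
            T v = std::move(q_.front());
            q_.pop();
            return v;
        }
    };

    int main() {
        Channel<int> c01, c12;  // channels: stage 0 -> stage 1 -> stage 2
        std::thread s0([&] {    // stage 0 (serial): produce the input stream
            for (int i = 1; i <= 10; ++i) c01.push(i);
            c01.close();
        });
        std::thread s1([&] {    // stage 1: transform each element as it arrives
            while (auto v = c01.pop()) c12.push(*v * *v);
            c12.close();
        });
        std::thread s2([&] {    // stage 2 (serial): consume results in order
            while (auto v = c12.pop()) std::cout << *v << '\n';
        });
        s0.join(); s1.join(); s2.join();
        return 0;
    }

Note that the unbounded channels let the producer run arbitrarily far ahead of the consumer; this is exactly the "runaway pipeline" behavior that, per the abstract, Piper's automatic throttling precludes.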
Whereas most concurrency platforms that support pipeline parallelism use a "construct-and-run" approach, this paper investigates "on-the-fly" pipeline parallelism, where the structure of the pipeline emerges as the program executes rather than being specified a priori. On-the-fly pipeline parallelism allows the number of stages to vary from iteration to iteration and dependencies to be data dependent. We propose simple linguistics for specifying on-the-fly pipeline parallelism and describe a provably efficient scheduling algorithm, the Piper algorithm, which integrates pipeline parallelism into a work-stealing scheduler, allowing pipeline and fork-join parallelism to be arbitrarily nested. The Piper algorithm automatically throttles the parallelism, precluding "runaway" pipelines. Given a pipeline computation with T_1 work and T_∞ span (critical-path length), Piper executes the computation on P processors in T_P ≤ T_1/P + O(T_∞ + lg P) expected time. Piper also limits stack space, ensuring that it does not grow unboundedly with running time.
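Spelled out in work/span notation, with T_P denoting the running time on P processors and the expectation taken over the scheduler's random choices, the bound reads:

    \[
      \mathbb{E}[T_P] \;\le\; \frac{T_1}{P} + O(T_\infty + \lg P).
    \]

When the computation has ample parallelism, i.e., when T_1/T_∞ greatly exceeds P, the additive O(T_∞ + lg P) term is lower-order and the bound yields near-linear speedup; it matches the classic work-stealing bound of Blumofe and Leiserson up to the additive lg P term.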
We have incorporated on-the-fly pipeline parallelism into a Cilk-based work-stealing runtime system. Our prototype Cilk-P implementation exploits optimizations such as lazy enabling and dependency folding. We have ported the three PARSEC benchmarks that exhibit pipeline parallelism to run on Cilk-P. One of these, x264, cannot readily be executed by systems that support only construct-and-run pipeline parallelism. Benchmark results indicate that Cilk-P has low serial overhead and good scalability. On x264, for example, Cilk-P exhibits a speedup of 13.87 over its serial counterpart when running on 16 processors.
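For scale, a 13.87× speedup on 16 processors corresponds to a parallel efficiency of 13.87/16 ≈ 0.87, i.e., roughly 87%.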



Published In

SPAA '13: Proceedings of the twenty-fifth annual ACM symposium on Parallelism in algorithms and architectures
July 2013
348 pages
ISBN: 9781450315722
DOI: 10.1145/2486159
Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. cilk
  2. multicore
  3. multithreading
  4. on-the-fly pipelining
  5. parallel programming
  6. pipeline parallelism
  7. scheduling
  8. work stealing

Qualifiers

  • Research-article

Conference

SPAA '13

Acceptance Rates

SPAA '13 Paper Acceptance Rate: 31 of 130 submissions, 24%
Overall Acceptance Rate: 447 of 1,461 submissions, 31%

