DOI: 10.1145/2141702.2141711

Function flow: making synchronization easier in task parallelism

Published: 26 February 2012

Abstract

Expressing synchronization in task parallelism remains a significant challenge because of the complicated relationships between tasks. In this paper, we propose a novel parallel programming model, function flow, in which synchronization is easier to express. We relieve the burden of synchronization by virtue of parallel functions and functional waits. In function flow, parallel functions are defined to represent parallel tasks, and functional waits coordinate the relationships among parallel functions at the task level. The core of a functional wait is a boolean expression coupled with invocations of parallel functions. Built on parallel functions, the functional wait mechanism provides powerful semantic support and compile-time checking of the relationships between parallel tasks. Our preliminary results show that a wide range of realistic parallel programs can be easily expressed in function flow, with performance close to that of well-tuned multi-threaded programs using barriers/sync, and the overhead of task-level coordination is very low, not exceeding 8%.
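
This page does not reproduce the model's concrete syntax. As a rough, hypothetical illustration of the underlying idea (running a downstream task only after specific parallel-function invocations complete), the sketch below uses plain C++11 std::async/std::future rather than the function flow API; the names produce and consume are invented for the example.

// Hypothetical sketch only: NOT the function flow API described in the paper.
// It approximates "wait until both parallel-function invocations complete"
// using standard C++11 futures.
#include <future>
#include <iostream>

int produce(int x) { return x * x; }                       // a "parallel function": an ordinary function run as a task
void consume(int a, int b) { std::cout << a + b << "\n"; } // depends on both invocations of produce

int main() {
    auto fa = std::async(std::launch::async, produce, 3);  // invoke produce(3) as a task
    auto fb = std::async(std::launch::async, produce, 4);  // invoke produce(4) as a task

    // Rough analogue of a functional wait: proceed only when BOTH invocations
    // have finished (a conjunction over task completions), then run consume.
    int a = fa.get();
    int b = fb.get();
    consume(a, b);
    return 0;
}

In the paper's model this conjunction is expressed declaratively as a boolean expression over the invocations and checked at compile time; in the sketch above it is only encoded implicitly by the order of the get() calls.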


Cited By

  • (2019) FunctionFlow. Frontiers of Computer Science: Selected Publications from Chinese Universities 13(1):73-85. DOI: 10.1007/s11704-016-6286-8. Online publication date: 1-Feb-2019.
  • (2016) High-performance computing environment: a review of twenty years of experiments in China. National Science Review 3(1):36-48. DOI: 10.1093/nsr/nww001. Online publication date: 20-Jan-2016.
  • (2014) An Adaptive Task Granularity Based Scheduling for Task-centric Parallelism. Proceedings of the 2014 IEEE Intl Conf on High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC, CSS, ICESS), pages 165-172. DOI: 10.1109/HPCC.2014.32. Online publication date: 20-Aug-2014.

Published In

PMAM '12: Proceedings of the 2012 International Workshop on Programming Models and Applications for Multicores and Manycores
February 2012
180 pages
ISBN:9781450312110
DOI:10.1145/2141702
Conference Chairs: Minyi Guo, Zhiyi Huang

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 26 February 2012


Author Tags

  1. function flow
  2. functional wait
  3. parallel programming
  4. synchronization

Qualifiers

  • Research-article

Conference

PPoPP '12

Acceptance Rates

Overall Acceptance Rate 53 of 97 submissions, 55%

Article Metrics

  • Downloads (Last 12 months)4
  • Downloads (Last 6 weeks)1
Reflects downloads up to 18 Aug 2024
