Dynamic program parallelization
Abstract
Static program analysis limits the performance improvements possible from compile-time parallelization. Dynamic program parallelization shifts a portion of the analysis from compile time to run time, thereby enabling optimizations whose static detection is overly expensive or impossible. Lambda tagging and heap resolution are two new techniques for finding loop and non-loop parallelism in imperative, sequential languages with first-class procedures and destructive heap operations (e.g., ML and Scheme).
Lambda tagging annotates procedures during compilation with a tag that describes the side effects that a procedure's application may cause. During program execution, the program refines and examines tags to identify computations that may safely execute in parallel. Heap resolution uses reference counts to dynamically detect potential heap aliases and to coordinate parallel access to shared structures. An implementation of lambda tagging and heap resolution in an optimizing ML compiler for a shared memory parallel computer demonstrates that the overhead incurred by these run-time methods is easily offset by dynamically-exposed parallelism and that non-trivial procedures can be automatically parallelized with these techniques.
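The two mechanisms can be sketched in miniature. The sketch below is illustrative only: the tag names (`PURE`, `READS`, `WRITES`), the commutation rule, and the `RefCell` class are assumptions standing in for the paper's actual compiler machinery, and Python stands in for ML.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical effect tags; the real analysis is richer than three values.
PURE, READS, WRITES = "pure", "reads", "writes"

class Tagged:
    """A procedure annotated (as if by the compiler) with an effect tag."""
    def __init__(self, fn, tag):
        self.fn, self.tag = fn, tag

    def __call__(self, *args):
        return self.fn(*args)

def commute(f, g):
    """Tag examination: may applications of f and g safely run in parallel?"""
    if WRITES not in (f.tag, g.tag):
        return True                     # at worst two reads, which commute
    return PURE in (f.tag, g.tag)       # a write commutes only with a pure call

def par_apply(f, x, g, y):
    """Run two tagged applications in parallel when their tags permit it."""
    if commute(f, g):
        with ThreadPoolExecutor(max_workers=2) as pool:
            fa, fb = pool.submit(f, x), pool.submit(g, y)
            return fa.result(), fb.result()
    return f(x), g(y)                   # otherwise preserve sequential order

class RefCell:
    """Heap-resolution sketch: a cell carrying an explicit reference count."""
    def __init__(self, value):
        self.value, self.rc = value, 1

    def share(self):
        self.rc += 1
        return self

    def exclusive(self):
        # rc == 1 means no other live reference can alias this cell, so a
        # parallel task may mutate it without synchronization.
        return self.rc == 1
```

For example, `par_apply` on two `PURE`-tagged procedures evaluates both applications concurrently, while pairing a `WRITES`-tagged procedure with a `READS`-tagged one falls back to sequential execution.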
Published in LFP '92: Proceedings of the 1992 ACM conference on LISP and functional programming.
Published In
Copyright © 1992 ACM.
Publisher
Association for Computing Machinery
New York, NY, United States
Publication History
Published: 01 January 1992
Published in SIGPLAN-LISPPOINTERS Volume V, Issue 1