Abstract
Scalability and cost considerations suggest that distributed-memory and distributed-shared-memory parallel computers will dominate future parallel architectures. These machines cannot be used effectively unless efficient, automatic, and static solutions to the data partitioning and placement problem become available. Significant progress toward this end has been made in the last few years, but we are still far from having general solutions that are efficient for all classes of applications. In this paper we propose the data partitioning graph (DPG) as an intermediate representation for parallelizing compilers; it augments previous intermediate representations and provides a framework for partitioning and placing not only regular data structures (such as arrays) but also irregular structures and scalar variables. Although recent task-graph-based intermediate representations focus on representing data and control dependencies between tasks, they largely ignore the use of program variables by the different tasks. Traditional data partitioning methods usually employ algorithm-dependent techniques and are applied independently of processor assignment, which ought to be handled simultaneously with data partitioning. Moreover, existing approaches to data partitioning concentrate exclusively on array structures. By explicitly encapsulating the use of program variables in the task nodes, the DPG provides a framework for handling data partitioning as well as processor assignment in the same context. We also discuss the hierarchical data partitioning graph (HDPG), which encapsulates the hierarchy of the compiled program and is used to map the hierarchy of computations onto massively parallel computers with distributed memory systems.
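The DPG described above is, at its core, a task graph whose nodes additionally record which program variables each task reads and writes, so that data partitioning and processor assignment can be decided over the same structure. The following is a minimal sketch of such a representation; the class and method names are illustrative assumptions, not the paper's actual formulation:

```python
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    """A task node annotated with the variables it uses (the DPG's key addition)."""
    name: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)

@dataclass
class DPG:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (src, dst, kind) with kind in {"data", "control"}

    def add_task(self, name, reads=(), writes=()):
        self.nodes[name] = TaskNode(name, set(reads), set(writes))

    def add_dep(self, src, dst, kind="data"):
        self.edges.append((src, dst, kind))

    def shared_vars(self, a, b):
        """Variables used by both tasks -- a simple affinity signal a partitioner
        could use to co-locate tasks and the data they touch."""
        na, nb = self.nodes[a], self.nodes[b]
        return (na.reads | na.writes) & (nb.reads | nb.writes)

# Example: two tasks linked by a data dependence through array B.
g = DPG()
g.add_task("t1", reads={"A"}, writes={"B"})
g.add_task("t2", reads={"B", "C"}, writes={"C"})
g.add_dep("t1", "t2", kind="data")
print(g.shared_vars("t1", "t2"))  # {'B'}
```

Because variable use is stored on the nodes rather than only implied by the edges, a placement pass can reason about scalars and irregular structures the same way it reasons about arrays, which is the framing the abstract argues for.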
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Nakanishi, T., Joe, K., Saito, H., Polychronopoulos, C.D., Fukuda, A., Araki, K. (1995). The data partitioning graph: Extending data and control dependencies for data partitioning. In: Pingali, K., Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1994. Lecture Notes in Computer Science, vol 892. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0025878
Print ISBN: 978-3-540-58868-9
Online ISBN: 978-3-540-49134-7