The importance of parallel processing in the computational community is increasing. The difficulties of programming parallel processors, however, have thwarted their exploitation. Two approaches are receiving attention as possible solutions: supercompilers for extant languages, and new languages. In the latter area, researchers have produced several applicative languages for parallel processing. In the applicative model, only data dependencies constrain evaluation order, so many operations can execute simultaneously if hardware is available. Unfortunately, preserving applicative semantics has required implementations to copy data when deriving one value from another, and in the presence of large arrays, copy costs can become prohibitively expensive. In addition to copying, applicative programs suffer from the same inefficiencies as their imperative counterparts.

This dissertation discusses several compilation techniques for high-performance parallel applicative computing, with emphasis on update-in-place. All the algorithms take data flow graphs as input and produce improved data flow graphs as output. We have implemented them for SISAL, an applicative language for parallel numerical computation, with encouraging results. Most programs, including those manipulating two-dimensional arrays, run in place after optimization. Further, they achieve execution times competitive with FORTRAN, C, and Pascal on one processor, and good parallel efficiency when more than one processor contributes to execution. This dissertation shows that applicative programming is a powerful approach to programming parallel computers, provided compilers support at least the optimizations of our SISAL compiler.
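The core of update-in-place analysis can be illustrated with a deliberately simplified sketch (not the dissertation's actual algorithms): in a data flow graph, an array-update node may overwrite its input's storage only when it is the sole consumer of that value; if any other node also reads the array, the original is still live and must be copied to preserve applicative semantics. The node names (`"update1"`, `"sum1"`) and the naming convention for update nodes are invented for illustration.

```python
# Simplified model of the compile-time update-in-place decision:
# an update node may mutate its input array's storage only if no
# other consumer in the data flow graph reads the same value.
from collections import defaultdict


def mark_in_place_updates(edges):
    """edges: (producer, consumer) pairs of a data flow graph.

    Returns the set of update nodes whose input value has no other
    consumer, and which may therefore run in place rather than copy.
    Update nodes are identified here by an (invented) name prefix.
    """
    consumers = defaultdict(list)
    for producer, consumer in edges:
        consumers[producer].append(consumer)

    in_place = set()
    for producer, users in consumers.items():
        # Sole consumer, and that consumer is an update: safe to mutate.
        if len(users) == 1 and users[0].startswith("update"):
            in_place.add(users[0])
    return in_place


# Array A is both updated and summed: the update must copy.
shared = [("A", "update1"), ("A", "sum1")]
# Array B flows only into the update: it may run in place.
sole = [("B", "update1")]
```

A real optimizer makes this decision statically over the whole graph (and must also order operations so reads of the old value precede the write), but the principle is the same: copying is needed only when another live reference could observe the mutation.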