Parallel Processing for Supercomputers and Artificial Intelligence
January 1989
Publisher:
  • McGraw-Hill, Inc.
  • Professional Book Group 11 West 19th Street New York, NY
  • United States
ISBN:978-0-07-031606-5
Published:03 January 1989
Pages: 673
Abstract

No abstract available.

Cited By

  1. Yeh C and Varvarigos E (1998). Macro-Star Networks, IEEE Transactions on Parallel and Distributed Systems, 9:10, (987-1003), Online publication date: 1-Oct-1998.
  2. Yeh C and Parhami B Design of High-Performance Massively Parallel Architectures Under Pin Limitations and Non-Uniform Propagation Delay Proceedings of the 2nd AIZU International Symposium on Parallel Algorithms / Architecture Synthesis
  3. Whitson G, Wu C, Taylor J and Ermongkonchai A CANS: an interactive neural network system for CRAY supercomputers Proceedings of the 1992 ACM/SIGAPP symposium on Applied computing: technological challenges of the 1990's, (665-668)
  4. Chang C and Wallace V Multiple feedback queue as a model of general purpose multiprocessor systems Proceedings of the 1992 ACM annual conference on Communications, (493-500)
  5. Hwang K, Dubois M, Panda D, Rao S, Shang S, Uresin A, Mao W, Nair H, Lytwyn M, Hsieh F, Liu J, Mehrotra S and Cheng C (1990). OMP, ACM SIGARCH Computer Architecture News, 18:3b, (7-22), Online publication date: 1-Sep-1990.
  6. Hwang K, Dubois M, Panda D, Rao S, Shang S, Uresin A, Mao W, Nair H, Lytwyn M, Hsieh F, Liu J, Mehrotra S and Cheng C OMP Proceedings of the 4th international conference on Supercomputing, (7-22)
  7. Mehrotra S Developing a simulator for the USC orthogonal multiprocessor Proceedings of the 22nd conference on Winter simulation, (857-862)
  8. Hwang K, Kim D and Tseng P (1989). An Orthogonal Multiprocessor for Parallel Scientific Computations, IEEE Transactions on Computers, 38:1, (47-61), Online publication date: 1-Jan-1989.
  9. Xu Z and Hwang K (1989). Molecule, IEEE Transactions on Software Engineering, 15:5, (587-599), Online publication date: 1-May-1989.
  10. Ghosh J and Hwang K (1988). Critical issues in mapping neural networks on message-passing multicomputers, ACM SIGARCH Computer Architecture News, 16:2, (3-11), Online publication date: 17-May-1988.
  11. Ghosh J and Hwang K Critical issues in mapping neural networks on message-passing multicomputers Proceedings of the 15th Annual International Symposium on Computer architecture, (3-11)
Contributors
  • Chinese University of Hong Kong
  • Leiden University

Reviews

Peter C. Patton

Computational science promises to become a third branch of discovery alongside theoretical science and experimental science. Its foundations are beginning to appear, if primarily in the context of individual published examples of its utility (and one Nobel Prize, awarded to Kenneth Wilson) and in books like those in this review. This little library of supercomputing books measures and records the maturation of vector supercomputing as well as the emergence of parallel processing. These current reference books acknowledge a handful of precursor volumes, most of which are no longer in print: Fernbach [1], DeVreese [2], Numich [3], Karin and Parker Smith [4], and Schneck [5]. With the exception of Christopher Lazou's book, all of the reviewed books and most of their precursors are editorial compilations, mostly of research papers presented at conferences or symposia. Each has a somewhat different focus or integrating theme and may attract a different readership. From one point of view, the evolution of these conference themes over the five years from 1985 to 1990 is an interesting yardstick for measuring the growth of computational science as a methodology during its most rapid years of development. While it has been said that one cannot compare apples and oranges, when presented with a tempting basket of fruit, some comparisons can be made. The books cluster generally into five categories: history and introduction to the field, early conferences, expert workshops, performance evaluation, and artificial intelligence.

Lazou

Lazou's book is unique not just because it is a monograph, but because it is also a history, primer, and textbook that has already become a classic. First published in hardback in 1986, it was revised in 1988 as a paperback and reprinted again in 1989.
I hope that the author will continue to update and re-release the book; it is a useful introduction for the scientific computing specialist who is trying to sort out a dazzling array of new computer architectures and their applications. Following a brief introduction, Lazou covers supercomputing architecture, building blocks, enabling technologies, and history from 1970. He then launches into such practical issues as program optimization, program vectorization, and performance tuning. Chapter 11 presents 12 application examples ranging from oceanography to film production using fractals. The last three chapters deal with future developments and new architectures; an appendix presents the features of FORTRAN 8X in the vector supercomputing context.

Matsen and Tajima; Kartashev and Kartashev

These two books are proceedings of early conferences on supercomputing held at the University of Texas in March 1985 and in Santa Clara, California, in May 1988. Matsen and Tajima is a conference proceedings of papers in camera-ready form; it consists of 26 papers covering a variety of topics. Several of these papers are historically important because the ideas they presented have had substantial impact: examples are the early paper on numerical cosmology by Joan Centrella and her colleagues, Geoffrey Fox's paper on the Caltech hypercube, the paper by Karl-Heinz Winkler and his colleagues on numerical fluid dynamics simulation, a paper on the Gibbs Project by Kenneth Wilson, and Stephen Wolfram's paper on SMP, a precursor of Mathematica.
Table 1: Descriptive data

  Book                    Pages  Chapters/papers  Intended purpose   Audience
  Lazou                   263    14               Introductory text  End users
  Matsen and Tajima       480    26               Proceedings        End users
  The Kartashevs          622    12               Reference          Researchers
  Lichnewsky and Saguez   479    21               Proceedings        End users
  Kowalik                 425    29               Proceedings        Scientists; applied mathematicians
  Carey                   287    18               Advanced text      Scientists; applied mathematicians
  Martin                  419    14               Handbook           Performance evaluators
  van der Steen           289    18               Survey             Performance evaluators
  Hwang and DeGroot       671    16               Reference          AI researchers

The book by the Kartashevs is a well-edited selection of papers addressing three topics: (1) the evolution and implementation of supercomputing architectures, (2) supercomputer hardware and software design, and (3) supercomputer applications and performance evaluation. The book is nicely produced and will be an enduring reference work. Of the 12 lengthy multi-author chapters, the Kartashevs' introductory chapter on the evolution of supercomputer architecture, chapter 2 on the design of the Columbia Homogeneous Parallel Processor, and chapter 3 on the design of the Unisys Integrated Scientific Processor are particularly valuable from a technological historical perspective. The paper by Milt Halem and his colleagues at Goddard Space Flight Center covers an interesting range of applications for both pipelined and massively parallel styles of supercomputing. The counterpoint paper on hypercube applications from Goddard's rival sister institution, JPL/Caltech, is also a high-water mark.

Lichnewsky and Saguez; Kowalik; Carey

These three edited proceedings of small expert workshops sponsored by government research agencies form a distinct group. Lichnewsky and Saguez's book is based on an INRIA conference in Toulouse, France, in 1986. Kowalik's book covers a NATO-sponsored advanced research workshop in Trondheim, Norway, in 1989. A National Science Foundation–sponsored workshop in Dallas in 1988 is the basis for Carey's book.
The INRIA conference was one of the first to focus on parallel supercomputing; specifically, parallel architectures, detection and exploitation of concurrency in software, parallel processor performance evaluation, and numerical algorithms able to exploit the potential gain of parallel processors. Up to that time, both general conferences and expert workshops had concentrated on vector supercomputers and their application. Lichnewsky and Saguez's book is a workshop proceedings of the 21 presentations, 12 of which concern parallel processing methodology, two concern vector supercomputing, and the rest cover algorithm and performance evaluation. The NATO conference, which was heavily attended by supercomputing providers, was evenly divided between presentations on supercomputer centers and networks, and presentations on parallel and distributed computing. Supercomputing vendors and centers also made presentations, mostly on topics in hardware and software optimization. The National Science Foundation–sponsored workshop was devoted exclusively to the methods, algorithms, and applications of parallel supercomputing. The theme of the workshop was, “Now that parallel processing is a practical reality, how can scientists and engineers employ these new resources to solve problems not solvable on earlier systems?” Most of the papers are practical how-to-solve stories with worked examples and performance data. Carey's work is organized as a multi-author book rather than a conference proceedings. After two introductory chapters by computer architects, the remaining 16 chapters focus on new algorithms for parallel supercomputers.

Martin; van der Steen

Martin's and van der Steen's books are devoted to performance evaluation of both established vector and advanced parallel supercomputers. Martin treats performance factors, measurements and metrics, and methods and models.
The book contains five chapters on performance factors, four on measurements and metrics, and five on methods, models, and research directions in supercomputer performance evaluation. The material adds up to a pragmatic, state-of-the-art handbook on the subject. Van der Steen is devoted to strategies for exploiting, evaluating, and benchmarking advanced computer architectures. The 18 chapters by a variety of authors represent a far wider—in fact, international—range of opinions on performance measurement. The editor presents the book as an overview or collection of current thinking on the topic, and that is exactly what it is. Although less practical than Martin, it is considerably more entertaining, thanks to several polemical chapters.

Table 2: Topics covered

  Book                    History  Applications  Hardware/software design  Performance evaluation  Parallel processing  Algorithms
  Lazou                   Yes      Yes           No                        No                      No                   No
  Matsen and Tajima       Yes      Yes           Yes                       No                      No                   No
  The Kartashevs          No       Yes           Yes                       Yes                     Yes                  No
  Lichnewsky and Saguez   No       Yes           Yes                       Yes                     Yes                  Yes
  Kowalik                 No       Yes           Yes                       No                      Yes                  Yes
  Carey                   No       Yes           Yes                       No                      Yes                  Yes
  Martin                  No       No            No                        Yes                     Yes                  No
  van der Steen           No       No            No                        Yes                     Yes                  No
  Hwang and DeGroot       No       Yes           Yes                       No                      Yes                  No

Hwang and DeGroot

This book is in a class by itself, treating applications of high-performance multiprocessors primarily to problems in artificial intelligence. Seven chapters are devoted to numerical applications and the machines and software technology that support them. The remaining nine focus on parallel processing hardware and software for the solution of artificial intelligence problems. Although clearly aimed at the AI researcher looking to novel architectures for more computing power, the book covers traditional supercomputer applications well.
Comparison

This little library of supercomputing books measures and records the maturation of vector supercomputing and software during 1985–1990 and the emergence of parallel processing technology from its research infancy and childhood into its practical adolescence. Which is the best book? It depends on your needs. Lazou is the best introductory text or primer, but Carey would be the best advanced text for a course on current issues in computational science and engineering, or even an applied mathematics course on supercomputer applications. As a valuable reference work of papers documenting a range of architectural options, the Kartashevs' book is good. Martin provides the most exhaustive treatment of supercomputing performance available. Hwang and DeGroot is peerless as a reference for parallel processing applications in artificial intelligence; the numerical supercomputing chapters are good in both coverage and depth. Or do you want to survey this technology by reading only three books? Read Lazou, the Kartashevs, and Carey. Are you interested in preserving the good ideas that either weren't implemented or didn't survive in the marketplace? Read Matsen and Tajima, followed by the Kartashevs. From the perspective of a bibliophile, the best book pedagogically is clearly Lazou, and the best produced book is the Kartashevs' handsome work. The most useful to the scientist or applied mathematician is a toss-up between Carey and Kowalik—with perhaps an edge to Carey, which reads like a book and not just a conference proceedings.
