The Future of Computing Performance: Game Over or Next Level?
March 2011
Publisher:
  • National Academy Press
  • Div. of Natl. Academy of Sciences, 2101 Constitution Ave. N.W., Washington, DC, United States
ISBN: 978-0-309-15951-7
Published: 21 March 2011
Pages: 200
Abstract

The end of dramatic exponential growth in single-processor performance marks the end of the dominance of the single microprocessor in computing. The era of sequential computing must give way to a new era in which parallelism is at the forefront. Although important scientific and engineering challenges lie ahead, this is an opportune time for innovation in programming systems and computing architectures. We have already begun to see diversity in computer designs to optimize for such considerations as power and throughput. The next generation of discoveries is likely to require advances at both the hardware and software levels of computing systems. There is no guarantee that we can make parallel computing as common and easy to use as yesterday's sequential single-processor computer systems, but unless we aggressively pursue efforts suggested by the recommendations in this book, it will be "game over" for growth in computing performance. If parallel programming and related software efforts fail to become widespread, the development of exciting new applications that drive the computer industry will stall; if such innovation stalls, many other parts of the economy will follow suit. The Future of Computing Performance describes the factors that have led to the future limitations on growth for single processors that are based on complementary metal oxide semiconductor (CMOS) technology. It explores challenges inherent in parallel computing and architecture, including ever-increasing power consumption and the escalated requirements for heat dissipation. The book delineates a research, practice, and education agenda to help overcome these challenges. The Future of Computing Performance will guide researchers, manufacturers, and information technology professionals in the right direction for sustainable growth in computer performance, so that we may all enjoy the next level of benefits to society.
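
To make the abstract's central point concrete, the shift from sequential code to explicitly parallel code, here is a minimal sketch (not taken from the book; the function names and worker count are illustrative) that expresses the same computation both ways using Python's standard library:

    # Illustrative sketch only: the same sum of squares computed sequentially
    # and in parallel with Python's multiprocessing module.
    from multiprocessing import Pool

    def square(x):
        return x * x

    def sequential_sum(n):
        # One core does all the work, one element at a time.
        return sum(square(i) for i in range(n))

    def parallel_sum(n, workers=4):
        # The work is partitioned across 'workers' processes; the programmer
        # must now think about splitting the work and combining the results.
        with Pool(workers) as pool:
            return sum(pool.map(square, range(n), chunksize=10_000))

    if __name__ == "__main__":
        n = 100_000
        assert sequential_sum(n) == parallel_sum(n)

The parallel version is not automatically faster for small inputs; the cost of partitioning work and coordinating processes is exactly the kind of overhead that makes parallel programming harder than the sequential model the book argues we must move beyond.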

Cited By

  1. Lawson A, Kraemer E, Che S and Kennedy C A Multi-Level Study of Undergraduate Computer Science Reasoning about Concurrency Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education, (210-216)
  2. Casas I, Taheri J, Ranjan R, Wang L and Zomaya A (2017). A balanced scheduler with data reuse and replication for scientific workflows in cloud computing systems, Future Generation Computer Systems, 74:C, (168-178), Online publication date: 1-Sep-2017.
  3. Geist A and Reed D (2017). A survey of high-performance computing scaling challenges, International Journal of High Performance Computing Applications, 31:1, (104-113), Online publication date: 1-Jan-2017.
  4. Maginot P, Ragusa J and Morel J (2016). High-order solution methods for grey discrete ordinates thermal radiative transfer, Journal of Computational Physics, 327:C, (719-746), Online publication date: 15-Dec-2016.
  5. Rajbhandari S, Kim J, Krishnamoorthy S, Pouchet L, Rastello F, Harrison R and Sadayappan P A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, (1-12)
  6. Kadric E, Gurniak P and DeHon A (2016). Accurate Parallel Floating-Point Accumulation, IEEE Transactions on Computers, 65:11, (3224-3238), Online publication date: 1-Nov-2016.
  7. Masliah I, Abdelfattah A, Haidar A, Tomov S, Baboulin M, Falcou J and Dongarra J High-Performance Matrix-Matrix Multiplications of Very Small Matrices Proceedings of the 22nd International Conference on Euro-Par 2016: Parallel Processing - Volume 9833, (659-671)
  8. Kingsford C (2016). Teaching Computation to Biologists, Computing in Science and Engineering, 18:1, (4-5), Online publication date: 1-Jan-2016.
  9. Kendall R, Post D, Atwood C, Newmeyer K, Votta L, Gibson P, Borovitcky D, Miller L, Meakin R, Hurwitz M, Dey S, D'Angelo J, Vogelsong R, Goldfarb O and Allwerdt S (2016). A Risk-Based, Practice-Centered Approach to Project Management for HPCMP CREATE, Computing in Science and Engineering, 18:1, (40-51), Online publication date: 1-Jan-2016.
  10. Yamazaki I, Kurzak J, Luszczek P and Dongarra J Randomized algorithms to update partial singular value decomposition on a hybrid CPU/GPU cluster Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, (1-12)
  11. Heath M (2015). A tale of two laws, International Journal of High Performance Computing Applications, 29:3, (320-330), Online publication date: 1-Aug-2015.
  12. Post D and Kendall R (2015). Enhancing Engineering Productivity, Computing in Science and Engineering, 17:4, (4-6), Online publication date: 1-Jul-2015.
  13. Day C (2015). Days of Endless Time, Computing in Science and Engineering, 17:4, (80-80), Online publication date: 1-Jul-2015.
  14. Elango V, Rastello F, Pouchet L, Ramanujam J and Sadayappan P (2015). On Characterizing the Data Access Complexity of Programs, ACM SIGPLAN Notices, 50:1, (567-580), Online publication date: 11-May-2015.
  15. Aliaga J, Anzt H, Castillo M, Fernández J, León G, Pérez J and Quintana-Ortí E (2015). Unveiling the performance-energy trade-off in iterative linear system solvers for multithreaded processors, Concurrency and Computation: Practice & Experience, 27:4, (885-904), Online publication date: 25-Mar-2015.
  16. Ballard G, Demmel J and Knight N (2015). Avoiding Communication in Successive Band Reduction, ACM Transactions on Parallel Computing, 1:2, (1-37), Online publication date: 18-Feb-2015.
  17. Elango V, Rastello F, Pouchet L, Ramanujam J and Sadayappan P On Characterizing the Data Access Complexity of Programs Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, (567-580)
  18. Elango V, Sedaghati N, Rastello F, Pouchet L, Ramanujam J, Teodorescu R and Sadayappan P (2015). On Using the Roofline Model with Lower Bounds on Data Movement, ACM Transactions on Architecture and Code Optimization, 11:4, (1-23), Online publication date: 9-Jan-2015.
  19. Koanantakool P and Yelick K A computation- and communication-optimal parallel direct 3-body algorithm Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, (363-374)
  20. Zandrahimi M and Al-Ars Z A Survey on Low-Power Techniques for Single and Multicore Systems Proceedings of the 3rd International Conference on Context-Aware Systems and Applications, (69-74)
  21. Snir M The future of supercomputing Proceedings of the 28th ACM international conference on Supercomputing, (261-262)
  22. Vishkin U (2014). Is multicore hardware for general-purpose parallel processing broken?, Communications of the ACM, 57:4, (35-39), Online publication date: 1-Apr-2014.
  23. Ballard G, Demmel J, Holtz O and Schwartz O (2014). Communication costs of Strassen's matrix multiplication, Communications of the ACM, 57:2, (107-114), Online publication date: 1-Feb-2014.
  24. Fauzia N, Elango V, Ravishankar M, Ramanujam J, Rastello F, Rountev A, Pouchet L and Sadayappan P (2013). Beyond reuse distance analysis, ACM Transactions on Architecture and Code Optimization, 10:4, (1-29), Online publication date: 1-Dec-2013.
  25. Sridharan S, Gupta G and Sohi G Holistic run-time parallelism management for time and energy efficiency Proceedings of the 27th international ACM conference on International conference on supercomputing, (337-348)
  26. Hill M (2013). Research directions for 21st century computer systems, ACM SIGPLAN Notices, 48:4, (459-460), Online publication date: 23-Apr-2013.
  27. Hill M (2013). Research directions for 21st century computer systems, ACM SIGARCH Computer Architecture News, 41:1, (459-460), Online publication date: 29-Mar-2013.
  28. Hill M Research directions for 21st century computer systems Proceedings of the eighteenth international conference on Architectural support for programming languages and operating systems, (459-460)
  29. DeHon A Location, location, location Proceedings of the ACM/SIGDA international symposium on Field programmable gate arrays, (137-146)
  30. Ballard G, Demmel J, Holtz O and Schwartz O (2012). Graph expansion and communication costs of fast matrix multiplication, Journal of the ACM, 59:6, (1-23), Online publication date: 1-Dec-2012.
  31. Ballard G, Demmel J and Knight N (2012). Communication avoiding successive band reduction, ACM SIGPLAN Notices, 47:8, (35-44), Online publication date: 11-Sep-2012.
  32. Ballard G, Demmel J, Holtz O, Lipshitz B and Schwartz O Communication-optimal parallel algorithm for Strassen's matrix multiplication Proceedings of the twenty-fourth annual ACM symposium on Parallelism in algorithms and architectures, (193-204)
  33. Ahuja V, Ghosal D and Farrens M Minimizing the Data Transfer Time Using Multicore End-System Aware Flow Bifurcation Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (ccgrid 2012), (595-602)
  34. Ballard G, Demmel J and Knight N Communication avoiding successive band reduction Proceedings of the 17th ACM SIGPLAN symposium on Principles and Practice of Parallel Programming, (35-44)
Contributors
  • Carnegie Mellon University
  • National Academy of Sciences

Reviews

David Bruce Henderson

Much of modern society depends on fast, inexpensive computers, and on the performance of these computers continuing to increase rapidly, just as it has for the past 50 years. This report from the Committee on Sustaining Growth in Computing Performance of the National Research Council of the National Academy of Sciences documents the plateauing of growth in computing performance around 2004 and discusses what this means for the future of computing.

Chapter 1 discusses the widespread expectation that computer performance will continue to improve and that computer costs will continue to fall. The discussion covers why faster computers are needed and the importance of this performance growth for the sciences, defense, business, and the average citizen. Chapter 2 explains just what is meant by computer "performance," and chapter 3 looks in some detail at the fundamental reasons why performance has now plateaued. Several factors at the crux of this sudden halt in performance growth, such as heat density and fabrication complexity, are discussed. Chapter 4 examines the possibility of restarting growth in performance through parallel processing in both software and hardware. Parallelism is not new, but as the report notes, no parallel solution has yet delivered on the promise of significant performance improvement; perhaps it has simply been easier to rely on the continued growth of chip performance. Chapter 5 concludes with the committee's recommendations on the direction of future research to potentially restart growth in computer performance.

The report contains a preface, a table of contents, and an abstract, but no index. The detail in the table of contents, however, is adequate for locating the various topics throughout the report. Clear diagrams, footnotes, and boxed discussions of relevant side topics punctuate the report. A summary chapter highlights the major findings and recommendations; for the reader with little spare time, simply reading the three pages of the abstract and the 16-page summary is enough to grasp the committee's findings. The report has several appendices covering historical aspects of computer performance, biographies of those involved in the production of the report, and reprints of relevant earlier papers. Notable is a reprint of Gordon Moore's 1965 paper [1], which led to the phrase "Moore's law," commonly used to describe the exponential growth of computer performance since the 1950s.

The report is quite readable and should provide useful guidance on the future direction of research into improvements in computer performance. The summary at the beginning of the report provides a good executive brief of the findings and recommendations. Whether computer performance growth will ever return to the dramatic rates of the past 50 years remains to be seen.

Online Computing Reviews Service
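
As a rough worked illustration of the exponential growth the review alludes to (this sketch is not part of the review or the report, and the function name is illustrative): if capability doubles roughly every two years, the cumulative factor over a span of years is 2^(years/2), which compounds to about a million-fold over 40 years.

    # Illustrative back-of-the-envelope calculation, not taken from the report:
    # assume a doubling every 'doubling_period' years and compute the
    # cumulative growth factor over a given span.
    def moores_law_factor(years, doubling_period=2.0):
        return 2 ** (years / doubling_period)

    print(moores_law_factor(40))  # 1048576.0, i.e., roughly a million-fold in 40 years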

