
Parallel Job Scheduling: A Performance Perspective

Chapter in Performance Evaluation: Origins and Directions

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1769)

Abstract

When first introduced, parallel systems were dedicated systems that were intended to run a single parallel application at a time. Examples of the types of applications for which these systems were used include scientific modelling, such as computational fluid dynamics, and other grand challenge problems [44]. As the cost of multiprocessor systems continues to decrease and software support for parallel application development becomes increasingly available, a wider range of users is becoming interested in using such systems.


References

  1. G. M. Amdahl. Validity of the single processor approach to achieving large scale computing capabilities. In Proceedings of the AFIPS Spring Joint Computer Conference, pages 483–485, April 1967.

  2. Stergios V. Anastasiadis and Kenneth C. Sevcik. Parallel application scheduling on networks of workstations. Journal of Parallel and Distributed Computing, June 1997. To appear.

  3. Greg Astfalk. Fundamentals and practicalities of MPP. The Leading Edge, 12:839–843, 907–911, 992–998, 1993.

  4. Samir Ayaachi and Sivarama P. Dandamudi. A hierarchical processor scheduling strategy for multiprocessor systems. In IEEE Symposium on Parallel and Distributed Processing, pages 100–109, 1996.

  5. Timothy B. Brecht and Kaushik Guha. Using parallel program characteristics in dynamic processor allocation policies. Performance Evaluation, 27&28:519–539, 1996.

  6. Douglas C. Burger, Rahmat S. Hyder, Barton P. Miller, and David A. Wood. Paging tradeoffs in distributed-shared-memory multiprocessors. In Proceedings of Supercomputing ’94, pages 590–599, November 1994.

  7. Yuet-Ning Chan, Sivarama P. Dandamudi, and Shikharesh Majumdar. Performance comparison of processor scheduling strategies in a distributed-memory multicomputer system. In Proceedings of the 11th International Parallel Processing Symposium, pages 139–145, 1997.

  8. Su-Hui Chiang, Rajesh K. Mansharamani, and Mary K. Vernon. Use of application characteristics and limited preemption for run-to-completion parallel processor scheduling policies. In Proceedings of the 1994 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 33–44, 1994.

  9. Xiaotie Deng, Nian Gu, Tim Brecht, and KaiCheng Lu. Preemptive scheduling of parallel jobs on multiprocessors. In Proceedings of the Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pages 159–167, January 1996.

  10. Lawrence W. Dowdy. On the partitioning of multiprocessor systems. In Performance 1990: An International Conference on Computers and Computer Networks, pages 99–129, March 1990.

  11. Derek L. Eager, John Zahorjan, and Edward D. Lazowska. Speedup versus efficiency in parallel systems. IEEE Transactions on Computers, 38(3):408–423, March 1989.

  12. Jeff Edmonds, Donald D. Chinn, Tim Brecht, and Xiaotie Deng. Non-clairvoyant multiprocessor scheduling of jobs with changing execution characteristics. In Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing, pages 120–129, 1997.

  13. Dror Feitelson. A survey of scheduling in multiprogrammed parallel systems. Technical Report RC19790 (87657), IBM T.J. Watson Research Center, 1994.

  14. Dror G. Feitelson. Packing schemes for gang scheduling. In Job Scheduling Strategies for Parallel Processing, pages 89–110. Springer-Verlag, Lecture Notes in Computer Science, 1996.

  15. Dror G. Feitelson. Memory usage in the LANL CM-5 workload. In Dror G. Feitelson and Larry Rudolph, editors, Job Scheduling Strategies for Parallel Processing, Lecture Notes in Computer Science Vol. 1291, pages 78–94. Springer-Verlag, 1997.

  16. Dror G. Feitelson and Morris A. Jette. Improved utilization and responsiveness with gang scheduling. In Dror G. Feitelson and Larry Rudolph, editors, Job Scheduling Strategies for Parallel Processing, Lecture Notes in Computer Science Vol. 1291, pages 238–261. Springer-Verlag, 1997.

  17. Dror G. Feitelson and Bill Nitzberg. Job characteristics of a production parallel scientific workload on the NASA Ames iPSC/860. In Dror G. Feitelson and Larry Rudolph, editors, Job Scheduling Strategies for Parallel Processing, Lecture Notes in Computer Science Vol. 949, pages 337–360. Springer-Verlag, 1995.

  18. Dror G. Feitelson and Larry Rudolph. Distributed hierarchical control for parallel processing. Computer, 23(5):65–77, May 1990.

  19. Dror G. Feitelson and Larry Rudolph. Gang scheduling performance benefits for fine-grain synchronization. Journal of Parallel and Distributed Computing, 16:306–318, 1992.

  20. Dror G. Feitelson, Larry Rudolph, Uwe Schwiegelshohn, Kenneth C. Sevcik, and Parkson Wong. Theory and practice in parallel job scheduling. In 1997 IPPS Workshop on Job Scheduling Strategies for Parallel Processing, 1997.

  21. Dipak Ghosal, Giuseppe Serazzi, and Satish K. Tripathi. The processor working set and its use in scheduling multiprocessor systems. IEEE Transactions on Software Engineering, 17(5):443–453, May 1991.

  22. Anoop Gupta, Andrew Tucker, and Shigeru Urushibara. The impact of operating system scheduling policies and synchronization methods on the performance of parallel applications. In Proceedings of the 1991 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 120–132, 1991.

  23. Steven Hotovy. Workload evolution on the Cornell Theory Center IBM SP-2. In Dror G. Feitelson and Larry Rudolph, editors, Job Scheduling Strategies for Parallel Processing, Lecture Notes in Computer Science Vol. 1162, pages 27–40. Springer-Verlag, 1996.

  24. Joefon Jann, Pratap Pattnaik, Hubertus Franke, Fang Wang, Joseph Skovira, and Joseph Riordan. Modeling of workload in MPPs. In Dror G. Feitelson and Larry Rudolph, editors, Job Scheduling Strategies for Parallel Processing, Lecture Notes in Computer Science Vol. 1291, pages 95–116. Springer-Verlag, 1997.

  25. P. Kwong and S. Majumdar. Study of data distribution strategies in parallel I/O management. In Third International Conference of the Austrian Committee on Parallel Computing (ACPC’96), September 1996.

  26. Walter Lee, Matthew Frank, Victor Lee, Kenneth Mackenzie, and Larry Rudolph. Implications of I/O for gang scheduled workloads. In Dror G. Feitelson and Larry Rudolph, editors, Job Scheduling Strategies for Parallel Processing, Lecture Notes in Computer Science Vol. 1291, pages 215–237. Springer-Verlag, 1997.

  27. Scott T. Leutenegger and Mary K. Vernon. The performance of multiprogrammed multiprocessor scheduling policies. In Proceedings of the 1990 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 226–236, 1990.

  28. Walter Ludwig and Prasoon Tiwari. Scheduling malleable and nonmalleable parallel tasks. In Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 167–176, 1994.

  29. S. Majumdar, D. L. Eager, and R. B. Bunt. Scheduling in multiprogrammed parallel systems. In Proceedings of the 1988 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 104–113, May 1988.

  30. S. Majumdar, D. L. Eager, and R. B. Bunt. Characterization of programs for scheduling in multiprogrammed parallel systems. Performance Evaluation, 13(2):109–130, October 1991.

  31. Shikharesh Majumdar. Processor Scheduling in Multiprogrammed Parallel Systems. PhD thesis, University of Saskatchewan, April 1988.

  32. Shikharesh Majumdar and Yiu Ming Leung. Characterization and management of I/O in multiprogrammed parallel systems. In Proceedings of the Sixth IEEE Symposium on Parallel and Distributed Processing, pages 298–307, October 1994.

  33. Cathy McCann, Raj Vaswani, and John Zahorjan. A dynamic processor allocation policy for multiprogrammed shared-memory multiprocessors. ACM Transactions on Computer Systems, 11(2):146–178, May 1993.

  34. Cathy McCann and John Zahorjan. Processor allocation policies for message-passing parallel computers. In Proceedings of the 1994 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 19–32, 1994.

  35. Cathy McCann and John Zahorjan. Scheduling memory constrained jobs on distributed memory parallel computers. In Proceedings of the 1995 ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, pages 208–219, 1995.

  36. Rajeev Motwani, Steven Phillips, and Eric Torng. Non-clairvoyant scheduling. In Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 422–431, 1993.

  37. Vijay K. Naik, Sanjeev K. Setia, and Mark S. Squillante. Performance analysis of job scheduling policies in parallel supercomputing environments. In Supercomputing ’93, pages 824–833, 1993.

  38. Michael G. Norman and Peter Thanisch. Models of machines and computation for mapping in multicomputers. ACM Computing Surveys, 25(3):263–302, September 1993.

  39. John K. Ousterhout. Scheduling techniques for concurrent systems. In Proceedings of the 3rd International Conference on Distributed Computing Systems (ICDCS), pages 22–30, October 1982.

  40. Eric W. Parsons and Kenneth C. Sevcik. Multiprocessor scheduling for high-variability service time distributions. In Dror G. Feitelson and Larry Rudolph, editors, Job Scheduling Strategies for Parallel Processing, Lecture Notes in Computer Science Vol. 949, pages 127–145. Springer-Verlag, 1995.

  41. Eric W. Parsons and Kenneth C. Sevcik. Benefits of speedup knowledge in memory-constrained multiprocessor scheduling. In Proceedings of the 1996 International Conference on Performance Theory, Measurement and Evaluation of Computer and Communication Systems (PERFORMANCE ’96), pages 253–272, 1996. Proceedings published in Performance Evaluation, vol. 27&28.

  42. Eric W. Parsons and Kenneth C. Sevcik. Coordinated allocation of memory and processors in multiprocessors. In Proceedings of the 1996 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 57–67, 1996.

  43. Vinod G. J. Peris, Mark S. Squillante, and Vijay K. Naik. Analysis of the impact of memory in distributed parallel processing systems. In Proceedings of the 1994 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 5–18, 1994.

  44. Yale N. Patt. The I/O subsystem: A candidate for improvement. Computer, 27(3):15–16, March 1994.

  45. E. Rosti, E. Smirni, L. W. Dowdy, G. Serazzi, and B. M. Carlson. Robust partitioning policies of multiprocessor systems. Performance Evaluation, 19:141–165, 1994.

  46. Emilia Rosti, Giuseppe Serazzi, Evgenia Smirni, and Mark S. Squillante. The impact of I/O on program behaviour and parallel scheduling. In Joint International Conference on Measurement and Modeling of Computer Systems, pages 56–65, 1998.

  47. Emilia Rosti, Evgenia Smirni, Lawrence W. Dowdy, Giuseppe Serazzi, and Kenneth C. Sevcik. Processor saving scheduling policies for multiprocessor systems. IEEE Transactions on Computers, 47(3):178–189, February 1998.

  48. Sanjeev Setia and Satish Tripathi. A comparative analysis of static processor partitioning policies for parallel computers. In Proceedings of the International Workshop on Modeling and Simulation of Computer and Telecommunication Systems (MASCOTS), pages 283–286, January 1993.

  49. Sanjeev K. Setia. The interaction between memory allocations and adaptive partitioning in message-passing multiprocessors. In Dror G. Feitelson and Larry Rudolph, editors, Job Scheduling Strategies for Parallel Processing, Lecture Notes in Computer Science Vol. 949, pages 146–164. Springer-Verlag, 1995.

  50. Sanjeev K. Setia, Mark S. Squillante, and Satish K. Tripathi. Processor scheduling on multiprogrammed, distributed memory parallel computers. In Proceedings of the 1993 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 158–170, 1993.

  51. K. C. Sevcik. Application scheduling and processor allocation in multiprogrammed parallel processing systems. Performance Evaluation, 19:107–140, 1994.

  52. Kenneth C. Sevcik. Characterizations of parallelism in applications and their use in scheduling. In Proceedings of the 1989 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, pages 171–180, May 1989.

  53. D. Sleator and R. Tarjan. Amortized efficiency of list update and paging rules. Communications of the ACM, 28(2):202–208, February 1985.

  54. Mark S. Squillante and Edward D. Lazowska. Using processor-cache affinity information in shared-memory multiprocessor scheduling. IEEE Transactions on Parallel and Distributed Systems, 4(2):131–143, February 1993.

  55. Mark S. Squillante, Fang Wang, and Marios Papaefthymiou. Stochastic analysis of gang scheduling in parallel and distributed systems. Performance Evaluation, 27&28:273–296, 1996.

  56. Josep Torrellas, Andrew Tucker, and Anoop Gupta. Benefits of cache-affinity scheduling in shared-memory multiprocessors: A summary. In Proceedings of the 1993 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 272–274, 1993.

  57. J. Turek, J. Wolf, K. Pattipati, and P. Yu. Scheduling parallelizable tasks: Putting it all on the shelf. In Proceedings of the 1992 ACM SIGMETRICS and PERFORMANCE ’92 International Conference on Measurement and Modeling of Computer Systems, pages 225–236, 1992.

  58. J. Turek, J. Wolf, and P. Yu. Approximate algorithms for scheduling parallelizable tasks. In 4th Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA ’92), pages 323–332, 1992.

  59. John Turek, Walter Ludwig, Joel L. Wolf, Lisa Fleischer, Prasoon Tiwari, Jason Glasgow, Uwe Schwiegelshohn, and Philip S. Yu. Scheduling parallelizable tasks to minimize average response time. In 6th Annual ACM Symposium on Parallel Algorithms and Architectures, pages 200–209, 1994.

  60. John Turek, Uwe Schwiegelshohn, Joel L. Wolf, and Philip S. Yu. Scheduling parallel tasks to minimize average response time. In Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 112–121, 1994.

  61. John Zahorjan and Cathy McCann. Processor scheduling in shared memory multiprocessors. In Proceedings of the 1990 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 214–225, 1990.

  62. Songnian Zhou and Timothy Brecht. Processor pool-based scheduling for large-scale NUMA multiprocessors. In Proceedings of the 1991 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pages 133–142, 1991.

Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Majumdar, S., Parsons, E.W. (2000). Parallel Job Scheduling: A Performance Perspective. In: Haring, G., Lindemann, C., Reiser, M. (eds) Performance Evaluation: Origins and Directions. Lecture Notes in Computer Science, vol 1769. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46506-5_10

  • DOI: https://doi.org/10.1007/3-540-46506-5_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67193-0

  • Online ISBN: 978-3-540-46506-5
